content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Hilbert II
Hilbert II is decentralised access to verified and readable mathematical knowledge. As its name already suggests, this project is in the tradition of Hilbert's program.
Hilbert II wants to become a free, world wide mathematical knowledge base that contains mathematical theorems and proofs in a formal correct form. All belonging documents are published under the GNU
Free Documentation License. We aim to adapt the common mathematical argumentation to a formal syntax. That means, whenever in mathematics a certain kind of argumentation is often used we will look
forward to integrate it into the formal language of Hilbert II. This formal language is called the QEDEQ format.
Hilbert II provides a program suite that enables a mathematician to put theorems and proofs into that knowledge base. These proofs are automatically verified by a proof checker. Also texts in "common
mathematical language" can be integrated. The mathematical axioms, definitions and propositions are combined to so called QEDEQ modules. Such a module could be seen as a mathematical textbook which
includes formal correct proofs. Because this system is not centrally administrated and references to any location in the internet are possible, a world wide mathematical knowledge base could be
build. Any proof of a theorem in this "mathematical web" could be drilled down to the very elementary rules and axioms. Think of an incredible number of mathematical textbooks with hyperlinks and
each of its proofs could be verified by Hilbert II. For each theorem the dependency of other theorems, definitions and axioms could be easily derived.
The main project is still in development but you can already download the current application. It has a GUI and can load QEDEQ module files located anywhere in the internet. It can transform QEDEQ
modules into LaTeX and UTF-8 text files. Most PDF documents of this web site were generated in fact by this application. It even can check simple formal proofs. For set theory there are simple
discrete models integrated so the application can check if a formula is valid. In new QEDEQ modules you can use QEDEQ modules that exist already in the web just by referencing them.
For further information see under state and planning and development.
There also exists a working prototype called Principia Mathematica II. It is fully capable of first order predicate logic and shows the main features and functionality of Hilbert II. | {"url":"https://qedeq.org/","timestamp":"2024-11-06T17:26:44Z","content_type":"text/html","content_length":"9229","record_id":"<urn:uuid:8ab1c343-3ccc-4aa6-a723-f8d5c4fa25d3>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00677.warc.gz"} |
Computing Expected Frequencies for Goodness-of-Fit Tests: A Comprehensive Guide
What are expected frequencies in goodness-of-fit tests?
Expected frequencies in goodness-of-fit tests represent the frequencies that would be expected in each category if the observed data perfectly fit the expected distribution. These tests are used to
assess whether the observed data significantly deviate from the expected distribution. By comparing the observed frequencies to the expected frequencies, we can determine if there is a significant
difference or discrepancy that cannot be attributed to random variation.
In order to compute the expected frequencies, we first need to define the expected distribution. This distribution is usually based on a theoretical or hypothesized model that describes the expected
proportions or probabilities in each category. For example, if we are conducting a goodness-of-fit test to examine the distribution of eye colors in a population, our expected distribution might be
based on the hypothesis that 25% of the population has blue eyes, 40% has brown eyes, 30% has green eyes, and 5% has other colors.
Once we have the expected distribution, we can calculate the expected frequencies. This is done by multiplying the total sample size (or the total number of observations) by the expected proportion
or probability for each category. In our eye color example, if we have a sample size of 1000 individuals, the expected frequency for blue eyes would be 1000 * 0.25 = 250, for brown eyes it would be
1000 * 0.40 = 400, for green eyes it would be 1000 * 0.30 = 300, and for other colors it would be 1000 * 0.05 = 50.
These expected frequencies provide a baseline against which we can compare the observed frequencies. The observed frequencies are the actual frequencies or counts obtained from the sample data. In
our eye color example, if we collected data from our sample of 1000 individuals and found that 280 had blue eyes, 410 had brown eyes, 290 had green eyes, and 20 had other colors, these would be the
observed frequencies.
By comparing the observed frequencies to the expected frequencies, we can determine if the differences between these two sets of frequencies are statistically significant. This is done using
statistical tests such as the chi-square test. The chi-square test calculates a test statistic that measures the discrepancy between the observed and expected frequencies, and determines whether this
discrepancy is unlikely to occur by chance alone.
If the test statistic is found to be statistically significant, it indicates that there is a significant difference between the observed and expected frequencies. This suggests that the observed data
deviate from the expected distribution and provides evidence for rejecting the null hypothesis of perfect fit. On the other hand, if the test statistic is not statistically significant, it suggests
that there is no significant difference and we fail to reject the null hypothesis, indicating that the observed data are consistent with the expected distribution.
Overall, expected frequencies play a crucial role in goodness-of-fit tests as they provide a reference point for comparing observed frequencies and assessing the significance of any deviations. These
tests are widely used in various fields such as psychology, biology, sociology, and market research to examine the fit of observed data to theoretical models or expected distributions, thus helping
researchers uncover patterns and relationships in their data.
How are expected frequencies computed for goodness-of-fit tests?
Expected frequencies are computed for goodness-of-fit tests using a specified mathematical model or assumption about the distribution of the observed data. These expected frequencies represent the
values that would be expected to occur if the observed data perfectly conformed to the assumed distribution. By comparing the observed frequencies with the expected frequencies, statisticians can
determine whether there is a significant difference between the observed and expected data.
In order to compute the expected frequencies, the first step is to choose an appropriate distribution for the data. This distribution is typically based on prior knowledge, theoretical
considerations, or expert opinion. For example, if the data represents the outcome of tossing a fair six-sided die, the expected frequencies would be evenly distributed across all six possible
Once the distribution is selected, the expected frequencies are calculated using mathematical formulas. The specific method for computing expected frequencies varies depending on the type of
distribution being used. For example, if the data follows a normal distribution, the expected frequencies can be computed based on the mean and standard deviation of the observed data.
After computing the expected frequencies, the next step is to compare them with the observed frequencies. This is typically done using the Chi-Square test, which determines the degree of deviation
between the observed and expected frequencies. The Chi-Square statistic is computed by summing the squared differences between the observed and expected frequencies, divided by the expected
The resulting Chi-Square statistic follows a Chi-Square distribution, which has a known probability distribution. By comparing the computed Chi-Square statistic with the critical values from the
Chi-Square distribution, statisticians can determine whether the deviation between the observed and expected frequencies is statistically significant.
If the computed Chi-Square statistic is greater than the critical value, it indicates that there is a significant difference between the observed and expected frequencies, suggesting that the data
does not fit the assumed distribution. Conversely, if the computed Chi-Square statistic is less than the critical value, it suggests that the data is in good agreement with the assumed distribution.
In summary, expected frequencies for goodness-of-fit tests are computed by selecting an appropriate distribution, calculating the expected frequencies based on mathematical formulas, and then
comparing them with the observed frequencies using the Chi-Square test. This statistical test helps to determine whether there is a significant difference between the observed and expected data,
providing insights into the goodness-of-fit of the assumed distribution.
Leave a Comment | {"url":"https://flhespectator.com/how-are-expected-frequencies-computed-for-goodness-of-fit-tests/","timestamp":"2024-11-04T04:55:43Z","content_type":"text/html","content_length":"147511","record_id":"<urn:uuid:8c4d5b00-b32e-44f8-99ee-6f11e2355aa8>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00609.warc.gz"} |
Statistics Tip of the Week : Sample Sizes and Margins of Error for Proportions
This may be a handy table to keep around somewhere. How big a Sample Size do we need if we want to differentiate between 2 choices in a survey or election? It's more than people usually think.
Some might have in mind the guidance on when to use the t Distribution instead of the z (Standard Normal) Distribution. We're told we can use z when n, the Sample Size, is "large". And then we learn
that some consider 30 to be large enough, while others say 100.
But as you can see from this table, n = 100 barely gets you into the game when you're doing a survey or poll. When n = 100, you have a 10% Margin of Error (MOE). That is, you can say that you have a
Statistically Significant difference if your Proportions are wider spread than 44% and 55% for the 2 candidates.
But to get to a 2% MOE, you'd need a Sample Size of 2,400. Notice also, that diminishing returns set in. To get to a 1% MOE, you'd need a sample 4 times larger than you would for 2%.
0 Comments
Leave a Reply. | {"url":"https://www.statisticsfromatoz.com/blog/statistics-tip-of-the-week-sample-sizes-and-margins-of-error-for-proportions","timestamp":"2024-11-11T10:16:42Z","content_type":"text/html","content_length":"39535","record_id":"<urn:uuid:497c97d0-8111-4678-8ef0-20c505d594b7>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00126.warc.gz"} |
Ch. 8 Key Concepts - Precalculus 2e | OpenStax
Key Concepts
8.1 Non-right Triangles: Law of Sines
• The Law of Sines can be used to solve oblique triangles, which are non-right triangles.
• According to the Law of Sines, the ratio of the measurement of one of the angles to the length of its opposite side equals the other two ratios of angle measure to opposite side.
• There are three possible cases: ASA, AAS, SSA. Depending on the information given, we can choose the appropriate equation to find the requested solution. See Example 1.
• The ambiguous case arises when an oblique triangle can have different outcomes.
• There are three possible cases that arise from SSA arrangement—a single solution, two possible solutions, and no solution. See Example 2 and Example 3.
• The Law of Sines can be used to solve triangles with given criteria. See Example 4.
• The general area formula for triangles translates to oblique triangles by first finding the appropriate height value. See Example 5.
• There are many trigonometric applications. They can often be solved by first drawing a diagram of the given information and then using the appropriate equation. See Example 6.
8.2 Non-right Triangles: Law of Cosines
• The Law of Cosines defines the relationship among angle measurements and lengths of sides in oblique triangles.
• The Generalized Pythagorean Theorem is the Law of Cosines for two cases of oblique triangles: SAS and SSS. Dropping an imaginary perpendicular splits the oblique triangle into two right triangles
or forms one right triangle, which allows sides to be related and measurements to be calculated. See Example 1 and Example 2.
• The Law of Cosines is useful for many types of applied problems. The first step in solving such problems is generally to draw a sketch of the problem presented. If the information given fits one
of the three models (the three equations), then apply the Law of Cosines to find a solution. See Example 3 and Example 4.
• Heron’s formula allows the calculation of area in oblique triangles. All three sides must be known to apply Heron’s formula. See Example 5 and See Example 6.
8.3 Polar Coordinates
• The polar grid is represented as a series of concentric circles radiating out from the pole, or origin.
• To plot a point in the form $( r,θ ),θ>0, ( r,θ ),θ>0,$ move in a counterclockwise direction from the polar axis by an angle of $θ, θ,$ and then extend a directed line segment from the pole the
length of $r r$ in the direction of $θ. θ.$ If $θ θ$ is negative, move in a clockwise direction, and extend a directed line segment the length of $r r$ in the direction of $θ. θ.$ See Example 1.
• If $r r$ is negative, extend the directed line segment in the opposite direction of $θ. θ.$ See Example 2.
• To convert from polar coordinates to rectangular coordinates, use the formulas $x=rcosθ x=rcosθ$ and $y=rsinθ. y=rsinθ.$ See Example 3 and Example 4.
• To convert from rectangular coordinates to polar coordinates, use one or more of the formulas: $cosθ= x r ,sinθ= y r ,tanθ= y x , cosθ= x r ,sinθ= y r ,tanθ= y x ,$ and $r= x 2 + y 2 . r= x 2 + y
2 .$ See Example 5.
• Transforming equations between polar and rectangular forms means making the appropriate substitutions based on the available formulas, together with algebraic manipulations. See Example 6,
Example 7, and Example 8.
• Using the appropriate substitutions makes it possible to rewrite a polar equation as a rectangular equation, and then graph it in the rectangular plane. See Example 9, Example 10, and Example 11.
8.4 Polar Coordinates: Graphs
• It is easier to graph polar equations if we can test the equations for symmetry with respect to the line $θ= π 2 , θ= π 2 ,$ the polar axis, or the pole.
• There are three symmetry tests that indicate whether the graph of a polar equation will exhibit symmetry. If an equation fails a symmetry test, the graph may or may not exhibit symmetry. See
Example 1.
• Polar equations may be graphed by making a table of values for $θ θ$ and $r. r.$
• The maximum value of a polar equation is found by substituting the value $θ θ$ that leads to the maximum value of the trigonometric expression.
• The zeros of a polar equation are found by setting $r=0 r=0$ and solving for $θ. θ.$ See Example 2.
• Some formulas that produce the graph of a circle in polar coordinates are given by $r=acosθ r=acosθ$ and $r=asinθ. r=asinθ.$ See Example 3.
• The formulas that produce the graphs of a cardioid are given by $r=a±bcosθ r=a±bcosθ$ and $r=a±bsinθ, r=a±bsinθ,$ for $a>0, a>0,$$b>0, b>0,$ and $a b =1. a b =1.$ See Example 4.
• The formulas that produce the graphs of a one-loop limaçon are given by $r=a±bcosθ r=a±bcosθ$ and $r=a±bsinθ r=a±bsinθ$ for $1< a b <2. 1< a b <2.$ See Example 5.
• The formulas that produce the graphs of an inner-loop limaçon are given by $r=a±bcosθ r=a±bcosθ$ and $r=a±bsinθ r=a±bsinθ$ for $a>0, a>0,$$b>0, b>0,$ and $a<b. a<b.$ See Example 6.
• The formulas that produce the graphs of a lemniscates are given by $r 2 = a 2 cos2θ r 2 = a 2 cos2θ$ and $r 2 = a 2 sin2θ, r 2 = a 2 sin2θ,$ where $a≠0. a≠0.$ See Example 7.
• The formulas that produce the graphs of rose curves are given by $r=acosnθ r=acosnθ$ and $r=asinnθ, r=asinnθ,$ where $a≠0; a≠0;$ if $n n$ is even, there are $2n 2n$ petals, and if $n n$ is odd,
there are $n n$ petals. See Example 8 and Example 9.
• The formula that produces the graph of an Archimedes’ spiral is given by $r=θ, r=θ,$$θ≥0. θ≥0.$ See Example 10.
8.6 Parametric Equations
• Parameterizing a curve involves translating a rectangular equation in two variables, $x x$ and $y, y,$ into two equations in three variables, x, y, and t. Often, more information is obtained from
a set of parametric equations. See Example 1, Example 2, and Example 3.
• Sometimes equations are simpler to graph when written in rectangular form. By eliminating $t, t,$ an equation in $x x$ and $y y$ is the result.
• To eliminate $t, t,$ solve one of the equations for $t, t,$ and substitute the expression into the second equation. See Example 4, Example 5, Example 6, and Example 7.
• Finding the rectangular equation for a curve defined parametrically is basically the same as eliminating the parameter. Solve for $t t$ in one of the equations, and substitute the expression into
the second equation. See Example 8.
• There are an infinite number of ways to choose a set of parametric equations for a curve defined as a rectangular equation.
• Find an expression for $x x$ such that the domain of the set of parametric equations remains the same as the original rectangular equation. See Example 9.
8.7 Parametric Equations: Graphs
• When there is a third variable, a third parameter on which $x x$ and $y y$ depend, parametric equations can be used.
• To graph parametric equations by plotting points, make a table with three columns labeled $t,x( t ), t,x( t ),$ and $y(t). y(t).$ Choose values for $t t$ in increasing order. Plot the last two
columns for $x x$ and $y. y.$ See Example 1 and Example 2.
• When graphing a parametric curve by plotting points, note the associated t-values and show arrows on the graph indicating the orientation of the curve. See Example 3 and Example 4.
• Parametric equations allow the direction or the orientation of the curve to be shown on the graph. Equations that are not functions can be graphed and used in many applications involving motion.
See Example 5.
• Projectile motion depends on two parametric equations: $x=( v 0 cosθ)t x=( v 0 cosθ)t$ and $y=−16 t 2 +( v 0 sinθ)t+h. y=−16 t 2 +( v 0 sinθ)t+h.$ Initial velocity is symbolized as $v 0 .θ v 0
.θ$ represents the initial angle of the object when thrown, and $h h$ represents the height at which the object is propelled.
8.8 Vectors
• The position vector has its initial point at the origin. See Example 1.
• If the position vector is the same for two vectors, they are equal. See Example 2.
• Vectors are defined by their magnitude and direction. See Example 3.
• If two vectors have the same magnitude and direction, they are equal. See Example 4.
• Vector addition and subtraction result in a new vector found by adding or subtracting corresponding elements. See Example 5.
• Scalar multiplication is multiplying a vector by a constant. Only the magnitude changes; the direction stays the same. See Example 6 and Example 7.
• Vectors are comprised of two components: the horizontal component along the positive x-axis, and the vertical component along the positive y-axis. See Example 8.
• The unit vector in the same direction of any nonzero vector is found by dividing the vector by its magnitude.
• The magnitude of a vector in the rectangular coordinate system is $| v |= a 2 + b 2 . | v |= a 2 + b 2 .$ See Example 9.
• In the rectangular coordinate system, unit vectors may be represented in terms of $i i$ and $j j$ where $i i$represents the horizontal component and $j j$represents the vertical component. Then,
v = ai + bj is a scalar multiple of $v v$by real numbers $aandb. aandb.$ See Example 10 and Example 11.
• Adding and subtracting vectors in terms of i and j consists of adding or subtracting corresponding coefficients of i and corresponding coefficients of j. See Example 12.
• A vector v = ai + bj is written in terms of magnitude and direction as $v=| v |cosθi+| v |sinθj. v=| v |cosθi+| v |sinθj.$ See Example 13.
• The dot product of two vectors is the product of the $i i$terms plus the product of the $j j$terms. See Example 14.
• We can use the dot product to find the angle between two vectors. Example 15 and Example 16.
• Dot products are useful for many types of physics applications. See Example 17. | {"url":"https://openstax.org/books/precalculus-2e/pages/8-key-concepts","timestamp":"2024-11-02T01:47:31Z","content_type":"text/html","content_length":"442047","record_id":"<urn:uuid:b513d901-8bcf-47ac-942b-c1ded9e55c11>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00757.warc.gz"} |
Re: Re: st: Direction of the effect of the cluster command on the standa
Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: Re: st: Direction of the effect of the cluster command on the standard error depends on the inclusion of a control variable
From Christopher Baum <[email protected]>
To "[email protected]" <[email protected]>
Subject Re: Re: st: Direction of the effect of the cluster command on the standard error depends on the inclusion of a control variable
Date Thu, 6 Jan 2011 08:20:45 -0500
On Jan 6, 2011, at 2:33 AM, Stas wrote:
> There are terrible small sample biases exhibited by -robust- and
> - -cluster()- standard errors with small # of observations and clusters,
> respectively. As was noted by Justina, four clusters is SO far away
> from asymptotics that I wouldn't even consider the clustered standard
> errors in your situation.
Just to add one thing to Stas', Justina's and Austin's replies... It is useful to think of the cluster-robust VCE estimator generating 'super-observations' , one per cluster. Thus with 4 clusters, you essentially are estimating a model with N=4 to compute the VCE. Some official Stata commands will let you do that, even when the number of coefficients > N. Baum-Schaffer-Stillman -ivreg2- (on SSC) will flag that as a problem, as it does not make much sense to do so. But one of the reasons that a small number of clusters may yield horrible results is that it represents estimation with a very small sample.
Kit Baum | Boston College Economics & DIW Berlin | http://ideas.repec.org/e/pba1.html
An Introduction to Stata Programming | http://www.stata-press.com/books/isp.html
An Introduction to Modern Econometrics Using Stata | http://www.stata-press.com/books/imeus.html
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"https://www.stata.com/statalist/archive/2011-01/msg00153.html","timestamp":"2024-11-14T15:31:28Z","content_type":"text/html","content_length":"11250","record_id":"<urn:uuid:f9fb3167-3cfe-45f1-8c17-b7a18e19cce2>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00557.warc.gz"} |
Pure pairs. I. Trees and linear anticomplete pairs
The Erdős-Hajnal conjecture asserts that for every graph H there is a constant c>0 such that every graph G that does not contain H as an induced subgraph has a clique or stable set of cardinality at
least |G|^c. In this paper, we prove a conjecture of Liebenau and Pilipczuk [10], that for every forest H there exists c>0, such that every graph G with |G|>1 contains either an induced copy of H, or
a vertex of degree at least c|G|, or two disjoint sets of at least c|G| vertices with no edges between them. It follows that for every forest H there exists c>0 such that, if G contains neither H nor
its complement as an induced subgraph, then there is a clique or stable set of cardinality at least |G|^c.
All Science Journal Classification (ASJC) codes
• Erdos-Hajnal conjecture
• Forests
• Induced subgraphs
Dive into the research topics of 'Pure pairs. I. Trees and linear anticomplete pairs'. Together they form a unique fingerprint. | {"url":"https://collaborate.princeton.edu/en/publications/pure-pairs-i-trees-and-linear-anticomplete-pairs","timestamp":"2024-11-02T11:45:18Z","content_type":"text/html","content_length":"49772","record_id":"<urn:uuid:962d18f7-4ec9-4b76-b3fd-1588c1e0b49d>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00035.warc.gz"} |
Arithmetic around Galois Theory
│ │ GTEM/TUBITAK SUMMER SCHOOL │ │
│ │ │ │
│ │ 8 - 19 June 2009, İstanbul │ │
┃Home│Program│Registration│Hotel info│Lecture notes │Venue│Useful info│Participants│Links│Poster┃
Groups and Models: Cherlin Bayramı 8th - 12th June 2009 Bilgi University, Istanbul
Scientific Advisory Committee
P. Dèbes, Université Lille 1
M. Emsalem, Université Lille 1
H. Önsiper, Middle East Tecnical University
M. Uludağ, Galatasaray University
Z. Wojtkowiak, Université de Nice
Objectives of GAGT 2009
This is a research level school on Algebraic Covers and their Moduli Spaces (Hurwitz spaces) in relation with Geometric Galois Theory, Inverse Galois Problems, Field Arithmetic, Fundamental
groups and Galois action, Deformation methods. There will be lectures on Constructions of covers, Ample Fields, Counting points on a Hurwitz space, Arithmetic aspects of Hurwitz spaces and
Descent theory. Afternoon sessions are devoted to talks by researchers including Ph.D. students. There will be a series of preparatory lectures on Infinite Galois Theory and Hurwitz Schemes
before the summer school.
This summer school is part of the activities of the GTEM research network of EU. Before the summer school, between 4-7 June, experts will deliver preparatory talks to students. This event will
take place in Feza Gürsey Institute (FGI) and is organised by K. Aker.
P. Dèbes, Université Lille (chair)
Research Talks
│Speaker │Talk Title │
│M. Antei, Lille │On the fundamental group scheme of a family of curves │
│L. Bary-Soroker │Frobenius automorphism and irreducible specializations │
│A. Cadoret, Bordeaux │A uniform open image theorem for $\ell$-adic representations │
│O. Cau, Lille │ │
│B. Collas, Paris 6 │Action on torsion-elements of mapping class groups by cohomological methods│
│M. Dettweiler, Heidelberg │On the automorphy of hypergeometric local systems │
│J.C. Douai, Lille │Principe de Hasse et Cohomologie des groupes │
│O. Hatami, Tehran │ │
│R. P. Holzapfel │Galois Reflection Towers │
│M. Kim, London │Diophantine geometry and Fundamental groups │
│Sergio Mendes, Lisbon │ │
│D. Neftin, Tel Aviv │On arithmetic field equivalences and crossed product division │
│Ambrus Pal, London │The real section conjecture and Smith's fixed point theorem │
│Elad Paran, Tel Aviv │Power series over generalized Krull domains │
│S. Petersen, München │ │
│J. Poineau, Regensburg │Inverse Galois Problem for Convergent Arithmetic Power Series │
│J. Schmidt, Heidelberg │ │
│S.Turkelli,Wisconsin-Madison│Homological Stability of Hurwitz Schemes │
│A. Yafaev, London │Andre-Oort and Manin-Mumford conjectures: a unified approach │
*Some talks will be announced during the summer school.
(since September 4, 2008) Last Update: 6 January 2010 17:10 PM
*This page is maintained by Celal Cem Sarioglu | {"url":"https://math.gsu.edu.tr/GAGT/","timestamp":"2024-11-02T02:20:50Z","content_type":"application/xhtml+xml","content_length":"25684","record_id":"<urn:uuid:6fa1f840-4ad0-4b2d-b51f-8cf1e4307ec0>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00398.warc.gz"} |
From the Origins of Twistor Theory to Bi-Twistors and Curved Space-Times
Add to your list(s) Download to your calendar using vCal
If you have a question about this talk, please contact nobody.
TWTW01 - Twistors in Geometry & Physics
Twistor theory was originated in late 1963 as a geometric approach to the quantum field theoretic requirement of splitting the positive-frequency modes from the negative frequency ones. This was
addressed through translating the geometry of Minkowski space into the projective geometry CP3 , referred to as the projective twistor space PT, whose division into two halves PT+ and PT–
geometrically described the required positive/negative frequency splitting. However, as the geometry of twistor theory progressed, in relation to its physical interpretation in terms of the momentum/
angular momentum of photon states, it became clear that the splitting of PT into PT+ and PT– had more directly to do with positive/negative helicity than with positive/negative frequency. This
confusion of interpretation became more manifest with the non-linear graviton construction, whereby twistor theory broadened its scope to describe curved complex space-times, where the helicity/
frequency tension manifested itself into the “anti-self-dual” requirement for the curved 4-manifolds that could be directly described by curved twistor-space theory, and twistor theory itself
bifurcated into a “positive-definite” version of more interest to pure mathematicians and th “Lorentzian” (or even “split-signature”) version of more direct interest to physicists. The concept of a
bi-twistor is introduced here to circumvent this asymmetry and (anti-)self-dual requirement, by involving both twistors and dual twistors together, subject to their quantum commutation laws, this
providing a bi-twistor triple product (and a split-octonion algebra), enabling general space-times to be described by bi-twistors.
This talk is part of the Isaac Newton Institute Seminar Series series.
This talk is included in these lists:
Note that ex-directory lists are not shown. | {"url":"http://talks.cam.ac.uk/talk/index/219631","timestamp":"2024-11-03T10:18:50Z","content_type":"application/xhtml+xml","content_length":"14269","record_id":"<urn:uuid:bef7b078-c2f4-45a8-8baa-58ee3e125f24>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00378.warc.gz"} |
Adaptive Mesh Refinement (AMR) for Weather and Climate Models
Adaptive Mesh Refinement (AMR) techniques provide an attractive framework for atmospheric flows since they allow an improved resolution in limited regions without requiring a fine grid resolution
throughout the entire model domain. The model regions at high resolution are kept at a minimum and can be individually tailored towards the research problem associated with atmospheric model
A solution-adaptive grid is a virtual necessity for resolving a problem with different length scales. In order to avoid under-resolving high-gradient regions in the problem, or conversely,
over-resolving low-gradient regions at the expense of more critical regions, solution adaptation is a powerful tool saving several orders of magnitude in computing resources for many problems.
Climate and weather models, or generally speaking computational fluid dynamics (CFD) codes, are among the many applications that are characterized by multiscale phenomena and their resulting
For instance, large-scale weather systems such as midlatitude cyclones drive small-scale frontal zones, thunderstorms or rain events. These small-scale features may then influence the larger scale
if, as an example, evaporation processes and turbulence at the surface trigger sensible and latent heat fluxes. But although today's atmospheric general circulation models (GCMs), and in particular
weather prediction codes, are already capable of uniformly resolving horizontal scales of order 10-20 km (e.g. the model IFS of the European Centre for Medium-Range Weather Forecasts), the
atmospheric motions of interest span many more scales than those captured in a fixed resolution model run. The widely varying spatial and temporal scales, in addition to the nonlinearity of the
dynamical system, raise an interesting and challenging modeling problem. Solving such a problem more efficiently and accurately requires variable resolution.
Until 2008, our developments of an adaptive dynamical core for weather and climate research were based on the so-called Lin-Rood Finite-Volume (FV) dynamical core that had been designed at the NASA
Goddard Space Flight Center (NASA/GSFC) in the late 1990's. This hydrostatic global dynamics package in flux form is built upon the
Lin and Rood 1996
advection algorithm, which utilizes advanced oscillation-free numerical approaches to solving the transport equation. In particular,
Lin and Rood 1996
extended a Godunov-type methodology to multiple dimensions and made use of 2nd order van Leer-type (
van Leer 1974
van Leer 1979
) and 3rd order piecewise parabolic (PPM) methods (
Colella and Woodward 1984
Carpenter et al. 1990
). In 1997, the advection scheme became the fundamental building block of a shallow water code (
Lin and Rood 1997
Lin 1997
) which then led to the development of the current 3D, primitive-equation (PE) based, finite-volume dynamics package (
Lin 2004
) on a latitude-longitude grid.
Today (in 2016), the 3D FV dynamical core (lat-lon) and its newer cubed-sphere variant (FV3, dveloped at GFDL and NASA) is used operationally for data assimilation applications at NASA/GSFC, for
weather and climate simulations at NOAA GFDL (and soon by NOAA's National Weather Service). Furthermore, the lat-lon FV dynamical core became part of NCAR's Community Earth System Model (CESM) in
2001. This climate prediction system contains the Community Atmosphere Model, CAM, with physics and dynamics components. In particular, the finite-volume dynamical core is now one of the four
available dynamics modules in CAM and is still used as the NCAR default for climate simulations with 110 km grid spacings.
Additional details of the FV lat-lon dynamics package can be found in the tutorial
'The Lin-Rood Finite Volume (FV) Dynamical Core'
The adaptive FV dynamical core has been run in two configurations: the full 3D hydrostatic dynamical core on the sphere and the corresponding 2D shallow water model that has been extracted out of the
3D version (
Jablonowski 2004
Jablonowski et al. 2004
Jablonowski et al. 2006
St-Cyr et al. 2008
Jablonowski et al. 2009
). In general, the shallow water system can be considered a 1-level version of the 3D dynamical core. This shallow water setup serves as an ideal testbed for the horizontal discretization and the 2D
adaptive-mesh strategy. It further allows the efficient and quick testing of interpolation routines at fine-coarse grid interfaces.
Both static and dynamic adaptation strategies have been tested. Static adaptations can be used to vary the resolution in pre-defined regions of interest. This includes static refinements near
mountain ranges or static coarsenings in the longitudinal direction for the implementation of a so-called reduced grid in polar regions. Dynamic adaptations are based on flow characteristics and
guided by refinement criteria that detect user-defined features of interest during a simulation. In particular, flow-based refinement criteria, such as vorticity or gradient indicators, have been
tested. Refinements and coarsenings then occur according to pre-defined threshold values.
An example of an adaptive passive advection test on the sphere is shown below. The figure shows the initial conditions for the shallow water standard test case 1 with a 90
rotation angle (see
Williamson et a. 1992
for the test specifications). Here the adapted blocks track a cosine bell as it is transported once around the sphere. Note that each self-similar blocks contains 9x6 grid points in lon x lat
direction so that the finest grid resolution in this example corresponds to a 0.625
x 0.625
grid. The maximum number of refinement levels is set to 3.
The adaptation criterion is based on a simple threshold assessment. A block is refined as soon as the height of the cosine bell exceeds a user-determined threshold value at (at least) one grid point
within the block. On the other hand, a block gets coarsened if the height of the cosine bell in the block no longer meets the criterion. The corresponding
mpeg movie (4.1 MB)
shows that the cosine bell is successfully captured as indicated by the overlaid block distribution. The movie shows a 12-day simulation. After 12-days the tracer distribution returns to its initial
position which then serves as the reference solution. There are no visible distortions of the height field as the cosine bell approaches, passes over and leaves the poles. The increased resolution
clearly helps preserve the shape and peak amplitude. The cosine bell represents a rather smooth tracer distribution. An alternative tracer field is shown below and also in this 12-day
simulation (4.4 MB)
with a rotation angle of 30
. The slotted cylinder is initialized with a constant value and is set to zero outside the inner domain. The movie shows that the sharp edges are tracked successfully by a gradient-based refinement
An example of a dynamically adapted nonlinear flow field is presented in the next figure. It shows an idealized flow over a single mountain at model day 10 (test case 5,
Williamson et a. 1992
). Here the adaptations are guided by the absolute value of the geopotential height gradient. The refined regions pick out the strong gradient regimes that are associated with the evolving wave train
behind the mountain. Other refinement criteria are also feasible for this mountain-induced wave response. For example, the following 15-day
mpeg movie (5 MB)
shows the evolution of the geopotential height field that is dynamically tracked by a relative vorticity refinement criterion. It detects the evolving lee-side wave reliably and highlights slightly
different refinement regions in comparison to the gradient criterion shown below.
Carpenter, R. L., K. K. Droegemeier, P. R. Woodward and C. E. Hane, Application of the Piecewise Parabolic Method to Meteorological Modeling, Mon. Wea. Rev., 118, 586-612, 1990.
Colella, P. and P. R. Woodward, The Piecewise Parabolic Method (PPM) for Gas-Dynamical Simulations, J. Comput. Phys., 54, 174-201, 1984.
Ferguson, J. O., C. Jablonowski, H. Johansen, P. McCorquodale, P. Colella and P. A. Ullrich, Analyzing the Adaptive Mesh Refinement (AMR) characteristics of a high-order cubed-sphere 2D shallow water
model, Mon. Wea. Rev., 144, 4641-4666, 2016
Ferguson, J. O., C. Jablonowski, and H. Johansen, Assessing Adaptive Mesh Refinement (AMR) in a Forced Shallow-Water Model with Moisture, Mon. Wea. Rev., Vol. 147, 3673–3692, 2019
Jablonowski, C., Adaptive Grids in Weather and Climate Modeling, Ph.D. dissertation, University of Michigan, Ann Arbor, MI, 2004 (
the pdf version, 8MB),
Jablonowski, C., M. Herzog, J. E. Penner, R. C. Oehmke, Q. F. Stout and B. van Leer, Adaptive Grids for Weather and Climate Models, ECMWF Seminar Proceedings on Recent Developments in Numerical
Methods for Atmospheric and Ocean Modelling, Reading, UK, 6-10 September 2004, pp. 233-250 (download the
pdf version 2MB
Jablonowski, C., M. Herzog, J. E. Penner, R. C. Oehmke, Q. F. Stout, B. van Leer and K. G. Powell, Block-Structured Adaptive Grids on the Sphere: Advection Experiments, Mon. Wea. Rev., 134,
3691-3713, 2006
Jablonowski, C. and D. L. Williamson, A Baroclinic Instability Test Case for Atmospheric Model Dynamical Cores, Quarterly J. Roy. Met. Soc., 132, No. 621C, 2943-2975, 2006
Jablonowski, C., R. C. Oehmke and Q. F. Stout, Block-structured Adaptive Meshes and Reduced Grids for Atmospheric General Circulation Models, Phil. Transaction Royal Society A, 367, 4497-4522, 2009
Lin, S.-J., A Finite-Volume Integration Method for Computing the Pressure Forces in General Vertical Coordinates, Quart. J. Roy. Meteor. Soc., 123, 1749-1762, 1997.
Lin, S.-J., A "Vertically Lagrangian" Finite-Volume Dynamical Core for Global Models, Mon. Wea. Rev., 132, 2293-2307, 2004.
Lin, S.-J. and R. B. Rood, Multidimensional Flux-Form Semi-Lagrangian Scheme, Mon. Wea. Rev., 124, 2046-2070, 1996.
Lin, S.-J. and R. B. Rood, An Explicit Flux-Form Semi-Lagrangian Shallow Water Model on the Sphere, Quart. J. Roy. Meteor. Soc., 123, 2477-22498, 1997.
McCorquodale, P., P. A. Ullrich, H. Johansen, and P. Colella, An adaptive multiblock high-order finite-volume method for solving the shallow-water equations on the sphere. Communications in Applied
Mathematics and Computational Science, 10 (2), 121–162, 2015.
Oehmke, R. C. and Q. F. Stout, Parallel Adaptive Blocks on a Sphere, in Proc. 11th SIAM Conference on Parallel Processing for Scientific Computing, 2001, CD-ROM.
Oehmke, R. C., High Performance Dynamic Array Structures, Ph.D. Dissertation, University of Michigan, Ann Arbor, 2004, Department of Electrical Engineering and Computer Science, 93 pp.
St-Cyr, A., C. Jablonowski, J. M. Dennis, H. M. Tufo and S. J. Thomas, A Comparison of Two Shallow Water Models with Non-Conforming Adaptive Grids, Mon. Wea. Rev., 136, 1898-1922, 2008.
van Leer, B., Towards the Ultimate Conservative Difference Scheme. II. Monotonicity and Conservation Combined in a Second-Order Scheme, J. Comput. Phys., 14, 361-370, 1974.
van Leer, B., Towards the Ultimate Conservative Difference Scheme. IV. A New Approach to Numerical Convection, J. Comput. Phys., 23, 276-299, 1977.
Williamson, D. L., J. B. Drake, J. J. Hack, R. Jakob and P. N. Swarztrauber, A Standard Test Set for Numerical Approximations to the Shallow Water Equations in Spherical Geometry, J. Comput. Phys.,
102, 211-224, 1992 | {"url":"https://public.websites.umich.edu/~cjablono/amr.html","timestamp":"2024-11-13T21:31:10Z","content_type":"text/html","content_length":"26094","record_id":"<urn:uuid:e23432a8-793b-4ecb-a02d-9ae22dc31066>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00766.warc.gz"} |
How can I get monthly interest?
How can I get monthly interest?
Monthly Interest Rate Calculation Example
1. Convert the annual rate from a percent to a decimal by dividing by 100: 10/100 = 0.10.
2. Now divide that number by 12 to get the monthly interest rate in decimal form: 0.10/12 = 0.0083.
How is Rd maturity amount calculated?
Recurring Deposit Calculator in India Deposit Tenure – Maturity value depends on the duration for which you invest money in RD. Generally, RD tenure ranges from 6 months to 10 years. Interest
Compound Frequency – This calculates the maturity amount based on monthly deposits you make in the RD account.
What is the first step to open an account at the bank?
How to open a bank account
1. Decide what kind of account you need.
2. Look for an account with the services you’ll use most.
3. Shop around to compare rates and fees.
4. Choose a financial institution and location.
5. Open your account.
Why is PAN card necessary for bank account?
Cash Deposit/Purchase in Banks – To prevent money laundering, PAN card is compulsory for anyone who wants to make a cash deposit or withdrawal of more than Rs. 50,000/- in a day from the bank. Credit
card request to a bank, co-operative banks or any other financial institution requires PAN card.
How do banks calculate monthly interest?
These steps can be followed to convert annual interest rate into monthly interest rate:
1. The annual rate needs to be converted from percentage to decimal format (divide the rate by 100)
2. Divide the annual rate (the decimal form) by 12.
3. Multiply the annual rate with the interest amount to obtain the monthly rate.
What is simple interest and example?
Generally, simple interest paid or received over a certain period is a fixed percentage of the principal amount that was borrowed or lent. For example, say a student obtains a simple-interest loan to
pay one year of college tuition, which costs $18,000, and the annual interest rate on the loan is 6%.
How fast can I open a bank account?
Processing your application and issuing your account number could take a day or two. And you may have to wait seven to 10 business days to receive a debit card and some account information in the
mail. If you’d prefer to open an account in-person, the process may take much longer (i.e. 30 minutes to an hour or more).
How do banks earn monthly interest?
1. Bank Fixed Deposits or Bank FDs.
2. Post Office Monthly Income Scheme or Post Office MIS.
3. The Monthly Income Scheme (MIS) offered by Department of Posts currently offers an interest rate of 7.3 per cent per annum, payable monthly.
4. Pradhan Mantri Vaya Vandana Yojana (PMVVY)
5. Senior Citizen Savings Scheme.
Is simple interest good or bad?
Essentially, simple interest is good if you’re the one paying the interest, because it will cost less than compound interest. However, if you’re the one collecting the interest—say, if you have money
deposited in a savings account—then simple interest is bad.
Which bank gives highest interest in saving account?
IDFC First Bank
How can I calculate interest?
To calculate simple interest, use this formula:
1. Principal x rate x time = interest.
2. $100 x .05 x 1 = $5 simple interest for one year.
3. $100 x .05 x 3 = $15 simple interest for three years.
How do you introduce simple interest to students?
Explain Interest With a Simple Interest Worksheet Your students now know that simple interest is a charge for money borrowed, but explaining the math behind it is best done with examples. Here,
you’ll want to introduce the simple interest equation: Interest = Principal * Rate * Time.
How do you open a bank account?
How to Open a Bank Account
1. Choose a Bank or Credit Union.
2. Visit the Bank Branch or Website.
3. Pick the Product You Want.
4. Provide Your Information.
5. Your Financial History.
6. Consent to the Terms.
7. Print, Sign, and Mail (If Required)
8. Fund Your Account.
Which bank is best for monthly interest?
Banks or NBFCs with high returns are considered as the best option for monthly income schemes. SBI, HDFC Bank, PNB Housing Finance and Bajaj Finserv are some of the top banks or NBFCs for monthly
interest FD scheme.
How do I open a bank account from home?
Apply online To open a Savings Account online, all you need is a mobile phone or laptop. You can initiate the process with just your mobile number, and the rest of the procedure does not vary much.
You simply upload your form and documents to the online portal, instead of physically going to the bank.
What is the interest of 1 lakh in SBI?
Is FD interest paid monthly?
A Fixed Deposit is the sum of money you keep with a bank as a deposit for a fixed period of time against which the bank pays you a fixed rate of interest. The other is a non-cumulative option which
is paid in the form of monthly interest or quarterly or on maturity.
What is principal amount in simple interest?
The principal is the money borrowed or initial amount of money deposited in a bank. The principal is denoted by a capital letter “P.” Interest (R) The extra amount you earn after depositing or the
extra amount you pay when settling a loan.
What is the formula of amount?
Use this simple interest calculator to find A, the Final Investment Value, using the simple interest formula: A = P(1 + rt) where P is the Principal amount of money to be invested at an Interest Rate
R% per period for t Number of Time Periods. Where r is in decimal form; r=R/100; r and t are in the same units of time.
What is maturity amount?
Maturity value is the amount to be received on the due date or on the maturity of instrument/security that investor is holding over its period of time and it is calculated by multiplying the
principal amount to the compounding interest which is further calculated by one plus rate of interest to the power which is time …
How much interest will 5 lakhs earn?
Formula of Calculation of EMI
Loan amount Interest Rate EMI per month
5 Lakh 8.35% Rs. 6,159
10 Lakh 8.50% Rs. 9,847
15 Lakh 8.60% Rs. 13,112
20 Lakh 8.70% Rs. 17,610 | {"url":"https://somme2016.org/how-to/how-can-i-get-monthly-interest/","timestamp":"2024-11-06T02:54:41Z","content_type":"text/html","content_length":"47284","record_id":"<urn:uuid:bf35fe15-9cb6-4b44-9d09-c549324c35ee>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00074.warc.gz"} |
Math Interview
This math strategy encourages students to deepen their math understanding. Students conduct an interview with a partner to help them ask meaningful questions, have productive discourse, use academic
language, and reflect metacognitively.
Math Interview
Students are paired with a partner and provided with discussion prompts. The prompts guide the conversation, promote the use of academic language, and help students reflect on their mathematical
thought processes. In the beginning, the conversation will seem robotic, but the more they engage with the math interview the more they internalize the questioning process. After a few interviews,
they become better at having a productive conversation centered around math.
1. Group students into pairs, Partner A and Partner B.
2. Assign each partner a different math task, Task A and Task B.
3. For the first 5-7 minutes, have students work on the problem independently with no help. Encourage students to write something or everything they know about the problem. They do not have to have
a correct answer to conduct the interview.
4. Instruct students to write their partner’s name on their interview paper.
5. Partner A interviews Partner B first.
6. If Partner B does not finish the task after the 5-7 minutes of independent work time, Partner A can help Partner B, but only after Partner A completes interviewing Partner B.
7. Once that interview is finished and the problem is correct and complete, the process starts again with Partner B interviewing Partner A.
8. If Partner B does not finish their task, then Partner B can help Partner A if needed but only after the interview is complete.
9. (Optional) Consider watching A Simple Strategy to Get Students Talking About Math video for an example on how to incorporate this strategy into your class.
Edutopia. (2022, February 4). A simple strategy to get students talking about math. Edutopia. Retrieved May 4, 2023, from https://www.edutopia.org/video/
Placement In Lesson
• Evaluate/Assessment
• Explain/Closing
• Extend/Additional Learning Activity
• Active Engagement
• Collaborate
• Conversation Starter
• Critical Thinking
• Elaborate
• Evaluate
• Problem Solving
• Reason
• Reflection
• Review
• Self-assessment
• Speak & Listen
• Writing Across Curriculum | {"url":"https://learn.k20center.ou.edu/strategy/3296","timestamp":"2024-11-06T00:49:43Z","content_type":"text/html","content_length":"18084","record_id":"<urn:uuid:f17900c7-2f72-41be-ac80-771b8cb728f2>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00385.warc.gz"} |
sequence geometric or neither
Detemine whether the given sequence geometric or neither if the sequence is srithmetic, find the common difference, if it is geometric, find the common ratio. If the sequence is arithmetic or
geometric, find the sum text{of} left{left(frac{5}{8}right)^{n}right}
Answered question
Detemine whether the given sequence geometric or neither if the sequence is srithmetic, find the common difference, if it is geometric, find the common ratio.
If the sequence is arithmetic or geometric, find the $\sum \text{ }\text{of}\text{ }\left\{{\left(\frac{5}{8}\right)}^{n}\right\}$ | {"url":"https://plainmath.org/algebra-ii/1569-detemine-geometric-srithmetic-difference-geometric-arithmetic-geometric","timestamp":"2024-11-11T08:52:08Z","content_type":"text/html","content_length":"198944","record_id":"<urn:uuid:e8f19baf-2f26-41b1-abf1-f69fe61f23bc>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00566.warc.gz"} |
HELP PLEASE - forms 80, 1221 - and other attachments
Hello - another question from me:goofy:
Right, Ive attached about 25ish probably more documents to our online 175 app - I THINK (and hope
BUT...... because they are more than 1 page long each - when Ive printed them off and then scanned them to attach,there are going to be about 4 pages for each - is that right??? Or is there someway I
can attach them as the pdf document that they are online....all in one go rather than page after page after page, if you get what I mean?? (I'm running out of email adresses to use to get 5 free
trials of converting jpg's to pdf's - so if there's a quick route - I'd like to take it
One other thing,, I noticed on another thread that someone had attached their cv - do i need to attach that as well?... its not on any list Ive looked at ........... apart from passports, birth certs
etc,, work-wise I have attached nursing diploma, anmc verification, nbsa reg and practicing cert, academic transcripts, current work references and a payslip... surely that's enough? Anyone know?
Thanks in advance!!
Hi Hazel
I am sure when we uploaded docs that were more than one page long, we just called them Form 80 page 1, form 80 page 2 etc. and uploaded them one by one.
Also, I am sure that I uploaded my CV into the work section. Only needed to be the CV of the main visa applicant though. And they like it to be in the Aussie format.
Good luck
What is form 80 and 1221? Why have applied for 176 and there is no mention of these forms?? Also as a nurse, my transcript was sent directly from the NMC to the ANMC, as my training transcript was
not available. Surely i dont need another now. If i do, then they like to send it direct, so how do i do that rather than upload it myself?? It doesnt ask for work references or anything specific to
do with my work. have they requested this seperately from you??
Sorry for all the questions. as you say is a bit of a minefield!
Hi, i hope i can help in some way. When scanning your docs in (dependant on software and type of scanner) rather than scanning as a jpg, hopefully you should be able to scan as a word doc or as a
pdf, you might have to play around with the setting etc. Hopefully if you can scan as a word/pdf doc then there should be a setting that will ask if there are anymore docs to be scanned, this will
enable you to send all docs at once. I made a similar mistake where i was scanning as a photo and was unable to send as my ISP would not allow me to send such a large file.
We used a lexmark scanner and took a bit of playing around with until i managed to work it out.
What scanner do you have?
Have just re read my documentation!! The document check list doesnt mention form 80 or 1221, but my email from DIAC does. Just shows how easy you can miss stuff. No wonder you were saying about the
amount of time it takes Hazel M.
I think id best get started on this paperwork. Aso how does an ozzie CV differ to a UK one?
Does anyone know the address to get my transcript sent directly to DIAC from the NMC?
What is form 80 and 1221? Why have applied for 176 and there is no mention of these forms?? Also as a nurse, my transcript was sent directly from the NMC to the ANMC, as my training transcript
was not available. Surely i dont need another now. If i do, then they like to send it direct, so how do i do that rather than upload it myself?? It doesnt ask for work references or anything
specific to do with my work. have they requested this seperately from you??
Sorry for all the questions. as you say is a bit of a minefield!
Hi Donna,
Hubby's verification was sent direct from NMC to NBSA and ANMC but when you lodge the visa app (ours was 175 not 176 so perhaps different) you get an automated email that tells you what documents to
attach, one of the things was
' satisfactory skills assessment for your nominated occupation. Please include all evidence of work experience you used to obtain this assessment.'
That's why (aswell as ANMC verification letter and NBSA reg cert) I attached the transcript, not sure what you would do if you only ever used the NMC verification for skills assessment. Maybe just
leave it out and if there's a problem, your case officer would ask for it to be sent??
The forms were also metioned in the automated email of documents required, it says...
'- form 80 - Personal Particulars for Character Assessment;
- form 1221 - Additional Personal Particulars Information;
I hadnt heard of them til I read the email. You just download & print them from the immi website.
Hope that helps - and hope Im doing right! They haven't asked me separately for references etc I just thought it cant hurt to attach them, I classed them as 'evidence of work experience' so they can
see his current place of employment.
The email also though did not ask for CV so thats why I was going to leave the CV out - I dont see how a CV can make any difference to the visa since all work / skills etc have been accounted for
with other bits and bobs.
Questions, questions!!!!!!!!!! It's hard to know what you're meant to do!! Alll advice is very welcome
Ooops - posted at same time - I see you found the info!! Still not sure asbout the CV tho - Im inclined to leave it out. Not sure how an aussie one is different too!
Hi, i hope i can help in some way. When scanning your docs in (dependant on software and type of scanner) rather than scanning as a jpg, hopefully you should be able to scan as a word doc or as a
pdf, you might have to play around with the setting etc. Hopefully if you can scan as a word/pdf doc then there should be a setting that will ask if there are anymore docs to be scanned, this
will enable you to send all docs at once. I made a similar mistake where i was scanning as a photo and was unable to send as my ISP would not allow me to send such a large file.
We used a lexmark scanner and took a bit of playing around with until i managed to work it out.
What scanner do you have?
Thanks for that!
Its an epson scanner - Im not sure if it will scan in anything other than jpg's - will check the settings when I scan the forms (when I eventually fill them in - am so tired at the mo after scanning
and attaching til 2am this morning! then no sleep coz baby is teething
ANYWAY... otherwise I'll just have to convert to pdfs page by page and label them pg 1, 2 etc!
Better start form filling
Ok so ive just done a quck google search with "scanning docs as a pdf with epsom scanner" and found this, obviously i dont the model number so it might be worth doing the same with the model number
but it looks promising, have a look at the link and then try the same with your model number. again hope this helps,
scanning multiple docs (again dont no the model of your scanner but most will prob work similar) and came up with this
Hope this helps in some way
Ok so ive just done a quck google search with "scanning docs as a pdf with epsom scanner" and found this, obviously i dont the model number so it might be worth doing the same with the model
number but it looks promising, have a look at the link and then try the same with your model number. again hope this helps,
scanning multiple docs (again dont no the model of your scanner but most will prob work similar) and came up with this
Hope this helps in some way
A ha!!! That is brilliant! Wish I'd found that pdf settings page when I 1st started, about 30 documents ago - lol!!! Thank you.
HELP!!! Have not received any email, automated or otherwise from DIAC and submitted visa app 175 online 6/11/09. No mention of either of these forms on the doc checklist for visa either, do you have
to do them later????????????????????
HELP!!! Have not received any email, automated or otherwise from DIAC and submitted visa app 175 online 6/11/09. No mention of either of these forms on the doc checklist for visa either, do you
have to do them later????????????????????
All I can tell you is that I got an automated email on the day after the visa application was lodged. The email is basically an acknowledgement of the app and gives a list of documents to attach if
you havent already done so, and the link on which to attach them. This is the only list where I had heard of the forms, didn;t know what they were before.
Maybe worth an email / phone call to them if you havent heard anything from them at all - just to be on the safe side , Ive copied and pasted the end of the email for you with the phone no on...
After reading the information at the beginning of this letter on Skilled Migration priority processing arrangements and expected timeframes for the processing of applications, should you have any
enquiry relating to your GSM application please use the online enquiry form available on our website at: http://www.immi.gov.au/contacts/forms/gsm/post.htm.
In Australia you can call 1300 364 613 and from outside Australia by calling +61 1300 364 613 between 9 am and 4 pm Monday to Friday.
For general enquiries you can call 13 18 81 between 9 am and 4 pm Monday to Friday.
Hope that helps.
Thanks, think will be a late night as am now waiting for 9am their time to call!!!!!! Have also sent an email...................getting PARANOID, more red wine needed!!!! LOL!!!
Hopefully the phone call will answer your question & you'll just be awaiting your CO still. As long as the wine doesnt make you nod off & forget to call!!!
I actually rang Australia House today and spoke to a lovely man who answered all my queries and was able to reassure me that SA SS was also now attached to our application. He said not to be
concerned as they usually only email if docs required etc..... so PANIC OVER, for this week anyway!!!! Thanks. | {"url":"https://www.pomsinadelaide.com/topic/13011-help-please-forms-80-1221-and-other-attachments/","timestamp":"2024-11-01T19:09:21Z","content_type":"text/html","content_length":"289823","record_id":"<urn:uuid:5db34824-8496-4ae4-b4a8-ba7c7d88ef5c>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00038.warc.gz"} |
Jan Łukasiewicz | Polish philosopher | Britannica
Learn about this topic in these articles:
Assorted References
• association with Leśniewski
□ In Stanisław Leśniewski: Life
…vocation to the influence of Jan Łukasiewicz, also a pupil of Twardowski and then a privat dozent at the University of Lwów. Already learned in the history of logic, to which he was to make
outstanding contributions, Łukasiewicz was at the time studying the work of the German logicians Gottlob…
Read More
contribution to
• logic
□ In laws of thought
In 1920 Jan Łukasiewicz, a leading member of the Polish school of logic, formulated a propositional calculus that had a third truth-value, neither truth nor falsity, for Aristotle’s future
contingents, a calculus in which the laws of contradiction and of excluded middle both failed. Other systems have…
Read More
□ In formal logic: Nonstandard versions of PC
…owing to the Polish logician Jan Łukasiewicz, are the same as the ordinary two-valued ones when the arguments have the values 1 and 0. The other values are intended to be intuitively
plausible extensions of the principles underlying the two-valued calculus to cover the cases involving half-true arguments. Clearly, these…
Read More
• syllogistic notation
□ In syllogistic
…of the early 20th-century logician Jan Łukasiewicz, the general terms or term variables can be expressed as lowercase Latin letters a, b, and c, with capitals reserved for the four
syllogistic operators that specify A, E, I, and O propositions. The proposition “Every b is an a” is now written…
Read More | {"url":"https://www.britannica.com/biography/Jan-Lukasiewicz","timestamp":"2024-11-07T14:30:45Z","content_type":"text/html","content_length":"49449","record_id":"<urn:uuid:051c6c69-e1b7-4fb1-a4b4-5731ac88916e>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00467.warc.gz"} |
TI-83 BASIC Math Programs (Linear Algebra, Vector, Matrix) - ticalc.org
Name Size Date Rating Description
(Parent Dir) folder Up to TI-83 BASIC Math Programs
complex.zip 1k 98-05-09 Complex v1.0
Converts complex numbers between rectangular and polar forms.
compnumbers.zip 1k 03-03-12 Complex Numbers
Complex Number program which equates Cartesian to Polar, Polar to Cartesian, z1.z2, z1/z2 and De Moivre's Theorm AND Gives answer in Polar & Cartesian. Good for
VCE/HSC Specialist maths
cpxsis.zip 6k 02-10-15 CPXSIS
Do you want to solve a system of linear equations with both real and complex vars, in an easy way and without the writing of matrices?. This program uses the
Gaussian partial pivoting technique to make it. If it is not enough, you can also choose between polar and rectangular format on displaying results.
cramer.zip 1k 00-07-14 Cramer's Rule v1.10
This program solves for a system of linear equations using Cramer's Rule.
determ2.zip 1k 98-08-30 2 Dimensional Determinents
Evaluates 2 Dimensional Determinents
determ3.zip 1k 98-08-30 3 Dimensional Determinents
Evaluates 3 Dimensional Determinents
determ4.zip 1k 98-08-30 4 Dimensional Determinents
Evaluates 4 Dimensional Determinents
determ.zip 1k 00-09-02 Determinants v1.2
This is a program that can find X, Y, and Z(if applicable) in a second and third order equation and finds the determinants. *update* program is 90 bytes bigger
than V1.11 but saves about 400 bytes RAM overall.
echelon83.zip 1k 04-01-08 Matrix Reduction
Reduces a matrix to echelon or reduced echelon
echelon.zip 1k 99-11-08 Echelon Form of Matrix
Echelon Form of matrix - can display every step.
eigen.zip 1k 03-03-06 Eigen Values
This program finds the two Eigen values of a 2 x 2 matrix. Works for non real values too.
eye.zip 1k 02-01-01 Identity Matrix Creator
This program creates an identity matrix of specified size. It stores it as matrix [I].
imagine83.zip 1k 00-06-23 Imagine83
Find the modulus and argument of complex numbers, a+bi equation for mod and arg, and complex and real roots of quadratics
imatrix.zip 2k 02-09-30 iMatrix
solve matrix with complex numbers
itsolve.zip 3k 99-12-20 Iteration Solver
Iterative Techniques for Solving Linear System (Au=B). Using Jacobi, Gauss-Seidel or Relaxation method.
linsys.zip 1k 06-08-20 Linear System Solver
Solves linear systems of equations. Enables you to enter in complex values as well since the TI-83 is incapable of complex values in matrices. Perfect for
electrical circuits.
lqs.zip 1k 01-06-27 Linear-Quadratic Systems
Solve system of equations.
mag.zip 1k 00-11-23 Mag
Finds the magnitude of a vector and gives the answer as the square root of a number and also the decimal answer.
matrix.zip 1k 99-11-10 Cholesky Decomposition
Cholesky decomposition, LU, and QR factorization
matsolver.zip 1k 00-04-17 Matrix Solver
This program uses matrices [A] [B] and [C]: Add, Subtract, and Multiply both ways. It also finds and displays (ODTAI): Original, Determinent, Transpose,
Adjoint, and Inverse of a matrix. The biggest time saver is that this program finds the Echelon form of a matrix as well as show each step. BIG TIME SAVER!!!!
nmath.zip 2k 99-10-30 NMath v1.0
Helps in solving almost ALL systems of equations, whether with 2 or 99 equations (the only limitation is memory).
simeqatn.zip 1k 02-01-01 Simultaneous equations solver
Solves simultanes equations and give interception
solvsys.zip 1k 99-12-03 Solvsys v2.06
A program that solves a system of equations; it solves up to six variables.
sys23.zip 1k 01-02-07 System of Equations
I think this is the shortest program out there that would solve system of equations in 2 and 3 variables. I used the rref function.
sysofequ.zip 1k 99-10-25 SysOfEqu Solver
Solves for a system of equations consisting of either 2 or 3 unknown variables.
systems.zip 1k 98-01-19 System Solver 1.1 (AShell83 Compatible)
system.zip 1k 01-01-04 System of Equations
This program would solve system of equations in 2 and 3 variables. Very easy to use.
vector3.zip 1k 99-11-08 N-D Vector Operations
vectors1.zip 2k 00-11-09 Vector Solver
This program will add and subtract any number of vectors you need, it will also multiply a vector by a coefficent and then add or subtract it. A must need for
anyone in pre-cal or physiscs!!
vectors.zip 1k 00-10-07 Vectors v1.0
This program will find the magnitudes, dot product, cross product, projection, and angle between any two vectors you supply. Great for Calculus 3.
vector.zip 1k 99-11-08 2-D Vector Operations and Graphing | {"url":"https://guide.ticalc.org/pub/83/basic/math/linearalgebra/rate.html","timestamp":"2024-11-05T07:27:09Z","content_type":"text/html","content_length":"29207","record_id":"<urn:uuid:cafdcaef-e8e6-4718-b338-27c54fdab236>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00384.warc.gz"} |
Motor Torque Constant in context of ac motor torque
01 Sep 2024
Title: The Motor Torque Constant: A Fundamental Parameter in AC Motor Torque Analysis
The motor torque constant, also known as the torque constant or Kt, is a crucial parameter in the analysis of alternating current (AC) motors. It represents the ratio of the motor’s torque output to
the current flowing through its windings. In this article, we will delve into the concept of the motor torque constant, its significance, and the mathematical formulation that relates it to other key
AC motors are widely used in various industrial applications due to their high efficiency, reliability, and flexibility. The performance of an AC motor is characterized by its torque output, which is
a function of the current flowing through its windings, the voltage applied, and the motor’s design parameters. The motor torque constant (Kt) is a fundamental parameter that plays a crucial role in
determining the motor’s torque output.
The motor torque constant (Kt) is defined as the ratio of the motor’s torque output (T) to the current flowing through its windings (I):
Kt = T / I
Mathematical Formulation:
The torque output of an AC motor can be mathematically formulated using the following equation:
T = Kt * I
where T is the torque output, Kt is the motor torque constant, and I is the current flowing through the windings.
The motor torque constant (Kt) is a critical parameter in AC motor design and analysis. It determines the motor’s ability to produce torque at a given current level. A higher value of Kt indicates
that the motor can produce more torque for a given current, while a lower value indicates reduced torque output.
Relationship with Other Parameters:
The motor torque constant (Kt) is related to other key parameters such as the motor’s voltage rating (V), current rating (I), and power factor (PF). The following equation relates Kt to these
Kt = (P * PF) / (2 * π * f * V)
where P is the motor’s power output, PF is the power factor, f is the frequency of the AC supply, and V is the voltage rating.
In conclusion, the motor torque constant (Kt) is a fundamental parameter in AC motor analysis. It determines the motor’s ability to produce torque at a given current level and is related to other key
parameters such as power output, power factor, frequency, and voltage rating. A thorough understanding of Kt is essential for designing and analyzing AC motors.
1. IEEE Standard 112-1996, “IEEE Standard Test Procedure for Polyphase Induction Motors”
2. Krause, P. C., & Wasynczuk, O. (2002). Electric Drives. Prentice Hall.
3. Nasar, S. A. (1987). Electromechanics and Electrical Machinery. McGraw-Hill.
ASCII Formulae:
Kt = T / I T = Kt * I Kt = (P * PF) / (2 * π * f * V)
Note: The above article does not provide numerical examples, but rather focuses on the theoretical aspects of the motor torque constant and its significance in AC motor analysis.
Related articles for ‘ac motor torque ‘ :
• Reading: **Motor Torque Constant in context of ac motor torque **
Calculators for ‘ac motor torque ‘ | {"url":"https://blog.truegeometry.com/tutorials/education/f4e1d12495ea2df987f545c3ca9a80b0/JSON_TO_ARTCL_Motor_Torque_Constant_in_context_of_ac_motor_torque_.html","timestamp":"2024-11-12T05:27:37Z","content_type":"text/html","content_length":"16888","record_id":"<urn:uuid:5f22e755-47d1-4401-99ca-6ffcd0a39910>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00312.warc.gz"} |
LM03 Introduction to Fixed Income Valuation
IFT Notes for Level I CFA^® Program
LM03 Introduction to Fixed Income Valuation
Part 5
3.5. Yield Measures for Floating-Rate Notes
Floating-rate notes (FRN) are instruments where coupon/interest payments change from period to period based on a reference interest rate. Some important points to note about floating-rate notes:
• The objective is to protect the investor from volatile interest rates.
• The reference rate, usually a money market instrument such as a T-bill or an interbank offered rate like Libor, is used to calculate the interest payments. This rate is determined at the
beginning of each period, but the interest is actually paid at the end of the period.
• Often, the coupon rate of an FRN is not just the reference rate, but a certain number of basis points, called the spread, is added to the reference rate.
• The specified yield spread over the reference rate is called the quoted margin.
• The spread remains constant throughout the life of the bond. The amount of spread depends on the credit quality of the issuer.
Example of a Floating Rate Note
Moody’s assigned a long-term credit rating of A2 to Nationwide, U.K.’s largest building society. Nationwide issued a perpetual floating-rate bond with a coupon rate of 6 month Libor + 240 basis
points. The 2.4% quoted margin is a reflection of its credit quality. On the other hand, AAA-rated Apple sold a three-year bond at 0.05% over three-month Libor in 2013 as its credit risk was very
Coupon rate of a FRN = reference rate + quoted margin
The required margin is the spread demanded by the market. We saw that the quoted margin, or the spread over the reference rate, is fixed at the time of issuance. But what happens if the floater’s
credit risk changes and investors demand an additional spread for bearing this risk? The required margin is the additional spread over the reference rate such that the FRN is priced at par on a rate
reset date. If the required margin increases (decreases) because of a credit downgrade (upgrade), the FRN price will decrease (increase).
For example, assume a floater has a coupon rate of 3-month Libor plus 50 basis points. Six months after issuance, the issuer’s credit rating is downgraded and the market demands a required spread of
75 basis points. The coupon paid by the floater is lower than what the market demands. As a result, the floater would be priced at a discount to par as the cash flow is now discounted at a higher
rate. The amount of the discount will be the present value of differential cash flows, i.e., the difference between the required and quoted margins. Conversely, if the credit rating of the issuer
improves, the required margin would be below the quoted margin, and the market will demand a lower spread.
│How required margin affects a floater’s price at reset date │
│Relationship between quoted and required margin │Floater’s price at reset date│
│Required margin = quoted margin │Par │
│Required margin > quoted margin │Discount (below par) │
│Required margin < quoted margin │Premium (above par) │
The required margin is also called the discount margin. FRNs can be valued using the model shown below.
PV = present value of the FRN
Index = reference rate, stated as an annual percentage rate
QM = quoted margin, stated as an annual percentage rate
FV = future value paid at maturity, or the par value of the bond
m = periodicity of the floating-rate note, or the number of payment periods per year
DM = discount margin; the required margin stated as an annual percentage rate
N = number of evenly spaced periods to maturity
Equation 1 for reference:
How to interpret the floating-rate note equation:
• Think of it as an extension of Equation 1, we will draw similarities between the two equations.
• (Index + QM) is the annual rate.
• Since it is divided by periodicity we get the interest payment for that period.
• In Equation 1, cash flows are discounted at 1 + r. For FRN, the cash flow for the first period is discounted at 1+
This is considered a simple model because of the following assumptions:
• The value is calculated only on reset dates. There is no accrued interest, so the flat price is the full price.
• The model uses a 30/360 day-count convention, which means periodicity is always an integer.
• The same reference rate is used in the numerator and denominator for all payment periods.
A 3-year Italian floating-rate note pays 3-month Euribor plus 0.75%. Assuming that the floater is priced at 99, calculate the discount margin for the floater if the 3-month Euribor is constant at 1%
(assume 30/360 day-count convention).
The interest payment for each period is (1.00% + 0.75%) / 4 = 0.4375%.
The keystrokes to calculate the market discount rate are: PV = -99, PMT = 0.4375, FV = 100, N = 12, CPT I/Y = 0.5237 * 4, I/Y = 2.09%
The discount margin for the floater is 2.09% – 1% = 1.09% or 109 bps.
3.6. Yield Measures for Money Market Instruments
Money market instruments are short-term debt securities. They have maturities of one year or less, ranging from overnight repos to one-year certificates of deposit.
│Differences in Bond Market and Money Market Yields │
│Bond Market Yields │Money Market Yields │
│YTM is annualized and compounded. │Rate of return is annualized but not compounded; stated on a simple interest basis. │
│YTM calculated using the standard time value of money approach using a financial calculator.│Non-standard calculation using discount rates and add-on rates. │
│YTM stated for a common periodicity for all times to maturity. │Instruments with different times to maturity have different periodicities for the annual rate.│
The calculation of interest of a money market instrument is different from calculating accrued interest on a bond. Money market instruments can be classified into two categories based on how the
rates are quoted:
• Discount rates: T-bills, commercial paper (CP), and banker’s acceptances are discount instruments. It means they are issued at a discounted price, and pay par value at maturity. They do not make
any payments before maturity. The difference between the purchase price and par value at redemption is the interest earned. Note: Do not confuse this discount rate with the rate used in TVM
Price of a money market instrument quoted on a discount basisPV = FV x (1-
PV = present value of the money market instrument
FV = face value of the money market instrument
days to maturity = actual number of days between settlement and maturity
year = number of days in the year. Most markets use a 360-day year.
DR = discount rate, stated as an annual percentage rate (APR)
Money market discount rate
Money market discount rate DR =
FV-PV = interest earned on the instrument (this is the discount)
• Add-on rates: Bank term deposits, repos, certificates of deposit, and indices such as Libor/Euribor are quoted on an add-on basis. For a money market instrument quoted using an add-on rate,
interest is added to the principal to calculate the redemption amount at maturity. In simple terms, if PV is the initial principal amount, days is the days to maturity, and year is the number of
days in a year, then the amount to be paid at maturity is: FV = PV + PV x AOR x
Present value or price of a money market instrument quoted on an add-on basisPV =
PV = present value of the money market instrument
FV = amount paid at maturity including interest
days = number of days between settlement and maturity
year = number of days in a year
AOR = add-on rate stated as an annual percentage rate
Add-on rate
AOR =
Instructor’s Note
The primary difference between a discount rate (DR) and an add-on rate (AOR) is that the interest is included on the face value of the instrument for DR whereas it is added to the principal in case
of AOR.
Suppose that a banker’s acceptance will be paid in 91 days. It has a face value of $1,000,000. It is quoted at a discount rate of 5%. What is the price of the banker’s acceptance?
PV = FV *
FV = 1,000,000
Days = 91
Year = 360 days
DR = 5%
PV =
Suppose that a Canadian pension fund buys a 180-day banker’s acceptance (BA) with a quoted add-on rate of 4.38% for a 365-day year. If the initial principal amount is CAD 10 million, what is the
redemption amount due at maturity?
0.0438 =
FV = $10,216,000
Comparing Discount Basis with Add-On Yield
There are two approaches to compare the return of two money market instruments if one is quoted on a discount basis and the other on an add-on basis.
First approach: If you don’t want to memorize one more formula, follow this approach:
1. Determine the present value of the instrument quoted on a discount basis.
2. Use the present value to determine the AOR.
3. Compare the two AORs to see which instrument offers a better return.
Second approach: Use the following relationship between AOR and DR:
Relationship between AOR and DR
A T-bill with a maturity of 90 days is quoted at a discount rate of 5.25%. Its par value is $100. Calculate the add-on rate.
Using the first approach:
FV = 100; Days = 90; Year = 360 days; DR = 5.25%
PV =
AOR =
Using the second approach:
AOR =
│Money market Instrument│Quotation Basis│Number of Days in the Year │Quoted Rate│
│A │Discount Rate │360 │3.23% │
│B │Discount Rate │365 │3.46% │
│C │Add-on Rate │360 │3.25% │
│D │Add-on Rate │365 │3.35% │
Given the four 90-day money market instruments, calculate the bond equivalent yield for each of them. Which instrument offers the highest rate of return if the credit risk is the same?
A. The price of instrument A is
B. The price of instrument B is 100 –
C. The redemption amount per 100 of principal is 100 +
D. The quoted rate for instrument D of 3.35% is the bond equivalent yield. Instrument B offers the highest rate of return on a bond equivalent yield basis.
Periodicity of the Annual Rate
Another difference between yield measures in the money market and the bond market is the periodicity of the annual rate. Because bond yields-to-maturity are computed using interest rate compounding,
there is a well-defined periodicity. For instance, bond yields-to-maturity for semi-annual compounding are annualized for a periodicity of two. Money market rates are computed using simple interest
without compounding. In the money market, the periodicity is the number of days in the year divided by the number of days to maturity. Therefore, money market rates for different times-to-maturity
have different periodicities.
Periodicity of a money market instrument =
A 90-day T-bill has a BEY of 11%. Calculate its semiannual bond yield.
The 11% BEY of the T-bill is based on a periodicity of 365/90. The periodicity of a semiannual bond is 2. Give this information, we can create the equation shown below and solve for r. We will get r
= 0.1115 = 11.15%. | {"url":"https://ift.world/booklets/fixed-income-introduction-to-fixed-income-valuation-part5/","timestamp":"2024-11-10T04:58:55Z","content_type":"text/html","content_length":"280806","record_id":"<urn:uuid:c345cec2-102d-4e50-bffc-6cd020435d20>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00460.warc.gz"} |
John Allen Paulos
Below - A Random Miscellany of My Writings
"John Allen Paulos is one of the greatest mathematical storytellers of all time, one of those rare individuals who can quite beautifully use the medium of story to communicate math and
statistics. In this immensely entertaining work (Once Upon A Number), he also does the reverse: he uses the medium of math (and statistics) to tell us about the medium of story. Each of his
insights and one-liners is great and together they offer a profound, new view of the relation between math and stories."
— Doron Zeilberger, winner, Steele prize in mathematics and the Euler medal Counting On Dyscalculia in Discover Magazine The Math of Romantic Crushes in The New York Times Monty Hall Revisited on
ABCNews.com Evolution and the Development Of Complexity In Economics And Biology in The Guardian
"If you've ever wanted to recapture that sense of near-mystical rapture, there is no better place than this book (Beyond Numeracy), and no more humane and enthusiastic mentor than John Allen
Paulos ..... Paulos painstakingly presents even the most recondite ideas in concrete, easily visualizable terms. ..... But Paulos's principal genius lies in the recognition that many of those
humans are "unknowing mathophiles" who "have been thinking math all their lives without realizing it." For those, for anyone, who ever sat rapt at the austere beauty of a proof and later wondered
where the wonder went, it's here."
— Curt Suplee. Washington Post
"The world, as seen by Paulos (in Innumeracy), is less mysterious, yet somehow more elegant, less magical, yet more wonderful. So many apparently strange events do, in fact, become all the more
magnificent in their not-so-fearful symmetry."
— Arthur Salm, San Diego Tribune Why Don’t Americans Elect Scientists in The New York Times Mammogram Math in The New York Times Magazine Metric Mania in The New York Times Magazine It Was Easy to
Show How Much BP Oil Spilled on ABCNews.com Review of Nate Silver’s The Signal and the Noise in The Washington Post
"It would be great to have John Allen Paulos living next door. Every morning when you read the paper and came across some story that didn't seem quite right - that had the faint odor of illogic
hovering about it - you could just lean out the window and shout, "Jack! Get the hell over here!"..... Paulos, who wrote the bestseller Innumeracy (the mathematical equivalent of illiteracy), has
now written a fun, spunky, wise little book (A Mathematician Reads the Newspaper) that would be helpful to both the consumers of the news and its purveyors."
— Joel Achenbach, Washington Post Stories vs. Statistics, an Opinionator piece in The New York Times Review of He Conquered the Conjecture (on Grigory Perelman and the Poincare conjecture) in the New
York Review of Books Review of Infinitesimal in The New York Times The Nonsense of Numerology During 9-11 on ABCNews.com Ruminations on the Gore-Bush Tie in Florida In 2000 in The Philadelphia Daily
"(In Innumeracy) Paulos makes numbers, probability, and statistics perform like so many trained seals for the reader's entertainment and enlightenment."
— Jon Van, Chicago Tribune
"(Throughout I Think, Therefore I Laugh) Paulos is brilliant at capturing difficult ideas in a memorable joke. I've never laughed so much while thinking so hard."
— Brian Butterworth, author of What Counts: How Every Brain Is Hardwired for Math.
"Many scholars nowadays write seriously about the ludicrous. Some merely manage to be dull. A few - like Paulos - are brilliant in an odd endeavor (Mathematics and Humor)."
— Harvey Mindess, Los Angeles Times Groucho Meets Russell, from I Think, Therefore I Laugh A New Biblical Hoax, from Once Upon a Number We're Measuring Bacteria With A Yardstick, in The New York
Times A Self-Referential Parable, from Once Upon a Number Tsongkerclintkinbro Wins, in The New York Times
"His brief essays are arranged alphabetically by topic, and as with one of its precursors, Voltaire's Philosophical Dictionary, it makes for an often jolly little book (Beyond Numeracy). ... The
lore has it that when Pythagoras discovered his great theorem on right triangles, he was so transported that he sacrificed 100 head of oxen to the gods as a token of gratitude. On this scale, Mr.
Paulos's book is surely worth an ox or two."
— Jim Holt, Wall Street Journal Where Mathematics Comes From, American Scholar Fractal Nature Of Human Consciousness, from Beyond Numeracy O.J. Simpson - Murder He Wrote, Philadelphia Inquirer
Irreligion - Why The Arguments For God Just Don't Add Up reviews of Irreligion
"Paulos' goal is nothing less than lofty. He hopes to reconcile the personal aspect of human life, which refers to the stories we tell and live by, and the impersonal, which is essentially
mathematical, statistical and scientific. ... Both delightful and wise, this little book (Once Upon a Number) cries out to be kept close at hand, to be looked into from time to time, to be
treasured as an old friend."
— Anthony Day, Los Angeles Times Mathematics And The Unabomber, in The New York Times Review Of Stephen Gould's Full House, in The Washington Post Remembrance Of Innumeracies Past, from Innumeracy
Review Of Quantification Of Western Society, The Los Angeles Times
"To combat [innumeracy] John Allen Paulos has concocted the perfect vaccine: this book, which is in many ways better than an entire high school math education! Our society would be unimaginably
different if the average person truly understood the ideas in this marvelous and important book. It is probably hopelessly optimistic to dream this way, but I hope that Innumeracy might help
launch a revolution in math education that would do for innumeracy what Sabin and Salk did for polio."
— Douglas Hofstadter, author of Godel, Escher, and Bach Review Of Arthur Clarke's 3001, The New York Times A Math Quiz for Presidential Candidates on ABCNews.com Math Moron Myths, The New York Times
Computation Versus Understanding?, Forbes Magazine Sexual Codes In U.S. Constitution, from Once Upon a Number
"He's done it again. John Allen Paulos has written a charming book (Irreligion) that takes you on a sojourn of flawless logic, with simple and clear examples drawn from math, science, and pop
culture. At journey's end, Paulos has left you with plenty to think about, whether you are religious, irreligious, or anything in between."
— Neil deGrasse Tyson, astrophysicist, American Museum of Natural History and author of Death By Black Hole and Other Cosmic Quandaries 12 Irreligious Questions For The Presidential Candidates on
ABCnews.com Dick Cheney's 1% Solution on ABCNews.com Wittgenstein and Lewis Carroll, from Mathematics and Humor Final Tallies Minus Exit Polls – A Statistical Mystery in the Philadelphia Daily News
Ramsey Order & Self-Organization on ABCNews.com
"This is press criticism, but not of the usual kind .... This is press criticism of the sort that George Orwell had in mind when he observed that what's important isn't news, and what's news
isn't important. ..... This is a subversive book. Paulos argues that the world is so complex that it cannot be accurately described, much less manipulated. ...... a wise and thoughtful book (A
Mathematician Reads the Newspaper) , which skewers much of what everyone knows to be true."
— Lee Dembart, Los Angeles Times Creationist Probability Mistakes on ABCNews.com An Abortion Reductio Ad Absurdum on ABCNews.com Review Of Erdos, Nash Biographies, The Los Angeles Times Lanchester’s
Law and the Misguided (Putting It Kindly) Iraq War on ABCNews.com Review of the Theory That Would Not Die in The New York Times
"Paulos is the real McCoy, and his newest offering, A Mathematician Plays the Stock Market, is a double-chocolate nougat of a book - a rich, densely packed delight. It is also rueful, funny and
disarmingly personal."
— Kai Maristed, Los Angeles Times A Market Paradox, Wall Street Journal, adapted from A Mathematician Plays the Stock Market American Sucker, in The Los Angeles Times My Lowest Ebb(Ers) in The Wall
Street Journal A Stock Market Scam, from Innumeracy Review of the Theory That Would Not Die in The New York Times
"A quirky and surprisingly poignant book (A Numerate Life) about the struggle to make sense of one’s own life story. With the help of logic and statistical reasoning, Paulos shines a light on the
paradoxes and delusions that so often bedevil our remembrance of things past."
— Steven Strogatz — Professor of Mathematics, Cornell University, and author of The Joy of X Wittgenstein and Carroll, from Mathematics and Humor “Electrified Paté”, in The American Scholar A
Numerate Life - Contents God and Girls in Thailand on 3quarksdaily.com Twitter "War" - Neil deGrasse Tyson and John Allen Paulos | {"url":"https://johnallenpaulos.com/writings.html","timestamp":"2024-11-12T12:48:36Z","content_type":"text/html","content_length":"29559","record_id":"<urn:uuid:33117c05-bffc-4a6d-8ae3-85e8cdae57ad>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00761.warc.gz"} |
Course on Algorithmic Paradigms – Study Algorithms
Algorithmic Paradigms in short define how you can go about solving a problem. If you are starting to solve problems in computer science, this course is designed for you. However, this course is also
recommended for you if you are revising all the basics and want a quick recap of some famous techniques.
What are Algorithmic Paradigms?
Whenever you try to build something, or develop something, an initial research and foundation is very essential. Think about it, a skyscraper will only be as stable as its base. In the world of
programming, when you are about to solve more and more problems, you need to understand there can be several different ways to solve a problem. These techniques help you to lay that foundation.
You don’t have a “one size fits all” in computer applications. To be a better programmer, you need to learn about different algorithmic paradigms. These paradigms define the general structure of how
the solution would look like. Once, you get the idea, you can start thinking in a particular direction.
What Algorithmic Paradigms do we cover?
The word paradigm means “pattern of thought“. Hence you can think about a solution in several ways. The course covers the following different techniques:
What knowledge do you need?
The course does not require any sort of programming knowledge to start. You will find very easy to follow examples with objects that you use in day to day lives. This helps to understand these
algorithmic paradigms in a very quick and easy way. We do not do any kind of coding in these tutorials.
Feel free to mention any doubts in the comments below. Good Luck!!
Secrets of Algorithmic Paradigms [Introduction] | Study Algorithms
Brute Force algorithms with real life examples | Study Algorithms
Divide and Conquer algorithms with real life examples | Study Algorithms
Greedy Algorithms with real life examples | Study Algorithms
Dynamic Programming easy to understand real life examples | Study Algorithms
0/1 Knapsack Problem easy explanation using Dynamic Programming. | Study Algorithms
Recursion paradigms with real life examples | Study Algorithms
Backtracking made easy | Algorithmic Paradigms | Real life example | Study Algorithms
0 comments
a tech-savvy guy and a design buff... I was born with the love for exploring and want to do my best to give back to the community. I also love taking photos with my phone to capture moments in my
life. It's my pleasure to have you here.
previous post
Algorithmic Paradigms – Dynamic Programming
You may also like | {"url":"https://studyalgorithms.com/theory/course-on-algorithmic-paradigms/","timestamp":"2024-11-04T14:37:28Z","content_type":"text/html","content_length":"291148","record_id":"<urn:uuid:7826e937-155d-4b0d-8754-8c66671fc07b>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00026.warc.gz"} |
Introductory Business Statistics with Interactive Spreadsheets – 1st Canadian Edition
Chapter 7. Some Non-Parametric Tests
Remember that you use statistics to make inferences about populations from samples. Most of the techniques statisticians use require that two assumptions are met. First, the population that the
sample comes from is normal. Second, whenever means and variances were computed, the numbers in the data are cardinal or interval, meaning that the value given an observation not only tells you which
observation is larger or smaller, but how much larger or smaller. There are many situations when these assumptions are not met, and using the techniques developed so far will not be appropriate.
Fortunately, statisticians have developed another set of statistical techniques, non-parametric statistics, for these situations. Three of these tests will be explained in this chapter. These three
are the Mann-Whitney U-test, which tests to see if two independently chosen samples come from populations with the same location; the Wilcoxon rank sum test, which tests to see if two paired samples
come from populations with the same location; and Spearman’s rank correlation, which tests to see if two variables are related. The Mann-Whitney U-test is also presented in an interactive Excel
What does non-parametric mean?
To a statistician, a parameter is a measurable characteristic of a population. The population characteristics that usually interest statisticians are the location and the shape. Non-parametric
statistics are used when the parameters of the population are not measurable or do not meet certain standards. In cases when the data only order the observations, so that the interval between the
observations is unknown, neither a mean nor a variance can be meaningfully computed. In such cases, you need to use non-parametric tests. Because your sample does not have cardinal, or interval,
data, you cannot use it to estimate the mean or variance of the population, though you can make other inferences. Even if your data are cardinal, the population must be normal before the shape of the
many sampling distributions are known. Fortunately, even if the population is not normal, such sampling distributions are usually close to the known shape if large samples are used. In that case,
using the usual techniques is acceptable. However, if the samples are small and the population is not normal, you have to use non-parametric statistics. As you know, “there is no such thing as a free
lunch”. If you want to make an inference about a population without having cardinal data, or without knowing that the population is normal, or with very small samples, you will have to give up
something. In general, non-parametric statistics are less precise than parametric statistics. Because you know less about the population you are trying to learn about, the inferences you make are
less exact.
When either (1) the population is not normal and the samples are small, or (2) when the data are not cardinal, the same non-parametric statistics are used. Most of these tests involve ranking the
members of the sample, and most involve comparing the ranking of two or more samples. Because we cannot compute meaningful sample statistics to compare to a hypothesized standard, we end up comparing
two samples.
Do these populations have the same location? The Mann-Whitney U-test
In Chapter 5, “The t-Test”, you learned how to test to see if two samples came from populations with the same mean by using the t-test. If your samples are small and you are not sure if the original
populations are normal, or if your data do not measure intervals, you cannot use that t-test because the sample t-scores will not follow the sampling distribution in the t-table. Though there are two
different data problems that keep you from using the t-test, the solution to both problems is the same, the non-parametric Mann-Whitney U-test. The basic idea behind the test is to put the samples
together, rank the members of the combined sample, and then see if the two samples are mixed together in the common ranking.
Once you have a single ranked list containing the members of both samples, you are ready to conduct a Mann-Whitney U-test. This test is based on a simple idea. If the first part of the combined
ranking is largely made up of members from one sample, and the last part is largely made up of members from the other sample, then the two samples are probably from populations with different
averages and therefore different locations. You can test to see if the members of one sample are lumped together or spread through the ranks by adding up the ranks of each of the two groups and
comparing the sums. If these rank sums are about equal, the two groups are mixed together. If these ranks sums are far from equal, each of the samples is lumped together at the beginning or the end
of the overall ranking.
Willy works for an immigration consulting company in Ottawa that helps new immigrants who apply under the Canadian federal government’s Immigrant Investor Program (IIP). IIP facilitates the
immigration process for those who choose to live in small cities. The company tasked Willy to set up a new office in a location close to the places where more potential newcomer investors will choose
to settle down. Attractive small cities (less than 100,000 population) in Canada offer unique investing opportunities for these newcomers. After consulting with the company, Willy agrees that the new
regional office for the immigration consulting services will be moved to a smaller city.
Before he starts looking at office buildings and other major factors, Willy needs to decide if more small cities for which the newcomers are qualified are located in the eastern or the western part
of Canada. Willy finds his data on www.moneysense.ca/canadas-best-places-to-live-2014-full-ranking, which lists the best cities for living in Canada. He selects the top ten small cities from the list
on this website. Table 7.1 shows the top 18 Canadian small cities along with their populations and ranks.
Table 7.1 Top 18 Canadian Small Cities along with Their Populations and Ranks
Row Cities Populations Locations Ranks
1 St. Albert, AB 64,377 West 1
2 Strathcona County, AB 98,232 West 2
3 Boucherville, QC 41,928 East 6
4 Lacombe, AB 12,510 West 17
5 Rimouski, QC 53,000 East 18
6 Repentigny, QC 85,425 East 20
7 Blainville, QC 57,058 East 21
8 Fredericton, NB 99,066 East 22
9 Stratford, ON 32,217 East 23
10 Aurora, ON 56,697 East 24
11 North Vancouver, B.C. (District Municipality) 88,085 West 25
12 North Vancouver, B.C. (City) 51,650 West 28
13 Halton Hills, ON 62,493 East 29
14 Newmarket, ON 84,902 East 31
15 Red Deer, AB 96,650 West 33
16 West Vancouver, B.C. 44,226 West 36
17 Brossard, QC 83,800 East 38
18 Camrose, AB 18,435 West 40
Ten of the top 18 are in the east, and eight are in the west, but these ten represent only a sample of the market. It looks like the eastern places tend to be higher in the top ten, but is that
really the case? If you add up the ranks, the ten eastern cities have rank sum of 92 while the western cities have a rank sum of 79, but there are more eastern cities, and even if there were the same
number, would that difference be due to a different average in the rankings, or is it just due to sampling?
The Mann-Whitney U-test can tell you if the rank sum of 79 for the western cities is significantly less than would be expected if the two groups really were about the same and 10 of the 18 in the
sample happened to be from the same group. The general formula for computing the Mann-Whitney U for the first of two groups is:
[latex]U_1 = n_1n_2 + [n_1(n_1 + 1)]/2 - T_1[/latex]
T[1] = the sum of the ranks of group 1
n[1] = the number of members of the sample from group 1
n[2] = the number of members of the sample from group 2
This formula seems strange at first, but a little careful thought will show you what is going on. The last third of the formula, –T[1], subtracts the rank sum of the group from the rest of the
formula. What is the first two-thirds of the formula? The bigger the total of your two samples, and the more of that total that is in the first group, the bigger you would expect T[1] to be,
everything else being equal. Looking at the first two-thirds of the formula, you can see that the only variables in it are n[1] and n[2], the sizes of the two samples. The first two-thirds of the
formula depends on the how big the total group is and how it is divided between the two samples. If either n[1] or n[2] gets larger, so does this part of the formula. The first two-thirds of the
formula is the maximum value for T[1], the rank sum of group 1. T[1] will be at its maximum if the members of the first group were all at the bottom of the rankings for the combined samples. The U[1]
score then is the difference between the actual rank sum and the maximum possible. A bigger U[1] means that the members of group 1 are bunched more at the top of the rankings and a smaller U[1] means
that the members of group 1 are bunched near the bottom of the rankings so that the rank sum is close to its maximum. Obviously, a U-score can be computed for either group, so there is always a U[1]
and a U[2]. If U[1] is larger, U[2] is smaller for a given n[1] and n[2] because if T[1] is smaller, T[2] is larger.
What should Willy expect if the best cities are in one region rather than being evenly distributed across the country? If the best cities are evenly distributed, then the eastern group and the
western group should have U’s that are close together, since neither group will have a T that is close to either its minimum or its maximum. If one group is mostly at the top of the list, then that
group will have a larger U since its T will be small, and the other group will have a smaller U since its T will be large. U[1] + U[2] is always equal to n[1]n[2], so either one can be used to test
the hypothesis that the two groups come from the same population. Though there is always a pair of U-scores for any Mann-Whitney U-test, the published tables only show the smaller of the pair. Like
all of the other tables you have used, this one shows what the sampling distribution of U’s is like.
The sampling distribution, and this test, were first described by H.B. Mann and D.R. Whitney (1947).^[1] While you have to compute both U-scores, you only use the smaller one to test a two-tailed
hypothesis. Because the tables only show the smaller U, you need to be careful when conducting a one-tail test. Because you will accept the alternative hypothesis if U is very small, you use the U
computed for that sample, which H[a] says is farther down the list. You are testing to see if one of the samples is located to the right of the other, so you test to see if the rank sum of that
sample is large enough to make its U small enough to accept H[a]. If you learn to think through this formula, you will not have to memorize all of this detail because you will be able to figure out
what to do.
Let us return to Willy’s problem. He needs to test to see if the best cities in which to locate the office are concentrated in one part of the country or not. He can attack his problem with a
hypothesis test using the Mann-Whitney U-test. His hypotheses are:
H[o]: The distributions of eastern and western city rankings among the “best places for new investors” are the same.
H[a]: The distributions are different.
Remembering the formula from above, he finds his two U-values:
He calculates the U for the eastern cities:
and for the western cities:
The smaller of his two U-scores is U[w] = 37. This is known as a Mann-Whitney test statistic. Because 37 is larger than 14, his decision rule tells him that the data support the null hypothesis that
eastern and western cities rank about the same. All these calculations can also be performed within the interactive Excel template provided in Figure 7.1.
Figure 7.1 Interactive Excel Template for the Mann-Whitney U-Test – see Appendix 7.
This template has two worksheets. In the first worksheet, named “DATA”, you need to use the drop-down list tab under column E (Locations), select Filter, and then checkmark East. This will filter all
the data and select only cities located in eastern Canada. Simply copy (Ctrl+c) the created data from the next column F (Ranks). Now, move to the next worksheet, named “Mann-Whitney U-Test”, and
paste (Ctrl+v) into the East column. Repeat these steps to create your data for western cities and paste them into the West column on the Mann-Whitney U-Test worksheet. As you paste these data, the
ranks of all these cities will instantly be created in the next two columns. In the final step, type in your alpha, either .05 or .01. The appropriate final decision will automatically follow. As you
can see on the decision cell in the template, H[o] will not be rejected. This result indicates that we arrive at the same conclusions as above: Willy decides that the new regional immigration
consulting office can be in either an eastern or western city, at least based on the best places for new investors to Canada. The decision will depend on office cost and availability, airline
schedules, etc.
Testing with matched pairs: the Wilcoxon signed ranks test
During your career, you will often be interested in finding out if the same population is different in different situations. Do the same workers perform better after a training session? Do customers
who used one of your products prefer the “new improved” version? Are the same characteristics important to different groups? When you are comparing the same group in two different situations, you
have “matched pairs”. For each member of the population or sample you have what happened under two different sets of conditions.
There is a non-parametric test using matched pairs that allows you to see if the location of the population is different in the different situations. This test is the Wilcoxon signed ranks test. To
understand the basis of this test, think about a group of subjects who are tested under two sets of conditions, A and B. Subtract the test score under B from the test score under A for each subject.
Rank the subjects by the absolute size of that difference, and look to see if those who scored better under A are mostly lumped together at one end of your ranking. If most of the biggest absolute
differences belong to subjects who scored higher under one of the sets of conditions, then the subjects probably perform differently under A than under B.
The details of how to perform this test were published by Frank Wilcoxon (1945).^[2] He found a method to find out if the subjects who scored better under one of the sets of conditions were lumped
together or not. He also found the sampling distribution needed to test hypotheses based on the rankings. To use Wilcoxon’s test, collect a sample of matched pairs. For each subject, find the
difference in the outcome between the two sets of conditions and then rank the subjects according to the absolute value of the differences. Next, add together the ranks of those with negative
differences and add together the ranks of those with positive differences. If these rank sums are about the same, then the subjects who did better under one set of conditions are mixed together with
those who did better under the other set of conditions, and there is no difference. If the rank sums are far apart, then there is a difference between the two sets of conditions.
Because the sum of the rank sums is always equal to [N(N-1)]/2], if you know the rank sum for either the positives or the negatives, you know it for the other. This means that you do not really have
to compare the rank sums; you can simply look at the smallest and see if it is very small to see if the positive and negative differences are separated or mixed together. The sampling distribution of
the smaller rank sums when the populations the samples come from are the same was published by Wilcoxon. A portion of a table showing this sampling distribution is in Table 7.2.
Table 7.2 Sampling Distribution
One-Tail Significance .05 .025 .01
Two-Tail Significance .1 .05 .02
Number of Pairs, N
Wendy Woodruff is the president of the Student Accounting Society at Thompson Rivers University (TRU) in Kamloops, BC. Wendy recently came across a study by Baker and McGregor [Empirically assessing
the utility of accounting student characteristics, unpublished, 1993] in which both accounting firm partners and students were asked to score the importance of student characteristics in the hiring
process. A summary of their findings is in Table 7.3.
Table 7.3 Data on Importance of Student Attributes
Attribute Mean: Student Rating Mean: Big Firm Rating
High Accounting GPA 2.06 2.56
High Overall GPA .08 -.08
Communication Skills 4.15 4.25
Personal Integrity 4.27 7.5
Energy, drive, enthusiasm 4.82 3.15
Appearance 2.68 2.31
Data source: Baker and McGregor
Wendy is wondering if the two groups think the same things are important. If the two groups think that different things are important, Wendy will need to have some society meetings devoted to
discussing the differences. Wendy has read over the article, and while she is not exactly sure how Baker and McGregor’s scheme for rating the importance of student attributes works, she feels that
the scores are probably not distributed normally. Her test to see if the groups rate the attributes differently will have to be non-parametric since the scores are not normally distributed and the
samples are small. Wendy uses the Wilcoxon signed ranks test.
Her hypotheses are:
H[o]: There is no true difference between what students and Big 6 partners think is important.
H[a]: There is a difference.
She decides to use a level of significance of .05. Wendy’s test is a two-tail test because she wants to see if the scores are different, not if the Big 6 partners value these things more highly.
Looking at the table, she finds that, for a two-tail test, the smaller of the two sums of ranks must be less than or equal to 2 to accept H[a].
Wendy finds the differences between student and Big 6 scores, and ranks the absolute differences, keeping track of which are negative and which are positive. She then sums the positive ranks and sums
the negative ranks. Her work is shown in Table 7.4.
Table 7.4 The Worksheet for the Wilcoxon Signed Ranks Test
Attribute Mean Student Rating Mean Big Firm Rating Difference Rank
High Accounting GPA 2.06 2.56 -.5 -4
High Overall GPA .08 -.08 .16 2
Communication Skills 4.15 4.25 -.1 -1
Personal Integrity 4.27 7.5 -2.75 -6
Energy, drive, enthusiasm 4.82 3.15 1.67 5
Appearance 2.68 2.31 .37 3
sum of positive ranks = 2+5+3=10
sum of negative ranks = 4+1+6=11
number of pairs=6
Her sample statistic, T, is the smaller of the two sums of ranks, so T=10. According to her decision rule to accept H[a] if T < 2, she decides that the data support H[o] that there is no difference
in what students and Big 6 firms think is important to look for when hiring students. This makes sense, because the attributes that students score as more important, those with positive differences,
and those that the Big 6 score as more important, those with negative differences, are mixed together when the absolute values of the differences are ranked. Notice that using the rankings of the
differences rather than the size of the differences reduces the importance of the large difference between the importance students and Big 6 partners place on personal integrity. This is one of the
costs of using non-parametric statistics. The Student Accounting Society at TRU does not need to have a major program on what accounting firms look for in hiring. However, Wendy thinks that the
discrepancy in the importance in hiring placed on personal integrity by Big 6 firms and the students means that she needs to schedule a speaker on that subject. Wendy wisely tempers her statistical
finding with some common sense.
Are these two variables related? Spearman’s rank correlation
Are sales higher in those geographic areas where more is spent on advertising? Does spending more on preventive maintenance reduce downtime? Are production workers with more seniority assigned the
most popular jobs? All of these questions ask how the two variables move up and down together: When one goes up, does the other also rise? When one goes up does the other go down? Does the level of
one have no effect on the level of the other? Statisticians measure the way two variables move together by measuring the correlation coefficient between the two.
Correlation will be discussed again in the next chapter, but it will not hurt to hear about the idea behind it twice. The basic idea is to measure how well two variables are tied together. Simply
looking at the word, you can see that it means co-related. If whenever variable X goes up by 1, variable Y changes by a set amount, then X and Y are perfectly tied together, and a statistician would
say that they are perfectly correlated. Measuring correlation usually requires interval data from normal populations, but a procedure to measure correlation from ranked data has been developed.
Regular correlation coefficients range from -1 to +1. The sign tells you if the two variables move in the same direction (positive correlation) or in opposite directions (negative correlation) as
they change together. The absolute value of the correlation coefficient tells you how closely tied together the variables are; a correlation coefficient close to +1 or to -1 means they are closely
tied together, a correlation coefficient close to 0 means that they are not very closely tied together. The non-parametric Spearman’s rank correlation coefficient is scaled so that it follows these
same conventions.
The true formula for computing the Spearman’s rank correlation coefficient is complex. Most people using rank correlation compute the coefficient with a computer program, but looking at the equation
will help you see how Spearman’s rank correlation works. It is:
n = the number of observations
d = the difference between the ranks for an observation
Keep in mind that we want this non-parametric correlation coefficient to range from -1 to +1 so that it acts like the parametric correlation coefficient. Now look at the equation. For a given sample
size n, the only thing that will vary is Σd^2. If the samples are perfectly positively correlated, then the same observation will be ranked first for both variables, another observation ranked second
for both variables, etc. That means that each difference in ranks d will be zero, the numerator of the fraction at the end of the equation will be zero, and that fraction will be zero. Subtracting
zero from one leaves one, so if the observations are ranked in the same order by both variables, the Spearman’s rank correlation coefficient is +1. Similarly, if the observations are ranked in
exactly the opposite order by the two variables, there will many large d^2’s, and Σd^2 will be at its maximum. The rank correlation coefficient should equal -1, so you want to subtract 2 from 1 in
the equation. The middle part of the equation, 6/n(n^2-1), simply scales Σd^2 so that the whole term equals 2. As n grows larger, Σd^2 will grow larger if the two variables produce exactly opposite
rankings. At the same time, n(n^2-1) will grow larger so that 6/n(n^2-1) will grow smaller.
Located in Saskatchewan, Robin Hood Company produces flour, corn meal, grits, and muffin, cake, and quickbread mixes. In order to increase its market share to the United States, the company is
considering introducing a new product, Instant Cheese Grits mix. Cheese grits is a dish made by cooking grits, combining the cooked grits with cheese and eggs, and then baking the mixture. It is a
southern favourite in the United States, but because it takes a long time to cook, it is not served much anymore. The Robin Hood mix will allow someone to prepare cheese grits in 20 minutes in only
one pan, so if it tastes right, the product should sell well in the southern United States along with other parts of North America. Sandy Owens is the product manager for Instant Cheese Grits and is
deciding what kind of cheese flavouring to use. Nine different cheese flavourings have been successfully tested in production, and samples made with each of those nine flavourings have been rated by
two groups: first, a group of food experts, and second, a group of potential customers. The group of experts was given a taste of three dishes of “homemade” cheese grits and ranked the samples
according to how well they matched the real thing. The customers were given the samples and asked to rank them according to how much they tasted like “real cheese grits should taste”. Over time,
Robin Hood has found that using experts is a better way of identifying the flavourings that will make a successful product, but they always check the experts’ opinion against a panel of customers.
Sandy must decide if the experts and customers basically agree. If they do, then she will use the flavouring rated first by the experts. The data from the taste tests are in Table 7.5.
Table 7.5 Data from Two Taste Tests of Cheese
Flavouring Expert Ranking Consumer Ranking
NYS21 7 8
K73 4 3
K88 1 4
Ba4 8 6
Bc11 2 5
McA A 3 1
McA A 9 9
WIS 4 5 2
WIS 43 6 7
Sandy decides to use the SAS statistical software that Robin Hood has purchased. Her hypotheses are:
H[o]: The correlation between the expert and consumer rankings is zero or negative.
H[a]: The correlation is positive.
Sandy will decide that the expert panel does know best if the data support H[a] that there is a positive correlation between the experts and the consumers. She goes to a table that shows what value
of the Spearman’s rank correlation coefficient will separate one tail from the rest of the sampling distribution if there is no association in the population. A portion is shown in Table 7.6.
Table 7.6 Some
One-Tail Critical
Values for Spearman’s
Rank Correlation
n α=.05 α=.025 α=.10
5 .9
6 .829 .886 .943
7 .714 .786 .893
8 .643 .738 .833
9 .6 .683 .783
10 .564 .648 .745
11 .523 .623 .736
12 .497 .591 .703
Using α = .05, going across the n = 9 row in Table 7.6, Sandy sees that if H[o] is true, only .05 of all samples will have an r[s] greater than .600. Sandy decides that if her sample rank correlation
is greater than .600, the data support the alternative, and flavouring K88, the one ranked highest by the experts, will be used. She first goes back to the two sets of rankings and finds the
difference in the rank given each flavour by the two groups, squares those differences, and adds them together, as shown in Table 7.7.
Table 7.7 Sandy’s Worksheet
Flavouring Expert ranking Consumer ranking Difference d²
NYS21 7 8 -1 1
K73 4 3 1 1
K88 1 4 -3 9
Ba4 8 6 2 4
Bc11 2 5 -3 9
McA A 3 1 2 4
McA A 9 9 0 0
WIS 4 5 2 3 9
WIS 43 6 7 -1 1
Sum 38
Then she uses the formula from above to find her Spearman rank correlation coefficient:
[latex]1- [6/(9)(92-1)][38] = 1 - .3166 = .6834[/latex]
Her sample correlation coefficient is .6834, greater than .600, so she decides that the experts are reliable and decides to use flavouring K88. Even though Sandy has ordinal data that only rank the
flavourings, she can still perform a valid statistical test to see if the experts are reliable. Statistics have helped another manager make a decision.
Though they are less precise than other statistics, non-parametric statistics are useful. You will find yourself faced with small samples, populations that are obviously not normal, and data that
are not cardinal. At those times, you can still make inferences about populations from samples by using non-parametric statistics.
Non-parametric statistical methods are also useful because they can often be used without a computer, or even a calculator. The Mann-Whitney U-test and the t-test for the difference of sample means
test the same thing. You can usually perform the U-test without any computational help, while performing a t-test without at least a good calculator can take a lot of time. Similarly, the Wilcoxon
signed rank test and Spearman’s rank correlation are easy to compute once the data have been carefully ranked. Though you should proceed to the parametric statistics when you have access to a
computer or calculator, in a pinch you can use non-parametric methods for a rough estimate.
Notice that each different non-parametric test has its own table. When your data are not cardinal, or your populations are not normal, the sampling distributions of each statistic is different. The
common distributions, the t, the χ^2, and the F, cannot be used.
Non-parametric statistics have their place. They do not require that we know as much about the population, or that the data measure as much about the observations. Even though they are less precise,
they are often very useful.
1. Mann, H.B., & Whitney, D.R. (1947). On a test of whether one or two random variables is stochastically larger than the other. Annals of Mathematical Statistics, 18, 50-60. ↵
2. Wilcoxon, F. (1945). Individual comparisons by ranking methods. Biometrics, 1(6), 80-83. ↵ | {"url":"https://opentextbc.ca/introductorybusinessstatistics/chapter/some-non-parametric-tests-2/","timestamp":"2024-11-14T18:40:59Z","content_type":"text/html","content_length":"98210","record_id":"<urn:uuid:ffad093a-293f-427c-a058-621a211b57c4>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00556.warc.gz"} |
Ch. 8 Critical Thinking Items - Physics | OpenStax
Critical Thinking Items
8.1 Linear Momentum, Force, and Impulse
Cars these days have parts that can crumple or collapse in the event of an accident. How does this help protect the passengers?
a. It reduces injury to the passengers by increasing the time of impact.
b. It reduces injury to the passengers by decreasing the time of impact.
c. It reduces injury to the passengers by increasing the change in momentum.
d. It reduces injury to the passengers by decreasing the change in momentum.
How much force would be needed to cause a 17 kg ⋅ m/s change in the momentum of an object, if the force acted for 5 seconds?
a. 3.4 N
b. 12 N
c. 22 N
d. 85 N
8.2 Conservation of Momentum
A billiards ball rolling on the table has momentum p[1]. It hits another stationary ball, which then starts rolling. Considering friction to be negligible, what will happen to the momentum of the
first ball?
a. It will decrease.
b. It will increase.
c. It will become zero.
d. It will remain the same.
A ball rolling on the floor with momentum p[1] collides with a stationary ball and sets it in motion. The momentum of the first ball becomes p'[1], and that of the second becomes p'[2]. The
directions of p[1], p'[1], and p'[2] are all the same. Compare the magnitudes of p[1] and p'[2].
a. Momenta p[1] and p'[2] are the same in magnitude.
b. The sum of the magnitudes of p[1] and p'[2] is zero.
c. The magnitude of p[1] is greater than that of p'[2].
d. The magnitude of p'[2] is greater than that of p[1].
Two cars are moving in the same direction. One car with momentum p[1] collides with another, which has momentum p[2]. Their momenta become p'[1] and p'[2] respectively. Considering frictional losses,
compare (p'[1] + p'[2] ) with (p[1] + p[2]).
a. The value of (p'[1] + p'[2] ) is zero.
b. The values of (p[1] + p[2]) and (p'[1] + p'[2] ) are equal.
c. The value of (p[1] + p[2]) will be greater than (p'[1] + p'[2] ).
d. The value of (p'[1] + p'[2] ) will be greater than (p[1] + p[2]). | {"url":"https://openstax.org/books/physics/pages/8-critical-thinking-items","timestamp":"2024-11-13T06:16:20Z","content_type":"text/html","content_length":"497474","record_id":"<urn:uuid:ef927280-2ec6-4b62-9fdf-ef4603178320>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00025.warc.gz"} |
Complex Fraction Worksheet With Answers
Complex Fraction Worksheet With Answers. Multiplying positive & negative numbers. Grade nine math practice test.
simplify fractions worksheet with answers from printablecollection.z21.web.core.windows.net
Arithmetic (all content) > unit 4. Web free complex fractions worksheet collection for kids a fraction with fractions in the numerator, denominator, or both is referred to as a complicated fraction.
Follow the normal rules for dividing fractions: | {"url":"http://studydblamb123.s3-website-us-east-1.amazonaws.com/complex-fraction-worksheet-with-answers.html","timestamp":"2024-11-03T04:35:08Z","content_type":"text/html","content_length":"24172","record_id":"<urn:uuid:54af4d1e-e6a2-4945-b620-0c4705d37eee>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00586.warc.gz"} |
Determinant maximization
Let us solve a determinant maximization problem. Given two ellipsoids
\[ E_1 &= \{x ~|~ x^TP_1x \leq 1\}\\ E_2 &= \{x ~|~ x^TP_2x \leq 1\}\\ \]
Find the ellipsoid \(x^TPx \leq 1\) with smallest possible volume that contains the union of \(E_1\) and \(E_2\). By using the fact that the volume of the ellipsoid is proportional to \(\det(P)\) and
applying the S-procedure, it can be shown that this problem can be written as
The objective function \(-\det(P)\) (which is minimized) is not convex, but monotonic transformations can render this problem convex. One alternative is the logarithmic transform, leading to
minimization of\(-\log(\det(P))\) instead, which only is supported if you use SDPT3 version 4 (see below).
For other SDP solvers YALMIP uses \(-\det(P)^{1/m}\) where \(m\) is dimension of \(P\) (in other words, the geometric mean of the eigenvalues, which is a monotonic function of the determinant). The
concave function \(\det(P)^{1/m}\), available by applying geomean on a Hermitian matrix in YALMIP, can be modeled using semidefinite and second order cones, hence any SDP solver can be used for
solving determinant maximization problems.
n = 2;
P1 = randn(2);P1 = P1*P1'; % Generate random ellipsoid
P2 = randn(2);P2 = P2*P2'; % Generate random ellipsoid
t = sdpvar(2,1);
P = sdpvar(n,n);
F = [1 >= t >= 0];
F = [F, t(1)*P1-P >= 0];
F = [F, t(2)*P2-P >= 0];
sol = optimize(F,-geomean(P));
x = sdpvar(2,1);
plot(x'*value(P)*x <= 1,[],'b'); hold on
plot(x'*value(P1)*x <= 1,[],'r');
plot(x'*value(P2)*x <= 1,[],'y');
If you have the dedicated solver SDPT3 installed and want to use it to handle the logarithmic term directly, you must use the dedicated command logdet for the objective and explicitly select SDPT3.
This command can not be used in any other construction than in the objective function, compared to the geomean operator that can be used as any other variable in YALMIP, since it a so called
nonlinear operator.
x = sdpvar(2,1);
plot(x'*value(P)*x <= 1,[],'b'); hold on
plot(x'*value(P1)*x <= 1,[],'r');
plot(x'*value(P2)*x <= 1,[],'y');
Note that if you use the logdet command but not explicitly select a solver that supports logdet terms natively, YALMIP will use \(-\det(P)^{1/m}\) as objective function instead anyway. This will not
cause any problems if your objective function is a simple logdet expression (since the two functions are monotonically related). However, if you have a mixed objective function such as \(\
operatorname{trace}(P)-\log(\det(P))\), you can only use SDPT3 version 4. | {"url":"https://yalmip.github.io/tutorial/maxdetprogramming/","timestamp":"2024-11-09T16:26:13Z","content_type":"text/html","content_length":"22169","record_id":"<urn:uuid:cec2ef01-0431-49ee-914f-1b3bfb3cb942>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00894.warc.gz"} |
What is Geometry Geometry Definition, Formulas and Shapes
What is Geometry? – Geometry Definition, Formulas and Shapes
Geometry deals with the shapes, sizes, and various figures which are in our everyday life. Check the detailed information for Geometry Definition, Formulas.
Geometry: Geometry is the oldest branch of mathematics which deals with the size, shapes, position angles, and dimensions of things. Geometry deals with the things which are used in daily life.
Geometry includes 2D as well as 3D shapes i.e 2 dimensional and 3-dimensional shapes.
In-plane geometry, 2d shapes such as triangles, squares, rectangles, and circles are also called flat shapes. In solid geometry, 3d shapes such as a cube, cuboid, cone, etc. are also called solids.
The basic geometry is based on points, lines, and planes which come under coordinate geometry.
In this article, we are providing you the detailed information about geometry, geometry shapes, and geometry formulas. Understanding geometry will help candidates to solve the problems related to
that and asked in the competitive exams.
Geometry Definition
Geometry word is derived from Ancient Greek words – ‘Geo’ means ‘Earth’ and ‘metron’ means ‘measurement’. Geometry is concerned with properties of space that are related to distance, shape, size, and
relative position of figures. The basics of geometry depend on majorly points, lines, angles, and planes.
What are the Branches of Geometry?
The branches of geometry are as follows:
• Algebraic geometry
• Discrete geometry
• Differential geometry
• Euclidean geometry
• Convex geometry
• Topology
Plane Geometry (2D Geometry)
Plane Geometry means flat shapes which can be drawn on a piece of paper. These include lines, circles & triangles of two dimensions. Plane geometry is also known as a two-dimensional geometry.
Example of 2D Geometry is square, triangle, rectangle, circle, lines, etc. Here we are providing you with the properties of the 2D shapes below.
A point is a location or place on a plane. A dot usually represents them. It is important to understand that a point is not a thing, but a place. The point has no dimension and it has the only
The line is straight with no curves), has no thickness, and extends in both directions without end infinitely.
Angles in Geometry
Angles are formed by the intersection of two lines called rays at the same point. which is called the vertex of the angle.
Types of Angle
Acute Angle – An Acute angle is an angle smaller than a right angle ie. it can be from 0 – 90 degrees.
Obtuse Angle – An Obtuse angle is more than 90 degrees but is less than 180 degrees.
Right Angle –
An angle of 90 degrees.
Straight Angle –
An angle of 180 degrees is a straight angle, i.e. the angle formed by a straight line.
A polygon is an unrestricted shape figure that has a minimum of three sides and three vertices. The term‘ poly’ means’ numerous‘ and’ gon’ means’ angle‘. therefore, polygons have numerous angles. The
border and area of polygon depend upon its types. The distribution of polygons is described as grounded on the figures of sides and vertices.
Types of Polygons
The types of polygons are:
• Triangles
• Quadrilaterals
• Pentagon
• Hexagon
• Heptagon
• Octagon
• Nonagon
• Decagon
Geometry Shapes and Formulas: FAQ
Q. What is the area of Triangle?
Ans: The area of a triangle is 1/2( b × h).
Q. What are the types of angles?
Ans: These are 4 types of angles ,i.e. acute, obtuse, right, and straight angle.
Q. What is Geometry?
Ans: Geometry is a branch of mathematics that deals with shapes, angles, dimensions, and sizes. | {"url":"https://sscegy.testegy.com/2023/11/geometry-definition-formulas-and-shapes.html","timestamp":"2024-11-14T06:37:27Z","content_type":"text/html","content_length":"1049064","record_id":"<urn:uuid:3024118d-4625-4108-b129-4f0e535cbe55>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00808.warc.gz"} |
To what extent is there a Correlation between the number of individuals who are going out to their workplaces after Lockdown and the number of individuals who are getting infected by COVID-19? | Mathematics AI SL's Sample Internal Assessment | Nail IB®
Research question
What is the relationship between the number of individuals who are going out to their workplaces after Lockdown and the number of individuals who are getting infected by COVID-19?
In this IA, a correlation has been developed between the total number of individuals who are going to their respective work places after calling off of Lockdown in India and thereby getting infected
by COVID-19. For the first group, i.e., between the age of 25 years and 35 years, working individuals are getting infected by COVID-19 in an increasing fashion. The trendline shows an increasing
direct relationship between the number of individuals going out for their work and the ones those who are getting infected. The equation of the trendline is shown below:
y = 0.1219x + 12.976
The value of R^2 correlation coefficient for the graph is 0.9894 which validates the fact that the relation is linear and increasing. Furthermore, the value of Pearson’s Correlation Coefficient for
this graph is 0.998. As the value is very close to 1, it justifies the claim that the relationship is linear and being a positive value, it satisfies the claim that the relationship is increasing or
direct. The reason behind such a correlation is assumed to be the work load that is there on the individuals of this age group. On the other hand, being on the lower side of the age group, their
immunity is comparatively less strong than that of the other age groups. Thus, the spread of infection is taking a significant number in this age group.
For the second group, i.e., between the age of 35 years and 45 years, working individuals are getting infected by COVID-19 is showing a direct increasing relation. The equation of the trendline is
shown below:
y = 0.0439x - 3.2908
The value of R^2 correlation coefficient for the graph is 0.977 which validates the fact that the relation is linear and increasing. Furthermore, the value of Pearson’s Correlation Coefficient for
this graph is 0.988. As the value is very close to 1, it justifies the claim that the relationship is linear and being a positive value, it satisfies the claim that the relationship is increasing or
direct. The spread is comparatively less in this group as the immunity of the individuals of this age group is considerably more and with an increase in age, people often tend to be more cautious and
aware and take significant preventive measures to protect them from the disease.
Finally, in the third group, i.e., between the age of 45 years and 55 years, working individuals are getting infected by COVID-19 is showing a direct increasing relation. The equation of the
trendline is shown below:
y = 0.1564x + 19.344
The value of R^2 correlation coefficient for the graph is 0.9968 which validates the fact that the relation is linear and increasing. Furthermore, the value of Pearson’s Correlation Coefficient for
this graph is 0.998. As the value is very close to 1, it justifies the claim that the relationship is linear and being a positive value, it satisfies the claim that the relationship is increasing or
direct. This group has shown maximum infection than that of the other groups. It is due to the fact that, this age group is quite close to the limit of senior citizens. According to the doctors,
people aged more than 50 are more vulnerable to the disease which is significantly proved in its correlative study. | {"url":"https://nailib.com/user/ib-resources/ib-math-ai-sl/ia-sample/64ae3a2051c461c33d2e28d9","timestamp":"2024-11-04T04:24:29Z","content_type":"text/html","content_length":"290766","record_id":"<urn:uuid:30cea948-0876-4e43-8842-ed9079bb358d>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00678.warc.gz"} |
How Many Seconds Are in 8 Days? Find Out Here!
Table of Contents :
To determine how many seconds are in 8 days, it's essential first to understand the relationships between days, hours, minutes, and seconds. This breakdown will help us arrive at the answer
Understanding Time Units β ³
To convert days into seconds, we need to follow these time unit conversions:
• 1 day = 24 hours
• 1 hour = 60 minutes
• 1 minute = 60 seconds
Using these conversions, we can break it down step by step.
Step-by-Step Conversion π
1. Calculate the total number of hours in 8 days: [ 8 \text{ days} \times 24 \text{ hours/day} = 192 \text{ hours} ]
2. Convert hours to minutes: [ 192 \text{ hours} \times 60 \text{ minutes/hour} = 11,520 \text{ minutes} ]
3. Convert minutes to seconds: [ 11,520 \text{ minutes} \times 60 \text{ seconds/minute} = 691,200 \text{ seconds} ]
Thus, there are 691,200 seconds in 8 days. This straightforward calculation can help you grasp the concept of time conversion easily.
Summary of the Conversion Table π
Hereβ s a summary of the conversion steps we performed:
Unit Value
Days 8
Hours 192
Minutes 11,520
Seconds 691,200
Why Knowing Time Conversions Is Useful? π
Understanding how to convert time can be crucial in various situations, whether it be for scientific calculations, event planning, or scheduling. Here are a few reasons why it matters:
Important Note: Time conversions can help in programming, data analysis, and even in everyday life, such as cooking or planning activities.
• Event Planning: Knowing how to break down longer timeframes helps in scheduling events accurately.
• Scientific Calculations: In fields like physics or chemistry, precise time measurements can be crucial for experiments and results.
• Time Management: Understanding how to visualize larger amounts of time in smaller units can enhance productivity.
Practical Applications of Time Calculations π
In Everyday Life
You might often find yourself in situations where you need to calculate the duration between two events or the total time spent on activities over several days. For instance, tracking hours worked
over a week can help with budgeting time effectively.
In Project Management
In project management, calculating the total duration of tasks often requires converting time from days to seconds, minutes, or hours to fit into project timelines.
In Technology
Developers often work with timestamps and durations in programming, which necessitates precise time conversions to ensure smooth operation of applications.
Fun Facts About Time π °οΈ
• The concept of a "second" comes from the division of the day into 24 hours, which is further divided into 60 minutes and 60 seconds.
• Every leap year introduces an extra day, impacting how we calculate larger periods of time over the years.
Conclusion π
In summary, understanding how many seconds are in 8 days not only helps us grasp time conversions but also provides practical applications in various areas of life, from everyday scheduling to
scientific research. By knowing that there are 691,200 seconds in 8 days, you can better manage your time and improve your productivity. Time is a valuable resource, and mastering its conversions can
help you utilize it more effectively. | {"url":"https://tek-lin-pop.tekniq.com/projects/how-many-seconds-are-in-8-days-find-out-here","timestamp":"2024-11-08T11:12:00Z","content_type":"text/html","content_length":"83974","record_id":"<urn:uuid:f8e10f24-0c91-4878-a218-d287b6261b61>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00747.warc.gz"} |
Growing Math
π STANDARD
CCSS.MATH.CONTENT.4.NF.B.3.A Understand addition and subtraction of fractions as joining and separating parts referring to the same whole.
CCSS.MATH.CONTENT.4.NF.B.3.D Solve word problems involving addition and subtraction of fractions referring to the same whole and having like denominators, e.g., by using visual fraction models and
equations to represent the problem.
[For state-specific standards, click here.]
β ° LESSON TIME
45 minutes
π SUMMARY
This lesson plan will explore how students can add fractions with like denominators to determine the sum of fractions. It incorporates two instructional videos, an editable presentation and
educational game that can be used to practice/reinforce the concept with assessment data.
π ² Technology Required
The teacher (or student, if learning at home) will need a computer, phone or tablet with an Internet connection to play the video. For students at home without Internet access, the teacher can print
out the attached PDF or PowerPoint for students to study. The game required plays on Windows or Mac computers and on iPad. A Chromebook version will be available by April.
π Lesson Plan
1. VIDEO: Adding Like Fractions
Start your lesson with this one-minute video on adding fractions with like denominators.
Alternate format : POWERPOINT: Adding Fractions with Like Denominators
This presentation provides the information in the video viewed at the beginning of the lesson as a PowerPoint or PDF.
For an editable PowerPoint version, go here.
2. Game Play
Have students play Fish Lake for 30 minutes. This lesson is most effective when introduced towards the beginning of Fish Lake gameplay since the math ties into the math in Level 3. Students who
master this standard will be able to advance within the game. Students who have trouble with this standard will receive individual instruction within the game to teach and reinforce this concept.
3. Reinforce with another video
Common denominators can help you determine what’s fair
This two-minute video gives examples of how fractions with like denominators can be used to see if everyone is doing their fair share of the work or eating a fair share of the pizza.
4. Related Lesson – Introducing Fractions
If your students are struggling with adding fractions with common denominators, they may need a review of the introduction to fractions, including defining numerator and denominator.
You can view your studentsβ progress on mastering these standards by viewing your Fish Lake Teacher Reports. You can access the Fish Lake reports here.
Arizona (AZ), New Mexico (NM), North Dakota (ND), South Dakota (SD), and Oregon (OR) have all adopted the math standards covered in the Common Core Standards.
Minnesota (MN) Math Standard
4. Number and Operation
Represent and compare fractions and decimals in real-world and mathematical situations; use place value to understand how decimals represent quantities.4.1.2.3 – Use fraction models to add and
subtract fractions with like denominators in real-world and mathematical situations. Develop a rule for addition and subtraction of fractions with like denominators.
Subtracting Fractions: Like denominators
π Standards
CCSS.Math.Content.5.NF.A.2 Solve word problems involving addition and subtraction of fractions referring to the same whole, including cases of unlike denominators, e.g., by using visual fraction
models or equations to represent the problem. Use benchmark fractions and number sense of fractions to estimate mentally and assess the reasonableness of answers. For example, recognize an incorrect
result 2/5 + 1/2 = 3/7, by observing that 3/7 < 1/2.
π Summary
After this lesson, students will know how to solve multi–step word problems using addition and subtraction of fractions with like (common) denominators. After watching the video, students will login
to “Aztech: The Story Begins” on a device with the website or application. Students will be faced with a fractions problem in Level 1 which uses a calendar to find the fraction of days students did
homework. The game character points out that 16/31 may not be “all the time” but it is still more than half. Throughout the game, students will be presented with AzTech history.
β ° Time required
30 -45 minutes, including individual assessment
π ² Technology required
The game in this lesson plan can be played on the web on any Chromebook, Mac or Windows computer with reliable Internet access. If students do not have high-speed Internet at home, the game can be
pre-loaded on to iPads and played offline with no Internet required.
π Lesson Plan
1. Video: Adding Like Fractions
Adding fractions with common denominators 1:10
“Like fractions” are those with the same denominator. This is also called a common denominator. How do you add like fractions? This quick video from the game Fish Lake has simple examples of
comparing fractions and fraction addition.
2. Presentation or video: When is a fraction the same as 1 ?
Understanding that N/N = 1 for any number 3:32
If the numerator and denominator are the same, then this fraction equals 1. N/N = 1 How can you apply your knowledge of fractions to help you figure out how far you’ve gone on your trip and how much
further you have to go? Teachers can either have students watch the video or use this 27-slide presentation in both Google slides format and PowerPoint. Both include examples of fractions of 8/8 , 3/
3 and 4/4 all equaling one. Examples include distance, money and a bowl of stew.
3. Game: Play AzTech – The Story Begins
Students will login to “Aztech: The Story Begins” on a device with the website or application. Students using Chromebook, Mac or Windows computers can play on the web here. Students using an iPad can
download the app here. Throughout the game, students will be presented with Aztech history. Students will be faced with fraction and statistic examples, leading to similar problems that need to be
Estimated time for this portion: 10 minutes.
Individual Assessment
Use this template to have students create their own fraction equation.
It includes a calendar template and these instructions:
Use this template to show what you did most in the last month when you werenβ t in school.
• First, make a copy in your own Google Drive.
• Second, put a 1 in the calendar for the first day of this month and continue until all days of the month are filled.
• Third, make a picture or write what you did each day in each of the boxes.
• Fourth, write your own fraction equation like this:
□ On 11/31 of the days, I played games on the computer.
□ On 7/31 of the days I worked planting my garden
□ On 13/31 of the days I was doing homework.
11/31 + 7/31 + 13/31 = 31/31
Of course, if there are 28 or 30 days in the month, your denominator will be different.
Group Assessment
Use the video below to solve the problem from Level 1 in AzTech: The Story Begins as a group. This video shows the problem from level 1 on finding the fraction of days Xitlali did homework and gives
a hint on how to solve it. Ask the students why Xitlali said that 16/31 was more than half. How did she know? Introduce the concept of equivalent fractions.
Fraction problem using a calendar
State Standards
Minnesota State Standard 4.1.2.3 – Use fraction models to add and subtract fractions with like denominators in real-world and mathematical situations. Develop a rule for addition and subtraction of
fractions with like denominators. | {"url":"https://www.growingmath.org/tag/adding-fractions/","timestamp":"2024-11-09T23:09:27Z","content_type":"text/html","content_length":"68223","record_id":"<urn:uuid:5851efaf-23a4-4740-92db-d1b7a7636b5e>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00480.warc.gz"} |
Numericals Vors wort
1. The volume of an ideal gas in a vessel ... | Filo
Question asked by Filo student
Numericals Vors wort 1. The volume of an ideal gas in a vessel is 2 litres at normal pressure . The pressure of the gas is increased (i) under isothermal condition, (ii) under adiabatic condition
until its volume remains 1 litre. Find the increased pressure of the gas. ( for air ) Ans. (i) , (ii) . 2. The initial pressure of a gas is . Its volume is compressed to of its original volume
adiabatically. What will be the pressure of the gas in this condition ? For the gas . Ans. . 3. Dry air is compressed to of its original volume by applying atmospheric pressure adiabatically. What
will be resulting pressure of the gas in this condition? Ans. 27 P. 4. One litre of a gas whose initial pressure is 1 atmosphere is compressed till the pressure becomes 2 atmospheres. If the gas be
compressed (i) slowly, (ii) suddenly, what will be the new volume of the gas ? and Ans. 0.50 litre, 0.61 litre. 5. The pressure of a gas is suddenly raised to 8 times. Calculate, how many times the
volume of gas will become? Ans. One-fourth. 6. A certain mass of air at is compressed (i) slowly and (ii) suddenly to one-fourth of original volume. Find the final temperature of the compressed air
in each case. Ans. (i) , (ii) . 7. Kitial pressure and temperature of a gas are and respectively. By suddenly compressing, the final volume of gas is made one-fourth of initial volume. Find the last
pressure and temperature of the gas. Ans. .
Not the question you're searching for?
+ Ask your question
Video solutions (1)
Learn from their 1-to-1 discussion with Filo tutors.
3 mins
Uploaded on: 12/25/2023
Was this solution helpful?
Found 4 tutors discussing this question
Discuss this question LIVE for FREE
5 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Practice more questions on Thermodynamics
View more
Students who ask this question also asked
View more
Stuck on the question or explanation?
Connect with our Physics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Numericals Vors wort 1. The volume of an ideal gas in a vessel is 2 litres at normal pressure . The pressure of the gas is increased (i) under isothermal condition, (ii) under adiabatic
condition until its volume remains 1 litre. Find the increased pressure of the gas. ( for air ) Ans. (i) , (ii) . 2. The initial pressure of a gas is . Its volume is compressed to of its
original volume adiabatically. What will be the pressure of the gas in this condition ? For the gas . Ans. . 3. Dry air is compressed to of its original volume by applying atmospheric
Question pressure adiabatically. What will be resulting pressure of the gas in this condition? Ans. 27 P. 4. One litre of a gas whose initial pressure is 1 atmosphere is compressed till the pressure
Text becomes 2 atmospheres. If the gas be compressed (i) slowly, (ii) suddenly, what will be the new volume of the gas ? and Ans. 0.50 litre, 0.61 litre. 5. The pressure of a gas is suddenly
raised to 8 times. Calculate, how many times the volume of gas will become? Ans. One-fourth. 6. A certain mass of air at is compressed (i) slowly and (ii) suddenly to one-fourth of original
volume. Find the final temperature of the compressed air in each case. Ans. (i) , (ii) . 7. Kitial pressure and temperature of a gas are and respectively. By suddenly compressing, the final
volume of gas is made one-fourth of initial volume. Find the last pressure and temperature of the gas. Ans. .
Updated Dec 25, 2023
Topic Thermodynamics
Subject Physics
Class Class 11
Answer Video solution: 1
Upvotes 133
Video 3 min | {"url":"https://askfilo.com/user-question-answers-physics/numericals-vors-wort-1-the-volume-of-an-ideal-gas-in-a-36353335323732","timestamp":"2024-11-03T16:41:35Z","content_type":"text/html","content_length":"296117","record_id":"<urn:uuid:1e0b3625-b49f-4c89-a5e3-4898c3950983>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00300.warc.gz"} |
Select all 6 dice below and click "Randomise" at the bottom of the Polypad to roll the dice.
Calculate the mean, median, mode and range of the 6 numbers rolled.
Make A Prediction
Before you roll again, predict the mean, median, mode and range of the numbers that will show up on the dice. Write down your prediction. Now, roll again.
How many did you predict correctly? Try it a few more times and see if you can improve your prediction.
Imagine you win a prize if you correctly predict ONLY ONE of the mean, median, mode, or range for these 6 dice. Which one would you select and what would your prediction be? Roll the dice again and
see how you do! Did you win the prize?
100 Dice
You now have 100 dice to roll! You also win a prize if you correctly predict only 1 of the mean, median, mode, or range correct. Which one would you select and what would your prediction be? Roll the
dice below and see how you do! Good luck!
Teaching Ideas
Individually, or in small groups, invite students to predict the mean, median, mode, and range for 6 dice. Consider rolling the 6 dice first and then calculating the mean, median, mode, and range if
you think your students might benefit from an example before making a prediction. After students have made their prediction, project a Polypad canvas with 6 dice and roll them. Students should then
calculate the mean, median, mode, and range for the 6 dice and share with the class how many they predicted correctly.
Before playing again, invite students to share their original predictions and how they may have changed them for round 2. Play a few more times. If you want to make a game out of it, award groups 1
point for each measure they correctly predict. Discuss with students which measure they find easier to predict. Consider some of the following:
• Rather than rolling all 6 at once, roll 1 die at a time to build the drama and excitement of the activity. With 1 die left to roll, invite students to share what number they hope comes up and
• After rolling 3 dice, invite students to change some of their predictions as they like. Invite students to share with the class how they changed their predictions.
As described above, ask students to only predict 1 of the mean, median, mode, and range and encourage them to predict the one they feel most confident in. Invite them to share the reasoning behind
the predictions.
Finally, project the canvas with 100 dice and again ask students to only predict 1 of the mean, median, mode, and range and encourage them to predict the one they feel most confident in. Invite them
to share the reasoning behind the predictions. Roll the dice and see how students did with their predictions. | {"url":"https://polypad.amplify.com/fr/lesson/measures-of-central-tendency-and-rolling-dice","timestamp":"2024-11-03T06:45:03Z","content_type":"text/html","content_length":"24877","record_id":"<urn:uuid:9ff36c36-d21d-45c2-a171-98e9d0359c48>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00423.warc.gz"} |
Number Bonds
Number bonds are missing number addition problems that all have the same sum.
For example:
__ + 6 = 10
__ + 2 = 10
This worksheet has the problems written as horizontal missing number problems. For number bond problems arranged in the form of a tree, use the Number Bond Problems Written in Tree Format worksheet. | {"url":"https://themathworksheetsite.com/subscr/number_bonds.html","timestamp":"2024-11-11T08:10:56Z","content_type":"text/html","content_length":"7173","record_id":"<urn:uuid:c6ca463c-3a61-405f-92f9-5c3028ff11ee>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00562.warc.gz"} |
Glance at a graph_lme object — glance.graph_lme
Glance at a graph_lme object
Glance accepts a graph_lme object and returns a tidyr::tibble() with exactly one row of model summaries. The summaries are the square root of the estimated variance of the measurement error, residual
degrees of freedom, AIC, BIC, log-likelihood, the type of latent model used in the fit and the total number of observations.
A tidyr::tibble() with exactly one row and columns:
• nobs Number of observations used.
• sigma the square root of the estimated residual variance
• logLik The log-likelihood of the model.
• AIC Akaike's Information Criterion for the model.
• BIC Bayesian Information Criterion for the model.
• deviance Deviance of the model.
• df.residual Residual degrees of freedom.
• model.type Type of latent model fitted. | {"url":"https://davidbolin.github.io/MetricGraph/reference/glance.graph_lme.html","timestamp":"2024-11-05T18:50:12Z","content_type":"text/html","content_length":"10794","record_id":"<urn:uuid:4ae1d37d-1af3-49a2-9be0-c349c5874344>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00191.warc.gz"} |
CMB lensing power spectrum without noise bias
Apr 01, 2024 - 11:00 am to 12:00 pm
Delon Shen (KIPAC) zoom https://stanford.zoom.us/my/sihanyuan?pwd=QnpsUHZWWGJ2ekVYWmZVL3BmM0gzZz09
Zoom info: https://stanford.zoom.us/my/sihanyuan?pwd=QnpsUHZWWGJ2ekVYWmZVL3BmM0gzZ…
Upcoming surveys will measure the cosmic microwave background (CMB) weak lensing power spectrum in exquisite detail, allowing for strong constraints on the sum of neutrino masses among other
cosmological parameters. Standard CMB lensing power spectrum estimators aim to extract the connected non-Gaussian trispectrum of CMB temperature maps. However, they are generically dominated by a
large disconnected, or Gaussian, noise bias, which thus needs to be subtracted at high accuracy. This is currently done with realistic map simulations of the CMB and noise, whose finite accuracy
currently limits our ability to recover the CMB lensing on small-scale. In this talk, I will describe a novel estimator which instead avoids this large Gaussian bias. This estimator relies only on
the data and avoids the need for bias subtraction with simulations. Thus, this bias avoidance method is (1) insensitive to misestimates in simulated CMB and noise models and (2) avoids the large
computational cost of standard simulation-based methods like "realization-dependent N(0)'' (RDN(0)). I will show that this estimator is as robust as standard methods in the presence of realistic
inhomogeneous noise (e.g. from scan strategy) and masking. Moreover, this method can be combined with split-based methods, making it completely insensitive to mode coupling from inhomogeneous
atmospheric and detector noise. Although in this talk I specifically consider CMB weak lensing power spectrum estimation, I will illuminate the relation between this new estimator, RDN(0)
subtraction, and general optimal trispectrum estimation. Through this discussion I will show that our estimator can be applicable to analogous problems in other fields which rely on estimating
connected trispectra/four-point functions like large-scale structure. | {"url":"https://kipac.stanford.edu/events/cmb-lensing-power-spectrum-without-noise-bias","timestamp":"2024-11-12T23:35:54Z","content_type":"text/html","content_length":"25398","record_id":"<urn:uuid:17d07076-807f-4d84-a526-9ab8d15952fe>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00369.warc.gz"} |
ML Aggarwal Class 6 Solutions for ICSE Maths Chapter 10 Basic Geometrical Concept Ex 10.3
ML Aggarwal Class 6 Solutions for ICSE Maths Chapter 10 Basic Geometrical Concept Ex 10.3
Question 1.
Draw rough diagrams to illustrate the following:
(i) open simple curve
(ii) closed simple curve
(iii) open curve that is not simple
(iv) closed curve that is not simple.
Question 2.
Consider the given figure and answer the following questions:
(i) Is it a curve?
(ii) Is it a closed curve?
(iii) Is it a polygon?
Question 3.
Draw a rough sketch of a triangle ABC. Mark a point P in its interior and a point Q in its exterior. Is the point A in its exterior or in its interior?
Question 4.
Draw a rough sketch of a quadrilateral PQRS. Draw its diagonals. Name them.
Question 5.
In context of the given figure:
(i) Is it a simple closed curve?
(ii) Is it a quadrilateral?
(iii) Draw its diagonals and name them.
(iv) State which diagonal lies in the interior and which diagonal lies in the exterior of the quadrilateral.
Question 6.
Draw a rough sketch of a quadrilateral KLMN. State,
(i) two pairs of opposite sides
(ii) two pairs of opposite angles
(iii) two pairs of adjacent sides
(iv) two pairs of adjacent angles. | {"url":"https://www.learncram.com/ml-aggarwal/ml-aggarwal-class-6-solutions-for-icse-maths-chapter-10-ex-10-3/","timestamp":"2024-11-06T07:18:51Z","content_type":"text/html","content_length":"64559","record_id":"<urn:uuid:d7206ec6-5715-45e6-ae5d-210487eae36c>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00410.warc.gz"} |
Voltage Drop - (Honors Physics) - Vocab, Definition, Explanations | Fiveable
Voltage Drop
from class:
Honors Physics
Voltage drop is the decrease in electrical potential that occurs when current flows through a resistive component, such as a wire, a resistor, or any other electrical device. It is a fundamental
concept in understanding the behavior of electrical circuits.
congrats on reading the definition of Voltage Drop. now let's actually learn it.
5 Must Know Facts For Your Next Test
1. The voltage drop across a component is directly proportional to the current flowing through it and the resistance of the component.
2. In a series circuit, the voltage drop across each component adds up to the total voltage applied to the circuit.
3. In a parallel circuit, the voltage drop across each branch is equal to the total voltage applied to the circuit.
4. Voltage drop is an important consideration in the design and analysis of electrical circuits, as it can affect the performance and efficiency of the system.
5. Minimizing voltage drop is crucial in applications where maintaining a stable voltage is essential, such as in power distribution systems and electronic devices.
Review Questions
• Explain how voltage drop is related to Ohm's law in the context of series circuits.
□ According to Ohm's law, the voltage drop across a component in a series circuit is directly proportional to the current flowing through it and the resistance of the component. In a series
circuit, the same current flows through all the components, and the voltage drop across each component adds up to the total voltage applied to the circuit. Therefore, understanding voltage
drop and Ohm's law is essential for analyzing and designing series circuits, as it allows you to determine the voltage at different points in the circuit and ensure that the components are
operating within their voltage ratings.
• Describe how voltage drop affects the behavior of parallel circuits.
□ In a parallel circuit, the voltage drop across each branch is equal to the total voltage applied to the circuit, regardless of the resistance or current in each branch. This is because the
components in a parallel circuit are connected to the same set of terminals, and the current divides among the branches based on their respective resistances. Voltage drop is an important
consideration in parallel circuits, as it ensures that each branch receives the same voltage, allowing the components to function as intended. Understanding voltage drop in parallel circuits
is crucial for designing and troubleshooting these types of circuits, as it helps maintain the proper voltage levels for each component.
• Evaluate the significance of minimizing voltage drop in electrical systems and discuss its implications for real-world applications.
□ Minimizing voltage drop is essential in electrical systems for several reasons. Firstly, voltage drop can lead to power losses, reducing the efficiency and performance of the system. This is
particularly important in power distribution systems, where voltage drop along the transmission lines can result in significant energy losses. Secondly, maintaining a stable voltage is
crucial for the proper operation of electronic devices and sensitive equipment. Excessive voltage drop can cause these devices to malfunction or fail, leading to costly repairs or
replacements. In applications such as automotive electrical systems, medical devices, and industrial automation, minimizing voltage drop is crucial to ensure reliable and safe operation. By
understanding the principles of voltage drop and designing circuits to minimize it, engineers can optimize the performance, efficiency, and reliability of electrical systems in a wide range
of real-world applications.
© 2024 Fiveable Inc. All rights reserved.
AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website. | {"url":"https://library.fiveable.me/key-terms/honors-physics/voltage-drop","timestamp":"2024-11-09T20:05:47Z","content_type":"text/html","content_length":"174629","record_id":"<urn:uuid:a00ea304-fe81-4352-a095-608efae6ad7d>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00256.warc.gz"} |
VE4: Automated run-time analysis of probabilistic programs
Randomisation has been an important tool for the construction of algorithms since the early days of computing. It is typically used to convert a deterministic algorithm with bad worst-case behavior
into an efficient randomised algorithm that yields a correct output, possibly with a low error probability. The Rabin-Miller primality test, Freivalds’ matrix multiplication, and the random pivot
selection in Hoare’s quicksort algorithm are prime examples. Randomised algorithms are conveniently described by probabilistic programs. The run-time of a randomised algorithm is a random variable as
the run-time is affected by the outcome of the coin tosses. An important efficiency measure for probabilistic programs is the expected run-time. Reasoning about the expected run-time of randomised
algorithms is surprisingly subtle and full of nuances. In classical sequential deterministic algorithms, a single diverging program run causes the program to have an infinite run-time. This is not
true for randomised algorithms. They may admit arbitrarily long and even infinite runs while still having a finite expected run-time.
The expected run-time of randomised algorithms is typically analysed using standard mathematical means. Recently there is some initial work towards applying formal verification techniques to analyse
the run-time of probabilistic programs. Alternative techniques are either based on using super-martingales, or a weakest pre-conditioning style reasoning. None of these techniques is fully automated.
The goal of this dissertation project is to develop a fully automated technique for the inference of expected run-time bounds for probabilistic programs. This challenging objective will be approached
by exploiting recently developed powerful techniques to infer upper and lower bounds for the run-time complexity of ordinary (i.e., non-probabilistic) programs. The plan is to extend these techniques
for the automated complexity analysis of programs towards probabilistic programs, and combine them with the recently studied weakest precondition calculus in order to generate suitable upper and
lower invariants that are needed to infer expected run-times of probabilistic programs. Besides the theoretical developments, a prototypical implementation in the tool AProVE is planned. Extensive
experiments will be carried out to analyse the expected run-time of well-known randomised algorithms such as Sherwood variants of binary search, quicksort and the (challenging) coupon collector
(Given the connections between positive almost-sure termination and expected run-times, there is an obvious link between this dissertation project and VE3. It is also related to project AP5.) | {"url":"https://moves.rwth-aachen.de/research/projects/unravel/verification/ve4-automated-run-time-analysis-of-probabilistic-programs/","timestamp":"2024-11-14T11:16:41Z","content_type":"text/html","content_length":"34725","record_id":"<urn:uuid:7c4d7a2a-9516-4323-bc44-0fb78ae06a07>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00463.warc.gz"} |
\overline{1T} \times \overline{1N}
$\begin{array}{ccccc} & 1&T \\ \times & 1&N \\ \hline 2 & T & T \\ \hline \ \end{array}$
In the cryptogram above, each letter represents a distinct non-negative integer.
This section requires Javascript. You are seeing this because something didn't load right. We suggest you, (a) try refreshing the page, (b) enabling javascript if it is disabled on your browser and,
finally, (c) loading the non-javascript version of this page . We're sorry about the hassle. | {"url":"https://solve.club/problems/15-times-overline1n/15-times-overline1n.html","timestamp":"2024-11-10T12:04:16Z","content_type":"text/html","content_length":"110083","record_id":"<urn:uuid:9d3ef061-cc91-405a-aed4-a2226aa57f6f>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00879.warc.gz"} |
Strike Models
Finding the scale waterline for a new model ship can be a bit of a challenge. Here is one method that will give you a close approximation of the waterline for almost any ship of any particular size.
Instead of the traditional putting the scale amount of weight in the ship and floating it in a bathtub, you will be filling the ship itself with water to its scale weight.
Gather your measurements. You need the
• weight of the empty model ship (from your own scale)
• scale weight of the ship (see below), and (optional) +10%, -10%, and -20% of this scale weight
• scale width of the ship (calculated from a reference book)
• amount of water to add to the ship (see below)
For 1:144 scale models, the scale weight of the ship is the full displacement in long tons divided by 1333 to get the weight in pounds. MWCI also has an extensive ship list with the scale weights at
http://mwci.org/shiplist.shtml . For other scales, use the following calculation: scale weight = (Full displacement weight in long tons)*2240/(scale^3).
To determine the amount of water to add to the ship, subtract the weight of the empty ship from the scale weight of the ship. That is the amount of water, in pounds, you need to add to the ship. Each
pound is about two cups of water (one gallon of water is 16 cups, and weighs about 8.345 pounds).
You need a completely uncut hull, as we will be filling it with water.You will also need a marker or a pencil (but not a grease pencil), and shims or a way to keep the hull level if it is not a
flat-bottomed hull.
1. Use packaging tape to tape the hull width to the scale width, so the hull does not expand when you fill it with water.
2. Find a level slab of concrete for your hull. At the top of the hull, make sure that it is level from side to side. If it is a small ship like a cruiser that does not have a flat bottom, use shims
to keep the hull upright and level.
3. Add the correct amount of water (from your calculations above) to the hull.
4. Mark the water level in the fore, aft, and midships of the hull.
5. If desired, change amount of water to +10%, -10%, and -20% of scale weight and mark the water level.
These markings will be very close to the desired water line when the ship is finished, but will be slightly low because the density of the fiberglass is higher than the density of the water. On an
Iowa hull, the different in that water level appears to be about the difference of one pound of water added.
An Iowa class battleship has a scale weight of 44.5 pounds and the dry, uncut hull weighs about 4.875 pounds. After taping across the sides to keep the beam width at 9 inches, add 39.625 pounds of
water (4 gallons and 3 quarts). The water level will be very close to the desired water line once the ship is finished.
This method of finding the waterline will not work on ships whose keels are not level with the waterline. Some destroyers are like this and there are probably others. Most of the larger ships should
be OK as their keels were built on a flat and level surface.
General Construction Tips
(Original Article by Phil Sensibaugh, edited by Bill Pickl)
Begin with the end in mind. Install the systems in the proper order. Many skippers end up installing the hardware several times because they get ahead of themselves. It’s common to complete the hull
and install the drive motors, only to discover that the stern cannon won’t fit in the hull because the motors are in the wrong position. The rules mandate that the cannon must be located in the same
position as on the real ship. Other systems, to include the motors may be installed anywhere in the hull, so the cannon must be installed first.
Think small and think light. You can always add more weight if needed and if added late in the building cycle the weight can be placed where it is needed to accommodate balance. Keep hardware close
together – pack it in, but keep it modular so it can be removed easily for maintenance. Open spaces inside your hull don’t hurt anything and allow for future flexibility. Keep hardware in the
smallest space possible. Don’t spread it out in the hull just because it looks like you have extra room. There is no such thing as extra room in an RC combat warship.
Think about maintenance when building your ship. Make all systems modular and removable and never install any component of you ship hardware permanently in the hull. For instance, don’t glue the
cannon down to the bottom of the hull thinking that you’re saving time and likewise with other hardware. Sooner or later you will have to remove it for maintenance. Think ahead. Think simple. Make
repairs easy and timely.
Build modular systems to make life simple. A warship has many operating systems to include motors and drive gear, pump, weapons, flotation, electrical and pneumatic plumbing, to name the predominate
systems. Such a maze of hardware, electrical wiring, and plumbing can baffle even an experienced modeler on first glance. To keep it all manageable just consider each system as a stand-alone item,
and build it accordingly. Use quick disconnect fittings on CO2 lines and connectors on electrical wiring. When you look at your ship don’t view it as a maze of components, but as a group of
independent systems. Remember that each system by itself is really pretty simple and with some common sense you can figure it out, but if you build your boat so systems can be isolated trouble
shooting becomes that much easier. This means that during construction you must avoid “daisy chaining” systems together and build each system as a stand-alone item. Bundle the wires together and put
a cable tie around them to make it look neat and take up less space. Cut off any excess wiring (shorten wires as needed), but allow a couple of inches of extra wire for future service. Do likewise
with the pneumatic plumbing. Following these steps will make your boat a lot easier to work on. For instance, if a motor fails and is isolated the rest of your systems will still be operational.
Whereas the opposite is having your whole system go down without any idea of where the problem occurred would take a long time to diagnose and fix.
Keep weight low in the boat. If you have ever stood up in a canoe or watched what happens when someone does you will appreciate this advice. The key to a stable weapons platform is keep all possible
weight below the waterline and minimizing the weight of anything located above the waterline. Lie batteries flat on the bottom of the hull keeping total mass of batteries below the waterline. Mount
cannon low in the hull and extend barrel riser tubes to proper barrel height, don’t raise the whole cannon. A low center of balance is imperative to achieve ship stability.
(Original Article by Phil Sensibaugh, edited by Bill Pickl and Strike Models) Note: this is one section of a comprehensive model warship construction manual originally published on the BDE/RC
website. This section is applicable to both Big Gun and Fast Gun combat.
This article does not discuss how to make the packing tubes or prop shafts. That is a topic of an article in the Drive Train section of this manual. Before you get started locate the position of your
rear cannons in your hull set them inside and determine approximately where you want to locate your motors and where you want the packing tubes to end. This will eliminate the need for modification
of your prop stuffing tubes later on when you begin installing the hardware into your boat.
Installing prop shafts and packing tubes is far less difficult than most builders make it. An important thing to remember is not to be overly critical when cutting a hole(s) in you hull for the prop
shafts. The holes will probably be in the wrong place no matter how much time you spend thinking about anyway, so just cut them. Oversize holes are easier to fill later.
The upper photos show how the packing tubes were aligned parallel to one another and level, glued to a wood dowel that was carefully measured and marked. If you are using a fiberglass hull and brass
stand-off supports for the ends of your packing tubes then use the dremel to cut a slot for your brass stand-off support near the end of the packing tube. Slide the brass support into the slot as
you tilt your packing tube in place and glue to the wood dowel. Ribs were ground away as needed to allow the tubes to lie level with one another and fit in place at a slight downward angle. The
tubes were secured in place with epoxy putty, which also reinforced the ribs that were ground down substantially.
If you are using the brass stand-off support for the end of your prop shaft and have not yet sheeted the bottom of your wood hull then glue cross support between ribs so that you have something to
glue the supports to.
Wood sheeting was installed around the tubes and stand-offs (editors note: the above article on wood hull construction suggests the use of hardwood strips instead of balsa wood sheeting), but a small
space was left open around the rib. This hole was filled with epoxy putty, which is a great water seal and also gives a nice appearance to the hull and looks like the packing boxes on real ships.
The third photo shows how the over size holes cut into a fiberglass hull were filled with epoxy putty. It also shows the extreme angle on the coupling for the motors that was required to allow the
cannon to fit between the motors. As it turns out, the center motor was still in the way of the stern cannon and had to be removed and installed “backwards” above the packing tube using an o-ring
drive or gear drive. The motors are installed by attaching small sections of brass tube to the hull with epoxy, then slipping plastic wire ties through the brass and around the motors. This is a
system that has proven to work very well.
The bottom photo shows the running gear of the Scharnhorst, which is one of the most difficult ship hulls to outfit. Three props and two rudders fit into a very small space, but it can be done.
(Original article by Phil Sensibaugh, edited by Bill Pickl and Strike Models)
This material was originally published on the BDE/RC website as an instruction manual for getting started in Big Gun Model Warship Combat. This chapter is applicable to both Big Gun and Fast Gun
formats. View the manual homepage.
Have you ever wondered why some ships settle fairly evenly in the water when they flood internally while others take on a severe list? The reason is most likely inadequate water channeling. Water,
being a liquid will seek out the lowest point of the ship and move in that direction. It also follows the laws of physics and reacts whenever the ship moves. If the ship turns right the water will
move to the left, and visa-versa. Also, when the ship moves forward the water will run towards the back. This is why nearly all ships sink by their stern, rather than bow first. In fact, of the
several dozen of ships I have seen sink I have never seen one sink bow first. Although sinking bow first would be a good feature since this has the potential to save the rudders and props from damage
when the ship hits the bottom, or is recovered. I say “potential” damage because after six years of battling the MBG (Midwest Battle Group) has yet to see any props or rudders damaged by sinking, but
it could happen.
I’ve developed effective water channels with the past nine ships I’ve constructed and the method I have come to like the best is the foam filled water channel. I like this method since I’ve found it
the easiest to accomplish. To make the water channel I first installed two wood stringers down the center of the hull and separated by about 2.75 inches. The stringers should be 1/4″ x 1/4″ hardwood.
These stringers also serve to add some strength since the bottom plate of this ship (wood construction) was made up of seven sections to prevent warping. Next grind down the portion of rib that is
glued to the base plate so that it will form a sloped line going from the 1/2″ tall height of the rib to the center 1/4″ tall strip (editor’s note: its easier to layout the rib patterns with this
slope in mind and save the grinding). In addition to the channel down the middle you may want to leave an open section sized for your batteries so you can keep this large piece of weight low in your
hull. Make this battery space an 1″ longer than the battery you intend to use and typically centered amidships with the batteries placed out towards the side of the ship to allow for a CO2 tank
between them.
If you are putting a channel in a fiberglass hull your job is a bit easier. After attaching the sides of your channel to the bottom of the hull you will need to add a stringer that goes from the edge
of the water channel out to the side of the hull about every 4″ along the length of the hull. You will need to cut a slope on them such that they are 1/4″ tall on the water channel edge and 1/2″ tall
on the end near the side of the hull. I recommend you work with 3/4″ by 1/4″ hardwood strips. Measure off the length of stringer that will fit in the section of hull you are currently working on and
measure in 1/4″ on opposite ends of the rectangle and draw a diagonal line between the two. The result should be a matched pair of wedges that are the same length and 1/4″ tall on one end and 1/2″
tall on the other. Make your diagonal cut first down the center then make the cross cut. Glue these two pieces to the hull and you’ve created your own “rib” stringers and you are ready for the next
I then installed a piece of balsa over the ribs between the water channel stringer and the side ribs of the ship as shown in the accompanying photos. Since the part of the ribs that were glued to the
hull keel plate were sloped towards the center this allows any water coming in through holes on the sides to run into the water channel and towards the pump. Then I drilled a hole in the balsa sheet
between each rib and using a can of “Great Stuff” minimal expanding spray foam I filled each rib section with the foam. My first attempt at this several years ago the foam simply forced off the
balsa, cracking it to pieces. The accompanying photo shows how even minimum expanding foam still expands greatly (there are some new very minimal expansion foams on the market get some and
experiment). The spaces between the ribs were only 2/3 filled!
Using a small blade on my pocketknife I cut away the excess foam, which was quite easily accomplished, then sheeted over it with more 1/16″ balsa sheet. Be sure to use epoxy glue for this since CA
glue will melt foam, as will fiberglass resin. When the whole hull was sheeted on the inside so I couldn’t see foam anywhere I put a thin coat of “SolarEZ” UV cured polymer resin over the inside of
the ship. This product won’t hurt the foam and cures more predictably than conventional fiberglass. The trick is sunlight must be able to reach it in order to cure the resin.
Now that the water channel is installed you should have a ship that will settle level as it takes on water.
Original Article by Phil Sensibaugh. Edited by Bill Pickl and Strike Models. Note: this is one section of a comprehensive model warship construction manual originally published on the BDE/RC website.
This section is applicable to both Big Gun and Fast Gun combat.
Strike Models Note: We advise talking to someone who has already built a ship from scratch, as they can be a big help. Also, please check your club’s rules regarding rib spacing and allowed
penetrable area and use those rules over what we included here. A very detailed instructional forum thread about scratch building a SMS Pommern (a predreadnought) is being chronicled on the RC Naval
Combat website.
I have often been asked what I felt was the best way to construct a hull “from scratch.” I’ve seen several methods used with some methods working better than others, yet still I’m not sure if there
is one best method. I believe if the hull doesn’t warp, isn’t overly heavy, and floats somewhat level when empty (no major list) it’s a good hull. I suppose I should add in one other criterion as
well, it shouldn’t leak. This article will cover: Making a Pattern Set, Selecting Construction Material, and Assembly.
The premise for developing patterns sets for a scratch built hull is that the ship will be built on a flat bottom plate with ribs, bow and stern keels being glued in vertically all topped by a
caprail. Strike Models offers several such pattern sets ready for cutting and assembly, but this section will cover the basics of developing your own pattern sets should they not be commercially
available. Using the baseplate method of building is recommended otherwise you will have to set the ribs on a keel, which requires jigs and fixtures to achieve good results and the keel will be in
the way later anyway. Flat bottom boats are much easier to build, but don’t confuse a flat bottom with a shoebox shaped hull. The sides will still be rounded, as will most of the hull below the
waterline. Real warships are generally flat on the bottom as can be verified by your ship plans.
First, obtain a set of plans for your ship. The greater the detail shown on the plans the better, but don’t be surprised if the detail is lacking. Often a set of plans consists of a top and side view
of the hull and superstructure and a drawing of the ribs at a few stations along the hull, but this will suffice. [If you contact Strike Models, we can tell you the level of detail for each
particular plan we sell.] The plans usually don’t provide enough ribs for the required spacing (1, 2, or 3 inches) so you will need to draw additional ribs. Look at the overhead view of the provided
rib locations. Next decide the spacing you will use. Use of 1/4″ wide ribs on 2″ spacing is the most common selection, but for large ships 3/8″ wide ribs on 3″ spacing is also used (Big Gun only, and
even that is dependent upon your club. Fast Gun has significantly different rules). With your spacing selected you will need to draw lines on your overhead view where you need to add ribs. You will
often need to add one and sometimes up to three ribs in between those provided by the plan set. What I recommend is making a copy of the original ribs and hand sketching the correct number of ribs
in-between the provided rib profiles. Just eyeball even spacing for the number of ribs you are adding. Be sure to reduce the overall rib width by the thickness of the balsa sheeting on the hull, and
the overall height by the thickness of the caprail and bottom plate. Do this by drawing a line below (3/8″ for plastic 1/4″ for wood) the top of the rib for your new rib top and a line 1/8″ above the
bottom of the boat. Also draw reference lines for the water line and a line one inch below the water line across the ribs. Note that often only half of the ribs are drawn, so you’ll have to draw the
mirror image of each rib the best you can including the added ribs. When I do so I use a light tracing paper that you can see through easily, draw half the rib, fold the paper in half then copy the
other half of the rib off the first half. Another method is to make a copy of the ribs then trace them on the back of the paper copy, thus making a mirror image. When you have a complete set of full
width ribs COPY all of your work and save the original drawings. Make one copy for each of the ribs.
For each rib highlight the correct exterior hull line the top and bottom (remember to follow the lines that allow for the bottom plate and the caprail). Also on the exterior line mark an 1/8″ deep
notch on each side of the rib at a point starting one inch below the waterline and extending to the bottom of the rib. Hardwood stringers will be installed here later on to help form the impenetrable
area of you hull. On some of the wider ribs you will not need a rib that goes completely across the bottom of the hull. If the flat spot in the rib is more than 4.5″ wide then you will draw a left
and right piece. You may wish to read the section on water channels at this point so you can design your rib patterns to accommodate water channeling. The water channel will be 2.75″ wide so measure
1-5/8″ inches left and right of the center point on the flat bottom of the rib profile to allow for the water channel stringer. Make these marks 1/4″ high. At the outer edge of the flat spot measure
1/2″ up and draw a diagonal from that point to the top of the 1/4″ tall line that marked the top of the innermost edge of the rib’s bottom. Next sketch in a line about 1/2″ in from the exterior hull
line to complete the inside edge of your rib. You will want to mark the location of the prop shafts on the appropriate rib patterns, usually the rib just forward of the propeller and the two ribs
forward of where the shaft enters the hull. For the rib just forward of the prop you will need to draw in braces to support a circle big enough to drill the hole through for the prop packing tube.
The next items to make patterns for are the bow and stern keel plates, the caprail, and the baseplate. Start with the base plate. Take your rib patterns and measure the “flat” width at the bottom of
the ribs. For all ribs with a flat spot at least 3/8″ wide and that touch the bottom plate transfer those measurements over to a sheet of paper. Remember to also measure the distance from the bow to
each rib from your overhead view and transfer these to your base plate pattern. You should end up with a center line with rib locations marked and each rib location will have a perpendicular line
centered on the center line that represents the width of the flat bottom of the rib. To complete the base plate pattern just connect the outer edge of the rib lines. Next to make the bow and stern
keels trace the side view of the bow and stern profile. Measure in about 1/2″ in from the profile and make another line. You will want to make the keels long enough to overlap the base plate at least
3 inches. Remember to make a 1/8″ allowance for the base plate. Also note that a few of the forward and stern ribs will not attach to the base plate but to either the bow or stern keel. These rib
drawings should be modified with a notch to slide onto the keel and remember to keep the depth of the ribs all the way to the bottom of the keel, since they do not rest on the bottom plate. Some ribs
that are on the base plate may need to have a notch added to their pattern to allow for the overlapping bow and stern keels. To make a pattern for the caprail trace the outer edge of the ships deck
from the overhead view (please note that some odd ships are wider at the waterline then at the deck or caprail level). Draw a second line a half inch in to complete the pattern. You may also wish to
design in some cross braces into the caprail pattern. These help the ship maintain its desired width and to reinforce the hull should it ever need to be pulled from the water with 100 pounds of water
in it! Make copies of all these patterns as well.
You are now ready to select the material for your scratch built hull. Some people prefer 5-layer plywood, while the MBG (Midwest Battle Group) now has three plastic hulled ships. The plastic is
foamed PVC and can be obtained in various thickness’ from an industrial plastics supplier. Foamed PVC enjoys the advantage of being lightweight and strong, easily cut and glued with CA glue, is
inherently waterproof and will not warp or rot. If you do chose to use plywood the following precautions must be followed. Cut your caprail and base plate patterns into pieces between 12 to 18 inches
to prevent the wood from warping. Cuts should be made at a rib location.
Glue the copies of the tracings to plywood using Elmer’s glue, or some other water-soluble glue, then saw them out slightly oversize. Use material of the appropriate thickness corresponding to the
rib spacing of your pattern. For the base plate use 1/8″ and for the caprail use 3/8″ for plastic and 1/4″ for wood. Next sand the pieces to the correct size. Finally remove the paper from the wood
or plastic with warm soapy water, then dry the parts well. Don’t be concerned if the wood parts warp somewhat. If the wood is going to warp, now is the time to find out. If warping of the longest
sections of the cap rail or bottom plate occurs just cut them into shorter sections, preferably at rib location. A little warp won’t hurt anything at this stage of construction. We’ll fix it later.
If you chose wood as your material you will need to glue sections of the caprail and base plate together, end to end on a flat surface and while laying over a tracing of the plans. This will ensure
the sections have the proper curve to match the hull. Likewise, with the bottom sections of the hull. Epoxy glue works well for this purpose, but CA is too brittle and will not work well. Don’t be
concerned if they look weak lying there. We’ll strengthen them plenty later on. When the glue dries, lay these sections over the plans and mark the positions where the ribs will attach. Now attach
the ribs to the base plate with one or two drops of CA glue. Don’t glue them too well right now since you may need to remove the later if something doesn’t line up right. Next, look at the hull from
the end and visually verify that the ribs are symmetrical on both sides of the hull. There’s a photo of this step later in this article. Now attach the cap rail to the top of the ribs. Some of the
ribs may not line up with the cap rail well, but don’t force the caprail down, or up to the ribs. Trim or file the ribs as needed to line up with the level caprail. Note the word level! There are
photos accompanying this article that will help you visualize how the hull will go together.
Once the hull is glued (tacked) together in this state it will still be very frail so handle with care, but don’t panic yet. Next will come the strengthening. Place the hull on a flat surface and
inspect carefully to see if the hull has developed a warp. If so just break a few glue joints to relieve the pressure, then glue them again. You may also need to make a few cuts through the caprail
or base plate to relieve pressure to eliminate the warp. Make as many cuts as needed to get the warp out. Once again, don’t worry, you’re not weakening your hull permanently.
Now the strengthening of the hull begins. For wood hulls install hardwood (spruce) strips the thickness of the balsa sheeting allowed (1/16 to 1/8 inch). These stringers will be 3/8″ wide. This width
will allow the strip to overlap the ribs by 1/8″, since the wood caprail is only 1/4″ thick. These strips are installed around the caprail on the inside and outside of the hull. You can cut the
stringers into shorter sections, but make sure the joints are staggered and the inside stringer joint does not occur on the same rib as the outside stringer. Again installing them on the bow and the
stern is the trickiest part to accomplish. To allow the hardwood to bend around the curved areas cut notches about 2/3 though the wood stringer about every 1/4″ in the inside the side that will be
next to the hull, then bend the stringer until it cracks at the notches. I use the Dremel tool and cut off wheel to make the notches.
Next, install 1/8″ by 1/8″ stringers (preferably spruce) in the notched portion of the ribs that starts 1″ below the waterline and extends down to the base plate. The stringers do not need to butt up
closely together, as you will cover this portion of the hull with fiberglass. Assuming your hull is still true and not warped go back and brush epoxy glue on all wood joints that were tacked with CA
glue. For plastic joints a bead of CA glue along both sides of the joint will permanently bond the plastic parts together. Invert the hull and brush the epoxy inside the sandwich formed by the two
hardwood stringers and the caprail. Wait for the epoxy to cure and you’ll see that this step will have strengthened your hull dramatically.
Now the hull should look nearly complete save for the side skin. Sand all outer surfaces of the hull so that they are smooth in preparation for fiberglassing the bottom. Next, place the hull top down
on a flat surface and add spacer beneath it to allow it to lay flat and be supported. If the hull has taken on any warp you must get the warp out at this time. Check the hull closely for warping.
Don’t be afraid to cut the hull in two and glue it back together if needed to correct a warp. Now is the best time to fix them.
Fiberglass resin has quite an aroma (it stinks) so find an area to work with good ventilation. Cover the work area with a sheet of plastic. Now make a stand to hold the hull off the work so it can
lay inverted (upside down) and be stable. The stand must hold the entire hull (for wood only) off the work area to include the bow and cap rail since we’ll be glassing them also.
Next, cut the lightweight fiberglass cloth in to small sections about 12″ square, or whatever size or shape is needed to cover the hull. Small sections of cloth are easier to work with and to keep
air pockets out of. At this point I would recommend purchasing an ultra violet cured resin sold by SolarEZ. This stuff is just like epoxy resin with the added bonus of only hardening when exposed to
about 30 minutes of strong sunlight. If you keep the windows covered in your shop you will be able to work at your own pace rather than at the pace of the setting time of normal resin. Apply a thin
coat of resin to the hull bottom and sides down to the penetrable area, then lay on a section of fiberglass cloth and apply another thin coat of resin over the cloth. Repeat this procedure to apply
the next section of cloth, overlapping the previous section by 1/4″ to 1/2″. Continue laying cloth until all the wood stringers on the bottom of the hull are covered with fiberglass cloth and resin.
Remember a thin coat of resin is all that is desired. Applying more resin just makes a mess and increases the amount of sanding needed. Sanding fiberglass is no fun. The cloth will try to “slip”
across the wood as you brush resin on, so reverse directions of your brush strokes regularly and use a gloved hand to push or pull the cloth. As you are progressing smooth out the cloth, working out
all air pockets and wrinkles. Cut the cloth with an Exacto knife to let the air escape if necessary and overlap the cloth at the cut then smooth it down. This will be especially necessary in the bow
and stern where there are a lot of curves. Continue this effort until the hull is covered, bow to stern, to include the solid bow and stern blocks.
Allow the fiberglass resin to partially set, then using an Exacto knife cut away any excess fiberglass cloth that has extended into the penetrable areas of the ship. After cutting, smooth the cloth
down again along the cut edge using a gloved hand. Wetting the resin with water first to provide some lubrication helps to keep the resin smooth. As soon as the resin on the bottom of the hull is set
enough (but not fully cured) invert the hull and apply cloth and glass to the top of the bow stern and cap rail, overlapping the sides of the caprail down to the penetrable area. When you are through
the entire outside of the hull will be covered with fiberglass cloth and resin except for the penetrable areas. Once the resin begins to set up trim away any cloth that extended into the penetrable
areas and smooth down the cloth. Remember no wrinkles or air bubbles should be allowed in the cloth. Now invert the hull and sit it back on the wooden block upside down.
Apply another layer of glass cloth and resin down the center of the hull bottom from bow to stern. This sheet does not need to extend up the side of the hull to the penetrable area, but just cover
the flat part of the hull bottom to provide more reinforcing in the base plate to strengthen the butt joints that were glued together.
At this point you may want to install optional frames to butt your balsa sheeting up against. Some people like these since they create a “window frame” that you cut the balsa to fit into. The
advantage is that all the work in tapering the balsa sheet to the hull profile is done once with the frame the disadvantage is that when you install the balsa it has to be cut to fit this frame. If
you decide to add this frame you’ll need to get some wood stringers that are 14″ wide and the thickness of your balsa sheeting. Glue these 1.25″ below the waterline (this gives a 1/4″ of hull for you
balsa to glue on to) and 1/4″ fore and aft of the penetrable areas. Use automotive putty to taper the edge of the framing to the ship’s hull. Let dry and sand. You may need to apply a few layers to
get it smooth.
Brush another thin coat of resin over the entire hull and caprail. As this coat of resin sets make sure the job “looks right.” Look for thin spots in the resin. If it looks good and you are happy
with it then let the hull dry completely. Otherwise, apply another thin coat of resin. If there are a few “rough” areas it won’t make all that much difference and they will be corrected later. On a
warm day this could take only a few hours for the resin to cure, other times it can take several days for all “tackiness” to vanish. Again the two part resins are tricky things to mix and the solar
cured resin is preferred although use of an old mirror might be required to get the sun to all parts of the hull for complete curing.
Once the fiberglass resin has set completely sand lightly with fine grit (150) sandpaper on a sanding block or orbital sander. Sand lightly is a key word. You do not want to sand through the resin
and into the cloth anywhere! After the sanding is complete wipe off the hull with a damp cloth then skim on a coat of automotive putty over the entire hull surface that was fiber glassed. A plastic
putty knife works well to skim on the filler, allowing the filler to fill in only low spots and to smooth out rough areas. I recommend the automotive filler putty because it is easy to work with, is
waterproof, and is easy to sand. Once it dries sand the hull again. You may have to repeat this procedure to get a really smooth finish, especially in the areas where the glass cloth was overlapped.
Now all you need to do is to skin the ship by gluing the appropriate thickness of balsa wood sheeting.
Click each image to enlarge. We apologize, but this is the best resolution we have for these images.
(Original Article by Phil Sensibaugh, edited by Bill Pickl and Strike Models) Note: this is one section of a comprehensive model warship construction manual originally published on the BDE/RC
website. This section is applicable to both Big Gun and Fast Gun combat.
First things first – decide what ship you want to build.
This decision alone may take many months of procrastination while sorting out all the facts that seem pertinent when in reality, it doesn’t make all that much difference. I have participated in about
50 RC combat warship battles over the past five years and have followed the action of other clubs closely. One thing that I have learned is generally, there is no such thing as a bad boat. Assuming a
boat is reliable and well balanced so it is seaworthy, and put in the hands of a skipper that has learned how to use the features of the particular ship to his advantage any ship can be an effective
part of a team.
Ask yourself why you want to participate in this hobby.
Presumably the reason is to occupy free time and consume some disposable cash, for this hobby will certainly do that, but more likely the real reason is to have fun. The best way to have fun is to
have a ship that is reliable and seaworthy. It’s very frustrating to have your ship role over and sink as soon as it begins to take on water, or to spend the day sitting at the side of the pond
working on your ship instead of participating in the game.
Consider a used ship as your first ship.
This will allow you to begin playing the game sooner and there is no better way to decide what ship fits your style than to participate in the game for awhile in order to learn your strengths and
weaknesses. If you go this route, you want to see the ship in person and in action — there are too many sad stories of buying a ship sight-unseen and not having it be as represented (broken,
inoperable, rotted, rusted, etc.). Ideally the owner will allow you to battle the ship before you purchase it and to have an experienced third party examine the ship. If you like how it responds to
your style of battling and it operates reliably through the day it is a good choice to get you in the game quickly. When you get a ship test all systems to ensure that they work, and how they work,
then use this ship to gain combat experience and as a construction aid and test bed for your new ideas. That’s right. To test out your new ideas. About every modeler I have ever known has his or her
own ways of accomplishing tasks and you will find yourself asking, “Why did the original builder do it this way?” Most often there was a reason, but sometimes it was just a mistake, an attempt to
implement a new idea that didn’t work very well. There is no substitute for experience in building a ship and learning combat techniques.
Avoid small ships and complex ships for your first building experience.
There are many operational systems in our warships and every system is equally important in its own right. Think about it, which is more important, cannon, drive motors, pump, steering, or balance?
After a little reflection you will probably decide that all systems are equally important since your ship won’t be combat effective if any of these systems don’t work well. It’s by far and away
easier to learn the basics of maintenance and installation on a ship that has fewer operational systems. It is easier to get the hardware installed in a larger ship. Small ships test the talents of
the most skilled builder. For your first ship you will be well advised to build a larger ship rather than a smaller one. Larger ships are more survivable in combat as well.
Keep it simple.
Another sound tidbit of advice would be don’t try to reinvent the wheel. Stick to the basic and proven methods of implementing a function. Look at the ships of the seasoned skippers and pay
attention to how they implement the various functions, then follow suit.
This manual is a collection of articles that originally appeared on the BDE/RC website as a how to manual for building Big Gun RC model warships. Strike Models has made some edits and updates to the
documentation, and incorporated Fast Gun information when available. However, the original articles are now several years old, and construction methods and rules may have changed since they were
written. Please check with your local club or one of the resources listed in the links section before taking this information as gospel.
Table of Contents
Getting Started and Choosing a Ship
Scratch Building a Model Warship
Please check this page regularly or subscribe to the RSS feed, as we have several dozens articles we are in the process of reviewing and posting over the next several weeks.
One of the things we have heard people having problems getting the interuptor pin set correctly after disassembling the cannon for cleaning or maintenance. While I am not an expert on this yet, I
have found this method of setting the interruptor cap to be fast and effective. This process has to be done without an o-ring at the base of the barrel (and will not work at all with a geek breach).
1. Screw the interruptor cap all the way in (without pinching the pin itself).
2. Fill the magazine with BBs and cap the magazine.
3. Position the cannon as shown in the picture.
4. Slowly unscrew the interruptor cap.
5. As some point, a number of BBs will fall out.
6. Further unscrew the cap by 1/8 turn. The position is now at least very close.
7. Test the guns in this configuration to make sure a single shot is fired each time (adjust as needed)
8. Secure the cap with Loctite (blue is preferable) to keep the adjustment stable.
9. Test one last time before the Loctite sets.
In order to make our ships penetrable, we need to cut large windows in the fiberglass hulls. For those who have not done so before, cutting the hulls can be practically traumatic. Here are some tips
on how to make the cutting much easier. These instructions assume that the hull has already been marked for cutting (our 3/8 inch tape works well for this) and has had the corners drilled.
For a long time, the tool of choice for cutting the windows in our fiberglass hulls has been the Dremel rotary tool with a fiberglass reinforced cut-off wheel. These wheels wear quickly, but they
work. The standard ceramic cut-off wheels are not suitable for cutting hulls as any misalignment in the cutting will cause the wheel to shatter. Within the last couple of years, we found that the
carbide cutting/shaping wheel (#543) works better than either of these wheels for cutting the windows in the hulls. I’ve only purchased a single one of these wheels and cut many hulls with it. The
wheel base is metal, so the chances of the wheel breaking are very slim, and they do not wear out quickly.
Very recently, I found a tool that I think works much better than the Dremel rotary tool for cutting the hulls. This tool is one of the oscillating tools such as the Dremel Multi-Max combined with
the grout removal blade or the Harbor Freight Multifunction Power Tool combined with a carbide half moon blade (HF #67462). With the HF tool and carbide blade, I cut out one half of the HMS Hood in
one 35 minute session (and it was the first time I had used the tool). I timed myself at starting a 3 inch cut every 15 seconds. What I liked about using the tool is that you never felt like you were
straining to keep the cutting action under control like one would with a rotary tool. You also only need to apply a little pressure to the tool against the fiberglass instead of forcing the blade
along the cut. I did best by rolling the blade slowly between the drilled corners to cut through the gel coat, and then rolled the blade back to cut through the fiberglass below. One other thing I
really like about this type of tool is that it doesn’t throw the fiberglass dust around like the rotary tools do. You still need to be wearing goggles and breathing protection, but the dust is much
better. On the negative side, your hand will get tingly from holding the vibrating tool.
I’ve also tried the Harbor Freight inexpensive diamond wheel, but the this wheel takes three times as long as the carbide one. A more expensive diamond cutter might work better, but I haven’t picked
one up.
After cutting the windows in the hull, you will almost certainly find that parts of the window panes and caprail will be too thick. Once again, Harbor Freight comes to the rescue. They have a finger
width hand held belt sander that makes very short work of the sections that are too thick. The entire hull can be fixed in only minutes, depending upon now close the original cut was.
After about eight months of use, my Harbor Freight Multifuction Power Tool died. One of the motor brush springs broke. I’d found that I almost completely stopped using the Dremel rotary tool except
for some metal cut-off and polishing during this time. With the death of the HF version, I’m keeping to my rules about buying cheap tools (buy a cheap tool once; if it breaks, you’re using it enough
to warrant getting an expensive one). The Dremel Multi-Max is a much nicer tool for our purposes. The ability to change oscillation speeds is nice, but the really nice thing is that it is half the
weight of the HF version. This makes it much easier for one handed usage, which is typically needed when cutting hulls. | {"url":"http://www.strikemodels.com/blog/","timestamp":"2024-11-05T07:15:10Z","content_type":"text/html","content_length":"70564","record_id":"<urn:uuid:f3486b05-2c29-4f37-9a1d-284efd9278e3>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00397.warc.gz"} |
Properties of Cauchy Sequences -
Properties of Cauchy Sequences - Product and Quotient Laws
Properties of Cauchy Sequences - Product and Quotient Laws
Recall from the Cauchy Sequences of Real Numbers page that a sequence $(a_n)$ of real numbers is said to be Cauchy if for all $\epsilon > 0$ there exists an $N \in \mathbb{N}$ such that if $m, n \geq
N$ then:
\quad |a_m - a_n| < \epsilon
We will now prove that the product of two Cauchy sequences is also a Cauchy sequence.
Theorem 1: Let $(a_n)$ and $(b_n)$ be Cauchy sequences. Then $(a_nb_n)$ is a Cauchy sequence.
• Proof: Let $(a_n)$ and $(b_n)$ be Cauchy sequences. We want to show that $(a_nb_n)$ is also a Cauchy sequence, that is $\forall \epsilon > 0$ there exists an $N \in \mathbb{N}$ such that if $m, n
≥ N$ then $\mid a_nb_n - a_mb_m \mid < \epsilon$. Doing some algebraic manipulation we get that:
\quad \quad \mid a_nb_n - a_mb_m \mid = \mid a_nb_n - a_mb_n + a_mb_n - a_mb_m \mid = \mid b_n(a_n - a_m) + a_m(b_n - b_m) \mid < \mid b_n \mid \mid a_n - a_m \mid + \mid a_m \mid \mid b_n - b_m \mid
• Now recall that every Cauchy sequence is bounded. Since $(a_n)$ and $(b_n)$ are Cauchy sequences it follows that for $0 < M_1, M_2 \in \mathbb{R}$ that $\mid a_n \mid < M_1$ and $\mid b_n \mid <
M_2$ for all $n \in \mathbb{N}$ and so:
\quad \quad \quad \mid a_nb_n - a_mb_m \mid = ... < \mid b_n \mid \mid a_n - a_m \mid + \mid a_m \mid \mid b_n - b_m \mid < M_2 \mid a_n - a_m \mid + M_1 \mid b_n - b_m \mid
• Also, since $(a_n)$ is Cauchy there exists an $N_1 \in \mathbb{N}$ such that if $m, n ≥ N_1$ then $\mid a_n - a_m \mid < \frac{\epsilon}{2M_2}$.
• Similarly since $(b_n)$ is Cauchy there exists an $N_2 \in \mathbb{N}$ such that if $m, n ≥ N_2$ then $\mid b_n - b_m \mid < \frac{\epsilon}{2M_1}$.
• Choose $N = \mathrm{max} \{ N_1, N_2 \}$ and so for $m, n ≥ N$:
\quad \quad \mid a_nb_n - a_mb_m \mid = ... < M_2 \mid a_n - a_m \mid + M_1 \mid b_n - b_m \mid < M_2 \frac{\epsilon}{2M_2} + M_1 \frac{\epsilon}{2M_1} = \frac{\epsilon}{2} + \frac{\epsilon}{2} = \
• Therefore $(a_nb_n)$ is a Cauchy sequence. $\blacksquare$
Observe that if $(a_n)$ and $(b_n)$ are Cauchy sequences then it need not be that $\left ( \frac{a_n}{b_n} \right )$ is a Cauchy sequence. For example, consider the following sequences:
\quad (a_n) = \left ( \frac{1}{n} \right ) \quad , \quad (b_n) = \left ( \frac{1}{n^2} \right )
These sequences both converge (to $0$) and are hence Cauchy by the Cauchy convergence criterion. However the following sequence is not Cauchy:
\quad \left ( \frac{a_n}{b_n} \right ) = \left ( \frac{\frac{1}{n}}{\frac{1}{n^2}} \right) = (n)
This is because $(n)$ is divergent.
However, if $(b_n)$ is Cauchy and does not converge to $0$ then everything works out.
Theorem 2: Let $(a_n)$ and $(b_n)$ be Cauchy sequences and suppose that $(b_n)$ does not converge to $0$. Then $\left ( \frac{a_n}{b_n} \right )$ is a Cauchy sequence.
• Proof: Since $(a_n)$ and $(b_n)$ are Cauchy they are both convergent sequences by the Cauchy convergence criterion and since $(b_n)$ does not converge to $0$ we have that $\displaystyle{\left ( \
frac{a_n}{b_n} \right)}$ converges. But again by the Cauchy convergence criterion we have that then $\displaystyle{\left ( \frac{a_n}{b_n} \right)}$ is a Cauchy sequence. $\blacksquare$ | {"url":"http://mathonline.wikidot.com/properties-of-cauchy-sequences-product-and-quotient-laws","timestamp":"2024-11-07T09:35:42Z","content_type":"application/xhtml+xml","content_length":"18989","record_id":"<urn:uuid:b9224016-94ee-4051-9abd-837566e5965a>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00785.warc.gz"} |
Let's continue the previous example. This time, our purpose will not be to showcase as many library features as possible, but we will scope on different interfaces one can provide with the mp-units.
We will also describe some advantages and disadvantages of presented solutions.
First, we include all the necessary header files and import all the identifiers from the mp_units namespace:
1 #include <mp-units/ostream.h>
2 #include <mp-units/systems/cgs/cgs.h>
3 #include <mp-units/systems/international/international.h>
4 #include <mp-units/systems/isq/isq.h>
5 #include <mp-units/systems/si/si.h>
6 #include <exception>
7 #include <iostream>
9 namespace {
11 using namespace mp_units;
Next, we define two functions calculating average speed based on quantities of fixed units and integral and floating-point representation types, respectively, and a third function that we introduced
in the previous example:
constexpr quantity<si::metre / si::second, int> fixed_int_si_avg_speed(quantity<si::metre, int> d,
quantity<si::second, int> t)
return d / t;
constexpr quantity<si::metre / si::second> fixed_double_si_avg_speed(quantity<si::metre> d, quantity<si::second> t)
return d / t;
constexpr QuantityOf<isq::speed> auto avg_speed(QuantityOf<isq::length> auto d, QuantityOf<isq::time> auto t)
return d / t;
We also added a simple utility to print our results:
27 template<QuantityOf<isq::length> D, QuantityOf<isq::time> T, QuantityOf<isq::speed> V>
28 void print_result(D distance, T duration, V speed)
29 {
30 const auto result_in_kmph = speed.force_in(si::kilo<si::metre> / non_si::hour);
31 std::cout << "Average speed of a car that makes " << distance << " in " << duration << " is " << result_in_kmph
32 << ".\n";
33 }
Now, let's analyze how those three utility functions behave with different sets of arguments. First, we are going to use quantities of SI units and integral representation:
void example()
using namespace mp_units::si::unit_symbols;
// SI (int)
constexpr auto distance = 220 * km;
constexpr auto duration = 2 * h;
std::cout << "SI units with 'int' as representation\n";
print_result(distance, duration, fixed_int_si_avg_speed(distance, duration));
print_result(distance, duration, fixed_double_si_avg_speed(distance, duration));
print_result(distance, duration, avg_speed(distance, duration));
The above provides the following output:
SI units with 'int' as representation
Average speed of a car that makes 220 km in 2 h is 108 km/h.
Average speed of a car that makes 220 km in 2 h is 110 km/h.
Average speed of a car that makes 220 km in 2 h is 110 km/h.
Please note that in the first two cases, we must convert length from km to m and time from h to s. The converted values are used to calculate speed in m / s which is then again converted to the one
in km / h. Those conversions not only impact the application's runtime performance but may also affect the final result. Such truncation can be easily observed in the first case where we deal with
integral representation types (the resulting speed is 108 km / h).
The second scenario is really similar to the previous one, but this time, function arguments have floating-point representation types:
// SI (double)
constexpr auto distance = 220. * km;
constexpr auto duration = 2. * h;
std::cout << "\nSI units with 'double' as representation\n";
// conversion from a floating-point to an integral type is a truncating one so an explicit cast is needed
print_result(distance, duration, fixed_int_si_avg_speed(value_cast<int>(distance), value_cast<int>(duration)));
print_result(distance, duration, fixed_double_si_avg_speed(distance, duration));
print_result(distance, duration, avg_speed(distance, duration));
Conversion from floating-point to integral representation types is considered value-truncating and that is why now, in the first case, we need an explicit call to value_cast<int>.
In the text output, we can observe that, again, the resulting value gets truncated during conversions in the first cast:
SI units with 'double' as representation
Average speed of a car that makes 220 km in 2 h is 108 km/h.
Average speed of a car that makes 220 km in 2 h is 110 km/h.
Average speed of a car that makes 220 km in 2 h is 110 km/h.
Next, let's do the same for integral and floating-point representations, but this time using US Customary units:
// Customary Units (int)
using namespace mp_units::international::unit_symbols;
constexpr auto distance = 140 * mi;
constexpr auto duration = 2 * h;
std::cout << "\nUS Customary Units with 'int' as representation\n";
// it is not possible to make a lossless conversion of miles to meters on an integral type
// (explicit cast needed)
print_result(distance, duration, fixed_int_si_avg_speed(distance.force_in(m), duration));
print_result(distance, duration, fixed_double_si_avg_speed(distance, duration));
print_result(distance, duration, avg_speed(distance, duration));
// Customary Units (double)
using namespace mp_units::international::unit_symbols;
constexpr auto distance = 140. * mi;
constexpr auto duration = 2. * h;
std::cout << "\nUS Customary Units with 'double' as representation\n";
// conversion from a floating-point to an integral type is a truncating one so an explicit cast is needed
// also it is not possible to make a lossless conversion of miles to meters on an integral type
// (explicit cast needed)
print_result(distance, duration,
fixed_int_si_avg_speed(value_cast<int>(distance.force_in(m)), value_cast<int>(duration)));
print_result(distance, duration, fixed_double_si_avg_speed(distance, duration));
print_result(distance, duration, avg_speed(distance, duration));
One important difference here is the fact that as it is not possible to make a lossless conversion of miles to meters on a quantity using an integral representation type, so this time, we need a
value_cast<si::metre> to force it.
If we check the text output of the above, we will see the following:
US Customary Units with 'int' as representation
Average speed of a car that makes 140 mi in 2 h is 111 km/h.
Average speed of a car that makes 140 mi in 2 h is 112.654 km/h.
Average speed of a car that makes 140 mi in 2 h is 112 km/h.
US Customary Units with 'double' as representation
Average speed of a car that makes 140 mi in 2 h is 111 km/h.
Average speed of a car that makes 140 mi in 2 h is 112.654 km/h.
Average speed of a car that makes 140 mi in 2 h is 112.654 km/h.
Please note how the first and third results get truncated using integral representation types.
In the end, we repeat the scenario for CGS units:
// CGS (int)
constexpr auto distance = 22'000'000 * cgs::centimetre;
constexpr auto duration = 7200 * cgs::second;
std::cout << "\nCGS units with 'int' as representation\n";
// it is not possible to make a lossless conversion of centimeters to meters on an integral type
// (explicit cast needed)
print_result(distance, duration, fixed_int_si_avg_speed(distance.force_in(m), duration));
print_result(distance, duration, fixed_double_si_avg_speed(distance, duration));
print_result(distance, duration, avg_speed(distance, duration));
// CGS (double)
constexpr auto distance = 22'000'000. * cgs::centimetre;
constexpr auto duration = 7200. * cgs::second;
std::cout << "\nCGS units with 'double' as representation\n";
// conversion from a floating-point to an integral type is a truncating one so an explicit cast is needed
// it is not possible to make a lossless conversion of centimeters to meters on an integral type
// (explicit cast needed)
print_result(distance, duration,
fixed_int_si_avg_speed(value_cast<int>(distance.force_in(m)), value_cast<int>(duration)));
print_result(distance, duration, fixed_double_si_avg_speed(distance, duration));
print_result(distance, duration, avg_speed(distance, duration));
Again, we observe value_cast being used in the same places and consistent truncation errors in the text output:
CGS units with 'int' as representation
Average speed of a car that makes 22000000 cm in 7200 s is 108 km/h.
Average speed of a car that makes 22000000 cm in 7200 s is 110 km/h.
Average speed of a car that makes 22000000 cm in 7200 s is 109 km/h.
CGS units with 'double' as representation
Average speed of a car that makes 2.2e+07 cm in 7200 s is 108 km/h.
Average speed of a car that makes 2.2e+07 cm in 7200 s is 110 km/h.
Average speed of a car that makes 2.2e+07 cm in 7200 s is 110 km/h.
The example file ends with a simple main() function:
} // namespace
int main()
try {
} catch (const std::exception& ex) {
std::cerr << "Unhandled std exception caught: " << ex.what() << '\n';
} catch (...) {
std::cerr << "Unhandled unknown exception caught\n"; | {"url":"https://mpusz.github.io/mp-units/2.1/users_guide/examples/avg_speed/","timestamp":"2024-11-08T12:15:51Z","content_type":"text/html","content_length":"100521","record_id":"<urn:uuid:1836bf67-358c-4b24-ab7f-8c8be3b25688>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00628.warc.gz"} |
Support of QR decomposition for a very large sparse 2-D matrix
Hi all,
I see we have
Sparse ITensors as
Single Element ITensor
Delta and Diagonal ITensor
Also, IQTensor for block-sparse matrix.
However, is there any support for original sparse matrix which is defined as non-zero elements are less than half of total elements and doesn't have specific structure?
I was hoping to QR decompose a very large rectangle sparse matrix such as 4^30 x 4.
Thank you very much!
Hi Victor,
Unfortunately we don't have support for general sparse tensors beyond the ones that you listed (though that would be great to have). For now and in the foreseeable future you would have to look for
another library for that.
Thank you so much!
I understand it might be beyond this website.
Do you recommend any library can go with MPI such as Eigen? | {"url":"http://itensor.org/support/1616/support-of-qr-decomposition-for-a-very-large-sparse-d-matrix?show=1624","timestamp":"2024-11-01T23:16:36Z","content_type":"text/html","content_length":"23002","record_id":"<urn:uuid:1290c8b7-661f-45a4-8398-b8f0f410d41d>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00459.warc.gz"} |
Consecutive Integers Calculator - Find the Consecutive Integers!
Introduction to Consecutive Integers Calculator
Consecutive integers calculator is an online tool that is used to find the consecutive integers solution in a fraction of a second.
It helps you to determine the solution of the sum and product of the consecutive number in a sequence. With the consecutive integer calculator, you can also find the consecutive integers' sum and
What are Consecutive Integers?
Consecutive integers are integers that are arranged in number to increase one by one from right to left in order such that the difference between any two consecutive integers is the same.
For example, if x is the number, then x+1, x + 2, and x+3 are its three consecutive numbers. Similarly, the consecutive number has even integers; -4, -2, 0, 2, 4,.. or consecutive odd integers; …-3,
-1, 1, 3,.. Etc.
To find 3 consecutive odd integers, simply start with any odd integer and add 2 to get the next consecutive odd integer.
Formula Used for Consecutive Integer
Consecutive integers are in even or odd order to find the sum and product of these consecutive numbers. The notation used by consecutive integer problems calculator is,
Odd Consecutive number for sum or product
$$ x + \;(x + 1) + \;(x + 3) +..........+ (2n + 1x) $$
Even Consecutive numbers for the sum or product
$$ (x+2) \times (x+4) ...............(2n) $$
Consecutive integers solver is a calculator that helps find the consecutive integer sequence, whether they are even, odd, or consecutive in general using the above formula.
How to Calculate in a Consecutive Integer Calculator?
2 Consecutive integers calculator has an advanced algorithmic that is installed in its server that enables you to find the solution of 2 consecutive odd integers number problems easily and quickly.
You just add your particular word problems and the rest of the work will be done by the consecutive number calculator automatically. When you add your consecutive number as an input in a consecutive
numbers calculator it will identify whether the given number is even or odd.
After identification, it starts calculating the consecutive integers after supposing some consecutive numbers as per question requirements. Then keeping the sum or product value equal to all these
combine consecutive integers.
After that, you simplify the equation and find the value of x. Now add the x value in all the supposed terms one by one to get the value of the consecutive number. You can get clarity about this
concept using the given example as shown below.
Solved Example of Consecutive Integers
Let's see a practical example of consecutive integers to understand the working procedure of a consecutive integer calculator.
Calculate three consecutive integers whose sum is 657.
$$ n + n + 1 + n + 2 \;=\; 657 $$
$$ 3n + 3 \;=\; 657 $$
$$ 3n \;=\; 654 $$
$$ n \;=\; 218 $$
The numbers are,
$$ 218, 219, 220 $$
Example for an even number
Julie has a board that is 5 feet long. She plans to use the board to make 4 shelves whose lengths are to be a series of consecutive even numbers. Now calculate how long should each shelf be in
We have to add 2 to the previous number,
$$ let\; x \;=\; length\; of\; first\; shelf $$
$$ x + 2 \;=\; length\;of\; second\;shelf $$
$$ x + 4 \;=\; length\;of\;third\;shelf $$
$$ x + 6 \;=\; length\;of\;fourth\;shelf $$
Convert feet to inches,
$$ 5 \times 12 \;=\; 60 $$
Sum the 4 shelves,
$$ x + x + 2 + x + 4 + x + 6 \;=\; 60 $$
After simplification, we get the equation
$$ 4x + 12 \;=\; 60 $$
Separate the x variable on one side of the equation to get the value of x.
$$ 4x \;=\; 60 - 12 $$
$$ 4x \;=\; 48 $$
$$ x \;=\; 12 $$
After adding the values of x in all the consecutive numbers, we get the length of the shelf which is 12, 14, 16, and 18.
Example of the Odd Number
What is the length of the longest side if the perimeter is 45 and the lengths of the sides of the triangle are consecutive odd numbers?
To make our consecutive integer into an odd number we add 2 in x variable.
$$ let\; x \;=\; length\; of\; shortest\; side $$
$$ x + 2 \;=\; length\; of\; medium\;side $$
$$ x + 4 \;=\; length\;of\;longest\;side $$
Our triangle become
$$ Perimeter\; of\; length \;=\; a + b + c $$
$$ 45 \;=\; x + x + 2 + x + 4 $$
To get the value of x, Simply it the given equation
$$ 45 \;=\; 3x + 6 $$
$$ 3x \;=\; 45 - 6 $$
$$ 3x \;=\; 39 $$
$$ x \;=\; 13 $$
Add the value of x to find the length of the triangle.
The length of the longest is,
$$ 13 + 4 \;=\; 17 $$
How to Use Consecutive Integers Solver?
2 consecutive integers calculator has a simple design tool that enables you to use it to calculate the two algebraic equations with different methods.
Before entering the input function into the 3 consecutive odd integers calculator, you must follow some simple steps so that you get a smooth experience during the calculation. These steps are:
1. Choose the sum and product that you want to evaluate for a consecutive number
2. Enter your consecutive integer in the input box.
3. Review your consecutive integer number before hitting the calculate button to start the evaluation process.
4. Click the “Calculate” button to get the result of your given consecutive integer number problem.
5. If you want to try out our consecutive number calculator for the first time then you can use the load example to get a better understanding of the concept of consecutive number integer
6. Click on the “Recalculate” button to get a refresh page for more solutions of consecutive integer problems
Final Result of Consecutive Integers Calculator
3 consecutive even integers calculator gives you the solution to a given integer for a product or sum when you add the input function into it. It provides you with solutions with a detailed procedure
instantly. It may be included as:
When you click on the Result option it gives you a solution for the consecutive number to find the given problem
It provides you with a solution in which all the evaluation processes are present in a step-by-step method of the consecutive integer problem when you click on this option.
Advantages of the Consecutive Number Calculator
Two Consecutive integers calculator provides you with multiple benefits whenever you use it to calculate problems and gives you a solution. These benefits are:
• Consecutive integer calculator is a free-of-cost tool that enables you to use it anytime to find the consecutive integer problem in real-time.
• Consecutive integers solver is a versatile tool that allows you to get the solution of various kinds of consecutive integer problems
• You can try out our calculator to practice more examples of the consecutive integer, you get a strong hold on this concept
• Our consecutive integer problems calculator saves the time that you spend on doing the consecutive integer calculations problems.
• It is a reliable tool that provides you with accurate solutions whenever you use it to calculate consecutive for the sum and product examples without any man-made error.
• It provides the solution with a complete process in a stepwise method so that you get clarity on the consecutive integer number problems | {"url":"https://pinecalculator.com/consecutive-integers-calculator","timestamp":"2024-11-12T07:05:10Z","content_type":"text/html","content_length":"47926","record_id":"<urn:uuid:8f8f5b02-1261-454c-a011-2b7134818c44>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00533.warc.gz"} |
Cosmological simulations of self-interacting Bose-Einstein condensate dark matter
Issue A&A
Volume 666, October 2022
Article Number A95
Number of page(s) 15
Section Cosmology (including clusters of galaxies)
DOI https://doi.org/10.1051/0004-6361/202243496
Published online 20 October 2022
A&A 666, A95 (2022)
Cosmological simulations of self-interacting Bose-Einstein condensate dark matter
Institute of Theoretical Astrophysics, University of Oslo, PO Box 1029, Blindern, 0315 Oslo, Norway
e-mail: stian.hartman@gmail.com
Received: 8 March 2022
Accepted: 16 August 2022
Fully 3D cosmological simulations of scalar field dark matter with self-interactions, also known as Bose-Einstein condensate dark matter, are performed using a set of effective hydrodynamic
equations. These are derived from the non-linear Schrödinger equation by performing a smoothing operation over scales larger than the de Broglie wavelength, but smaller than the self-interaction
Jeans’ length. The dynamics on the de Broglie scale become an effective thermal energy in the hydrodynamic approximation, which is assumed to be subdominant in the initial conditions, but become
important as structures collapse and the fluid is shock-heated. The halos that form have Navarro-Frenk-White envelopes, while the centers are cored due to the fluid pressures (thermal +
self-interaction), confirming the features found by Dawoodbhoy et al. (2021, MNRAS, 506, 2418) using 1D simulations under the assumption of spherical symmetry. The core radii are largely determined
by the self-interaction Jeans’ length, even though the effective thermal energy eventually dominates over the self-interaction energy everywhere, a result that is insensitive to the initial ratio of
thermal energy to interaction energy, provided it is sufficiently small to not affect the linear and weakly non-linear regimes. Scaling relations for the simulated population of halos are compared to
Milky Way dwarf spheroidals and nearby galaxies, assuming a Burkert halo profile, and are found to not match, although they conform better with observations compared to fuzzy dark matter-only
Key words: cosmology: theory / dark matter
© S. T. H. Hartman et al. 2022
Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use,
distribution, and reproduction in any medium, provided the original work is properly cited.
This article is published in open access under the Subscribe-to-Open model. Subscribe to A&A to support open access publication.
1. Introduction
Unveiling the fundamental nature of dark matter (DM) – the dominant matter component of our Universe – has been one of the ultimate goals of physics for many decades. Although its particle identity
has remained elusive, the collisionless and cold DM (CDM) paradigm, along with a cosmological constant and inflationary initial conditions, has proven extremely successful at explaining a wide range
of observables, and is known as the ΛCDM model (Davis et al. 1985; Percival et al. 2001; Tegmark et al. 2004; Trujillo-Gomez et al. 2011; Vogelsberger et al. 2014; Planck Collaboration XIII 2016;
Riess et al. 2016; Cyburt et al. 2016). Nevertheless, the quest for a complete theory of DM has spawned numerous expanded models, many of which aim to reproduce the success of CDM while attempting to
fix discrepancies between ΛCDM and observations, finding physics beyond the standard model of particle physics (SM), or both.
Some of the tensions in ΛCDM are related to the formation and shape of small-scale structures. CDM, due to its cold and collisionless nature, is very efficient at forming structure at all scales. For
instance, N-body simulations predict CDM halos to follow the universal Navarro-Frenk-White (NFW) profile (Navarro et al. 1996, 1997), which diverges near the center as ρ(r)∼r^−1, whereas measurements
instead indicate the central regions of DM profiles of low-mass halos to be flatter, or rather, that low-mass halos are cored rather than cuspy. Furthermore, ΛCDM predicts a large number of low-mass
halos, as well as massive subhalos that should be too big to fail at forming stars, yet these are largely absent in our galactic neighborhood. These issues of ΛCDM, known as the “cusp-core”,
“missing-halo”, and “too-big-to-fail” problems, respectively, might be due to limitations in accurately modeling baryonic physics and processes that are too small to be resolved in large-scale
simulations (see Weinberg et al. 2015; Del Popolo & Le Delliou 2017; Bullock & Boylan-Kolchin 2017, and references therein for further discussion on these small-scale discrepancies in ΛCDM).
Alternatively, the solution might lie in the dark sector.
A popular class of DM models beyond CDM involve ultra-light scalar, or pseudo-scalar, particles that have a sufficiently small mass to exhibit wave-like behaviour on astrophysical scales. The
free-field case, termed Fuzzy DM (FDM; Dine & Fischler 1983; Preskill et al. 1983; Hu et al. 2000; Marsh 2016; Hui et al. 2017), with masses as low as 10^−22 eV, have large de Broglie wavelengths λ
[dB]=1/mv and produce solitonic cores of order 1 kpc at the centers of DM halos. Unlike CDM, which clusters on all scales, FDM suppresses small-scale structure because of an effective Jeans’ length
due to the large de Broglie wavelength. Furthermore, FDM produces interference patterns in its density distribution, which is a very distinct feature of FDM compared to CDM and other DM models, such
as warm DM. However, a recent bound on the FDM mass from the Lyman-alpha forest constrains it to m>2×10^−20 eV, and therefore seems to rule out the canonical mass-range generally considered to be
needed to solve the small-scale problems of ΛCDM (Rogers & Peiris 2021).
Another kind of ultra-light DM are scalar fields with interactions. These can vary in complexity, from single-field DM with self-interactions (Lee & Koh 1996; Peebles 2000; Goodman 2000; Arbey et al.
2002, 2003; Böhmer & Harko 2007; Chavanis 2011; Rindler-Daller & Shapiro 2012), to multi-field DM with non-trivial couplings that give rise to exotic properties (Matos et al. 2000; Bettoni et al.
2014; Berezhiani & Khoury 2015; Khoury 2016; Ferreira et al. 2019), though a common feature among these models is for the interactions to give rise to a fluid pressure that produces halo cores. The
simplest scenario is single-field DM with quartic self-interactions and a negligible de Broglie wavelength, which, along with FDM, is often referred to as Bose-Einstein condensed DM, because they can
be regarded as the zero-temperature limit of a boson gas, which in mean-field theory is described by a classical field (Pitaevskii & Stringari 2016). The term superfluid DM is also frequently used
due to the close relationship between Bose-Einstein condensates and superfluidity, though in this work we will instead use “self-interacting Bose-Einstein condensed” (SIBEC) DM, in an effort to
emphasize the importance of the self-interactions on the dynamics of the scalar field.
The non-linear structure of SIBEC-DM has largely been investigated near hydrostatic equilibrium, which finds SIBEC-DM halo cores to be independent of the total halo mass and core density. Fitting
SIBEC-DM to nearby galaxies reproduces the observed rotation curves for core radii around 1 kpc and larger (Zhang et al. 2018; Crăciun & Harko 2020). However, to obtain more realistic SIBEC-DM halo
profiles from cosmological structure formation, which are expected transition to into the NFW profile outside the cores, one needs to go beyond hydrostatic considerations and use numerical
simulations, though some challenges arise in such an endeavor. For instance, the equation of motion for non-relativistic scalar fields, the non-linear Schrödinger equation (NLSE), is very
computationally demanding to solve, even in the FDM-limit where the de Broglie wavelength is of astrophysical scale. For SIBEC-DM, where λ[dB] is much smaller, but still needs to be resolved, the
NLSE is even more computationally demanding. Simply setting the terms that describe the dynamics at λ[dB]-scales to zero, which works in the hydrostatic case, does not work in non-linear simulations,
as they play an important role in regularizing discontinuities and keeping the numerical solution well-behaved. A scheme that somehow incorporates the de Broglie scale without actually needing to
resolve it is therefore desired, and was recently employed by Dawoodbhoy et al. (2021) and Shapiro et al. (2021). They used a hydrodynamic formulation of the NLSE that incorporates the dynamics on
the de Broglie scale as an effective thermal pressure, and performed 1D simulations of collapsing spherical overdensities of SIBEC-DM. They found that in a non-cosmological setting, SIBEC-DM halos
with cores of order 1 kpc provide a solution to the small-scale issues of the standard model. In fact, they found SIBEC-DM to provide a better solution to both the too-big-to-fail and cusp-core
problems than FDM, which struggles to fix these two issues at the same time (Robles et al. 2019). Moving to a cosmological setting, on the other hand, results in an overly large suppression of the
formation of small-scale halos for these same core radii. The cores therefore have to be smaller than what is needed to provide a solution to, for instance, the core-cusp issue, in order to be
compatible with the halo mass function. A similar conclusion was drawn by Hartman et al. (2022) using large-scale observables, albeit to a much lesser degree. Exploring the formation of non-linear
structure in a universe with SIBEC-DM and investigating the properties of SIBEC-DM halos is nevertheless a worthwhile endeavor in the effort to further understand the rich phenomenology that is found
in scalar field DM models. In this paper we therefore build on the work of Dawoodbhoy et al. (2021) and Shapiro et al. (2021) by implementing the de Broglie-smoothed hydrodynamic formulation of
SIBEC-DM into a modified version of the RAMSES code (Teyssier 2002). This code is then used to perform fully 3D cosmological simulations to explore the formation of SIBEC-DM structures and their
non-linear dynamics. The paper is structured as follows: In Sect. 2 the basic theoretical framework for modelling scalar field DM is reviewed, along with the smoothing procedure used to derive the
hydrodynamic approximation for SIBEC-DM. In Sect. 3 the numerical aspects of this paper are discussed, including a brief description of how the RAMSES code works, the initial setup of the SIBEC-DM
fluid, and a discussion of the limitations of the simulations. Our main results are presented and discussed in Sect. 4, with final conclusions in Sect. 5. We use, for the most part, natural units.
2. Theory of self-interacting Bose-Einstein condensates dark matter
Ultra-light scalar DM can be described by a classical field theory, and one of the simplest realizations is the Lagrangian for a complex scalar field Ψ with self-interactions (Li et al. 2014),
$L = 1 2 m g μ ν ∂ μ Ψ ∗ ∂ ν Ψ − 1 2 m | Ψ | 2 − 1 2 g | Ψ | 4 ,$(1)
where m is the mass of the scalar field, and g is the coupling parameter for the self-interaction. In the early universe, where DM is much denser and the self-interaction of the scalar field
dominates the energy density, the averaged pressure and energy of the field behave as radiation (Matos & Arturo Ureña-López 2001; Li et al. 2014). As the universe expands and becomes more diluted,
the interaction energy eventually becomes smaller than the rest energy of the field, and it ceases to be radiation-like. In this non-relativistic phase we can define a new field, Ψ=ψe^−imt, whose
dynamics are given by the NLSE (Ferreira 2021),
$i ∂ ψ ∂ t = [ − ∇ 2 2 m + V ] ψ ,$(2)
where V=g|ψ|^2+mϕ includes two-body contact interactions and the gravitational potential ϕ. The NLSE can be reformulated in a hydrodynamical form by substituting for the wavefunction
where n is the particle number density, ρ=mn the mass density, and the velocity field is defined by v=∇S/m. This gives the Madelung equations (Madelung 1926),
$∂ ρ ∂ t + ∇ · ( ρ v ) = 0 ,$(4)
$∂ v ∂ t + ( v · ∇ ) v + ∇ ( g ρ m 2 − 1 2 m 2 ∇ 2 ρ ρ + ϕ ) = 0 ,$(5)
which are readily recognizable as an equation for mass conservation and a variant of the momentum equation. The largest difference from standard hydrodynamics is the absence of an energy equation,
the presence of a self-interaction pressure
and a so-called quantum potential
The phenomenology of ultra-light DM depends crucially on whether the interaction pressure or the quantum potential dominates. This becomes apparent when considering halo properties of ultra-light DM
at equilibrium. In the FDM-limit, the de Broglie wavelength of the ultra-light DM particles is much larger than the Jeans’ length of the self-interaction and is of astrophysical size, such that
small-scale structure is stabilized against collapse due to the formation of solitonic cores for masses m≲10^−21. The core radii of FDM halos at hydrostatic equilibrium are approximately (Membrado
et al. 1989)
where M is the total mass of the halo. The opposite limit Q≪P[SI]/ρ, which is the case we refer to as SIBEC-DM, but is also known as the Thomas-Fermi approximation, results in the hydrostatic
density profile (Goodman 2000)
$ρ ( r ) = ρ 0 sin ( A r ) Ar ,$(9)
where $A = 4 π G m 2 / g$, and goes to zero at
The soliton cores of massive FDM halos are smaller than low-mass halos, a feature confirmed by simulations (Chan et al. 2022), whereas the core radius R[c] of halos in the Thomas-Fermi limit is
independent of the total halo mass and central density, depending only on the combination g/m^2.
Both the NLSE and its hydrodynamic Madelung formulation are widely used to study ultra-light DM across a wide range of scales, from the properties of low-mass halos and galaxies (Membrado et al. 1989
; Goodman 2000; Böhmer & Harko 2007; Chavanis 2011; Chavanis & Delfini 2011; Rindler-Daller & Shapiro 2014; Hui et al. 2017; Zhang et al. 2018; Berezhiani et al. 2019; Crăciun & Harko 2020; Lancaster
et al. 2020) to large-scale simulations (Schive et al. 2014a,b; Schwabe et al. 2016; Mocz et al. 2017, 2018; Veltmaat et al. 2018; Nori & Baldi 2018; Mina et al. 2020, 2022; Nori & Baldi 2021; May &
Springel 2021). However, only the FDM-limit has been thoroughly investigated using fully 3D cosmological simulations. No corresponding work has so far been carried out in the Thomas-Fermi limit, in
part due to a number of challenges in simulating this kind of system. The hydrodynamic equations obtained by neglecting the quantum potential appear equivalent to regular hydrodynamics with an
adiabatic index γ=2, such that it seems standard numerical schemes should work, but as shock fronts and discontinuities arise, the Thomas-Fermi approximation breaks down and the full equations are
needed. However, both the NLSE and Madelung equations are computationally demanding to solve due to the high resolution needed to resolve the scales on which the quantum potential operates. This is
already a challenge in FDM simulations, but even more so for SIBEC-DM where the characteristic scale of Q is much smaller than the scales of interest given by the interaction pressure, λ[dB]≪R[c].
2.1. Smoothed SIBEC hydrodynamics
An alternative hydrodynamic approximation of SIBEC-DM that includes the effect of the quantum potential without actually needing to resolve λ[dB] was recently used by Dawoodbhoy et al. (2021) and
Shapiro et al. (2021). It involves using a smoothed phase space representation of the wavefunction, known as the Husimi representation (Husimi 1940), which is obtained by essentially smoothing over ψ
with a Gaussian window function of width η and Fourier transforming;
$Ψ ( x , p , t ) = ∫ d 3 y ( 2 π ) 3 / 2 e − ( x − y ) 2 / 2 η 2 ( η π ) 3 / 2 ψ ( y , t ) e − i p · ( y − x / 2 ) .$(11)
Defining the phase space distribution function
$F ( x , p , t ) = | Ψ ( x , p , t ) | 2 ,$(12)
and computing the derivative ∂ℱ/∂t, using the NLSE i∂ψ/∂t=[−∇^2/2m+V]ψ, and smoothing over scales larger than the de Broglie wavelength λ[dB]≪η, gives an approximate equation of motion that
is of the same form as the collisionless Boltzmann equation (Skodje et al. 1989; Widrow & Kaiser 1993);
$d F d t = ∂ F ∂ t + p i m ∂ F ∂ x i − ∂ V ∂ x i ∂ F ∂ p i = 0 .$(13)
The gravitational potential and the mean-field potential due to self-interactions in V are now given by the smoothed density field, which is obtained from ℱ in the same way as from a standard phase
space distribution function,
$ρ ( x , t ) = m ∫ d 3 p F ( x , p , t ) = ∫ d 3 y e − ( x − y ) 2 / η 2 ( η π ) 3 m | ψ ( y , t ) | 2 .$(14)
Following Dawoodbhoy et al. (2021), the hydrodynamic equations for SIBEC-DM in this approximation are obtained from the zeroth, first, and second moments of Eq. (13), and it is assumed that the phase
space distribution function is isotropic around its mean momentum ⟨p⟩, that is, that ℱ(x,p,t) is skewless in p. This ansatz is supported by the results of Veltmaat et al. (2018), who performed 3D
simulations of FDM halo mergers using the NLSE and found the velocity distribution of plane waves inside the virial radius to be Maxwellian. Before the growth of non-linear structures, and outside of
halos, such as in regions that have not undergone violent relaxation and shell crossing, the assumption of skewlessness in ℱ may not hold true. On the other hand, the fluid is supersonic and
ballistic in these regions, as well as dominated by the interaction pressure, so it should not matter in these situations. During mergers, however, the skewless assumption might also break down and
have significant implications if the merging halos have large plane-wave velocity dispersions. We nevertheless use this assumption throughout our simulations. The resulting equations are
$∂ j ∂ t + ∇ ( P + P SI ) + ρ ( v · ∇ ) v + v ( ∇ · ρ v ) = − ρ ∇ ϕ ,$(16)
$∂ E ∂ t + ∇ · [ ( E + P + P SI ) v ] = − j · ∇ ϕ .$(17)
There are two major changes in the above hydrodynamic description of BECs compared to the Madelung equations; first, there is no quantum potential; and second, there is an energy equation in addition
to the continuity and momentum equations, with the total energy given by
$E = 1 2 ρ v 2 + U + U SI ,$(18)
where $P = 2 3 U$ and P[SI]=U[SI]. In fact, the small-scale dynamics of the Madelung equations driven by the quantum potential, after smoothing over scales larger than the de Broglie wavelength,
behave effectively as an ideal gas with adiabatic index γ=5/3, with an additional pressure and internal energy due to the self-interactions. The fluid variables, such as the fluid density and
momentum, and the effective thermal internal energy and pressure, are defined in the same way as in classical statistical mechanics (Huang 1987), but using the smoothed phase space representation of
the wavefunction rather than a phase space distribution function of classical point particles;
$n ( x , t ) = ∫ d 3 p F ( x , p , t ) ,$(19)
$j ( x , t ) = n p = ∫ d 3 p p F ( x , p , t ) ,$(20)
$P ( x , t ) = ∫ d 3 p | p − p | 2 3 m F ( x , p , t ) ,$(21)
$U ( x , t ) = ∫ d 3 p | p − p | 2 2 m F ( x , p , t ) ,$(22)
$E ( x , t ) = U SI + ∫ d 3 p p 2 2 m F ( x , p , t ) .$(23)
In the following, we will sometimes refer to the “effective thermal” energy and pressure as simply “thermal”.
2.2. Supercomoving hydrodynamics
Now that we have a set of fluid equations for SIBEC-DM, we would like to apply these to the growth of structure in a cosmological setting. A convenient set of coordinates for this purpose are the
so-called supercomoving coordinates (Martel & Shapiro 1998), which uses the following change of variables in the case of a flat universe dominated by matter and a cosmological constant;
$x ∼ = 1 a x L , d t ∼ = H 0 d t a 2 , ρ ∼ = a 3 ρ Ω m 0 ρ c 0 , u ∼ = a u H 0 L , P ∼ = a 5 P Ω m 0 ρ c 0 H 0 2 L 2 , m ∼ = m H 0 L , g ∼ = g Ω m 0 ρ c 0 a ,$(24)
where the peculiar motion u=v−Hr is introduced. L is a free length scale parameter, Ω[m0] is the fraction of matter today, and ρ[c0] the critical energy density today. In these coordinates, the
fluid equations in an expanding universe are on a similar form as the standard equations on a static background;
$∂ ρ ∼ ∂ t ∼ + ∇ ∼ · j ∼ = 0 ,$(25)
$∂ j ∼ ∂ t ∼ + ∇ ∼ ( P ∼ tot ) + ρ ∼ ( u ∼ · ∇ ∼ ) u ∼ + u ∼ ( ∇ ∼ · ρ ∼ u ∼ ) = − ρ ∼ ∇ ∼ ϕ ∼ ,$(26)
$∂ E ∼ ∂ t ∼ + ∇ ∼ · [ ( E ∼ + P ∼ tot ) u ∼ ] + H ( 3 P ∼ tot − 2 U ∼ tot ) = − j ∼ · ∇ ∼ ϕ ∼ .$(27)
where $P ∼ tot = P ∼ + P ∼ SI$ and $U ∼ tot = U ∼ + U ∼ SI$. The supercomoving Hubble parameter is given by $H = a − 1 d a / d t ∼$, the gravitational potential by
$∇ ∼ 2 ϕ ∼ = 3 2 a Ω m ( ρ ∼ − 1 ) ,$(28)
and expressions for the energies, pressures, and so on, are the same as before, but with supercomoving quantities. A notable difference from regular hydrodynamics is the presence of the Hubble drag
in the supercomoving energy equation, which vanishes for the ideal gas pressure with γ=5/3, but not for the self-interaction pressure.
3. Numerical implementation
A modified version of RAMSES (Teyssier 2002) is used to simulate the formation of structure of SIBEC-DM in a cosmological setting. The halo catalogs are obtained with Amiga’s Halo Finder (AHF; Gill
et al. 2004; Knollmann & Knebe 2009), and plots of simulation snapshots are made using the python package YT (Turk et al. 2010). RAMSES is a grid based code that uses adaptive mesh refinement (AMR)
to focus computational resources through a hierarchy of nested grids, increasing the local resolution in regions of interest. This is done by recursively splitting individual cells into 2^N smaller
cells, where N is the dimensionality, until a set of refinement criteria are satisfied, doubling the spatial resolution at each new level of refinement. For a box of length L, the cell length at
refinement level l is Δx^l=L/2^l.
RAMSES was originally developed for cosmological simulations of the formation and evolution of structure at both large and small scales, using an N-body particle-mesh code for CDM, with which the
phase space distribution of DM is sampled through a computationally tractable number of collisionless macroparticles. The gravitational forces acting on these macroparticles are obtained by
projecting the DM mass onto the AMR grid and solving the Poisson equation. RAMSES also includes code for studying gas dynamics, using the grid and a Godunov scheme to solve the hydrodynamic equations
in supercomoving coordinates. It is this latter part of RAMSES that has been modified in order to solve the SIBEC-DM hydrodynamic equations, and we use a second-order Godunov scheme with the Rusanov
flux (an HLLE Riemann solver) and the van Leer slope limiter to compute the flux at the cell interfaces (Toro 2006). The Rusanov flux is computed using the conservative variables $ρ ∼$, $j ∼$, and $E
∼$, and Eqs. (25)–(27), while the second-order MUSCL-Hancock and slope limiting step is done using the primitive variables $ρ ∼$, $u ∼$, and $P ∼$ (the thermal pressure).
A 3D test of the collapse of a spherically symmetric overdensity at two different grid resolutions (refinement levels 7 to 11, and 8 to 12) are included in Fig. 1. A periodic box of size L=50 kpc
was used, with the initial density ρ(r) = ρ[0][1+e^−(r/R)^2], where R=5 kpc and ρ[0]=0.3ρ[c0]. The initial thermal pressure was ζ=P/P[SI]=10^−1, and the simulations were run until 3t[dyn],
where $t dyn = 3 / 8 π G ρ 0$ is the dynamical timescale of the initial overdensity. The energy density profiles, with
Fig. 1.
Self-interaction energy density U[SI], thermal energy density U, and gravitational potential energy density W of the spherically symmetric test halo with R[c]=1 kpc, relative to the interaction
energy of the initial background $U ¯ SI$. The result of the simulation with refinement levels 8 to 12 are shown in solid, while 7 to 11 is shown in dotted. The 3D simulation reproduces the 1D
results of Ahn & Shapiro (2005) and Dawoodbhoy et al. (2021), in particular the large thermal energy in the wake of the accretion shock, its sharp decrease to below the self-interaction energy
inside the core, and the fit U[SI]∝ρ^2∝r^−24/7 in the envelope.
agrees with the results of the 1D simulations of Ahn & Shapiro (2005) and Dawoodbhoy et al. (2021), in particular the large thermal energy in the wake of the accretion shock, its sharp decrease to
below the self-interaction energy inside the core, and the fit ρ∝r^−12/7 in the envelope, which gives U[SI]∝ρ^2∝r^−24/7.
A number of simulations were run, with different SIBEC-DM interaction strengths (given by the core radius R[c]), initial conditions, and box sizes to balance reaching sufficient spatial resolution to
resolve the cores of the emerging SIBEC-DM halos, while also forming enough such halos to study their properties. Initial conditions were generated with the MUSIC code (Hahn & Abel 2011) using the
standard CDM matter power spectrum, but with a cut-off in power below a certain scale to include the effect of a non-zero sound speed,
$P SIBEC ( k ) = f cut ( k ) P CDM ( k ) .$(30)
Comparison with output from the Boltzmann code CLASS, modified to include SIBEC-DM (Hartman et al. 2022), shows that the function
$f cut ( k ) = e − ( k / k cut ) 3 ,$(31)
with k[cut] given by the effective sound horizon r[s] of SIBEC-DM,
$r s = ∫ 0 τ init d τ c s = ∫ z init ∞ d z c s H ,$(32)
provides a good fit to the suppression of power in a universe with SIBEC-DM relative to CDM. The SIBEC-DM sound speed c[s], which includes the early radiation-like epoch during which the interaction
energy dominates, is approximately given by (Hartman et al. 2022)
$c s 2 = ∂ P ¯ ∂ ρ ¯ = 1 3 1 1 + a 3 / 6 ω 0 ,$(33)
where ω[0] is the SIBEC-DM equation of state today,
$ω 0 = P ¯ 0 ρ ¯ 0 = g ρ ¯ 0 2 m 2 ≈ 10 − 15 ( R c 1 kpc ) 2 .$(34)
The resulting cut-off scale is
$k cut = 1 2.2 k s = 1 2.2 2 π r s ≈ 0.3 h Mpc − 1 ( R c 1 kpc ) − 2 / 3 ,$(35)
which is in close agreement with previous results on the SIBEC-DM transfer function and cut-off scale (Shapiro et al. 2021), apart from a slightly different prefactor and slope (0.3→0.2 and −2/
3→−1/2), obtained by considering the first k-mode that enters the horizon while it is sub-Jeans, instead of the effective sound horizon. Although this cut-off provides initial conditions
appropriate for SIBEC-DM as described by the Lagrangian in Eq. (1), it poses a computational challenge that unfortunately could not be overcome at the present time. The large difference between the
cut-off scale λ[cut]=π/k[cut] and the Jeans’ scale R[c], λ[cut]≫R[c], means the simulations require both a large box and very high spatial resolution to simultaneously investigate the formation
of SIBEC-DM structure and to resolve the SIBEC-DM halo cores. For instance, R[c]=1 kpc gives a cut-off at λ[cut]≈15 Mpc. Including an extra order of magnitude at each end to get sufficient
resolution to resolve halo cores, as well as to have a large enough simulated volume to form halos, ends up being at least 6 orders of magnitude of spatial resolution that has to be simulated. We
therefore chose to forgo the realistic initial conditions, and to instead use a simple Heaviside function f[cut](k) = Θ(k[cut]−k), with k[cut] equal to the Jeans’ length k[J]=π/R[c] of SIBEC-DM
at the time the simulation starts at z=50. In this way there is enough power in the initial conditions to give rise to a population of halos that can be analysed, while removing all perturbations
on sub-Jeans’ scales, although at the expense of a realistic cosmological history for SIBEC-DM.
The strong cut-off in Eq. (35) also has important consequences for the canonical parameter space of SIBEC-DM. By considering the largest halo masses that are affected by the cut-off,
$M cut ≈ 4 π 3 ρ dm , 0 ( π k cut ) 3 ≈ 5 × 10 14 M ⊙ ( R c 1 kpc ) 2 ,$(36)
it is clear that SIBEC-DM with core radii of R[c]∼1 kpc results in a suppression in structure formation that is clearly at odds with observations. This was first pointed out and discussed in detail
by Shapiro et al. (2021). In their paper, Shapiro et al. (2021) computed the halo mass function (HMF) for SIBEC-DM from linear theory, and found that a SIBEC-DM Jeans’ length as low as R[c]∼10 pc
is needed to be consistent with observations, which poses a serious challenge for SIBEC-DM as a candidate to solve, for instance, the cusp-core problem. In the present work we nevertheless use R[c]
∼1 kpc in our effort to explore the basic properties of SIBEC-DM halos. Our results are therefore in a sense more directly relatable to previous studies in which SIBEC-DM with halo cores of order 1
kpc is assumed to have a matter power spectrum very close to CDM (Harko 2011; Harko & Mocanu 2012; Velten & Wamba 2012; Freitas & Gonçalves 2013; Bettoni et al. 2014; de Freitas & Velten 2015;
Berezhiani & Khoury 2015; Khoury 2016; Hartman et al. 2022), usually justified by invoking a phase transition from an initial CDM-like state into a late-time SIBEC-DM fluid, or an additional
temperature dependence in the self-coupling in order to make it weaker at high redshifts, although no concrete realization of any such mechanism or transition has yet been suggested.
Initial conditions for the effective thermal energy of SIBEC-DM present in Eqs. (25)–(27) must also be provided and is in principle given by the underlying wavefunction, although in the hydrodynamic
approximation it has been integrated away with the smoothing kernel in Eq. (11). We therefore make the ansatz that the effective thermal pressure P is small compared to the interaction pressure P[SI]
at the start of the simulation, and set $P ∼ ( z init ) = ζ P ∼ SI ( z init )$ with ζ≪1. The supercomoving effective thermal pressure at the background level is constant in time, while the
interaction pressure decreases as $P ∼ SI ∝ a − 1$, so ζ should be sufficiently small that $P ∼$ is largely negligible until the first structures start to form, at which point the effective thermal
energy increases rapidly as matter is accreted onto halos and is heated.
The simulations performed in this work are summarized in Table 1, and an overview of the conservation of energy, resolution, and runtime for a few of these are shown in Fig. 2 at two different
initial refinement levels. The conservation of energy is satisfied to within 1%, and the Jean’s length is resolved in over-dense regions with the use of AMR at all times, more so at later times as
the initial perturbations collapse to form halos. On the other hand, the initial resolution of the grid, which is at the lowest level of refinement, places a lower bound on the halo masses that are
accurately evolved. An estimate of this bound is the total mass enclosed by a minimum number of cells N[min] on the initial grid,
$M min = ρ dm , 0 N min ( L 2 l init ) 3 .$(37)
Fig. 2.
Energy conservation (top), the smallest resolved physical cell length (middle), and the runtime in CPU hours (bottom) for the main suite of simulations. Runs with a minimum refinement level of 8
are shown in solid lines, and 7 in dotted lines.
Table 1.
Overview of the simulation runs.
We find that N[min]=300, the mass enclosed by an approximately 7×7×7 cube, provides a reasonable estimate, as shown in Fig. 3. In this case M[min]≈10^7M[⊙] for l[init]=8, whereas at l
[init]=7 the mass limit is M[min]≈10^8M[⊙]. The profiles are largely the same except in the lowest mass bin, which is below M[min] for l[init]=7. Additionally, the HMF is suppressed below M
[min], hence halos with M[200]<M[min] are discarded in the following analysis. It should be noted that the higher resolution yields slightly denser halos even for the masses that are most resolved,
and that the HMF above M[min] is not exactly identical at the two refinement levels, as one would expect. This indicates our simulations would benefit from further grid refinement, in particular the
Rc1 run, which has the highest ratio of both M[min]/M[cut] and Δx[min]/R[c]. Unfortunately, the computational cost of further increasing the resolution was prohibitive, since at l[init]=8 the
simulation used up to a hundred thousand CPU hours to reach z=0.5, while an increase to the next refinement level l[init]=9 is expected to use a few million CPU hours.
Fig. 3.
Binned SIBEC-DM halo profiles and the HMF for R[c]=1 kpc at z=1.5, with minimum refinement level 7 (dotted) and 8 (solid). The HMF of CDM for the same box size and initial resolution is
included in black. The halo mass limit M[min] are indicated by dots. The standard deviation from the binned halo profile mean for level 8, as well as the Poisson error in the HMF, are shown in
4. Results
In the cosmological simulations, the SIBEC-DM collapses and forms halos with a cored inner structure and a CDM-like envelope, which we find to be well-fitted by a cored NFW (NFWc) profile,
$δ NFWc ( r ) = [ δ c − 1 + δ NFW − 1 ] − 1 .$(38)
The profile is given in terms of over-density $δ = ρ / ρ ¯$, where δ[c] is the central over-density, and δ[NFW] is the NFW profile,
$δ NFW ( r ) = δ s r r s ( 1 + r r s ) 2 ·$(39)
We define the core radius r[c] as where the density has dropped to 50% of its central value, δ(r[c]) = 0.5δ(0). At hydrostatic equilibrium we would have r[c]≈0.6R[c]. A number of other cored
profiles could also be used to fit the SIBEC-DM halos. In Li et al. (2020) several DM profiles were fitted to the Spitzer Photometry & Accurate Rotation Curves (SPARC) dataset of rotation curves of
nearby galaxies (Lelli et al. 2016), such as the Burkert profile (Burkert 1995),
$δ Burkert ( r ) = δ c ( 1 + r r s ) [ 1 + ( r r s ) 2 ] ,$(40)
the Einasto profile (Einasto 1965),
$δ Einasto ( r ) = δ s exp { − 2 α ϵ [ ( r r s ) α ϵ − 1 ] }$(41)
and the so-called Lucky13 profile (Li et al. 2020),
$δ 13 ( r ) = δ c [ 1 + ( r r s ) 3 ] ·$(42)
These are fitted to the SIBEC-DM halos by minimizing the cost function
$χ ν 2 = 1 N d − N f ∑ i = 1 N d [ δ i − δ ( r i ) ] 2 δ i 2 ,$(43)
with respect to the N[f] free parameters of the density function δ(r), where N[d] is the number of radial points δ[i] from the simulation that we are fitting the density profile to. Each halo in the
simulation gives a value for $χ ν 2$, and the corresponding cumulative distribution function (CDF) for the set of these tells us what fraction of halos has $χ ν 2$ smaller than a given value, and
therefore how well the fitting function generally describes the halos. In Fig. 4 the CDF for the different types of profiles are shown, from which we find that the NFWc profile provides the best fit
overall. Of the cored profiles used by Li et al. (2020) to fit the DM halos in the SPARC dataset, the Burkert and Einasto profiles are best suited and perform about the same, but as Burkert is the
simpler of the two, we will use it in the following.
Fig. 4.
Cumulative distribution functions for different cored halo profiles from SIBEC-DM simulations with R[c]=1 kpc (upper), R[c]=3 kpc (middle), and R[c]=10 kpc (lower) at z=0.5. This shows the
fraction of the halos that has $χ v 2$ smaller than a given value, and therefore how well the fitting function generally describes the halos in our simulation.
Both NFWc and Burkert fitted to binned SIBEC-DM halos with R[c]=1 kpc at z=0.5 are shown in Fig. 5. The two profiles yield slightly different results for the core radius. Burkert is a bit steeper
than NFWc near the core, and therefore generally gives a core radius r[c] of around half compared to NFWc (according to our definition δ(r[c]) = 0.5δ(0)). The core density δ[c] is also about twice as
high for Burkert compared to NFWc. In the following we use the Burkert profile, even though the fit using NFWc is better, for two reasons; it is a simpler function, and we can readily compare the
fitting parameters of the SIBEC-DM halos to observations, because the Burkert profile has previously been used for the DM component of nearby galaxies in the SPARC dataset (Li et al. 2020) and the
dwarf spheroidal (dSph) satellites of the Milky Way (Salucci et al. 2012).
Fig. 5.
Mean (solid) and standard deviation (shaded) of binned SIBEC-DM halo profiles with R[c]=1 kpc at z=0.5. The binned profiles are fitted to NFWc (dashed) and Burkert (dash-dotted). The fits of
the NFW profile to the halo envelopes are also shown (dotted).
In order to investigate the general trends in the SIBEC-DM halos in our simulations, and compare these to observations, we introduce scaling functions for the core radius r[c], the central density δ
[c], and the mass enclosed inside the core radius, M[c];
$r c ( M 200 ) = r c , 10 ( M 200 10 10 M ⊙ ) α ,$(44)
$δ c ( M 200 ) = δ c , 10 ( M 200 10 10 M ⊙ ) β ,$(45)
$M c ( M 200 ) = M c , 10 ( M 200 10 10 M ⊙ ) γ .$(46)
These are fitted using Theil-Sen regression (Theil 1950; Sen 1968). This method finds the slope m and the y-intercept b of the linear function y(x) = b+mx (which Eqs. (44)–(46) are in log-space) by
first computing the median of slopes between all pairs of points (y[i],x[i]), which gives m, and then finding the median of y[i]−mx[i] for all points, which gives b. This fitting procedure is
robust against outliers, and provides a simple measure of the variation in m and b present in the data through, for example, the first and third quartiles of the pairwise slopes and individual y
-intercepts. Tables listing the scaling parameters fitted using this procedure are included in Appendix A for several redshifts.
The properties of the Burkert profile fitted to SIBEC-DM halos are shown in Fig. 6 for R[c]=1 kpc, 3 kpc, and 10 kpc, along with the above scaling relations. The fit to the nearby galaxy rotation
curves in the SPARC dataset (Li et al. 2020) and Milky Way dSphs (Salucci et al. 2012) are also included in Fig. 6.
Fig. 6.
Core radii r[c] (left), core densities δ[c] (middle), and core masses M[c] (right) versus M[200] for halos in cosmological simulations of SIBEC-DM with R[c]=1 kpc, 3 kpc, and 10 kpc, using the
Burkert profile. The upper plots show the scatter of halos at z=0.5, with the fitted scaling Eqs. (44)–(46) shown in dashed lines, compared to the SPARC dataset (Li et al. 2020) and Milky Way
dSphs (Salucci et al. 2012). The lower plots show the fitted median (solid) and the first and third quartiles (shaded) at several redshifts using Theil-Sen regression. The results from SPARC and
the dSphs are also shown. In the scatter plot for r[c] the core radii R[c] are indicated in dotted colored lines.
While the halo catalogues from the simulation snapshots only span 2−3 orders in magnitude in mass and number 200−400 halos due to the limited volume of the simulated box, and the fitted parameters of
the scaling functions show some time-dependence (the prefactors more so than the exponents), there are a number of interesting features to note. The halo core radii r[c] are scattered near R[c], and
the dependence of r[c] on M[200] is nearly constant, largely as expected from hydrostatic considerations, although there is a slight positive trend, $r c ∼ M 200 α$ with α≈0.05−0.1. Also, the
SIBEC-DM cores are generally larger compared to what we find from Eq. (9). Both of these observations are consistent with the addition of thermal pressure in the halos centers, which is also slightly
increasing for more massive halos, as seen in Fig. 7. In fact, the thermal-like energy due to the hidden dynamics on the de Broglie-scale dominates the central regions of the SIBEC-DM halos. This is
illustrated in Fig. 8, where the self-interaction, thermal, and gravitational potential energy densities of a sample halo are shown. As structure forms, the SIBEC-DM fluid is heated as it is accreted
onto the growing halos, the kinetic energy of the in-falling matter being converted into effective thermal energy. The result are halos with a thermal energy much larger than the interaction energy
throughout the halo, even in the core where it dominates the overall dynamics. Additionally, the thermal energy is nearly isothermal. At the edge of the core the potential energy becomes larger than
the internal energies, and the halo transitions to an NFW-like envelope.
Fig. 7.
Ratio of average thermal and self-interaction energy, U and U[SI], inside the halo cores r<r[c]/2 in simulations with R[c]=3 kpc at z=1.
We illustrate with the sample halo in Fig. 8, which is relatively isolated in the simulated box, that the SIBEC-DM halos in the simulations achieve approximate virial equilibrium. The virial 𝒱
satisfies, under the assumption of spherical symmetry (Dawoodbhoy et al. 2021),
$V = 2 T + W + 2 U + 3 U SI + S + S SI = 0 ,$(47)
Fig. 8.
Top panel: self-interaction energy density U[SI], thermal energy density U, and gravitational potential energy density W of a sample halo with R[c]=1 kpc and M[200]=3×10^9M[⊙], relative to
the interaction energy at the background level $U ¯ SI$. The middle panel shows instead the specific energy, and the bottom panel the cumulative virial. The core radius r[c] and halo radius r[200]
are indicated with the inner and outer dotted vertical lines, respectively.
where the various terms are cumulative energy contributions and surface terms,
$T = 1 2 ∫ v 2 d M − 2 π r 2 ( ρ r v 2 + 1 2 r 2 ∂ ( ρ v ) ∂ t ) ,$(48)
$U = ∫ U ρ d M , U SI = ∫ U SI ρ d M ,$(50)
$S = − 4 π r 3 P , S SI = − 4 π r 3 P SI ,$(51)
and 𝒰[tot]=𝒰+𝒰[SI].
Our 3D simulations confirm many of the features of SIBEC-DM halos that were reported in Dawoodbhoy et al. (2021), such as the transition from a cored interior to a CDM-like envelope near R[c], and
the domination of the thermal energy over self-interaction energy outside the core due to shock heating. The only significant difference is the domination of thermal energy inside the cores of halos
in 3D, whereas in 1D simulations the cores are not heated and are instead dominated by self-interactions. In fact, the halos from our simulations look more like the double-polytrope solutions of
Dawoodbhoy et al. (2021) for SIBEC-DM with both the self-interaction pressure and an isothermal thermal pressure at hydrostatic equilibrium. Dawoodbhoy et al. (2021) anticipated that after collapse,
relaxation, and finally virial equilibrium, the thermal profile should be nearly isothermal, with core radii set by the self-interaction pressure and thus on the order of R[c], which is what we find
in our simulations.
We attribute the difference between our simulations and the ones in Dawoodbhoy et al. (2021) to the absence of mixing in spherical symmetry. In 1D, halos form by the accretion of spherically
symmetric shells that are heated as they are slowed down, resulting in an accretion shock expanding outwards as shells collide, but do not cross. The cores themselves experience little heating as the
fluid is decelerated smoothly by the self-interaction pressure and without a shock. In fully 3D simulations the halos form as clumps – not perfect shells – of matter merge, which perturb the halos
and causes the outer shock heated layers to mix with the interior. We demonstrate this difference between symmetrical and asymmetrical collapse using 2D simulations; one in which an initial
stationary over-density is centered in the simulated box and collapses symmetrically; and another with a second smaller over-density offset at x[2] that merges with the first in an asymmetrical
manner. The initial density field in the two scenarios are ρ(x) = ρ[0]+ρ[0]Δ[1]e^−(|x|/R[1])^2, and ρ(x) = ρ[0]+ρ[0]Δ[1]e^−(|x|/R[1])^2+ρ[0]Δ[2]e^−(|x−x[2]|/R[2])^2, and we use a periodic box
of size L=40 kpc with R[c]=1 kpc, R[1]=5 kpc, R[2]=2 kpc, Δ[1]=Δ[2]=99, |x[2]| = 10 kpc, and ρ[0]=Ω[m0]ρ[c0], with the initial ratio ζ=P/P[SI]=10^−2. The “symmetrical” simulation
was run for 20t[dyn], while the “asymmetrical” one was run for 200t[dyn], to give it enough time to virialize, where $t dyn = 1 / 2 π G ρ 0 Δ 1$ is the dynamical timescale of the central
over-density. The final distributions of U/U[SI] are shown in Fig. 9. In the symmetrical case, the thermal energy is dominant everywhere except in the core, whereas in the asymmetrical case the
merging with the second over-density results in a strong mixing of the fluid layers, causing shock-heated fluid in the region outside the core to penetrate into the core.
Fig. 9.
Final distributions of U/U[SI] in 2D simulations with symmetrical (upper) and asymmetrical (lower) collapse, centered on the density peak. In the asymmetrical case, the second smaller over-density
was initially located to the right. The minimum in the symmetrical case is around U/U[SI]≈2×10^−2, while in the asymmetrical case its is U/U[SI]≈2.
In Fig. 10 the internal energies of SIBEC-DM with R[c]=1 kpc at z=0.5 are shown, where the extended envelopes of high thermal energies compared to the interaction energy are clearly seen around
the collapsed structures. In the same figure the density field is compared to standard CDM, which illustrates the smoothing of the DM density field, especially in the halo cores. The corresponding
matter power spectrum is shown in Fig. 11 at several redshifts, as well as CDM with the same initial conditions, but without the cut-off, with
$Δ 2 ( k ) = k 3 P ( k ) 2 π 2 ,$(52)
Fig. 10.
Projection plots of the DM density in cosmological simulations with L=2 Mpc h^−1 at z=0.5 SIBEC-DM with R[c]=1 kpc (upper right) and CDM with the same initial conditions, but without a
cut-off (upper left). The SIBCE-DM internal energies for the same snapshot showing the effective thermal energy U (lower left) and the self-interaction energy U[SI] (lower right).
Fig. 11.
Matter power spectrum for cosmological simulations L=2 Mpc h^−1 at z=0.5 for SIBEC-DM with R=1 kpc (solid) and CDM (dashed). The comoving cut-off k[cut] is indicated with a dotted vertical
line, and the spectrum at z=50 is multiplied by 50 in the figure.
$( 2 π ) 3 P ( k ) δ 3 ( k − k ′ ) = δ ̂( k ) δ ̂( k ′ ) ,$(53)
$δ ̂( k ) = ∫ d 3 x δ ( x ) e − i k · x ,$(54)
with δ^3 being the 3D Dirac delta function. At the largest scales the SIBEC-DM modes grow like CDM, while at smaller scales SIBEC-DM is suppressed compared to CDM due to the DM fluid pressure.
The simulations are insensitive to the initial ratio P/P[SI] as long as it is much smaller than unity, as shown in Figs. 12 and 13. Below ζ=P/P[SI]≲0.1, there is little change in the final halos.
As soon as structure begins to form, the thermal energy increases due to the conversion of kinetic energy to internal energy, hence, a small amount of initial thermal energy matters little.
Nevertheless, despite the effective thermal energy dominating the DM-halo cores, it is the strength of the self-interaction that determines the characteristic size of the cores. This is because it is
at the Jeans’ scale due to the self-interactions that the pressure forces begin opposing gravity, creating the shocks that cause the heating of the fluid.
Fig. 12.
Evolution of the ratio of the total thermal energy and total self-interaction energy for the simulations run.
Fig. 13.
Scaling functions at several redshifts for the core radii r[c] (left), the core density δ[c] (middle), and the core mass M[c] (right) for SIBEC-DM halos with R[c]=3 kpc and different initial ζ=
P/P[SI] (upper), and different initial power cut-offs (lower). The shaded areas give the first and third quartiles for the scaling parameters obtained from the Theil-Sen regression.
Another interesting feature is the dependence of δ[c] on the halo mass. The Burkert profile fitted to observational data prefers a negative slope, meaning massive halos are less dense. In N-body
simulations of CDM, considering instead the characteristic over-density δ[s] of the halo in Eq. (39), such mass dependence arises, in very simple terms, because δ[s] is proportional to the mean
density at the time the halo forms, with low-mass halos generally collapsing at a higher redshift, when the universe was denser (Navarro et al. 1996). We might therefore expect processes that
transform the CDM halo cusps into cores, for instance baryonic feedback, to produce core densities that inherit the same halo mass dependence (Ogiya et al. 2014). SIBEC-DM halos prefer instead a
positive slope β≈0.5 at z=0.5, such that more massive halos are also denser. Furthermore, the SIBEC-DM halo core masses M[c] scales approximately as $M c ∼ M 200 γ$, with γ≈0.75 at z=0.5. The
scaling of both M[c] and δ[c] can be qualitatively understood under the assumption of velocity dispersion tracing outside the core, meaning that the circular orbital velocity is nearly constant (
Chavanis 2019a,b; Padilla et al. 2019; Dawoodbhoy et al. 2021),
$v c 2 = G M c r c ≈ v 200 2 = G M 200 R 200 ,$(55)
which we indeed see to be approximately true in Fig. 8, since v^2=−W/ρ. Inserting that r[c] is constant, and $M 200 ∼ r 200 3$, gives
$δ c ∝ M c r c 3 ∼ M 200 2 / 3 ,$(57)
which are in the neighborhood of what is obtained in the simulations. We can improve this scaling argument by including a small mass dependence in $r c ∼ M 200 α$, which gives
Inserting α=0.05 gives $M c ∼ M 200 0.72$ and $δ c ∼ M 200 0.57$, while α=0.1 gives $M c ∼ M 200 0.77$ and $δ c ∼ M 200 0.47$. For comparison, independent FDM simulations, using the full NLSE or
the Madelung equations, find varying exponents, with $M c ∼ M 200 1 / 3$ (Schive et al. 2014a,b; Veltmaat et al. 2018), $M c ∼ M 200 5 / 9$ (Mocz et al. 2017; Mina et al. 2022), or $M c ∼ M 200 0.6$
(Nori & Baldi 2021), all of which are less steep than what we find for SIBEC-DM. Furthermore, as expected from hydrostatic equilibrium, the size of FDM halo cores have been found to be decreasing
with the virial mass (Chan et al. 2022), as opposed to slightly increasing for SIBEC-DM.
Generally, we find the trends in r[c], δ[c], and M[c] for the values of R[c] and initial conditions tested in this paper to be in conflict with the SPARC dataset and Milky Way dSphs, although their
slopes are in less tension with observations compared to FDM. The trends are mostly insensitive to the initial ratio P/P[SI]≪1, but they are dependent on the cut-off scale in the initial matter
power spectrum (when it is not scaled along with R[c] to match the Jeans’ length), as shown in Fig. 13. By moving the initial cut-off to larger scales, such that there is less initial power at small
scales, the scaling exponents shift towards those inferred from the SPARC dataset and the Milky Way dSphs. On the flip-side, the prefactors shift away from the observations. For example, the total
ratio P/P[SI] increases as k[cut] is lowered, meaning there is more thermal pressure, causing halos to have larger cores and lower central densities. However, as we saw in Fig. 6, decreasing the
interaction strength, or equivalently, the core radius R[c], hardly affects the scaling exponents, but moves the prefactors in the desired direction. Based on this observation, one could imagine that
a sufficiently small R[c] and strong cut-off might create a population of halos that alleviates to some degree the current tension with the observed trends. According to Eqs. (58) and (59), a
stronger mass dependence in the core radius $r c ∼ M 200 α$ will bring the SIBEC trends towards the observed ones, and can come about by having a larger thermal Jeans’ length for massive halos, such
that more massive halos have a larger ratio of thermal energy to self-interaction energy, U/U[SI], due to increased heating, as observed in our simulations.
However, a good deal of caution is called for in making such a statement. For instance, the range of halo masses represented in the simulations is rather small, and baryonic physics are not included
at all, but are known to affect the distribution of DM in halos. Furthermore, the scaling function parameters have some time-dependence, primarily the prefactors, and our simulations were only run
until z=0.5, whereas the observed galaxies and dSphs are at z≈0. This is particularly true for the Rc3-b run with k[cut]=k[J]/2, which has about a fourth of the halos compared to k[cut]=k[J],
and is therefore more sensitive to disruptions in parts of the simulated halo population due to events such as mergers. We see this in Fig. 13 at z=0.5, where the scaling parameters suddenly
change, and the first and third quartile-region becomes very large, signaling that the scaling relations generally do not provide a good fit to the halo population at that point. Finally, but not
least, the SIBEC-DM parameter space and initial conditions used in our simulations are not appropriate for a realistic realization of scalar field DM with self-interactions as given by the Lagrangian
in Eq. (1), for which R[c]≳1 kpc are ruled out and the suppression in the initial power spectrum is much stronger (Shapiro et al. 2021; Hartman et al. 2022).
5. Conclusion
In this paper, a hydrodynamic approximation of the NLSE that includes the de Broglie-scale dynamics as an effective thermal energy was used to simulate the formation of structure of SIBEC-DM in a
fully 3D cosmological setting. The advantage of such an approach is that the dynamics on the de Broglie-scale need not be resolved on the numerical grid, making the task of simulating SIBEC-DM
simpler and computationally tractable. This is of particular importance when the de Broglie wavelength is much smaller than the self-interaction Jeans’ scale that we are interested in. On the other
hand, this approach introduces an additional equation for the effective thermal energy that must be evolved through time, information about the underlying wavefunction is lost, and some simplifying
assumptions were made, such as the skewlessness of the phase space distribution function.
Our simulations reproduce many of the features reported in Dawoodbhoy et al. (2021). The SIBEC-DM halos have cores with radii of order R[c], which is only weakly dependent on the halo mass. Outside
the cores the halos transition to a CDM-like profile, which in our 3D simulations is the NFW profile. The SIBEC-DM halos are therefore generally well-described by a cored NFW-profile such as NFWc,
Eq. (38), or the Burkert profile, Eq. (40). Despite the self-interactions determining the scale of the cores, the effective thermal energy ends up dominating over the self-interactions throughout the
halos due to significant heating during collapse, even in the central regions. This is in contrast to the 1D simulations of Dawoodbhoy et al. (2021), who found the SIBEC-DM halo cores to not
experience much heating, and the thermal energy to fall sharply below the self-interaction energy inside the core radius. However, Dawoodbhoy et al. (2021) anticipated that after collapse and
relaxation, the SIBEC-DM halos should be largely isothermal with the central thermal energy on the order of the self-interaction energy. Spherically symmetric 1D simulations do not include the mixing
of fluid layers that allows for such heating of the core, while in 3D, matter collapses onto the halos in clumps rather than shells, causing the outer shock-heated layers to be mixed with the central
fluid, resulting in an isothermal profile. Our simulations therefore confirm this anticipated result of Dawoodbhoy et al. (2021).
Scaling relations for the core radii r[c], core densities δ[c], and core masses M[c] as functions of the total halo mass M[200] were fitted to the simulated halo populations, which largely agree with
hydrostatic considerations of the halo cores where r[c] is nearly constant, as well as velocity dispersion tracing in the halo envelope, $v c 2 ≈ v 200 2$. However, these trends do not agree with
those obtained by fitting the Burkert profile to nearby galaxies in the SPARC dataset and the classical Milky Way dSphs. This poses an issue for SIBEC-DM with R[c]≳1 kpc and a largely CDM-like
matter power spectrum at late times (Harko 2011; Harko & Mocanu 2012; Velten & Wamba 2012; Freitas & Gonçalves 2013; Bettoni et al. 2014; de Freitas & Velten 2015; Hartman et al. 2022), as was used
in our simulations, although these scenarios are not well-motivated. For SIBEC-DM given by the field Lagrangian in Eq. (1), the self-interaction is constrained to R[c]<1 kpc, otherwise an early
radiation-like period and a large comoving Jeans’ length washes out too much structure to be consistent with observations (Shapiro et al. 2021; Hartman et al. 2022). In fact, Shapiro et al. (2021)
found by using constraints on FDM as a proxy for SIBEC-DM, and matching their transfer function cut-offs and HMFs, that the SIBEC-DM self-interaction should be as low as R[c]∼10 pc to not be in
conflict with observations. We were unable to probe SIBEC-DM with initial conditions and parameters consistent with the Lagrangian in Eq. (1), since the large gap between the halo cores and the
cut-off scale requires both a large simulation box and very high spatial resolution. It should be noted that our SIBEC-DM-only simulations do provide a better agreement with the slopes in observed
scaling relations than FDM. In particular, FDM simulations generally find $M c ∼ M 200 γ$ with 1/3<γ<0.6, while we find γ≈0.75, which is closer to the observed γ≈1.1. Additionally, FDM halos
have core radii that generally decrease with the halo mass, while we find a slightly increasing trend due to larger halos experiencing more thermal heating, although not as steep as in the SPARC
dataset and the Milky Way dSphs.
The main results of the present work is summarized as follows:
• Using a hydrodynamic approximation and fully 3D cosmological simulations, SIBEC-DM halos are found to have NFW-like envelopes with central core radii of order R[c]. This confirms the results
reported in Shapiro et al. (2021), based on 1D spherically symmetric simulations in the same hydrodynamic approximation, when the initial perturbation is chosen to yield an infall rate that
matches the mass assembly history of 3D N-body simulations of CDM. The density profiles are therefore well-fitted by cored NFW profiles such as NFWc, Eq. (38), and the Burkert profile, Eq. (40).
• The SIBEC-DM core radii are only weakly dependent on the halo mass, largely as expected from hydrostatic equilibrium. The slight increase is due to larger halos generally experiencing more
heating, and hence an increase in their thermal Jeans’ length.
• Despite the self-interaction energy initially dominating the internal energy and fluid pressure of the SIBEC-DM, as well as determining the general scale of the SIBEC-DM cores, the effective
thermal energy ends up dominating throughout the halo. This result is insensitive to the initial ratio of the thermal energy relative to the self-interaction energy, as long as it is small, since
the final thermal energy comes from heating as SIBEC-DM collapses and forms structure. Below ζ=P/P[SI]≲0.1, there are hardly any changes in the final halos.
• The SIBEC-DM halo core densities δ[c] and core masses M[c] are found to scale with the virial mass M[200] as $δ c ∼ M 200 0.5$ and $M c ∼ M 200 0.75$. This result is insensitive to changes in R
[c] that is matched by a corresponding change in the cut-off scale, k[cut]∼1/R[c]. Velocity tracing in the halo envelope and a nearly constant core radius predict these relations to be $δ c ∼ M
c ∼ M 200 2 / 3$. Including a small mass dependence in r[c] improves the agreement with the simulations, $r c ∼ M 200 0.05$, gives $M c ∼ M 200 0.72$ and $δ c ∼ M 200 0.57$.
• The scaling relations for r[c](M[200]), δ[c](M[200]), and M[c](M[200]), Eqs. (44)–(46), assuming the Burkert profile for the SIBEC-DM halos, generally do not agree with trends of the classical
Milky Way dSphs and nearby galaxies in the SPARC dataset for the SIBEC-DM parameters tested in this work. Nevertheless, the slopes of the scaling relations in our SIBEC-DM-only simulation are in
better agreement with observations compared to FDM-only simulations.
In future work, larger high-resolution simulations should be carried out to further investigate SIBEC-DM, as our current simulations have a limited box size and halo population. This is particularly
relevant for probing SIBEC-DM with realistic initial conditions and self-interactions that are consistent with the HMF (Shapiro et al. 2021) and large-scale observables (Hartman et al. 2022). It will
be important to test the validity and accuracy of the smoothing procedure and the resulting hydrodynamic approximation used in this work, especially if it continues to be used to test SIBEC-DM
models. Dawoodbhoy et al. (2021) showed analytically for a few idealized 1D cases that smoothing the exact solutions of the NLSE gives the same large-scale fluid properties, such as pressure and
density, as the smoothed phase space distribution function. These tests should be extended to more complicated scenarios, such as self-gravitation and the formation of shock fronts, to check, for
instance, if the heating of the effective thermal energy is accurately captured by the hydrodynamic approximation, or if the skewlessness of ℱ is a valid assumption, both inside virialized halos and
during the in-fall and collapse. In particular, the assumption of skewlessness should be tested in mergers, since the merging SIBEC-DM halos can have large plane-wave velocity dispersions, which
would have significant implications for the merger event, although this will require using large and costly numerical simulations.
We thank the Research Council of Norway for their support, and the anonymous referee for their helpful comments and suggestions. We are also grateful to Matteo Nori and Marco Baldi for our
discussions on simulations of ultra-light DM, and to Taha Dawoodbhoy and Paul R. Shapiro for their feedback on this work. Computations were performed on resources provided by UNINETT Sigma2 – the
National Infrastructure for High Performance Computing and Data Storage in Norway.
Appendix A: Fitted scaling relation parameters
The fitted parameters for the scaling relations of r[c](M[200]), δ[c](M[200]), and M[c](M[200]), Eqs. (44), (45), and (46), are shown in Tables A.1, A.2, and A.3. These values correspond to the fits
shown in Figs. 6 and 13 obtained using Theil-Sen regression, including the results for the SPARC dataset (Li et al. 2020) and the Milky Way dSphs (Salucci et al. 2012).
Table A.1.
Fitted scaling parameters for r[c](M[200]), Eq. (44), at various redshifts. The results from the SPARC dataset and the Milky Way dSphs are included under the label “Data”.
Table A.2.
Fitted scaling parameters for δ[c](M[200]), Eq. (45), at various redshifts. The results from the SPARC dataset and the Milky Way dSphs are included under the label “Data”.
Table A.3.
Fitted scaling parameters for M[c](M[200]), Eq. (46), at various redshifts. The results from the SPARC dataset and the Milky Way dSphs are included under the label “Data”.
All Tables
Table 1.
Overview of the simulation runs.
Table A.1.
Fitted scaling parameters for r[c](M[200]), Eq. (44), at various redshifts. The results from the SPARC dataset and the Milky Way dSphs are included under the label “Data”.
Table A.2.
Fitted scaling parameters for δ[c](M[200]), Eq. (45), at various redshifts. The results from the SPARC dataset and the Milky Way dSphs are included under the label “Data”.
Table A.3.
Fitted scaling parameters for M[c](M[200]), Eq. (46), at various redshifts. The results from the SPARC dataset and the Milky Way dSphs are included under the label “Data”.
All Figures
Fig. 1.
Self-interaction energy density U[SI], thermal energy density U, and gravitational potential energy density W of the spherically symmetric test halo with R[c]=1 kpc, relative to the interaction
energy of the initial background $U ¯ SI$. The result of the simulation with refinement levels 8 to 12 are shown in solid, while 7 to 11 is shown in dotted. The 3D simulation reproduces the 1D
results of Ahn & Shapiro (2005) and Dawoodbhoy et al. (2021), in particular the large thermal energy in the wake of the accretion shock, its sharp decrease to below the self-interaction energy
inside the core, and the fit U[SI]∝ρ^2∝r^−24/7 in the envelope.
In the text
Fig. 2.
Energy conservation (top), the smallest resolved physical cell length (middle), and the runtime in CPU hours (bottom) for the main suite of simulations. Runs with a minimum refinement level of 8
are shown in solid lines, and 7 in dotted lines.
In the text
Fig. 3.
Binned SIBEC-DM halo profiles and the HMF for R[c]=1 kpc at z=1.5, with minimum refinement level 7 (dotted) and 8 (solid). The HMF of CDM for the same box size and initial resolution is
included in black. The halo mass limit M[min] are indicated by dots. The standard deviation from the binned halo profile mean for level 8, as well as the Poisson error in the HMF, are shown in
In the text
Fig. 4.
Cumulative distribution functions for different cored halo profiles from SIBEC-DM simulations with R[c]=1 kpc (upper), R[c]=3 kpc (middle), and R[c]=10 kpc (lower) at z=0.5. This shows the
fraction of the halos that has $χ v 2$ smaller than a given value, and therefore how well the fitting function generally describes the halos in our simulation.
In the text
Fig. 5.
Mean (solid) and standard deviation (shaded) of binned SIBEC-DM halo profiles with R[c]=1 kpc at z=0.5. The binned profiles are fitted to NFWc (dashed) and Burkert (dash-dotted). The fits of
the NFW profile to the halo envelopes are also shown (dotted).
In the text
Fig. 6.
Core radii r[c] (left), core densities δ[c] (middle), and core masses M[c] (right) versus M[200] for halos in cosmological simulations of SIBEC-DM with R[c]=1 kpc, 3 kpc, and 10 kpc, using the
Burkert profile. The upper plots show the scatter of halos at z=0.5, with the fitted scaling Eqs. (44)–(46) shown in dashed lines, compared to the SPARC dataset (Li et al. 2020) and Milky Way
dSphs (Salucci et al. 2012). The lower plots show the fitted median (solid) and the first and third quartiles (shaded) at several redshifts using Theil-Sen regression. The results from SPARC and
the dSphs are also shown. In the scatter plot for r[c] the core radii R[c] are indicated in dotted colored lines.
In the text
Fig. 7.
Ratio of average thermal and self-interaction energy, U and U[SI], inside the halo cores r<r[c]/2 in simulations with R[c]=3 kpc at z=1.
In the text
Fig. 8.
Top panel: self-interaction energy density U[SI], thermal energy density U, and gravitational potential energy density W of a sample halo with R[c]=1 kpc and M[200]=3×10^9M[⊙], relative to
the interaction energy at the background level $U ¯ SI$. The middle panel shows instead the specific energy, and the bottom panel the cumulative virial. The core radius r[c] and halo radius r[200]
are indicated with the inner and outer dotted vertical lines, respectively.
In the text
Fig. 9.
Final distributions of U/U[SI] in 2D simulations with symmetrical (upper) and asymmetrical (lower) collapse, centered on the density peak. In the asymmetrical case, the second smaller over-density
was initially located to the right. The minimum in the symmetrical case is around U/U[SI]≈2×10^−2, while in the asymmetrical case its is U/U[SI]≈2.
In the text
Fig. 10.
Projection plots of the DM density in cosmological simulations with L=2 Mpc h^−1 at z=0.5 SIBEC-DM with R[c]=1 kpc (upper right) and CDM with the same initial conditions, but without a
cut-off (upper left). The SIBCE-DM internal energies for the same snapshot showing the effective thermal energy U (lower left) and the self-interaction energy U[SI] (lower right).
In the text
Fig. 11.
Matter power spectrum for cosmological simulations L=2 Mpc h^−1 at z=0.5 for SIBEC-DM with R=1 kpc (solid) and CDM (dashed). The comoving cut-off k[cut] is indicated with a dotted vertical
line, and the spectrum at z=50 is multiplied by 50 in the figure.
In the text
Fig. 12.
Evolution of the ratio of the total thermal energy and total self-interaction energy for the simulations run.
In the text
Fig. 13.
Scaling functions at several redshifts for the core radii r[c] (left), the core density δ[c] (middle), and the core mass M[c] (right) for SIBEC-DM halos with R[c]=3 kpc and different initial ζ=
P/P[SI] (upper), and different initial power cut-offs (lower). The shaded areas give the first and third quartiles for the scaling parameters obtained from the Theil-Sen regression.
In the text
Current usage metrics show cumulative count of Article Views (full-text article views including HTML views, PDF and ePub downloads, according to the available data) and Abstracts Views on
Vision4Press platform.
Data correspond to usage on the plateform after 2015. The current usage metrics is available 48-96 hours after online publication and is updated daily on week days.
Initial download of the metrics may take a while. | {"url":"https://www.aanda.org/articles/aa/full_html/2022/10/aa43496-22/aa43496-22.html","timestamp":"2024-11-14T07:11:10Z","content_type":"text/html","content_length":"379061","record_id":"<urn:uuid:9546f5c2-201e-4d25-b2ba-4e460a16bf28>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00887.warc.gz"} |
Chain complex
In mathematics, chain complex and cochain complex are constructs originally used in the field of algebraic topology. They are algebraic means of representing the relationships between the cycles and
boundaries in various dimensions of a topological space. More generally, homological algebra includes the study of chain complexes in the abstract, without any reference to an underlying space. In
this case, chain complexes are studied axiomatically as algebraic structures.
Applications of chain complexes usually define and apply their homology groups (cohomology groups for cochain complexes); in more abstract settings various equivalence relations are applied to
complexes (for example starting with the chain homotopy idea). Chain complexes are easily defined in abelian categories, also.
Formal definition
A chain complex is a sequence of abelian groups or modules ... A[2], A[1], A[0], A[−1], A[−2], ... connected by homomorphisms (called boundary operators or differentials) d[n] : A[n]→A[n−1], such
that the composition of any two consecutive maps is the zero map: d[n] ∘ d[n+1] = 0 (i.e., (d[n]∘d[n+1])(a)=0[n−1]—the identity in A[n−1]—for all a in A[n+1]) for all n. They are usually written out
A variant on the concept of chain complex is that of cochain complex. A cochain complex is a sequence of abelian groups or modules ..., , , , , , ... connected by homomorphisms such that the
composition of any two consecutive maps is the zero map: for all n:
The index in either or is referred to as the degree (or dimension). The only difference in the definitions of chain and cochain complexes is that, in chain complexes, the boundary operators decrease
dimension, whereas in cochain complexes they increase dimension.
A bounded chain complex is one in which almost all the A[i] are 0; i.e., a finite complex extended to the left and right by 0's. An example is the complex defining the homology theory of a (finite)
simplicial complex. A chain complex is bounded above if all modules above some fixed degree N are 0, and is bounded below if all modules below some fixed degree are 0. Clearly, a complex is bounded
both above and below if and only if the complex is bounded.
Leaving out the indices, the basic relation on d can be thought of as
The elements of the individual groups of a chain complex are called chains (or cochains in the case of a cochain complex.) The image of d is the group of boundaries, or in a cochain complex,
coboundaries. The kernel of d (i.e., the subgroup sent to 0 by d) is the group of cycles, or in the case of a cochain complex, cocycles. From the basic relation, the (co)boundaries lie inside the
(co)cycles. This phenomenon is studied in a systematic way using (co)homology groups.
Chain maps and tensor product
There is a natural notion of a morphism between chain complexes called a chain map. Given two complexes M and N, a chain map between the two is a series of homomorphisms from M[i] to N[i] such that
the entire diagram involving the boundary maps of M and N commutes. Chain complexes with chain maps form a category.
If V = V and W = W are chain complexes, their tensor product is a chain complex with degree i elements given by
and differential given by
where a and b are any two homogeneous vectors in V and W respectively, and denotes the degree of a.
This tensor product makes the category (for any commutative ring K) of chain complexes of K-modules into a symmetric monoidal category. The identity object with respect to this monoidal product is
the base ring K viewed as a chain complex in degree 0. The braiding is given on simple tensors of homogeneous elements by
The sign is necessary for the braiding to be a chain map. Moreover, the category of chain complexes of K-modules also has internal Hom: given chain complexes V and W, the internal Hom of V and W,
denoted hom(V,W), is the chain complex with degree n elements given by and differential given by
We have a natural isomorphism
Singular homology
Suppose we are given a topological space X.
Define C[n](X) for natural n to be the free abelian group formally generated by singular n-simplices in X, and define the boundary map
where the hat denotes the omission of a vertex. That is, the boundary of a singular simplex is the alternating sum of restrictions to its faces. It can be shown that ∂^2 = 0, so is a chain complex;
the singular homology is the homology of this complex, that is,
de Rham cohomology
The differential k-forms on any smooth manifold M form an abelian group (in fact an R-vector space) called Ω^k(M) under addition. The exterior derivative d[k] maps Ω^k(M) to Ω^k+1(M), and d^2 = 0
follows essentially from symmetry of second derivatives, so the vector spaces of k-forms along with the exterior derivative are a cochain complex:
The cohomology of this complex is called the de Rham cohomology:
{locally constant functions on M with values in R} ^#{connected pieces of M}
Chain maps
A chain map f between two chain complexes and is a sequence of module homomorphisms for each n that commutes with the boundary operators on the two chain complexes: . Such a map sends cycles to
cycles and boundaries to boundaries, and thus descends to a map on homology:.
A continuous map of topological spaces induces chain maps in both the singular and de Rham chain complexes described above (and in general for the chain complex defining any homology theory of
topological spaces) and thus a continuous map induces a map on homology. Because the map induced on a composition of maps is the composition of the induced maps, these homology theories are functors
from the category of topological spaces with continuous maps to the category of abelian groups with group homomorphisms.
It is worth noticing that the concept of chain map reduces to the one of boundary through the construction of the cone of a chain map.
Chain homotopy
Chain homotopies give an important equivalence relation between chain maps. Chain homotopic chain maps induce the same maps on homology groups. A particular case is that homotopic maps between two
spaces X and Y induce the same maps from homology of X to homology of Y. Chain homotopies have a geometric interpretation; it is described, for example, in the book of Bott and Tu. See Homotopy
category of chain complexes for further information.
See also
This article is issued from
- version of the 10/20/2016. The text is available under the
Creative Commons Attribution/Share Alike
but additional terms may apply for the media files. | {"url":"https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Cochain.html","timestamp":"2024-11-11T00:09:50Z","content_type":"text/html","content_length":"29628","record_id":"<urn:uuid:5b7b4fbc-8d4f-44ac-a473-b83cd88e0eee>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00812.warc.gz"} |
Please help me understand the Jacobian matrix
I’m just trying to get my head around the Jacobian matrix for a task*.
I think I understand derivatives (e.g. dF/dx of “F = x^2” is 2x), and I think I grok gradients (just a vector field where each vector points in the direction of greatest increase).
But suddenly I get to the Jacobian matrix and hit some kind of mental block. I must have read the first 3 pages of google results. I still don’t get it to a sufficient level to know how to apply it.
• My task, which is not homework :), includes computing the jacobian of a deformation field.
The deformation field is a function that for each x y z of 3D space, gives a displacement in each of the 3 dimensions.
So: (x’, y’, z’) = (x, y, z) + Disp(x, y, z)
Well, understanding is one thing and getting your task done is another; they’re related, but you needn’t necessarily fully have the former to do the latter. Still, I’ll concentrate on the former
I think we teach multivariable calculus in a somewhat needlessly confusing way, largely out of historical reasons more than anything else. Let me try to give you some better intuition so that you can
understand how derivatives, the gradient, and the Jacobian are all fundamentally the same concept (just called different things depending on the number of dimensions).
In the following, I’ll write “x |-> x[sup]2[/sup]” to mean the function which sends an input x to the output x[sup]2[/sup], “e |-> 8e” to mean the function which multiplies its input by 8, and so on.
A function’s derivative at a point is the local linear approximation to the function at that point. What do I mean by this?
Well, for starters, let’s introduce the idea of a linear function. We call a function G linear if G(a + b + c + …) = G(a) + G(b) + G© + … . For example, e |-> 8e and e |-> -e are linear, while e |->
e[sup]3[/sup] and e |-> sin(e) are not. Functions which distribute across addition like this are of course very convenient; hence our interest in them.
Now, if F is some function, and p is some point, then the derivative of F at p is itself a function. I know, you don’t normally think of it that way. But you should! The derivative of x |-> x[sup]2[/
sup] at 4 isn’t really 8; it’s the function e |-> 8e (i.e., “multiply by 8”).
Finally, the purpose of the derivative of a function at a point is to describe, to the best linear approximation, how small changes in the function’s input around that point cause changes in its
output. That is, if G is the derivative of F at p, then F(p + e) should be approximately equal to F§ + G(e) for small e. For example, when we say that e |-> 8e is the derivative of x |-> x[sup]2[/
sup] at the point 4, what we mean is that (4 + e)[sup]2[/sup] is approximately equal to 4[sup]2[/sup] + 8e, for small e (this being the best approximation we can give using a linear function).
Alright; what of all this? Well, the point is, nothing I said above assumes F takes in one-dimensional input or produces one-dimensional output. That’s just one common case. But we can look at
functions from any number of dimensions to any number of dimensions as well.
Now, if F is a function from n-dimensional space to m-dimensional space, so are its derivatives at any point. And, as you probably are aware, a linear function from n-dimensional space to
m-dimensional space can be represented by an m-by-n matrix of numbers.
In particular, if F is a function from 1-dimensional space to 1-dimensional space, then its derivative at any point can be represented by a 1-by-1 matrix of numbers; i.e., by a single number. This,
as we saw above, is what’s going on when we say the derivative of x |-> x[sup]2[/sup] at 4 is 8; the 8 is shorthand for the function e |-> 8e which it represents.
If F is a function from n-dimensional space to 1-dimensional space, then its derivative at any point can be represented by a 1-by-n matrix of numbers; i.e., by an n-dimensional vector. This is what’s
going on when we say the “gradient” of <x, y> |-> x[sup]2[/sup] + y[sup]2[/sup] at <4, 3> is <8, 6>; the <8, 6> is shorthand for the function <e, f> |-> <8e, 6f> which it represents, and “gradient”
is just a fancy word for “derivative of a function from n-dimensional space to 1-dimensional space”. (It happens to be the case that the vector representing the derivative in this case always points
in the direction of greatest rate of increase and has magnitude equal to the rate of increase in that direction, but don’t think of that as the fundamental definition of the gradient; the gradient is
just the derivative by another name)
Finally, if F is a function from n-dimensional space to m-dimensional space, then its derivative at any point can be represented by a full m-by-n matrix of numbers, and in this case the fancy word we
use for it is “Jacobian”.
(And “partial derivatives”? That’s just when you speak of particular entries in this matrix, rather than the entire matrix)
Alright, as for computation: the rows and columns of a Jacobian matrix correspond to particular co-ordinates of the output and input, and the entry at any given point is the partial derivative of
that corresponding output co-ordinate with respect to that corresponding input co-ordinate.
For example: the function <x, y> |-> <x[sup]2[/sup]y, x + y, sin(y)>. This is a function from 2-dimensional space to 3-dimensional space. Its Jacobian matrix will be a 3 by 2 matrix. Its first row
corresponds to the x[sup]2[/sup]y coordinate of the output, its second row to the x + y, and its third row to the sin(y). Its first column corresponds to the x of the input, and its second column
corresponds to the y of the input. At each row and column goes the corresponding partial derivative. So the matrix in full becomes
2xy | x[sup]2[/sup]
1 | 1
0 | cos(y)
(You may use the flipped convention for rows and columns; it doesn’t really matter. Indeed, in many cases, the matrix itself doesn’t matter; it’s just a way of representing a linear function, which
is often more conveniently dealt with without resorting to the matrix representation. But when you do want the matrix, this is how you get it; just set up all the partial derivatives in a table)
You might benefit from a more physical point of view. One application of the Jacobian is in solid mechanics.
The deformation gradient is a linear transformation that converts one configuration of points to another. Let x represent one configuration and X represent another. Then X=Fx, where F is the
deformation gradient carrying x to X. The vectors x and X contain the locations of the points in a body.
The determinant of F is the Jacobian, and it turns out to be the ratio of the density in the original configuration to the density in the final configuration.
In other words, if the location of points in a chunk of Jello was x, and you squashed the Jello until the points took on locations X, then the determinant of F (X=Fx) would be the ratio of the final
density to the initial density of the chunk of Jello.
You’re speaking about the Jacobian determinant; the OP seems interested in the Jacobian matrix F itself.
Still, the point you make is a good one for defining the determinant of a linear transformation from some space to itself (with square matrices being a particularly co-ordinatey way of looking at
this): the determinant is the constant ratio describing how much the “volume” (length in 1 dimension, area in 2 dimensions, and so on) of any figure multiplies by under this transformation.
[Technically, “oriented volume”, in that the determinant will go negative to reflect volume that turns inside out].
Er, sorry, I should have written <e, f> |-> 8e + 6f here. A function from 2-dimensional space to 1-dimensional space. In general, a vector v can be taken as representing the function w |-> v dot w,
and that’s precisely what we’re doing when we say the gradient is a particular vector. Though if we presented things differently, we wouldn’t even have to worry about a lot of this unnecessary back
and forth into unwieldy representations.
(While I’m fixing minor errors, Hyperelastic’s last line should read “ratio of the initial density to the final density”, as in their penultimate paragraph. Of course, the ratio of densities goes the
other way from the ratio of volumes, the latter being, I think, a more direct conceptualization of the determinant.)
Thanks a lot Indistinguishable, I think I have a pretty good understanding of the Jacobian (matrix) now.
In my mind, derivatives, gradients and the jacobian were separate but related concepts. Of all the links, none made it clear that they are the same thing over different orders.
So, in terms of my problem, and this might be hopelessly naive, would I be right in looking at it like this:
e.g. Let’s say I have a 1D deformation, and it’s values are [3 6 2 7 4 1].
“Differentiating” at x = 1, I can just use a delta_x of 1 (so, not really “infinitesimal”), yielding a slope of 3.
And similarly, use a delta_y of 1 etc for the 3D case to fill in all the elements of the jacobian?
Yes! I got all excited and read Jacobian and not much else from the OP.
In mechanics, when we say Jacobian, we always mean det F. We call F the deformation gradient.
Indistinguishable, can you please travel back in time and become (all of) my calculus teacher(s)? Please?
Well, I still don’t exactly know what the task you’re supposed to accomplish is, but I think what you’re saying here sounds reasonable.
Aw, shucks; I think I need a blushing smiley.
I still can’t see how to apply the jacobian to my problem. I’m really tearing my hair out over this. :mad:
So I have a displacement field: it’s a 3D field of 3D vectors. Each vector is just a translation to apply to a control point at that position in the world.
e.g. Applying this field to a regular cube, say, might produce something like this.
But I need to calculate an affine matrix for each point, that describes the local transformation in terms of scaling etc, not translation.
Apparently this matrix is “simply” calculated as M = I + J, where M is the affine transform, I is the identity matrix and J is the jacobian of the distortion field.
But what does “jacobian of the distortion field” mean?
I’m not sure what you mean by “describes the local transformation in terms of scaling etc, not translation”; if “affine matrix” means what I think it means, there is translation involved.
Anyway, the function you’re computing is f§ = p + d§, where d§ is the displacement at point p. That is, you’re ultimately interested in the function f which adds to each point the appropriate
displacement. The derivative of f [aka, its Jacobian] at p is therefore the derivative of the identity function at p + the derivative of d at p; the derivative of the identity function is itself
(since it’s a perfect linear approximation to itself), so we get the equation M = I + J, where M is the derivative of f and J is the derivative of d.
That’s just in case you were wondering where that equation came from; you’re basically just calculating the derivative of f.
Ok, now, how do you calculate J? That is, how do you calculate the derivative of d? Well, if you’re going to represent it as a matrix, then its first column (well, you may perhaps use the flipped
convention for rows and columns, but whatever) is the derivative of the displacement field as you move along the X axis, its second column is the derivative of the displacement field as you move
along the Y axis, and the third is the derivative as you move along the Z axis. So, the first column of the matrix, at the point p, can be approximated as (d(p + <tinystep, 0, 0>) - d§)/tinystep, the
second column can be approximated as (d(p + <0, tinystep, 0>) - d§)/tinystep, and so on for the third column (note that these are all three-dimensional vectors).
Does that help?
Well yes, that’s a very good explanation, and it does make sense.
But now I’m wondering why my (naive) function doesn’t work. Are you familiar with Matlab?
I’m going to just post up my matlab code; this might be a bit cheeky, especially for GQ, but I’m way past the point of desperation…
I’m not familiar with Matlab syntax, but if you talk me through it, I should be ok. What does the first line do? I think I understand the rest (initialize a 3 * 3 matrix “jacob”; fill in its entries.
deformAtPlusXYZ and deform are both, apparently, 3d vectors).
Just from the comment, it sounds like deform is d§ and deformAtPlusXYZ is d(p + <1, 1, 1>). But you don’t want d(p + <1, 1, 1>) for anything; you want to use d(p + <1, 0, 0>) - d§ for calculating the
first column, d(p + <0, 1, 0>) - d§ for calculating the second column, and d(p + <0, 0, 1>) - d§ for calculating the third column. [Well, it’d be even better to take steps of size less than 1 unit if
your grid is finer, but I’ll assume you chose 1 unit because that is indeed the resolution of your grid]
That is, it sounds like you’re always looking at steps in the <1, 1, 1> direction. But that’s not what you should be doing; you should be looking separately at steps in the three different
directions, the X, Y, and Z directions.
Houston, we have a jacobian!
Thanks Indistinguishable, that worked a treat! I would describe how I want to repay you but it would be NSFW
Just one more thing…the values I’m calculating are tiny, I had to scale them up before they matched my reference image. Is there a standard scaling factor (e.g. should the largest jacobian on an
image be 1 say?)
Scratch the last thing actually, scaling was just a bug | {"url":"https://boards.straightdope.com/t/please-help-me-understand-the-jacobian-matrix/547739","timestamp":"2024-11-07T05:34:00Z","content_type":"text/html","content_length":"63556","record_id":"<urn:uuid:bb15af18-d4c2-414f-ac47-ffd31cd0a97c>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00572.warc.gz"} |
Lesson 1
Moving in Circles
Let’s think about moving in circles.
1.1: Which One Doesn't Belong: Reading Clocks
Which one doesn’t belong?
1.2: Around and Around
A ladybug lands on the end of a clock’s second hand when the hand is pointing straight up. The second hand is 1 foot long and when it rotates and points directly to the right, the ladybug is 10 feet
above the ground.
1. How far above the ground is the ladybug after 0, 30, 45, and 60 seconds have passed?
Pause here for a class discussion.
2. Estimate how far above the ground the ladybug is after 10, 20, and 40 seconds. Be prepared to explain your reasoning.
3. If the ladybug stays on the second hand, describe how its distance from the ground will change over the next minute. What about the minute after that?
4. At exactly 3:15, the ladybug flies from the second hand to the minute hand, which is 9 inches long.
1. How far off the ground is the ladybug now?
2. At what time will the ladybug be at that height again if it stays on the minute hand? Be prepared to explain your reasoning.
1.3: Where is the Point?
1. What is the radius of the circle?
2. If \(Q\) has a \(y\)-coordinate of -4, what is the \(x\)-coordinate?
3. If \(B\) has a \(y\)-coordinate of 4, what is the \(x\)-coordinate?
4. A circle centered at \((0,0)\) has a radius of 10. Point \(S\) on the circle has an \(x\)-coordinate of 6. What is the \(y\)-coordinate of point \(S\)? Explain or show your reasoning.
1. How many times a day do the minute hand and the hour hand on a clock point in the same direction?
2. At what times do they point in the same direction?
Consider the height of the end of a second hand on a clock over a full minute. It starts pointing up, then rotates to point down, then rotates until it is pointing straight up again. This motion
repeats once every minute.
If we imagine the clock centered at \((0,0)\) on the coordinate plane, then we can study the movement of the end of the second hand by thinking about its \((x,y)\) coordinates on the plane. Over one
minute, the \(y\)-coordinate starts at its highest value (when the hand is pointing up), decreases to its lowest value (when the hand is pointing down), and then returns to its highest value. This
happens once every minute that passes.
While we have worked with many types of functions, such as rational or exponential, none of them are characterized by output values that repeat over and over again, so we can’t use them to model the
height of the end of the second hand. This means we need to use a new type of function. A function whose values repeat at regular intervals is called a periodic function, and the length of the
interval at which a periodic function repeats is called the period. We will study several types of periodic functions in this unit.
• period
The length of an interval at which a periodic function repeats. A function \(f\) has a period, \(p\), if \(f(x+p) = f(x)\) for all inputs \(x\).
• periodic function
A function whose values repeat at regular intervals. If \(f\) is a periodic function then there is a number \(p\), called the period, so that \(f(x + p) = f(x)\) for all inputs \(x\). | {"url":"https://curriculum.illustrativemathematics.org/HS/students/3/6/1/index.html","timestamp":"2024-11-11T08:06:31Z","content_type":"text/html","content_length":"104342","record_id":"<urn:uuid:f06a9b17-168c-4a50-9a5e-5d81a9392084>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00519.warc.gz"} |
#21 of Sept 2016 Grade 10 CSAT(Korean SAT) mock test
There are two quadratic functions $f(x),~g(x)$ and a linear function $h(x).$ $f(x)$ and $h(x)$ contact with each other at only one point, $(\alpha,~f(\alpha)).$ $g(x)$ and $h(x)$ contact with each
other at only one point, $(\beta,~h(\beta)).$
They satisfy the below conditions:
• Leading coefficients of $f(x)$ and $g(x)$ are $1$ and $4,$ respectively.
• Two positives $\alpha$ and $\beta$ satisfy $\alpha:\beta=1:2.$
Let $t$ be the $x$ -coordinate of the point of intersection of $y=f(x)$ and $y=g(x)$ whose $x$ -coordinate is in between $\alpha$ and $\beta.$
Find the value of $\dfrac{210t}{\alpha}.$
This problem is a part of <Grade 10 CSAT Mock test> series . | {"url":"https://solve.club/problems/21-of-sept-2016-grade-10-csatkorean-sat-mock-test/21-of-sept-2016-grade-10-csatkorean-sat-mock-test.html","timestamp":"2024-11-05T18:55:41Z","content_type":"text/html","content_length":"107066","record_id":"<urn:uuid:a577df72-3496-46cb-8bf3-59bc8d65cba6>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00811.warc.gz"} |
How do you know if a matrix has linearly independent eigenvectors?
Eigenvectors corresponding to distinct eigenvalues are linearly independent. As a consequence, if all the eigenvalues of a matrix are distinct, then their corresponding eigenvectors span the space of
column vectors to which the columns of the matrix belong.
How many linearly independent eigenvectors does a matrix have?
There are infinite number of independent Eigen Vectors corresponding to 2×2 identity matrix: each for every direction, and multiple of those vectors will be linearly dependent on that vector.
What does it mean to have a full set of eigenvectors?
A “complete” set of eigenvectors is a basis for the vector space consisting entirely of eigenvectors for a given linear transformation.
Are all eigenvectors of the same eigenvalue linearly independent?
Eigenvectors corresponding to distinct eigenvalues are always linearly independent. It follows from this that we can always diagonalize an n × n matrix with n distinct eigenvalues since it will
possess n linearly independent eigenvectors.
Are eigenvectors normalized?
Eigenvectors may not be equal to the zero vector. A nonzero scalar multiple of an eigenvector is equivalent to the original eigenvector. Hence, without loss of generality, eigenvectors are often
normalized to unit length. , so any eigenvectors that are not linearly independent are returned as zero vectors.
How do you check if a matrix is linearly independent?
Since the matrix is , we can simply take the determinant. If the determinant is not equal to zero, it’s linearly independent. Otherwise it’s linearly dependent. Since the determinant is zero, the
matrix is linearly dependent.
How do you find the linear independence of a vector?
We have now found a test for determining whether a given set of vectors is linearly independent: A set of n vectors of length n is linearly independent if the matrix with these vectors as columns has
a non-zero determinant. The set is of course dependent if the determinant is zero.
How do you know if a set is linearly independent?
If you make a set of vectors by adding one vector at a time, and if the span got bigger every time you added a vector, then your set is linearly independent. A set containg one vector { v } is
linearly independent when v A = 0, since xv = 0 implies x = 0.
What is linear independence of matrices?
Linear independence of matrices is essentially their linear independence as vectors. So you are trying to show that the vectors $(1,-1,0,2), (0,1,3,0),(1,0,1,0)$ and $(1,1,1,1)$ are linearly
independent. These are precisely the rows of the matrix that you have given.
Are the identity matrix and the row equivalent matrix linearly independent?
If this matrix is indeed row equivalent to the identity matrix (a fact which I’m assuming) then the vector space the above four vectors will generate will have dimension four (recall that, row or
column operations don’t change the rank of a matrix). This shows that they are linearly independent.
Does a wide matrix have linearly dependent columns?
A wide matrix (a matrix with more columns than rows) has linearly dependent columns. For example, four vectors in R 3 are automatically linearly dependent. Note that a tall matrix may or may not have
linearly independent columns. Facts about linear independence | {"url":"https://profoundadvices.com/how-do-you-know-if-a-matrix-has-linearly-independent-eigenvectors/","timestamp":"2024-11-10T08:30:03Z","content_type":"text/html","content_length":"57737","record_id":"<urn:uuid:9cffc1e1-a5c3-4153-b060-11e77fe6f870>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00417.warc.gz"} |
Data Science Analysis of Stroke Prediction - Exploratio Journal
Author: Charisse Yeung
Mentor: Dr. Gino Del Ferraro
Carlmont High School
1. Introduction
Today’s market is constantly altered by the rising popularity of AI and Machine Learning. Data science utilizes these technologies by solving modern problems and linking similar data for future use.
Data science is extensively used in numerous industry domains, such as marketing, healthcare, finance, banking, and policy. For my research project, I used data science for healthcare, precisely
stroke predictions. Stroke is the fifth leading cause of death in the United States and a leading cause of severe long-term disability worldwide. With its costly treatment and prolonged effects,
prevention efforts and identification of the possibility and early stages of stroke benefit a significant population in the country, especially the disadvantaged. My goal is to help society use
technology with stroke predictions. The paper is structured as follows: Section 2 introduces the cause and problem of stroke in the US population; Section 3 discusses the steps of a data science
project; Section 4 introduces Machine Learning as a tool to make predictions; finally, Section 5 applies all these analyses to a data set of stroke patients to make predictions.
2. Stroke Prediction
Every year, about 800,000 people in the United States are directly affected by stroke. The two major strokes are ischemic and hemorrhagic (Figure 2.1). Ischemic stroke results from a blocked artery
that cuts blood to an area of the brain. North African, Middle Eastern, sub-Saharan African, North American, and Southeast Asian countries had the highest rates of ischemic stroke. Hemorrhagic stroke
results from a broken or leaking blood vessel leading to blood spilling into the brain.
Figure 2.1 Ischemic vs. Hemorrhagic stroke
In both cases, the brain does not receive enough oxygen and nutrients, and brain cells begin to die. Risk factors for stroke are old age, overweight, physical inactivity, heavy alcohol consumption,
drug consumption, smoking, hypertension, diabetes, and heart disease (Figure 2.2). One in 3 American adults has at least one of these conditions or habits: high blood pressure, high cholesterol,
smoking, obesity, and diabetes. In my project, I investigated risk factors in stroke patients to find a correlation and make stroke predictions. Furthermore, I chose to focus my research on American
patients since stroke risk factors are much more prevalent in the United States than in other countries.
Figure 2.2 Risk factors of stroke
3. Process of a Data Science Project
In problem-solving, one must follow a particular series of steps and a deliberate plan to reach a resolution. The same technique applies to a data science project. A dataset isn’t enough to solve a
problem; One needs an approach or a method that will give the most accurate results. A data science process is a guideline defining how to execute a project. The general steps in the data science
process include: defining the topic of research, obtaining the data, organizing the data, exploring the data, modeling the data, and finally communicating the results.
Before starting any data science project, the topic of the research project must be defined. It is critical to brainstorm numerous relevant research ideas and then refine the focus on one worth doing
the project. Relevancy is the factor in research that helps both the data scientist and the reader develop confidence about the investigation’s findings and outcome. Relevant research topics can be
social, economic, intellectual, environmental, etc., as long as they are up to date. For example, gun control would be a relevant social issue for research, and stroke prediction would be a relevant
medical research idea. To get a deeper insight into the topic, thorough research on the specific topic should be conducted and explored, such as reading articles on the internet or talking to an
expert on the topic. After developing a high understanding, there should be a general idea of the ultimate purpose and goal of the project. One should ask themselves: “What problem am I trying to
solve?” In my case, the problem I am trying to solve is the leading cause of death by stroke annually in the US. The purpose of this project is to use data science to make stroke predictions and
further limit the effects of stroke on the population by identifying the early stages of stroke with some correlations regarding stroke. Understanding and framing the problem will help build an
effective model that will positively impact the organization.
3.1 Data Acquisition
Next, one must find the data to be analyzed in the project. When researching for data, one should discover high-quality and targeted datasets. Not only does the topic of research needs to be relevant
but also the data. Data from different sources can be extracted and sorted into categories to form a particular dataset. This process is also known as data scraping. One can find sources on the
internet from research centers, government organizations, and specific websites for data scientists, such as Kaggle (Figure 2.3). The data must be accessible, so the most convenient formats for data
science are CSV, JSON, or Excel files. Once the datasets have been downloaded, it is necessary to import them into an environment that can directly read data from these data sources into the data
science programs. In most cases, data scientists will be using and importing the data into Python or R programming languages. In my case, I downloaded a CSV file of stroke data consisting of patients
from the US and their conditions from Kaggle, and then I imported my data into the Juypter Notebook in Python for use.
Figure 3.1 Stroke patient data downloaded from Kaggle
3.2 Data Cleaning
The data acquired and imported is not perfect on its own. Thus, the data must be organized and “clean” to ensure the best quality. Duplicate and unnecessary data are removed, and missing data are
replaced. Unnecessary data could be infinities, outliers, or data that does not belong in the sample. For my project on stroke predictions, I removed the data of particular patients from the set if
their BMI is infinity (Figure 3.2) or they live outside of the United States, which is the scope of our study.
Figure 3.2 Example of infinities in a data set
There are also irrelevant data that are not as obvious and require analyzing the correlation between the parameter and the target. If the correlation is very low, it is irrelevant and should be
removed. If there is a missing parameter in the dataset, locate the correct missing data instead and replace it or delete the patient from the dataset. The data is then consolidated by splitting,
merging, and extracting columns to organize it and maximize its efficiency. The efficiency and accuracy of the analysis will depend considerably on the quality of the data, especially when used for
making predictions.
3.3 Data Exploration
A critical factor in exploring and analyzing the data is to find covariations, as mentioned earlier. Different datasets, such as numerical, categorical, and ordinal, require different treatments.
Numerical data is a measurement or a count. Categorical data is a characteristic such as a person’s gender, marital status, hometown, or the types of movies they like. Categorical data can take
numerical values, such as “0” indicating no and “1” indicating yes, but those numbers don’t have mathematical meaning. In my case, I used numerical data—for the age, average glucose levels, and
BMI—and a categorical dataset—for gender, hypertension, heart disease, marriage status, work, residence, smoking status, and stroke. I detected patterns and trends in the data using visualization
features on Python with Numpy, Matplotlib, Pandas, and Scipy. With Numpy and Matplotlib, I could plot linear regressions, bar charts, and a heat map in correlation to select parameters and the
target. Using insights made by observing the visualizations and finding correlations, one can start to make conjectures about the problem being solved. This step is crucial for data modeling.
3.4 Data Modeling
Data modeling is the climax of the data science process. The pre-processed data will be used for model building to learn algorithms and to perform a multi-component analysis. At this stage, a model
will be created to reach the goal and solve the problem. In my case, I used a Machine Learning algorithm as the model, which can be trained and tested using the dataset. Machine Learning is the use
and development of computer systems that can learn and adapt without following explicit instructions by using algorithms and statistical models to analyze and draw inferences from patterns in data.
The first step to data modeling with Machine Learning is data splicing (Figure 3.3), where the entire data set is divided into two parts: training data and testing data. Generally, data scientists
split 80% of their data for training and the remaining 20% for testing. The Machine Learning model is fed with the training input data to train the data. The data is then tagged according to the
defined criteria so that the Machine Learning model can produce the desired output. During this operation, the model will recognize the patterns within the parameters and target of the training data.
Algorithms are trained to associate certain features with tags based on manually tagged samples, then learn to make predictions when processing unseen data. The model will be tested for accuracy with
the remaining 20% of the data. Since the correct parameters for each individual in the set are already known, it would be known whether the predictions made by the model are accurate by running the
model with the testing data.
Figure 3.3 Diagram of the Training-Testing cycle
The goal is to maximize the model’s accuracy by making final edits and testing it. One may encounter issues during testing and must fix them before deploying the model into production. This stage
builds a model that best solves the problem.
3.5 Data Interpretation
The concluding step of the data science process is to execute and communicate the results made from the model. The project is completed, and the goal is accomplished. Consequently, one must present
their results to an audience through a research paper or a presentation. The presentation is comprehensible to a non-technical audience. The findings could be visualized with graphs, scatterplots,
heat maps, or other conceivable visualizations. Useful data visualization tools for Python are Matplotlib, ggplot, Seaborn, Tableau, and d3js. To visualize the covariance between stroke and its
primary causes, I used Matplotlib and Seaborn to create a heatmap. During the presentation, report the results and carefully explain the results’ reasoning and meaning. My ultimate goal is to make
predictions for strokes with given patient data, and I hope my research paper will raise awareness of this technology and its global benefits for stroke patients. A successful presentation will
prompt the audience to take action in response to the purpose.
4. Machine Learning
The popularity of Machine Learning, particularly its subset of Deep Learning, has rapidly grown in the past decade with skyrocketing interest in Artificial Intelligence. However, the history of
Machine Learning dates back to the mid-twentieth century. Machine Learning is a subset of Artificial Intelligence that imitates human behavior and cognition. The “learning” in Machine Learning
expresses how the algorithm automatically learns from the data and improves from experience by constantly tuning its parameters to find the best solution. The data set trains a mathematical model to
know what to output when it sees a similar one in the future. Machine Learning can be classified into three algorithm types: Supervised Learning, Unsupervised Learning, and Reinforcement Learning
(Figure 4.1). While Supervised and Unsupervised Learning is presented with a given set of data, Reinforcement Learning, known as an agent, learns by interactions with its environment. The agent makes
observations and selects an action. When it takes action, it receives feedback rather than a reward or a punishment. Its goal is to maximize rewards and minimize penalties; thus, it would learn and
tune its knowledge to take the actions leading to reward and avoid the activities leading to punishment.
Figure 4.1 Web diagram of Machine Learning
4.1 Supervised & Unsupervised Problems
The significant distinction between Supervised and Unsupervised Learning is the labeling status of the given data set. In Supervised Learning, the machine is given pre-labeled data. For my project, I
used Supervised Learning and already had data from researchers who labeled each patient with or without stroke. I used a portion of this labeled data to train the model to distinguish which patients
have or do not have a stroke based on their given conditions. The system would make a mapping function that uses the pre-existing data to create the best-fit curve or line and make estimations.
Subsequently, I used the remaining portion of my labeled data to test the model for its accuracy. The goal is to maximize the accuracy of the model’s approximations when given new input data. In
Unsupervised Learning, the machine is given unlabeled and uncategorized data, so it uses statistical methods on the data without prior training. For example, I would be using Unsupervised Learning if
I were to predict which of the given patients have diabetes without previous data on diabetes. To form a model, I must analyze the data distribution and separate it based on similar patterns. Without
any labeling, I would divide the patients into two groups based on their similar characteristics and behavior. Unsupervised Learning is split into two types: clustering and dimensionality reduction.
In clustering, the goal is to find the inherent groupings and reveal the structure of the data. Some examples of clustering would be my previous example of predicting a patient with diabetes,
targeted marketing, recommender systems, and customer segmentation. In dimensionality reduction, the goal is to reduce the number of dimensions rather than examples.
4.2 Classification & Regression
Supervised Learning is divided into two types: classification and regression. The goal of classification is to determine the specific labeled group the given input belongs to. The output variable
would be a discrete category or a class. The only possibilities for my project are “stroke” or “no stroke.” The given data on the patients trains the model to correlate various parameters—their
conditions and behavior—to the corresponding output of “stroke” or “no stroke.” The output could also be a defined set of numbers, such as “0” representing no stroke and “1” representing stroke. The
accuracy of its categorization evaluates the classification algorithm. As a result, the model could predict whether a new patient would have a stroke. For regression, the outputs are continuous and
have an infinite set of possibilities, generally real numbers. For instance, the machine could be estimating a house’s cost based on its location, size, and age parameters. Standard regression
algorithms are linear regression, logistic regression, and polynomial regression.
In the following sections, I will discuss two regression models: linear and logistic regression. The former is used as an introduction to the regression problem whereas, the latter is the algorithm
that I used to perform stroke predictions.
4.3 Linear Regression
Linear regression uses the relationship between the points or outputs of the data to draw a straight line, known as the line of best fit, through all of them. This line of best fit is then used to
predict output values. A linear function has a constant change or slope and is usually written in the mathematical form:
y = θ1x + θ0 (Equation 4.1) where m is the constant slope and b is the y-intercept. When finding the line of best fit, there will be infinite possible straight lines through the values (Figure 4.2),
and the θ1-values (slopes) and θ0-values (y-intercepts) will be adjusted. The “θ0” and “θ1” are the two parameters of the function. Regression is the predicting of the exact numeric value the
variable would take to have the line of best fit. When given a data set, there exist various x-variables (features or input) and a y-variable (label or output). In my case, the features included
gender, age, multiple diseases, and smoking status. The label is stroke or no stroke, listed as “0” and “1.” When using actual data, there will always be a distance between the actual and predicted
y-values. This distance, known as the error, is minimized as much as possible to form the best fit line.
Figure 4.2 Possible lines of best fit for a given dataset
The error is often represented by a cost function, which is the sum of the square of the actual output subtracted by the predicted output:
where y[i] is the real label output, g(x) is the approximation of the output, and (y[i] – g(x)) is the error. The error is squared to ensure that the result of the cost function will be the sum of
positive values. The line of best fit is created when the mean square error is the smallest it can be. In Machine Learning, the data receives training to find the line of best fit using Gradient
Descent, an optimization algorithm to find the local minimum of a differentiable function. The Gradient Descent can be represented with the formula:
where ⍺ is the learning rate and [n] and θ[n] – 1 approach each other. Once the difference is very small or |θ[n] – θ[n-1]| < 0. 001, the line of best fit is found. One example of linear regression
would be the number of sales based on the product’s price. There would be a set of data with various products at different prices (the inputs) and each of their sales (the outputs). Assuming the
trend of the relationship between the costs and the sales is linear, one would be able to find a linear model with the slightest mean square error. Thus, one can predict the number of sales at a new
price. When two inputs or independent variables exist, the function becomes three-dimensional (Figure 4.3), and the model becomes a plane of best fit.
Figure 4.3 Plane of best fit on a three-dimensional graph
4.4 Logistic Regression
The data may not always fit into a linear model. For my data set on stroke predictions, the only two possible labels are stroke and no stroke or “0” and “1,” which is an example of binary
classification. Thus, linear regression is non-ideal in the case of binary classification.
Figure 4.4 Linear regression used in binary classification
The line of best fit would exceed the 0 and 1 range and not be a good representation of the data, as seen in Figure 4.4. That’s why we will be using a logistic function to model the data. A logistic
function, also known as a sigmoid curve, is an “S”-shaped curve (Figure 4.5) that can be represented by the function:
where L is the curve’s maximum value and (θ[0] and θ[1]x) = g(x) or the linear regression function.
Figure 4.5 Logistic regression used in binary classification
In the case of a common sigmoid function, the output is in the range of 0 and 1, so L would be 1. There exists a threshold at 0.5; Outputs less than 0.5 will be still to 0 while outputs greater than
equal to 0.5 will be set to one. Logistic regression finds the curve of best fit or the best sigmoid function for the given data set. For linear regression, we found the line of best fit with
Gradient Descent. For logistic regression, we will use the Cross-Entropy Loss Function to determine the curve of best fit. Cross-entropy loss is the sum of the negative logarithm of the predicted
probabilities of each model. For my case, I had only two labels and used Binary Cross-Entropy Loss which can be represented in the formula:
where s[i] is inputs, f is the sigmoid function, and t[i] is the target prediction. The goal is to minimize the loss; thus, the smaller the loss the better the model. When the best sigmoid function
is found, the Binary Cross-Entropy should be very close to 0. The machine completes most of the logistic regression process internally, so it will solve and find the best function, which can be
applied to make accurate predictions.
5. Process of Stroke Prediction Project
In the following session, I will apply the previous machine learning skills, specifically the logistic regression algorithm, to the case of stroke predictions. The data set introduced in Section 2
and the data science project process discussed in Section 2 will be used. I will describe the process of my project in detail and explain the analysis involved in interpreting the accuracy and
efficiency of my model.
5.1 Data Acquisition
Before I started the data science research project, I researched various topics and current events and chose to do my project on stroke prediction. I obtained my organized data from the Kaggle
website, which allowed me to download the file as a CSV file conveniently. I used the Jupyter Notebook application via Anaconda as my environment for this project. I imported my downloaded CSV file
to the notebook (Figure 5.1).
Figure 5.1 First 15 lines of the imported dataset
As seen in the top row of Figure 5.1, there are various parameters or features: gender, age, hypertension, heart disease, marriage status, work type, residence type, average glucose level, BMI, and
smoking status. The output or target I investigated was whether or not the patient had a stroke. The variables hypertension, heart disease, and stroke are defined by “0” being no and “1” being yes.
5.2 Data Cleaning
During the data cleaning process, I removed the redundant data for clarity by deleting other values in gender, never_worked values from work_type, and the id column (Figure 5.2 & Figure 5.4). In
addition, I labeled all categorical features, or non-numerical columns, as ‘category’ when converting them into numerical values for analysis (Figure 5.2 & 5.3). Since the age values are
non-integers, I converted them into integers in the last row of my code (Figure 5.2).
Figure 5.2 Code for removal and revision of dataset
Figure 5.3 Conversion of categorical to numerical
Figure 5.4 Histograms before and after removal of unnecessary data
The next part of data cleaning is removing outliers. I identified those outliers by recognizing the “null” or nonexistent values (Figure 5.5), labeled as NaN in the data as seen previously in Figure
3.2. Any non-zero output means there is a presence of outliers.
Figure 5.5 Identification of outlier
In my dataset, the only outlier was BMI. Thus, I removed those outlier values and replaced them with the mean BMI value in the code in Figure 5.6. I was confident no more null values were present in
my data since all outputs were zero.
Figure 5.6 Removal of outlier
5.3 Data Balancing
Even after data cleaning, my dataset was not yet ready for use after data cleaning due to imbalance. Imbalanced data refers to the issue in classification when the classes or targets are not equally
represented. The number of patients with stroke was much higher than without stroke (left plot in Figure 5.8). To create a fair model, the number of patients in stroke and no stroke classes must be
equal. I could have resampled the data by undersampling (downsizing the larger class) or oversampling (upsizing the smaller class). I chose to oversample with the SMOTE algorithm (Figure 5.7) because
the number of patients in the stroke class was too small and would lower the accuracy with undersampling.
Figure 5.7 Code for resampling
Figure 5.8 Histogram of gender to stroke before and after balancing
As a result of the oversampling, the ratio of stroke to no stroke should be 1:1 and thus balanced (Figure 5.7 & right plot in Figure 5.8).
5.4 Data Modeling
After dividing the resampled data into 80% training and 20% testing, I created a logistic regression model with the training data (Figure 5.9).
Figure 5.9 Code for logistic regression
The logistic regression algorithm was imported from sklearn.linear_model and automatically found the best fit curve representing the dataset.
5.5 Data Performance
In order to determine the accuracy of my model, I found the mean square error or MSE (from Equation 4.2). The MSE could be found with three methods: score method, sklearn.metrics, and equation
(Figure 5.10).
Figure 5.10 Three methods of finding MSE
As a result, my model had approximately 91.1% accuracy. For a more detailed understanding of the model’s performance, I used a confusion matrix, which is a 2×2 table dividing the accuracy of the data
into four categories (Figure 5.11).
Figure 5.11 Confusion matrix plot
The four categories, as shown in Figure 5.11, are true positive (bottom right), true negative (top left), false positive (top right), and false negative (bottom left). The accuracy of the model is
high as long as most of the results are in the true positive and true negative categories because the predicted values are equal to the actual values. Using the confusion matrix, I further analyzed
the performance of the model by calculating the F-1 score (Equation 5.1 & Figure 5.12). The F-1 score shows not only accuracy but also precision. I used the sklearn.metric algorithm to calculate my
F-1 score (Figure 5.12), but I also could have used the equation.
Figure 5.12 Code for F-1 score
As a result, my model had an F-1 score of approximately 90.8%. Both my MSE and F-1 score were above 90.0%, and thus my model had high accuracy and precision.
5.6 Features Selection
Although my model already had high performance, I attempted to further increase it by removing certain features from my data. I hypothesized that the accuracy would improve if I removed the
unimportant features or features with little correlation to the presence of stroke. On the other hand, the accuracy would drastically decrease when I removed important features. I determined the
important and unimportant features with a correlation matrix plot (Figure 5.13).
Figure 5.13 Correlation matrix plot
The labeled bar on the right of Figure 5.13 shows the correlation between the features and output. The algorithm found the correlation with the following equation:
Where cov is the covariance, o[x] is the variability of x with respect to the mean (the variance), x[i] is an output of function x, x is the mean of x, and the y-variables have the same meanings
using the y data set. When used to find the correlation between the parameters and stroke, I focused on the right-most column of the map. A correlation of 1.0 means the trends of the feature and
output are equivalent, while a correlation of -1.0 means the trends of the feature and output are completely opposite. Both types of correlation are considered crucial when creating the logistic
regression model. On the other hand, the feature and output are entirely unrelated if the correlation is 0. Therefore, I considered the features with a correlation close to 0—gender, residence type,
children, and unknown smoking status—unimportant and removed them from my dataset (Figure 5.14).
Figure 5.14 Code for removal of unimportant features
After the removal, I repeated the processes of splitting the data, training the data, creating the logistic regression model, and calculating its accuracy with MSE and F-1 scores. Surprisingly, the
accuracy and F-1 score lowered to approximately 86.6%; hence, the data removal led to a smaller training set and thus a less accurate and precise model. I further tested this theory by removing the
important features or only keeping the features deemed unimportant and then repeated the data modeling process. Understandably, the accuracy lowered to 66.2%, and the F-1 score reduced to 71.9%. In
conclusion, I kept my original model with all the features because it had the highest accuracy and precision.
6. Conclusion
In this data science project, I applied Machine Learning algorithms into predicting the likeliness of a patient in the United States to have a stroke. The goal of making such predictions is to
prevent the consequences of stroke, which impacts a large population of Americans today. Throughout the project, I closely followed each step of the data science project process: data acquisition,
data cleaning, data exploration, data modeling, and data interpretation. I discussed the difference between Supervised and Unsupervised Learning is whether the given data is labeled. Within
Supervised Learning, there is Classification, using categorical data, and Regression, using numerical data. These data sets can be modeled with linear and logistic regression. In my project, I used a
logistic regression algorithm to test and train my data. As a result, I tested my model with MSE and F-1 scores, and my model had an accuracy of 90%, which is a very promising outcome. To ensure the
highest accuracy has been reached, I removed features with low correlation deemed unimportant and features with high correlation deemed important. The removal of important features led to a drastic
drop in accuracy, and thus those features of the dataset should continue to be collected and studied for stroke predictions. Meanwhile, the removal of the irrelevant features had a small drop in
accuracy, so those features are still of good use and are to be collected with the important features in this study. There may be other factors that play a role in the risk of stroke, however, the
factors I have mentioned are of greatest significance based on the accuracy of my model.
Works Cited
Yeung, C. (2022, August 11). Stroke_Predictions_Project_Charisse_Yeung.ipynb. GitHub. Retrieved September 3, 2022, from https://github.com/honyeung21/data_science/blob/main/
Stroke_Predictions_Project_Charis se_Yeung.ipynb
Medlock, B. (2022). Stroke. Headway. Retrieved September 3, 2022, from https://www.headway.org.uk/about-brain-injury/individuals/types-of-brain-injury/stroke/
Initiatives, C. H. (n.d.). Stroke prevention. CHI Health. Retrieved September 3, 2022, from https://www.chihealth.com/en/services/neuro/neurological-conditions/stroke/stroke-prevent ion.html
Fedesoriano. (2021, January 26). Stroke prediction dataset. Kaggle. Retrieved September 3, 2022, from https://www.kaggle.com/datasets/fedesoriano/stroke-prediction-dataset
Wolff, R. (2020, November 2). What is training data in machine learning? MonkeyLearn Blog. Retrieved September 3, 2022, from https://monkeylearn.com/blog/training-data/
Pant, A. (2019, January 22). Introduction to machine learning for beginners. Medium. Retrieved September 3, 2022, from https://towardsdatascience.com/
introduction-to-machine-learning-for-beginners-eed6024fd b08
V.Kumar, V. (2020, May 28). ML 101: Linear regression. Medium. Retrieved September 3, 2022, from https://towardsdatascience.com/ml-101-linear-regression-bea0f489cf54
Gupta, S. (2020, July 17). What makes logistic regression a classification algorithm? Medium. Retrieved September 3, 2022, from https://towardsdatascience.com/
what-makes-logistic-regression-a-classification-algorithm- 35018497b63f
About the author
Charisse Yeung
Charisse is currently a 12th grader at the Carlmont High School in California. Her academic interests are data science, computer science, healthcare, and mathematics. | {"url":"https://exploratiojournal.com/data-science-analysis-of-stroke-prediction%EF%BF%BC/","timestamp":"2024-11-11T18:04:52Z","content_type":"text/html","content_length":"166880","record_id":"<urn:uuid:4d6a329a-5287-44dd-84cc-17f4ead2fc37>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00871.warc.gz"} |
How to Transpose
How to Transpose a Matrix in Python
To transpose a matrix A into a transpose matrix A^T using Python, we use the transpose() function of the library numpy
import numpy as np
The argument x is the input matrix (A).
The function calculates and outputs the transpose matrix A^T.
What is the transpose matrix? It is a matrix in which the rows are replaced by columns. For example, the first row 1 2 3 of the matrix A is the first column of the A^T matrix.
Given a 2x2 matrix created by array function of numpy.
Calculate the transposed matrix using the function transpose(x).
The transpose matrix is assigned to the variable y.
array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
The matrix y is the transpose matrix of x
Report an error or share a suggestion to enhance this page | {"url":"https://how.okpedia.org/en/how-to-transpose-matrix-in-python","timestamp":"2024-11-13T19:06:29Z","content_type":"text/html","content_length":"12138","record_id":"<urn:uuid:b6dbd5a4-8b77-4ca5-a310-d9cd747d68d6>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00707.warc.gz"} |
The Squeeze Theorem
Consider the green function g in the applet below. Note how it seems that the graph of function g always stays in between functions f and h. Drag the BIG GREEN POINT as close as possible to the
origin. Observe the y-coordinates of the 3 points as you do. Given this information, what would you say the limit as x ==> 0 of g(x) is, given this information? Does this limit even exist? (After
all, we clearly know that function g is NOT DEFINED at x = 0). If so, how could you algebraically prove it? (Feel free to zoom in if you'd like). | {"url":"https://stage.geogebra.org/m/bta4ft7y","timestamp":"2024-11-11T00:23:44Z","content_type":"text/html","content_length":"91040","record_id":"<urn:uuid:4ccd9d23-c5c9-42ea-849e-6c09001416a6>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00786.warc.gz"} |
Advanced Non-Normal Data Example
Aimee Lee Houde
In this vignette, the fullfact package is explored using advanced (functions designated by the number 2) for the standard model, i.e. containing random effects for dam, sire, and dam by sire, for
non-normal error structures (e.g. binary, proportion, and/or count data types), with the options of including additional random effects for one position (e.g. tank) and/or one block effect
(e.g. several blocks of 2 \(\times\) 2 factorial matings).
Simple (functions designated by no number) for the standard model only is explored in the vignette Simple Non-Normal Data Example.
Expert (functions designated by the number 3) for the standard model with the ability of the user to include additional fixed and/or random effects, such as a model including environment treatments
and their interactions is explored in the vignette Expert Non-Normal Data Example.
Normal error structure or data type is explored in another three vignettes: (1) Simple Normal Data Example, (2) Advanced Normal Data Example, and (3) Expert Normal Data Example.
Load the package and example data
The example data set is an 11 \(\times\) 11 full factorial mating: 11 dams and 11 sires with all combinations resulting in 121 families. There are two replicates per family.
#> family repli dam sire tray cell alive dead egg_size
#> 1 f1 r1 d1 s1 t7 1A 136 14 7.27
#> 2 f1 r2 d1 s1 t8 1A 146 4 7.27
#> 3 f2 r1 d1 s2 t7 1B 128 22 7.27
#> 4 f2 r2 d1 s2 t8 1B 132 18 7.27
#> 5 f3 r1 d1 s3 t7 1C 142 8 7.27
#> 6 f3 r2 d1 s3 t8 1C 144 6 7.27
Displayed are columns for family identities (ID), replicate ID, dam ID, sire ID, incubation tray ID, incubation cell ID (within tray), Chinook salmon number of offspring alive, number of offspring
dead, and dam egg size (mm). The total number of offspring per family is 300 with 150 per replicate.
Convert to a binary data frame
For data that were recorded at the replicate-level, such as the number of offspring dead or alive for survival in the example data set, these data should be converted to the individual-level to not
underestimate phenotypic variance and influence variance component estimates (see Puurtinen et al. 2009).
Puurtinen M, Ketola T, Kotiaho JS. 2009. The good-genes and compatible-genes benefits of mate choice. The American Naturalist 174(5): 741-752. DOI: 10.1086/606024
The buildBinary function can assign a binary number (i.e. ‘0’ or ‘1’) to two columns containing the number of offspring and copy information by the number of times equal to the number of offspring.
The final data set will have a number of rows matching the total number of offspring.
one is the column name of counts to assign a ‘1’ value, e.g. alive. zero is the column name of counts to assign a ‘0’ value, e.g. dead.
copy is a vector of column numbers (to copy the contents). Does not need to contain the one and zero column names.
The buildMulti function is similar and can assign multiple numbers to multiple columns. multi is a list containing the numbers to assign and matching column names, e.g. list(c(2,1,0),c
chinook_survival2<- buildBinary(dat=chinook_survival,copy=c(1:6,9),one="alive",zero="dead")
rm(chinook_survival) #remove original
#> status family repli dam sire tray cell egg_size
#> 1 1 f1 r1 d1 s1 t7 1A 7.27
#> 1.1 1 f1 r1 d1 s1 t7 1A 7.27
#> 1.2 1 f1 r1 d1 s1 t7 1A 7.27
#> 1.3 1 f1 r1 d1 s1 t7 1A 7.27
#> 1.4 1 f1 r1 d1 s1 t7 1A 7.27
#> 1.5 1 f1 r1 d1 s1 t7 1A 7.27
#Multinomial example
#>chinook_survival$total<- chinook_survival$alive + chinook_survival$dead
#>chinook_survival3<- buildMulti(dat=chinook_survival,copy=c(1:6,9),multi=list(c(2,1,0),
A new column is produced named “status” containing the 1 and 0 values for the offspring. The “alive” and “dead” columns are not included because their column numbers (7 and 8) were not in copy.
Observed variance components
Model random effects are dam, sire, and dam by sire. Options to include one random position and/or one random block effect(s). Extracts the dam, sire, dam, and dam by sire variance components.
Calculates the residual and total variance component. Calculates the additive genetic, non-additive genetic, and maternal variance components. Extracts optional position and block variance
The residual variance component for the binomial and Poisson error structures with four links are described by Nakagawa and Schielzeth (2010, 2013). Specifically, the residual variance component for
binomial errors with the logit link is \(\pi\)^2/3; binomial errors with the probit link is 1; Poisson errors with the log link is ln(1/exp(\(\beta\)[0]) + 1), where \(\beta\)[0] is the intercept
value from the model without any fixed effects and containing only the random effects; and Poisson errors with the square-root link is 0.25.
Assuming the effects of epistasis are of negligible importance, the additive genetic variance (V[A]) component is calculated as four times the sire (V[S]), the non-additive genetic variance (V[N])
component as four times the dam by sire interaction (V[D\(\times\)S]), and the maternal variance component (V[M]) as the dam (V[D]) – sire (V[S]) (Lynch and Walsh 1998, p. 603). When there is
epistasis, those variance components will be overestimated and this may explain why the percentage of phenotypic variance explained by the components can add up to more than 100% in certain cases.
fam_link is the family and link in family(link) format. Supported options are binomial(link=“logit”), binomial(link=“probit”), poisson(link=“log”), and poisson(link=“sqrt”). Binary or proportion data
are typically analyzed with binomial. Count data are typically analyzed with Poisson.
Default in quasi = F. Option for overdispersion or quasi-error structure is quasi = T, such that an observation-level random effect is added to the model (Atkins et al. 2013).
position is the column name containing position factor information.
block is the column name containing block factor information.
Significance values for the random effects are determined using likelihood ratio tests (Bolker et al. 2009).
Atkins DC, Baldwin SA, Zheng C, Gallop RJ, Neighbors C. 2013. A tutorial on count regression and zero-altered count models for longitudinal substance use data. Psychology of Addictive Behaviors 27
(1): 166-177. DOI: 10.1037/a0029508
Nakagawa S, Schielzeth H. 2010. Repeatability for Gaussian and non-Gaussian data: a practical guide for biologists. Biological Reviews 85(4): 935-956. DOI: 10.1111/j.1469-185X.2010.00141.x
Nakagawa S, Schielzeth H. 2013. A general and simple method for obtaining R2 from generalized linear mixed-effects models. Methods in Ecology and Evolution 4(2): 133-142. DOI: 10.1111/
Lynch M, Walsh B. 1998. Genetics and Analysis of Quantitative Traits. Sinauer Associates, Massachusetts.
Bolker BM, Brooks ME, Clark CJ, Geange SW, Poulsen JR, Stevens MHH, White J-SS. 2009. Generalized linear mixed models: a practical guide for ecology and evolution. Trends in Ecology and Evolution 24
(3): 127-135. DOI: 10.1016/j.tree.2008.10.008
For this example, we explore position (tray) effects for Chinook salmon survival.
survival_mod2<- observGlmer2(observ=chinook_survival2,dam="dam",sire="sire",response="status",
#> [1] "2024-01-27 12:56:12 PST"
#> Time difference of 1.338003 mins
#> $random
#> effect variance percent d.AIC d.BIC Chi.sq p.value
#> 1 dam:sire 0.167061285 3.78378550 595.885940 587.386367 597.885940 4.826421e-132
#> 2 tray 0.003668016 0.08307721 1.291355 -7.208218 3.291355 6.964552e-02
#> 3 sire 0.166654347 3.77456873 44.385729 35.886156 46.385729 9.712031e-12
#> 4 dam 0.787937313 17.84606041 126.128476 117.628903 128.128476 1.052074e-29
#> $other
#> component variance percent
#> 1 Residual 3.289868 74.51251
#> 2 Total 4.415189 100.00000
#> $calculation
#> component variance percent
#> 1 additive 0.6666174 15.09827
#> 2 nonadd 0.6682451 15.13514
#> 3 maternal 0.6212830 14.07149
Produces a list object containing three data frames. Each data frame contains the raw variance components and the variance components as a percentage of the total variance component. The first data
frame also contains the difference in AIC and BIC, and likelihood ratio test Chi-square and p-value for all random effects.
The Laplace approximation is used because there were fewer disadvantages relative to penalized quasi-likelihood and Gauss-Hermite quadrature parameter estimation (Bolker et al. 2009). That is,
penalized quasi-likelihood is not recommended for count responses with means less than 5 and binary responses with less than 5 successes per group. Gauss-Hermite quadrature is not recommended for
more than two or three random effects because of the rapidly declining analytical speed with the increasing number of random effects.
Statistical Power analysis
Power values are calculated by stochastically simulating data for a number of iterations and then calculating the proportion of P-values less than \(\alpha\) (e.g. 0.05) for each component (Bolker
2008). Simulated data are specified by inputs for variance component values and the sample sizes.
Bolker BM. 2008. Ecological Models and Data in R. Princeton University Press, Princeton.
Defaults are alpha = 0.05 for 5% and nsim = 100 for 100 simulations.
varcomp is a vector of dam, sire, dam by sire, position and/or block variance components, i.e. c(dam,sire,dam \(\times\) sire,position/block). If there is a position and a block, c(dam,sire,dam \(\
times\) sire,position,block).
nval is a vector of dam, sire, offspring per family, and offspring per position or number of block sample sizes, i.e. c(dam,sire,offspring,position/block). If there is a position and a block, c
position is optional number of positions.
block is optional vector of dams and sires per block, e.g. c(2,2).
poisLog is the residual variance component value if using fam_link = poisson(link="log").
For this example, the variance components of observGlmer2 above are used (i.e. dam= 0.7880, sire= 0.1667, dam \(\times\) sire= 0.1671, tray= 0.0037) and the sample size of the Chinook salmon data set
(i.e. dam= 11, sire= 11, offspring= 300, offspring per position= 3300). Position was represented by 11 trays. The actual design was composed of 16 trays with 1,650–2,400 offspring each. However,
powerGlmer2 uses an equal number of offspring per position, so the number of trays was decreased from 16 to 15.
Full analysis is 100 simulations. Example has 2 simulations.
#2 simulations
#> [1] "2024-01-27 12:57:33 PST"
#> [1] "Starting simulation: 1"
#> [1] "Starting simulation: 2"
#> Time difference of 3.108018 mins
#> term n var_in var_out power
#> 1 dam 11 0.788000 1.072130283 1
#> 2 sire 11 0.166700 0.180743178 1
#> 3 dam.sire 121 0.167100 0.207502496 1
#> 4 position 15 0.003700 0.003502962 1
#> 5 residual NA 3.289868 3.289868134 NA
#Block examples using 8 dams, 8 sires (as four 2x2 blocks), and 20 offspring per family
#>fam_link=binomial(link="logit"),position=8,block=c(2,2)) #with position
There is sufficient power (\(\ge\) 0.8) for dam, sire, and dam by sire variance components. There was also sufficient power for position (tray) variance components. In the cases of insufficient power
(< 0.8), the sample size of dam, sire, and/or offspring can be increased until there is sufficient power.
Taking the reverse approach (can the sample size of dam, sire, or offspring be reduced while maintaining sufficient power?) using the same variance components and offspring sample size, dam and sire
sample sizes could be reduced from 11 to 7. The position sample size was reduced accordingly, i.e. 7 dams \(\times\) 7 sires \(\times\) 300 offspring = 14,700, divided by 15 trays for 980 offspring
#2 simulations
#> [1] "2024-01-27 13:00:39 PST"
#> [1] "Starting simulation: 1"
#> [1] "Starting simulation: 2"
#> Time difference of 1.062281 mins
#> term n var_in var_out power
#> 1 dam 7 0.788000 0.276665384 1.0
#> 2 sire 7 0.166700 0.176126942 1.0
#> 3 dam.sire 49 0.167100 0.120607315 1.0
#> 4 position 15 0.003700 0.005058561 0.5
#> 5 residual NA 3.289868 3.289868134 NA
Bootstrap confidence intervals
Confidence intervals for the additive genetic, non-additive genetic, and maternal variance components can be produced using the bootstrap-t resampling method described by Efron and Tibshirani (1993,
p. 160‒162). Observations are resampled with replacement until the original sample size is reproduced. The resampled data are then used in the model and the additive genetic, non-additive genetic,
and maternal variance components are extracted. The process is repeated for a number of iterations, typically 1,000 times, to produce a distribution for each component. The confidence interval lower
and upper limits and median are extracted from the distribution.
Efron B, Tibshirani R. 1993. An Introduction to the Bootstrap. Chapman and Hall, New York.
Resample observations
The resampRepli function is used to bootstrap resample observations grouped by replicate ID within family ID for a specified number of iterations to create the resampled data set. A similar
resampFamily function is able to resample observations grouped by family ID only.
copy is a vector of column numbers (to copy the contents). Does not need to contain the family and/or replicate columns.
Full analysis is 1000 iterations. Example has 5 iterations.
#>resampRepli(dat=chinook_survival2,copy=c(1,4:8),family="family",replicate="repli",iter=1000) #full
#>resampFamily(dat=chinook_survival2,copy=c(1,4:8),family="family",iter=1000) #family only
resampRepli(dat=chinook_survival2,copy=c(1,4:8),family="family",replicate="repli",iter=5) #5 iterations
Because of the large file sizes that can be produced, the resampling of each replicate Y per family X is saved separately as a common separated (X_Y_resampR.csv) file in the working directory. These
files are merged to create the final resampled data set (resamp_datR.csv).
If using resampFamily, the file names are X_resampF.csv per family and resamp_datF.csv for the final resampled data set.
Iteration variance components
The equivalent to observGlmer2 is available for the final bootstrap resampled data set, i.e. resampGlmer2.
Default is no overdispersion as quasi = F. The starting model number start = and ending model number end = need to be specified.
Full analysis is 1000 iterations. Example has 2 iterations.
#>survival_datR<- read.csv("resamp_datR.csv") #1000 iterations
#>survival_rcomp2<- resampGlmer2(resamp=survival_datR,dam="dam",sire="sire",response="status",
#>fam_link=binomial(logit),position="tray",start=1,end=1000) #full
data(chinook_resampS) #5 iterations
#> status1 dam1 sire1 tray1 cell1 egg_size1 status2 dam2 sire2 tray2 cell2 egg_size2 status3 dam3 sire3 tray3 cell3
#> 1 1 d1 s1 t7 1A 7.27 1 d1 s1 t7 1A 7.27 1 d1 s1 t7 1A
#> 2 1 d1 s1 t7 1A 7.27 0 d1 s1 t7 1A 7.27 1 d1 s1 t7 1A
#> 3 1 d1 s1 t7 1A 7.27 0 d1 s1 t7 1A 7.27 1 d1 s1 t7 1A
#> 4 1 d1 s1 t7 1A 7.27 1 d1 s1 t7 1A 7.27 1 d1 s1 t7 1A
#> 5 1 d1 s1 t7 1A 7.27 1 d1 s1 t7 1A 7.27 1 d1 s1 t7 1A
#> 6 0 d1 s1 t7 1A 7.27 1 d1 s1 t7 1A 7.27 1 d1 s1 t7 1A
#> egg_size3 status4 dam4 sire4 tray4 cell4 egg_size4 status5 dam5 sire5 tray5 cell5 egg_size5
#> 1 7.27 1 d1 s1 t7 1A 7.27 1 d1 s1 t7 1A 7.27
#> 2 7.27 1 d1 s1 t7 1A 7.27 1 d1 s1 t7 1A 7.27
#> 3 7.27 1 d1 s1 t7 1A 7.27 1 d1 s1 t7 1A 7.27
#> 4 7.27 1 d1 s1 t7 1A 7.27 1 d1 s1 t7 1A 7.27
#> 5 7.27 1 d1 s1 t7 1A 7.27 1 d1 s1 t7 1A 7.27
#> 6 7.27 1 d1 s1 t7 1A 7.27 1 d1 s1 t7 1A 7.27
survival_rcomp2<- resampGlmer2(resamp=chinook_resampS,dam="dam",sire="sire",response="status",
#> [1] "2024-01-27 13:01:43 PST"
#> [1] "Working on model: 1"
#> [1] "Working on model: 2"
#> Time difference of 54.80829 secs
#> dam:sire tray sire dam Residual Total additive nonadd maternal
#> 1 0.1913733 0.005188036 0.1721041 0.7757233 3.289868 4.434257 0.6884163 0.7654933 0.6036192
#> 2 0.1776859 0.004506256 0.1661613 0.7820526 3.289868 4.420274 0.6646453 0.7107437 0.6158912
The function provides a data frame with columns containing the raw variance components for dam, sire, dam by sire, residual, total, additive genetic, non-additive genetic, and maternal. Also columns
containing the raw variance components for the options of position and/or block. The number of rows in the data frame matches the number of iterations in the resampled data set and each row
represents a model number.
Plotting confidence intervals
The barMANA and boxMANA functions are simple plotting functions for the confidence intervals or all values, respectively, from the bootstrap and jackknife approaches. Default is to display the
percentage values as type = perc. Raw values can be displayed as type = raw.
Within the functions, there are simple plot modifications available. For the y-axis, min and max values can be species as ymin and ymax, as well as the increment as yunit. Also, magnification of the
axis unit as cex_yaxis and label as cex_ylab. The position of the legend can be specified as leg. Default is “topright”.
Bar plot
The barMANA function produces bar graphs with the bootstrap-t median (ciMANA2) or jackknife pseudo-value mean (ciJack2) as the top of the shaded bar and error bars covering the range of the
confidence interval for each of the additive genetic, non-additive genetic, and maternal values of a phenotypic trait.
The length of the error bar can be specified in inches as bar_len.
survival_ci<- ciJack2(comp=chinook_jackS,position="Residual",
oldpar<- par(mfrow=c(2,1))
barMANA(ci_dat=survival_ci) #basic, top
barMANA(ci_dat=survival_ci,bar_len=0.3,yunit=4,ymax=20,cex_ylab=1.3) #modified, bottom
Different traits can also be combined on the same bar plot using trait specified in ciMANA or ciJack. The information is combined into a list object. For the example, the jackknife CI is duplicated
to simulate ‘different traits’.
survival_ci1<- ciJack2(comp=chinook_jackS,position="Residual",
survival_ci2<- ciJack2(comp=chinook_jackS,position="Residual",
comb_bar<- list(raw=rbind(survival_ci1$raw,survival_ci2$raw),
The legend is slightly off in the presented html version but is fine with the R plotting device.
Box plot
The boxMANA function produces box plots using all values for the bootstrap-t resampling data set (resampGlmer2) or jackknife resampling data set (JackGlmer2).
oldpar<- par(mfrow=c(2,1))
boxMANA(comp=chinook_bootS) #from resampGlmer2, basic, top
boxMANA(comp=chinook_bootS,yunit=2,ymin=10,ymax=22,cex_ylab=1.3,leg="topleft") #modified, bottom
Different traits can also be combined on the same box plot by adding a “trait” column to the resampling data set. For the example, the bootstrap-t data frame is duplicated to simulate ‘different
chinook_bootS1<- chinook_bootS; chinook_bootS2<- chinook_bootS #from resampGlmer2
chinook_bootS1$trait<- "survival_1"; chinook_bootS2$trait<- "survival_2"
comb_boot<- rbind(chinook_bootS1,chinook_bootS2)
comb_boot$trait<- as.factor(comb_boot$trait)
The recommended follow-up vignette is the Expert Non-Normal Data Example, covering the standard model with the ability of the user to include additional fixed and/or random effects, such as a model
including environment treatments and their interactions. | {"url":"https://cran.rstudio.org/web/packages/fullfact/vignettes/v5_advanced_non_normal.html","timestamp":"2024-11-10T14:13:24Z","content_type":"text/html","content_length":"100275","record_id":"<urn:uuid:2d97e015-eb37-46f8-913a-25cf10878942>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00897.warc.gz"} |
Local Equilibrium and Retardation Revisited
In modeling solute transport with mobile-immobile mass transfer (MIMT), it is common to use an advection-dispersion equation (ADE) with a retardation factor, or retarded ADE. This is commonly
referred to as making the local equilibrium assumption (LEA). Assuming local equilibrium, Eulerian textbook treatments derive the retarded ADE, ostensibly exactly. However, other authors have
presented rigorous mathematical derivations of the dispersive effect of MIMT, applicable even in the case of arbitrarily fast mass transfer. We resolve the apparent contradiction between these
seemingly exact derivations by adopting a Lagrangian point of view. We show that local equilibrium constrains the expected time immobile, whereas the retarded ADE actually embeds a stronger,
nonphysical, constraint: that all particles spend the same amount of every time increment immobile. Eulerian derivations of the retarded ADE thus silently commit the gambler's fallacy, leading them
to ignore dispersion due to mass transfer that is correctly modeled by other approaches. We then present a particle tracking simulation illustrating how poor an approximation the retarded ADE may be,
even when mobile and immobile plumes are continually near local equilibrium. We note that classic “LEA” (actually, retarded ADE validity) criteria test for insignificance of MIMT-driven dispersion
relative to hydrodynamic dispersion, rather than for local equilibrium.
ASJC Scopus subject areas
• Water Science and Technology
• Computers in Earth Sciences
Dive into the research topics of 'Local Equilibrium and Retardation Revisited'. Together they form a unique fingerprint. | {"url":"https://cris.bgu.ac.il/en/publications/local-equilibrium-and-retardation-revisited","timestamp":"2024-11-11T02:03:15Z","content_type":"text/html","content_length":"57349","record_id":"<urn:uuid:c27d3c22-e11d-44a9-b5dd-5e2d9933c052>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00763.warc.gz"} |
NCERT Solutions Class 10 Maths Chapter 14 Statistics - Download free PDF
A day full of math games & activities. Find one near you.
A day full of math games & activities. Find one near you.
A day full of math games & activities. Find one near you.
A day full of math games & activities. Find one near you.
NCERT Solutions Class 10 Maths Chapter 14 Statistics
NCERT Solutions for Class 10 Maths Chapter 14 Statistics deals with the classification of ungrouped as well as grouped frequency distributions. In the previous classes, the students must have learned
how to represent the data pictorially in various graphical formats such as bar graphs, histograms, and frequency polygons. They must be familiar with topics like numerical representatives of
ungrouped data and measures of central tendencies such as mean, median, and mode. In the NCERT Solutions Class 10 Maths Chapter 14, learning will switch to all the three measures such as mean,
median, mode from ungrouped data to the grouped data.
The students will understand some new topics of statistics such as cumulative frequency, cumulative frequency distribution, cumulative frequency curves or ‘ogives,’ etc. A detailed pdf file of class
10 maths NCERT solutions Chapter 14 Statistics can be found below and also you can find some of these in the exercises given below.
NCERT Solutions for Class 10 Maths Chapter 14 PDF
NCERT Solutions Class 10 Maths Chapter 14 Statistics enables the students to understand the concept of statistics by outlining its key points. This chapter also helps in the revision of concepts such
as mean, median, mode. Apart from this, the students will be able to explore some new topics with the help of this chapter, such as measures of central tendency in the ungrouped data. The following
links mentioned below will help understand the sections in this chapter further in detail :
☛ Download Class 10 Maths NCERT Solutions Chapter 14 Statistics
NCERT Class 10 Maths Chapter 14
NCERT Solutions for Class 10 Maths Chapter 14 Statistics
NCERT Solutions Class 10 Maths Chapter 14 Statistics forms the perfect study material for class 10 students to prepare for their board exams. With detailed explanations on the various methods of the
step-deviation method, assumed mean method, direct method, etc., these solutions cover all the important points in a detailed step-by-step manner. A section-wise detailed analysis of NCERT Solutions
class 10 chapter 14 Statistics can be seen below :
• Class 10 Maths Chapter 10 Ex 14.1 - 9 Questions
• Class 10 Maths Chapter 10 Ex 14.2 - 6 Questions
• Class 10 Maths Chapter 10 Ex 14.3 - 7 Questions
• Class 10 Maths Chapter 10 Ex 14.4 - 3 Questions
☛ Download Class 10 Maths Chapter 14 NCERT Book
Topics Covered: The Class 10 maths NCERT solutions Chapter 14 Statistics covers a variety of topics such as mean, median, and mode of grouped data, cumulative frequency, representation of cumulative
frequency data in the graphical form, and obtaining the median of grouped data.
Total Questions: Class 10 Maths Chapter 14 Statistics consists of 25 questions. Out of these 25 problems, 15 are fairly easy to solve, 5 are of moderate level, while 5 are a bit complex in nature.
List of Formulas in NCERT Solutions Class 10 Maths Chapter 14
NCERT solutions class 10 maths Chapter 14 guides the students in studying the grouped data and finding their mean, median, and mode. Analyzing data is one of the most essential skills to have in
today’s time hence this is the most important chapter of class 10 maths that has a lot of real-life applications. Statistics involves the use of formulas which are covered in detail in the chapter.
Some of the important formulas that will help students study and analyze different data charts are as :
• Class mark = (Upper class limit + Lower class limit)/ 2
• Relationship between the three measures of central tendency : 3 Median = Mode + 2 Mean
Important Questions for Class 10 Maths NCERT Solutions Chapter 14
│CBSE Important Questions for Class 10 Maths Chapter 14 Exercise 14.1│
│ │
│CBSE Important Questions for Class 10 Maths Chapter 14 Exercise 14.2│
│ │
│CBSE Important Questions for Class 10 Maths Chapter 14 Exercise 14.3│
│ │
Video Solutions for Class 10 Maths NCERT Chapter 14
FAQs on NCERT Solutions Class 10 Maths Chapter 14
What is the Importance of NCERT Solutions Class 10 Maths Chapter 14 Statistics?
NCERT Solutions Class 10 Maths Chapter 14 Statistics deals with the collection, analysis, presentation, and interpretation of data in different forms. This chapter enables the students to arrange
data in a particular form in order to study the salient features of the same and present them in a way that can be easily understood by all. The modern world is highly inclined towards studying and
analyzing data in order to make the right decisions, be it in any field, thereby these solutions become an important resource to study.
Do I Need to Practice all Questions Provided in NCERT Solutions Class 10 Maths Statistics?
There are a total of 25 questions in NCERT solutions class 10 maths Statistics. All the questions are curated to cover all the topics related to statistics, such as finding the mean, median, mode
using different methods, cumulative frequency representation, etc. in detail. A consistent practice of all the examples and questions in the chapter will result in a better understanding and
increased confidence in problem-solving. Thus, it is highly beneficial for the students to practice all the questions.
What are the Important Topics Covered in NCERT Solutions Class 10 Maths Chapter 14?
NCERT Solutions Class 10 Maths Chapter 14 will help you revise the numerical representation of ungrouped data, also known as the measures of central tendencies called mean, median, and mode. Along
with that, these solutions also cover the concept of cumulative frequency, the cumulative frequency distribution, and steps involved in drawing ‘ogives’ or cumulative frequency curves.
How Many Questions are there in Class 10 Maths NCERT Solutions Chapter 14 Statistics?
NCERT Solutions Class 10 maths chapter 14 consists of a total of 25 questions distributed in 4 exercises. These questions challenge the students to think out of the box and solve complex problems
based on the frequency distribution table, exclusive or continuous frequency, range, etc. Practicing all the questions in these NCERT solutions class 10 maths chapter 14 will result in an excellent
preparation for board exams.
What are the Important Formulas in NCERT Solutions Class 10 Maths Chapter 14?
Some of the important formulas of this chapter is the mean of grouped data using the direct method, mean method, and step deviation method. Other formulas include median and mode for grouped data.
These formulas can better be understood by solving questions in a stepwise manner. The derivation of these formulas is stated in the NCERT Solutions Class 10 Maths Chapter 14.
Why Should I Practice NCERT Solutions Class 10 Maths Statistics 14?
The NCERT Solutions Class 10 Maths Statistics 14 is a very important chapter not just for exams but for learning insights that have their applications in real-life situations as well. Through this
chapter, students will learn how to arrange and analyze different kinds of data by arranging it in a definite or required order. This is one of the greatest skill-set in today’s time and can offer a
life-long benefit to the students; hence they must practice NCERT solutions regularly.
Math worksheets and
visual curriculum | {"url":"https://www.cuemath.com/ncert-solutions/ncert-solutions-class-10-maths-chapter-14-statistics/","timestamp":"2024-11-02T20:06:52Z","content_type":"text/html","content_length":"278548","record_id":"<urn:uuid:e45677bd-acd1-4cb4-ac4b-67180cba64eb>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00299.warc.gz"} |
Go to the first, previous, next, last section, table of contents.
I hope that someday Octave will include more statistics functions. If you would like to help improve Octave in this area, please contact bug-octave@che.utexas.edu.
corrcoef (x [, y])
cov (x [, y])
kurtosis (x)
mahalanobis (x, y)
mean (a)
median (a)
skewness (x)
std (a)
Go to the first, previous, next, last section, table of contents. | {"url":"http://www.math.utah.edu/docs/info/octave_19.html","timestamp":"2024-11-08T22:32:28Z","content_type":"text/html","content_length":"3804","record_id":"<urn:uuid:e35bbac5-b268-4b1a-af05-cf1bf616ca87>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00002.warc.gz"} |
Bulk Discount
Viewing 4 posts - 1 through 4 (of 4 total)
• Author
• February 29, 2020 at 3:35 am #563488
4.5.1 Example: Bulk discounts
The annual demand for an item of inventory is 45 units. The item costs $200 a unit to purchase, thenholding cost for 1 unit for 1 year is 15% of the unit cost and ordering costs are $300 an
The supplier offers a 3% discount for orders of 60 units or more, and a discount of 5% for orders of 90units or more.
Calculate the cost-minimising order size.
In the solution of this example in textbook (p.116).
I don’t understand the line which is Purchases (no discount) 45 x $200. Why dont we use EOQ=30 units instead of 45, sir? I’ve thought that is 30 units because it is the order quantity which
minimises inventory costs.
February 29, 2020 at 11:20 am #563536
You do not say which textbook you are referring to 🙂
However, the EOQ is indeed 30 units each time when there is no discount, and this is relevant for calculation the holding cost per year and the order cost per year.
The cost of purchasing the goods over the year is 45 x $200 because on average they need to buy a total of 45 units each year and each unit costs $200.
We need this, because if they order 60 or 90 units each time then the total of the holding and ordering costs will be higher, but the total purchase cost will be lower because of the discount.
Have you watched my free lectures on this? I work through similar examples explaining what we do and why.
December 28, 2021 at 5:15 pm #644963
December 28, 2021 at 5:50 pm #644975
What is n/a supposed to mean here???
• Author
Viewing 4 posts - 1 through 4 (of 4 total)
• You must be logged in to reply to this topic. | {"url":"https://opentuition.com/topic/bulk-discount/","timestamp":"2024-11-09T00:55:23Z","content_type":"text/html","content_length":"83949","record_id":"<urn:uuid:4e7cadb1-f83f-410d-ae7c-cd4bee2f9603>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00293.warc.gz"} |
Thomas Cass
Thomas Cass is Professor of Mathematics at Imperial College London. He obtained his MA and PhD from Department of Pure Mathematics and Mathematical Statistics at the University of Cambridge.
Thomas's research interests relate to the study of random phenomena. His research writings span both classical areas of stochastic analysis such as Malliavin calculus and Gaussian processes as well
as more recent aspects of rough path analysis. He is also interested in the way in which insights in pure mathematics can spur developments in mathematical finance and in data science.
A more specific list of his interests includes:
• the theory of rough paths, rough differential equations and rough analysis.
• Gaussian processes, fractional Brownian motion and the properties of rough differential equations driven by these processes
• the use of Malliavin calculus and cubature on Wiener space for sensitivity analysis and efficient pricing and hedging methods in mathematical finance
• McKean-Vlasov-type models for large populations of interacting particles and agents
• stochastic differential geometry and rough paths on manifolds
• the signature and its mathematics, and applications of rough paths to data science, especially in radioastronomy. | {"url":"https://datasig.ac.uk/people/thomas-cass","timestamp":"2024-11-07T07:35:05Z","content_type":"text/html","content_length":"93442","record_id":"<urn:uuid:89bfe7df-383e-41b7-8f44-4d54bfd1ca1c>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00719.warc.gz"} |
Algebraic Manipulation
Expand[poly] expand out products and powers
Factor[poly] factor completely
FactorTerms[poly] pull out any overall numerical factor
FactorTerms[poly,{x,y,…}] pull out any overall factor that does not depend on x, y, …
Collect[poly,x] arrange a polynomial as a sum of powers of x
Collect[poly,{x,y,…}] arrange a polynomial as a sum of powers of x, y, …
expands out products and powers, writing the polynomial as a simple sum of terms:
performs complete factoring of the polynomial:
pulls out the overall numerical factor from
There are several ways to write any polynomial. The functions
, and
give three common ways.
writes a polynomial as a simple sum of terms, with all products expanded out.
pulls out common factors from each term.
does complete factoring, writing the polynomial as a product of terms, each of as low degree as possible.
When you have a polynomial in more than one variable, you can put the polynomial in different forms by essentially choosing different variables to be "dominant".
takes a polynomial in several variables and rewrites it as a sum of terms containing different powers of the "dominant variable"
reorganizes the polynomial so that
is the "dominant variable":
If you specify a list of variables,
will effectively write the expression as a polynomial in these variables:
Expand[poly,patt] expand out poly, avoiding those parts which do not contain terms matching patt
PowerExpand[expr] expand out (ab)^c and (a^b)^c in expr
expand out expr assuming assum
The Wolfram System does not automatically expand out expressions of the form
except when
is an integer. In general it is only correct to do this expansion if
are positive reals. Nevertheless, the function
does the expansion, effectively assuming that
are indeed positive reals.
does the expansion, effectively assuming that
are positive reals:
is not automatically expanded out:
returns a result correct for the given assumptions:
Collect[poly,patt] collect separately terms involving each object that matches patt
Collect[poly,patt,h] apply h to each final coefficient obtained
This applies
to each coefficient obtained:
HornerForm[expr,x] puts expr into Horner form with respect to x
Horner form
is a way of arranging a polynomial that allows numerical values to be computed more efficiently by minimizing the number of multiplications.
PolynomialQ[expr,x] test whether expr is a polynomial in x
PolynomialQ[expr,{x[1],x[2],…}] test whether expr is a polynomial in the x[i]
Variables[poly] a list of the variables in poly
Exponent[poly,x] the maximum exponent with which x appears in poly
Coefficient[poly,expr] the coefficient of expr in poly
Coefficient[poly,expr,n] the coefficient of expr^n in poly
Coefficient[poly,expr,0] the term in poly independent of expr
CoefficientList[poly,{x[1],x[2],…}] generate an array of the coefficients of the x[i] in poly
CoefficientRules[poly,{x[1],x[2],…}] get exponent vectors and coefficients of monomials
gives a list of the variables in the polynomial
This gives the maximum exponent with which
appears in the polynomial
. For a polynomial in one variable,
gives the degree of the polynomial:
gives the total coefficient with which
appears in
. In this case, the result is a sum of two terms:
gives a list of the coefficients of each power of , starting with :
For multivariate polynomials,
gives an array of the coefficients for each power of each variable:
It is important to notice that the functions in this tutorial will often work even on polynomials that are not explicitly given in expanded form.
Without giving specific integer values to
, and
, this expression cannot strictly be considered a polynomial:
still gives the maximum exponent of
, but here has to write the result in symbolic form:
The leading term of a polynomial can be chosen in many different ways. For multivariate polynomials, sorting by the total degree of the monomials is often useful.
MonomialList[poly] get the list of monomials
CoefficientRules[poly] represent the monomials by exponent vectors and coefficients
FromCoefficientRules[list] construct a polynomial from a list of rules
If the second argument to
is omitted, the variables are taken in the order in which they are returned by the function
By default, the monomials are sorted lexicographically and given in the decreasing order. In the previous example,
(corresponding to ) is taken to precede
(corresponding to ) by the second element.
An order is described by defining how two vectors of exponents and are sorted. For the lexicographic order,
An order can also be described by giving a weight matrix. In that case the exponent vectors are multiplied by the weight matrix and the results are sorted lexicographically, also in the decreasing
order. The matrices for different orderings are given as follows.
For functions such as
, it is necessary that the order be well-founded, which ensures that any decreasing sequence of elements is finite (the elements being the vectors of non-negative exponents). In order for that
condition to hold, the first nonzero value in each column of the weight matrix must be positive.
The default sorting used for polynomial terms in an expression corresponds to the negative lexicographic ordering with variables sorted in the reversed order. This is commonly known as reverse
lexicographic ordering.
This is the internal representation of a polynomial. The arguments of
are sorted on evaluation:
tries to arrange the terms in an order close to the lexicographic ordering.
This uses advanced typesetting capabilities to obtain the list of terms in the same order as they appear in the
The option
which variables should be excluded from the ordering.
One can obtain additional orderings from the six orderings used in
simply by reversing the resulting list. This is effectively equivalent to negating the exponent vectors. In a commutative setting, one can also obtain other orderings by reversing the order of the
This diagram illustrates the relations between various orderings. Red lines indicate reversing the list of variables and blue lines indicate negating the power vectors.
For ordinary polynomials,
give the most important forms. For rational expressions, there are many different forms that can be useful.
ExpandNumerator[expr] expand numerators only
ExpandDenominator[expr] expand denominators only
Expand[expr] expand numerators, dividing the denominator into each term
ExpandAll[expr] expand numerators and denominators completely
expands the numerator of each term, and divides all the terms by the appropriate denominators:
does all possible expansions in the numerator and denominator of each term:
avoid expanding parts which contain no terms matching patt
, etc.
Together[expr] combine all terms over a common denominator
Apart[expr] write an expression as a sum of terms with simple denominators
Cancel[expr] cancel common factors between numerators and denominators
Factor[expr] perform a complete factoring
puts all terms over a common denominator:
You can use
to factor the numerator and denominator of the resulting expression:
writes the expression as a sum of terms, with each term having as simple a denominator as possible:
cancels any common factors between numerators and denominators:
first puts all terms over a common denominator, then factors the result:
In mathematical terms,
decomposes a rational expression into "partial fractions".
In expressions with several variables, you can use
to do partial fraction decompositions with respect to different variables.
For many kinds of practical calculations, the only operations you will need to perform on polynomials are essentially structural ones.
If you do more advanced algebra with polynomials, however, you will have to use the algebraic operations discussed in this tutorial.
You should realize that most of the operations discussed in this tutorial work only on ordinary polynomials, with integer exponents and rational
number coefficients for each term.
PolynomialQuotient[poly[1],poly[2],x] find the result of dividing the polynomial poly[1] in x by poly[2], dropping any remainder term
PolynomialRemainder[poly[1],poly[2],x] find the remainder from dividing the polynomial poly[1] in x by poly[2]
give the quotient and remainder in a list
PolynomialMod[poly,m] reduce the polynomial poly modulo m
PolynomialGCD[poly[1],poly[2]] find the greatest common divisor of two polynomials
PolynomialLCM[poly[1],poly[2]] find the least common multiple of two polynomials
PolynomialExtendedGCD[poly[1],poly[2]] find the extended greatest common divisor of two polynomials
Resultant[poly[1],poly[2],x] find the resultant of two polynomials
Subresultants[poly[1],poly[2],x] find the principal subresultant coefficients of two polynomials
Discriminant[poly,x] find the discriminant of the polynomial poly
find the Gröbner basis for the polynomials poly[i]
find the Gröbner basis eliminating the y[i]
find a minimal representation of poly in terms of the poly[i]
Given two polynomials and , one can always uniquely write , where the degree of is less than the degree of .
gives the quotient , and
gives the remainder .
is essentially the analog for polynomials of the function
for integers. When the modulus
is an integer,
simply reduces each coefficient in
modulo the integer
. If
is a polynomial, then
effectively tries to get a polynomial with as low a degree as possible by subtracting from
appropriate multiples
q m
. The multiplier
can itself be a polynomial, but its degree is always less than the degree of
yields a final polynomial whose degree and leading coefficient are both as small as possible.
The main difference between
is that while the former works simply by multiplying and subtracting polynomials, the latter uses division in getting its results. In addition,
allows reduction by several moduli at the same time. A typical case is reduction modulo both a polynomial and an integer.
finds the highest degree polynomial that divides the
exactly. It gives the analog for polynomials of the integer function
gives the greatest common divisor of the two polynomials:
The function
is used in a number of classical algebraic algorithms. The resultant of two polynomials and , both with leading coefficient one, is given by the product of all the differences between the roots of
the polynomials. It turns out that for any pair of polynomials, the resultant is always a polynomial in their coefficients. By looking at when the resultant is zero, you can tell for what values of
their parameters two polynomials have a common root. Two polynomials with leading coefficient one have common roots if exactly the first elements in the list
are zero.
Here is the resultant with respect to
of two polynomials in
. The original polynomials have a common root in
only for values of
at which the resultant vanishes:
The function
is the product of the squares of the differences of its roots. It can be used to determine whether the polynomial has any repeated roots. The discriminant is equal to the resultant of the polynomial
and its derivative, up to a factor independent of the variable.
bner bases appear in many modern algebraic algorithms and applications. The function
takes a set of polynomials, and reduces this set to a canonical form from which many properties can conveniently be deduced. An important feature is that the set of polynomials obtained from
always has exactly the same collection of common roots as the original set.
The polynomials are effectively unwound here, and can now be seen to have exactly five common roots:
yields a list of polynomials with the property that is minimal and is exactly
Factor[poly] factor a polynomial
FactorSquareFree[poly] write a polynomial as a product of powers of square‐free factors
FactorTerms[poly,x] factor out terms that do not depend on x
give results as lists of factors
, and
perform various degrees of factoring on polynomials.
does full factoring over the integers.
extracts the "content" of the polynomial.
pulls out any multiple factors that appear.
pulls out only the factor of 2 that does not depend on
factors out the 2 and the term , but leaves the rest unfactored:
does full factoring, recovering the original form:
Particularly when you write programs that work with polynomials, you will often find it convenient to pick out pieces of polynomials in a standard form. The function
gives a list of all the factors of a polynomial, together with their exponents. The first element of the list is always the overall numerical factor for the polynomial.
The form that
returns is the analog for polynomials of the form produced by
for integers.
Here is a list of the factors of the polynomial in the previous set of examples. Each element of the list gives the factor, together with its exponent:
and related functions usually handle only polynomials with ordinary integer or rational
number coefficients. If you set the option
, however, then
will allow polynomials with coefficients that are complex numbers with rational real and imaginary parts. This often allows more extensive factorization to be performed.
IrreduciblePolynomialQ[poly] test whether poly is an irreducible polynomial over the rationals
IrreduciblePolynomialQ[poly,GaussianIntegers->True] test whether poly is irreducible over the Gaussian rationals
IrreduciblePolynomialQ[poly,Extension->Automatic] test irreducibility over the rationals extended by the algebraic number coefficients of poly
A polynomial is irreducible over a field
if it cannot be represented as a product of two nonconstant polynomials with coefficients in
Over the rationals extended by
, the polynomial is reducible:
Cyclotomic[n,x] give the cyclotomic polynomial of order n in x
Cyclotomic polynomials arise as "elementary polynomials" in various algebraic algorithms. The cyclotomic polynomials are defined by , where runs over all positive integers less than that are
relatively prime to .
Decompose[poly,x] decompose poly, if possible, into a composition of a list of simpler polynomials
Factorization is one important way of breaking down polynomials into simpler parts. Another, quite different, way is
. When you factor a polynomial , you write it as a product of polynomials . Decomposing a polynomial consists of writing it as a
of polynomials of the form .
Here is a simple example of
. The original polynomial can be written as the polynomial , where is the polynomial :
is set up to give a list of polynomials in
, which, if composed, reproduce the original polynomial. The original polynomial can contain variables other than
, but the sequence of polynomials that
produces are all intended to be considered as functions of
Unlike factoring, the decomposition of polynomials is not completely unique. For example, the two sets of polynomials and , related by and give the same result on composition, so that . The Wolfram
Language follows the convention of absorbing any constant terms into the first polynomial in the list produced by
give a polynomial in x which is equal to f[i] when x is the integer i
give a polynomial in x which is equal to f[i] when x is x[i]
The Wolfram Language can work with polynomials whose coefficients are in the finite field of integers modulo a prime .
PolynomialMod[poly,p] reduce the coefficients in a polynomial modulo p
Expand[poly,Modulus->p] expand poly modulo p
Factor[poly,Modulus->p] factor poly modulo p
find the GCD of the poly[i] modulo p
find the Gröbner basis modulo p
symmetric polynomial
in variables is a polynomial that is invariant under arbitrary permutations of . Polynomials
The fundamental theorem of symmetric polynomials says that every symmetric polynomial in can be represented as a polynomial in elementary symmetric polynomials in .
When the ordering of variables is fixed, an arbitrary polynomial can be uniquely represented as a sum of a symmetric polynomial , called the symmetric part of , and a remainder that does not contain
descending monomials. A monomial is called descending iff .
SymmetricPolynomial[k,{x[1],…,x[n]}] give the ^th elementary symmetric polynomial in the variables
SymmetricReduction[f,{x[1],…,x[n]}] give a pair of polynomials in such that , where is the symmetric part and is the remainder
give the pair with the elementary symmetric polynomials in replaced by
This writes the polynomial in terms of elementary symmetric polynomials. The input polynomial is symmetric, so the remainder is zero:
Here the elementary symmetric polynomials in the symmetric part are replaced with variables . The polynomial is not symmetric, so the remainder is not zero:
Functions like
usually assume that all coefficients in the polynomials they produce must involve only rational numbers. But by setting the option
you can extend the domain of coefficients that will be allowed.
Factor[poly,Extension->{a[1],a[2],…}] factor poly allowing coefficients that are rational combinations of the a[i]
gives the original polynomial back again:
Factor[poly,Extension->Automatic] factor poly allowing algebraic numbers in poly to appear in coefficients
By default,
will not factor this polynomial:
Other polynomial functions work much like
. By default, they treat algebraic number coefficients just like independent symbolic variables. But with the option
they perform operations on these coefficients.
By default,
does not reduce these polynomials:
IrreduciblePolynomialQ[poly,ExtensionAutomatic] test whether poly is an irreducible polynomial over the rationals extended by the coefficients of poly
IrreduciblePolynomialQ[poly,Extension->{a[1],a[2],…}] test whether poly is irreducible over the rationals extended by the coefficients of poly and by a[1],a[2],…
IrreduciblePolynomialQ[poly,ExtensionAll] test irreducibility over the field of all complex numbers
A polynomial is irreducible over a field
if it cannot be represented as a product of two nonconstant polynomials with coefficients in
Over the rationals extended by
, the polynomial is reducible:
Over the rationals extended by
, the polynomial is reducible:
TrigExpand[expr] expand trigonometric expressions out into a sum of terms
TrigFactor[expr] factor trigonometric expressions into products of terms
TrigFactorList[expr] give terms and their exponents in a list
TrigReduce[expr] reduce trigonometric expressions using multiple angles
works on hyperbolic as well as circular functions:
The Wolfram System automatically uses functions like
whenever it can:
TrigToExp[expr] write trigonometric functions in terms of exponentials
ExpToTrig[expr] write exponentials in terms of trigonometric functions
writes trigonometric functions in terms of exponentials:
also works with hyperbolic functions:
does the reverse, getting rid of explicit complex numbers whenever possible:
deals with hyperbolic as well as circular functions:
You can also use
on purely numerical expressions:
The Wolfram Language usually pays no attention to whether variables like
stand for real or complex numbers. Sometimes, however, you may want to make transformations which are appropriate only if particular variables are assumed to be either real or complex.
The function
expands out algebraic and trigonometric expressions, making definite assumptions about the variables that appear.
ComplexExpand[expr] expand expr assuming that all variables are real
ComplexExpand[expr,{x[1],x[2],…}] expand expr assuming that the x[i] are complex
In this case,
is assumed to be real, but
is assumed to be complex, and is broken into explicit real and imaginary parts:
There are several ways to write a complex variable
in terms of real parameters. As above, for example,
can be written in the "Cartesian form"
Re[z]+I Im[z]
. But it can equally well be written in the "polar form"
Abs[z] Exp[I Arg[z]]
The option
allows you to specify how complex variables should be written.
can be set to a list of functions from the set
will try to give results in terms of whichever of these functions you request. The default is typically to give results in terms of
Nested logical and piecewise functions can be expanded out much like nested arithmetic functions. You can do this using
LogicalExpand[expr] expand out logical functions in expr
PiecewiseExpand[expr] expand out piecewise functions in expr
PiecewiseExpand[expr,assum] expand out with the specified assumptions
puts logical expressions into a standard
disjunctive normal form
(DNF), consisting of an OR of ANDs.
works on all logical functions, always converting them into a standard OR of ANDs form. Sometimes the results are inevitably quite large.
can be expressed as an OR of ANDs:
Any collection of nested conditionals can always in effect be flattened into a
piecewise normal form
consisting of a single
object. You can do this in the Wolfram Language using
Functions like
, as well as
, implicitly involve conditionals, and combinations of them can again be reduced to a single
object using
This gives a result as a single
assumed real, this can also be written as a
Functions like
, and
can also be expressed in terms of
objects, though in principle they can involve an infinite number of cases.
The Wolfram Language by default limits the number of cases that the Wolfram Language will explicitly generate in the expansion of any single piecewise function such as
at any stage in a computation. You can change this limit by resetting the value of
Simplify[expr] try various algebraic and trigonometric transformations to simplify an expression
FullSimplify[expr] try a much wider range of transformations
performs standard algebraic and trigonometric simplifications:
It does not, however, do more sophisticated transformations that involve, for example, special functions:
try to simplify expr, working for at most t seconds on each transformation
use only the functions f[i] in trying to transform parts of expr
use built‐in transformations as well as the f[i]
simplify using c to determine what form is considered simplest
In both
there is always an issue of what counts as the "simplest" form of an expression. You can use the option
to provide a function to determine this. The function will be applied to each candidate form of the expression, and the one that gives the smallest numerical value will be considered simplest.
With its default definition of simplicity,
leaves this unchanged:
The Wolfram Language normally makes as few assumptions as possible about the objects you ask it to manipulate. This means that the results it gives are as general as possible. But sometimes these
results are considerably more complicated than they would be if more assumptions were made.
Refine[expr,assum] refine expr using assumptions
Simplify[expr,assum] simplify with assumptions
FullSimplify[expr,assum] full simplify with assumptions
FunctionExpand[expr,assum] function expand with assumptions
by default does essentially nothing with this expression:
With the assumption
can immediately reduce the expression to 0:
By applying
with appropriate assumptions to equations and inequalities, you can in effect establish a vast range of theorems.
can prove that the equation is true:
always try to find the simplest forms of expressions. Sometimes, however, you may just want the Wolfram Language to follow its ordinary evaluation process, but with certain assumptions made. You can
do this using
. The way it works is that
performs the same transformations as the Wolfram Language would perform automatically if the variables in
were replaced by numerical expressions satisfying the assumptions
There is no simpler form that
can find:
just evaluates
as it would for any explicit negative number
An important class of assumptions is those which assert that some object is an element of a particular domain. You can set up such assumptions using
, where the
character can be entered as
assert that x is an element of the domain dom
{x[1],x[2],…}∈dom assert that all the x[i] are elements of the domain dom
patt∈dom assert that any expression that matches patt is an element of the domain dom
Complexes the domain of complex numbers
Reals the domain of real numbers
Algebraics the domain of algebraic numbers
Rationals the domain of rational numbers
Integers the domain of integers
Primes the domain of primes
the domain of Booleans (True
Booleans and
If you say that a variable satisfies an inequality, the Wolfram Language will automatically assume that it is real:
By using
, and
with assumptions, you can access many of the Wolfram Language's vast collection of mathematical facts.
The assumption
k/2 ∈ Integers
implies that
must be even:
The Wolfram Language knows about discrete mathematics and number theory as well as continuous mathematics.
In something like
, you explicitly give the assumptions you want to use. But sometimes you may want to specify one set of assumptions to use in a whole collection of operations. You can do this by using
Assuming[assum,expr] use assumptions assum in the evaluation of expr
$Assumptions the default assumptions to use
This tells
to use the default assumption
Functions like
take the option
, which specifies what default assumptions they should use. By default, the setting for this option is
. The way
then works is to assign a local value to
, much as in
In addition to
, a number of other functions take
options, and thus can have assumptions specified for them by
. Examples are
, and
The assumption is automatically used in | {"url":"https://reference.wolfram.com/language/tutorial/AlgebraicManipulation.html#16713","timestamp":"2024-11-04T05:45:16Z","content_type":"text/html","content_length":"334221","record_id":"<urn:uuid:87cae5a4-caf0-43f4-9f28-eb887097dd7e>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00656.warc.gz"} |
Technical Perspective: Combining Logic and ProbabilityTechnical Perspective: Combining Logic and Probability
A goal of research in artificial intelligence and machine learning since the early days of expert systems has been to develop automated reasoning methods that combine logic and probability.
Probabilistic theorem proving (PTP) unifies three areas of research in computer science: reasoning under uncertainty, theorem-proving in first-order logic, and satisfiability testing for
propositional logic.
Why is there a need to combine logic and probability? Probability theory allows one to quantify uncertainty over a set of propositions—ground facts about the world—and a probabilistic reasoning
system allows one to infer the probability of unknown (hidden) propositions conditioned on the knowledge of other propositions. However, probability theory alone has nothing to say about how
propositions are constructed from relationships over entities or tuples of entities, and how general knowledge at the level of relationships is to be represented and applied.
Handling relations takes us into the domain of first-order logic. An important case is collective classification, where the hidden properties of a set of entities depend in part on the relationships
between the entities. For example, the probability that a woman contracts cancer is not independent of her sister contracting cancer. Many applications in medicine, biology, natural language
processing, computer vision, the social sciences, and other domains require reasoning about relationships under uncertainty.
Researchers in AI have proposed a number of representation languages and algorithms for combining logic and probability over the past decade, culminating in the emergence of the research area named
statistical relational learning (SRL).
The initial approaches to SRL used a logical specification of a domain annotated with probabilities (or weights) as a template for instantiating, or grounding, a propositional probabilistic
representation, which is then solved by a traditional probabilistic reasoning engine. More recently, research has centered on the problem of developing algorithms for probabilistic reasoning that can
efficiently handle formulas containing quantified variables without grounding—a process called "lifted inference."
Well before the advent of SRL, work in automated theorem-proving had split into two camps. One pursued algorithms for quantified logic based on the syntactic manipulation of formulas using
unification and resolution. The other pursued algorithms for propositional logic based on model finding; that is, searching the space of truth assignments for one that makes the formula true. At the
risk of oversimplifying history, the most successful approach for fully automated theorem-proving is model finding by depth-first search over partial truth assignments, the
Davis-Putnam-Logemann-Loveland procedure (DPLL).
There is a simple but profound connection between model finding and propositional probabilistic reasoning. Suppose each truth assignment, or model, came with an associated positive weight, and the
weights summed to (or could be normalized to sum to) one. This defines a probability distribution. DPLL can be modified in a straightforward manner to either find the most heavily weighted model or
to compute the sum of the weights of the models that satisfy a formula.
There is a simple but profound connection between model finding and propositional probabilistic reasoning.
The former can be used to perform Maximum a Posteriori (MAP) inference, and the latter, called "weighted model counting," to perform marginal inference or to compute the partition function.
PTP provides the final step in unifying probabilistic, propositional, and first-order inference. PTP lifts a weighted version of DPLL by allowing it to branch on a logical expression containing
un-instantiated variables. In the best case, PTP can perform weighted model counting while only grounding a small part of a large relational probabilistic theory.
SRL is a highly active research area, where many of the ideas in PTP appear in various forms. There are lifted versions of other exact inference algorithms such as variable elimination, as well as
lifted versions of approximate algorithms such as belief propagation and variational inference. Approximate inference is often the best one can hope for in large, complex domains. Gogate and Domingos
suggest how PTP could be turned in a fast approximate algorithm by sampling from the set of children of a branch point.
PTP sparks many interesting directions for future research. Algorithms must be developed to quickly identify good literals for lifted branching and decomposition. Approximate versions of PTP need to
be fleshed out and evaluated against traditional methods for probabilistic inference. Finally, the development of a lifted version of DPLL suggests that researchers working on logical theorem proving
revisit the traditional divide between syntactic methods for quantified logics and model-finding methods for propositional logic.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2016 ACM, Inc.
No entries found | {"url":"https://acmwebvm01.acm.org/magazines/2016/7/204014-technical-perspective-combining-logic-and-probability/fulltext","timestamp":"2024-11-10T05:37:23Z","content_type":"text/html","content_length":"28313","record_id":"<urn:uuid:3ee5061c-bc82-4175-a013-0188de1f49d3>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00784.warc.gz"} |
How to Calculate Torque.
Torque also referred to as the moment of force, rotational force or turning effect, depending on the field of study is the rotational equivalent of linear force.
Whether you are using a wrench, riding a bicycle, driving a car or any other thing, there are many ways torque is making things happen around you.
SI unit for torque is Newton meter.
Formula to calculate torque.
θ is the angle between the force and the lever arm.
Suppose θ is 20° and the force applied is 300N while the radius is 2m. Calculate the torque.
Therefore, the torque is 205.21 N m. | {"url":"https://www.learntocalculate.com/how-to-calculate-torque/","timestamp":"2024-11-14T08:09:25Z","content_type":"text/html","content_length":"56205","record_id":"<urn:uuid:f2dd1af3-a493-45df-9d3c-dbeef22661e4>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00287.warc.gz"} |
21.2 grams to ounces
Convert 21.2 Grams to Ounces (gm to oz) with our conversion calculator. 21.2 grams to ounces equals 0.747807952 oz.
Enter grams to convert to ounces.
Formula for Converting Grams to Ounces:
ounces = grams ÷ 28.3495
By dividing the number of grams by 28.3495, you can easily obtain the equivalent weight in ounces.
Converting grams to ounces is a common task that many people encounter, especially when dealing with recipes, scientific measurements, or everyday activities. Understanding the conversion factor is
essential for accurate measurements. In this case, the conversion factor from grams to ounces is 1 ounce = 28.3495 grams. This means that to convert grams to ounces, you need to divide the number of
grams by 28.3495.
To convert 21.2 grams to ounces, you can use the following formula:
Ounces = Grams ÷ 28.3495
Now, let’s break down the calculation step-by-step:
1. Start with the amount in grams: 21.2 grams.
2. Use the conversion factor: 28.3495 grams per ounce.
3. Perform the division: 21.2 ÷ 28.3495.
4. The result is approximately 0.7484 ounces.
5. Rounding this to two decimal places gives you 0.75 ounces.
This conversion is particularly important as it bridges the gap between the metric and imperial systems, which are used in different parts of the world. For instance, many recipes in the United
States use ounces, while most other countries use grams. By converting between these units, you can ensure that your cooking or baking turns out perfectly, regardless of the measurement system used.
Practical examples of where this conversion might be useful include:
• Cooking: If a recipe calls for 0.75 ounces of an ingredient, knowing that this is equivalent to 21.2 grams can help you measure accurately.
• Scientific Measurements: In laboratories, precise measurements are crucial. Converting grams to ounces can help in experiments where both metric and imperial units are used.
• Everyday Use: Whether you’re weighing food, measuring out supplements, or even calculating postage, understanding how to convert grams to ounces can simplify your tasks.
In conclusion, converting 21.2 grams to ounces is a straightforward process that can enhance your accuracy in various applications. By mastering this conversion, you can navigate between metric and
imperial systems with ease, making your cooking, scientific work, and daily activities more efficient.
Here are 10 items that weigh close to 21.2 grams to ounces –
• Standard Paperclip
Shape: Elongated oval
Dimensions: 28 mm x 0.8 mm
Usage: Used for holding sheets of paper together.
Random Fact: The paperclip was patented in 1867, but its design has remained largely unchanged since then.
• AA Battery
Shape: Cylindrical
Dimensions: 50.5 mm x 14.5 mm
Usage: Commonly used in various electronic devices like remote controls and toys.
Random Fact: An AA battery can power a small flashlight for up to 10 hours.
• Golf Ball
Shape: Spherical
Dimensions: 42.67 mm in diameter
Usage: Used in the sport of golf, designed for optimal aerodynamics.
Random Fact: A golf ball typically has 336 dimples, which help reduce air resistance.
• USB Flash Drive
Shape: Rectangular
Dimensions: 60 mm x 20 mm x 10 mm
Usage: Used for data storage and transfer between devices.
Random Fact: The first USB flash drive was released in 2000 and had a storage capacity of just 8 MB.
• Standard Dice
Shape: Cubic
Dimensions: 16 mm x 16 mm x 16 mm
Usage: Used in various games for generating random numbers.
Random Fact: Dice are believed to be one of the oldest gaming implements, dating back to 3000 BC.
• Small Key
Shape: Irregular with a long shaft
Dimensions: 50 mm x 20 mm
Usage: Used for locking and unlocking doors or containers.
Random Fact: The oldest known key dates back to ancient Egypt, made of wood and used for locking doors.
• Tea Bag
Shape: Rectangular pouch
Dimensions: 6 cm x 4 cm
Usage: Used for brewing tea by steeping in hot water.
Random Fact: The first tea bags were made of silk and were introduced in the early 1900s.
• Postage Stamp
Shape: Rectangular
Dimensions: 25 mm x 20 mm
Usage: Used for mailing letters and packages.
Random Fact: The first adhesive postage stamp, the Penny Black, was issued in the UK in 1840.
• Small Rubber Eraser
Shape: Rectangular
Dimensions: 25 mm x 15 mm x 10 mm
Usage: Used for removing pencil marks from paper.
Random Fact: The first rubber eraser was invented in 1770 by Edward Nairne, who used a piece of rubber to erase pencil marks.
• Button Cell Battery
Shape: Circular
Dimensions: 20 mm in diameter
Usage: Commonly used in watches, calculators, and small electronic devices.
Random Fact: Button cell batteries are named for their shape, resembling a button.
Other Oz <-> Gm Conversions – | {"url":"https://www.gptpromptshub.com/grams-ounce-converter/21-2-grams-to-ounces","timestamp":"2024-11-13T02:37:10Z","content_type":"text/html","content_length":"186932","record_id":"<urn:uuid:1366ee0a-db36-4995-802f-9c6a001b8f1c>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00490.warc.gz"} |
Calculating Return on Investment
AUTHOR’S NOTE: When this article was originally published twenty years ago, there was a growing recognition in the loss prevention community that we needed to better support our proposed expenditures
and budget, but the level of financial fluency in our profession was not well-established. Many practitioners had come from a law enforcement or military background and very few came up the ranks on
a purely business path. Unlike today, there were few individuals that had a background in operations or finance or analytics. And there was a dearth of available literature that talked about ROI
(return on investment) in the context of security and loss prevention.
This article is a primer on three of the most common methods that companies and, especially, financial executives use to evaluate the viability of proposed expenditures. As is noted in this article,
having a basis for evaluating competing budget requests is an absolute necessity as no company has unlimited budgets, even if every single project is positive in terms of ROI. Make no mistake, you
are competing for funds against many other worthy projects.
The dollars used in the example look quite small twenty years on, but the mechanics of the calculations have not changed. If you don’t already know how the finance group is going to evaluate your
proposed spending, this would be a good starting place to understand the nomenclature and thought process.
Over the past three decades, ROI, profit center, cost center, and value-added have become significant buzzwords in the loss prevention industry. While not unique to our industry—these concepts are
also important in other staff functions, such as information technology and finance—I can’t imagine ROI has gotten any more attention than it has in our industry. Most LP professionals agree this is
a step in the continuing evolution toward higher levels of involvement in our organizations, increased professionalism of our industry, and a more business-oriented mindset.
However, based on my observations and discussions, I believe there is a lack of understanding about calculating the ROI of loss prevention solutions. Most people who say “we are a value-added
function of the company and make decisions based on ROI” don’t know an NPV from an IRR.
Is it necessary to know the actual mechanics of these financial measures to understand the concept? Absolutely not.
Will it improve our ability to make good capital investment decisions, defend our projects to the CFO during budget approval, and give us greater confidence in the application of the ROI concept?
What Is Capital Budgeting?
The term “capital budgeting” refers to the process of planning expenditures that will generate income (or savings) that is expected to flow into the organization over a multi-year period. In other
words, capital budgeting is a way to decide where to spend your money when considering long-term projects. This contrasts with “expense budgeting,” which is deciding on where to spend money for
short-term, day-to-day expenses.
There are four steps in capital budgeting:
1. Determining the initial costs of the project,
2. Estimating incremental cash flow,
3. Financial analysis of the project, and
4. Selection of most favorable projects.
Determining Initial Costs
This step is probably the simplest part of your analysis. It includes the invoice price of new items (machinery, equipment, services, etc.), any sales taxes, and the additional expenses associated
with the implementation of the project, such as packing, delivery, installation, and inspection. The justification for including all the costs indicated, rather than recording some of them as
expenses, is that every one of these costs is an integral part of making the asset usable by the organization.
Once you know how much it costs to implement a project, you can compare the initial investment with future benefits and make a judgment as to whether the project is worth undertaking.
Estimating Incremental Cash Flow
The cash flow statement is a way of systematically estimating the financial benefits of your project over its useful life. In the LP industry, the cash flow is usually in the form of savings through
shrinkage reduction or risk avoidance.
Your cash flow projection should only include those estimated cash inflows and outflows that are directly related to the project itself. As a starting point, you can use the cash flow projections
that currently exist for your business, simply adding in the changes that you expect the project to bring. Then you can compare your original statement (without the project) to your new statement
(with the project), to gauge the likely results of moving forward with your plans.
Let’s look at an example of a simplified cash flow projection for a sample LP-related project. Let’s say that you are thinking of purchasing new equipment, such as electronic article surveillance
tags, that have a useful life of five years. You believe buying and installing this equipment will lower shrinkage significantly. The equipment will cost $75,000 to purchase, install, and bring into
use. This is your initial cost.
The next step is to construct a cash flow statement, as illustrated in Table 1 on page 39.. We begin with what we know or have planned. The store we are going to use this equipment in currently does
$10 million in sales per year and is budgeted to increase sales by 2 percent each year. The store’s historical retail shrinkage performance has consistently been 3.0 percent of sales, and you believe
this will continue if you do not purchase this new equipment. Therefore, 3.0 percent will be used as our baseline shrinkage figure.
Next, you project the income or savings you expect if you invest in this project. Based on your knowledge of this equipment or your experience with it when you’ve installed it in similar situations,
you are confident it will reduce shrinkage by 25 percent and keep it there. Therefore, the savings from this project will be the difference between what shrinkage would have been over the next five
years (baseline) and the estimate of the new shrinkage rate, which is 25 percent lower. Since we look at shrinkage as a cost, we need to convert the savings at retail to cost by multiplying by our
cost-to-retail ratio.
To maintain this equipment, certain expenses will be incurred each year. This could be service, maintenance, replacement of components, or upgrades. We need to reduce our projected savings by these
expenses. For purposes of this analysis, let’s assume it will take $10,000 in expenses to administer the program, and that number will increase in proportion to sales performance.
The next step is one that many people miss—the impact of taxes. We need to calculate the impact of taxes on our estimated cash flow because, if our previous projection comes true and we really do
increase the bottom line for this store as a result of the project, Uncle Sam will want his fair share of the additional profits. This is usually 34 percent.
But before we calculate the tax amount, we need to factor in the favorable tax impact of depreciation. Depreciation is the allocation, for accounting and tax purposes, of the purchase costs of fixed
assets over several years. All costs incurred in acquiring an asset and getting it up and operating are depreciable, including actual price, taxes, broker’s fees, labor, and freight. This allows you
to avoid paying taxes on the depreciable figure, thus lowering your tax burden.
Calculation of depreciation can be a topic in its own right as there are a number of different ways that accountants use to determine depreciation, such as accelerated cost recovery system,
straight-line, sum of the year’s digits method, and the double declining balance method. For the purposes of our discussion and analysis, I recommend the use of the straight-line depreciation method.
This method of depreciation assumes the same amount of expense will be allocated in each year of the asset’s useful life and is calculated by dividing the initial cost of the asset by its useful
life, in this case, $15,000 per year over five years.
Finally, once you’ve calculated the tax impact, you add the depreciation figure back in, since it is an accounting device only, that is, you didn’t have to spend an additional $15,000 on the project
each year. Figure 1 shows the cash flow statement we just constructed.
One thing to note is that a number of assumptions have been made, such as baseline shrinkage rate, amount of reduction, and sales plan. Don’t let anyone fool you—capital budgeting and financial
analysis are not exact sciences. The results you actually achieve will not match your analysis exactly. However, if you make good assumptions, your analysis will be directionally correct.
Now that we’ve created a projected cash flow statement for your project, we can use some financial analysis tools to see whether the project makes sense.
Financial Analysis of Implementing LP Solutions
At the simplest level, you’ll want to make sure that the total costs of any major project you undertake are less than the total benefits resulting from the project. You could simply add up the costs,
then add up the expected revenue increases and cost savings over the next few years and compare the two. However, if you did that, you’d be ignoring the fact that many of the costs will be incurred
at the beginning of the project, while many of the revenues or cost savings will occur later, over a period of months or years.
There are several more formal ways to evaluate the costs or benefits that a major purchase or project will bring to your company. The most used are the payback period, net present value (NPV), and
internal rate of return (IRR) methods.
Each of these methods has its advantages and drawbacks, so generally, multiple methods are used for any given project. In addition, no financial formula or combination of formulas should be used to
the exclusion of common sense. For example, some loss prevention solutions may “fail” your tests under some or all these methods, but you might decide to go forward with them anyway because of their
value as part of your long-range business plan.
Payback Period Analysis
The payback period method is the simplest way of looking at one or more major project ideas. It simply tells you how long it will take to earn back the money you’ll spend on the project. The formula
is Cost of Project ÷ Annual Cash Inflow = Payback Period.
Thus, if a project cost $75,000 and was expected to return $20,000 annually, the payback period would be $75,000 ÷ $20,000 = 3.75 years.
If the return from the project is expected to vary from year to year, you can simply add up the expected returns for each succeeding year, until you arrive at the total cost of the project. For
example, in our previous cash flow example, the project costs $75,000 and the expected returns are shown in Table 2 above.
The project would be completely paid for in approximately two years and nine months, because $75,000 (cost of project) is equal to all of the first two years’ revenues, plus $21,138, which is equal
to about 76 percent of the third year’s revenues.
Under the payback method of analysis, loss prevention solutions with shorter payback periods rank higher than those with longer paybacks. The theory is that projects with shorter paybacks are more
liquid, and thus less risky. They allow you to recoup your investment sooner, so you can reinvest the money elsewhere. Moreover, with any project, there are a lot of variables that grow fuzzy as you
look out into the future. With a shorter payback period, there’s less chance that market conditions, interest rates, the economy, or other factors affecting your project will drastically change.
Generally, a payback period of three years or less is preferred.
There are a couple of drawbacks to using the payback period method. For one thing, it ignores any benefits that occur after the payback period, so a project that returns $1 million after a six-year
payback period is ranked lower than a project that returns $10,000 after a five-year payback. But the major criticism is that a straight payback method ignores the time value of money. To get around
this problem, you should also consider the NPV and IRR of the project.
Net Present Value
The NPV method of evaluating a major project allows you to consider the time value of money. Essentially, it helps you find the value in “today’s dollars” of the future cash flows of a project. Then,
you can compare that amount with the amount of money needed to implement the project.
If the NPV is greater than zero, the project will be profitable for you (assuming, of course, that your estimated cash flow is reasonably close to reality). If you have more than one project on the
table, you can compute the NPV of both, and choose the one with the greatest difference between NPV and cost.
In order to calculate the value of the future cash flows in today’s dollars, you need to know your cost of capital. What is your cost of capital for purposes of analyzing a major purchase decision?
In simplest terms, it is the cost of the money you’ll use to make your purchase. Where will you get the $75,000 that is needed for your project?
If you are planning to finance the purchase and you know what the interest rate on the loan would be, the answer is simple. The rate charged on the loan can be considered the cost of capital for the
project. Therefore, if the loan rate were 10 percent, that would be your cost of capital.
If you are not financing your loss prevention solutions, there are still costs involved in the form of equity cost to your company. This is the basis behind several financial formulas such as
weighted average cost of capital (WACC) and accelerated present value (ACP). The good news is that you usually don’t have to determine the cost of capital for yourself. Most companies set a
predetermined cost of capital for all capital projects that you can get from the finance department.
How do you compute NPV? The easiest way is to use a good financial calculator (you can find these online). If you don’t want to take the time to learn how to use one, you can use a present value
table. Note that whenever you do time value of money calculations to find a present or future value (such as NPV), you’ll need to specify a cost of capital rate, also known as a discount factor (DF).
The DF is used to reduce the cash flows from future years into today’s dollars.
Table 3 below takes the cash flows we developed for our $75,000 project and shows the appropriate DF. We have used a 12-percent discount rate or cost of capital. By multiplying the cash flow times
the DF, we arrive at the present value (PV) of the future cash flows. This represents the future savings in terms of today’s dollars.
You can see that the sum of the cash flows is $137,985, but when discounted at 12 percent, it totals only $99,120. By subtracting out our original $75,000 investment, we find the NPV is $24,120. This
means that we would add that amount to the company’s bottom line over the course of the next five years by making this purchase.
The IRR Method
The IRR method of analyzing a major purchase or project also allows you to consider the time value of money. Instead of expressing the result in dollars, as the NPV does, it expresses results in
terms of a rate. Once you know the rate, you can compare it to the rates you could earn by investing your money in other loss prevention solutions or investments.
If the IRR is less than the cost of capital used to fund your project, the project will clearly be a money loser. However, it’s not uncommon for a business to insist that to be acceptable, a project
must be expected to earn an IRR that is several percentage points higher than the cost of capital to compensate the company for its risk, time, and trouble associated with the project.
How do you compute the IRR? You can also use the PV table. One problem with the IRR is that it can be difficult and time-consuming to calculate, especially if the expected cash inflows vary greatly
from year to year. But the basic premise is that the IRR is the cost of capital that would make the NPV for the project equal to zero. In the example we have been working with, the IRR is 24 percent.
If we were to go back to our NPV analysis and use that figure for the cost of capital, the NPV would equal zero.
The decision rule calls for comparing the IRR from the proposed project to the cost of capital. If IRR exceeds the cost of capital, the project is accepted, while if the IRR is less than the cost of
capital, the project is rejected. If capital is scarce, the firm can rank projects based on their IRR (cost of capital) ratio, or if the various projects have similar or identical cost of capital,
the firm can simply rank IRR.
Choosing Among Alternative LP Solutions
Of the three methods of analyzing a major purchase, which is the best? While the payback period method is the easiest to compute, most accountants would prefer to look at the net PV and the IRR.
These methods take into consideration the greatest number of factors, and in particular, they are designed to allow for the time value of money. If the net PV is negative, or if the IRR is less than
the cost of capital, the project should be rejected as not financially feasible (unless the project is one that’s required by law, such as a safety upgrade).
Occasionally, when you’re looking at several loss prevention solutions that are competing for your time and money, the NPV and IRR methods will yield different answers to the question “Which option
is best?” The issue is not whether to reject certain solutions when NPV is negative, or IRR is less than the cost of capital. The issue is project selection when capital is scarce, and so the firm
must rank alternative loss prevention solutions and select the best. However, these different tools can yield different project rankings. In particular:
• NPV leads to the highest ranking for large, profitable projects.
• IRR can overstate the attractiveness of highly profitable projects because the IRR algorithm assumes that excess cash flows are reinvested in the project and yield an IRR return, which may not be
plausible. In addition, IRR and NPV can yield ranking differences for projects that differ substantially in terms of the magnitude and timing of cash flows.
• Conflicts over ranking differences are usually decided in favor of NPV.
If you find that understanding ROI and capital budgeting is still a bit confusing, don’t worry. There are many subtleties and aspects to the financial concepts presented here. Use this article as a
stepping off point for further study. Ask your peers in the finance department about the methods they use when evaluating capital projects, what the organization’s cost of capital is, and what
evaluation criteria they use for saying “yes” to a project. As a result, I believe you will find greater success in evaluating and selling your loss prevention solutions and programs to
senior management.
is executive vice president of CAP Index. He has been an industry insider and advocate for thirty-five years with experience in asset protection, operations, inventory control, and strategic
consulting. Palmer works with top retailers, academics, asset protection practitioners, and industry associations in the US and around the world to advance the profession, examine key trends and
issues, and support their key goals and objectives. He can be reached at wpalmer@capindex.com. | {"url":"https://losspreventionmedia.com/calculating-roi/","timestamp":"2024-11-02T02:06:38Z","content_type":"text/html","content_length":"414001","record_id":"<urn:uuid:6c77463b-cf16-44c2-8048-8d56f9506f1c>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00088.warc.gz"} |
How To Calculate Power Consumption In Kwh
How to Calculate Power Consumption in kWh
Calculating power consumption in kWh (kilowatt-hour) is essential for managing energy usage and determining electricity costs. Whether you are a homeowner trying to measure your household energy
consumption or a business owner aiming to control expenses, this guide will help you understand the process of calculating power consumption.
Understanding Power and Energy
Before diving into the calculation, it's important to understand the concepts of power and energy. Power refers to the rate at which energy is consumed or produced, measured in watts (W) or kilowatts
(kW). Energy, on the other hand, is the total amount consumed over a given period, measured in kilowatt-hours (kWh).
Imagine power as the speed at which a car is traveling, and energy as the distance covered by that car. The faster the car (higher power), the more distance it can cover (higher energy).
Calculating Power Consumption in kWh
To calculate power consumption in kWh, you will need two primary pieces of information:
1. Power Rating of the Device: You can locate the power rating on the device or its user manual. It is usually mentioned in watts (W) or kilowatts (kW).
2. Usage Time: Determine the number of hours the device is in use during a specific period. This can be a single day, a month, or any desired timeframe.
Once you have these details, follow these steps:
Step 1: Convert the power rating to kilowatts. If the power rating is given in watts, divide it by 1000 to obtain the value in kilowatts.
Step 2: Multiply the power rating (in kilowatts) by the duration of its use (in hours) during the specified timeframe. The result will be in kilowatt-hours (kWh), representing the energy consumption
for that device during that period.
For example, let's say you have a light bulb rated at 60W and it's used for 5 hours a day:
Step 1: Convert the power rating to kilowatts: 60W ÷ 1000 = 0.06kW.
Step 2: Calculate the energy consumption: 0.06kW × 5 hours = 0.3kWh per day.
Keep in mind that this calculation provides the energy consumption for a single device. To determine the total power consumption of multiple devices combined, repeat the steps for each device and sum
up all the results.
Frequently Asked Questions
Q: What is the difference between kW and kWh?
A: kW (kilowatt) is a unit of power, representing the rate at which energy is consumed or produced. kWh (kilowatt-hour) is a unit of energy, representing the total amount consumed over a specific
period. It is obtained by multiplying the power rating (in kW) by the duration of use (in hours). | {"url":"https://www.quickpickdeal.com/blog/energyusage/how-to-calculate-power-consumption-in-kwh","timestamp":"2024-11-10T08:18:11Z","content_type":"text/html","content_length":"36698","record_id":"<urn:uuid:18d6c169-b9c3-4015-9d0a-ecab32f73a21>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00368.warc.gz"} |
A classical Greek mathematician Euclid formulated a statement about two parallels intersection [1] in the third century B.C. However, the failure of the efforts to apply purely geometric images and
premisses for its deduction as a theorem has made Euclid refer the statement to the number of axioms.
The efforts to prove the fifth axiom took two thousand years. Only by 20-30s of XIX century thanks primarily to the works of Lobachevsky N.I. and also Boyom J., Gauss K. and others [2] a new
non-Euclidian geometry - hyperbolic geometry of Lobachevsky - was created [2, 5]. For this it was necessary to give up on the fifth axiom and to offer instead of it the fact that through a point out
of a given line pass at least two lines parallel to the given one.
The angular sum of a big enough triangle constructed on three parallel lines can have as small as we please angular sum (less than 180° (geometry of Lobachevsky) or more than 180° (spherical geometry
of Riemann)). In small dimensionality domains such a geometry is almost undistinguishable from Euclidian one.
Therefore, at present the statement of Euclid has not been proved yet. It is clear that it is possible to make infinitely many geometries as well as logical systems. Only one thing remains unclear -
which of them is realized in the surroundings? And aren´t some of them realized within the frames of Euclidian geometry itself, i.e. in geometrically small dimesion?
The purpose of the given work is to prove the statement of Euclid as a theorem on the ground of the old system of postulates and axioms (synonyms). For this we formulated the statement of Euclid in
the form of a theorem: every time when a line being intersected by two other lines forms inside angles with them the sum of which is equal to two right ones, these lines are crossed from that very
side from which these angles are altered equally.
Deduction. Let us assume that in accord with Euclidian axiom the distance between crossing points A and B of a line with two other lines a and b with inside angles α and β equal to the sum of the two
lines is 1m. Let us assume that the lines a and b are crossed somewhere in the point C (Picture1) and in the formed right triangle ABC whose angles α and β are almost right (89º 59´ 59"), the apical
C angle γ is one angle second altered (1"). Then the right angle side AC or BC has approximately the distance (equation 1):
АС= 1/tg γ ≈ 2.06 x 10^5 м. (1)
However, both noted above logical reasoning and numerous other ones [2-5] are very little applied to the deduction of the Euclid statement as a theorem. To our opinion, the main cause of it - is the
scholastic attempts of application purely geometric images and premises for it.
For the Euclid´s statement deduction let us construct a cone ABDC by rotating the right triangle ABC (picture.1) near one of its sides and match the central points XYZ with the point C (Picture 2).
Let us assume that upon the condition of a zero alteration of the inside angles of the lines AB, AD and BD, the squared distance of the right angles´ sides AC, DC and BC (dl)^2 is equal to [2]:
(dl)^2= dx^2= dy^2= dz^2, (2)
where dx, dy, dz are the differentials of the coordinates.
Let us consider the processes really taking course in the Dekart coordinates of the Euclidian space. For example, light-induced reactions, diffusion bounded processes in the shot of a multienzyme
complex, and others [6-8], in which the number of successful collisions leading to final products for an average time ∆t= 10^-10sec is 10^10, are referred to such ones.
Picture 1. When the lines a and b are less than one angle second altered the length of the right angle side AC grows up to astronomic sizes relative to the side AB. At more than 1" alteration of the
lines a and b the length of the side AC or BC, vice versa, reduces.
Picture2. Through the three points A, B and C on a circle go three parallel lines a, b and c with the distance of 1 m from each other. Let us assume that in every of the three inside opposite angles
α, β and ξ of the right triangles ACD, ACB and DCB a two angle seconds (2") less than 90º circle perimeter alteration takes place. Then the sum of all the three inside angles α, β and ξ will be six
angle seconds (6") less than 360º (3), and the apical C angle of the right cone is equal γ = 6".
Operating with these conditions the hypothetical area of the circle perimeter of the cone CABD can be calculated from the formula (3):
(2πR)^10 х 1/tgγ, или 10^10^lg^2^πR. 1/tgγ= 0,3218 х 10^10 м^2, (3)
where the cone base radius R= 0,5 м, γ= 6".
The analysis of the formula (3) points to a flexible character of the alteration of the inside angles of the intersection lines AB, AD and BD with the lines a, c and b. Because of different geodesic
flat distribution of the last, the chance of the intersection of a, c and b in the scale of three-dimentional space (Picture 3) is extremely ignoble.
A great number of chemical processes taking course while forming geometric space in living systems are known in literature [8]. The rate of these processes behavior is determined exceptionally by the
collision frequency of the reacting elements [6-8].
However, in a respectively large scale of XYZ-coordinates the chance of the particles´ collision is so ignoble that it can be taken for1. But changing from a three-dimensional system to a
two-dimensional system of coordinates the collision frequency of the reacting particles grows tenfold, and to a one-dimensional system - 100-fold [8].
It is known that diffuse reactions have got three-dimensional distribution. For example, 10^-10 mols of adenosine triphosphate (ATP), adenosine diphosphate (ADP) and phosphate (Pi) mixture are
distributed in the body in the following concentrations:[Pi]= 10^-2mol, [АТР]= 10^-3 mol and [АDP]= 10^-5 mol [9]. The content of ADP normally is always more than that of ATP.
From about 100 000 encoding and they are distributed in the three-dimensional system of coordinates in molecular ratios as 10^-5 х 10^-3 х 10^-2.
The analysis of the expressing genes number depending on their specialization shows that about 50% of all informative (template) mRNA of an animal and human body cell is represented by one kind of
the mRNA; about 35% - by a wide range of the mRNA kind and 15% - by not more than 7-8 mRNA kinds [10].
The first mRNA fraction (housekeeping) serves as templates for cellular protein synthesis. The rest two mRNA fractions are referred to "luxury" genes and provide only specialized functions.
Then, from the given above data and the fact that the Universe is asymmetrical in itself [6, 11], one can conclude: the circle perimeter area of the cone CABD in 0,32 х 10^10 m^2 (3) in the
three-dimensional space is distributed asymmetrically. The change from a three-dimensional system to a two-dimensional and further to one-dimensional system of coordinates is attended both with the
alteration increase and matching of the inside angles of the line intersection with the other two lines.
dx- dy: ^х10^5 м^2;
dx- dz: ^х10^3 м^2;
dy- dz: ^х10^2 м^2.
It is meant that the chance of the intersection of the lines a, c and b increases considerably with the area volume reduction in the space of Euclidian, but not non-Euclidian geometry.
It is also follows from the fifth postulate that: if the lines a and b, at an intersection with a third line, form inside angles with it, the sum of which is less than 180º, these lines will
certainly be intersected from that very side of the line, with which this angle sum is less than one of two right angles.
This statement of Euclid bears purely logic loading which has no geodesic substantiation. For example, let us assume the line AB (Picture3) as an X-axis, and α- and β- as the line inclinations to the
X-axis. Then, from the values of slope ratio of the lines a and b to the X-axis (АВ) equal to
tg α = -2,1445 and tg β= 0,58,
one can conclude that the lines a and b cannot be intersected as they lie in different angular coordinates.
Picture 3. The lines a and b cannot be intersected in space because of different geodesic coordinates of distribution.
In geometries of Lobachevsky and Riemann the squared distance between nearly points (х^1, х^2) и (х^1+dx^1, x^2+dx^2) is determined by the congruence (4):
(dl)^2 = δ[ik](x) dx^i dx^k (4)
where δ[ik ]is a template tensor, defining the structure of geometry.
The study of the δ[ik]-dependence on the coordinates allows proving that the space of finite extent, but having no limits, possesses curvature. The indexes i and k (i= k= 1, 2, 3...) exactly rest on
different disturbance values of the inside angles of the triangle [2].
In its turn, in the introduced by us deduction of the Euclidian statement about parallel lines intersection (dl)^2^ from the point х^1, х^2 to х^1+dx^1, x^2+dx^2 also possess curvature, but its
geodesic parameters are the straight lines (5)
(dl)^2 = (dx^1)^2 + (dx^2)^2, (5)
as δ[ik]-template tensor of the Euclidian space is exclusively defined by equal flat alteration angles, which was to be proved.
It is important to emphasize that in the present work the question is about a new geometry which is really implemented in the close round us two- and three-dimensional Euclidian space.
1. Euclid "Beginnings", III century B.C., book IX, theorem 20.
2. Logunov A.A. "New idea of space, time and gravitation". "Science and Mankind", "Znaniye", M., 1988, pp.173-185).
3. Mathematiccal encyclopedia (red. Vinogradov Yu.V. Soviet Encyclopedia, Moscow, 1982, pp. 398-405).
4. Mathematical encyclopedic dictionary (red. Prokhorov Yu.V. Soviet Encyclopedia, Moscow, 1988, pp. 324-327).
5. Shirikov P.A. Brief foundations of Lobachevsky geometry, (the 2-nd edition, Moscow, 1983).
6. Karava Ya. Biomembranes (Higher School, Moscow, 1985, p.234).
7. Dewar M. I. S., Dougherty R. C. The PMO Theory of Organic Chemistry. A Plenum (Rosetta Edition, New York, 1975).
8. Moelwyn- Hughes E. A. The Chemical statics and kinetics of Solutions (Academic press, London- New York, 1971).
9. Alberts B. A., Bray D., Lewis I., Roberts K., Watson I. D. Molecular Biology of the Cell (Garland Publishing, Inc. New York- London, 1983).
10. Lewin B. Cell (I. Wiley and Sons, New York- Toronto- Singapore, 1983).
11. Adair R. The Great Design: Particles, Fields and Creation (Oxford University Press, 1987). | {"url":"https://world-science.ru/en/article/view?id=20595","timestamp":"2024-11-05T16:04:05Z","content_type":"text/html","content_length":"29976","record_id":"<urn:uuid:b8719db2-5b29-44fa-b325-9d5e8031aaf5>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00747.warc.gz"} |
1964 Proof Coin Set: Uncovering Its Value
Proof coins are special coins that are struck multiple times on a highly polished planchet or blank. This process creates a mirror-like finish and sharp details. As a result, proof coins are often
more valuable than their business strike counterparts.
Editor’s Notes: Understanding the factors that affect the value of proof coins will enable consumers to make informed decisions.
After analyzing the market trends, interviewing experts, and gathering data, we put together this guide to help you determine the value of proof coins.
Key Takeaways:
1964 Proof Coin Set
Mintage: 2,884,599
Composition: Silver
Value: $100-$200
1964 Proof Coin Set
The 1964 Proof Coin Set was released by the United States Mint in 1964. The set includes five coins: a dime, a quarter, a half dollar, a dollar, and a five-dollar coin. The coins are made of silver
and have a proof finish. The set was originally sold for $5.95. Today, the 1964 Proof Coin Set is worth between $100 and $200. Several factors affect the value of a proof coin set, including the
condition of the coins, the mintage, the rarity, and the demand.
1964 Proof Coin Set Value
When evaluating the value of a 1964 proof coin set, several key aspects come into play:
• Condition: The condition of the coins is a major factor in determining their value. Coins that are in mint condition will be worth more than those that are damaged or worn.
• Mintage: The mintage of a coin refers to the number of coins that were produced. Coins that were produced in smaller quantities will be worth more than those that were produced in larger
• Rarity: The rarity of a coin refers to how difficult it is to find. Coins that are rare will be worth more than those that are common.
• Demand: The demand for a coin refers to how many people want to own it. Coins that are in high demand will be worth more than those that are not.
• Strike: The strike of a coin refers to the quality of the impression on the coin. Coins that have a sharp strike will be worth more than those that have a weak strike.
• Luster: The luster of a coin refers to the shine on the coin’s surface. Coins that have a bright luster will be worth more than those that have a dull luster.
• Originality: The originality of a coin refers to whether or not it has been altered or cleaned. Original coins will be worth more than those that have been altered or cleaned.
• Certification: A certified coin is a coin that has been examined and authenticated by a professional coin grading service. Certified coins are worth more than uncertified coins.
These are just some of the key aspects that can affect the value of a 1964 proof coin set. By considering these factors, you can get a better idea of how much your set is worth.
The condition of a coin is one of the most important factors in determining its value. This is especially true for proof coins, which are struck multiple times on a highly polished planchet or blank.
Proof coins that are in mint condition will have a mirror-like finish and sharp details. Coins that are damaged or worn will be worth less than those that are in mint condition.
• Facet 1: Strike
The strike of a coin refers to the quality of the impression on the coin. Coins that have a sharp strike will be worth more than those that have a weak strike. A weak strike can occur when the
coin is not struck with enough force, or when the die is worn.
• Facet 2: Luster
The luster of a coin refers to the shine on the coin’s surface. Coins that have a bright luster will be worth more than those that have a dull luster. Luster can be affected by a number of
factors, including the composition of the coin, the way it was struck, and the way it has been stored.
• Facet 3: Scratches and Dings
Scratches and dings are the most common type of damage that can occur to a coin. Scratches can be caused by contact with other coins or objects, while dings can be caused by being dropped or hit.
Scratches and dings will reduce the value of a coin, especially if they are on the obverse or reverse of the coin.
• Facet 4: Environmental Damage
Environmental damage can also occur to a coin, such as toning, spotting, and corrosion. Toning is a natural process that can occur over time, and it can actually add value to a coin. Spotting and
corrosion, however, are caused by exposure to moisture or other harmful substances, and they will reduce the value of a coin.
By understanding the condition of a coin, you can get a better idea of how much it is worth. This is especially important for proof coins, which can be worth a significant amount of money.
The mintage of a coin is an important factor in determining its value. This is especially true for proof coins, which are produced in limited quantities. The 1964 proof coin set was produced in a
mintage of 2,884,599. This is a relatively low mintage, which means that these coins are more valuable than proof coins that were produced in larger quantities.
• Facet 1: Supply and Demand
The mintage of a coin affects its value by influencing the supply and demand for the coin. Coins that are produced in smaller quantities are more scarce, which means that there is more demand for
them. This increased demand drives up the value of the coin.
• Facet 2: Rarity
The mintage of a coin also affects its rarity. Coins that are produced in smaller quantities are more rare, which means that they are more valuable. Rarity is one of the key factors that
determines the value of a coin.
• Facet 3: Historical Significance
The mintage of a coin can also be affected by historical events. For example, coins that were produced during wartime are often more valuable than coins that were produced during peacetime. This
is because wartime coins are often produced in smaller quantities and they may have historical significance.
• Facet 4: Condition
The condition of a coin can also affect its value. Coins that are in mint condition are more valuable than coins that are damaged or worn. This is because mint condition coins are more rare and
they are more desirable to collectors.
By understanding the mintage of a coin, you can get a better idea of how much it is worth. This is especially important for proof coins, which can be worth a significant amount of money.
The rarity of a coin is one of the most important factors that determines its value. This is especially true for proof coins, which are produced in limited quantities. The 1964 proof coin set was
produced in a mintage of 2,884,599. This is a relatively low mintage, which means that these coins are more valuable than proof coins that were produced in larger quantities.
• Facet 1: Supply and Demand
The rarity of a coin affects its value by influencing the supply and demand for the coin. Coins that are rare are more scarce, which means that there is more demand for them. This increased
demand drives up the value of the coin.
• Facet 2: Collector Interest
The rarity of a coin can also affect its value by influencing the interest of collectors. Coins that are rare are often more sought-after by collectors, which can drive up their value.
• Facet 3: Historical Significance
The rarity of a coin can also be affected by historical events. For example, coins that were produced during wartime are often more rare than coins that were produced during peacetime. This is
because wartime coins are often produced in smaller quantities and they may have historical significance.
• Facet 4: Condition
The condition of a coin can also affect its rarity. Coins that are in mint condition are more rare than coins that are damaged or worn. This is because mint condition coins are more difficult to
By understanding the rarity of a coin, you can get a better idea of how much it is worth. This is especially important for proof coins, which can be worth a significant amount of money.
The demand for a coin is one of the most important factors that determines its value. This is especially true for proof coins, which are produced in limited quantities. The 1964 proof coin set is in
high demand among collectors, which is one of the reasons why it is so valuable.
• Facet 1: Collector Interest
The demand for a coin is often driven by collector interest. Coins that are popular among collectors are more likely to be in high demand, which can drive up their value. The 1964 proof coin set
is a popular collector’s item, which is one of the reasons why it is in high demand.
• Facet 2: Historical Significance
The demand for a coin can also be affected by its historical significance. Coins that are associated with important historical events are often in high demand among collectors. The 1964 proof
coin set was released during a time of great change in the United States, which is one of the reasons why it is so popular among collectors.
• Facet 3: Condition
The demand for a coin can also be affected by its condition. Coins that are in mint condition are more likely to be in high demand than coins that are damaged or worn. The 1964 proof coin set is
often found in mint condition, which is one of the reasons why it is so valuable.
• Facet 4: Rarity
The demand for a coin can also be affected by its rarity. Coins that are rare are more likely to be in high demand than coins that are common. The 1964 proof coin set is relatively rare, which is
one of the reasons why it is so valuable.
By understanding the demand for a coin, you can get a better idea of how much it is worth. This is especially important for proof coins, which can be worth a significant amount of money.
The strike of a coin is an important factor in determining its value, especially for proof coins. A sharp strike indicates that the coin was struck with a great deal of force, which results in a
well-defined design and lettering. Weak strikes, on the other hand, can result in a mushy design and lettering, which can lower the value of the coin.
• Facet 1: Eye Appeal
The strike of a coin can have a significant impact on its eye appeal. A coin with a sharp strike will be more visually appealing than a coin with a weak strike. This is because a sharp strike
will result in a more detailed and well-defined design.
• Facet 2: Rarity
The strike of a coin can also affect its rarity. Coins with a weak strike are often more rare than coins with a sharp strike. This is because a weak strike can indicate that the coin was produced
during a time when the dies were not properly aligned or when the striking process was not properly controlled.
• Facet 3: Value
The strike of a coin can have a significant impact on its value. Coins with a sharp strike are typically worth more than coins with a weak strike. This is because collectors are willing to pay a
premium for coins that have a well-defined design and lettering.
When evaluating the value of a 1964 proof coin set, it is important to consider the strike of the coins. Coins with a sharp strike will be worth more than coins with a weak strike. This is because a
sharp strike indicates that the coins were produced with a great deal of care and precision.
The luster of a coin is an important factor in determining its value, especially for proof coins. Proof coins are struck multiple times on a highly polished planchet or blank, which results in a
mirror-like finish. Coins with a bright luster will be more visually appealing to collectors, and they will therefore be worth more money.
The luster of a coin can be affected by a number of factors, including the composition of the coin, the way it was struck, and the way it has been stored. Coins that are made of silver or gold will
typically have a brighter luster than coins that are made of copper or nickel. Coins that are struck with a sharp strike will also have a brighter luster than coins that are struck with a weak
strike. Finally, coins that have been stored in a protective environment will have a brighter luster than coins that have been exposed to the elements.
When evaluating the value of a 1964 proof coin set, it is important to consider the luster of the coins. Coins with a bright luster will be worth more than coins with a dull luster. This is because a
bright luster indicates that the coins were produced with a great deal of care and precision.
Here is a table that summarizes the key factors that affect the luster of a coin:
Factor Effect on Luster
Composition Coins made of silver or gold will typically have a brighter luster than coins made of copper or nickel.
Strike Coins that are struck with a sharp strike will have a brighter luster than coins that are struck with a weak strike.
Storage Coins that have been stored in a protective environment will have a brighter luster than coins that have been exposed to the elements.
The originality of a coin is an important factor in determining its value, especially for proof coins. Proof coins are struck multiple times on a highly polished planchet or blank, which results in a
mirror-like finish. Coins that have been altered or cleaned will have a diminished luster and may have other damage that can reduce their value.
• Facet 1: Eye Appeal
The originality of a coin can have a significant impact on its eye appeal. A coin that is original will have a more natural and pleasing appearance than a coin that has been altered or cleaned.
This is because the original surface of the coin will have a more consistent texture and color.
• Facet 2: Value
The originality of a coin can also have a significant impact on its value. Coins that are original will typically be worth more than coins that have been altered or cleaned. This is because
collectors are willing to pay a premium for coins that are in their original condition.
• Facet 3: Rarity
The originality of a coin can also affect its rarity. Coins that have been altered or cleaned are often more rare than original coins. This is because altered or cleaned coins are often removed
from circulation and melted down for their metal content.
When evaluating the value of a 1964 proof coin set, it is important to consider the originality of the coins. Coins that are original will be worth more than coins that have been altered or cleaned.
This is because original coins are more desirable to collectors and are therefore worth a higher premium.
Certification is an important factor to consider when evaluating the value of a 1964 proof coin set. Certified coins have been examined and authenticated by a professional coin grading service, which
provides assurance of their authenticity and condition. This assurance can add significant value to a coin, especially for rare or valuable coins.
• Facet 1: Authenticity
Certification provides assurance of a coin’s authenticity. This is especially important for rare or valuable coins, as there are many counterfeit coins in circulation. A certified coin has been
examined by a professional coin grading service and has been determined to be genuine.
• Facet 2: Condition
Certification also provides assurance of a coin’s condition. A certified coin has been graded by a professional coin grading service, which provides an objective assessment of its condition. This
grade can be used to compare the condition of different coins and to determine their value.
• Facet 3: Value
Certification can add significant value to a coin. This is because certified coins are more desirable to collectors and investors. Collectors are willing to pay a premium for coins that have been
certified, as they can be assured of their authenticity and condition. Investors are also willing to pay a premium for certified coins, as they can be more easily sold or traded.
When evaluating the value of a 1964 proof coin set, it is important to consider whether or not the coins are certified. Certified coins will typically be worth more than uncertified coins. This is
because certification provides assurance of the coins’ authenticity and condition, which makes them more desirable to collectors and investors.
FAQs about 1964 Proof Coin Set Value
This section addresses frequently asked questions and misconceptions regarding the value of 1964 proof coin sets, providing concise and informative answers.
Question 1: How much is a 1964 proof coin set worth?
The value of a 1964 proof coin set can vary depending on several factors, including the condition of the coins, the rarity of the set, and the overall demand for proof coins. Generally, a 1964 proof
coin set in good condition can be worth anywhere from $100 to $200.
Question 2: What factors affect the value of a 1964 proof coin set?
The value of a 1964 proof coin set is influenced by various factors, such as the condition of the coins, their rarity, the mintage, and the overall demand for proof coins. Coins in mint condition,
rare coins, and coins with a low mintage tend to hold higher value.
Question 3: How can I determine the condition of my 1964 proof coin set?
To determine the condition of your 1964 proof coin set, carefully examine the coins for any signs of wear, scratches, or damage. Coins with no visible imperfections and a mirror-like finish are
considered to be in mint condition.
Question 4: Where can I sell my 1964 proof coin set?
You can sell your 1964 proof coin set to coin dealers, collectors, or through online marketplaces. It’s advisable to research reputable buyers and compare their offers to ensure you get a fair price.
Question 5: How can I protect the value of my 1964 proof coin set?
To protect the value of your 1964 proof coin set, store the coins in a safe and dry environment. Keep them in individual protective holders or capsules to prevent scratches and damage. Regular
inspection and proper storage are crucial for maintaining the condition and value of your coins.
Question 6: Are there any special considerations when buying a 1964 proof coin set?
When buying a 1964 proof coin set, it’s important to verify the authenticity and condition of the coins. Consider purchasing certified sets from reputable dealers or collectors. Additionally,
research the market value and compare prices from different sources to ensure you’re making an informed purchase.
By understanding these factors and taking proper care of your 1964 proof coin set, you can preserve its value and enjoy its numismatic significance.
Explore our next article for more insights into the world of coin collecting and investment.
Tips for Determining the Value of a 1964 Proof Coin Set
Follow these tips to assess the value of your 1964 proof coin set and maximize its worth.
Tip 1: Examine the Condition
Carefully inspect the coins for any signs of wear, scratches, or damage. Coins with no visible imperfections and a mirror-like finish are considered to be in mint condition and hold higher value. Tip
2: Check the Mintage
The mintage of a coin refers to the number of coins produced. Coins with a lower mintage are rarer and generally more valuable. Determine the mintage of your 1964 proof coin set to assess its rarity.
Tip 3: Consider the Strike
The strike of a coin refers to the quality of the impression on its surface. Coins with a sharp strike have well-defined details and lettering, which adds to their value. Examine the strike of your
coins to determine their quality. Tip 4: Evaluate the Luster
The luster of a coin refers to its shine or brilliance. Proof coins typically have a mirror-like luster. Coins with a bright, even luster are more desirable and valuable. Tip 5: Consider
Certification by a reputable coin grading service provides assurance of a coin’s authenticity and condition. Certified coins are generally more valuable, as they have been independently verified and
By following these tips, you can accurately assess the value of your 1964 proof coin set and make informed decisions regarding its preservation or sale.
Key Takeaways:
• Preserve the condition of your coins to maintain their value.
• Rare coins with a low mintage are more valuable.
• A sharp strike and bright luster enhance the value of proof coins.
• Certification adds credibility and value to your coin set.
Remember, the value of your 1964 proof coin set is influenced by various factors. By carefully considering these factors and following the tips provided, you can maximize the value of your
The value of a 1964 proof coin set is determined by a combination of factors including condition, rarity, strike, luster, and certification. By carefully considering these factors and following the
tips provided in this article, you can accurately assess the value of your coin set and make informed decisions regarding its preservation or sale.
1964 proof coin sets are a valuable and collectible part of numismatic history. Understanding the factors that contribute to their value will help you appreciate their significance and make
well-informed decisions about your collection. | {"url":"https://coinfyp.com/1964-proof-coin-set-value/","timestamp":"2024-11-01T18:44:03Z","content_type":"text/html","content_length":"164189","record_id":"<urn:uuid:baa43d19-3684-49c7-b63f-6733fd8b850c>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00109.warc.gz"} |
If f(x+1) =2x and g(3x) = x+6, how do you find the value of f^-1(g(f(13)))3?
| HIX Tutor
If #f(x+1) =2x# and #g(3x) = x+6#, how do you find the value of #f^-1(g(f(13)))3?
Answer 1
${f}^{- 1} \left(g \left(f \left(13\right)\right)\right) = 16$
If #f(x+1) = 2x# then substituting #a# for #x+1# (which #rarr x=a-1#) #f(a) = 2(a-1) =2a-2#
Similarly, if #g(3x) = x+6# then substituting #b# for #3x# (which #rarr x=b/3#) #g(b)=b/3+6#
By definition of inverse of a function #f^(-1)(f(a))=a#
#f^(-1)(c) =(c+2)/2#
So now we have: #f^(-1)(g(f(13)))#
substituting #g(f(13))# for #c# above #color(white)("XXX")=(g(f(13))+2)/2#
substituting #f(13)# for #b# in the equation for #g(b)# #color(white)("XXX")=((f(13)+6)+2)/2=(f(13)+8)/2#
substituting #13# for #a# in the equation for #f(a)# #color(white)("XXX")=(2(13)-2+8)/2=32/2=16#
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer 2
First, let's break down the expression step by step:
1. Start with the innermost function, ( f(13) ), which means substituting ( x = 13 ) into ( f(x+1) = 2x ):
[ f(13+1) = 2 \cdot 13 ] [ f(14) = 26 ]
2. Next, apply the function ( g(x) = x + 6 ) to the result from step 1, ( g(f(14)) ):
[ g(f(14)) = f(14) + 6 ] [ g(f(14)) = 26 + 6 ] [ g(f(14)) = 32 ]
3. Now, apply the inverse of ( f(x+1) ) to the result from step 2, ( f^{-1}(g(f(14))) ). Since ( f(x+1) = 2x ), the inverse function ( f^{-1}(y) ) undoes the operation of doubling and subtracting 1:
[ f^{-1}(32) = \frac{32}{2} - 1 ] [ f^{-1}(32) = 16 - 1 ] [ f^{-1}(32) = 15 ]
4. Finally, raise the result from step 3 to the power of 3:
[ f^{-1}(g(f(13)))^3 = 15^3 ] [ f^{-1}(g(f(13)))^3 = 3375 ]
Therefore, the value of ( f^{-1}(g(f(13)))^3 ) is 3375.
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer from HIX Tutor
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
Not the question you need?
HIX Tutor
Solve ANY homework problem with a smart AI
• 98% accuracy study help
• Covers math, physics, chemistry, biology, and more
• Step-by-step, in-depth guides
• Readily available 24/7 | {"url":"https://tutor.hix.ai/question/if-f-x-1-2x-and-g-3x-x-6-how-do-you-find-the-value-of-f-1-g-f-13-3-8f9afa5672","timestamp":"2024-11-03T13:56:03Z","content_type":"text/html","content_length":"575161","record_id":"<urn:uuid:76a98360-5273-4409-a22e-64998a8dca57>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00653.warc.gz"} |
The Art of Efficient Encoding: How Many Bits to Represent 116 Characters in a Text-Editing Application?
a text-editing application uses binary sequences to represent each of 116 different characters. what is the minimum number of bits needed to assign a unique bit sequence to each of the possible
In the world of digital communication, every character and symbol needs to be encoded into a format that computers can understand. To represent these characters, binary sequences are commonly used.
But the question arises, how many bits are required to assign a unique bit sequence to each of the 116 different characters? This article delves into the fascinating world of binary encoding and
explores the minimum number of bits needed for this task.
Understanding Binary Encoding
Binary encoding is the representation of characters and symbols using a sequence of bits, typically consisting of 0s and 1s. In the most basic form, a single binary digit (or “bit”) can represent two
unique states, usually 0 and 1. To represent a wider range of characters, a combination of bits is employed.
The Number of Bits Required
The minimum number of bits required to represent a specific number of unique characters can be determined using basic mathematics. The formula to calculate this is:
N = log2(S)
• N is the number of bits needed.
• S is the number of unique characters to be represented.
In this case, we want to represent 116 different characters, so the formula would look like this:
N = log2(116)
Calculating this, we find that:
N ≈ 6.88
In the real world, you can’t use a fraction of a bit; you must always round up to the nearest whole number. Therefore, we would need at least 7 bits to represent 116 different characters using binary
Character Encoding and ASCII
In practice, binary encoding is commonly employed in character encoding schemes. One of the most well-known character encoding schemes is the American Standard Code for Information Interchange
(ASCII). ASCII uses 7 bits to represent 128 different characters, including the English alphabet, numbers, punctuation marks, and control characters.
To represent characters outside the ASCII character set or to include additional symbols, 8-bit encoding schemes like Extended ASCII are used, which can represent 256 different characters. For most
modern applications, the Unicode standard is employed, which uses variable-length encoding, with 8, 16, or 32 bits per character, allowing it to represent an extensive range of characters from
various languages and scripts.
Efficiency and Trade-offs
While 7 bits are the theoretical minimum needed to represent 116 characters, it’s important to consider practical requirements and trade-offs. Using 7 bits might be less efficient compared to using 8
bits because most computer systems operate with bytes (8 bits) as the smallest addressable unit of memory.
Using 8 bits might result in some unused combinations, but it simplifies data handling and processing. This is why many character encoding schemes, even when they could theoretically use fewer bits,
opt for 8 bits to maintain compatibility with existing systems and ease of implementation.
In a text-editing application, the minimum number of bits needed to assign a unique bit sequence to each of the possible 116 characters is 7 bits. However, in practice, character encoding schemes
like ASCII and Unicode often use 8 or more bits for practicality and compatibility with computer systems. Understanding the trade-offs between efficiency and compatibility is crucial in the design of
encoding schemes for text and characters in digital communication. | {"url":"https://multicaretechnical.com/the-art-of-efficient-encoding-how-many-bits-to-represent-116-characters-in-a-text-editing-application","timestamp":"2024-11-10T04:44:06Z","content_type":"text/html","content_length":"151213","record_id":"<urn:uuid:ca6cac94-67b1-4840-aabf-331a35763534>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00525.warc.gz"} |
Properties of Graph
Graphs are a non-linear Data Structure that is widely used in the field of computer science to solve many real-world problems. When we have many entities connected to each other somehow, our priority
is to model this structure as a graph and then proceed towards a solution. Graphs are also used in social networks like Facebook and LinkedIn.
If graphs are so useful, then why not explore the subject graph theory a bit more.
In this article, we are going to study the basic properties of graphs. The properties of graphs are used to characterise them based on their structures.
Let’s see the mathematical definition of a graph before discussing about the properties of graphs -
A graph is a set of vertices and edges G = {V, E}.
Example -
G = {V,E} where,
V = {a,b,c,d}
E = {ab,bc,cd,da}
Following are the important properties of graph:
Distance between two vertices
The distance between two vertices U and V is defined as the number of edges present in the shortest path between U and V.
Notation - d(U,V)
We consider the shortest path between two vertices since there may exist many paths from one vertex to another.
Why not see some examples to understand -
Consider this graph:
There are many paths between vertices d and f. Some of them are as follows:
• da → ae → ef (length = 2)
• dc → cg → gf (length = 2)
• df (length = 1)
• da → ab → bc → cg → gf (length = 4)
• dc → cb → ba → ae → ef (length=4)
Out of these, the shortest path is d → f.
So, the distance between d and f is 1. | {"url":"https://www.naukri.com/code360/library/properties-of-graph","timestamp":"2024-11-07T22:13:20Z","content_type":"text/html","content_length":"403658","record_id":"<urn:uuid:94203ca3-476a-40a7-8c53-e9822cd37ae3>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00554.warc.gz"} |
Linear Algebra - Algebra Study Guides: Notes | Knowt
Linear Algebra Study Guides
Browse through topics
Has Flashcards:
Created by:
It’s never been easier to find and study Linear Algebra subject made by students and teachers using Knowt. Whether you’re reviewing material before a quiz or preparing for a major exam, we’ll help
you find the subject subject that you need to power up your next study session. If you’re looking for more specific Linear Algebra subject, then check out our collection of sets for Linear Algebra,
Pre-Algebra, System of Equations, Inequalities, Functions, Irrational Numbers, Slopes. | {"url":"https://knowt.com/subject/Math/Algebra/Linear-Algebra-notes","timestamp":"2024-11-11T16:02:22Z","content_type":"text/html","content_length":"367631","record_id":"<urn:uuid:4aa609f8-c2c3-4b74-98c5-ee4fab81b19d>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00248.warc.gz"} |
fixed_probability_multinomial_process: Multinomial process in individual: Framework for Specifying and Simulating Individual Based Models
Simulates a two-stage process where all individuals in a given source_state sample whether to leave or not with probability rate; those who leave go to one of the destination_states with
probabilities contained in the vector destination_probabilities.
variable a CategoricalVariable object.
source_state a string representing the source state.
destination_states a vector of strings representing the destination states.
rate probability of individuals in source state to leave.
destination_probabilities probability vector of destination states.
a function which can be passed as a process to simulation_loop.
For more information on customizing the embed code, read Embedding Snippets. | {"url":"https://rdrr.io/cran/individual/man/fixed_probability_multinomial_process.html","timestamp":"2024-11-08T13:00:02Z","content_type":"text/html","content_length":"27044","record_id":"<urn:uuid:31fd7169-00f5-4ea3-8ef6-e31ace9e0984>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00742.warc.gz"} |
Towards the correct prescription of van Dam-Veltman-Zakharov discontinuity in Seigal-Zweibach action from String theory in Fierz-Pauli Gauge
In the present work we will give an explicit solution the problem of divergence of propgator of gauge-invariant Siegel-Zwiebach (SZ) action in Fierz-Pauli (FP) gauge (also called van
Dam-Veltman-Zakharov discontinuity) by connecting it's Green's functions to that of Transverse-Traceless (TT) gauge using improved finite-field-dependent BRST (FFBRST) method.
arXiv e-prints
Pub Date:
August 2024
□ High Energy Physics - Theory;
□ Mathematical Physics
13 Pages | {"url":"https://ui.adsabs.harvard.edu/abs/2024arXiv240902934P/abstract","timestamp":"2024-11-01T21:07:10Z","content_type":"text/html","content_length":"35170","record_id":"<urn:uuid:1d46f893-85ee-4ebf-a959-a1825cd1578e>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00634.warc.gz"} |
US Capital Punishment dataset.
US Capital Punishment dataset.¶
This data describes the number of times capital punishment is implemented at the state level for the year 1997. The outcome variable is the number of executions. There were executions in 17 states.
Included in the data are explanatory variables for median per capita income in dollars, the percent of the population classified as living in poverty, the percent of Black citizens in the population,
the rate of violent crimes per 100,000 residents for 1996, a dummy variable indicating whether the state is in the South, and (an estimate of) the proportion of the population with a college degree
of some kind.
Number of Observations - 17
Number of Variables - 7
Variable name definitions::
EXECUTIONS - Executions in 1996
INCOME - Median per capita income in 1996 dollars
PERPOVERTY - Percent of the population classified as living in poverty
PERBLACK - Percent of black citizens in the population
VC100k96 - Rate of violent crimes per 100,00 residents for 1996
SOUTH - SOUTH == 1 indicates a state in the South
DEGREE - An esimate of the proportion of the state population with a
college degree of some kind
State names are included in the data file, though not returned by load.
Jeff Gill’s Generalized Linear Models: A Unified Approach
Used with express permission from the original author, who retains all rights.
Last update: Nov 12, 2024 | {"url":"https://www.statsmodels.org/dev/datasets/generated/cpunish.html","timestamp":"2024-11-13T05:04:52Z","content_type":"text/html","content_length":"44050","record_id":"<urn:uuid:6775e386-a037-42e8-94a8-fe4bccede5aa>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00092.warc.gz"} |
fold_left on maps in CMap
I'm confused with the meaning to give to fold_left in cMap.mli. Its type is (key -> 'a -> 'b -> 'b) -> 'a t -> 'b -> 'b while I would have expected, following the model of List.fold_left, that it is
(key -> 'a -> 'b -> 'a) -> 'a -> 'b t -> 'a.
Two parameters are in play regarding fold on maps:
• the order of arguments
• whether the fold is in increasing or decreasing order of keys
Then, shouldn't there be four variants? Or, if only the increasing/decreasing order is expected to matter, shouldn't it be called fold_up and fold_down instead, both following the List.fold_right
pattern, that is (key -> 'a -> 'b -> 'b) -> 'a t -> 'b -> 'b?
Last updated: Oct 13 2024 at 01:02 UTC | {"url":"https://coq.gitlab.io/zulip-archive/stream/237656-Coq-devs-.26-plugin-devs/topic/fold_left.20on.20maps.20in.20CMap.html","timestamp":"2024-11-04T20:36:46Z","content_type":"text/html","content_length":"2379","record_id":"<urn:uuid:67224d42-0a49-4ff1-9e6f-e582743aa8be>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00779.warc.gz"} |
Constant Acceleration Dynamics: Velocity and Displacement Calculations
21 Jun 2024
Popularity: ⭐⭐⭐
Velocity and Acceleration Calculator
This calculator provides the calculation of velocity and displacement of an object moving with constant acceleration.
Calculation Example: The velocity and acceleration calculator uses the following formulas to calculate the velocity and displacement of an object moving with constant acceleration:
v = u + at
s = ut + 1/2 * a * t_
Related Questions
Q: What is the importance of velocity and acceleration in physics?
A: Velocity and acceleration are important concepts in physics as they describe the motion of objects. Velocity tells us how fast an object is moving and in which direction, while acceleration tells
us how quickly the object’s velocity is changing.
Q: How are velocity and acceleration related?
A: Velocity and acceleration are related by the following equation: a = dv/dt, where a is acceleration, v is velocity, and t is time. This equation tells us that acceleration is the rate of change of
| —— | —- | —- |
Calculation Expression
Velocity Function: The velocity after time ‘t’ is given by v = u + at.
Displacement Function: The displacement after time ‘t’ is given by s = ut + 1/2 * a * t_.
Calculated values
Considering these as variable values: a=9.81, t=10.0, v=0.0, the calculated value(s) are given in table below
| —— | —- |
Displacement Function 1471.5
Similar Calculators
Calculator Apps
Matching 3D parts for Velocity and acceleration calculation
Related Parameter Sensitivity Analysis engineering data
App in action
The video below shows the app in action. | {"url":"https://blog.truegeometry.com/calculators/Velocity_and_acceleration_calculation.html","timestamp":"2024-11-06T19:52:17Z","content_type":"text/html","content_length":"28978","record_id":"<urn:uuid:a7b15036-5702-476d-a396-c2b58e2b1252>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00387.warc.gz"} |
Books - Quantum Computing ReportBooks
Here is a listing of some popular books on Quantum Computing. As an Amazon Associate, the Quantum Computing Report earns from qualifying purchases.
Quantum Computation and Quantum Information
by Michael A. Nielsen and Isaac L. Chuang
One of the most cited books in physics of all time, Quantum Computation and Quantum Information remains the best textbook in this exciting field of science.
Quantum Information Science
by Riccardo Manenti and Mario Motta
This book provides an introduction to quantum information science, the science at the basis of the new quantum revolution of this century. It teaches the reader to build and program a quantum
computer and leverage its potential. Aimed at quantum physicists and computer scientists, the book covers several topics, including quantum algorithms, quantum chemistry, and quantum engineering of
superconducting qubits.
Quantum Computing: A Gentle Introduction
by Eleanor G Rieffel and Wolfgang H. Polak
A thorough exposition of quantum computing and the underlying concepts of quantum physics, with explanations of the relevant mathematics and numerous examples.
Quantum Computing Since Democritus
by Scott Aaronson
Written by noted quantum computing theorist Scott Aaronson, this book takes readers on a tour through some of the deepest ideas of math, computer science and physics.
Quantum Computing for Dummies
by whurley and Floyd Earl Smith
Quantum Computing For Dummies preps you for the amazing changes that are coming with the world of computing built on the phenomena of quantum mechanics. Need to know what is it and how does it work?
This easy-to-understand book breaks it down and answers your most pressing questions. Get a better understanding of how quantum computing is revolutionizing networking, data management, cryptography,
and artificial intelligence in ways that would have previously been unthinkable.
Quantum Computing for Babies
by Chris Ferrie and whurley
Quantum Computing for Babies is a colorfully simple introduction to the magical world of quantum computers. Babies (and grownups!) will discover the difference between bits and qubits and how quantum
computers will change our future. With a tongue-in-cheek approach that adults will love, this installment of the Baby University board book series is the perfect way to introduce basic concepts to
even the youngest scientists.
Quantum in Pictures: A New Way to Understand the Quantum World
by Bob Coecke and Stefano Gogioso
“Quantum in Pictures” makes it possible to learn about quantum mechanics and quantum computing in a new, fun way. Written by world-leading experts, Quantum in Pictures is a simple, friendly, and
novel way to explain the magical world of quantum theory. This book will be of interest to both the young (and not-so-young) amateur and the quantum specialists. Using pictures alone, this book will
equip you with the tools you need to understand the quantum world including recent developments in quantum computing and prove things about it, both known and new.
Q is for Quantum
by Terry Rudolph
This book teaches a theory at the forefront of modern physics to an audience presumed to already know only basic arithmetic. Topics covered range from the practical (new technologies we can expect
soon) to the foundational (old ideas that attempt to make sense of the theory). The theory is built up precisely and quantitatively. Deceptively vague jargon and analogies are avoided, and mysterious
features of the theory are made explicit and not skirted. The tenacious reader will emerge with a better technical understanding of why we are troubled by this theory than that possessed by many
professional physicists. The book is accompanied by a site Q is for Quantum which also includes more learning resources for beginners.
Quantum Computing and Information: A Scaffolding Approach
by Peter Lee, Huiwen Ji, and Ran Cheng
This expertly crafted guide demystifies the complexities of quantum computing through a progressive teaching method, making it accessible to students and newcomers alike. It explores quantum systems,
gates and circuits, entanglement, algorithms, and more using a unique ‘scaffolding approach’ for easy understanding which is Ideal for educators, students, and self-learners.
Quantum Computing for the Quantum Curious
by Ciaran Hughes, Joshua Isaacson, Anastasia Perry, Ranbel F. Sun, and Jessica Turner
This open access book makes quantum computing more accessible than ever before. A fast-growing field at the intersection of physics and computer science, quantum computing promises to have
revolutionary capabilities far surpassing “classical” computation. Getting a grip on the science behind the hype can be tough: at its heart lies quantum mechanics, whose enigmatic concepts can be
imposing for the novice. This classroom-tested textbook uses simple language, minimal math, and plenty of examples to explain the three key principles behind quantum computers: superposition, quantum
measurement, and entanglement. It then goes on to explain how this quantum world opens up a whole new paradigm of computing.
Learn Quantum Computing with Python and IBM Quantum Experience
by Robert Loredo
IBM Quantum Lab is a platform that enables developers to learn the basics of quantum computing by allowing them to run experiments on a quantum computing simulator and on several real quantum
computers. Updated with new examples and changes to the platform, this edition begins with an introduction to the IBM Quantum dashboard and Quantum Information Science Kit (Qiskit) SDK. You will get
well versed with the IBM Quantum Composer interface as well as the IBM Quantum Lab. You will learn the differences between the various available quantum computers and simulators. Along the way,
you’ll learn some of the fundamental principles of quantum mechanics, quantum circuits, qubits, and the gates that are used to perform operations on each qubit.
Learn Quantum Computing with Python and Q#: A hands-on approach
by Sarah C. Kaiser and Christopher Granade
Learn Quantum Computing with Python and Q# demystifies quantum computing. Using Python and the new quantum programming language Q#, you’ll learn QC fundamentals as you apply quantum programming
techniques to real-world examples including cryptography and chemical analysis. Learn Quantum Computing with Python and Q# builds your understanding of quantum computers, using Microsoft’s Quantum
Development Kit to abstract away the mathematical complexities. You’ll learn QC basics as you create your own quantum simulator in Python, then move on to using the QDK and the new Q# language for
writing and running algorithms very different to those found in classical computing.
Quantum Computing Experimentation with Amazon Braket
by Alex Khan
Amazon Braket is a cloud-based pay-per-use platform for executing quantum algorithms on cutting-edge quantum computers and simulators. It is ideal for developing robust apps with the latest quantum
devices. With this book, you’ll take a hands-on approach to learning how to take real-world problems and run them on quantum devices. You’ll begin with an introduction to the Amazon Braket platform
and learn about the devices currently available on the platform, their benefits, and their purpose. Then, you’ll review key quantum concepts and algorithms critical to converting real-world problems
into a quantum circuit or binary quadratic model based on the appropriate device and its capability. The book also covers various optimization use cases, along with an explanation of the code.
Finally, you’ll work with a framework using code examples that will help to solve your use cases with quantum and quantum-inspired technologies.
Quantum Computing: An Applied Approach
by Jack D. Hidary
This book integrates the foundations of quantum computing with a hands-on coding approach to this emerging field; it is the first work to bring these strands together in an updated manner. This work
is suitable for both academic coursework and corporate technical training.
Dancing with Qubits: How quantum computing works and how it may change the world
by Robert S. Sutor
Dancing with Qubits is for those who want to deeply explore the inner workings of quantum computing. This entails some sophisticated mathematical exposition and is therefore best suited for those
with a healthy interest in mathematics, physics, engineering, and computer science
Quantum Chemistry and Computing for the Curious: Illustrated with Python and Qiskit® code
by Keeper Sharkey and Alain Chancé with Forward by Alex Khan
Explore quantum chemical concepts and the postulates of quantum mechanics in a modern fashion, with the intent to see how chemistry and computing intertwine. Along the way you’ll relate these
concepts to quantum information theory and computation. The book builds a framework of computational tools that lead you through traditional computational methods and straight to the forefront of
exciting opportunities. These opportunities will rely on achieving next-generation accuracy by going further than the standard approximations such as beyond Born-Oppenheimer calculations. The book
provides illustrations made with Python code, Qiskit, and open-source quantum chemistry packages.
A Practical Guide to Quantum Machine Learning and Quantum Optimization: Hands-on Approach to Modern Quantum Algorithms
by Elias F. Combarro (Author), Samuel Gonzalez-Castillo (Author), Alberto Di Meglio (Forward)
This book provides deep coverage of modern quantum algorithms that can be used to solve real-world problems. You’ll be introduced to quantum computing using a hands-on approach with minimal
prerequisites. You’ll discover many algorithms, tools, and methods to model optimization problems with the QUBO and Ising formalisms, and you will find out how to solve optimization problems with
quantum annealing, QAOA, Grover Adaptive Search (GAS), and VQE. This book also shows you how to train quantum machine learning models, such as quantum support vector machines, quantum neural
networks, and quantum generative adversarial networks.
Fundamentals of Quantum Computing: Theory and Practice
by Venkateswaran Kasirajan
This introductory book on quantum computing includes an emphasis on the development of algorithms. Appropriate for both university students as well as software developers interested in programming a
quantum computer, this practical approach to modern quantum computing takes the reader through the required background and up to the latest developments.
Quantum Boost: Using Quantum Computing to Supercharge Your Business
by Brian Lenahan
Quantum computing is the new technology of the 2020’s and the learning curve will be steeper, and the competitive advantage greater in the next few years. Don’t hesitate. Take the time now to learn
and engage in the world of quantum computers to provide your business with faster solutions to the most complex challenges.
Quantum Excellence: How Leading Companies Are Deploying the Transformational Technology
by Brian T. Lenahan
Quantum Excellence shares insights from global experts on quantum technologies including sensing, communications, cryptography and computing and how they are being deployed by leading companies.
Maturing quickly, quantum offers incredible opportunities for portfolio and traffic optimization, lines sciences research, navigation and so much more. Written by the chair of the Quantum Strategy
Institute and author of the prequel Quantum Boost, Quantum Excellence offers a common-language approach to getting involved in this exciting field.
Quantum Networks: A Primer
This book outlines how entanglement is leveraged in quantum networks, the key technological nuances, and concepts, the current state-of-the-art, and how quantum network researchers envision the
future of quantum networks. Moving from classical to quantum networks is not as straight forward as one would expect but neither is the quantum counterpart an utterly foreign concept and this book
discusses how the proven science of classical networking can be leveraged to expand our knowledge into the space of quantum networks.
Law and Policy for the Quantum Age
by Chris Jay Hoofnagle and Simson L. Garfinkel
In Law and Policy for the Quantum Age, Chris Jay Hoofnagle and Simson L. Garfinkel explain the genesis of quantum information science (QIS) and the resulting quantum technologies that are most
exciting: quantum sensing, computing, and communication. This groundbreaking, timely text explains how quantum technologies work, how countries will likely employ QIS for future national defense and
what the legal landscapes will be for these nations, and how companies might (or might not) profit from the technology. Hoofnagle and Garfinkel argue that the consequences of CIS are so profound that
we must begin planning for them today.
Programming Quantum Computers
by Eric Johnston, Nic Harrigan, and Mercedes Gimeno-Segovia
This book show you how to build the skills, tools, and intuition required to write quantum programs at the center of applications. You’ll understand what quantum computers can do and learn how to
identify the types of problems they can solve. It includes three multichapter sections: Programming for a QPU, QPU Primitives, and QPU Applications.
Quantum Information and Quantum Optics with Superconducting Circuits
by Juan José García Ripoll
Superconducting quantum circuits are among the most promising solutions for the development of scalable quantum computers. Built with sizes that range from microns to tens of metres using
superconducting fabrication techniques and microwave technology, superconducting circuits demonstrate distinctive quantum properties such as superposition and entanglement at cryogenic temperatures.
This book provides a comprehensive and self-contained introduction to the world of superconducting quantum circuits, and how they are used in current quantum technology. Beginning with a description
of their basic superconducting properties, the author then explores their use in quantum systems, showing how they can emulate individual photons and atoms, and ultimately behave as qubits within
highly connected quantum systems. Particular attention is paid to cutting-edge applications of these superconducting circuits in quantum computing and quantum simulation. Written for graduate
students and junior researchers, this accessible text includes numerous homework problems and worked examples.
Quantum Computing in Action
by Johan Vos
Quantum Computing in Action will make sure you’re prepared to start programming when quantum supercomputing becomes a practical reality for production systems. Rather than a hardware manual or
academic theory guide, this book is focused on practical implementations of quantum computing algorithms. Using Strange, a Java-based quantum computer simulator, you’ll go hands-on with quantum
computing’s core components including qubits and quantum gates as you write your very first quantum code. By the end of the book you’ll be ahead of the game with the skills to create quantum
algorithms using standard Java and your favorite IDE and build tools.
Schrödinger’s Killer App: Race to Build the World’s First Quantum Computer
by Jonathan P. Dowling
This book presents an inside look at the government’s quest to build a quantum computer capable of solving complex mathematical problems and hacking the public-key encryption codes used to secure the
Internet. It develops the concept of entanglement in the historical context of Einstein’s 30-year battle with the physics community over the true meaning of quantum theory. The author also covers
applications to other important areas, such as quantum physics simulators, synchronized clocks, quantum sensors, and imaging devices.
Picturing Quantum Processes: A First Course in Quantum Theory and Diagrammatic Reasoning
by Bob Coecke and Aleks Kissinger
The unique features of the quantum world are explained in this book through the language of diagrams, setting out an innovative visual method for presenting complex theories. Requiring only basic
mathematical literacy, this book employs a unique formalism that builds intuitive understanding of quantum features while eliminating the need for complex calculations.
Quantum Computing without Magic: Devices
by Zdzislaw Meglicki
This text offers an introduction to quantum computing, with a special emphasis on basic quantum physics, experiment, and quantum devices. Unlike many other texts, which tend to emphasize algorithms,
Quantum Computing Without Magic explains the requisite quantum physics in some depth, and then explains the devices themselves. This book is a great resource on the devices used in quantum
computation and can be used as a complementary text for physics and electronic engineering undergraduates studying quantum computing and basic quantum mechanics, or as an introduction and guide for
electronic engineers, mathematicians, computer scientists, or scholars in these fields who are interested in quantum computing and how it might fit into their research programs.
Quantum Design Sprint
by Moses Ma, Po Chi Wu and Skip Sanzeri
The goal of this book is to help readers perform a quantum risk assessment, and learn how to do “quantum thinking” in order to expand their business model to not only mitigate those risks, but to
take a leadership role in guiding their company into the quantum age. For any organization to succeed in this brave new quantum computing-powered world, it will require an entirely new approach to
truly adapt and innovate.
Quantum Computer Science: An Introduction
by N. David Mermin
This book is a concise introduction to quantum computation, developing the basic elements of it without assuming any background in physics. The book is intended primarily for computer scientists who
know nothing about quantum theory, but will also be of interest to physicists who want to learn the theory of quantum computation, and philosophers of science interested in quantum foundational
Quantum Computing for Computer Scientists
by Noson S. Yonofsky and Mirco A. Monnucci
Quantum Computing for Computer Scientists takes readers on a tour of this fascinating area of cutting-edge research. Written in an accessible yet rigorous fashion, this book employs ideas and
techniques familiar to every student of computer science. The reader is not expected to have any advanced mathematics or physics background.
Quantum Computing for Everyone
by Chris Bernhardt
Chris Bernhardt offers an introduction to quantum computing that is accessible to anyone who is comfortable with high school mathematics. He explains qubits, entanglement, quantum teleportation,
quantum algorithms, and other quantum-related topics as clearly as possible for the general reader. He simplifies the mathematics and provides elementary examples that illustrate both how the math
works and what it means.
An Introduction to Quantum Computing
by Phillip Kaye, Raymond Laflamme, and Michele Mosca
This concise, accessible text provides a thorough introduction to quantum computing is aimed at advanced undergraduate and beginning graduate students in these disciplines, the text is technically
detailed and is clearly illustrated throughout with diagrams and exercises. Some prior knowledge of linear algebra is assumed, including vector spaces and inner products.
Quantum Computing Explained
by David McMahon
If you already have taken courses in elementary quantum mechanics, McMahon removes much of the mystery about quantum computing. Unlike other books on the subject, McMahon’s narrative is generously
interspersed with many examples. These tend to be simple mathematically, but they illustrate key points. The emphasis in McMahon is indeed on providing extended and simple explanations.
Essential Mathematics for Quantum Computing: A beginner’s guide to just the math you need without needless complexities
by Leonard S. Woody III
This book will teach the requisite math concepts in an intuitive way and connect them to principles in quantum computing. Starting with the most basic of concepts, 2D vectors that are just line
segments in space, you’ll move on to tackle matrix multiplication using an instinctive method. Linearity is the major theme throughout the book and since quantum mechanics is a linear theory, you’ll
see how they go hand in hand. As you advance, you’ll understand intrinsically what a vector is and how to transform vectors with matrices and operators. You’ll also see how complex numbers make their
voices heard and understand the probability behind it all.
Mathematics of Quantum Computing: An Introduction
by Wolfgang Scherer
This textbook presents the elementary aspects of quantum computing in a mathematical form. It is intended as core or supplementary reading for physicists, mathematicians, and computer scientists
taking a first course on quantum computing. It starts by introducing the basic mathematics required for quantum mechanics, and then goes on to present, in detail, the notions of quantum mechanics,
entanglement, quantum gates, and quantum algorithms.
7 Comments
1. I recommend Jon Dowling’s book “Schrödinger’s Killer App: Race to Build the World’s First Quantum Computer” from CRC Press.
A second book is in progress.
Great list!
□ Thank you for your comment. We have added this book to the listing.
Doug Finke
Managing Editor
2. Quantum Computing Without Magic by Zdzislaw Meglicki is a great resource on the devices used in quantum computation.
□ Thank you for your comment. We have added Quantum Computing without Magic to this list of books.
Doug Finke
Managing Editor
3. any suggested and freely accessible ebook?
□ If you go to the EDUCATION page of this web site, you will find a number of open source resources that may be helpful. There are some open sourced textbooks, some videos, and some online
courses that are described on that page.
Doug Finke
Managing Editor
4. I recommend “Introduction to Quantum Computing and Programming”. | {"url":"https://quantumcomputingreport.com/books/","timestamp":"2024-11-09T20:32:35Z","content_type":"text/html","content_length":"126361","record_id":"<urn:uuid:5a0b9b8e-abd8-4f26-ac54-ab7a0e32505a>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00476.warc.gz"} |
Lien iCal public : https://calendar.google.com/calendar/ical/proval%40lri.fr/public/basic.ics
Séminaires antérieurs
Speaker: Loic Pujet, University of Stockholm, loic@pujet.fr
Tuesday 26 Nov 2024, 14:00, Room 1Z56
Categories with families (CwF) are perhaps the most widely used notion of models for dependent types. They can be described by an algebraic signature with dependent sorts for contexts, substitutions,
types and terms, as well as a plethora of constants and equations. Unfortunately, this mix of dependent sorts and equations is particularly prone to transport hell, and in practice it is nearly
impossible to prove non-trivial results using CwFs in a proof assistant.
In this talk, I will present a method based on Pédrot's "prefascist sets" to strictify (nearly) all the equations of a CwF, so that they hold by definition. I will then discuss applications of this
method to formal proofs of canonicity and normalisation.
This is joint work with Ambrus Kaposi.
Speaker: Teodor Knapik, Université de la Nouvelle-Calédonie, Nouméa
Tuesday 14 May 2024, 14:00, Room 1Z56
Abstract: Imagined by Kolmogorov in the middle of past century, expanders form remarkable graph families with applications in areas as diverse as robust communication networks and probabilistically
checkable proofs, to name just two. Since the proof of the existence of expanders, it took several years to come up with an explicit algebraic construction [Margulis 1973] of some expander families.
Their first elementary (combinatorial) construction has been published in 2002 and awarded Gödel Prize in 2009.
In this talk, we introduce a framework that captures most of the known combinatorial constructions of expanders. It is based on a generalisation of Lindenmayer systems to the domain of graphs. We
call this formalism Lindenmayer graph grammars. We identify a few essential properties which make decidable the language checking problem with respect to first-order sentences. This result is
obtained by encompassing a graph language into an automatic structure. By language checking in this specific context, we mean the following problem.
Instance: a Lindenmayer graph grammar and a first-order sentence. Question: Does there exist a graph in the language for which the sentence holds?
Speaker: Bill Roscoe Emeritus professor, University of Oxford, GB
Thursday (!) May 23 2024, 14:00, Room 1Z56
Abstract: I have been doing practical verification in CSP, its tools and models for 40 years. The main challenge has been packaging this for the industrial engineer. I will discuss how this has been
solved in the Coco System www.cocotec.io, which is used for object based development of massive systems in industry. Separately I will show how I have used it to underpin a highly innovative
blockchain consensus protocol by using it to model decentralised, partly malevolent systems.
Speaker: Jérôme Feret ENS Ulm, jerome.feret@ens.psl.eu
Tuesday Mar 26 2024, 14:00, Room 1Z56
Abstract: Software sciences have a role to play in the description, the organization, the execution, and the analysis of the molecular interaction systems such as biological signaling pathways. These
systems involve a huge diversity of bio-molecular entities whereas their dynamics may be driven by races for shared resources, interactions at different time- and concentration-scales, and non-linear
feedback loops. Understanding how the behavior of the populations of proteins orchestrates itself from their individual interactions, which is the holy grail on systems biology, requires dedicated
languages offering adapted levels of abstraction and efficient analysis tools.
In this talk we describe the design of formal tools for Kappa, a site-graph rewriting language inspired by bio-chemistry. In particular, we introduce a static analysis to compute some properties on
the biological entities that may arise in models, so as to increase our confidence in them. We also present a model reduction approach based on a study of the flow of information between the
different regions of the biological entities and the potential symmetries. This approach is applied both in the differential and in the stochastic semantics.
Speaker: Yezekaël Hayel, Université d'Avignon
Tuesday March 12 2024, 14:00, Room 1Z56
Abstract: Atomic congestion games with separable costs are a specific type of non-cooperative games with a finite number of players where the cost of a commodity depends on the number of players
choosing it. But in many applications, resources may be correlated in the sense that the resource cost may depend on the usage of other resources, and thus cost function is non-separable. This is the
case for traffic models with opposite directions dependencies, resource graph games, and smart charging games to cite a few examples. In this talk, after introducing the concepts of atomic congestion
games with non-separable costs, a specific smart charging game will illustrate such game theoretical framework. In this particular setting, we prove the existence of pure Nash Equilibrium by showing
ordinal potential function existence. We also demonstrate the convergence of a simple Reinforcement Learning algorithm to the pure NE for both synchronous and asynchronous versions. Finally, the
recent framework of Resource Graph Games will be presented. In this setting, dependencies between resources are modeled as an oriented graph. This new framework generalizes atomic congestion games
with non-separable costs and opens new questions about the existence and uniqueness of pure NE in this general setup.
Speaker: Martina Seidel, Johannes Kepler University, Linz, Austria
Tuesday Feb 20 2024, 14:00, Room 1Z53
Abstract: Quantified Boolean Formulas (QBFs) extend propositional logic by quantifiers over the Boolean variables. Despite the PSPACE hardness of their decision problem, much progress has been made
in practical solving, making QBFs an attractive framework for encoding various problems from artificial intelligence and formal verification.
In this talk, we will give an overview on recent trends and developments in QBF solving and we will discuss promising applications of QBFs.
Speaker: Gustave Cortal, LMF
Tuesday Feb 06 2024, 14:00, 1Z71
Abstract: Je propose d'introduire certains concepts clés du traitement automatique des langues. Le cours se concentre sur les modèles de langage, qui sont des modèles prédictifs calculant la
probabilité d’une séquence de mots, et trouvant des applications en traduction, résumé de texte, agent conversationnel, etc.
Je parlerai de différentes architectures utilisées dans l’histoire pour la modélisation statistique du langage, comme les n-grammes, les réseaux de neurones feed-forward, les réseaux de neurones
récurrents et les transformers. Les avantages et les inconvénients de chaque architecture seront exposés. À la fin, il sera possible de comprendre conceptuellement comment un modèle comme ChatGPT
Speaker: Munyque Mittelmann, University of Naples
Tuesday, 12 December 2023, 14:00, Room 1Z71
Abstract: In recent years a wealth of logic-based languages have been introduced to reason about the strategic abilities of autonomous agents in multi-agent systems. This talk presents some of these
important logical formalisms, namely, the Alternating-time Temporal Logic and Strategy Logic. We discuss recent extensions of those formalisms to capture different degrees of satisfaction, as well as
to handle uncertainty caused by the partial observability of agents and the intrinsic randomness of MAS. Finally, we describe the recent application of those formalisms for Mechanism Design and
explain how they can be used either to automatically check that a given mechanism satisfies some desirable property, or to produce a mechanism that does it.
Speaker: Raphael Berthon, RWTH Aachen
Tuesday, 28 November 2023, 14:00, Room 1Z25
Abstract: When verifying multiple properties, it is sufficient to verify each individually, but when synthesizing strategies that satisfy multiple objectives, these objectives must be considered
together. We consider the problem of finding strategies that satisfy a mixture of sure and threshold objectives in Markov decision processes. We focus on a single omega-regular objective expressed as
parity that must be surely met while satisfying n reachability objectives with some probability thresholds too. We consider three combinations with a sure parity objective: (a) strict and (b)
non-strict thresholds on all reachability objectives, and (c) maximizing the thresholds under a lexicographic ordering on the reachability objectives. We highlight the notion of projection for
combining multiple objectives. We show that the decision variants of (a) are in PTIME, (b) in EXPTIME, and (c) can be solved by considering n parity games, and give associated algorithms.
Abstract: The truth semantics of linear logic (i.e. phase semantics) is often overlooked despite having a wide range of applications and deep connections with several denotational semantics. In phase
semantics, one is concerned about the provability of formulas rather than the contents of their proofs (or refutations). Linear logic equipped with the least and greatest fixpoint operators (\muMALL)
has been an active field of research for the past one and a half decades. Various proof systems are known viz. finitary and non-wellfounded, based on explicit and implicit (co)induction respectively.
In this talk, I present an extension of the phase semantics of multiplicative additive linear logic (a.k.a. \MALL) to \muMALL with explicit (co)induction (i.e. \muMALLind). Then I introduce a
Tait-style system for \muMALL where proofs are wellfounded but potentially infinitely branching. We will see its phase semantics and the fact that it does not have the finite model property. This
presentation is based on joint work with Abhishek De and Alexis Saurin. | {"url":"https://lmf.cnrs.fr/Seminar/HomePage?page=1","timestamp":"2024-11-07T23:09:16Z","content_type":"application/xhtml+xml","content_length":"43458","record_id":"<urn:uuid:7fa806e5-8b89-4ec9-8e1d-4712d8664632>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00068.warc.gz"} |
N = 2 JT supergravity and matrix models
Generalizing previous results for N = 0 and N = 1, we analyze N = 2 JT supergravity on asymptotically AdS[2] spaces with arbitrary topology and show that this theory of gravity is dual, in a
holographic sense, to a certain random matrix ensemble in which supermultiplets of different R-charge are statistically independent and each is described by its own N = 2 random matrix ensemble. We
also analyze the case with a time-reversal symmetry, either commuting or anticommuting with the R-charge. In order to compare supergravity to random matrix theory, we develop an N = 2 analog of the
recursion relations for Weil-Petersson volumes originally discovered by Mirzakhani in the bosonic case.
All Science Journal Classification (ASJC) codes
• Nuclear and High Energy Physics
• Black Holes
• Extended Supersymmetry
• Matrix Models
Dive into the research topics of 'N = 2 JT supergravity and matrix models'. Together they form a unique fingerprint. | {"url":"https://collaborate.princeton.edu/en/publications/n-2-jt-supergravity-and-matrix-models","timestamp":"2024-11-12T06:33:35Z","content_type":"text/html","content_length":"41911","record_id":"<urn:uuid:aa902da2-a9ca-4d68-934c-bb2dfed37999>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00184.warc.gz"} |
Category:Draft Programming Tasks - Rosetta CodeCategory:Draft Programming Tasks
These are tasks that are still under development. They might or might not yet have any solutions, but it is quite possible that the description of the task will change. Your help refining these to
become full tasks would be appreciated!
Pages in category "Draft Programming Tasks"
The following 187 pages are in this category, out of 379 total.
previous page
) (next page)
previous page
) (next page) | {"url":"https://rosettacode.org/wiki/Category:Draft_Programming_Tasks?mobileaction=toggle_view_desktop&pagefrom=Next+special+primes","timestamp":"2024-11-10T22:46:55Z","content_type":"text/html","content_length":"64216","record_id":"<urn:uuid:4a05be61-0cc3-4372-a9d5-c3e07a4de77c>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00640.warc.gz"} |
EXC 3672 Analyses of Financial Data
EXC 3672 Analyses of Financial Data
Course coordinator:
Kjell Jørgensen
Course name in Norwegian:
Analyses of Financial Data
Product category:
BBA - Specialisation in Finance
Teaching language:
Course type:
One semester
The course aims at taking advantage of the information contained in data for decision-making. We are all prone to different kind of biases such as overconfidence or recency. Quantitative
empirical methods help us to discipline decision making process. More specifically, they allow to test the existence of a relation between variables (e.g., does inflation affect nominal interest
rates?), quantify this relation (e.g., a one percent increase in inflation should lead to how much increase in nominal interest rate?) and forecast the evolution of variables (e.g, which interest
rate should we expect in six month from now?).
Learning outcomes - Knowledge
The aims of this course are to:
1. Introduce students to important empirical quantitative techniques that are used in finance and more generally in business.
2. Make students able to apply them appropriately.
3. Prepare students for subsequent course work in finance and business.
More specifically, on completion of the course the students acquired knowledge and skills should be as follows:
• Understand basic probability theory
• Understand basic measures of location, tendency and dispersion such as the expectation, median, variance, standard deviation, skewness, kurtosis.
• Understand what is meant by correlation and regression analysis - and the difference between them
• Understand what is meant by Ordinary Least Squares (OLS) - the estimation technique used in order to estimate our econometric model.
• Limits and assumptions of regression analysis
• Be familiar with basic R syntax.
Learning outcomes - Skills
On completion of the course students should be able to use software like R in order to:
• Perform basic data handling
• Estimate financial models formulated as linear regression models
• Test the statistical assumptions underlying OLS.
Learning Outcome - Reflection
Course content
This course introduces students to empirical techniques that are relevant for finance and business in general. More specifically, the outline of the course is as follows:
Foundations for empirical methods in finance.
• Probability basics
• When and why econometric can work
• Econometric basics
Introduction to programming for data analysis
• Data and computer basics: data in finance, what is a programming language
• Introduction to R: Basic data manipulation with R
• Introduction to programming with R: Control structures in R, Monte Carlo simulation
Linear regression analysis
• Simple regression analysis
• Regression analysis with multiple explanatory variables
• Limits and assumptions of regression analysis
Learning process and requirements to students
Each topic will be accompanied by a hands-on practical application of an empirical finance topic.
The software package R will be an integral part of the coursework. R is a software program that has become a standard for data analysis inside academia and corporations, especially in the finance
industry. It is an open source software available free of charge on internet. The use of R will introduce students to some of the basics of programming. Programming is a skill typically required in
the financial industry.
If a student misses a class, it is her/his responsibility to obtain any information provided in class that is not included on the course homepage/itslearning or in the text book.
Computer-based tools: Google, Yahoo finance, Quandl, and itslearning.
This is a course with continuous assessment (several exam components) and one final exam code. Each exam component is graded by using points on a scale from 0-100. The components will be weighted
together according to the information in the course description in order to calculate the final letter grade for the examination code (course). Students who fail to participate in one/some/all exam
elements will get a lower grade or may fail the course. You will find detailed information about the point system and the cut off points with reference to the letter grades when the course start.
At re-sit all exam components must, as a main rule, be retaken during next scheduled course.
The specialisation requires two years of university education in Business Administration or equivalent.
Required prerequisite knowledge
EXC 2910 Mathematics or EXC 2904 Statistics
Exam category:
Form of assessment:
Written submission
Group/Individual (1 - 4)
2 Week(s)
Exam code:
Grading scale:
Point scale leading to ECTS letter grade
All components must, as a main rule, be retaken during next scheduled course
Exam category:
Form of assessment:
Written submission
Group/Individual (1 - 4)
2 Week(s)
Home assignment
Exam code:
Grading scale:
Point scale leading to ECTS letter grade
All components must, as a main rule, be retaken during next scheduled course
Exam category:
Form of assessment:
Written submission
Support materials:
• BI-approved exam calculator
• Simple calculator
• Bilingual dictionary
3 Hour(s)
Exam code:
Grading scale:
Point scale leading to ECTS letter grade
All components must, as a main rule, be retaken during next scheduled course
Exam organisation:
Continuous assessment
Student workload
Activity Duration Comment
Teaching 39 Hour(s)
Other in classroom 3 Hour(s) Computer sessions.
Student's own work with learning resources 133 Hour(s) Review of the slides every evening after the lecture.
Group work / Assignments 5 Hour(s)
Student's own work with learning resources 20 Hour(s)
A course of 1 ECTS credit corresponds to a workload of 26-30 hours. Therefore a course of 7,5 ECTS credit corresponds to a workload of at least 200 hours. | {"url":"https://programmeinfo.bi.no/nb/course/EXC-3672/2018-autumn","timestamp":"2024-11-02T02:14:47Z","content_type":"text/html","content_length":"26401","record_id":"<urn:uuid:dc670423-0758-4c46-bfc3-c182e89e43c0>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00438.warc.gz"} |
Can someone help me with Java multithreading assignment optimization for parallel algorithms in aerospace engineering simulations? | Hire Java Programming Assignment Expert
Can someone help me with Java multithreading assignment optimization for parallel algorithms in aerospace engineering simulations? A bit more research required! I have had 3x-12x problems after 3
years of trials of TPSL and I’d suspect my goal is to improve them once and for all. Can anyone give me advice on how to do this for XQuery for example? I have tried read what he said 4x-12…I’m
trying to understand the purpose of the work but I don’t see any significant improvements as the solution approaches to optimization that should be good though I want to try it out for myself!! It
looks like I may have to take a break thanks to the new solution stated here. So I guess I would like to try and learn more things in more areas. click here to read I think you’re getting the idea.
What I’m going to try and improve on after I’ve done my homework might not even be an XQuery problem I’ve been unable to find! I have just typed this out for my application in forked itup please see
here for the code provided. I really wanted to try it out for myself and I think if you’d like I can point jspenard at you anytime. That’s great!!! I wanted to know if you have any suggestions for
anyone who can help with XQuery. I wish the link I posted read what he said exactly what I was looking for. Thanks and good luck! I keep looking till next. Let me know in the ‘comments or any other
useful information’ section. I hope you’ve found this article useful. Please consider that there’s a similar type of cross-posting feature in Yahoo! that you could make to help find an XQuery model
for an algorithm solving the XQuery. All your suggestions would be appreciated! This thread is a participant in the Amazon Services LLC Associates Query Optimization program, an affiliate advertising
program designed to provide a means for sites to earn fees by advertising and linking to amCan someone help me with Java multithreading assignment optimization for parallel algorithms in aerospace
engineering simulations? Especially, the one in use in my previous answer is new? Will the multithreading work also time-consuming? I am thinking if there is a way to better optimize the
multithreading, I have great interest in this. A: I think there is image source of suggestions on how to optimize your multithreading better than your default multithreading algorithm. But you can
only optimize for Java multithreading. You would have to reduce the number of threads to get better performance. A more in-depth answer for that area can be found in this other answer.
Matlab Programming Homework Help
In Java in general and what has been done in it as a functional/interactivity functional programming problem is a non-linear function (in terms of dimensionality). In my case, the two most generic
optimization algorithms (Arithmetic-vector-vector) use the following two types of variables: std::vector variable and std::vector variable. They can take on a default or even a list of values:
std::vector>>> number; So, for any function that takes into account the value of number and number-like vector, the variable number should be limited to its std::vector case: std::vector name; So,
using Arithmetic-vector-vector you can make yourself more efficient: // this function needs 5×5 functions: int temp_number() { std::vector temp; // for each function, use std::vector to get the
values for the 3 elements of temp // one value, 2,5,3 and 3. std::vector temp_value() { return temp_.value(); } // and so on. As for the above code: // this function has 5Can someone help me with
Java multithreading assignment optimization for parallel algorithms in aerospace engineering simulations? Answers: The current design of a multi-purpose workspace planner her explanation the
efficiency of the interactive task and improves the coherence of the problem. The design is motivated by the author’s desire to use his knowledge to ensure the use of data for two-dimensional
computations in parallel environments. As already discussed before, the available space limitation for the workspace is to be set so that the inner work can be done in parallel, the inner components
can be done in parallel, and the inner algorithms know-how and resources are available. The design comes with several benefits: It makes it possible to work with tensor data at different scales. The
optimization problem is controlled through an inner optimization algorithm. The inner optimization algorithm can be carried out in minimal number of iterations as long as the expected space is
feasible-overhead. The speed at which the algorithm is operated is not dependent on the power of the inner or outer optimization algorithms. In order to increase the speed of the inner optimization,
the next iteration of the optimization is executed in parallel. The optimization and the inner optimization can be carried out in just one-copy time and space. In this case, the time is taken to
complete each operation before running the inner optimization algorithm. For instance, the previous time for the inner optimization algorithm is six times the actual time in binary processing, and
thus the speed is less than that of the parallel algorithm. I’m sorry for bad english 🙂 Can anyone see this me if there should a simple way to solve the problem that we have proposed in the post
mentioned: The wikipedia reference : i = find() and do i = i + 2. would much be easier. so if this is an elegant way of solving the problem of finding the shortest search distance that is feasible
from local minima and maxima of x(i) Could anyone help me? I was really confused about the line | {"url":"https://javaassignments.com/can-someone-help-me-with-java-multithreading-assignment-optimization-for-parallel-algorithms-in-aerospace-engineering-simulations","timestamp":"2024-11-13T16:10:56Z","content_type":"text/html","content_length":"118777","record_id":"<urn:uuid:172dfb4b-845e-4ccf-b16d-c58baeffdc43>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00494.warc.gz"} |
Firewood Weight and BTU Chart (160+ Trees) - Theyardable
• MC- Moisture content
• BTU- British Thermal Unit
Common ft3/ ft3/lbs, MC 1 cord of stacked green 1 cord stacked wood MC 20% 1/2 cord stacked firewood MC 1/4 cord stacked wood MC BTUs per cord MC BTUs per cord 70% fireplace efficency
Name lbs 20% wood (lbs) (lbs) 20% (lbs) 20% (lbs) 20% & MC 20%
Please note that the table describes the average volume of one net cord’s weight, which in this case is 85 ft^3 (a stacked cord of wood)
Calculating the correct number of BTUs for each tree is a tricky process because each tree produces a slightly different amount of heat due to sap and resin content.
However, the amount of heat generated per pound of firewood is very similar. In ideal conditions one pound of wood, with 0% moisture content can produce 8600 to 9000 BTUs per pound depending on the
resing content. Wood with higher resin burns slightly hotter.
Furthermore, the combustion process itself can reduce the amount of potential BTU amount about 6%. If we take that into account, we will get two numbers:
8084 BTUs per pound for non-resinous woods
8460 BTUs per pound for high resinous woods
Since we are doing calculations on bulk to get a general ballpark, let’s take an average of that number, which would be 8272
Now all that’s left to do is to calculate how the moisture content inside the wood affects its efficiency.
The results are as follows:
At 65% Moisture content the wood would output BTUs 5013 per pound
At 20% moisture content, the average heat output would be 6893 BTUs per pound.
Another factor that affects the heat output of the wood is the efficiency of your fireplace. For example, if your fireplace efficiency rating is at 70% it will reduce the amount of BTUs received by
Elaborating on BTU and Net Cord
BTU – this unit of measurement is most commonly used for evaluating the thermal output of timber. 1 BTU is the amount of energy it takes to heat 1 lb of water by 1 F. Cord of firewood measures 4x4x8
ft equaling 128 cubic feet in dimensions.
However, the net cord is the actual amount of wood in a cord. It is calculated by subtracting the bark and air space from the cord of wood. Usually, in one cord of stacked firewood, around 75 to 100
cubic feet of is the actual amount of wood, or ‘net cord’
To elaborate more, 1 BTU is the amount of energy that is released when lighting a match. In contrast, 1 Million BTU is equal to the energy generated by 179 gallons of oil or 7,283 kW of electricity.
I am the guy behind Theyardable.com. I grew up on a homestead and I am here to share the knowledge I have and things I learn while living in the countryside. | {"url":"https://theyardable.com/firewood-weight-btu-chart/","timestamp":"2024-11-11T12:42:20Z","content_type":"text/html","content_length":"195494","record_id":"<urn:uuid:ecc13b5d-c2d6-4d71-86fa-e3b6692ddcea>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00313.warc.gz"} |
ME6502 Heat and Mass Transfer Important questions Regulation 2013
ME6502 Heat and Mass Transfer Important questions Regulation 2013 Anna University
ME6502 Heat and Mass Transfer Important questions
ME6502 Heat and Mass Transfer Important questions Regulation 2013 Anna University free download. Heat and Mass Transfer ME6502 Important questions pdf free download.
Sample ME6502 Heat and Mass Transfer Important questions:
PART – A
1. What is Fourier’s Law of heat conduction?
2. What is temperature gradient?
3. What is coefficient of Thermal conductivity?
4. Give some examples of heat transfer in engineering.
6. Define Temperature field. (ME6502 Heat and Mass Transfer Important questions)
7. Define heat flux.
8. Define thermal Diffusivity.
9. What is Lap lace equation for heat flow?
10. What is Poisson’s equation for heat flow?
11. What critical radius of insulation; (ME6502 Heat and Mass Transfer Important questions)
12. Give examples for initial’&; boundary conditions.
13. What is a Fin?
14. Define efficiency of the fin.
15. Define effectiveness of the fin.
16. Give examples of use of fins in various engineering applications.
17. What is meant by Transient heat conduction? (ME6502 Heat and Mass Transfer Important questions)
18. Give governing differential equation for the one dimensional transient heat flow.
19. What is Biot number?
20. What is Newtonian heating or cooling process?
21. Give examples for Transient heat transfer.
22. What is meant by thermal resistance?
23. What is meant by periodic heat transfer?
24. What are Heisler chart? (ME6502 Heat and Mass Transfer Important questions)
25. What is the function of insulating materials?
PART – B
1. A pipe consists of 100 mm internal diameter and 8mm thickness carries steam at 170°C. The convective heat transfer coefficient on the inner surface of pipe is 75 W/m2C. The pipe is insulated by
two layers of insulation. The first layer of insulation is 46 mm in thickness having thermal conductivity of 0.14 W/m°C. The second layer of insulation is also 46 mm in thickness having thermal
conductivity of 0.46 W/m°C. Ambient air temperature = 33°C. The convective heat transfer coefficient from the outer surface of pipe = 12 W/m2C. Thermal conductivity of
steam pipe = 46 W/m°C. Calculate the heat loss per unit length of pipe and determine the interface temperatures. Suggest the materials used for insulation.
(ME6502 Heat and Mass Transfer Important questions)
Subject Name Heat and Mass Transfer
Subject code ME6502
Regulation 2013
ME6502 Heat and Mass Transfer Important questions Click here to download
ME6502 Heat and Mass Transfer Syllabus
ME6502 Heat and Mass Transfer Notes | {"url":"https://padeepz.net/me6502-heat-and-mass-transfer-important-questions-regulation-2013-anna-university/","timestamp":"2024-11-09T22:39:33Z","content_type":"text/html","content_length":"49247","record_id":"<urn:uuid:b3ef81f3-c73d-4f3d-81da-7832c91b720c>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00624.warc.gz"} |
Quantum Computing: A Gentle Introduction - PDF Free Download
Scientific and Engineering Computation William Gropp and Ewing Lusk, editors; Janusz Kowalik, founding editor
A complete list of the books in this series can be found at the back of this book.
QUANTUM COMPUTING A Gentle Introduction
Eleanor Rieffel and Wolfgang Polak
The MIT Press Cambridge, Massachusetts London, England
©2011 Massachusetts Institute of Technology All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or
information storage and retrieval) without permission in writing from the publisher. For information about special quantity discounts, please email
[email protected]
This book was set in Syntax and Times Roman by Westchester Book Group. Printed and bound in the United States of America. Library of Congress Cataloging-in-Publication Data Rieffel, Eleanor, 1965–
Quantum computing : a gentle introduction / Eleanor Rieffel and Wolfgang Polak. p. cm.—(Scientific and engineering computation) Includes bibliographical references and index. ISBN 978-0-262-01506-6
(hardcover : alk. paper) 1. Quantum computers. 2. Quantum theory. I. Polak, Wolfgang, 1950– II. Title. QA76.889.R54 2011 004.1—dc22 2010022682 10 9 8 7 6 5 4 3 2 1
Single-Qubit Quantum Systems
2.2 2.3 2.4 2.5
2.6 2.7
The Quantum Mechanics of Photon Polarization 9 2.1.1 A Simple Experiment 10 2.1.2 A Quantum Explanation 11 Single Quantum Bits 13 Single-Qubit Measurement 16 A Quantum Key Distribution Protocol 18
The State Space of a Single-Qubit System 21 2.5.1 Relative Phases versus Global Phases 21 2.5.2 Geometric Views of the State Space of a Single Qubit 2.5.3 Comments on General Quantum State Spaces 25
References 25 Exercises 26
Multiple-Qubit Systems 3.1
3.2 3.3 3.4 3.5 3.6
Quantum State Spaces 32 3.1.1 Direct Sums of Vector Spaces 32 3.1.2 Tensor Products of Vector Spaces 33 3.1.3 The State Space of an n-Qubit System 34 Entangled States 38 Basics of Multi-Qubit
Measurement 41 Quantum Key Distribution Using Entangled States 43 References 44 Exercises 44
Measurement of Multiple-Qubit States 4.1 4.2
Dirac’s Bra/Ket Notation for Linear Transformations Projection Operators for Measurement 49
4.3 4.4
4.5 4.6
Quantum State Transformations 5.1 5.2
5.5 5.6 5.7 5.8
Hermitian Operator Formalism for Measurement 53 4.3.1 The Measurement Postulate 55 EPR Paradox and Bell’s Theorem 60 4.4.1 Setup for Bell’s Theorem 62 4.4.2 What Quantum Mechanics Predicts 62 4.4.3
Special Case of Bell’s Theorem: What Any Local Hidden Variable Theory Predicts 4.4.4 Bell’s Inequality 64 References 65 Exercises 66
Unitary Transformations 72 5.1.1 Impossible Transformations: The No-Cloning Principle 73 Some Simple Quantum Gates 74 5.2.1 The Pauli Transformations 75 5.2.2 The Hadamard Transformation 76 5.2.3
Multiple-Qubit Transformations from Single-Qubit Transformations 5.2.4 The Controlled-NOT and Other Singly Controlled Gates 77 Applications of Simple Gates 80 5.3.1 Dense Coding 81 5.3.2 Quantum
Teleportation 82 Realizing Unitary Transformations as Quantum Circuits 84 5.4.1 Decomposition of Single-Qubit Transformations 84 5.4.2 Singly-Controlled Single-Qubit Transformations 86 5.4.3
Multiply-Controlled Single-Qubit Transformations 87 5.4.4 General Unitary Transformations 89 A Universally Approximating Set of Gates 91 The Standard Circuit Model 93 References 93 Exercises 94
Quantum Versions of Classical Computations 6.1 6.2
From Reversible Classical Computations to Quantum Computations 99 6.1.1 Reversible and Quantum Versions of Simple Classical Gates 101 Reversible Implementations of Classical Circuits 103 6.2.1 A
Naive Reversible Implementation 103 6.2.2 A General Construction 106 A Language for Quantum Implementations 110 6.3.1 The Basics 111 6.3.2 Functions 112 Some Example Programs for Arithmetic
Operations 115 6.4.1 Efficient Implementation of AND 115 6.4.2 Efficient Implementation of Multiply-Controlled Single-Qubit Transformations 6.4.3 In-Place Addition 117 6.4.4 Modular Addition 117 6.4.5
Modular Multiplication 118 6.4.6 Modular Exponentiation 119
6.5 6.6
II 7
References 120 Exercises 121
Introduction to Quantum Algorithms 7.1
7.3 7.4
7.6 7.7
7.9 7.10
Shor’s Algorithm 8.1 8.2
8.3 8.4 8.5 8.6
Computing with Superpositions 126 7.1.1 The Walsh-Hadamard Transformation 126 7.1.2 Quantum Parallelism 128 Notions of Complexity 130 7.2.1 Query Complexity 131 7.2.2 Communication Complexity 132 A
Simple Quantum Algorithm 132 7.3.1 Deutsch’s Problem 133 Quantum Subroutines 134 7.4.1 The Importance of Unentangling Temporary Qubits in Quantum Subroutines 7.4.2 Phase Change for a Subset of Basis
Vectors 135 7.4.3 State-Dependent Phase Shifts 138 7.4.4 State-Dependent Single-Qubit Amplitude Shifts 139 A Few Simple Quantum Algorithms 140 7.5.1 Deutsch-Jozsa Problem 140 7.5.2 Bernstein-Vazirani
Problem 141 7.5.3 Simon’s Problem 144 7.5.4 Distributed Computation 145 Comments on Quantum Parallelism 146 Machine Models and Complexity Classes 148 7.7.1 Complexity Classes 149 7.7.2 Complexity:
Known Results 150 Quantum Fourier Transformations 153 7.8.1 The Classical Fourier Transform 153 7.8.2 The Quantum Fourier Transform 155 7.8.3 A Quantum Circuit for Fast Fourier Transform 156
References 158 Exercises 159
Classical Reduction to Period-Finding 164 Shor’s Factoring Algorithm 164 8.2.1 The Quantum Core 165 8.2.2 Classical Extraction of the Period from the Measured Value Example Illustrating Shor’s
Algorithm 167 The Efficiency of Shor’s Algorithm 169 Omitting the Internal Measurement 170 Generalizations 171 8.6.1 The Discrete Logarithm Problem 172 8.6.2 Hidden Subgroup Problems 172
8.7 8.8
Grover’s Algorithm and Generalizations 9.1
9.2 9.3
9.6 9.7 9.8
III 10
References 175 Exercises 176
Grover’s Algorithm 178 9.1.1 Outline 178 9.1.2 Setup 178 9.1.3 The Iteration Step 180 9.1.4 How Many Iterations? 181 Amplitude Amplification 183 9.2.1 The Geometry of Amplitude Amplification 185
Optimality of Grover’s Algorithm 188 9.3.1 Reduction to Three Inequalities 189 9.3.2 Proofs of the Three Inequalities 191 Derandomization of Grover’s Algorithm and Amplitude Amplification 193 9.4.1
Approach 1: Modifying Each Step 194 9.4.2 Approach 2: Modifying Only the Last Step 194 Unknown Number of Solutions 196 9.5.1 Varying the Number of Iterations 197 9.5.2 Quantum Counting 198 Practical
Implications of Grover’s Algorithm and Amplitude Amplification 199 References 200 Exercises 201
ENTANGLED SUBSYSTEMS AND ROBUST QUANTUM COMPUTATION Quantum Subsystems and Properties of Entangled States 10.1
10.3 10.4
10.5 10.6
Quantum Subsystems and Mixed States 206 10.1.1 Density Operators 207 10.1.2 Properties of Density Operators 213 10.1.3 The Geometry of Single-Qubit Mixed States 215 10.1.4 Von Neumann Entropy 216
Classifying Entangled States 218 10.2.1 Bipartite Quantum Systems 218 10.2.2 Classifying Bipartite Pure States up to LOCC Equivalence 222 10.2.3 Quantifying Entanglement in Bipartite Mixed States 224
10.2.4 Multipartite Entanglement 225 Density Operator Formalism for Measurement 229 10.3.1 Measurement of Density Operators 230 Transformations of Quantum Subsystems and Decoherence 232 10.4.1
Superoperators 233 10.4.2 Operator Sum Decomposition 234 10.4.3 A Relation Between Quantum State Transformations and Measurements 10.4.4 Decoherence 239 References 240 Exercises 240
Quantum Error Correction 11.1
11.5 11.6 11.7
Three Simple Examples of Quantum Error Correcting Codes 246 11.1.1 A Quantum Code That Corrects Single Bit-Flip Errors 246 11.1.2 A Code for Single-Qubit Phase-Flip Errors 251 11.1.3 A Code for All
Single-Qubit Errors 252 Framework for Quantum Error Correcting Codes 253 11.2.1 Classical Error Correcting Codes 254 11.2.2 Quantum Error Correcting Codes 257 11.2.3 Correctable Sets of Errors for
Classical Codes 258 11.2.4 Correctable Sets of Errors for Quantum Codes 259 11.2.5 Correcting Errors Using Classical Codes 261 11.2.6 Diagnosing and Correcting Errors Using Quantum Codes 264 11.2.7
Quantum Error Correction across Multiple Blocks 268 11.2.8 Computing on Encoded Quantum States 268 11.2.9 Superpositions and Mixtures of Correctable Errors Are Correctable 269 11.2.10 The Classical
Independent Error Model 270 11.2.11 Quantum Independent Error Models 271 CSS Codes 274 11.3.1 Dual Classical Codes 274 11.3.2 Construction of CSS Codes from Classical Codes Satisfying a Duality
Condition 11.3.3 The Steane Code 278 Stabilizer Codes 280 11.4.1 Binary Observables for Quantum Error Correction 280 11.4.2 Pauli Observables for Quantum Error Correction 282 11.4.3 Diagnosing and
Correcting Errors 283 11.4.4 Computing on Encoded Stabilizer States 285 CSS Codes as Stabilizer Codes 289 References 290 Exercises 291
Fault Tolerance and Robust Quantum Computing 12.1 12.2
12.4 12.5
Further Topics in Quantum Information Processing 13.1 13.2
Setting the Stage for Robust Quantum Computation 294 Fault-Tolerant Computation Using Steane’s Code 297 12.2.1 The Problem with Syndrome Computation 297 12.2.2 Fault-Tolerant Syndrome Extraction and
Error Correction 12.2.3 Fault-Tolerant Gates for Steane’s Code 300 12.2.4 Fault-Tolerant Measurement 303 12.2.5 Fault-Tolerant State Preparation of |π/4 304 Robust Quantum Computation 305 12.3.1
Concatenated Coding 306 12.3.2 A Threshold Theorem 308 References 310 Exercises 310
Further Quantum Algorithms 311 Limitations of Quantum Computing
13.3 13.4
Further Techniques for Robust Quantum Computation 314 Alternatives to the Circuit Model of Quantum Computation 316 13.4.1 Measurement-Based Cluster State Quantum Computation 317 13.4.2 Adiabatic
Quantum Computation 318 13.4.3 Holonomic Quantum Computation 319 13.4.4 Topological Quantum Computation 320 13.5 Quantum Protocols 320 13.6 Insight into Classical Computation 321 13.7 Building
Quantum Computers 322 13.8 Simulating Quantum Systems 325 13.9 Where Does the Power of Quantum Computation Come From? 326 13.10 What if Quantum Mechanics Is Not Quite Correct? 327
Some Relations Between Quantum Mechanics and Probability Theory A.1 A.2 A.3 A.4
Tensor Products in Probability Theory 331 Quantum Mechanics as a Generalization of Probability Theory References 339 Exercises 339
Solving the Abelian Hidden Subgroup Problem B.1 B.2
B.3 B.4
B.5 B.6 B.7
Representations of Finite Abelian Groups 341 B.1.1 Schur’s Lemma 344 Quantum Fourier Transforms for Finite Abelian Groups 345 B.2.1 The Fourier Basis of an Abelian Group 345 B.2.2 The Quantum Fourier
Transform Over a Finite Abelian Group General Solution to the Finite Abelian Hidden Subgroup Problem 348 Instances of the Abelian Hidden Subgroup Problem 350 B.4.1 Simon’s Problem 350 B.4.2 Shor’s
Algorithm: Finding the Period of a Function 351 Comments on the Non-Abelian Hidden Subgroup Problem 351 References 351 Exercises 352
Bibliography 353 Notation Index 365 Index 369
Quantum computing is a beautiful combination of quantum physics, computer science, and information theory. The purpose of this book is to make this exciting research area accessible to a broad
audience. In particular, we endeavor to help the reader bridge the conceptual and notational barriers that separate quantum computing from conventional computing. The book is concerned with theory:
what changes when the classical model underpinning conventional computing is replaced with a quantum one. It contains only a brief discussion of the ongoing efforts to build quantum computers, an
active area which is still so young that it is impossible even for experts to predict which approaches will be most successful. While this book is about theory, it is important to ground the
discussion of quantum computation in the physics that motivates it. For this reason, the text includes discussions of quantum physics and experiments that illustrate why the theory is defined the way
it is. We precisely define concepts used in quantum computation and emphasize subtle distinctions. This rigor is motivated in part by our experience working with members of the joint FXPAL1 /PARC2
reading group and with reviewing papers by authors new to the field. Mistakes commonly arise due to a lack of precision. For example, we take care to distinguish a quantum state from a vector that
represents it. We make clear which notions are basis dependent (e.g., superposition) and which are not (e.g., entanglement), and emphasize the dependence of certain notions (e.g., entanglement) on a
particular tensor decomposition. The distinction between tensor decompositions and direct sum decompositions, both used extensively in quantum mechanics, is discussed explicitly in both quantum
mechanical and classical probabilistic settings. Definitions are carefully motivated. For example, instead of starting with axioms for density operators or mixed states, the definitions of these
concepts are motivated by a discussion of what can be deduced about a subsystem from measurements of the subsystem alone. One advantage of dealing only with theory, and not with the efforts to build
quantum computers, is that the amount of quantum physics and supporting mathematics needed is reduced. We are able to develop all of the necessary quantum mechanics within the book; no previous
exposure to quantum physics is required. We give careful and precise descriptions of fundamental concepts— such as quantum state spaces, quantum measurement, and entanglement—before covering the
standard quantum algorithms and other quantum information processing tasks such as quantum key distribution and quantum teleportation. The intent of this book is to make quantum computing accessible
to a wide audience of computer scientists, engineers, mathematicians, and anyone with a general interest in the subject who knows sufficient mathematics. Basic concepts from college-level linear
algebra such as vector spaces, linear transformations, eigenvalues, and eigenvectors are used throughout the book. Afew sections require more mathematics; familiarity with group theory is required
for sections 8.6.1 and 8.6.2, appendix B, and much of chapter 11. Group theory is reviewed in boxes, but readers who have never seen group theory should consult a book on the subject or skip those
sections. While we hope our book lives up to the gentle of its title, reading it will require effort. Many of the concepts are subtle and unintuitive, and much of the notation unfamiliar. Readers
will need to spend time working with the concepts and notations to develop a level of fluency at each stage. For example, even readers with significant mathematical background may not have worked
much with tensor products and may not be familiar with the relation of tensor product spaces to their component spaces. The early chapters of the book develop these notions carefully, since they are
absolutely fundamental to quantum information processing. It is well worth the effort to master them, as well as the concise Dirac notation in which they are generally expressed, but mastery will
require effort. The precise nature of these mathematical formalisms provides a means of working with quantum concepts before fully understanding them. Intuition for quantum mechanics and quantum
information processing will develop from playing with the formal mathematics. The book emphasizes features of quantum mechanics that give quantum computation its power and are responsible for its
limitations. Neither the extent of the power of quantum computation nor its limitations have been fully understood. Research challenges remain not only in building quantum computers and developing
novel algorithms and protocols, but also in answering fundamental questions as to the source of quantum computing’s power and the reasons for its limitations. This book examines what is known about
what quantum computers can and cannot do, and also explores what is known about why. The focus on the reasons underlying quantum computing’s effectiveness results in the inclusion of topics
frequently left out of other expositions of the subject. For example, one theme of the book is the relationship of quantum information processing to probability. That many quantum algorithms are
nonprobabilistic is emphasized. A section is devoted to modifications of Grover’s original algorithm that preserve the speed-up but return a solution with certainty. On the other hand, the strong
formal resemblance between quantum theory and probability theory is described in detail and distinctions are highlighted, illuminating, for example, how entanglement differs from correlation, and the
difference between a superposition and a mixture. As another example, while quantum entanglement is the most common explanation given for why quantum information processing works, multipartite
entanglement remains poorly understood. Bipartite entanglement is much better understood but has limited use for understanding quantum computation. The book includes sections on multipartite
entanglement, a topic often left
out of introductory books, and discusses bipartite entanglement. Discussions of multipartite entanglement require examples, which made it natural to include a section on cluster states, the
fundamental entanglement resource used for cluster state, or one-way, quantum computation. Cluster state quantum computation and adiabatic quantum computation, two alternatives to the standard
circuit model, are briefly introduced and their strengths and applications discussed. As a final example, while the conversion between general classical circuits and reversible classical circuits is
a purely classical topic, it is the heart of the proof that anything a classical computer can do, a quantum computers can do with comparable efficiency. For this reason, the book includes a detailed
account of this piece of classical, but nonstandard, computer science. This is not a book about quantum mechanics. We treat quantum mechanics as an abstract mathematical theory and consider the
physical aspects only to elucidate theoretical concepts. We do not discuss issues of interpretation of quantum mechanics; the occasional use of terms such as quantum parallelism, for example, is not
to be construed as an endorsement of one or another particular interpretation. Acknowledgments
We are enormously indebted to Michael B. Heaney and Paul McEvoy, both of whom read multiple versions of many of the chapters and provided valuable comments each time. It is largely due to their
steadfast belief in this project that the book reached completion. The FXPAL/PARC reading group enabled us to discover which expository approaches worked and which did not. The group’s comments,
struggles, and insights spurred substantial improvements in the book. We are grateful to all of the members of that group, particularly Dirk Balfanz, Stephen Jackson, and Michael Plass. Many thanks
to Tad Hogg and Marc Rieffel for their feedback on some of the most technical and notationally heavy sections. Thanks also go to Gene Golovchinsky for suggestions that clarified and streamlined the
writing of an early draft, to Livia Polanyi for suggestions that positively impacted the flow and emphasis, to Garth Dales for comments on an early draft that improved our wording and use of
notation, and to Denise Greaves for extensive editorial assistance. Many people provided valuable comments on drafts of the tutorial3 that was the starting point for this book. Their comments
improved this book as well as the tutorial. We gratefully acknowledge the support of FXPAL for part of this work. We are grateful to our friends, to our family, and especially to our spouses for
their support throughout the years it took us to write this book. Notes 1. FX Palo Alto Laboratory. 2. Palo Alto Research Center. 3. E. G. Rieffel and W. Polak. An introduction to quantum computing
for non-physicists. ACM Computing Surveys, 32(3):300–335, 2000.
In the last decades of the twentieth century, scientists sought to combine two of the century’s most influential and revolutionary theories: information theory and quantum mechanics. Their success
gave rise to a new view of computation and information. This new view, quantum information theory, changed forever how computation, information, and their connections with physics are thought about,
and it inspired novel applications, including some wildly different algorithms and protocols. This view and the applications it spawned are the subject of this book. Information theory, which
includes the foundations of both computer science and communications, abstracted away the physical world so effectively that it became possible to talk about the major issues within computer science
and communications, such as the efficiency of an algorithm or the robustness of a communication protocol, without understanding details of the physical devices used for the computation or the
communication. This ability to ignore the underlying physics proved extremely powerful, and its success can be seen in the ubiquity of the computing and communications devices around us. The
abstraction away from the physical had become such a part of the intellectual landscape that the assumptions behind it were almost forgotten. At its heart, until recently, information sciences have
been firmly rooted in classical mechanics. For example, the Turing machine is a classical mechanical model that behaves according to purely classical mechanical principles. Quantum mechanics has
played an ever-increasing role in the development of new and more efficient computing devices. Quantum mechanics underlies the working of traditional, classical computers and communication devices,
from the transistor through the laser to the latest hardware advances that increase the speed and power and decrease the size of computer and communications components. Until recently, the influence
of quantum mechanics remained confined to the lowlevel implementation realm; it had no effect on how computation or communication was thought of or studied. In the early 1980s, a few researchers
realized that quantum mechanics had unanticipated implications for information processing. Charles Bennett and Gilles Brassard, building on ideas of Stephen Wiesner, showed how nonclassical
properties of quantum measurement provided a provably secure mechanism for establishing a cryptographic key. Richard Feynman, Yuri Manin, and others recognized that certain quantum
phenomena—phenomena associated with so-called
1 Introduction
entangled particles—could not be simulated efficiently by a Turing machine. This observation led to speculation that perhaps these quantum phenomena could be used to speed up computation in general.
Such a program required rethinking the information theoretic model underlying computation, taking it out of the purely classical realm. Quantum information processing, a field that includes quantum
computing, quantum cryptography, quantum communications, and quantum games, explores the implications of using quantum mechanics instead of classical mechanics to model information and its
processing. Quantum computing is not about changing the physical substrate on which computation is done from classical to quantum, but rather changing the notion of computation itself. The change
starts at the most basic level: the fundamental unit of computation is no longer the bit, but rather the quantum bit or qubit. Placing computation on a quantum mechanical foundation led to the
discovery of faster algorithms, novel cryptographic mechanisms, and improved communication protocols. The phrase quantum computing does not parallel the phrases DNA computing or optical computing:
these describe the substrate on which computation is done without changing the notion of computation. Classical computers, the ones we all have on our desks, make use of quantum mechanics, but they
compute using bits, not qubits. For this reason, they are not considered quantum computers. A quantum or classical computer may or may not be an optical computer, depending on whether optical devices
are used to carry out the computation. Whether the computer is quantum or classical depends on whether the information is represented and manipulated in a quantum or classical way. The phrase quantum
computing is closer in character to analog computing because the computational model for analog computing differs from that of standard computing: a continuum of values, rather than only a discrete
set, is allowed. While the phrases are parallel, the two models differ greatly in that analog computation does not support entanglement, a key resource for quantum computation, and measurements of a
quantum computer’s registers can yield only a small, discrete set of values. Furthermore, while a qubit can take on a continuum of values, in many ways a qubit resembles a bit, with its two discrete
values, more than it does analog computation. For example, as we will see in section 4.3.1, only one bit’s worth of information can be extracted from a qubit by measurement. The field of quantum
information processing developed slowly in the 1980s and early 1990s as a small group of researchers worked out a theory of quantum information and quantum information processing. David Deutsch
developed a notion of a quantum mechanical Turing machine. Daniel Bernstein, Vijay Vazirani, and Andrew Yao improved upon his model and showed that a quantum Turing machine could simulate a classical
Turing machine, and hence any classical computation, with at most a polynomial time slowdown. The standard quantum circuit model was then defined, which led to an understanding of quantum complexity
in terms of a set of basic quantum transformations called quantum gates. These gates are theoretical constructs that may or may not have direct analogs in the physical components of an actual quantum
computer. In the early 1990s, researchers developed the first truly quantum algorithms. In spite of the probabilistic nature of quantum mechanics, the first quantum algorithms, for which superiority
1 Introduction
over classical algorithms could be proved, give the correct answer with certainty. They improve upon classical algorithms by solving in polynomial time with certainty a problem that can be solved in
polynomial time only with high probability using classical techniques. Such a result is of no direct practical interest, since the impossibility of building a perfect machine reduces any practical
machine running any algorithm to solving a problem only with high probability. But such results were of high theoretical interest, since they showed for the first time that quantum computation is
theoretically more powerful than classical computation for certain computational problems. These results caught the interest of various researchers, including Peter Shor, who in 1994 surprised the
world with his polynomial-time quantum algorithm for factoring integers. This result provided a solution to a well-studied problem of practical interest. A classical polynomial-time solution had long
been sought, to the point where the world felt sufficiently confident that no such solution existed that many security protocols, including the widely used RSA algorithm, base their security entirely
on the computational difficulty of this problem. It is unknown whether an efficient classical solution exists, so Shor’s result does not prove that quantum computers can solve a problem more
efficiently than a classical computer. But even in the unlikely event that a polynomial-time classical algorithm is found for this problem, it would be an indication of the elegance and effectiveness
of the quantum information theory point of view that a quantum algorithm, in spite of all the unintuitive aspects of quantum mechanics, was easier to find. While Shor’s result sparked a lot of
interest in the field, doubts as to its practical significance remained. Quantum systems are notoriously fragile. Key properties, such as quantum entanglement, are easily disturbed by environmental
influences that cause the quantum states to decohere. Properties of quantum mechanics, such as the impossibility of reliably copying an unknown quantum state, made it look unlikely that effective
error-correction techniques for quantum computation could ever be found. For these reasons, it seemed unlikely that reliable quantum computers could be built. Luckily, in spite of serious and
widespread doubts as to whether quantum information processing could ever be practical, the theory itself proved so tantalizing that researchers continued to explore it. As a result, in 1996 Shor and
Robert Calderbank, and independently Andrew Steane, saw a way to finesse the seemingly show-stopping problems of quantum mechanics to develop quantum error correction techniques. Today, quantum error
correction is arguably the most mature area of quantum information processing. How practical quantum computing and quantum information will turn out is still unknown. No fundamental physical
principles are known that prohibit the building of large-scale and reliable quantum computers. Engineering issues, however, remain. As of this writing, laboratory experiments have demonstrated
quantum computations with several quantum bits performing dozens of quantum operations. Myriad promising approaches are being explored by theorists and experimentalists around the world, but much
uncertainty remains as to how, when, or even whether, a quantum computer capable of carrying out general quantum computations on hundreds of qubits will be built.
1 Introduction
Quantum computational approaches improve upon classical methods for a number of specialized tasks. The extent of quantum computing’s applicability is still being determined. It does not provide
efficient solutions to all problems; neither does it provide a universal way of circumventing the slowing of Moore’s law. Strong limitations on the power of quantum computation are known; for many
problems, it has been proven that quantum computation provides no significant advantage over classical computation. Grover’s algorithm, the other major algorithm of the mid1990s, provides a small
speedup for unstructured search algorithms. But it is also known that this small speedup is the most that quantum algorithms can attain. Grover’s search algorithm applies to unstructured search. For
other search problems, such as searching an ordered list, quantum computation provides no significant advantage over classical computation. Simulation of quantum systems is the other significant
application of quantum computation known in the mid-1990s. Of interest in its own right, the simulation of increasingly larger quantum systems may provide a bootstrap that will ultimately lead to the
building of a scalable quantum computer. After Grover’s algorithm, there was a hiatus of more than five years before a significantly new algorithm was discovered. During that time, other areas of
quantum information processing, such as quantum error correction, advanced significantly. In the early 2000s, several new algorithms were discovered. Like Shor’s algorithm, these algorithms solve
specific problems with narrow, if important, applications. Novel approaches to constructing quantum algorithms also developed. Investigations of quantum simulation from a
quantum-information-processing point of view have led to improved classical techniques for simulating quantum systems, as well as novel quantum approaches. Similarly, the
quantum-information-processing point of view has led to novel insights into classical computing, including new classical algorithms. Furthermore, alternatives to the standard circuit model of quantum
computation have been developed that have led to new quantum algorithms, breakthroughs in building quantum computers, new approaches to robustness, and significant insights into the key elements of
quantum computation. However long it takes to build a scalable quantum computer and whatever the breadth of applications turns out to be, quantum information processing has changed forever the way in
which quantum physics is understood. The quantum information processing view of quantum mechanics has done much to clarify the character of key aspects of quantum mechanics such as quantum
measurement and entanglement. This advancement in knowledge has already had applications outside of quantum information processing to the creation of highly entangled states used for microlithography
at scales below the wavelength limit and for extraordinarily accurate sensors. The precise practical consequences of this increased understanding of nature are hard to predict, but the unification of
the two theories that had the most profound influence on the technological advances of the twentieth century can hardly fail to have profound effects on technological and intellectual developments
throughout the twenty-first. Part I of this book covers the basic building blocks of quantum information processing: quantum bits and quantum gates. Physical motivation for these building blocks is
given and tied to the key quantum concepts of quantum measurement, quantum state transformations, and entanglement between quantum subsystems. Each of these concepts is explored in depth. Quantum key
1 Introduction
distribution, quantum teleportation, and quantum dense coding are introduced along the way. The final chapter of part I shows that anything that can be done on a classical computer can be done with
comparable efficiency on a quantum computer. Part II covers quantum algorithms. It begins with a description of some of the most common elements of quantum computation. Since the advantage of quantum
computation over classical computation is all about efficiency, part II carefully defines notions of complexity. Part II also discusses known bounds on the power of quantum computation. A number of
simple algorithms are described. Full chapters are devoted to Shor’s algorithm and Grover’s algorithm. Part III explores entanglement and robust quantum computation. A discussion of quantum
subsystems leads into discussions of quantifying entanglement and of decoherence, the environmental errors affecting a quantum system because it is really a part of a larger quantum system. The
elegant and important topic of quantum error correction fills a chapter, followed by a chapter on techniques to achieve fault tolerance. The book finishes with brief descriptions and pointers to
references for many quantum information processing topics the book could not cover in depth. These include further quantum algorithms and protocols, adiabatic, cluster state, holonomic, and
topological quantum computing, and the impact quantum information processing has had on classical computer science and physics.
Quantum mechanics, that mysterious, confusing discipline, which none of us really understands, but which we know how to use. —Murray Gell-Mann [126]
Single-Qubit Quantum Systems
Quantum bits are the fundamental units of information in quantum information processing in much the same way that bits are the fundamental units of information for classical processing. Just as there
are many ways to realize classical bits physically (two voltage levels, lights on or off in an array, positions of toggle switches), there are many ways to realize quantum bits physically. As is done
in classical computer science, we will concern ourselves only rarely with how the quantum bits are realized. For the sake of concretely illustrating quantum bits and their properties, however,
section 2.1 looks at the behavior of polarized photons, one of many possible realizations of quantum bits. Section 2.2 abstracts key properties from the polarized photon example of section 2.1 to
give a precise definition of a quantum bit, or qubit, and a description of the behavior of quantum bits under measurement. Dirac’s bra / ket notation, the standard notation used throughout quantum
information processing as well as quantum mechanics, is introduced in this section. Section 2.4 describes the first application of quantum information processing: quantum key distribution. The
chapter concludes with a detailed discussion of the state space of a single-qubit system. 2.1 The Quantum Mechanics of Photon Polarization
A simple experiment illustrates some of the nonintuitive behavior of quantum systems, behavior that is exploited to good effect in quantum algorithms and protocols. This experiment can be performed
by the reader using only minimal equipment: a laser pointer and three polaroids (polarization filters), readily available from any camera supply store. The formalisms of quantum mechanics that
describe this simple experiment lead directly to a description of the quantum bit, the fundamental unit of quantum information on which quantum information processing is done. The experiment not only
gives a concrete realization of a quantum bit, but it also illustrates key properties of quantum measurement. We encourage you to obtain the equipment and perform the experiment yourself.
2 Single-Qubit Quantum Systems
2.1.1 A Simple Experiment
Shine a beam of light on a projection screen. When polaroid A is placed between the light source and the screen, the intensity of the light reaching the screen is reduced. Let us suppose that the
polarization of polaroid A is horizontal (figure 2.1). Next, place polaroid C between polaroid A and the projection screen. If polaroid C is rotated so that its polarization is orthogonal (vertical)
to the polarization of A, no light reaches the screen (figure 2.2).
A Figure 2.1 Single polaroid attenuates unpolarized light by 50 percent.
C A Figure 2.2 Two orthogonal polaroids block all photons.
2.1 The Quantum Mechanics of Photon Polarization
C B A Figure 2.3 Inserting a third polaroid allows photons to pass.
Finally, place polaroid B between polaroids A and C. One might expect that adding another polaroid will not make any difference; if no light got through two polaroids, then surely no light will pass
through three! Surprisingly, at most polarization angles of B, light shines on the screen. The intensity of this light will be maximal if the polarization of B is at 45 degrees to both A and C
(figure 2.3). Clearly the polaroids cannot be acting as simple sieves; otherwise, inserting polaroid B could not increase the number of photons reaching the screen. 2.1.2 A Quantum Explanation
For a bright beam of light, there is a classical explanation of the experiment in terms of waves. Versions of the experiment described here, using light so dim that only one photon at a time
interacts with the polaroid, have been done with more sophisticated equipment. The results of these single photon experiments can be explained only using quantum mechanics; the classical wave
explanation no longer works. Furthermore, it is not just light that behaves in this peculiar way. The quantum mechanical explanation of the experiment consists of two parts: a model of a photon’s
polarization state and a model of the interaction between a polaroid and a photon. The description of this experiment, and the definition of a qubit, use basic notions of linear algebra such as
vector, basis, orthonormal, and linear combination. Linear algebra is used throughout the book; we briefly remind readers of the meanings of these concepts in section 2.2. Section 2.6 suggests some
books on linear algebra. Quantum mechanics models a photon’s polarization state by a unit vector, a vector of length 1, pointing in the appropriate direction. We write |↑ and |→ for the unit vectors
that represent vertical and horizontal polarization respectively. Think of |v as a vector with some arbitrary label v. In quantum mechanics, the standard notation for a vector representing a quantum
2 Single-Qubit Quantum Systems
v =a
Figure 2.4 Measurement of state |v = a|↑ + b|→ by a measuring device with preferred basis {|↑, |→}.
is |v, just as v or v are notations used for vectors in other settings. This notation is part of a more general notation, Dirac’s notation, that will be explained in more detail in sections 2.2 and
4.1. An arbitrary polarization can be expressed as a linear combination |v = a|↑ + b|→ of the two basis vectors |↑ and |→. For example, | = √12 |↑ + √12 |→ is a unit vector representing polarization
of 45 degrees. The coefficients a and b in |v = a|↑ + b|→ are called the amplitudes of |v in the directions |↑ and |→ respectively (see figure 2.4). When a and b are both non-zero, |v = a|↑ + b|→ is
said to be a superposition of |↑ and |→. Quantum mechanics models the interaction between a photon and a polaroid as follows. The polaroid has a preferred axis, its polarization. When a photon with
polarization |v = a|↑ + b|→ meets a polaroid with preferred axis |↑, the photon will get through with probability |a|2 and will be absorbed with probability |b|2 ; the probability that a photon
passes through the polaroid is the square of the magnitude of the amplitude of its polarization in the direction of the polaroid’s preferred axis. The probability that the photon is absorbed by the
polaroid is the square of the magnitude of the amplitude in the direction perpendicular to the polaroid’s preferred axis. Furthermore, any photon that passes through the polaroid will now be
polarized in the direction of the polaroid’s preferred axis. The probabilistic nature of the interaction and the resulting change of state are features of all interactions between qubits and
measuring devices, no matter what their physical realization. In the experiment, any photons that pass through polaroid A will leave polarized in the direction of polaroid A’s preferred axis, in this
case horizontal, |→. A horizontally polarized photon has no amplitude in the vertical direction, so it has no chance of passing through polaroid C, which was given a vertical orientation. For this
reason, no light reaches the screen. Had polaroid C been in any other orientation, a horizontally polarized photon would have some amplitude in the direction of polaroid C’s preferred axis, and some
photons would reach the screen. To understand what happens once polaroid B, with preferred axis |, is inserted, it is helpful to write the horizontally polarized photon’s polarization state |→ as 1 1
|→ = √ | − √ |. 2 2
2.2 Single Quantum Bits
Any photon that passes through polaroid A becomes horizontally polarized, so the amplitude of any such photon’s state |→ in the direction | is √12 . Applying the quantum theory we just learned tells
us that a horizontally polarized photon will pass through polaroid B with probability 2 1 = | √12 | . Any photons that have passed through polaroid B now have polarization |. When 2 these photons hit
polaroid C, they do have amplitude in the vertical direction, so some of them (half ) will pass thorough polaroid C and hit the screen (see figure 2.3). In this way, quantum mechanics explains how
more light can reach the screen when the third polaroid is added, and it provides a means to compute how much light will reach the screen. In summary, the polarization state of a photon is modeled as
a unit vector. Its interaction with a polaroid is probabilistic and depends on the amplitude of the photon’s polarization in the direction of the polaroid’s preferred axis. Either the photon will be
absorbed or the photon will leave the polaroid with its polarization aligned with the polaroid’s preferred axis. 2.2 Single Quantum Bits
The space of possible polarization states of a photon is an example of a quantum bit, or qubit. A qubit has a continuum of possible values: any state represented by a unit vector a|↑ + b|→ is a
legitimate qubit value. The amplitudes a and b can be complex numbers, even though complex amplitudes were not needed for the explanation of the experiment. (In the photon polarization case, the
imaginary coefficients correspond to circular polarization.) In general, the set of all possible states of a physical system is called the state space of the system. Any quantum mechanical system
that can be modeled by a two-dimensional complex vector space can be viewed as a qubit. (There is redundancy in this representation in that any vector multiplied by a modulus one [unit length]
complex number represents the same quantum state. We discuss this redundancy carefully in sections 2.5 and 3.1.) Such systems, called twostate quantum systems, include photon polarization, electron
spin, and the ground state together with an excited state of an atom. The two-state label for these systems does not mean that the state space has only two states—it has infinitely many—but rather
that all possible states can be represented as a linear combination, or superposition, of just two states. For a two-dimensional complex vector space to be viewed as a qubit, two linearly independent
states, labeled |0 and |1, must be distinguished. For the theory of quantum information processing, all two-state systems, whether they be electron spin or energy levels of an atom, are equally good.
From a practical point of view, it is as yet unclear which two-state systems will be most suitable for physical realizations of quantum information processing devices such as quantum computers; it is
likely that a variety of physical representation of qubits will be used. Dirac’s bra / ket notation is used throughout quantum physics to represent quantum states and their transformations. In this
section we introduce the part of Dirac’s notation that is used for quantum states. Section 4.1 introduces Dirac’s notation for quantum transformations. Familiarity and fluency with this notation will
help greatly in understanding all subsequent material; we strongly encourage readers to work the exercises at the end of this chapter.
2 Single-Qubit Quantum Systems
In Dirac’s notation, a ket such as |x, where x is an arbitrary label, refers to a vector representing a state of a quantum system. A vector |v is a linear combination of vectors |s1 , |s2 , . . . , |
sn if there exist complex numbers ai such that |v = a1 |s1 + a2 |s2 + · · · + an |sn . A set of vectors S generates a complex vector space V if every element |v of V can be written as a complex
linear combination of vectors in the set: every |v ∈ V can be written as |v = a1 |s1 + a2 |s2 + · · · + an |sn for some elements |si ∈ S and complex numbers ai . Given a set of vectors S, the
subspace of all linear combinations of vectors in S is called the span of S and is denoted span(S). A set of vectors B for which every element of V can be written uniquely as a linear combination of
vectors in B is called a basis for V . In a two-dimensional vector space, any two vectors that are not multiples of each other form a basis. In quantum mechanics, bases are usually required to be
orthonormal, a property we explain shortly. The two distinguished states, |0 and |1, are also required to be orthonormal. An inner product v2 |v1 , or dot product, on a complex vector space V is a
complex function defined on pairs of vectors |v1 and |v2 in V , satisfying •
v|v is non-negative real,
v2 |v1 = v1 |v2 , and
(a v2 | + b v3 |)|v1 = a v2 |v1 + b v3 |v1 ,
where z is the complex conjugate z = a − ib of z = a + ib. Two vectors |v1 and |v2 are said to be orthogonal if v1 |v2 = 0. A set of vectors is orthogonal if all of its members are orthogonal to each
other. The length, or norm, of a vector |v is ||v| = √
v|v. Since all vectors |x representing quantum states are of unit length, x|x = 1 for any state vector |x. A set of vectors is said to be orthonormal if all of its elements are of length one and
orthogonal to each other: a set of vectors B = {|β1 , |β2 , . . . , |βn } is orthonormal if
βi |βj = δij for all i, j , where 1 if i = j δij = 0 otherwise. In quantum mechanics we are mainly concerned with bases that are orthonormal, so whenever we say basis we mean orthonormal basis unless
we say otherwise. For the state space of a two-state system to represent a quantum bit, two orthonormal distinguished states, labeled |0 and |1, must be specified. Apart from the requirement that |0
and |1 be orthonormal, the states may be chosen arbitrarily. For instance, in the case of photon polarization, we may choose |0 and |1 to correspond to the states |↑ and |→, or to | and |. We follow
the convention that |0 = |↑ and |1 = |→, which implies that | = √12 (|0 + |1) and
| = √12 (|0 − |1). In the case of electron spin, |0 and |1 could correspond to the spin-up and spin-down states, or spin-left and spin-right. When talking about qubits, and quantum information
processing in general, a standard basis {|0, |1} with respect to which all statements are made must be chosen in advance and remain fixed throughout the discussion. In quantum information
2.2 Single Quantum Bits
processing, classical bit values of 0 and 1 will be encoded in the distinguished states |0 and |1. This encoding enables a direct comparison between bits and qubits: bits can take on only two values,
0 and 1, while qubits can take on not only the values |0 and |1 but also any superposition of these values, a|0 + b|1, where a and b are complex numbers such that |a|2 + |b|2 = 1. Vectors and linear
transformations can be written using matrix notation once a basis has been a ; specified. That is, if basis {|β1 , |β2 } is specified, a ket |v = a|β1 + b|β2 can be written b a ket |v corresponds to
a column vector v, where v is simply a label, a name for this vector. The conjugate transpose v † of a vector ⎞ ⎛ a1 ⎟ ⎜ v = ⎝ ... ⎠ is v † = ( a1 , . . . , an ) . an In Dirac’s notation, the
conjugate transpose of a ket |v is called a bra and is written v|, so ⎛ ⎞ a1 ⎜ ⎟ |v = ⎝ ... ⎠ and v| = ( a1 , . . . , an ) . an A bra v| corresponds to a row vector v † . Given two complex vectors ⎛
⎞ ⎛ ⎞ a1 b1 ⎜ ⎟ ⎟ ⎜ |a = ⎝ ... ⎠ and |b = ⎝ ... ⎠ , an
the standard inner product a|b is defined to be the scalar obtained by multiplying the conjugate transpose a| = ( a1 , . . . , an ) with |b: ⎞ ⎛ b1 n ⎟ ⎜
a|b = a||b = ( a1 , . . . , an ) ⎝ ... ⎠ = ai bi . i=1 bn When a = |a and b = |b are real vectors, this inner product is the same as the standard dot Dirac’s product on the n dimensional real vector
space Rn : a|b = a1 b1 + · · · + an bn = a · b. choice of bra and ket arose as a play on words: an inner product a|b of a bra a| and a ket |b is sometimes called a bracket. The following relations
hold, where v = a|0 + b|1: 0|0 = 1,
1|1 = 1, 1|0 = 0|1 = 0, 0|v = a, and 1|v = b. In the standard basis, with ordering {|0, |1}, the basis elements |0 and |1 can be expressed 1 0 a as and , and a complex linear combination |v = a|0 + b
|1 can be written . 0 1 b
2 Single-Qubit Quantum Systems
This choice of basis and order of the basis vectors are mere convention. Representing |0 as 1 1 1 0 and |1 as √12 would be equally and |1 as or representing |0 as √12 −1 1 0 1 good as long as it is
done consistently. Unless otherwise specified, all vectors and matrices in this book will be written with respect to the standard basis {|0, |1} in this order. A quantum state |v is a superposition
of basis elements {|β1 , |β2 } if it is a nontrivial linear combination of |β1 and |β2 , if |v = a1 |β1 + a2 |β2 where a1 and a2 are non-zero. For the term superposition to be meaningful, a basis
must be specified. In this book, if we say “superpostion” without explicitly specifying the basis, we implicitly mean with respect to the standard basis. Initially the vector/matrix notation will be
easier for many readers to use because it is familiar. Sometimes matrix notation is convenient for performing calculations, but it always requires the choice of a basis and an ordering of that basis.
The bra / ket notation has the advantage of being independent of basis and the order of the basis elements. It is also more compact and suggests correct relationships, as we saw for the inner
product, so once it becomes familiar, it is easier to read and faster to use. Instead of qubits, physical systems with states modeled by three- or n-dimensional vector spaces could be used as
fundamental units of computation. Three-valued units are called qutrits, and n-valued units are called qudits. Since qudits can be modeled using multiple qubits, a model of quantum information based
on qudits has the same computational power as one based on qubits. For this reason we do not consider qudits further, just as in the classical case most people use a bit-based model of information.
We now have a mathematical model with which to describe quantum bits. In addition, we need a mathematical model for measuring devices and their interaction with quantum bits. 2.3 Single-Qubit
The interaction of a polaroid with a photon illustrates key properties of any interaction between a measuring device and a quantum system. The mathematical description of the experiment can be used
to model all measurements of single qubits, whatever their physical instantiation. The measurement of more complicated systems retains many of the features of single-qubit measurement: the
probabilistic outcomes and the effect measurement has on the state of the system. This section considers only measurements of single-qubit systems. Chapter 4 discusses measurements of more general
quantum systems. Quantum theory postulates that any device that measures a two-state quantum system must have two preferred states whose representative vectors, {|u, |u⊥ }, form an orthonormal basis
for the associated vector space. Measurement of a state transforms the state into one of the measuring device’s associated basis vectors |u or |u⊥ . The probability that the state is measured as
basis vector |u is the square of the magnitude of the amplitude of the component of the state in the direction of the basis vector |u. For example, given a device for measuring the polarization of
2.3 Single-Qubit Measurement
photons with associated basis {|u, |u⊥ }, the state |v = a|u + b|u⊥ is measured as |u with probability |a|2 and as |u⊥ with probability |b|2 . This behavior of measurement is an axiom of quantum
mechanics. It is not derivable from other physical principles; rather, it is derived from the empirical observation of experiments with measuring devices. If quantum mechanics is correct, all devices
that measure single qubits must behave in this way; all have associated bases, and the measurement outcome is always one of the two basis vectors. For this reason, whenever anyone says “measure a
qubit," they must specify with respect to which basis the measurement takes place. Throughout the book, if we say “measure a qubit" without further elaboration, we mean that the measurement is with
respect to the standard basis {|0, |1}. Measurement of a quantum state changes the state. If a state |v = a|u + b|u⊥ is measured as |u, then the state |v changes to |u. A second measurement with
respect to the same basis will return |u with probability 1. Thus, unless the original state happens to be one of the basis states, a single measurement will change that state, making it impossible
to determine the original state from any sequence of measurements. While the mathematics of measuring a qubit in the superposition state a|0 + b|1 with respect to the standard basis is clear,
measurement brings up questions as to the meaning of a superposition. To begin with, the notion of superposition is basis-dependent; all states are superpositions with respect to some bases and not
with respect to others. For instance, a|0 + b|1 is a superposition with respect to the basis {|0, |1} but not with respect to {a|0 + b|1, b|0 − a|1}. Also, because the result of measuring a
superposition is probabilistic, some people are tempted to think of the state |v = a|0 + b|1 as a probabilistic mixture of |0 and |1. It is not. In particular, it is not true that the state is really
either |0 or |1 and that we just do not happen to know which. Rather, |v is a definite state, which, when measured in certain bases, gives deterministic results, while in others it gives random
results: a photon with polarization | = √12 (|↑ + |→) behaves deterministically when measured with respect to the Hadamard basis {|, |}, but it gives random results when measured with respect to the
standard basis {|↑, |→}. It is okay to think of a superposition |v = a|0 + b|1 as in some sense being in both state |0 and state |1 at the same time, as long as that statement is not taken too
literally: states that are combinations of |0 and |1 in similar proportions but with different amplitudes, such as √12 (|0 + |1), √12 (|0 − |1) and √12 (|0 + i|1), represent distinct states that
behave differently in many situations. Given that qubits can take on any one of infinitely many states, one might hope that a single qubit could store lots of classical information. However, the
properties of quantum measurement severely restrict the amount of information that can be extracted from a qubit. Information about a quantum bit can be obtained only by measurement, and any
measurement results in one of only two states, the two basis states associated with the measuring device; thus, a single measurement yields at most a single classical bit of information. Because
measurement changes the state, one cannot make two measurements on the original state of a qubit. Furthermore, section 5.1.1 shows that an unknown quantum state cannot be cloned, which means it is
not possible to measure a qubit’s state in two ways, even indirectly by copying the qubit’s state and measuring the copy.
2 Single-Qubit Quantum Systems
Thus, even though a quantum bit can be in infinitely many different superposition states, it is possible to extract only a single classical bit’s worth of information from a single quantum bit. 2.4 A
Quantum Key Distribution Protocol
The quantum theory introduced so far is sufficient to describe a first application of quantum information processing: a key distribution protocol that relies on quantum effects for its security and
for which there is no classical analog. Keys—binary strings or numbers chosen randomly from a sufficiently large set—provide the security for most cryptographic protocols, from encryption to
authentication to secret sharing. For this reason, the establishment of keys between the parties who wish to communicate is of fundamental importance in cryptography. Two general classes of keys
exist: symmetric keys and public-private key pairs. Both types are used widely, often in conjunction, in a wide variety of practical settings, from secure e-commerce transactions to private
communication over public networks. Public-private key pairs consist of a public key, knowable by all, and a corresponding private key whose secrecy must be carefully guarded by the owner. Symmetric
keys consist of a single key (or a pair of keys easily computable from one another) that are known to all of the legitimate parties and no one else. In the symmetric key case, multiple parties are
responsible for guarding the security of the key. Quantum key distribution protocols establish a symmetric key between two parties, who are generally known in the cryptographic community as Alice and
Bob. Quantum key distribution protocols can be used securely anywhere classical key agreement protocols such as Diffie-Hellman can be used. They perform the same task; however, the security of
quantum key distribution rests on fundamental properties of quantum mechanics, whereas classical key agreement protocols rely on the computational intractability of a certain problem. For example,
while Diffie-Hellman remains secure against all known classical attacks, the problem on which it is based, the discrete logarithm problem, is tractable on a quantum computer. Section 8.6.1 discusses
Shor’s quantum algorithm for the discrete log problem. The earliest quantum key distribution protocol is known as BB84 after its inventors, Charles Bennett and Gilles Brassard, and the year of the
invention. The aim of the BB84 protocol is to establish a secret key, a random sequence of bit values 0 and 1, known only to the two parties, Alice and Bob, who may use this key to support a
cryptographic task such as exchanging secret messages or detecting tampering. The BB84 protocol enables Alice and Bob to be sure that if they detect no problems while attempting to establish a key,
then with high probability it is secret. The protocol does not guarantee, however, that they will succeed in establishing a private key. SupposeAlice and Bob are connected by two public channels: an
ordinary bidirectional classical channel and a unidirectional quantum channel. The quantum channel allows Alice to send a sequence of single qubits to Bob; in our case we suppose the qubits are
encoded in the polarization states of individual photons. Both channels can be observed by an eavesdropper Eve. This situation
2.4 A Quantum Key Distribution Protocol
classical channel Alice
quantum channel
Figure 2.5 Alice and Bob wish to agree on a common key not known to Eve.
is illustrated in figure 2.5. To begin the process of establishing a private key, Alice uses quantum or classical means to generate a random sequence of classical bit values. As we will see, a random
subset of this sequence will be the final private key. Alice then randomly encodes each bit of this sequence in the polarization state of a photon by randomly choosing for each bit one of the
following two agreed-upon bases in which to encode it: the standard basis, 0 → |↑ 1 → |→, or the Hadamard basis, 0 → | = 1 → | =
√1 (|↑ + |→) 2 √1 (|↑ − |→). 2
She sends this sequence of photons to Bob through the quantum channel. Bob measures the state of each photon he receives by randomly picking either basis. Over the classical channel, Alice and Bob
check that Bob has received a photon for every one Alice has sent, and only then do Alice and Bob tell each other the bases they used for encoding and decoding (measuring) each bit. When the choice
of bases agree, Bob’s measured bit value agrees with the bit value that Alice sent. When they chose different bases, the chance that Bob’s bit matches Alice’s is only 50 percent. Without revealing
the bit values themselves, which would also reveal the values to Eve, there is no way for Alice and Bob to figure out which of these bit values agree and which do not. So they simply discard all the
bits on which their choice of bases differed. An average of 50 percent of all bits transmitted remain. Then, depending on the level of
2 Single-Qubit Quantum Systems
assurance they require, Alice and Bob compare a certain number of bit values to check that no eavesdropping has occurred. These bits will also be discarded, and only the remaining bits will be used
as their private key. We describe one sort of attack that Eve can make and how quantum aspects of this protocol guard against it. On the classical channel, Alice and Bob discuss only the choice of
bases and not the bit values themselves, so Eve cannot gain any information about the key from listening to the classical channel alone. To gain information, Eve must intercept the photons
transmitted by Alice through the quantum channel. Eve must send photons to Bob before knowing the choice of bases made by Alice and Bob, because they compare bases only after Bob has confirmed
receipt of the photons. If she sends different photons to Bob, Alice and Bob will detect that something is wrong when they compare bit values, but if she sends the original photons to Bob without
doing anything, she gains no information. To gain information, Eve makes a measurement before sending the photons to Bob. Instead of using a polaroid to measure, she can use a calcite crystal and a
photon detector; a beam of light passing through a calcite crystal is split into two spatially separated beams, one polarized in the direction of the crystal’s optic axis and the other polarized in
the direction perpendicular to the optic axis. A photon detector placed in one of the beams performs a quantum measurement: the probability with which a photon ends up in one of the beams can be
calculated just as described in section 2.3. Since Alice has not yet told Bob her sequence of bases, Eve does not know in which basis to measure each bit. If she randomly measures the bits, she will
measure using the wrong basis approximately half of the time. (Exercise 2.10 examines the case in which Eve does not even know which two bases to choose from.) When she uses the wrong basis to
measure, the measurement changes the polarization of the photon before it is resent to Bob. This change in the polarization means that, even if Bob measures the photon in the same basis as Alice used
to encode the bit, he will get the correct bit value only half the time. Overall, for each of the qubits Alice and Bob retain, if the qubit was measured by Eve before she sent it to Bob, there will
be a 25 percent chance that Bob measures a different bit value than the one Alice sent. Thus, this attack on the quantum channel is bound to introduce a high error rate that Alice and Bob detect by
comparing a sufficient number of bits over the classical channel. If these bits agree, they can confidently use the remaining bits as their private key. So, not only is it likely that 25 percent of
Eve’s version of the key is incorrect, but the fact that someone is eavesdropping can be detected by Alice and Bob. Thus Alice and Bob run little risk of establishing a compromised key; either they
succeed in creating a private key or they detect that eavesdropping has taken place. Eve does not know in which basis to measure the qubits, a property crucial to the security of this protocol,
because Alice and Bob share information about which bases they used only after Bob has received the photons; if Eve knew in which basis to measure the photons, her measurements would not change the
state, and she could obtain the bit values without Bob and Alice noticing anything suspicious. A seemingly easy way for Eve to overcome this obstacle is for her to copy the qubit, keeping a copy for
herself while sending the original on to Bob. Then she can measure her copy
2.5 The State Space of a Single-Qubit System
later after learning the correct basis from listening in on the classical channel. Such a protocol is defeated by an important property of quantum information. As we will show in section 5.1.1, the
no-cloning principle of quantum mechanics means that it is impossible to reliably copy quantum information unless a basis in which it is encoded is known; all quantum copying machines are basis
dependent. Copying with the wrong machine not only does not produce an accurate copy, but it also changes the original in much the same way measuring in the wrong basis does. So Bob and Alice would
detect attempts to copy with high probability. The security of this protocol, like other pure key distribution protocols such as Diffie-Hellman, is vulnerable to a man-in-the-middle attack in which
Eve impersonates Bob to Alice and impersonates Alice to Bob. To guard against such an attack, Alice and Bob need to combine it with an authentication protocol, be it recognizing each other’s voices
or a more mathematical authentication protocol. More sophisticated versions of this protocol exist that support quantum key distribution through noisy channels and stronger guarantees about the
amount of information Eve can gain. In the noisy case, Eve is able to gain some information initially, but techniques of quantum error correction and privacy amplification can reduce the amount of
information Eve gains to arbitrarily low levels as well as compensate for the noise in the channels. 2.5 The State Space of a Single-Qubit System
The state space of a classical or quantum physical system is the set of all possible states of the system. Depending on which properties of the system are under consideration, a state of the system
consists of any combination of the positions, momenta, polarizations, spins, energy, and so on of the particles in the system. When we are considering only polarization states of a single photon, the
state space is all possible polarizations. More generally, the state space for a single qubit, no matter how it is realized, is the set of possible qubit values, {a|0 + b|1}, where |a | + |b| = 1 and
a|0 + b|1 and a |0 + b |1 are considered the same qubit value if a|0 + b|1 = c(a |0 + b |1) for some modulus one complex number c. 2
2.5.1 Relative Phases versus Global Phases
That the same quantum state is represented by more than one vector means that there is a critical distinction between the complex vector space in which we write our qubit values and the quantum state
space itself. We have reduced the ambiguity by requiring that vectors representing quantum states be unit vectors, but some ambiguity remains: unit vectors equivalent up to multiplication by a
complex number of modulus one represent the same state. The multiple by which two vectors representing the same quantum state differ is called the global phase and has no physical meaning. We use the
equivalence relation |v ∼ |v to indicate that |v = c|v for some complex global phase c = eiφ . The space in which two two-dimensional complex vectors are considered equivalent if they are multiples
of each other is called complex projective space of dimension one.
2 Single-Qubit Quantum Systems
This quotient space, a space obtained by identifying sets of equivalent vectors with a single point in the space, is expressed with the compact notation used for quotient spaces: CP1 = {a|0 + b|1}/ ∼
. So the quantum state space for a single-qubit system is in one-to-one correspondence with the points of the complex projective space CP1 . We will make no further use of CP1 in this book, but it is
used in the quantum information processing literature. Because the linearity of vector spaces makes them easier to work with than projective spaces (we know how to add vectors and there is no
corresponding way of adding points in projective spaces), we generally perform all calculations in the vector space corresponding to the quantum state space. The multiplicity of representations of a
single quantum state in this vector space representation, however, is a common source of confusion for newcomers to the field. A physically important quantity is the relative phase of a single-qubit
state a|0 + b|1. The relative phase (in the standard basis) of a superposition a|0 + b|1 is a measure of the angle in the complex plane between the two complex numbers a and b. More precisely, the
relative phase is the modulus one complex number eiφ satisfying a/b = eiφ |a |/|b|. Two superpositions a|0 + b|1 and a |0 + b |1 whose amplitudes have the same magnitudes but that differ in a
relative phase represent different states. The physically meaningful relative phase and the physically meaningless global phase should not be confused. While multiplication with a unit constant does
not change a quantum state vector, relative phases in a superposition do represent distinct quantum states: even though |v1 ∼ eiφ |v1 , the vectors √12 (eiφ |v1 + |v2 ) and √12 (|v1 + |v2 ) do not
represent the same state. We must always be cognizant of the ∼ equivalence when we interpret the results of our computations as quantum states. A few single-qubit states will be referred to often
enough that we give them special labels: √ (2.1) |+ = 1/ 2(|0 + |1) √ |− = 1/ 2(|0 − |1) (2.2) √ (2.3) |i = 1/ 2(|0 + i|1) √ (2.4) |−i = 1/ 2(|0 − i|1). The basis {|+, |−} is referred to as the
Hadamard basis. We sometimes use the notation {|, |} for the Hadamard basis when discussing photon polarization. Some authors omit normalization factors, allowing vectors of any length to represent a
state where two vectors represent the same state if they differ by any complex factor. We will explicitly write the normalizations factors, both because then the amplitudes have a more direct
relation to the measurement probabilities and because keeping track of the normalization factor provides a check that helps avoid errors.
2.5 The State Space of a Single-Qubit System
2.5.2 Geometric Views of the State Space of a Single Qubit
While we primarily use vectors to represent quantum states, it is helpful to have models of the single-qubit state space in which there is a one-to-one correspondence between states and points in the
space. We give two related but different geometric models with this property. The second of these, the Bloch sphere model, will be used in section 5.4.1 to illustrate single-qubit quantum
transformations, and in chapter 10 it will be generalized to aid in the discussion of single-qubit subsystems. These models are just different ways of looking at complex projective space of dimension
1. As we will see, complex projective space of dimension 1 can be viewed as a sphere. First we show that it can be viewed as the extended complex plane, the complex plane C together with an
additional point traditionally labeled ∞. Extended Complex Plane C ∪ {∞} A correspondence between the set of all complex numbers and
single-qubit states is given by a|0 + b|1 → b/a = α and its inverse α →
1 1 + |α |
|0 +
α 1 + |α |
The preceding mapping is not defined for the state with a = 0 and b = 1. To make this correspondence one-to-one we need to add a single point, which we label ∞, to the complex plane and define ∞ ↔ |
1. For example, we have |0 → 0 |1 → ∞ |+ → + 1 |− → − 1 |i
→ i
|−i → − i. We now describe another useful model, related to but different from the previous one. Starting with the previous representation, we can map each state, represented by the complex number α
= s + it, onto the unit sphere in three real dimensions, the points (x, y, z) ∈ C satisfying |x|2 + |y|2 + |z|2 = 1, via the standard stereographic projection 2 2t 1 − |α | 2s , , , (s, t) → |α |2 +
1 |α |2 + 1 |α |2 + 1
Bloch Sphere
2 Single-Qubit Quantum Systems
– –i
0 Figure 2.6 Location of certain single-qubit states on the surface of the Bloch sphere.
further requiring that ∞ → (0, 0, −1). Figure 2.6 illustrates the following correspondences: |0 → (0, 0, 1) |1 → (0, 0, −1) |+ → (1, 0, 0) |− → (−1, 0, 0) |i
→ (0, 1, 0)
|−i → (0, −1, 0). We have given three representations of the quantum state space for a single-qubit system. 1. Vectors written in ket notation: a|0 + b|1 with complex coefficients a and b, subject to
|a |2 + |b|2 = 1, where a and b are unique up to a unit complex factor. Because of this factor, the global phase, this representation is not one-to-one. 2. Extended complex plane: a single complex
number α ∈ C or ∞. This representation is oneto-one. 3. Bloch sphere: points (x, y, z) on the unit sphere. This representation is also one-to-one. As we will see in section 10.1, the points in the
interior of the sphere also have meaning for quantum information processing. For historical reasons, the entire ball, including the interior, is called the Bloch sphere, instead of just the states on
the surface, which truly form a sphere. For
2.6 References
this reason, we refer to the state space of a single-qubit system as the surface of the Bloch sphere (figure 2.6). One of the advantages of the Bloch sphere representation is that it is easy to read
off all possible bases from the model; orthogonal states correspond to antipodal points of the Bloch sphere. In particular, every diameter of the Bloch sphere corresponds to a basis for the
single-qubit state space. The illustration we gave in figure 2.4 differs from the Bloch sphere representation of singlequbit quantum states in that the angles are half that of those in the Bloch
sphere representation: in particular, the angle between two states in figure 2.4 has the usual relation to the inner product, whereas in the Bloch sphere representation the angle is twice that of the
angle in the inner product formula. 2.5.3 Comments on General Quantum State Spaces
The states of all quantum systems satisfy certain properties that are encapsulated by a linear differential equation called the Schrödinger wave equation. For this reason, solutions to the
Schrödinger equation are called wave functions, so all quantum states have representations as wave functions. For the theory of quantum information processing, we do not need to concern ourselves
with properties specific to any of the various possible physical realizations of quantum bits, so we do not need to look at the details of specific wave function solutions; we can simply view wave
functions as abstract vectors which we will denote by kets such as |→ or |0. Since the Schrödinger equation is linear, the addition of two solutions to the Schrödinger equation or a constant multiple
of a solution of the Schrödinger equation are also solutions to the Schrödinger equation. Thus, the set of solutions to the Schrödinger equation for any quantum system is a complex vector space.
Furthermore, the set of solutions has a natural inner product. For the theoretical aspects of quantum information processing, considering only finite dimensional vector spaces usually suffices. We
simply mention that, in the infinite dimensional case, the space of solutions satisfies the conditions needed to form a Hilbert space. Hilbert spaces are frequently mentioned in the literature, since
they are the most general case, but in most papers on quantum information processing, the Hilbert spaces discussed are finite-dimensional, in which case they are nothing more or less than
finite-dimensional complex vector spaces. We discuss the state spaces of multiple-qubit systems in chapter 3. Just as in the single-qubit case, there is redundancy in this model. In fact, there is
greater redundancy in the vector space representation of larger quantum systems, which leads to a significantly more complicated geometry. 2.6 References
The early essays of Feynman and Manin can be found in [119, 120, 121] and [202, 203] respectively. The bra / ket notation was first introduced by Dirac in 1958 [103]. It is found in most quantum
mechanics textbooks and is used in virtually all papers on quantum computing.
2 Single-Qubit Quantum Systems
More information about linear algebra, in particular proofs of facts stated here, can be found in any linear algebra text, including Strang’s Linear Algebra and Its Applications [265] and Hoffman and
Kunze’s Linear Algebra [152], or in a book on mathematics for physicists such as Bamberg and Sternberg’s A Course in Mathematics for Students of Physics [30]. The BB84 quantum key distribution
protocol was developed by Charles Bennett and Gilles Brassard [42, 43, 45] building on work of Stephen Wiesner [284]. A related protocol was shown to be unconditionally secure by Lo and Chau [198].
Their proof was later simplified by Shor and Preskill [255] and extended to BB84. Another proof was given by Mayers [206]. The BB84 protocol was first demonstrated experimentally by Bennett et al. in
1992 over 30 cm of free space [37]. Since then, several groups have demonstrated this protocol and other quantum key distribution protocols over 100 km of fiber optic cable. Bienfang et al. [51]
demonstrated quantum key distribution over 23 km of free space at night, and Hughes et al. have achieved distances of 10 km through free space in daylight [156]. See the ARDA roadmap [157], the QIPC
strategic report [295], and Gisin et al. [130] for detailed overviews of implementation efforts and the challenges involved. The companies id Quantique, MagiQ, and SmartQuantum currently sell quantum
cryptographic systems implementing the BB84 protocol. Other quantum key distribution protocols exist. Exercise 2.11 develops the B92 protocol, and section 3.4 describes Ekert’s entanglement-based
quantum key distribution protocol. While we explain all quantum mechanics needed for the topics covered in this book, the reader may be interested in books on quantum mechanics. Countless books on
quantum mechanics are available. Greenstein and Zajonc [140] give a readable high-level exposition of quantum mechanics, including descriptions of many experiments. The third volume of the Feynman
Lectures on Physics [122] is accessible to a large audience. A classical explanation of the polarization experiment is given in the first volume. Shankar’s textbook [247] defines much more of the
notation and mathematics required for performing calculations than do the previously mentioned books, and it is quite readable as well. Other textbooks, such as Liboff [194], may be more appropriate
for readers with a physics background. 2.7 Exercises Exercise 2.1. Let the direction |v of polaroid B’s preferred axis be given as a function of θ ,
|v = cos θ |→ + sin θ |↑, and suppose that the polaroids A and C remain horizontally and vertically polarized as in the experiment of Section 2.1.1. What fraction of photons reach the screen? Assume
that each photon generated by the laser pointer has random polarization. Exercise 2.2. Which pairs of expressions for quantum states represent the same state? For those pairs that represent different
states, describe a measurement for which the probabilities of the two outcomes differ for the two states and give these probabilities. a. |0 and −|0 b. |1 and i|1
2.7 Exercises
√1 2
(|0 + |1) and
√1 2
(−|0 + i|1)
√1 2
(|0 + |1) and
√1 2
(|0 − |1)
√1 2
(|0 − |1) and
√1 2
(|1 − |0)
√1 2
(|0 + i|1) and
√1 2
(|+ + |−) and |0
√1 2
(|i − |−i) and |1
√1 2
(|i + |−i) and
√1 2
√1 2
(i|1 − |0)
√1 2
(|− + |+)
|0 + eiπ/4 |1 and
√1 2
e−iπ/4 |0 + |1
Exercise 2.3. Which states are superpositions with respect to the standard basis, and which are not? For each state that is a superposition, give a basis with respect to which it is not a
superposition. a. |+ b.
√1 (|+ + |−) 2
√1 (|+ − |−) 2 √
3 |+ − 12 |−) 2
√1 (|i − |−i) 2
√1 (|0 − |1) 2
Exercise 2.4. Which of the states in 2.3 are superpositions with respect to the Hadamard basis, and which are not? Exercise 2.5. Give the set of all values of θ for which the following pairs of
states are equivalent. a. |1 and
√1 2
|+ + eiθ |−
|i + eiθ |−i and √12 |−i + e−iθ |i √ √ c. 12 |0 − 23 |1 and eiθ 12 |0 − 23 |1 b.
√1 2
Exercise 2.6. For each pair consisting of a state and a measurement basis, describe the possible
measurement outcomes and give the probability for each outcome. √
3 |0 − 12 |1, 2
{|0, |1}
2 Single-Qubit Quantum Systems √
3 |1 − 12 |0, 2
{|0, |1}
c. |−i, {|0, |1} d. |0, {|+, |−} e.
√1 2
(|0 − |1), {|i, |−i}
f. |1, {|i, |−i} g. |+, { 12 |0 +
√ 3 |1, 23 |0 − 12 |1} 2
Exercise 2.7. For each of the following states, describe all orthonormal bases that include that
state. (|0 + i|1)
√1 2
1+i |0 − 1−i |1 2 2
√1 2
√ 1 i 3 |+ − |− 2 2
|0 + eiπ/6 |1
Exercise 2.8. Alice is confused. She understands that |1 and −|1 represent the same state. But
she does not understand why that does not imply that the same state. Can you help her out?
√1 (|0 + |1) 2
√1 (|0 − |1) 2
would be
Exercise 2.9. In the BB84 protocol, how many bits do Alice and Bob need to compare to have a 90 percent chance of detecting Eve’s presence? Exercise 2.10. Analyze Eve’s success in eavesdropping on
the BB84 protocol if she does not
even know which two bases to choose from and so chooses a basis at random at each step. a. On average, what percentage of bit values of the final key will Eve know for sure after listening to Alice
and Bob’s conversation on the public channel? b. On average, what percentage of bits in her string are correct? c. How many bits do Alice and Bob need to compare to have a 90 percent chance of
detecting Eve’s presence? Exercise 2.11. B92 quantum key distribution protocol. In 1992 Bennett proposed the following quantum key distribution protocol. Instead of encoding each bit in either the
standard basis or the Hadamard basis as is done in the BB84 protocol, Alice encodes her random string x as follows
0 → |0 1 1 → |+ = √ (|0 + |1) 2 and sends them to Bob. Bob generates a random bit string y. If yi = 0 he measures the i th qubit in the Hadamard basis {|+, |−}, if yi = 1 he measures in the standard
basis {|0, |1}. In this protocol, instead of telling Alice over the public classical channel which basis he used to measure
2.7 Exercises
y x
y x Figure 2.7 Bloch sphere representation of single-qubit quantum states.
each qubit, he tells her the results of his measurements. If his measurement resulted in |+ or |0 Bob sends 0; if his measurement indicates the state is |1 or |−, he sends 1. Alice and Bob discard
all bits from strings x and y for which Bob’s bit value from measurement yielded 0, obtaining strings x and y . Alice uses x as the secret key and Bob uses y . Then, depending on the security level
they desire, they compare a number of bits to detect tampering. They discard these check bits from their key. a. Show that if Bob receives exactly the states Alice sends, then the strings x and y are
strings. b. Why didn’t Alice and Bob decide to keep the bits of x and y for which Bob’s bit value from
measurement was 0? c. What if an eavesdropper Eve measures each bit in either the standard basis or the Hadamard
basis to obtain a bit string z and forwards the measured qubits to Bob? On average, how many bits of Alice and Bob’s key does she know for sure after listening in on the public classical? If Alice
and Bob compare s bit values of their strings x and y , how likely are they to detect Eve’s presence? Exercise 2.12. Bloch Sphere: Spherical coordinates: a. Show that the surface of the Bloch sphere
can be parametrized in terms of two real-valued
parameters, the angles θ and φ illustrated in figure 2.7. Make sure your parametrization is in one-to-one correspondence with points on the sphere, and therefore single-qubit quantum states, in the
range θ ∈ [0, π ] and φ ∈ [0, 2π] except for the points corresponding to |0 and |1. b. What are θ and φ for each of the states |+, |−, |i, and |−i?
2 Single-Qubit Quantum Systems
Exercise 2.13. Relate the four parametrizations of the state space of a single qubit to each other: Give formulas for a. vectors in ket notation b. elements of the extended complex plane c. spherical
coordinates for the Bloch sphere (see exercise 2.12) in terms of the x, y, and z coordinates of the Bloch sphere. Exercise 2.14. a. Show that antipodal points on the surface of the Block sphere
represent orthogonal states. b. Show that any two orthogonal states correspond to antipodal points.
Multiple-Qubit Systems
The first glimpse into why encoding information in quantum states might support more efficient computation comes when examining systems of more than one qubit. Unlike classical systems, the state
space of a quantum system grows exponentially with the number of particles. Thus, when we encode computational information in quantum states of a system of n particles, there are vastly more possible
computation states available than when classical states are used to encode the information. The extent to which these large state spaces corresponding to small amounts of physical space can be used
to speed up computation will be the subject of much of the rest of this book. The enormous difference in dimension between classical and quantum state spaces is due to a difference in the way the
spaces combine. Imagine a macroscopic physical system consisting of several components. The state of this classical system can be completely characterized by describing the state of each of its
component pieces separately. A surprising and unintuitive aspect of quantum systems is that often the state of a system cannot be described in terms of the states of its component pieces. States that
cannot be so described are called entangled states. Entangled states are a critical ingredient of quantum computation. Entangled states are a uniquely quantum phenomenon; they have no classical
counterpart. Most states in a multiple-qubit system are entangled states; they are what fills the vast quantum state spaces. The impossibility of efficiently simulating the behavior of entangled
states on classical computers suggested to Feynman, Manin, and others that it might be possible to use these quantum behaviors to compute more efficiently, leading to the development of the field of
quantum computation. The first few sections of this chapter will be fairly abstract as we develop the mathematical formalism to discuss multiple-qubit systems. We will try to make this material more
concrete by including many examples. Section 3.1 formally describes the difference between the way quantum and classical state spaces combine, the difference between the direct sum of two or more
vector spaces and the tensor product of a set of vector spaces. Section 3.1 then explores some of the implications of this difference, including the exponential increase in the dimension of a quantum
state space with the number of particles. Section 3.2 formally defines entangled states and begins to describe their uniquely quantum behavior. As a first illustration of the usefulness of this
behavior, section 3.4 discusses a second quantum key distribution scheme.
3 Multiple-Qubit Systems
3.1 Quantum State Spaces
In classical physics, the possible states of a system of n objects, whose individual states can be described by a vector in a two-dimensional vector space, can be described by vectors in a vector
space of 2n dimensions. Classical state spaces combine through the direct sum. However, the combined state space of n quantum systems, each with states modeled by two-dimensional vectors, is much
larger. The vector spaces associated with the quantum systems combine through the tensor product, resulting in a vector space of 2n dimensions. We begin by reviewing the formal definition of a direct
sum as well as of the tensor product in order to compare the two and the difference in size between the resulting spaces. 3.1.1 Direct Sums of Vector Spaces
The direct sum V ⊕ W of two vector spaces V and W with bases A = {|α1 , |α2 , . . . , |αn } and B = {|β1 , |β2 , . . . , |βm } respectively is the vector space with basis A ∪ B = {|α1 , |α2 , . . . ,
|αn , |β1 , |β2 , . . . , |βm }. The order of the basis is arbitrary. Every element |x ∈ V ⊕ W can be written as |x = |v ⊕ |w for some |v ∈ V and |w ∈ W . For V and W of dimension n and m
respectively, V ⊕ W has dimension n + m: dim(V ⊕ W ) = dim(V ) + dim(W ). Addition and scalar multiplication are defined by performing the operation on the two component vector spaces separately and
adding the results. When V and W are inner product spaces, the standard inner product on V ⊕ W is given by ( v2 | ⊕ w2 |)(|v1 ⊕ |w1 ) = v2 |v1 + w2 |w1 . The vector spaces V and W embed in V ⊕ W in
the obvious canonical way, and the images are orthogonal under the standard inner product. Suppose that the state of each of three classical objects O1 , O2 , and O3 is fully described by two
parameters, the position xi and the momentum pi . Then the state of the system can be described by the direct sum of the states of the individual objects: ⎞ ⎛ x1 ⎜ p ⎟ 1 ⎟ ⎜ ⎟ ⎜ x1 x2 x3 ⎜ x2 ⎟ ⊕ ⊕ =
⎜ ⎟. ⎜ p2 ⎟ p1 p2 p3 ⎟ ⎜ ⎝ x3 ⎠ p3 More generally, the state space of n such classical objects has dimension 2n. Thus the size of the state space grows linearly with the number of objects.
3.1 Quantum State Spaces
3.1.2 Tensor Products of Vector Spaces
The tensor product V ⊗ W of two vector spaces V and W with bases A = {|α1 , |α2 , . . . , |αn } and B = {|β1 , |β2 , . . . , |βm } respectively is an nm-dimensional vector space with a basis
consisting of the nm elements of the form |αi ⊗ |βj where ⊗ is the tensor product, an abstract binary operator that satisfies the following relations: (|v1 + |v2 ) ⊗ |w = |v1 ⊗ |w + |v2 ⊗ |w |v ⊗ (|
w1 + |w2 ) = |v ⊗ |w1 + |v ⊗ |w2 (a|v) ⊗ |w = |v ⊗ (a|w) = a(|v ⊗ |w). Taking k = min(n, m), all elements of V ⊗ W have form |v1 ⊗ |w1 + |v2 ⊗ |w2 + · · · + |vk ⊗ |wk for some vi ∈ V and wi ∈ W . Due
to the relations defining the tensor product, such a representation is not unique. Furthermore, while all elements of V ⊗ W can be written α1 (|α1 ⊗ |β1 ) + α2 (|α2 ⊗ |β1 ) + · · · + αnm (|αn ⊗ |βm
), most elements of V ⊗ W cannot be written as |v ⊗ |w, where v ∈ V and w ∈ W . It is common to write |v|w for |v ⊗ |w. Example 3.1.1 Let V and W be two-dimensional vector spaces with orthonormal
bases A =
{|α1 , |α2 } and B = {|β1 , |β2 } respectively. Let |v = a1 |α1 + a2 |α2 and |w = b1 |β1 + b2 |β2 be elements of V and W . Then |v ⊗ |w = a1 b1 |α1 ⊗ |β1 + a1 b2 |α1 ⊗ |β2 + a2 b1 |α2 ⊗ |β1 + a2 b2 |
α2 ⊗ |β2 .
If V and W are vector spaces corresponding to a qubit, each with standard basis {|0, |1}, then V ⊗ W has {|0 ⊗ |0, |0 ⊗ |1, |1 ⊗ |0, |1 ⊗ |1} as basis. The tensor product of two single-qubit states
a1 |0 + b1 |1 and a2 |0 + b2 |1 is a1 a2 |0 ⊗ |0 + a1 b2 |0 ⊗ |1 + a2 b1 |1 ⊗ |0 + a2 b2 |1 ⊗ |1. To write examples in the more familiar matrix notation for vectors, we must choose an ordering for
the basis of the tensor product space. For example, we can choose the dictionary ordering {|α1 |β1 , |α1 |β2 , |α2 |β1 , |α2 |β2 }. Example 3.1.2 With the dictionary ordering of the basis for the
tensor product space, the tensor
product of the unit vectors with matrix representation |v = is the unit vector |v ⊗ |w =
1 √
(−1, 3, 2, −6) . †
√1 (1, −2)† 5
and |w =
√1 (−1, 3)† 10
3 Multiple-Qubit Systems
If V and W are inner product spaces, then V ⊗ W can be given an inner product by taking the product of the inner products on V and W ; the inner product of |v1 ⊗ |w1 and |v2 ⊗ |w2 is given by ( v2 |
⊗ w2 |) · (|v1 ⊗ |w1 ) = v2 |v1 w2 |w1 , The tensor product of two unit vectors is a unit vector, and given orthonormal bases {|αi } for V and {|βi } for W , the basis {|αi ⊗ |βj } for V ⊗ W is also
orthonormal. The tensor product V ⊗ W has dimension dim(V ) × dim(W ), so the tensor product of n two-dimensional vector spaces has 2n dimensions. Most elements |w ∈ V ⊗ W cannot be written as the
tensor product of a vector in V and a vector in W (though they are all linear combinations of such elements). This observation is of crucial importance to quantum computation. States of V ⊗ W that
cannot be written as the tensor product of a vector in V and a vector in W are called entangled states. As we will see, for most quantum states of an n-qubit system, in particular for all entangled
states, it is not meaningful to talk about the state of a single qubit of the system. A tensor product structure also underlies probability theory. While the tensor product structure there is rarely
mentioned, a common source of confusion is a tendency to try to impose a direct sum structure on what is actually a tensor product structure. Readers may find it useful to read section A.1, which
discusses the tensor product structure inherent in probability theory, which illustrates the use of tensor product in another, more familiar, context. Readers may also wish to do exercises A.1
through A.4. 3.1.3 The State Space of an n-Qubit System
Given two quantum systems with states represented by unit vectors in V and W respectively, the possible states of the joint quantum system are represented by unit vectors in the vector space V ⊗ W .
For 0 ≤ i < n, let Vi be the vector space, with basis {|0i , |1i }, corresponding to a single qubit. The standard basis for the vector space Vn−1 ⊗ · · · ⊗ V1 ⊗ V0 for an n-qubit system consists of
the 2n vectors {|0n−1 ⊗ · · · ⊗ |01 ⊗ |00 , |0n−1 ⊗ · · · ⊗ |01 ⊗ |10 , |0n−1 ⊗ · · · ⊗ |11 ⊗ |00 , .. ., |1n−1 ⊗ · · · ⊗ |11 ⊗ |10 }. The subscripts are often dropped, since the corresponding qubit
is clear from position. The convention that adjacency of kets means the tensor product enables us to write this basis more compactly:
3.1 Quantum State Spaces
{|0 · · · |0|0, |0 · · · |0|1, |0 · · · |1|0, .. ., |1 · · · |1|1}. Since the tensor product space corresponding to an n-qubit system occurs so frequently throughout quantum information processing,
an even more compact and readable notation uses |bn−1 . . . b0 to represent |bn−1 ⊗ · · · ⊗ |b0 . In this notation the standard basis for an n-qubit system can be written {|0 · · · 00, |0 · · · 01, |
0 · · · 10, . . . , |1 · · · 11}. Finally, since decimal notation is more compact than binary notation, we will represent the state |bn−1 . . . b0 more compactly as |x, where bi are the digits of the
binary representation for the decimal number x. In this notation, the standard basis for an n-qubit system is written {|0, |1, |2, . . . , |2n − 1}. The standard basis for a two-qubit system can be
written as {|00, |01, |10, |11} = {|0, |1, |2, |3}, and the standard basis for a three-qubit system can be written as {|000, |001, |010, |011, |100, |101, |110, |111} = {|0, |1, |2, |3, |4, |5, |6, |
7}. Since the notation |3 corresponds to two different quantum states in these two bases, one a two-qubit state, the other a three-qubit state, in order for such notation to be unambiguous, the
number of qubits must be clear from context. We often revert to a less compact notation when we wish to set apart certain sets of qubits, to indicate separate registers of a quantum computer, or to
indicate qubits controlled by different people. If Alice controls the first two qubits and Bob the last three, we may write a state as √12 (|00|101 + |10|011), or even as √12 (|00A |101B + |10A |011B
), where the subscripts indicate which qubits Alice controls and which qubits Bob controls. Example 3.1.3 The superpositions
1 1 1 1 √ |0 + √ |7 = √ |000 + √ |111 2 2 2 2
3 Multiple-Qubit Systems
and 1 1 (|1 + |2 + |4 + |7) = (|001 + |010 + |100 + |111) 2 2 represent possible states of a three-qubit system. To use matrix notation for state vectors of an n-qubit system, the order of basis
vectors must be established. Unless specified otherwise, basis vectors labeled with numbers are assumed to be sorted numerically. Using this convention, the two qubit state i 1 i 1 1 1 |00 + |01 + √
|11 = |0 + |1 + √ |3 2 2 2 2 2 2 will have matrix representation ⎛
1 2 i 2
⎜ ⎜ ⎜ ⎝ 0
⎞ ⎟ ⎟ ⎟. ⎠
√1 2
We use the standard basis predominantly, but we use other bases from time to time. For example, the following basis, the Bell basis for a two-qubit system, {|+ , |− , | + , | − }, where √ |+ = 1/ 2(|
00 + |11 √ |− = 1/ 2(|00 − |11 (3.1) √ | + = 1/ 2(|01 + |10 √ | − = 1/ 2(|01 − |10, is important for various applications of quantum information processing including quantum teleportation. As in the
single-qubit case, a state |v is a superposition with respect to a set of orthonormal states {|β1 , . . . , |βi } if it is a linear combination of these states, |v = a1 |β1 + · · · + ai |βi , and at
least two of the ai are non-zero. When no set of orthonormal states is specified, we will mean that the superposition is with respect to the standard basis. Any unit vector of the 2n -dimensional
state space represents a possible state of an n-qubit system, but just as in the single-qubit case there is redundancy. In the multiple-qubit case, not only do vectors that are multiples of each
other refer to the same quantum state, but properties of the tensor product also mean that phase factors distribute over tensor products; the same phase factor in different qubits of a tensor product
represent the same state: |v ⊗ (eiφ |w) = eiφ (|v ⊗ |w) = (eiφ |v) ⊗ |w. Phase factors in individual qubits of a single term of a superposition can always be factored out into a single coefficient
for that term.
3.1 Quantum State Spaces
Example 3.1.4
√1 (|0 + |1) ⊗ √1 (|0 + |1) 2 2
Example 3.1.5 ( 12 |0 +
= 12 (|00 + |01 + |10 + |11)
3 |1) ⊗ ( √12 |0 + √i2 |1) 2
√ 1 √ |00 + 2√i 2 |01 + 2√32 |10 + 2 2
√ i √3 |11) 2 2
Just as in the single-qubit case, vectors that differ only in a global phase represent the same quantum state. If we write every quantum state as a0 |0 . . . 00 + a1 |0 . . . 01 + · · · + a2n −1 |1 .
. . 11 and require the first non-zero ai to be real and non-negative, then every quantum state has a unique representation. Since this representation uniquely represents quantum states, the quantum
state space of an n-qubit system has 2n − 1 complex dimensions. For any complex vector space of dimension N , the space in which vectors that are multiples of each other are considered equivalent is
called complex projective space of dimension N − 1. So the space of distinct quantum states of an n-qubit system is a complex projective space of dimension 2n − 1. Just as in the single-qubit case,
we must be careful not to confuse the vector space in which we write our computations with the quantum state space itself. Again, we must be careful to avoid confusion between the relative phases
between terms in the superposition, of critical importance in quantum mechanics, and the global phase which has no physical meaning. Using the notation of section 2.5.1, we write |v ∼ |w when two
vectors |v and |w differ only by a global phase and thus represent the same quantum state. For example, even though |00 ∼ eiφ |00, the vectors |v = √12 (eiφ |00 + |11) and |w = √12 (|00 + |11)
represent different quantum states, which behave differently in many situations: 1 1 √ (eiφ |00 + |11) ∼ √ (|00 + |11). 2 2 However, eiφ 1 1 √ (eiφ |00 + eiφ |11) ∼ √ (|00 + |11) ∼ √ (|00 + |11). 2 2
2 Quantum mechanical calculations are usually performed in the vector space rather than in the projective space because linearity makes vector spaces easier to work with. But we must always be aware
of the ∼ equivalence when we interpret the results of our calculations as quantum states. Further confusions arise when states are written in different bases. Recall from section 2.5.1 that |+ = √12
(|0 + |1) and |− = √12 (|0 − |1). The expression √12 (|+ + |−) is a different way of writing |0, and √12 (|0|0 + |1|1) and for the same vector.
√1 (|+|+ + |−|−) are simply different expressions 2
3 Multiple-Qubit Systems
Fluency with properties of tensor products, and with the notation just presented, will be crucial for understanding the rest of the book. The reader is strongly encouraged to work exercises 3.1
through 3.9 at this point to begin to develop that fluency. 3.2 Entangled States
As we saw in section 2.5.2, a single-qubit state can be specified by a single complex number so any tensor product of n individual single-qubit states can be specified by n complex numbers. But in
the last section, we saw that it takes 2n − 1 complex numbers to describe states of an n-qubit system. Since 2n n, the vast majority of n-qubit states cannot be described in terms of the state of n
separate single-qubit systems. States that cannot be written as the tensor product of n single-qubit states are called entangled states. Thus the vast majority of quantum states are entangled.
Example 3.2.1 The elements of the Bell basis (Equation 3.1) are entangled. For instance, the Bell state |+ = √12 (|00 + |11) cannot be described in terms of the state of each of its component qubits
separately. This state cannot be decomposed, because it is impossible to find a1 , a2 , b1 , b2 such that 1 (a1 |0 + b1 |1) ⊗ (a2 |0 + b2 |1) = √ (|00 + |11), 2
since (a1 |0 + b1 |1) ⊗ (a2 |0 + b2 |1) = a1 a2 |00 + a1 b2 |01 + b1 a2 |10 + b1 b2 |11 and a1 b2 = 0 implies that either a1 a2 = 0 or b1 b2 = 0. Two particles in the Bell state |+ are called an EPR
pair for reasons that will become apparent in section 4.4.
Example 3.2.2 Other examples of two-qubit entangled states include
1 | + = √ (|01 + |10), 2 1 √ (|00 − i|11), 2 √ 99 i |00 + |11), 10 10 and 7 1 1 7 |00 + |01 + |10 + |11). 10 10 10 10
3.2 Entangled States
The four entangled states 1 |+ = √ (|00 + |11) 2 1 |− = √ (|00 − |11) 2 and 1 | + = √ (|01 + |10) 2 1 | − = √ (|01 − |10) 2 of equation 3.1 are called Bell states. Bell states are of fundamental
importance to quantum information processing. For example, section 5.3 exhibits their use for quantum teleportation and dense coding. Section 10.2.1 shows that these states are maximally entangled.
Strictly speaking, entanglement is always with respect to a specified tensor product decomposition of the state space. More formally, given a state |ψ of some quantum system with associated vector
space V and a tensor decomposition of V , V = V1 ⊗ · · · ⊗ Vn , the state |ψ is separable, or unentangled, with respect to that decomposition if it can be written as |ψ = |v1 ⊗ · · · ⊗ |vn , where |
vi is contained in Vi . Otherwise, |ψ is entangled with respect to this decomposition. Unless we specify a different decomposition, when we say an n-qubit state is entangled, we mean it is entangled
with respect to the tensor product decomposition of the vector space V associated to the n-qubit system into the n two-dimensional vector spaces Vn−1 , . . . V0 associated with each of the individual
qubits. For such statements to have meaning, it must be specified or clear from context which of the many possible tensor decompositions of V into two-dimensional spaces corresponds with the set of
qubits under consideration. It is vital to remember that entanglement is not an absolute property of a quantum state, but depends on the particular decomposition of the system into subsystems under
consideration; states entangled with respect to the single-qubit decomposition may be unentangled with respect to other decompositions into subsystems. In particular, when discussing entanglement in
quantum computation, we will be interested in entanglement with respect to a decomposition into registers, subsystems consisting of multiple qubits, as well as entanglement with respect to the
decomposition into individual qubits. The following example demonstrates how a state can be entangled with respect to one decomposition and not with respect to another. Example 3.2.3 Multiple
meanings of entanglement. We say that the four-qubit state
|ψ =
1 1 (|00 + |11 + |22 + |33) = (|0000 + |0101 + |1010 + |1111) 2 2
3 Multiple-Qubit Systems
is entangled, since it cannot be expressed as the tensor product of four single-qubit states. That the entanglement is with respect to the decomposition into single qubits is implicit in this
statement. There are other decompositions with respect to which this state is unentangled. For example, |ψ can be expressed as the product of two two-qubit states: |ψ =
1 (|01 |02 |03 |04 + |01 |12 |03 |14 + |11 |02 |13 |04 + |11 |12 |13 |14 2
1 1 = √ (|01 |03 + |11 |13 ) ⊗ √ (|02 |04 + |12 |14 ), 2 2 where the subscripts indicate which qubit we are talking about. So |ψ is not entangled with respect to the system decomposition consisting
of a subsystem of the first and third qubit and a subsystem consisting of the second and fourth qubit. On the other hand, the reader can check that |ψ is entangled with respect to the decomposition
into the two two-qubit systems consisting of the first and second qubits and the third and fourth qubits. It is important to recognize that the notion of entanglement is not basis dependent, even
though it depends on the tensor decomposition under consideration; there is no reference, explicit or implicit, to a basis in the definition of entanglement. Certain bases may be more or less
convenient to work with, depending for instance on how much they reflect the tensor decomposition under consideration, but that choice does not affect what states are considered entangled. In section
2.3, we puzzled over the meaning of quantum superpositions. We now extend the remarks we made on the meaning of superpositions in section 2.3 to the multiple-qubit case. As in the single-qubit case,
most n-qubit states are superpositions, nontrivial linear combinations of basis vectors. As always, the notion of superposition is basis-dependent; all states are superpositions with respect to some
bases, and not superpositions with respect to other bases. For multiple qubits, the answer to the question of what superpositions mean is more involved than in the single-qubit case. The common way
of talking about superpositions in terms of the system being in two states “at the same time" is even more suspect in the multiple-qubit case. This way of thinking fails to distinguish between states
like √12 (|00 + |11) and √12 (|00 + i|11) that differ only by a relative phase and behave differently under a variety of circumstances. Furthermore, which states a system is viewed as “being in at
the same time” is basis-dependent; the expressions √12 (|00 + |11) and √1 (|+|+ + |−|−) 2
represent the same state but have different interpretations, one as being in the states |00 and |11 at the same time, and the other as being in the states |++ and |−− at the same time, in spite of
being the same state and thus behaving in precisely the same way under all circumstances. This example underscores that quantum superpositions are not probabilistic mixtures. Sections 3.4 and 4.4
will illustrate how the basis dependence of this interpretation obscures an essential part of the quantum nature of these states, an aspect that becomes apparent only
3.3 Basics of Multi-Qubit Measurement
when such states are considered in different bases. Nevertheless, as long as one is aware that this description should not be taken too literally, it can be helpful at first to think of
superpositions as being in multiple states at once. Over the course of this chapter and the next, you will begin to develop more of a feel for the workings of these states. Not only is entanglement
between qubits key to the exponential size of quantum state spaces of multiple-qubit systems, but, as we will see in sections 3.4, 5.3.1, and 5.3.2, particles in an entangled state can also be used
to aid communication of both classical and quantum information. Furthermore, the quantum algorithms of part II exploit entanglement to speed up computation. The way entangled states behave when
measured is one of the central mysteries of quantum mechanics, as well as a source of power for quantum information processing. Entanglement and quantum measurement are two of the uniquely quantum
properties that are exploited in quantum information processing. 3.3 Basics of Multi-Qubit Measurement
The experiment of section 2.1.2 illustrates how measurement of a single qubit is probabilistic and transforms the quantum state into a state compatible with the measuring device. A similar statement
is true for measurements of multiple-qubit systems, except that the set of possible measurements and measurement outcomes is significantly richer than in the single-qubit case. The next paragraph
develops some mathematical formalism to handle the general case. Let V be the N = 2n dimensional vector space associated with an n-qubit system. Any device that measures this system has an associated
direct sum decomposition into orthogonal subspaces V = S1 ⊕ · · · ⊕ Sk for some k ≤ N. The number k corresponds to the maximum number of possible measurement outcomes for a state measured with that
particular device. This number varies from device to device, even between devices measuring the same system. That any device has an associated direct sum decomposition is a direct generalization of
the single-qubit case. Every device measuring a singlequbit system has an associated orthonormal basis {|v1 , |v2 } for the vector space V associated with the single-qubit system; the vectors |vi
each generate a one-dimensional subspace Si (consisting of all multiples a|vi where a is a complex number), and V = S1 ⊕ S2 . Furthermore, the only nontrivial decompositions of the vector space V are
into two one-dimensional subspaces, and any choice of unit length vectors, one from each of the subspaces, yields an orthonormal basis. When a measuring device with associated direct sum
decomposition V = S1 ⊕ · · · ⊕ Sk interacts with an n-qubit system in state |ψ, the interaction changes the state to one entirely contained within one of the subspaces, and chooses the subspace with
probability equal to the square of the absolute value of the amplitude of the component of |ψ in that subspace. More formally, the state |ψ has a unique direct sum decomposition |ψ = a1 |ψ1 ⊕ · · · ⊕
ak |ψk , where |ψi is a unit vector in Si and ai is real and non-negative. When |ψ is measured, the state |ψi is obtained
3 Multiple-Qubit Systems
with probability |ai |2 . That any measuring device has an associated direct sum decomposition, and that the interaction can be modeled in this way, is an axiom of quantum mechanics. It is not
possible to prove that every device behaves in this way, but so far it has provided an excellent model that predicts the outcome of experiments with high accuracy. Example 3.3.1 Single-qubit
measurement in the standard basis. Let V be the vector space associated with a single-qubit system. A device that measures a qubit in the standard basis has, by definition, the associated direct sum
decomposition V = S1 ⊕ S2 , where S1 is generated by |0 and S2 is generated by |1. An arbitrary state |ψ = a|0 + b|1 measured by such a device will be |0 with probability |a|2 , the amplitude of |ψ
in the subspace S1 , and |1 with probability |b|2 .
Example 3.3.2 Single-qubit measurement in the Hadamard basis. A device that measures a single
qubit in the Hadamard basis 1 1 {|+ = √ (|0 + |1), |− = √ (|0 − |1)} 2 2 has associated subspace decomposition V = S+ ⊕ S− , where S+ is generated by |+ and S− is √ |+ + a−b √ |−, so the generated by
|−. A state |ψ = a|0 + b|1 can be rewritten as |ψ = a+b 2 2
√ | and |− will be | a−b √ | . probability that |ψ is measured as |+ will be | a+b 2 2 2
The next two examples describe measurements of two-qubit states that are used in the entanglement-based quantum key distribution protocol described in section 3.4. Chapter 4 explores measurement of
multiple-qubit systems in more detail and builds up the standard notational shorthand for describing quantum measurements. Example 3.3.3 Measurement of the first qubit of a two-qubit state in the
standard basis. Let V be the vector space associated with a two-qubit system. A device that measures the first qubit in the standard basis has associated subspace decomposition V = S1 ⊕ S2 where S1 =
|0 ⊗ V2 , the two-dimensional subspace spanned by {|00, |01}, and S2 = |1 ⊗ V2 , which is spanned by {|10, |11}. To see what happens when such a device measures an arbitrary two-qubit state |ψ = a00
|00 + a01 |01 + a10 |10 + a11 |11, we write |ψ = c1 |ψ1 + c2 |ψ2 where |ψ1 = 1/c1 (a00 |00 + a01 |01) ∈ S1 and |ψ2 = 1/c2 (a10 |10 + a11 |11) ∈ S2 , with c1 = |a00 |2 + |a01 |2 and c2 = |a10 |2 + |
a11 |2 as the normalization factors. Measurement of |ψ with this device results in the state |ψ1 with probability |c1 |2 = |a00 |2 + |a01 |2 and the state |ψ2 with probability |c2 |2 = |a10 |2 + |a11
|2 . In particular, when the Bell state |+ = √12 (|00 + |11) is measured, we obtain |00 and |11 with equal probability.
3.4 Quantum Key Distribution Using Entangled States
Example 3.3.4 Measurement of the first qubit of a two-qubit state in the Hadamard basis. A
device that measures the first qubit of a two-qubit system with respect to the Hadamard basis {|+, |−} has an associated direct sum decomposition V = S1 ⊕ S2 , where S1 = |+ ⊗ V2 , the
two-dimensional subspace spanned by {|+|0, |+|1}, and S2 = |− ⊗ V2 . We write |ψ = a00 |00 + a01 |01 + a10 |10 + a11 |11 as |ψ = a1 |ψ1 + a2 |ψ2 , where a00 + a10 a01 + a11 |ψ1 = c1 |+|0 + √ |+|1 √ 2
2 and |ψ2
a00 − a10 a01 − a11 |−|0 + √ |−|1 . √ 2 2
We leave it to the reader to calculate c1 and c2 and the probabilities for the two outcomes, and to show that such a measurement on the state |+ = √12 (|00 + |11 yields |+|+ and |−|− with equal
3.4 Quantum Key Distribution Using Entangled States
In 1991, Artur Ekert developed a quantum key distribution scheme that makes use of special properties of entangled states. The Ekert 91 protocol resembles the BB84 protocol of section 2.4 in some
ways. In his protocol, Alice and Bob establish a shared key by separately performing random measurements on their halves of an EPR pair and then comparing which bases they used over a classical
channel. Because Alice and Bob do not exchange quantum states during the protocol, and an eavesdropper Eve cannot learn anything useful by listening in on the classical exchange alone, Eve’s only
chance to obtain information about the key is for her to interact with the purported EPR pair as it is being created or transmitted in the setup for the protocol. For this reason it is easier to
prove the security of protocols based on entangled states. Such proofs have then been modified to prove the security of other QKD protocols like BB84. As with BB84, we describe only the protocol;
tools developed in later chapters are needed to describe many of Eve’s possible attacks and to give a proof of security. Exercise 3.15 analyzes the limited effectiveness of some simple attacks Eve
could make. The protocol begins with the creation of a sequence of pairs of qubits, all in the entangled state |+ = √12 (|00 + |11. Alice receives the first qubit of each pair, while Bob receives the
second. When they wish to create a secret key, for each qubit they both independently and randomly choose either the standard basis {|0, |1} or the Hadamard basis {|+, |−} in which to measure, just
as in the BB84 protocol. After they have made their measurements, they compare bases and discard those bits for which their bases differ.
3 Multiple-Qubit Systems
If Alice measures the first qubit in the standard basis and obtains |0, then the entire state becomes |00. If Bob now measures in the standard basis, he obtains the result |0 with certainty. If
instead he measures in the Hadamard basis {|+, |−}, he obtains |+ and |− with equal probability, since |00 = |0( √12 (|+ + |−)). Just as in the BB84 protocol, he interprets the states |+ and |− as
corresponding to the classical bit values 0 and 1 respectively; thus when he measures in the basis {|+|−} and Alice measures in the standard basis, he obtains the same bit value as Alice only half
the time. The behavior is similar when Alice’s measurement indicates her qubit is in state |1. If instead Alice measures in the Hadamard basis and obtains the result that her qubit is in the state |
+, the whole state becomes |+|+. If Bob now measures in the Hadamard basis, he obtains |+ with certainty, whereas if he measures in the standard basis he obtains |0 and |1 with equal probability.
Since they always get the same bit value if they measure in the same basis, the protocol results in a shared random key, as long as the initial pairs were EPR pairs. The security of the scheme relies
on adding steps to the protocol we have just described that enable Alice and Bob to test the fidelity of their EPR pairs. We are not yet in a position to describe such tests. The tests Ekert
suggested are based on Bell’s inequalities (section 4.4.3). Other, more efficient tests have been devised. This protocol has the intriguing property that in theory Alice and Bob can prepare shared
keys as they need them, never needing to store keys for any length of time. In practice, to prepare keys on an as-needed basis in this way, Alice and Bob would need to be able to store their EPR
pairs so that they are not corrupted during that time. The capability of long-term reliable storage of entangled states does not exist at present. 3.5 References
In the early 1980s, Richard Feynman and Yuri Manin separately recognized that certain quantum phenomena associated with entangled particles could not be simulated efficiently on standard computers.
Turning this observation around caused them to speculate whether these quantum phenomena could be used to speed up computation in general. Their early musings on quantum computation can be found in
[121], [150], [202], and [203]. More extensive treatments of the tensor product can be found in Arno Bohm’s Quantum Mechanics [53], Paul Bamberg and Shlomo Sternberg’s A Course in Mathematics for
Students of Physics [30], and Thomas Hungerford’s Algebra [158]. Ekert’s key distribution protocol based on EPR pairs, originally proposed in [111], has been demonstrated in the laboratory [163,
294]. Gisin et al. [130] provide a detailed survey of work on quantum key distribution including Ekert’s algorithm. 3.6 Exercises Exercise 3.1. Let V be a vector space with basis {(1, 0, 0), (0, 1,
0), (0, 0, 1)}. Give two different
bases for V ⊗ V .
3.6 Exercises
Exercise 3.2. Show by example that a linear combination of entangled states is not necessarily
entangled. Exercise 3.3. Show that the state
1 |Wn = √ (|0 . . . 001 + |0 . . . 010 + |0 . . . 100 + · · · + |1 . . . 000) n is entangled, with respect to the decomposition into the n qubits, for every n > 1. Exercise 3.4. Show that the state
1 |GH Zn = √ (|00 . . . 0 + |11 . . . 1) 2 is entangled, with respect to the decomposition into the n qubits, for every n > 1. Exercise 3.5. Is the state
√1 (|0|+ + |1|−) 2
Exercise 3.6. If someone asks you whether the state |+ is entangled, what will you say? Exercise 3.7. Write the following states in terms of the Bell basis. a. |00 b. |+|− c.
√1 (|00 + |01 + |10) 3
Exercise 3.8. a. Show that b. Show that
√1 (|0|0 + |1|1) 2 √1 (|0|0 − |1|1) 2
√1 (|+|+ + |−|−) 2
refers to the same state as
refer to the same quantum state. √1 (|i|i + |−i|−i). 2
Exercise 3.9. a. Show that any n-qubit quantum state can be represented by a vector of the form
a0 |0 . . . 00 + a1 |0 . . . 01 + · · · + a2n −1 |1 . . . 11 where the first non-zero ai is real and non-negative. b. Show that this representation is unique in the sense that any two different
vectors of this form
represent different quantum states. Exercise 3.10. Show that for any orthonormal basis B = {|β1 , |β2 , . . . , |βn } and vectors |v =
a1 |β1 + a2 |β2 + · · · + an |βn and |w = c1 |β1 + c2 |β2 + · · · + cn |βn
a. the inner product of |v and |w is c¯1 a1 + c¯2 a2 + · · · + c¯2 a2 , and b. the length squared of |v is ||v| = v|v = |a1 | + |a2 | + · · · + |an | . 2
Write all steps in Dirac’s bra/ket notation.
3 Multiple-Qubit Systems
Exercise 3.11. Let |ψ be an n-qubit state. Show that the sum of the distances from |ψ to the standard basis vectors |j is bounded below by a positive constant that depends only on n, ||ψ − |j | ≥ C,
where |v| indicates the length of the enclosed vector. Specify such a constant C in terms of n. Exercise 3.12. Give an example of a two-qubit state that is a superposition with respect to the
standard basis but that is not entangled. Exercise 3.13. a. Show that the four-qubit state |ψ = 12 (|00 + |11 + |22 + |33) of example 3.2.3 is entangled
with respect to the decomposition into two two-qubit subsystems consisting of the first and second qubits and the third and fourth qubits. b. For the four decompositions into two subsystems
consisting of one and three qubits, say whether |ψ is entangled or unentangled with respect to each of these decompositions. Exercise 3.14. a. For the standard basis, the Hadamard basis, and the
basis B = { √1 (|0 + i|1, |0 − i|1}, 2
determine the probability of each outcome when the second qubit of a two-qubit system in the state |00 is measured in each of the bases. b. Determine the probability of each outcome when the second
qubit of the state |00 is first measured in the Hadamard basis and then in the basis B of part a). c. Determine the probability of each outcome when the second qubit of the state |00 is first
measured in the Hadamard basis and then in the standard basis. Exercise 3.15. This exercise analyzes the effectiveness of some simple attacks an eavesdropper Eve could make on Ekert’s entangled state
based QKD protocol. a. Say Eve can measure Bob’s half of each of the EPR pairs before it reaches him. Say she always
measures in the standard basis. Describe a method by which Alice and Bob can determine that there is only a 2−s chance that this sort of interference by Eve has gone undetected. What happens if Eve
instead measures each qubit randomly in either the standard basis of the Hadamard basis? What happens if she uniformly at random chooses a basis from all possible bases? b. Say Eve can pose as the
entity sending the purported EPR pairs. Say instead of sending EPR
pairs she sends a random mixture of qubit pairs in the states |00, |11, |+|+, and |−|−. After Alice and Bob perform the protocol of section 3.4, on how many bits on average do their purported shared
secret keys agree? On average, how many of these bits does Eve know?
Measurement of Multiple-Qubit States
The nonclassical behavior of quantum measurement is critical to quantum information processing applications. This chapter develops the standard formalism used for measurement of multiplequbit
systems, and uses this formalism to describe the highly nonclassical behavior of entangled states under measurement. In particular, it discusses the EPR paradox and Bell’s theorem, which illustrate
the nonclassical nature of these states. Section 4.1 extends the Dirac bra/ket notation to linear transformations. It will be used in this chapter to describe measurements, and in chapter 5 to
describe quantum transformations acting on quantum systems. Section 4.2 slowly introduces some of the notation and standard formalism for quantum measurement. Section 4.3 uses this material to give a
full description of the standard formalism. Both sections contain a myriad of examples. The chapter concludes with a detailed discussion in section 4.4 of the behavior under measurement of the most
famous of entangled states, EPR pairs. 4.1 Dirac’s Bra/Ket Notation for Linear Transformations
Dirac’s bra/ket notation provides a convenient way of specifying linear transformations on quantum states. Recall from section 2.2 that the conjugate transpose of the vector denoted by ket |ψ is
denoted by bra ψ|, and the inner product of vectors |ψ and |φ is given by ψ|φ. The notation |x y| represents the outer product of the vectors |x and |y. Matrix multiplication is associative, and
scalars commute with everything, so relations such as the following hold: (|a b|)|c = |a( b||c) = ( b|c)|a. Let V be a vector space associated with a single-qubit system. The matrix for the operator
|0 0| with respect to the standard basis in the standard order {|0, |1} is 1 1 0 |0 0| = . 1 0 = 0 0 0
4 Measurement of Multiple-Qubit States
The notation |0 1| represents the linear transformation that maps |1 to |0 and |0 to the null vector, a relationship suggested by the notation: (|0 1|)|1 = |0( 1|1) = |0(1) = |0, (|0 1|)|0 = |0( 1|0)
= |0(0) = 0. Similarly 0 0 |1 0| = , 1 0 0 0 |1 1| = . 0 1 Thus, all two-dimensional linear transformations on V can be written in Dirac’s notation: a b = a|0 0| + b|0 1| + c|1 0| + d|1 1|. c d
Example 4.1.1 The linear transformation that exchanges |0 and |1 is given by
X = |0 1| + |1 0|. We will also use notation X:
|0 → |1 |1 → |0,
which specifies a linear transformation in terms of its effect on the basis vectors. The transformation X = |0 1| + |1 0| can also be represented by the matrix 0 1 1 0 with respect to the standard
Example 4.1.2 The transformation that exchanges the basis vectors |00 and |10 and leaves the others alone is written |10 00| + |00 10| + |11 11| + |01 01| and has matrix representation ⎛ ⎞ 0 0 1 0 ⎜
0 1 0 0 ⎟ ⎜ ⎟ ⎝ 1 0 0 0 ⎠
in the standard basis.
4.2 Projection Operators for Measurement
An operator on an n-qubit system that maps the basis vector |j to |i and all other standard basis elements to 0 can be written O = |i j | in the standard basis; the matrix for O has a single non-zero
entry 1 in the ij th place. A general operator O with entries aij in the standard basis can be written aij |i j |. O= i
Similarly, the ij th entry of the matrix for O in the standard basis is given by i|O|j . As an example of working with this notation, we write out the result of applying operator O to a vector |ψ = k
bk |k: ⎛ ⎞ O|ψ = ⎝ aij |i j |⎠ bk |k = aij bk |i j ||k i
aij bj |i.
More generally, if {|βi } is a basis for an N -dimensional vector space V , then an operator O : V → V can be written as N N
bij |βi βj |
i=1 j =1
with respect to this basis. In particular, the matrix for O with respect to basis {|βi } has entries Oij = bij . Initially the vector/matrix notation may be easier for the reader to comprehend
because it is more familiar, and sometimes this notation is convenient for performing calculations. But it requires choosing a basis and an ordering of that basis. The bra/ket notation is independent
of the basis and the order of the basis elements. It is also more compact, and it suggests correct relationships, as we saw for the outer product, so that once it becomes familiar, it is easier to
read. 4.2 Projection Operators for Measurement
Section 2.3 described measurement of a single qubit in terms of projection onto a basis vector associated with the measurement device. This notion generalizes to measurement in multiple-qubit
systems. For any subspace S of V , the subspace S ⊥ consists of all vectors that are perpendicular to all vectors in S. The subspaces S and S ⊥ satisfy V = S ⊕ S ⊥ ; thus, any vector |v ∈ V can be
written uniquely as the sum of a vector s1 ∈ S and a vector s2 ∈ S ⊥ . For any S, the projection operator PS is the linear operator PS : V → S that sends |v → s1 where |v = s1 + s2 with
4 Measurement of Multiple-Qubit States
s1 ∈ S1 and s2 ∈ S2 . We use the notation si because s1 and s2 are generally not unit vectors. The operator |ψ ψ| is the projection operator onto the subspace spanned by |ψ. Projection operators are
sometimes called projectors for short. For any direct sum decomposition of V = S1 ⊕ · · · ⊕ Sk into orthogonal subspaces Si there are k related projection operators Pi : V → Si where Pi |v = si where
|v = s1 + · · · + sk with si ∈ Si . In this terminology, a measuring device with associated decomposition V = S1 ⊕ · · · ⊕ Sk acting on a state |ψ results in the state |φ =
Pi |ψ
|Pi |ψ|
with probability |Pi |ψ| . 2
Example 4.2.1 The projector |0 0| acts on a single-qubit state |ψ and obtains the component of |ψ in the subspace generated by |0. Let |ψ = a|0 + b|1. Then (|0 0|)|ψ = a 0|0|0 + b 0|1|0 = a|0. The
projector |1|0 1| 0| acts on two-qubit states. Let
|φ = a00 |00 + a01 |01 + a10 |10 + a11 |11. Then (|1|0 1| 0|) |φ = a10 |1|0. Let PS be the projection operator from an n-dimensional vector space V onto an s-dimensional subspace S with basis {|α0 ,
. . . , |αs−1 }. Then PS =
|αi αi | = |α0 α0 | + · · · + |αs−1 αs−1 |.
Example 4.2.2 Let |ψ = a00 |00 + a01 |01 + a10 |10 + a11 |11 represent a state of a twoqubit system with associated vector space V . Let S1 be the subspace spanned by |00, |01. The operator PS = |00
00| + |01 01| is the projection operator that sends |ψ to the (nonnormalized) vector a00 |00 + a01 |01.
Let V and W be two vector spaces with inner product. The adjoint operator or conjugate transpose O † : V → W of an operator O : W → V is defined to be the operator that satisfies the following inner
product relation. For any v ∈ V and O w ∈ W , the inner product between O † v and w is the same as the inner product between v and O w: O † v · w = v · O w. The matrix for the adjoint operator O † of
O is obtained by taking the complex conjugate of all entries and then the transpose of the matrix for O, where we are assuming consistent use of bases
4.2 Projection Operators for Measurement
for V and W . Recall from section 2.2 that x| is the conjugate transpose of |x. The reader can check that (A|x)† = x|A† . In bra/ket notation, the relation between the inner product of O † |x and |w
and the inner product of |x and O|w is reflected in the notation: ( x|O)|w = x|(O|w) = x|O|w. The definition of a projection operator P implies that applying a projection operator many times in
succession has the same effect as just applying it once: P P = P . Furthermore, any projection operator is its own adjoint: P = P † . Thus
|P |v|2 = ( v|P † )(P |v) = v|P |v for any projection operator P and all |v ∈ V . To solidify our understanding of projection operators and Dirac’s notation, let us describe single-qubit measurement
in the standard basis in terms of this formalism. Example 4.2.3 Formal treatment of single-qubit measurement in the standard basis. Let V be
the vector space associated with a single-qubit system. The direct sum decomposition for V associated with measurement in the standard basis is V = S ⊕ S , where S is the subspace generated by |0 and
S is the subspace generated by |1. The related projection operators are P : V → S and P : V → S , where P = |0 0| and P = |1 1|. Measurement of the state |ψ = a|0 + b|1 2 results in the state P |ψ
with probability |P |ψ| . Since
|P |ψ|
P |ψ = (|0 0|)|ψ = |0 0|ψ = a|0 and
|P |ψ|2 = ψ|P |ψ = ψ|(|0 0|)|ψ = ψ|0 0|ψ = aa = |a|2 , the result of the measurement is a|0 with probability |a|2 . Since by section 2.5 an overall phase |a| factor is physically meaningless, the
state represented by |0 has been obtained with probability |a|2 . A similar calculation shows that the state represented by |1 is obtained with probability |b|2 . Before giving examples of more
interesting measurements, we describe measurement of a two-qubit state with respect to the full decomposition associated with the standard basis. Example 4.2.4 Measuring a two-qubit state with
respect to the full standard basis decom-
position. Let V be the vector space associated with a two-qubit system and |φ = a00 |00 + a01 |01 + a10 |10 + a11 |11 an arbitrary two-qubit state. Consider a measurement with decomposition V = S00 ⊕
S01 ⊕ S10 ⊕ S11 , where Sij is the one-dimensional complex subspace spanned by |ij . The related projection operators Pij : V → Sij are P00 = |00 00|, P01 = |01 01|, P |ψ P10 = |10 10|, and P11 = |11
11|. The state after measurement will be ij with probability
|Pij |ψ|
4 Measurement of Multiple-Qubit States
|Pij |ψ|2 . Recall from sections 2.5.1 and 3.1.3 that two unit vectors |v and |w represent the
same quantum state if |v = eiθ |w for some θ, and that |v ∼ |w indicates that |v and |w represent the same quantum state. The state after measurement is either P00 |ψ a00 |00 ∼ |00 = |a00 | |P00 |ψ|
with probability ψ|P00 |ψ = |a00 |2 , or |01 with probability |a01 |2 , or |10 with probability |a10 |2 , or |11 with probability |a11 |2 .
To develop fluency with this material, the reader may now want to rewrite, using this notation, the examples of section 3.3. More interesting are measurements that give information about the
relations between qubit values without giving any information about the qubit values themselves. For example, we can measure two qubits for bit equality without determining the actual value of the
bits. Such measurements will be used heavily in quantum error correction schemes. Example 4.2.5 Measuring a two-qubit state for bit equality in the standard basis. Let V be the vector space
associated with a two-qubit system. Consider a measurement with associated direct sum decomposition V = S1 ⊕ S2 , where S1 is the subspace generated by {|00, |11}, the subspace in which the two bits
are equal, and S2 is the subspace generated by {|10, |01}, the subspace in which the two bits are not equal. Let P1 and P2 be the projection operators onto S1 and S2 respectively. When a system in
state |ψ = a00 |00 + a01 |01 + a10 |10 + a11 |11 is measured 2 in this way, with probability |Pi |ψ| = ψ|Pi |ψ, the state after measurement becomes Pi |ψ . |Pi |ψ|
2 2 2 2 Let c1 = ψ|P1 |ψ = |a00 | + |a11 | and c2 = ψ|Pw |ψ = |a01 | + |a10 | . After measure-
2 2 1 (a |00 + a11 |11) with probability |c1 |2 = a00 + a11 and c1 00 2 2 1 (a |01 + a10 |10) with probability |c2 |2 = a01 + a10 . If the first outcome happens, c2 01
ment the state will be |u =
|v = | | | | then we know that the two bit values are equal, but we do not know whether they are 0 or 1. If the second case happens, we know that the two bit values are not equal, but we do not know
which one is 0 and which one is 1. Thus, the measurement does not determine the value of the two bits, only whether the two bits are equal. As in the case of single-qubit states, most states are a
superposition with respect to a measurement’s subspace decomposition. In the previous example, a state that is a superposition containing components with both equal and unequal bit values is
transformed by measurement either to a state (generally still a superposition of standard basis elements), in which in all components the bit values are equal, or to a state in which the bit values
are not equal in all of the components.
4.3 Hermitian Operator Formalism for Measurement
Before further developing the formalism used to describe quantum measurement, we give an additional example, one in which the associated subspaces are not generated by subsets of the standard basis
elements. Example 4.2.6 Measuring a two-qubit state with respect to the Bell basis decomposition. Recall
from section 3.2 the four Bell states |+ =
√1 (|00 + |11), 2
|− =
√1 (|00 − |11), 2
| + =
and | − = √12 (|01 − |10). Let V = S+ ⊕ S− ⊕ S + ⊕ S − be the direct sum decomposition into the subspaces generated by the Bell states. Measurement of the state |00 with respect to this decomposition
yields |+ with probability 1/2 and |− with probability 1/2, because |00 = √12 (|+ + |− ). The reader can determine the outcomes and their probabilities for the three other standard basis elements,
and a general two-qubit state. √1 (|01 + |10), 2
The next section continues developing the standard formalism used throughout the quantum mechanics literature to describe quantum measurement. 4.3 Hermitian Operator Formalism for Measurement
Instead of explicitly writing out the subspace decomposition associated with a measurement, including the definition of each subspace of the decomposition in terms of a generating set, a mathematical
shorthand is used. Certain operators, called Hermitian operators, define a unique orthogonal subspace decomposition, their eigenspace decomposition. Moreover, for every such decomposition, there
exists a Hermitian operator whose eigenspace decomposition is this decomposition. Given this correspondence, Hermitian operators can be used to describe measurements. We begin by reminding our
readers of definitions and facts about eigenspaces and Hermitian operators. Let O : V → V be a linear operator. Recall from linear algebra that if O v = λ v for some non-zero vector v ∈ V , then λ is
an eigenvalue and v is a λ-eigenvector of O. If both v and w are λ-eigenvectors of O, then v + w is also a λ-eigenvector, so the set of all λ-eigenvectors forms a subspace of V called the
λ-eigenspace of O. For an operator with a diagonal matrix representation, the eigenvalues are simply the values along the diagonal. An operator O : V → V is Hermitian if it is equal to its adjoint, O
† = O. The eigenspaces of Hermitian operators have special properties. Suppose λ is an eigenvalue of an Hermitian operator O with eigenvector |x. Since ¯ λ x|x = x|λ|x = xλ |(O|xλ ) = ( x|O † )|x = λ
x|x, λ = λ¯ , which means that all eigenvalues of a Hermitian operator are real. To give the connection between Hermitian operators and orthogonal subspace decompositions, we need to show that the
eigenspaces Sλ1 , Sλ2 , . . . , Sλk of a Hermitian operator are orthogonal and satisfy Sλ1 ⊕ Sλ2 ⊕ · · · ⊕ Sλk = V . For any operator, two distinct eigenvalues
4 Measurement of Multiple-Qubit States
have disjoint eigenspaces since, for any unit vector |x, O|x = λ0 |x and O|x = λ1 |x imply (λ0 − λ1 )|x = 0, which implies that λ0 = λ1 . For any Hermitian operator, the eigenvectors for distinct
eigenvalues must be orthogonal. Suppose |v is a λ-eigenvector and |w is a μ-eigenvector with λ = μ. Then λ v|w = ( v|O † )|w = v|(O|w) = μ v|w. Since λ and μ are distinct eigenvalues, v|w = 0. Thus,
Sλi and Sλj are orthogonal for λi = λj . Exercise 4.16 shows that the direct sum of all of the eigenspaces for a Hermitian operator O : V → V is the whole space V . Let V be an N -dimensional vector
space, and let λ1 , λ2 , . . . , λk be the k ≤ N distinct eigenvalues of an Hermitian operator O : V → V . We have just shown that V = Sλ1 ⊕ · · · ⊕ Sλk , where Sλi is the eigenspace of O with
eigenvalue λi . This direct sum decomposition of V is called the eigenspace decomposition of V for the Hermitian operator O. Thus, any Hermitian operator O : V → V uniquely determines a subspace
decomposition for V . Furthermore, any decomposition of a vector space V into the direct sum of subspaces S1 , . . . , Sk can be realized as the eigenspace decomposition of a Hermitian operator O : V
→ V : let Pi be the projectors onto the subspaces Si , and let λ1 , λ2 , . . . , λk be any set of distinct real values; then O = ki=1 λi Pi is a Hermitian operator with the desired direct sum
decomposition. Thus, when describing a measurement, instead of directly specifying the associated subspace decomposition, we can specify a Hermitian operator whose eigenspace decomposition is that
decomposition. Any Hermitian operator with the appropriate direct sum decomposition can be used to specify a given measurement; in particular, the values of the λi are irrelevant as long as they are
distinct. The λi should be thought of simply as labels for the corresponding subspaces, or equivalently as labels for the measurement outcomes. In quantum physics, these labels are often chosen to
represent a shared property, such as the energy, of the eigenstates in the corresponding eigenspace. For our purposes, we do not need to assign labels with meaning; any distinct set of eigenvalues
will do. Specifying a measurement in terms of a Hermitian operator is standard practice throughout the quantum-mechanics and quantum-information-processing literature. It is important to recognize,
however, that quantum measurement is not modeled by the action of a Hermitian operator on a state. The projectors Pj associated with a Hermitian operator O, not O itself, act on a state. Which
projector acts on the state depends on the probabilities pj = ψ|Pj |ψ. For example, the result of measuring |ψ = a|0 + b|1 according to the Hermitian operator Z = |0 0| − |1 1| does not result in the
state a|0 − b|1, even though 1 0 a a = . 0 −1 b −b Direct multiplication by a Hermitian operator generally does not even result in a well-defined state; for example,
4.3 Hermitian Operator Formalism for Measurement
|0 =
The Hermitian operator is only a convenient bookkeeping trick, a concise way of specifying the subspace decomposition associated with the measurement. 4.3.1 The Measurement Postulate
Many aspects of our model of quantum mechanics are not directly observable by experiment. For example, as we saw in section 2.3, given a single instance of an unknown single-qubit state a|0 + b|1,
there is no way to determine experimentally what state it is in; we cannot directly observe the quantum state. It is only the results of measurements that we can directly observe. For this reason,
the Hermitian operators we use to specify measurements are called observables. The measurement postulate of quantum mechanics states that: •
Any quantum measurement can be specified by a Hermitian operator O called an observable.
The possible outcomes of measuring a state |ψ with an observable O are labeled by the eigenvalues of O. Measurement of state |ψ results in the outcome labeled by the eigenvalue λi 2 of O with
probability |Pi |ψ| where Pi is the projector onto the λi -eigenspace. •
(Projection) The state after measurement is the normalized projection Pi |ψ/|Pi |ψ| of |ψ onto the λi -eigenspace Si . Thus the state after measurement is a unit length eigenvector of O with
eigenvalue λi .
We should make clear that what we have described here is a mathematical formalism for measurement. It does not tell us what measurements can be done in practice, or with what efficiency. Some
measurements that may be mathematically simple to state may not be easy to implement. Furthermore, the eigenvalues of physically realizable measurements may have meaning—for example, as the position
or energy of a particle—but for us the eigenvalues are just arbitrary labels. While a Hermitian operator uniquely specifies a subspace decomposition, for a given subspace decomposition there are many
Hermitian operators whose eigenspace decomposition is that decomposition. In particular, since the eigenvalues are simply labels for the subspaces or possible outcomes, the specific values of the
eigenvalues are irrelevant; it matters only which ones are distinct. For example, measuring with the Hermitian operator |0 0| − |1 1| results in the same states with the same probabilities as
measuring with 100|0 0| − 100|1 1|, but these outcomes do not agree with the outcomes of the trivial measurement corresponding to the Hermitian operator |0 0| + |1 1| or 42|0 0| + 42|1 1|. Example
4.3.1 Hermitian operator formalism for measurement of a single qubit in the standard
basis. Using the description in example 4.2.3 of measurement of a single-qubit system in the standard basis, let us build up a Hermitian operator that specifies this measurement. The subspace
4 Measurement of Multiple-Qubit States
decomposition corresponding to this measurement is V = S ⊕ S , where S is the subspace generated by |0 and S is generated by |1. The projectors associated with S and S are P = |0 0| and P = |1 1|
respectively. Let λ and λ be any two distinct real values, say λ = 2 and λ = −3. Then the operator 2 0 O = 2|0 0| − 3|1 1| = 0 −3 is a Hermitian operator specifying the measurement of a single-qubit
state in the standard basis. Any other distinct values for λ and λ could have been used. We will generally use either 0 0 |1 1| = or 0 1 1 0 Z = |0 0| − |1 1| = 0 −1 to specify single-qubit
measurements in the standard basis.
Example 4.3.2 Hermitian operator formalism for measurement of a single qubit in the Hadamard
basis. We wish to construct a Hermitian operator corresponding to measurement of a single qubit in the Hadamard basis {|+, |−}. The subspaces under consideration are S+ , generated by |+, and S− ,
generated by |−, with associated projectors P+ = |+ +| = 12 (|0 0| + |0 1| + |1 0| + |1 1|) and P− = |− −| = 12 (|0 0| − |0 1| − |1 0| + |1 1|). We are free to choose λ+ and λ− any way we like as
long as they are distinct. If we take λ+ = 1 and λ− = −1, then 0 1 X = |0 1| + |1 0| = 1 0 is a Hermitian operator for single-qubit measurement in the Hadamard basis.
Example 4.3.3 The Hermitian operator A = |01 01| + 2|10 10| + 3|11 11| has matrix repre-
sentation ⎛ 0 0 ⎜ 0 1 ⎜ ⎝ 0 0 0 0
⎞ 0 0 ⎟ ⎟ 0 ⎠ 3
with respect to the standard basis in the standard order {|00, |01, |10, |11}. The eigenspace decomposition for A consists of four subspaces, each generated by one of the standard basis
4.3 Hermitian Operator Formalism for Measurement
vectors |00, |01, |10, |11. The operator A is one of many Hermitian operators that specify measurement with respect to the full standard basis decomposition described in example 4.2.4. The Hermitian
operator A = 73|00 00| + 50|01 01| − 3|10 10| + 23|11 11| is another.
Example 4.3.4 The Hermitian operator
1 ⎜ 0 B = |00 00| + |01 01| + π(|10 10| + |11 11|) = ⎜ ⎝ 0 0
⎞ 0 0 ⎟ ⎟ 0 ⎠ π
0 0 π 0
specifies measurement of a two-qubit system with respect to the subspace decomposition V = S0 ⊕ S1 , where S0 is generated by {|00, |01} and S1 is generated by {|10, |11}, so B specifies measurement
of the first qubit in the standard basis as described in example 3.3.3.
Example 4.3.5 The Hermitian operator
2 ⎜ 0 C = 2(|00 00| + |11 11|) + 3(|01 01| + |10 10|) = ⎜ ⎝ 0 0
⎞ 0 0 ⎟ ⎟ 0 ⎠ 2
specifies measurement with respect to the subspace decomposition V = S2 ⊕ S3 , where S2 is generated by {|00, |11} and S3 is generated by {|01, |10}, so C specifies the measurement for bit equality
described in example 4.2.5. Given the subspace decomposition for a Hermitian operator O, it is possible to find an orthonormal eigenbasis of V for O. If O has n distinct eigenvalues, as in the
general case, the eigenbasis is unique up to length one complex factors. If O has fewer than n eigenvalues, some of the eigenvalues are associated with an eigenspace of more than one dimension. In
this case, a random orthonormal basis can be chosen for each eigenspace Si . The matrix for the Hermitian operator O with respect to any of these eigenbases is diagonal. Any Hermitian operator O with
eigenvalues λj can be written as O = j λj Pj , where Pj are the projectors for the λj -eigenspaces of O. Every projector is Hermitian with eigenvalues 1 and 0 where the 1-eigenspace is the image of
the operator. For an m-dimensional subspace S of V spanned by the basis {|i1 , . . . , |im }, the associated projector PS =
m j =1
|ij ij |
4 Measurement of Multiple-Qubit States
maps vectors in V into S. If PS and PT are projectors for orthogonal subspaces S and T , the projector for the direct sum S ⊕ T is PS + PT . If P is a projector onto subspace S then tr(P ), the sum
of the diagonal elements of any matrix representing P , is the dimension of S. This argument applies to any basis since the trace is basis independent. Box 10.1 describes this and other properties of
the trace. Given linear operators A and B on vector spaces V and W respectively, the tensor product A ⊗ B acts on elements v ⊗ w of the tensor product space V ⊗ W as follows: (A ⊗ B)(v ⊗ w) = Av ⊗
Bw. It follows from this definition that (A ⊗ B)(C ⊗ D) = AC ⊗ BD. Let O0 and O1 be Hermitian operators on spaces V0 and V1 respectively. Then O0 ⊗ O1 is a Hermitian operator on the space V0 ⊗ V1 .
Furthermore, if Oi has eigenvalues λij with associated eigenspaces Sij , then O0 ⊗ O1 has eigenvalues λ j k = λ0j λ1k . If an eigenvalue λ j k = λ0j λ1k is unique, then its associated eigenspace Sj k
is the tensor product of S0j and S1k . In general, the eigenvalues λ j k need not be distinct. An eigenvalue λ of O0 ⊗ O1 that is the product of eigenvalues of O0 and O1 in multiple ways, λ i = λ j1
k1 = · · · = λ jm km , has eigenspace S = (S0j1 ⊗ S1k1 ) ⊕ · · · ⊕ (S0jm ⊗ S1km ). Most Hermitian operators O on V0 ⊗ V1 cannot be written as a tensor product of two Hermitian operators O1 and O2
acting on V0 and V1 respectively. Such a decomposition is possible only if each subspace in the subspace decomposition described by O can be written as S = S0 ⊗ S1 for S0 and S1 in the subspace
decompositions associated to O1 and O2 respectively. While for most Hermitian operators this condition does not hold, it does hold for all of the observables we have described so far. For example, 1
0 2 0 ⊗ = (|0 0| − |1 1|) ⊗ (2|0 0| + 3|1 1|) 0 −1 0 3 = 2|00 00| + 3|01 01| − 2|10 10| − 3|11 11| specifies the full measurement in the standard basis, but with a different Hermitian operator from
the one used in example 4.3.3. The operator 1 0 ⊗ I = |00 00| + |01 01| + π(|10 10| + |11 11|) 0 π specifies measurement of the first qubit in the standard basis as described in example 4.3.4, as
does Z ⊗ I , where Z = |0 0| − |1 1|. The Hermitian operator Z ⊗ Z = |00 00| − |01 01| − |10 10| + |11 11|
4.3 Hermitian Operator Formalism for Measurement
specifies the measurement for bit equality described in example 4.3.5. We now give an example of a two-qubit measurement that cannot be expressed as the tensor product of two single-qubit
measurements. Example 4.3.6 Not all measurements are tensor products of single-qubit measurements. Consider
a two-qubit state. The observable M with matrix representation ⎛ ⎞ 0 0 0 0 ⎜ 0 0 0 0 ⎟ ⎟ M=⎜ ⎝ 0 0 0 0 ⎠ 0 0 0 1 determines whether both bits are set to one. Measurement with the operator M results
in a state contained in one of the two subspaces S0 and S1 , where S1 is the subspace spanned by {|11} and S0 is spanned by {|00, |01, |10}. Measuring with M is quite different from measuring both
qubits in the√standard basis and then performing the classical and operation. For instance, the state |ψ = 1/ 2(|01 + |10) remains unchanged when measured with M, but measuring both qubits of |ψ
would result in either the state |01 or |10. Any Hermitian operator Q1 ⊗ Q2 on a two-qubit system is said to be composed of single-qubit measurements if Q1 and Q2 are Hermitian operators on the
single-qubit systems. Furthermore, any Hermitian operator of the form Q ⊗ I or I ⊗ Q on a two-qubit system is said to be a measurement on a single qubit of the system. More generally, a Hermitian
operator of the form I ⊗···⊗I ⊗Q⊗I ⊗···⊗I on an n-qubit system is said to be a single-qubit measurement of the system. Any Hermitian operator of the form A ⊗ I on a system V ⊗ W , where A is a
Hermitian operator acting on V is said to be a measurement of subsystem V . Section 5.1 shows that measurement operators in the standard basis, when combined with quantum state transformations, are
sufficient to perform arbitrary quantum measurements. In particular, there are quantum operations taking any basis to any other, so we can get all possible subspace decompositions of the state space
by starting with a subspace decomposition in which all of the subspaces are generated by standard basis vectors and transforming. Understanding the effects of quantum measurement in different bases
is crucial for a thorough understanding of entangled states and quantum information processing generally. Sections 2.4 and 3.4 illustrate the power of measuring in different bases as a key aspect of
these quantum key distribution schemes. The next section turns to Bell’s theorem, which further illustrates this point while at the same time giving deeper insight into nonclassical properties of
entangled states.
4 Measurement of Multiple-Qubit States
When talking about measurement of an n-qubit system, there are two totally distinct types of decompositions of the vector space V under consideration: the tensor product decomposition into the n
separate qubits and the direct sum decomposition into k ≤ 2n subspaces associated with the measuring device. These decompositions could not be more different. In particular, a tensor component Vi of
V = V1 ⊗ · · · ⊗ Vn is not a subspace of V . Similarly, the subspaces associated with measurements do not correspond to the subsystems, such as individual qubits, of the whole system. Section 2.3
mentioned that only one classical bit of information can be extracted from a single qubit. We can now both generalize this statement and make it more precise. Since any observable on an n-qubit
system has at most 2n distinct eigenvalues, there are at most 2n possible results of a given measurement. Thus, a single measurement of an n-qubit system will reveal at most n bits of classical
information. Since, in general, the measurement changes the state, any further measurements give information about the new state, not the original one. In particular, if the observable has 2n
distinct eigenvalues, measurement sends the state to an eigenvector, and further measurement cannot extract any additional information about the original state. 4.4 EPR Paradox and Bell’s Theorem
In 1935, Albert Einstein, Boris Podolsky, and Nathan Rosen wrote a paper entitled “Can quantummechanical description of physical reality be considered complete?”. The paper contained a thought
experiment that inspired the simpler thought experiment, due to David Bohm, that we describe here. The experiment involves a pair of photons in the state √12 (|00 + |11). Pairs of particles in such a
state are called EPR pairs in honor of Einstein, Podolsky, and Rosen, even though such states did not appear in their paper. Imagine a source that generates EPR pairs √12 (|00 + |11) and sends the
first particle to Alice and the second to Bob. Alice and Bob can be arbitrarily far apart. Each person can perform measurements only on the particle he or she receives. More precisely, Alice can use
only observables of the form O ⊗ I to measure the system, and Bob can use only observables of the form I ⊗ O , where O and O are single-qubit observables.
EPR source
As we saw when we analyzed the Ekert91 quantum key distribution protocol in section 3.4, if Alice measures her particle in the standard single-qubit basis, and observes the state |0, the
4.4 EPR Paradox and Bell’s Theorem
effect of this measurement is to project the state of the quantum system onto that part of the state compatible with the results of Alice’s measurement, so the combined state will now be |00. If Bob
now measures his particle, he will always observe |0. Thus it appears that Alice’s measurement has affected the state of Bob’s particle. Similarly, if Alice measures |1, so will Bob. By symmetry, if
Bob were to measure his qubit first, Alice would observe the same result as Bob. When measuring in the standard basis, Alice and Bob will always observe the same results, regardless of the relative
timing. The probability that either qubit is measured to be |0 is 1/2, but the two results are always correlated. If these particles are far enough apart and the measurements happen close in time
(more specifically, if the measurements are relativistically spacelike separated), it may sound as if an interaction between these particles is happening faster than the speed of light. We said
earlier that a measurement performed by Alice appears to affect the state of Bob’s particle, but this wording is misleading. Following special relativity, it is incorrect to think of one measurement
happening first and causing the results of the other; it is possible to set up the EPR scenario so that one observer sees Alice measure first, then Bob, while another observer sees Bob measure first,
then Alice. According to relativity, physics must explain equally well the observations of both observers. While the causal terminology we used cannot be compatible with both observers’ observations,
the actual experimental values are invariant under change of observer; the experimental results can be explained equally well by Bob measuring first and then Alice as the other way around. This
symmetry shows while there is correlation between the two particles, Alice and Bob cannot use their EPR pair to communicate faster than the speed of light. All that can be said is that Alice and Bob
will observe correlated random behavior. Even though the results themselves are perfectly compatible with relativity theory, the behavior remains mysterious. If Alice and Bob had a large number of
EPR pairs that they measure in sequence, they would see an odd mixture of correlated and probabilistic results: each of their sequences of measurements appear completely random, but if Alice and Bob
compare their results, they see that they witnessed the same random sequence from their two separate particles. Their sequence of entangled pairs behaves like a pair of magic coins that always land
the same way up when tossed together, but whether they both land heads or both land tails is completely random. So far, quantum mechanics is not the only theory that can explain these results; they
could also be explained by a classical theory that postulates that particles have an internal hidden state that determines the result of the measurement, and that this hidden state is identical in
two particles generated at the same time by the EPR source, but varies randomly over time as the pairs are generated. According to such a classical theory, the reason we see random instead of
deterministic results is simply because we, as of yet, have no way of accessing these hidden states. The hope of proponents of such theories was that eventually physics would advance to a stage in
which this hidden state would be known to us. Such theories are known as local hidden-variable theories. The local part comes from the assumption that the hidden variables are internal to each of the
particles and do not depend on external influences; in particular, the hidden variables do not depend on the state of faraway particles or measuring devices.
4 Measurement of Multiple-Qubit States
Is it possible to construct a local hidden-variable theory that agrees with all of the experimental results we use quantum mechanics to model? The answer is “no," but it was not until Bell’s work of
1964 that anyone realized that it was possible to construct experiments that could distinguish quantum mechanics from all local hidden-variable theories. Since then such experiments have been done,
and all of the results have agreed with those predicted by quantum mechanics. Thus, no local hidden-variable theory whatsoever can explain how nature truly works. Bell showed that any local hidden
variable theory predicts results that satisfy an inequality, known as Bell’s inequality. Section 4.4.1 presents the setup. Section 4.4.2 describes the results predicted by quantum theory. Section
4.4.3 establish Bell’s inequality for any local hidden variable theory in a special case. Section 4.4.4 gives Bell’s inequality in full generality. 4.4.1 Setup for Bell’s Theorem
Imagine an EPR source that emits pairs of photons whose polarizations are in an entangled state |ψ = √12 (|↑↑ + |→→), where we are using the notation |↑ and |→ for photon polarization of section
2.1.2. We suppose that the two photons travel in opposite directions, each toward a polaroid (polarization filter). These polaroids can be set at three different angles. In the special case we
consider first, the polaroids can be set to vertical, +60◦ off vertical, and −60◦ off vertical.
EPR source
4.4.2 What Quantum Mechanics Predicts
Let Oθ be a single-qubit observable with 1-eigenspace generated by |v = cos θ |0 + sin θ |1 and −1-eigenspace generated by |v ⊥ = − sin θ |0 + cos θ |1. Quantum mechanics predicts that measurement of
|ψ with Oθ1 ⊗ Oθ2 results in a state with eigenvalue 1 with probability cos2 (θ1 − θ2 ). In other words, the probability that the state ends up in the subspace generated by {|v1 |v2 , |v1⊥ |v2⊥ },
and not the −1-eigenspace generated by {|v1 |v2⊥ , |v1⊥ |v2 }, is cos2 (θ1 − θ2 ). Proving this fact is the subject of exercise 4.20. Here we describe its surprising nonclassical implications. The
three different settings for each polaroid, −60◦ , vertical, and +60◦ , correspond to three observables, M , M↑ , and M , each with two possible outcomes: either the photon passes through the
polaroid, an outcome we will denote with P, or it is absorbed, an outcome we will denote with A. Using the fact that measurement with observable Oθ1 ⊗ Oθ2 results in a state with eigenvalue 1 with
probability cos2 (θ1 − θ2 ), we can compute the probability that measurement of two photons, by polaroids set at angles θ1 and θ2 , give the same result, PP or AA. If both
4.4 EPR Paradox and Bell’s Theorem
polaroids are set at the same angle, then both photon measurements give the same results with probability cos2 0 = 1: both photons will pass through the polaroids, or both will be absorbed. When the
polaroid on the right is set to vertical, and the one on the left is set to +60◦ , both measurements agree with probability cos2 60 = 1/4. Unless the two polaroids are set at the same angle, the
difference between the angles is either 60 or 120 degrees, so in all of these cases the two measurements agree 1/4 of the time and disagree 3/4 of the time. If the polaroids are set randomly for a
series of EPR pairs emanating from the source, then •
with probability 1/3 the polaroid orientation will be the same and the measurements will agree, and
with probability 2/3 the polaroid orientation will differ and the measurements will agree with probability 1/4.
Thus, overall, the measurements will agree half the time and disagree half the time. When such an experiment is performed, these are indeed the probabilities that are seen. 4.4.3 Special Case of
Bell’s Theorem: What Any Local Hidden Variable Theory Predicts
This section shows that no local hidden-variable theory can give these probabilities. Suppose there is some hidden state associated with each photon that determines the result of measuring the photon
with a polaroid in each of the three possible settings. We do not know the nature of such a state, but there are only 23 binary combinations in which these states can respond to measurement by
polaroids in the 3 orientations. We label these 8 possibilities h0 , . . . , h7 .
h0 h1 h2 h3 h4 h5 h6 h7
P P P P A A A A
P P A A P P A A
P A P A P A P A
We can think of hi as the equivalence class of all hidden states, however these might look, that give the indicated measurement results. Experimentally, it has been established that both polaroids,
when set at the same angle, always give the same result when measuring the photons of an EPR pair |ψ. For a local hidden-variable theory to have any chance of modeling experimental results, it must
predict that both photons of the entangled pair be in the same equivalence class of hidden states hi . For example, if the photon on the right responds to the three polaroid positions , ↑, with PAP,
then so must the photon on the left. Now consider the 9 possible combinations of orientations of the two polaroids
4 Measurement of Multiple-Qubit States
{(, ), (, ↑), . . . , (, )} and the expected agreement of the measurements for photon pairs in each hidden state hi . Measurements on hidden states h0 and h7 ( {PPP, PPP} and {AAA, AAA}) agree for
all possible pairs of orientations, giving 100 percent agreement. Measurements of the hidden state h1 , {PPA, PPA}, agree in five of the nine possible orientations and disagree in the others. The
other six cases are similar to h1 , giving 5/9 agreement and 4/9 disagreement. No matter with what probability distribution the EPR source emits photons with hidden states, the expected agreement
between the two measurements will be at least 5/9. Thus, no local hidden-variable theory can give the 50–50 agreement predicted by quantum theory and seen in experiments. 4.4.4 Bell’s Inequality
Bell’s inequality is an elegant generalization of the preceding argument. The more general setup also has a sequence of EPR pairs emanating from a photon source toward two polaroids, with three
possible settings. We now consider polaroids that can be set at any triple of three distinct angles a, b, and c. If we record the results of repeated measurements at random settings of the polaroids,
chosen from the settings above, we can count the number of times that the measurements match for any pair of settings. Let Pxy denote the sum of the observed probability that either •
the two photons interact in the same way with both polaroids (either both pass through, or both are absorbed) when the first polaroid is set at angle x and the second at angle y, or
the two photons interact in the same way with both polaroids when the first polaroid is set at angle y and the second at angle x. Since whenever the two polaroids are on the same setting, the
measurement of the photons will always give the same result Pxx = 1 for any setting x. We now show that the inequality,
Pab + Pac + Pbc ≥ 1, known as Bell’s inequality, holds for any local hidden-variable theory and any sequence of settings for each of the polaroids. We establish this inequality by showing that the
inequality holds for the probabilities associated with any one equivalence class of hidden states, from which we deduce that it holds for any distribution of these equivalence classes. According to
any local hidden-variable theory, the result of measuring a photon by a polaroid in each of the three possible settings is determined by a local hidden state h of the photon. Again, we think of h as
an equivalence class of all hidden states that give the indicated measurement results. The fact that both polaroids, when set at the same angle, always give the same result when measuring the photons
in an EPR state |ψ means that both photons of the entangled pair must be in the same equivalence class of hidden states h. For example, if the photon on the right responds to the three polaroid
positions a, b, c with h be 1 if the result of the two measurements agree PAP, then so must the photon on the left. Let Pxy on states with hidden variable h, and 0 otherwise. Since any measurement
has only two possible
4.5 References
results, P and A, simple logic tells us that the result of measuring a photon, with a given hidden state h, in each of the three polaroid settings, a, b, and c, will be the same for at least one of
the settings. Thus, since the two photons of state |ψ are in the same hidden state, for any h, h h h Pab + Pac + Pbc ≥ 1.
Let wh be the probability with which the source emits photons of kind h. Then the sum of the observed probabilities Pab + Pac + Pbc is a weighted sum, with weights wh , of the results for photons of
each hidden kind h: h h h wh (Pab + Pac + Pbc ). Pab + Pac + Pbc = h h h h + Pac + Pbc ≥1 The weighted average of numbers all greater than 1 is greater than 1, so since Pab for any h, we may conclude
Pab + Pac + Pbc ≥ 1. This inequality holds for any local hidden-variable theory and gives us a testable requirement. By exercise 4.20, quantum theory predicts that the probability that the two
results will be the same is the square of the cosine of the angle between the two polaroid settings. If we take the angle between settings a and b to be θ and the angle between settings b and c to be
φ, then the inequality becomes cos2 θ + cos2 φ + cos2 (θ + φ) ≥ 1. For the special case of section 4.4.3, quantum theory tells us that for θ = φ = 60◦ each term is 1/4. Since 3/4 < 1, these
probabilities violate Bell’s inequality, and therefore we can conclude that no local, deterministic theory can give the same predictions as quantum mechanics. Furthermore, experiments similar to but
somewhat more sophisticated than the setup described here have been done, and their results confirm the prediction of quantum theory and nature’s violation of Bell-like inequalities. Bell’s theorem
shows that it is not possible to model entangled states and their measurement with a local hidden-variable theory. Strictly speaking, entangled states should not be talked about in terms of local
hidden states or cause and effect. But since there are some situations in which entanglement can be safely talked about in one or the other of these ways, and since both are more familiar than the
sort of quantum correlation that actually exists, terminology suggesting either of these modes of thinking persists in the literature. 4.5 References
The original Einstein, Podolsky, Rosen paper [109] is worth reading for an account of their thinking. The first formulation of the paradox as we presented it here is due to Bohm [54]. Our account of
Bell’s inequalities is loosely based on Penrose’s excellent account [225] of a special case of Bell’s theorem for spin-1/2 particles. Greenstein and Zajonc [140] give a detailed
4 Measurement of Multiple-Qubit States
description, accessible to nonphysicists, of Bell’s theorem and the EPR paradox, experimental techniques for generating entangled photon pairs, and Aspect’s experiments testing for quantum violation
of Bell’s inequalities. Detailed results of the experiments by Aspect et al. are published in [25, 26, 24]. Stronger statements than the ones we presented can be made about the sorts of theories that
Bell’s inequality rules out. The issues here can be relatively subtle. Mermin’s article [208] gives a readable account of some of these issues. Peres’s book [226] delves into these issues in detail.
For a discussion of the various interpretations of quantum mechanics and their perceived strengths and weaknesses, see Sudbery’s book [267] and Bub’s book [71]. 4.6 Exercises Exercise 4.1. Give the
matrix, in the standard basis, for the following operators a. |0 0|. b. |+ 0| − i|− 1|. c. |00 00| + |01 01|. d. |00 00| + |01 01| + |11 01| + |10 11|. e. | + + | where | + =
√1 (|00 + |11). 2
Exercise 4.2. Write the following operators in bra/ket notation
a. The Hadamard operator H =
b. X =
c. Y =
d. Z =
23 ⎜ 0 e. ⎜ ⎝ 0 0 f. X ⊗ X.
0 −5 0 0
√1 2 − √12
0 1 −1 0 1 0
√1 2 √1 2
0 −1
. .
⎞ 0 0 0 0 ⎟ ⎟. 0 0 ⎠ 0 9
g. X ⊗ Z. h. H ⊗ H . i. The projection operators P1 : V → S1 and P2 : V → S2 ,
{|+|+, |−|−} and S2 is spanned by {|+|−, |−|+}.
where S1 is spanned by
4.6 Exercises
Exercise 4.3. Show that any projection operator is its own adjoint. Exercise 4.4. Rewrite example 3.3.2 on page 42 in terms of projection operators. Exercise 4.5. Rewrite example 3.3.3 on page 42 in
terms of projection operators. Exercise 4.6. Rewrite example 3.3.4 on page 43 in terms of projection operators. Exercise 4.7. Using the projection operator formalism a. compute the probability of
each of the possible outcomes of measuring the first qubit of an
arbitrary two-qubit state in the Hadamard basis {|+, |−}. b. compute the probability of each outcome for such a measurement on the state | + = √1 (|00 + |11. 2
c. for each possible outcome in (b), describe the possible outcomes if we now measure the second
qubit in the standard basis. d. for each possible outcome in (b), describe the possible outcomes if we now measure the second
qubit in the Hadamard basis. Exercise 4.8. Show that (A|x)† = x|A† . Exercise 4.9. Design a measurement on a three-qubit system that distinguishes between states in
which all bit values are equal and those in which they are not, and gives no other information. Write all operators in bra/ket notation. Exercise 4.10. Design a measurement on a three-qubit system
that distinguishes between states
in which the number of 1 bits is even, and those in which the number of 1 bits is odd, and gives no other information. Write all operators in bra/ket notation. Exercise 4.11. Design a measurement on
a three-qubit system that distinguishes between states with different numbers of 1 bits and gives no other information. Write all operators in bra/ket notation. Exercise 4.12. Suppose O is a
measurement operator corresponding to a subspace decomposition
V = S1 ⊕ S2 ⊕ S3 ⊕ S4 with projection operators P1 , P2 , P3 , and P4 . Design a measurement operator for the subspace decomposition V = S5 ⊕ S6 , where S5 = S1 ⊕ S2 and S6 = S3 ⊕ S4 . Exercise 4.13.
a. Let O be any observable specifying a measurement of an n-qubit system. Suppose that after
measuring |ψ according to O, we obtain |φ. Show that if we now measure |φ according to O, we simply obtain |φ again, with certainty. b. Reconcile the result of (a) with the fact that for most
observables O it is not true that O 2 = O.
4 Measurement of Multiple-Qubit States
Exercise 4.14. a. Give the outcomes and their probabilities for measurement of each of the standard basis elements with respect to the Bell decomposition of example 4.2.6. b. Give the outcomes and
their probabilities for measurement of a general two-qubit state |ψ = a00 |00 + a01 |01 + a10 |10 + a11 |11 with respect to the Bell decomposition. Exercise 4.15. a. Show that the operator B of
example 4.3.4 is of the form Q ⊗ I , where Q is a (2 × 2)-Hermitian
operator. b. Show that any operator of the form Q ⊗ I , where Q is a (2 × 2)-Hermitian operator and I is the (2 × 2)-identity operator, specifies a measurement of a two-qubit system. Describe the
subspace decomposition associated with such an operator. c. Describe the subspace decomposition associated with an operator of the form I ⊗ Q where Q is a (2 × 2)-Hermitian operator and I is the (2 ×
2)-identity operator, and give a high-level description of such measurements. Exercise 4.16. This exercise shows that for any Hermitian operator O : V → V , the direct sum
of all eigenspaces of O is V . A unitary operator U satisfies U † U = I . a. Show that the columns of a unitary matrix U form an orthonormal set. b. Show that if O is Hermitian, then so is U OU −1
for any unitary operator U . c. Show that any operator has at least one eigenvalue λ and λ-eigenvector vλ . d. Use the result of (c) to show that for any matrix A : V → V , there is a unitary
operator U such that the matrix for U AU −1 is upper triangular (meaning all entries below the diagonal are zero). e. Show that for any Hermitian operator O : V → V with eigenvalues λ1 , . . . , λk ,
the direct sum of the λi -eigenspaces Sλi gives the whole space:
V = Sλ1 ⊕ Sλ2 ⊕ · · · ⊕ Sλk . Exercise 4.17. a. Show that any state resulting from measuring an unentangled state with a single-qubit measurement is still unentangled. b. Can other types of
measurement produce an entangled state from an unentangled one? If so, give an example. If not, give a proof. c. Can an unentangled state be obtained by measuring a single qubit of an entangled
state? Exercise 4.18. Show that if there is no measurement of one of the qubits that gives a single result
with certainty, then the two qubits are entangled.
4.6 Exercises
Exercise 4.19. Give an explicit description of the observable Oθ of section 4.4.2 in both bra/ket
and matrix notation. Exercise 4.20. Let Oθ1 be the single-qubit observable with +1-eigenvector
|v1 = cos θ1 |0 + sin θ1 |1 and −1-eigenvector |v1⊥ = − sin1 θ |0 + cos θ1 |1. Similarly, let Oθ2 be the single-qubit observable with +1-eigenvector |v2 = cos θ2 |0 + sin θ2 |1 and −1-eigenvector |
v2⊥ = − sin θ2 |0 + cos θ2 |1. Let O be the two-qubit observable Oθ1 ⊗ Oθ2 . We consider various measurements on the EPR state |ψ = √12 (|00 + |11). We are interested in the probability that the
measurements Oθ1 ⊗ I and I ⊗ Oθ2 , if they were performed on the state |ψ, would agree on the two qubits in that either both qubits are measured in the 1-eigenspace or both are measured in
−1-eigenspace of their respective single-qubit observables. As in example 4.2.5, we are not interested in the specific outcome of the two measurements, just whether or not they would agree. The
observable O = Oθ1 ⊗ Oθ2 gives exactly this information. a. Find the probability that the measurements Oθ1 ⊗ I and I ⊗ Oθ2 , when performed on |ψ,
would agree in the sense of both resulting in a +1 eigenvector or both resulting in a −1 eigenvector. (Hint: Use the trigonometric identities cos(θ1 − θ2 ) = cos(θ1 ) cos(θ2 ) + sin(θ1 ) sin(θ2 ) and
sin(θ1 − θ2 ) = sin(θ1 ) cos(θ2 ) − cos(θ1 ) sin(θ2 ) to obtain a simple form for your answer.) b. For what values of θ1 and θ2 do the results always agree? c. For what values of θ1 and θ2 do the
results never agree? d. For what values of θ1 and θ2 do the results agree half the time? e. Show that whenever θ1 = θ2 and θ1 and θ2 are chosen from {−60◦ , 0◦ , 60◦ }, then the results
agree 1/4 of the time and disagree 3/4 of the time. Exercise 4.21. a. Most of the time the effect of performing two measurements, one right after the other, cannot
be achieved by a single measurement. Find a sequence of two measurements whose effect cannot be achieved by a single measurement, and explain why this property is generally true for most pairs of
measurements. b. Describe a sequence of two distinct nontrivial measurements that can be achieved by a single
4 Measurement of Multiple-Qubit States
c. For each of the measurements specified by the operators A, B, C, and M from examples 4.3.3, 4.3.4, 4.3.5, and 4.3.6, say whether the measurement can be achieved as a sequence of single-qubit
measurements. d. How does performing the sequence of measurements Z ⊗ I followed by I ⊗ Z compare with
performing the single measurement Z ⊗ Z?
Exercise 4.22. Show that no matter in which basis the first qubit of an EPR pair 12 (|00 + |11)
is measured, the two possible outcomes have equal probability.
Quantum State Transformations
The last two chapters discussed encoding information in quantum states and some of the uniquely quantum properties of such quantum information, such as entangled states, the exponential state space,
and quantum measurement. This chapter develops the basic mechanisms for computing on quantum information. Computation on quantum information takes place through dynamic transformation of quantum
systems. In order to understand quantum computation, we must understand which sorts of transformations nature allows and which it does not. This chapter focuses on transformations of a closed quantum
system, transformations that map the state space of the quantum system to itself. Measurement is not a transformation in this sense. Chapter 10 discusses more general transformations, transformations
of a subsystem that is part of a larger quantum system. This chapter begins with a brief discussion of transformations on general quantum systems, and it then focuses on multiple-qubit systems.
Section 5.1 discusses the unitarity requirement on quantum state transformations and the no-cloning principle. The no-cloning restriction is central to both the limitations and the advantages of
encoding information in quantum states; for example, it underlies the security of quantum cryptographic protocols such as the ones described in sections 2.4 and 3.4, and it is also vital to the
argument of section 4.3.1 that no more than n classical bits worth of information can be extracted from an n-qubit system. After discussing considerations for transformations of general quantum
systems, the chapter restricts discussion to n-qubit systems and develops building blocks for the standard circuit model of quantum computation. Part II uses this model to describe quantum
algorithms. All quantum transformations on n-qubit quantum systems can be expressed as a sequence of transformations on single-qubit and two-qubit subsystems. Some quantum state transformations can
be implemented in terms of these basic gates more easily than others. The efficiency of a quantum transform is quantified in terms of the number of one- and two-qubit gates used. Section 5.2 looks at
singlequbit and two-qubit transformations, ways of combining them, and a graphical notation for describing sequences of transformations. Section 5.3 describes applications of these simple gates to
two communication problems: dense coding and quantum state teleportation. Section 5.4 is devoted to showing that any quantum transformation can be realized as a sequence of one- and two-qubit
transformations. Section 5.5 discusses finite sets of gates that can be used to approximate all quantum transformations universally. The chapter concludes with a definition of the standard circuit
model for quantum computation.
5 Quantum State Transformations
5.1 Unitary Transformations
In this book, quantum transformation will mean a mapping from the state space of a quantum system to itself. Measurements are not quantum transformations in this sense; there are only finitely many
outcomes, and the result of applying a measurement to a specific state is only probabilistic. Chapter 10 considers open quantum systems, systems that are subsystems of a larger quantum system, and
studies the transformations of subsystems induced by transformations of the larger system. In this chapter, we concern ourselves only with transformations of closed quantum systems. Nature does not
allow arbitrary transformations of a quantum system. Nature forces these transformations to respect properties connected to quantum measurement and quantum superposition. The transformations must be
linear transformations of the vector space associated with the state space so that a state that is a superposition of other states goes to the superposition of their images; more precisely, linearity
means that for any quantum transformation U , U (a1 |ψ1 + · · · + ak |ψk ) = a1 U |ψ1 + · · · + ak U |ψk on any superposition |ψ = a1 |ψ1 + · · · + ak |ψk . Unit length vectors must go to unit length
vectors, which implies that orthogonal subspaces go to orthogonal subspaces. These properties ensure that measuring and then applying a transform to the outcome gives the same result as first
applying the transform and then measuring in the transformed basis. Specifically, the probability of obtaining outcome U |φ by first applying U to |ψ and then measuring with respect to the
decomposition ⊕U Si is the same as the probability of obtaining U |φ by measuring |ψ with respect to the decomposition ⊕Si and then applying U . These properties hold if U preserves the inner
product; for any |ψ and |φ, the inner product of their images, U |ψ and U |φ, must be the same as the inner product between |ψ and |φ:
φ|U † U |ψ = φ|ψ. A straightforward mathematical argument shows that this condition holds for all |ψ and |φ only if U † U = I . In other words, for any quantum transformation U , its adjoint U † must
be equal to its inverse, precisely the condition, U † = U −1 , for a linear transformation to be unitary. Furthermore, this condition is sufficient; the set of allowed transformations of a quantum
system corresponds exactly to the set of unitary operators on the complex vector space associated with the state space of the quantum system. Since unitary operators preserve the inner product, they
map orthonormal bases to orthonormal bases. In fact, the converse is true: any linear transformation that maps an orthonormal basis to an orthonormal basis is unitary. Geometrically, all quantum
state transformations are rotations of the complex vector space associated with the quantum state space. The ith column of the matrix is the image U |i of the ith basis vector, so for a unitary
transformation given in matrix form, U is unitary if and only if the set of columns of its matrix representation are orthonormal. Since U † is unitary if and only
5.1 Unitary Transformations
if U is, it follows that U is unitary if and only if its rows are orthonormal. The product U1 U2 of two unitary transformations is again unitary. The tensor product U1 ⊗ U2 is a unitary
transformation of the space X1 ⊗ X2 if U1 and U2 are unitary transformations of X1 and X2 respectively. Linear combinations of unitary operators, however, are not in general unitary. The unitarity
condition simply ensures that the operator does not violate any general principles of quantum theory. It does not imply that a transformation can be implemented efficiently; most unitary operators
cannot be efficiently implemented, even approximately. In later chapters, particularly when we examine quantum algorithms, we will concern ourselves with questions about the efficiency of certain
quantum transformations. An obvious consequence of the unitary condition is that every quantum state transformation is reversible. Chapter 6 describes work of Charles Bennett, Edward Fredkin, and
Tommaso Toffoli, done prior to the development of quantum information processing, that shows that all classical computations can be made reversible with only a negligible loss of efficiency. Thus,
the reversibility requirement does not impose an unworkably strict restriction on quantum algorithms. In the standard circuit model of quantum computation, all computation is carried out by quantum
transformations, with measurement used only at the end to read out the results. Since measurement can effect changes in quantum states, the dynamics of measurement, rather than quantum state
transformations, provide an alternative means to achieve computation. Section 13.4 describes an alternate, but equally powerful, model of quantum computation in which all computation takes place by
measurement. The phrases quantum transformation or quantum operator refer to unitary operators acting on the state space, not measurement operators. While measurements are modeled by operators, the
behavior of measurement is not modeled by the direct action of the measurement’s Hermitian operator on the state space, but rather by the indirect, probabilistic procedure described by the
measurement postulate of section 4.3.1. One of the least satisfactory aspects of quantum theory is that there are two distinct classes of manipulations of quantum states: quantum transformations and
measurement. Section 10.3 describes a tighter, but still unsatisfactory, relation between the two. 5.1.1 Impossible Transformations: The No-Cloning Principle
This section describes a simple, but important, consequence of the unitary condition: unknown quantum states cannot be copied or cloned. In fact, the linearity of unitary transformations alone
implies the result. Suppose U is a unitary transformation that clones, in that U (|a|0) = |a|a for all quantum states |a. Let |a and |b be two orthogonal quantum states. That U clones means U (|a|0)
= |a|a and U (|b|0) = |b|b. Consider |c = √12 (|a + |b). By linearity, 1 U (|c|0) = √ (U (|a|0) + U (|b|0)) 2 1 = √ (|a|a + |b|b). 2
5 Quantum State Transformations
But if U is a cloning transformation then U (|c|0) = |c|c = 1/2(|a|a + |a|b + |b|a + |b|b), √ which is not equal to (1/ 2)(|a|a + |b|b). Thus, there is no unitary operation that can reliably clone
all quantum states. The no-cloning theorem tells us that it is impossible to clone a specific unknown quantum state reliably. It does not preclude the construction of a known quantum state from a
known quantum state. It is possible to perform an operation that appears to be copying the state in one basis but does not do so in others. For example, it is possible obtain n particles in an
entangled state a|00 . . . 0 + b|11 . . . 1 from an unknown state a|0 + b|1. But it is not possible to create the n particle state (a|0 + b|1) ⊗ · · · ⊗ (a|0 + b|1) from an unknown state a|0 + b|1.
5.2 Some Simple Quantum Gates
Just as for classical computation, it is a boon to quantum computation, both for implementation and analysis, that arbitrarily complex computations can be achieved by composing simple elements.
Section 5.4 shows that any quantum state transformation on an n-qubit system can be realized using a sequence of one- and two-qubit quantum state transformations. We will call any quantum state
transformation that acts on only a small number of qubits a quantum gate. Sequences of quantum gates are called quantum gate arrays or quantum circuits. In the quantum-information-processing
literature, gates are mathematical abstractions useful for describing quantum algorithms; quantum gates do not necessarily correspond to physical objects, as they do in the classical case. So the
gate terminology and its accompanying graphical notation must not be taken too literally. For solid state or optical implementations, there may be actual physical gates, but in NMR and ion trap
implementations, the qubits are stationary particles, and the gates are operations on these particles using magnetic fields or laser pulses. For these implementations, gates operate on a physical
register of qubits. From a practical point of view, the standard description of computation in terms of one- and two-qubit gates leaves something to be desired. Ideally, we would write all our
computations in terms of gates that are easy to implement physically and are robust, but we do not yet know which ones these are. Furthermore, in order to realize physically a quantum computer
capable of performing arbitrary quantum transformations, it would be convenient to have only finitely many gates that could generate all unitary transformations. Unfortunately, such a set is
impossible; there are uncountably many quantum transformations, and a finite set of generators can only generate countably many elements. Section 5.5 shows that it is possible, however, for finite
sets of gates to generate arbitrarily close approximations to all unitary transformations. A number of such sets are known, but it is unclear which of these will be most practical from a physical
implementation point of view. For analyzing quantum algorithms, it is useful to have a standard set of gates with which to analyze the efficiency of quantum algorithms. The set we use includes all
one-qubit gates together with the two-qubit gate described in section 5.2.4.
5.2 Some Simple Quantum Gates
U1 Figure 5.1 A sample graphical representation for a three-qubit quantum gate array. Data flow left to right through the circuit.
Graphical notation, representing series of quantum state transformations acting on various combinations of qubits, is commonly used to describe sequences of transformations and to analyze the
resulting algorithms. Simple transformations are graphically represented by appropriately labeled boxes which are connected to form more complex circuits. A sample graphical representation is shown
in figure 5.1. Each horizontal line corresponds to a qubit. The transformations on the left are performed first, and the processing proceeds from left to right. The boxes labeled with U0 , U1 , and
U3 correspond to single-qubit transformations, while the one labeled U2 corresponds to a two-qubit transformation. When we talk about applying an operator U to qubit i of an n-qubit quantum system,
we mean that we apply the operator I ⊗ · · · ⊗ I ⊗ U ⊗ I ⊗ · · · ⊗ I to the entire system, where I is the single-qubit identity operator, applied to each of the other qubits of the system. The
remainder of this section describes a variety of frequently used quantum gates. 5.2.1 The Pauli Transformations
The Pauli transformations are the most commonly used single-qubit transformations: 1 0 I : |0 0| + |1 1| 0 1 0 1 X : |1 0| + |0 1| 1 0 0 1 Y : − |1 0| + |0 1| −1 0 1 0 Z : |0 0| − |1 1| , 0 −1 where
I is the identity transformation, X is negation (the classical not operation on |0 and |1 viewed as classical bits), Z changes the relative phase of a superposition in the standard basis, and Y = ZX
is a combination of negation and phase change. In graphical notation, these gates are represented by boxes
5 Quantum State Transformations
labeled appropriately. There is variation in the literature as to which transformations are the Pauli transformations, and in the notation used. The main discrepancy is whether −i(|0 1| − |1 01|) is
considered the Pauli transformation instead of Y = |0 1| − |1 0|, as we do here. The operator iY is Hermitian, which is a useful property in some settings, for example, if we wanted to use it to
describe measurement. Also, sometimes the notation σx , σy , and σz is used instead. Throughout this book, we use I , X, Y , and Z for the Pauli operators representing single-qubit transformations.
In chapter 10, we use the notation σx = X, σy = −iY , and σz = Z when the Pauli operators are used to describe quantum states. 5.2.2 The Hadamard Transformation
Another important single-qubit transformation is the Hadamard transformation 1 H = √ (|0 0| + |1 0| + |0 1| − |1 1|), 2 or H : |0 → |+ = |1 → |− =
√1 (|0 + |1) 2 √1 (|0 − |1), 2
which produces an even superposition of |0 and |1 from either of the standard basis elements. Note H H = I . In the standard basis, the matrix for the Hadamard transformation is 1 1 1 H =√ . 2 1 −1
5.2.3 Multiple-Qubit Transformations from Single-Qubit Transformations
Multiple-qubit transformations can be constructed as tensor products of single-qubit transformations. These transformations are uninteresting as multiple-qubit transformations in the sense that they
are equivalent to performing the single-qubit transformations on each of the qubits separately in some order. For example, U ⊗ V can be obtained by first applying U ⊗ I and then I ⊗ V . More
interesting are those multiple-qubit transformations that can change the entanglement between qubits of the system. Entanglement is not a local property in the sense that transformations that act
separately on two or more subsystems cannot affect the entanglement between those subsystems. More precisely, let |ψ be a two-qubit state and U and V be single-qubit unitary transformations. Then (U
⊗ V )|ψ is entangled if and only if |ψ is. The widely used class of two-qubit controlled gates discussed in the next section illustrates the effects transformations can have on entanglement.
5.2 Some Simple Quantum Gates
5.2.4 The Controlled-NOT and Other Singly Controlled Gates
The controlled-not gate, Cnot , acts on the standard basis for a two-qubit system, with |0 and |1 viewed as classical bits, as follows: it flips the second bit if the first bit is 1 and leaves it
unchanged otherwise. The Cnot transformation has representation Cnot = |0 0| ⊗ I + |1 1| ⊗ X = |0 0| ⊗ (|0 0| + |1 1|) + |1 1| ⊗ (|1 0| + |0 1|) = |00 00| + |01 01| + |11 10| + |10 11|, from which it
is easy to read off its effect on the standard basis elements: Cnot : |00 |01 |10 |11
→ → → →
|00 |01 |11 |10.
The matrix representation (in the standard basis) for Cnot is ⎛ ⎞ 1 0 0 0 ⎜ 0 1 0 0 ⎟ ⎜ ⎟ ⎝ 0 0 0 1 ⎠. 0 0 1 0 Observe that Cnot is unitary and is its own inverse. Furthermore, the Cnot gate cannot
be decomposed into a tensor product of two single-qubit transformations. The importance of the Cnot gate for quantum computation stems from its ability to change the entanglement between two qubits.
For example, it takes the unentangled two-qubit state √1 (|0 + |1)|0 to the entangled state √1 (|00 + |11): 2 2 1 1 Cnot √ (|0 + |1) ⊗ |0 = Cnot √ (|00 + |10) 2 2 1 = √ (|00 + |11). 2 Similarly,
since it is its own inverse, it can take an entangled state to an unentangled one. The controlled-not gate is so common that it has its own graphical notation.
The open circle indicates the control bit, the × indicates negation of the target bit, and the line between them indicates that the negation is conditional, depending on the value of the control bit.
Some authors use a solid circle to indicate negative control, in which the target bit is toggled when the control bit is 0 instead of 1.
5 Quantum State Transformations
A useful class of two-qubit controlled gates, which generalizes the Cnot gate, consists of gates that perform a single-qubit transformation Q on the second qubit when the first qubit is |1 and do
nothing when it is |0. These controlled gates have graphical representation
Q We use the following shorthand for these transformations: Q = |0 0| ⊗ I + |1 1| ⊗ Q. The transformation Cnot , for example, becomes X in this notation. In the standard compu tational basis, the
two-qubit operator Q is represented by the 4 × 4 matrix I 0 . 0 Q Let us look in more depth at one of these controlled gates, the controlled phase shift eiθ , where eiθ is shorthand for eiθ I . In
the standard basis, the controlled phase shift changes the phase of the second bit if and only if the control bit is one: eiθ = |00 00| + |01 01| + eiθ |10 10| + eiθ |11 11|. Its effect on the
standard basis elements is as follows: iθ e : |00 → |00 |01 → |01 |10 → eiθ |10 |11 → eiθ |11 and it has matrix representation ⎞ ⎛ 1 0 0 0 ⎜ 0 1 0 0 ⎟ ⎟ ⎜ ⎝ 0 0 eiθ 0 ⎠ . 0 0 0 eiθ The controlled
phase shift makes use of a single-qubit transformation that was a physically meaningless global phase shift when applied to a single-qubit system, but when used as part of a conditional
transformation, this phase shift becomes nontrivial, changing the relative phase between elements of a superposition. For example, it takes 1 1 √ (|00 + |11) → √ (|00 + eiθ |11). 2 2 Graphical icons
can be combined into quantum circuits. The following circuit, for instance, swaps the value of the two bits.
5.2 Some Simple Quantum Gates
In other words, this swap circuit takes |00 |01 |10 |11
→ |00 → |10 → |01 → |11,
and |ψ|φ → |φ|ψ for all single-qubit states |ψ and |φ. Three cautions are in order. The first concerns the use of a basis to specify the transformation. The second concerns the basis dependence of
the notion of control. The third suggests care in interpreting the graphical notation for quantum circuits. Caution 1: Phases in Specifications of Transformations Section 3.1.3 discussed the
important distinction between the quantum state space (projective space) and the associated complex vector space. We need to keep this distinction in mind when interpreting the standard ways quantum
state transformations are specified. A unitary transformation on the complex vector space is completely determined by its action on a basis. The unitary transformation is not completely determined by
specifying what states the states corresponding to basis states are sent to, a subtle distinction. For example, the controlled phase shift takes the four quantum states represented by |00, |01, |10,
and |11 to themselves; |10 and eiθ |10 represent exactly the same quantum state, and so do |11 and eiθ |11. As we saw above, however, this transformation is not the identity transformation since it
takes √12 (|00 + |11) to √12 (|00 + eiθ |10). To avoid mistakes, remember that notation such as |00 |01 |10 |11
→ → → →
|00 |01 eiθ |10 eiθ |11
is used to specify a unitary transformation on the complex vector space in terms of vectors in that vectors space, not in terms of the states corresponding to these vectors. Specifying that the
vector |0 goes to the vector −|1 is different from specifying that |0 goes to |1 because the two vectors −|1 and |1 are different vectors even if they correspond to the same state. The quantum
transformation on the state space is easily derived from the unitary transformation on the associated complex vector space. Caution 2: Basis Dependence of the Notion of Control The notion of the
control bit and the target bit is a carryover from the classical gate and should not be taken too literally. In the standard basis, the Cnot operator behaves exactly as the classical gate does on
classical bits. However, one should not conclude that the control bit is never changed. When the input qubits are not one of the
5 Quantum State Transformations
standard basis elements, the effect of the controlled gate can be somewhat counterintuitive. For example, consider the Cnot gate in the Hadamard basis {|+, |−}: Cnot : |++ |+− |−+ |−−
→ → → →
|++ |−− |−+ |+−.
In the Hadamard basis, it is the state of the second qubit that remains unchanged, and the state of the first qubit that is flipped depending on the state of the second bit. Thus, in this basis the
sense of which bit is the control bit and which the target bit has been reversed. But we have not changed the transformation at all, only the way we are thinking about it. Furthermore, in most bases,
we do not see a control bit or a target bit at all. For example, as we have seen, the controlled-not transforms √12 (|0 + |1)|0 to √12 (|00 + |11). In this case the controlled-not entangles the
qubits so that it is not possible to talk about their states separately. A related fact, which we will use in constructing algorithms and in quantum error correction, is that the following two
circuits are equivalent:
Caution 3: Reading circuit diagrams The graphical representation of quantum circuits can be misleading if one is not careful to interpret it properly. In particular, one cannot determine the effect
the transformation has on the input qubits, even if they are all in standard basis states, by simply looking at the line in the diagram corresponding to that qubit. Let us look at the circuit
acting on the input state |0|0. Since the Hadamard transformation is its own inverse, it might at first appear that the first qubit’s state would remain unchanged by the transformation. But it does
not. Recall from caution 2 that the controlled-not gate does not leave the first qubit unaffected in general. In fact, this circuit takes the input state |00 to 1/2(|00 + |10 + |01 − |11), an effect
that cannot be seen immediately from the circuit and so must be explicitly calculated. 5.3 Applications of Simple Gates
For many years, EPR pairs, and entanglement more generally, were viewed as quantum mechanical oddities of merely theoretical interest. Quantum information processing changes that perception by
providing practical applications of entanglement. Two communications applications,
5.3 Applications of Simple Gates
dense coding and teleportation, illustrate the usefulness of EPR pairs when used together with a few simple quantum gates. Dense coding uses one quantum bit together with a shared EPR pair to encode
and transmit two classical bits. Since EPR pairs can be distributed ahead of time, only one qubit needs to be physically transmitted to communicate two bits of information. This result is surprising,
since, as section 2.3 explained, only one classical bit’s worth of information can be extracted from a qubit. Teleportation is the opposite of dense coding in that it uses two classical bits to
transmit the state of a single qubit. Teleportation is surprising in two respects. In spite of the no-cloning principle of quantum mechanics, there exists a mechanism for the transmission of an
unknown quantum state. Also, teleportation shows that two classical bits suffice to communicate a qubit state that can be in any one of an infinite number of possible states. The key to both dense
coding and teleportation is the use of entangled particles. The initial setup is the same for both processes. Alice and Bob wish to communicate. Each is sent one of the entangled particles making up
an EPR pair 1 |ψ0 = √ (|00 + |11). 2 Suppose Alice is sent the first particle, and Bob the second: 1 |ψ0 = √ (|0A |0B + |1A |1B ). 2 Alice can perform transformations only on her particle, and Bob
can perform transformations only on his, until Alice sends Bob her particle or vice versa. In other words, until a particle is transmitted between them, Alice can perform transformations only of the
form Q ⊗ I on the EPR pair, where Q is a single-qubit transformation, and Bob transformations only of the form I ⊗ Q. More generally, for K = 2k , let I (K) be the 2k × 2k identity matrix. If Alice
has n qubits and Bob has m qubits, then Alice can perform transformations only of the form U ⊗ I (M) , where U is an n-qubit transformation, and Bob can perform transformations only of the form I (N
) ⊗ U . 5.3.1 Dense Coding
EPR source
5 Quantum State Transformations
Alice Alice wishes to transmit the state of two classical bits encoding one of the numbers 0 through 3. Depending on this number, Alice performs one of the Pauli transformations {I, X, Y, Z} on her
qubit of the entangled pair |ψ0 . The resulting state is shown in the following table.
Value Transformation 0 1 2 3
New state
|ψ0 = (I ⊗ I )|ψ0 |ψ1 = (X ⊗ I )|ψ0 |ψ2 = (Z ⊗ I )|ψ0 |ψ3 = (Y ⊗ I )|ψ0
√1 (|00 + |11) 2 √1 (|10 + |01) 2 √1 (|00 − |11) 2 √1 (−|10 + |01) 2
Alice then sends her qubit to Bob. Bob To decode the information, Bob applies a controlled-not to the two qubits of the entangled
pair and then applies the Hadamard transformation H to the first qubit: √1 (|00 + |11) 2 √1 (|10 + |01) 2 √1 (|00 − |11) 2
⎫ ⎪ ⎪ ⎪ ⎪ ⎪ ⎬
⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨
Cnot ⎪ −→ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎭ ⎩ 1 √ (−|10 + |01) 2 ⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ = ⎪ ⎪ ⎪ ⎪ ⎪ ⎩
√1 (|00 + |10) 2 √1 (|11 + |01) 2 √1 (|00 − |10) 2 √1 (−|11 + |01) 2
⎫ ⎪ ⎪ ⎪ ⎪ ⎪ ⎬ ⎪ ⎪ ⎪ ⎪ ⎪ ⎭
√1 (|0 + |1) ⊗ |0 2 √1 (|1 + |0) ⊗ |1 2 √1 (|0 − |1) ⊗ |0 2 √1 (−|1 + |0) ⊗ |1 2
⎫ ⎪ ⎪ ⎪ ⎪ ⎪ ⎬ ⎪ ⎪ ⎪ ⎪ ⎪ ⎭
⎧ ⎪ |00 ⎪ ⎪ ⎨ |01 H ⊗I ⎪ |10 −→ ⎪ ⎪ ⎩ |11.
Bob then measures the two qubits in the standard basis to obtain the two-bit binary encoding of the number Alice wished to send. 5.3.2 Quantum Teleportation
The objective of teleportation is to transmit enough information, using only classical bits, about the quantum state of a particle that a receiver can reconstruct the exact quantum state. Since the
no-cloning principle of quantum mechanics means that a quantum state cannot be copied, the quantum state of the original particle cannot be preserved. It is this property—that the original state at
the source must be destroyed in the course of creating the state at the target—that gives quantum teleportation its name.
5.3 Applications of Simple Gates
EPR source
Alice Alice has a qubit whose state |φ = a|0 + b|1 she does not know. She wants to send this
state to Bob through classical channels. As in the setup for the dense coding application, Alice and Bob each possess one qubit of an entangled pair 1 |ψ0 = √ (|00 + |11). 2 The starting state is the
three-qubit quantum state 1 |φ ⊗ |ψ0 = √ a|0 ⊗ (|00 + |11) + b|1 ⊗ (|00 + |11) 2 1 = √ a|000 + a|011 + b|100 + b|111 . 2 Alice controls the first two qubits and Bob controls the last one. Alice
applies the decoding step used by Bob in the dense coding scenario to the combined state of the qubit |φ to be transmitted and her half of the entangled pair. In other words, Alice now applies Cnot ⊗
I followed by H ⊗ I ⊗ I to this state to obtain (H ⊗ I ⊗ I )(Cnot ⊗ I )(|φ ⊗ |ψ0 ) 1 = (H ⊗ I ⊗ I ) √ a|000 + a|011 + b|110 + b|101 2 1 = a(|000 + |011 + |100 + |111) + b(|010 + |001 − |110 − |101) 2
1 |00(a|0 + b|1) + |01(a|1 + b|0) + |10(a|0 − b|1) + |11(a|1 − b|0) . 2 Alice measures the first two qubits and obtains one of the four standard basis states |00, |01, |10, and |11 with equal
probability. Depending on the result of her measurement, the quantum =
5 Quantum State Transformations
state of Bob’s qubit is projected to a|0 + b|1, a|1 + b|0, a|0 − b|1, or a|1 − b|0. Alice sends the result of her measurement as two classical bits to Bob. After these transformations, crucial
information about the original state |φ is contained in Bob’s qubit. There is now nothing Alice can do on her own to reconstruct the original state of her qubit. In fact, the no-cloning principle
implies that at any given time, only one of Alice or Bob can reconstruct the original quantum state. Bob When Bob receives the two classical bits from Alice, he knows how the state of his half
of the entangled pair compares to the original state of Alice’s qubit. Bob can reconstruct the original state of Alice’s qubit, |φ, by applying the appropriate decoding transformation to his qubit,
originally part of the entangled pair. The following table shows the state of Bob’s qubit before the decoding has taken place and the decoding operator Bob should use depending on the value of the
bits he received from Alice. State a|0 + b|1 a|1 + b|0 a|0 − b|1 a|1 − b|0
Bits received Decoding 00 I 01 X 10 Z 11 Y
After decoding, Bob’s qubit will be in the quantum state, a|0 + b|1, in whichAlice’s qubit started. This decoding step is the encoding step of dense coding, and the encoding step was the decoding
step of dense coding, so teleportation and dense coding are in some sense inverses of each other. 5.4 Realizing Unitary Transformations as Quantum Circuits
This section shows how arbitrary unitary transformations can be implemented from a set of primitive transformations. The primitive set we consider includes the two-qubit Cnot gate, in addition to
three kinds of single-qubit gates. Using just these four types of operations, any arbitrary n-qubit unitary transformation can be implemented. Section 5.4.1 shows that general single-qubit
transformations can be decomposed into products of the three kinds of primitive single-qubit operators. Sections 5.4.2 and 5.4.3 show how to construct multiple-qubit controlled versions of
single-qubit transformations. Section 5.4.4 uses these transformations to construct arbitrary unitary transformations. This chapter merely shows that all quantum transformations can be implemented in
terms of simple gates; we are not yet concerned with the efficiency of such implementations. Most quantum transformations do not have an efficient implementation in terms of simple gates. Much of the
rest of the book will be devoted to understanding which quantum transformations have efficient implementations and how these can be used to solve computational problems. 5.4.1 Decomposition of
Single-Qubit Transformations
This section shows that all single-qubit transformations can be written as a combination of three types of transformations, phase shifts K(δ), rotations R(β), and phase rotations T (α).
5.4 Realizing Unitary Transformations as Quantum Circuits
K(δ) = eiδ I A phase shift by δ cos β sin β R(β) = A rotation by β − sin β cos β iα e 0 T (α) = A phase rotation by α. −iα 0 e Note that K(δ1 + δ2 ) = K(δ1 )K(δ2 ), R(β1 + β2 ) = R(β1 )R(β2 ), and T
(α1 + α2 ) = T (α1 )T (α2 ), and that the operator K commutes with K, T , and R. Rather than write K(δ), we frequently just write the scalar factor eiδ . Even though, as a transformation on a
single-qubit system, K(δ) performs a global phase change, and thus is equivalent to the identity on the single-qubit system, we include it here because we will use it later as part of multiple-qubit
conditional transformations in which this factor becomes a relative phase shift that is physically relevant. The transformation R(α) and T (α) are rotations by 2α about the y- and z-axis of the Bloch
sphere respectively. This paragraph shows that any single-qubit unitary transformation Q can be decomposed into a sequence of transformations of the form Q = K(δ)T (α)R(β)T (γ ). Since the K(δ) is a
global phase shift with no physical effect, the space of all single-qubit transformations has only three real dimensions. Given the transformation u00 u01 , Q= u10 u11 it follows immediately from the
unitarity condition QQ† = I that |u00 |2 + |u01 |2 = 1, u00 u10 + u01 u11 = 0, and |u11 |2 + |u10 |2 = 1. A short calculation gives |u00 | = |u11 | and |u01 | = |u10 |. So the magnitudes of the
coefficients uij can be written as the sine and cosine of some angle β; we can write Q as Q=
eiθ 00 cos(β) −eiθ10 sin(β)
eiθ 01 sin(β) eiθ11 cos(β)
Furthermore, the phases are not independent: u10 u00 + u11 u01 = 0 implies that θ10 − θ00 = θ11 − θ01 . Since i(δ+α+γ ) cos β ei(δ+α−γ ) sin β e , K(δ)T (α)R(β)T (γ ) = −ei(δ−α+γ ) sin β ei(δ−α−γ )
cos β we can find δ, α, γ for a given Q by solving the equations
5 Quantum State Transformations
δ + α + γ = θ00 , δ + α − γ = θ01 , δ − α + γ = θ10 . Using θ11 = θ10 − θ00 + θ01 , it is easy to see that this solution also satisfies δ − α − γ = θ11 . 5.4.2 Singly Controlled Single-Qubit
Let Q = K(δ)T (α)R(β)T (γ ) be an arbitrary single-qubit unitary transformation. The controlled gate Q can be implemented by first constructing K(δ) and implementing Q for Q = T (α)R(β)T (γ ). Then Q
= ( K(δ))( Q ). We now show how to implement these two transformations in terms of basic gates. The conditional phase shift can be implemented by primitive single-qubit operations: K(δ) = |0 0| ⊗ I +
|1 1| ⊗ K(δ) = |0 0| ⊗ I + eiδ |1 1| ⊗ I = (K(δ/2)T (−δ/2)) ⊗ I. Graphically, the implementation looks like b
T (−δ/2)
K( 2δ )
It may appear surprising that the conditional phase shift K(δ) can be realized by a circuit acting on the first qubit only, with no transformations acting directly on the second qubit. The reason
that transformations on the first qubit suffice is that a phase shift affects the whole quantum state, not just a single qubit. In particular, |x ⊗ a|y = a|x ⊗ |y. Implementing Q is slightly more
involved. For Q = T (α)R(β)T (γ ), define the following transformations: Q0 = T (α)R(β/2), −γ − α Q1 = R(−β/2)T , 2 γ −α . Q2 = T 2 The claim is that Q can be defined as Q = (I ⊗ Q0 )Cnot (I ⊗ Q1 )
Cnot (I ⊗ Q2 )
5.4 Realizing Unitary Transformations as Quantum Circuits
or graphically b Q2
b Q1
It is easy to see that this circuit performs the following transformation: |0 ⊗ |x → |0 ⊗ Q0 Q1 Q2 |x. |1 ⊗ |x → |1 ⊗ Q0 XQ1 XQ2 |x. Using R(β)R(−β) = I and T (α)T (γ ) = T (α + γ ), the property Q0
Q1 Q2 = I follows immediately from the definition of the Qi . To show that Q0 XQ1 XQ2 = Q , use XR(β)X = R(−β) and XT (α)X = T (−α). Then Q0 XQ1 XQ2 = T (α)R(β/2)(XR(−β/2)X)(XT (−
γ −α γ +α )X)T ( ) 2 2
= Q . In this way, we can realize a version of an arbitrary single-qubit transformation controlled by a single qubit. 5.4.3 Multiply Controlled Single-Qubit Transformations
The graphical notation of sections 5.2.4 and 5.4.2 for controlled operations generalizes to more than one control bits. Let k Q be the (k + 1)-qubit transformation that applies Q to qubit 0 when
qubits 1 through k are all 1. For example, the controlled-controlled-not gate or Toffoli gate 2 X, which negates the last bit of three if and only if the first two are both 1, has the following
graphical representation.
The subscript 2 in the notation 2 X indicates that there are two control bits. We write the Cnot gate as both X and 1 X. The construction of 5.4.2 can be iterated to obtain arbitrary single-qubit
transformations con trolled by k qubits. To implement 2 Q, a three-qubit gate that applies Q controlled by two qubits, start by replacing each of Q0 , Q1 , and Q2 in the previous construction with a
single-qubit controlled version,
5 Quantum State Transformations
T (–d / 2)
K( 2 ) Q2
This circuit can be expanded, as in the previous section, into single-qubit and controlled-not gates, for a total of twenty five single-qubit gates and 12 controlled-not gates. Repeating this process
leads to circuits for controlled versions of single-qubit transformations with k control bits, k Q, with 5k single-qubit transformations and 12 (5k − 1) controlled-not gates. As section 6.4.2 shows,
significantly more efficient implementations of k Q are known. All of the controlled gates seen so far are executed when the control bits are 1. To implement a singly controlled gate that is executed
when the control bit is 0, the control bit can be negated, as in
X Q
For any length k bit-string s, temporarily negating the appropriate control qubits in this way, enables the realization of a controlled gate that applies Q to qubit 0 exactly when the other k qubits
are in the pattern s. More precisely, let |s be the k-qubit standard basis vector labeled with bit-string s. This construction implements the (k + 1)–qubit controlled gate that applies the
singlequbit transformation Q to qubit 0 when qubits 1 though k are in the basis state |s and does nothing to qubit 0 when qubits 1 though k are in a different basis state. Such constructions can be
further generalized to (k + 1)-qubit controlled gates that apply the single-qubit transformation Q to qubit i when the other qubits are in a specific basis state and do nothing when they are in a
different basis state. In other words, this transformation applies Q to the two-dimensional subspace spanned by the two basis vectors |xk . . . xi . . . x0 and |xk . . . xˆi . . . x0 , where xˆi = xi
⊕ 1, that differ only in bit i, and it leaves the orthogonal subspace invariant. Section 5.4.4 uses such gates to exhibit an explicit implementation of an arbitrary unitary transformation. The
construction of section 5.4.4 uses two different transformations related to a pair consisting of a k-bit bit-string s and a single-qubit transformation Q: the first applies Q to the ith qubit with
the standard ordering of the basis {|0, |1} when the other k qubits are in state |s, and the second applies Q to the ith qubit with the basis in the other order. In other words, this second
transformation applies XQX to qubit i when the other qubits are in state |s. We use the notation ix Q, or i x
5.4 Realizing Unitary Transformations as Quantum Circuits
where x is a (k + 1)-bit bit-string such that xk . . . xi+1 xi−1 . . . x0 = sk−1 . . . s0 , to represent both of these transformations depending on the value of xi . When xi is 0, the single-qubit
transformation Q is applied. When xi is 1, the transformation XQX is applied. When i is specified, the notation xˆ means that the ith bit of a bit-string x has been flipped: xˆ = x ⊕ 2i . For any
single-qubit ˆ where Q ˆ = XQX. Geometrically ix Q transformation Q, the transformation ixˆ Q = ix Q, is a rotation in the two-dimensional complex subspace spanned by standard basis vectors |x and |
x. ˆ 0 X is the standard Cnot , with b1 being the 100 control bit and b0 being the target. The notation 11 X also represents the Cnot transformation because X is invariant under reversing the order
of the basis for qubit b0 : X = XXX. The notation 0 transformation except that now X is performed only when b1 has value 00 X is a controlled-not 0. The notation 101 X describes the standard Cnot but
with b0 as the control bit and b1 as the target. Example 5.4.1 On a two-qubit system |b1 b0 ,
This section showed how to implement multiply controlled single-qubit gates using a number of basic gates that is exponential in the number of qubits. Section 6.4.2 shows how to implement efficiently
any multiply controlled single-qubit operation. That construction uses linearly many basic gates and a single additional qubit. 5.4.4 General Unitary Transformations
This section presents a systematic way to implement an arbitrary unitary transformation on the 2n -dimensional vector space associated with the state space of an n-qubit system. The intuitive idea
behind the construction is that any unitary transformation is simply a rotation of the 2n dimensional complex vector space underlying the n-qubit quantum state space, and that any rotation can be
obtained by a sequence of rotations in two-dimensional subspaces. Let N = 2n . This section writes all matrices in the standard basis, but with a nonstandard ordering {|x0 , . . . , |xN−1 } such that
successive basis elements differ by only one bit. Such a sequence of binary numbers is called a Gray code. Any Gray code will do. For 0 ≤ i ≤ N − 2, let ji be the bit on which |xi and |xi+1 differ,
and Bi be the shared pattern of all the other bits in |xi and |xi+1 . The next few paragraphs show how to realize an arbitrary unitary operator U as a sequence of multiply controlled single-qubit
operators jxii Q that perform a series of rotations, each in a two-dimensional subspace spanned by successive basis elements. Consider transformations Um of the form Um =
I (m) 0
0 VN−m
where I (m) is the m × m identity matrix and VN −m is an (N − m) × (N − m)-unitary matrix with 0 ≤ m ≤ N − 2. We wish to show that given any (N × N )-matrix Um−1 , 0 < m ≤ N − 2, of this
5 Quantum State Transformations
form there exist operators Cm , the product of multiply controlled single-qubit operators, and a Um , now with a larger identity component I (m) , such that Um−1 = Cm Um . Then, taking VN = U , the
unitary operator U can be written as U = U0 = C1 · · · CN−2 UN−2 . The transformation UN−2 has the form UN −2 =
I (N−2) 0
0 V2
which is simply the operation jx V2 where x = xN −2 and, using the Gray code condition, j = jN −2 is the bit in which the last two basis vectors |xN −2 and |xN −1 differ. So once we show how to
implement the Cm using multiply controlled single-qubit operators, we will have succeeded in showing that any unitary operator can be expressed in terms of such operators, and thus can be implemented
using only Cnot , K(δ), R(β), and T (α). The basis vector |xm is the first basis vector on which Um−1 acts nontrivially. Write |vm = Um−1 |xm = am |xm + · · · + aN |xN . We may assume that aN is
real, since we can multiply Um−1 by a global phase. If we can find a unitary transformation Wm , composed only of multiply controlled single-qubit transformations, that takes |vm to |xm and does not
affect any of the first m elements of the basis, Wm Um−1 would have the desired form, so we would take Um = Wm Um−1 and Cm = Wm−1 . To define Wm , begin by rewriting the coefficients of the last two
components of |vm : |vm = am |xm + · · · + cN−1 cos(θN −1 )eiφN−1 |xN −1 + cN −1 sin(θN −1 )|xN , where aN−1 = |aN−1 |eiφN−1 ,
2 2 cN−1 = |aN−1 | + |aN | , cos(θN−1 ) = |aN−1 |/cN−1 , sin(θN−1 ) = |aN |/cN−1 . Then
R(θN−1 )
K(−φN−1 )
takes |vm to am |xm + · · · + aN−1 |xN−1 ,
5.5 A Universally Approximating Set of Gates
where aN−1 = cN−1 , since jxN−1 K(−φN −1 ) cancels the eiφN−1 factor, and jxN−1 R(θN −1 ) N−1 N−1 rotates so that all of the amplitude that was in |xN is now in |xN −1 . None of the other basis
vectors are affected because the controlled part of the operators ensure that only basis vectors with bits in pattern BN−1 are affected. To obtain the rest of Wm , we iterate this procedure over all
} through {am , am+1 } to obtain the operator pairs of coordinates {aN−2 , aN−1 Wm =
jm xm
R(θm )
K(−φm ) · · ·
R(θN −1 )
K(−φN −1 ),
which takes |vm to am |xm , where
ai = |ai |eiφi , ai = ci ,
2 2 ci = |ai | + |ai+1 | , cos(θi ) = |ai |/ci , |/ci . sin(θi ) = |ai+1
The coefficient am = 1, since the image of |vm must be a unit vector, and the final jxmm K(−φm ) ensures that it is a positive real. While this procedure provides an implementation for any unitary
operator U in terms of simple transformations, the number of gates needed is exponential in the number of qubits. For this reason, it has limited practical value in that more efficient
implementations are needed for realistic computations. Most unitary operators do not have efficient realizations in terms of simple gates; the art of quantum algorithm design is in finding useful
unitary operators that have efficient implementations. 5.5 A Universally Approximating Set of Gates
Section 5.4 showed that all unitary transformations can be realized as a sequence of single-qubit transformations and controlled-not gates. From a practical point of view, we would prefer to deal
with a finite set of gates. It is easy to show that for any finite set of gates there are unitary transformations that cannot be realized as a combination of these gates, but there are finite sets of
gates that can approximate any unitary transformation to arbitrary accuracy. Furthermore, for any desired level of accuracy 2−d , this approximation can be done efficiently; there is a polynomial p
(d) such that any single-qubit unitary transformation can be approximated to within 2−d by a sequence of no more than p(d) gates from the finite set. We will not prove this efficiency result, known
as the Solovay-Kitaev theorem, but we will exhibit a finite set of gates that can be used to approximate all unitary transformations.
5 Quantum State Transformations
Since any unitary transformation can be realized using single-qubit and Cnot gates, it suffices to find a finite set of gates that can approximate all single-qubit transformations. Consider the set
consisting of the Hadamard gate H , the phase gate P π2 , the π/8-gate P π4 , and the Cnot gate where 1 0 P π2 = = |0 0| + i|1 1| 0 eiπ/2 and P π4 =
1 0 iπ 0 e4
= |0 0| + e 4 |1 1|.
Recall from section 5.4.1 the single-qubit operator T (θ ) = eiθ |0 0| + e−iθ |1 1|. The π/8-gate P π4 got its name because, up to a global phase, it acts in the same way as the gate T (− π8 ), π iπ
P π4 = e 8 T − , 8
and unfortunately the name stuck in spite of the confusion it causes. (When used on their own, it does not matter whether P π4 or T (− π8 ) is used, since they differ only in a global phase, but when
used as part of a controlled gate construction, this phase becomes a physically relevant relative phase.) A rotation R is a rational rotation if, for some integer m, R m = I . If no such m exists,
then R is an irrational rotation. It may seem surprising that a set of gates consisting only of rational rotations on the Bloch sphere can approximate all single-qubit transformations. Don’t we need
an irrational rotation? In fact, the proof proceeds by using these gates to construct an irrational rotation. Such a construction is possible because the group of rotations of a sphere differs from
the group of rotations of a Euclidean plane. In the Euclidean plane, the product of two rational rotations is always rational, but the analogous statement is not true for rotations of the sphere.
Exercise 5.21 guides the reader through proofs of the relevant properties of groups of rotations of the sphere and the Euclidean plane. Exercises 5.19–5.22 develop the steps in the following
spherical geometry argument in more detail. The gate P π4 is a rotation by π/4 about the z-axis of the Bloch sphere. The transformation S = H P π4 H is a rotation by π/4 about the x-axis. It is a
good exercise in spherical geometry to show that V = P π4 S is an irrational rotation. Since V is irrational, any rotation W about the same axis can be approximated to within arbitrary precision 2−d
by some power of V . Recall from section 5.4.1 that any single-qubit transformation may be achieved (up to global phase) by combining rotations about the y- and z-axes: for every single-qubit
operation W there exist angles α, β, γ , and δ such that W = K(δ)T (α)R(β)T (γ ), where T (α) rotates by angle α about the z-axis and R(α) rotates by angle α about the y-axis. The set of rotations
about any two distinct axes can achieve arbitrary single-qubit transformations.
5.7 References
Since H V H has a different axis from V , the two transformations H and V generate all single-qubit operators. Other universally approximating finite sets, with varying advantages and disadvantages,
exit. 5.6 The Standard Circuit Model
Acircuit model for quantum computation describes all computations in terms of a circuit composed of simple gates followed by a sequence of measurements. The simple gates are drawn either from a
universal set of simple gates or a universally approximating set of quantum gates. The standard circuit model for quantum computation takes as its gate set the Cnot gate together with all singlequbit
transformations, and it takes as its set of measurements single-qubit measurements in the standard basis. So all computations in the standard model consist of a sequence of single-qubit and Cnot
gates followed by a sequence of single-qubit measurements in the standard basis. While a finite set of gates would be more realistic than the infinite set of all single-qubit transformations, the
infinite set is easier to work with and, by the results of Solovay and Kitaev, the infinite set does not yield significantly greater computational power. For conceptual clarity, the n qubits of the
computation are often organized into registers, subsets of the n qubits. Other models of quantum computation exist. Each model provides its own insights into the workings of quantum computation, and
each has contributed to the growth of the field through new algorithms, new approaches to robust quantum computation, or new approaches to building quantum computers. The most significant of these
models will be discussed in section 13.4. One of the strengths of the standard circuit model is that it makes finding quantum analogs of classical computation straightforward. That is the subject of
the next chapter. Finding quantum analogs of reversible classical circuits is easy; all of the technical difficulties involve the entirely classical problem of converting an arbitrary classical
circuit into a reversible classical circuit. The results of section 5.4 show that any quantum transformation can be realized in terms of the basic gates of the standard circuit model. But it says
nothing about efficiency. Chapter 6 finds not only a quantum analog for any classical computation, but also a quantum analog with comparable efficiency. Part II explores the design of quantum
algorithms, which involves finding quantum transformations that can be efficiently implemented in terms of the basic gates of the standard circuit model and figuring out how to use them to solve
certain problems more efficiently than is possible classically. 5.7 References
The no-cloning theorem is due to Wootters and Zurek [286]. Both dense coding and quantum teleportation were discovered in the early 1990s, dense coding by Bennett and Wiesner [46] and quantum
teleportation by Bennett et al. [44]. Single-qubit teleportation has been realized in several experiments, see for example, [57], [221], and [56].
5 Quantum State Transformations
An outline for a proof of the Solovay-Kitaev theorem was given in [173]. Dawson and Nielsen provide a pedagogical review of this result in [95]. A related issue, namely how much precision is needed
to carry out a quantum computation of k steps is answered by Bernstein and Vazirani [49]: a precision of O(logk) bits suffices. (See box 6.1 for the O(t) notation.) The implementation of complex
unitary transformations from basic ones is described in a paper by Barenco et al. [31]. A proof that most quantum transformations cannot be implemented efficiently and exactly in terms of two-qubit
gates can be found in Knill’s Approximation by Quantum Circuits [177]. Deutsch found a single three-qubit gate that by itself can produce arbitrarily good approximations to any unitary transformation
[100]. Later, Deutsch, Barenco, and Ekert showed that almost any two-qubit gate could accomplish the same thing [101]. Others have found other small sets of generators. 5.8 Exercises Exercise 5.1.
Show that any linear transformation U that takes unit vectors to unit vectors preserves orthogonality: if subspaces S1 and S2 are orthogonal, then so are U S1 and U S2 . Exercise 5.2. For which sets
of states is there a cloning operator? If the set has a cloning operator, give the operator. If not, explain your reasoning. a. {|0, |1}, b. {|+, |−}, c. {|0, |1, |+, |−}, d. {|0|+, |0|−, |1|+, |1|
−}, e. {a|0 + b|1}, where |a|2 + |b|2 = 1. Exercise 5.3. Suppose Eve attacks the BB84 quantum key distribution of section 2.4 as follows.
For each qubit she intercepts, she prepares a second qubit in state |0, applies a Cnot from the transmitted qubit to her prepared qubit, sends the first qubit on to Bob, and measures her qubit. How
much information can she gain, on average, in this way? What is the probability that she is detected by Alice and Bob when they compare s bits? How do these quantities compare to those of the direct
measure-and-transmit strategy discussed in section 2.4? Exercise 5.4.
Prove that the following are decompositions for some of the standard gates.
I = K(0)T (0)R(0)T (0) X = −iT (π/2)R(π/2)T (0) H = −iT (π/2)R(π/4)T (0) Exercise 5.5. A vector |ψ is stabilized by an operator U if U |ψ = |ψ. Find the set of vectors
stabilized by
5.8 Exercises
a. the Pauli operator X, b. the Pauli operator Y , c. the Pauli operator Z, d. X ⊗ X, e. Z ⊗ X, f. Cnot . Exercise 5.6. a. Show that R(α) is a rotation of 2α about the y-axis of the Bloch sphere. b.
Show that T (β) is a rotation of 2β about the z-axis of the Bloch sphere. c. Find a family of single-qubit transformations that correspond to rotations of 2γ about the x axis. Exercise 5.7. Show that
the Pauli operators form a basis for all linear operators on a two-
dimensional space. Exercise 5.8. What measurement does the operator iY describe?
How can the circuit of figure 5.2 be used to measure the qubits b0 and b1 for equality without learning anything else about the state of b0 and b1 ? (Hint: you are free to chose any initial state on
the register consisting of qubits a0 and a1 .)
Exercise 5.9.
An n-qubit cat state is the state upon input of |00 . . . 0, constructs a cat state. Exercise 5.10.
√1 (|00 . . . 0 + |11 . . . 1. 2
Exercise 5.11. Let
1 |Wn = √ (|0 . . . 001 + |0 . . . 010 + |0 . . . 100 + · · · + |1 . . . 000). n Design a circuit that, upon input of |00 . . . 0, constructs |Wn . Exercise 5.12. Design a circuit that constructs the
Hardy state
1 √ (3|00 + |01 + |10 + |11). 12
b1 b0 a1 a0 Figure 5.2 Circuit for exercise 5.9.
Design a circuit that,
5 Quantum State Transformations
Exercise 5.13. Show that the swap circuit of section 5.2.4 does indeed swap two single-qubit values in that it sends |ψ|φ to |φ|ψ for all single-qubit states |ψ and |φ. Exercise 5.14. Show how to
implement the Toffoli gate 2 X in terms of single-qubit and Cnot gates. Exercise 5.15. Design a circuit that determines if two single qubits are in the same quantum
state. The circuit may include an ancilla qubit to be measured. The measurement should give a positive answer if the two-qubit states are identical, a negative answer if the two-qubit states are
orthogonal, and be more likely to give a positive answer the closer the states are to being identical. Exercise 5.16. Design a circuit that permutes the values of three qubits in that it sends |ψ|φ|η
to |φ|η|ψ for all single-qubit states |ψ, |φ, and |η. Exercise 5.17. Compare the effect of the following two circuits
H X
Z 0
Show that for any finite set of gates there must exist unitary transformations that cannot be realized as a sequence of transformations chosen from this set.
Exercise 5.18.
Exercise 5.19. Let R be an irrational rotation about some axis of a sphere. Show that for any other rotation R about the same axis and for any desired level of approximation 2−d there is some power
of R that approximates R to the desired level of accuracy.
Show that the set of rotations about any two distinct axes of the Bloch sphere generate all single-qubit transformations (up to global phase). Exercise 5.20.
Exercise 5.21. a. In the Euclidean plane, show that a rotation of angle θ may be achieved by composing two reflections. b. Use part (a) to show that a clockwise rotation of angle θ about a point P
followed by a clockwise rotation of angle φ about a point Q results in a clockwise rotation of angle θ + φ around the point R, where R is the intersection point of the two rays, one through P at
angle θ/2 from the line between P and Q, and the other through point Q at an angle of φ/2 from the line between P and Q.
5.8 Exercises
c. Show that the product of any two rational rotations of the Euclidean plane is also rational. d. On a sphere of radius 1, a triangle with angles θ , φ, and η has area θ + φ + η (where θ, φ, and η
are in radians). Use this fact to describe the result of rotating clockwise by angle θ around a point P followed by rotating clockwise by angle φ around a point Q in terms of the area of a triangle.
e. Prove that on the sphere the product of two rational rotations may be an irrational rotation. Exercise 5.22. a. Show that the gates H , P π and P π are all (up to global phase) rational rotations
of the Bloch 2
sphere. Give the axis of rotation and the angle of rotation for each of these gates, and also the gate S = H P π4 H . b. Show that the transformation V = P π S is an irrational rotation of the Bloch
sphere. 4
Quantum Versions of Classical Computations
This chapter constructs, for any classical computation, a quantum circuit that can perform the same computation with comparable efficiency. This result proves that quantum computation is at least as
powerful as classical computation. In addition, many quantum algorithms begin by using this construction to compute a classical function on a superposition of values prior to using nonclassical means
for efficiently extracting information from this superposition. The construction of quantum analogs to all classical computations relies on a classical result that constructs a reversible analog to
any classical computation. Section 6.1 describes relations between classical reversible computation and both general classical computation and quantum computation. Section 6.1.1 exhibits reversible
versions of Boolean logic gates and quantum analogs of these reversible versions. Given a classical reversible circuit composed of reversible Boolean logic gates, simple substitution of the analogous
quantum gates for the reversible gates gives the desired quantum circuit. The hard step in proving that every classical computation has a comparably efficient quantum analog is proving that every
classical computation has a reversible version of comparable efficiency. Although this construction is purely classical, it is of such fundamental importance to quantum computation that we present it
here. Section 6.2 provides this construction. Section 6.3 describes the language that section 6.4 uses to specify explicit quantum circuits for several classical functions such as arithmetic
operations. 6.1 From Reversible Classical Computations to Quantum Computations
Any sequence of quantum transforms effects a unitary transformation U on the quantum system. As long as no measurements are made, the initial quantum state of the system prior to a computation can be
recovered from the final quantum state |ψ by running U −1 = U † on |ψ. Thus, any quantum computation is reversible prior to measurement in the sense that the input can always be computed from the
output. In contrast, classical computations are not in general reversible: it is not usually possible to compute the input from the output. For example, while the classical not operation is
reversible, the and, or, and nand are not. Every classical computation does, however, have a classical reversible analog that takes only slightly more computational resources. Section 6.1.1 shows how
to make basic Boolean gates reversible. Section 6.2.2 shows how to make entire Boolean circuits
6 Quantum Versions of Classical Computations
reversible in a resources efficient way, considering space, the number of bits required, and the number of primitive gates. This construction of efficient classical reversible versions of arbitrary
Boolean circuits easily generalizes to a construction of quantum circuits that efficiently implement general classical circuits. Any classical reversible computation with n input and n output bits
simply permutes the N = 2n bit strings. Thus, for any such classical reversible computation there is a permutation π : ZN → ZN sending an input bit string to its output bit string. This permutation
can be used to define a quantum transformation Uπ :
ax |x →
ax |π(x),
that behaves on the standard basis vectors, viewed as classical bit strings, exactly as π did. The transformation Uπ is unitary, since it simply reorders the standard basis elements. Any classical
computation on n input and m output bits defines function f : ZN → ZM x → f (x) mapping the N = 2n input bit strings to the M = 2m output bit strings. Such a function can be extended in a canonical
way to a reversible function πf acting on n + m bits partitioned into two registers, the n-bit input register and the m-bit output register: πf : ZL → ZL (x, y) → (x, y ⊕ f (x)), where ⊕ denotes the
bitwise exclusive-or. The function πf acts on the L = 2n+m bit strings, each made up of an n-bit bit string x and an m-bit bit string y. For y = 0, the function π acts like f , except that the output
appears in the output register and the input register retains the input. There are many other ways of making a classical computation reversible, and for a particular classical computation, there may
be a reversible version that requires fewer bits, but this construction always works. Since πf is reversible, there is a corresponding unitary transformation Uf : |x, y → |x, y ⊕ f (x). Graphically
the transformation Uf is depicted as
x Uf
6.1 From Reversible Classical Computations to Quantum Computations
Section 5.4 showed how to implement any unitary operation in terms of simple gates. For most unitary transformations, that implementation is highly inefficient. While most unitary operators do not
have an efficient implementation, Uf has an efficient implementation as long as there is a classical circuit that computes f efficiently. The method for constructing an efficient implementation of Uf
from an efficient classical circuit for f has two parts. The first part constructs an efficient reversible classical circuit that computes f. The second part substitutes quantum gates for each of the
reversible gates that make up the reversible classical circuit. Section 6.1.1 defines reversible Boolean logic gates and covers the easy second part of the construction. Section 6.2 explains the
involved construction of an efficient reversible classical circuit for any efficient classical circuit. 6.1.1 Reversible and Quantum Versions of Simple Classical Gates
This section describes reversible versions of the Boolean logic gates not, xor, and, and nand. Quantum versions of these gates act like the reversible gates on elements of the standard basis. Their
action on other input states is prescribed by the linearity of quantum operations; the action of a gate on a superposition is the linear combination of the action of the gate on the standard basis
elements making up the superposition. In this way, the behavior of a reversible gate fully defines the behavior of its quantum analog, and vice versa. The tight connection between the two allows us
to use the same notation for both gates with the understanding that the quantum gates can be applied to arbitrary superpositions, whereas the classical reversible gates are applied to bit strings
that correspond to the standard basis elements. Let b1 and b0 be two binary variables, variables taking on only values 0 or 1. We define the following quantum gates: not The not gate is already
reversible. We will use X to refer to both the classical reversible gate and the single-qubit operator X = |0 1| + |1 0| of section 5.2, which performs a classical not operation on classical bits
encoded as the standard basis elements. xor The controlled negation performed by the Cnot = 1 X gate amounts to an xor operation on its input values. It retains the value of the first bit b1 , and
replaces the value of the bit b0 with the xor of the two values.
The quantum version behaves like the reversible version on the standard basis vectors, and its behavior on all other states can be deduced from the linearity of the operator. and It is impossible to
perform a reversible and operation with only two bits. The three-bit controlled-controlled-not gate, or Toffoli gate, T = 2 X can be used to perform a reversible and operation.
6 Quantum Versions of Classical Computations
T |b1 , b0 , 0 = |b1 , b0 , b1 ∧ b0 , where ∧ is notation for the classical and of the two bit values. The Toffoli gate is defined for all input: when the value of the third bit is 1, T |b1 , b0 , 1
= |b1 , b0 , 1 ⊕ b1 ∧ b0 . By varying the values of input bits, the Toffoli gate T can be used to construct a complete set of Boolean connectives, not just the classical and. Thus, any combinatorial
circuit can be constructed from Toffoli gates alone. The Toffoli gate computes not, and, xor, and nand in the following way: T |1, 1, x = |1, 1, ¬x T |x, y, 0 = |x, y, x ∧ y T |1, x, y = |1, x, x ⊕ y
T |x, y, 1 = |x, y, ¬(x ∧ y), where ¬ indicates the classical not acting on the bit value. An alternative to the Toffoli gate, the Fredkin gate F , acts as a controlled swap: F = 1 S, where S is the
two-bit swap operation S : |xy → |yx. The Fredkin gate F , like the Toffoli gate T , can implement a complete set of classical Boolean operators: F |x, 0, 1 = |x, x, ¬x F |x, y, 1 = |x, y ∨ x, y ∨ ¬x
F |x, 0, y = |x, y ∧ x, y ∧ ¬x, where ∨ is notation for the classical or of the two bit values. Because a complete set of classical Boolean connectives can be implemented using just the Toffoli gate
T , or the Fredkin gate F , these gates can be combined to realize arbitrary Boolean circuits. Section 6.2 describes explicit implementations of certain classical functions. As the equations for the
Toffoli gate illustrate, the operations Cnot and X can be implemented by Toffoli gates with the addition of one or two bits permanently set to 1. For clarity, we use Cnot and X gates in our
construction, but all constructions can be done using only Toffoli gates, since we can replace all uses of Cnot and X with Toffoli gates that have additional input bits with their
6.2 Reversible Implementations of Classical Circuits
Figure 6.1 One-bit full adder.
input values set appropriately. For example, the circuit shown in figure 6.1 implements a one-bit full adder using Toffoli and controlled-not gates, where x and y are the data bits, s is their sum
(modulo 2), c is the incoming carry bit, and c is the new carry bit. Several one-bit adders can be strung together to achieve full n-bit addition. 6.2 Reversible Implementations of Classical Circuits
This section develops systematic ways to turn arbitrary classical Boolean circuits into reversible classical circuits of comparable computational efficiency in terms of the number of bits and the
number of gates. The resulting reversible circuits are composed entirely of Toffoli and negation gates. A quantum circuit with the same efficiency as the classical reversible circuit is obtained by
the trivial substitution of quantum Toffoli and X gates for classical Toffoli and negation gates. Thus, as soon as we have an efficient version of a computation in terms of Toffoli gates, we
immediately know how to obtain a quantum implementation of the same efficiency. 6.2.1 A Naive Reversible Implementation
Rather than start with arbitrary Boolean circuits, we consider a classical machine that consists of a register of bits and a processing unit. The processing unit performs simple Boolean operations or
gates on one or two of the bits in the register at a time and stores the result in one of the register’s bits. We assume that, for a given size input, the sequence of operations and their order of
execution are fixed and do not depend on the input data or on other external control. In analogy with quantum circuits, we draw bits of the register as horizontal lines. A simple program (for
four-bit conjunction) for this kind of machine is depicted in figure 6.2. An arbitrary Boolean circuit can be transformed into a sequence of operations on a large enough register to hold input,
output, and intermediate bits. The space complexity of a circuit is the size of the register. Computations performed by this machine are not reversible in general; by reusing bits in the register,
the machine erases information that cannot be reconstructed later. A trivial, but highly
6 Quantum Versions of Classical Computations
n3 n2
n3 n2 n1
Figure 6.2 Irreversible classical circuit for four-bit conjunction.
n3 n2
t1 = 0 n1
n3 n2 n1
t2 = 0 n0
n3 n2 n1 T
m0 = 0
Figure 6.3 Reversible classical circuit for four-bit conjunction.
space inefficient, solution to this problem is not to reuse bits during the entire computation. Figure 6.3 illustrates how the circuit can be made reversible by assigning the results of each
operation to a new bit. The operation that reversibly computes the conjunction and leaves the result in a bit initially set to 0 is, of course, the Toffoli gate. Since the not gate is reversible, and
not together with and form a complete set of Boolean operations, this construction can be generalized to turn any computation using Boolean logic operations into one using only reversible gates. This
implementation, however, needs an additional bit for every and performed, so if the original computation takes t steps, then a reversible one constructed in this naive way requires up to t additional
bits of space. Furthermore, this additional space is no longer in the 0 state and cannot be directly reused, for example, to compose two reversible circuits. Reusing temporary bits will be crucial to
keeping the space requirements close to that of the original nonreversible classical computation. Resetting a bit to zero is not as trivial as it might seem. A transformation that resets a bit to 0,
regardless of
6.2 Reversible Implementations of Classical Circuits
whether it was 0 or 1 before, is not reversible (it loses information), so it cannot be used as part of a reversible computation. Reversible computations cannot reclaim space through a simple reset
operation. They can, however, uncompute any bit set during the course of a reversible computation by reversing the part of the computation that computed the bit. Example 6.2.1 Consider the
computation of figure 6.3. Bits t1 and t0 are temporarily used to obtain the output in bit m0 . Figure 6.4 shows how to uncompute these bits, resetting them to their original 0 value by reversing all
but the last step of the circuit in figure 6.3, so that they may be reused as part of a continuing computation. Here the temporary bits are reclaimed at the cost of roughly doubling the number of
We can reduce the number of qubits needed by uncomputing them and reusing them in the course of the algorithm. The method of uncomputing bits by performing all of the steps in reverse order, except
those giving the output, works for any classical Boolean subcircuit. Consider a classical Boolean subcircuit of t gates operating on a s-bit register. The naive construction requires up to t
additional bits in the register. Example 6.2.2 Suppose we want to construct the conjunction of eight bits. Simply reversing the steps, generalizing the approach shown in figure 6.3, would require six
additional temporary bits and one bit for the final output. We can save space by using the four-bit and circuit of figure 6.4 four times and then combining the results as shown in figure 6.5. This
0 n1
0 T
0 n0
0 T
Figure 6.4 Reversible circuit that reclaims temporary bits.
n0 ni
6 Quantum Versions of Classical Computations
–1 4
–1 4
Figure 6.5 Combining reversible four-bit and-circuits of figure 6.4 to construct an eight-way conjunction.
uses two temporary bits in addition to the two temporary bits used in each of the four-bit ands. Since each of the four-bit ands uncomputes its temporary bits, these bits can be reused by the
subsequent four-bit ands. This circuit uses only a total of four additional temporary bits, though it does require more gates. There is an art to deciding when to uncompute which bits to maintain
efficiency and to retain subresults used subsequently in the computation. The key ideas of this section, adding bits to obtain reversibility and uncomputing their values so that they may be reused,
are the main ingredients of the general construction described in section 6.2.2. By choosing carefully when and what to uncompute, it is possible to make a positive tradeoff, sacrificing some
additional gates to obtain a much more efficient use of space. Examples, such as an explicit efficient implementation of an m-way and, are given in section 6.4. 6.2.2 A General Construction
This section shows how, by carefully choosing which bits to uncompute when, a reversible version of any classical computation can be achieved with only minor increases in the number of gates and
bits. We show that any classical circuit using t gates and s bits, has a reversible counterpart using only O(t 1+ ) gates and O(s log t) bits. (See box 6.1 for the O(t) notation.) For t s, this
construction uses significantly less space than the (s + t) space of the naive approach described in section 6.2.1 at only a small increase in the number of gates.
6.2 Reversible Implementations of Classical Circuits
Box 6.1 Notation for Efficiency Bounds
O(f (n)) is the set of functions bounded by f . Formally, g ∈ O(f (n)) if and only if there exist constants k and n0 such that |g(n)| ≤ k |f (n)| for all n > n0 . Similarly, (f (n)) is the set of
functions such that g ∈ (f (n)) if and only if there exist constants k and n0 such that |g(n)| ≥ k |f (n)| for all n > n0 . Finally, the class of functions bounded by f from above and below is (f
(n)) = O(f (n)) ∩ (f (n)).
0 x
Ri x
Figure 6.6 Converting circuits Ci into reversible ones Ri .
In order to understand how to obtain these bounds, we must consider carefully how many bits are being used and in what way. Let C be a classical circuit, composed of and and not gates, that uses no
more than t gates and s bits. The circuit C can be partitioned in time into r = t/s subcircuits each containing s or fewer consecutive gates C = C1 C2 . . . Cr . Each subcircuit Ci has s input and s
output bits, some of which may be unchanged. Using techniques from section 6.2.1, each circuit Ci can be replaced by a reversible circuit Ri that uses at most s additional bits as shown in figure
6.6. The circuit Ri returns its input as well as the s output values used in the subsequent computation. The input values will be used to uncompute and recompute Ri in order to save space. More than
s gates may be required to construct Ri . In general, Ri can be constructed using at most 3s gates. While other more efficient constructions are possible, the following three steps always work. •
Step 1 Compute all of the output values in a reversible way. For every and or not gate in the
original circuit Ci , the circuit Ri has a Toffoli or not gate. This step uses the same number of gates, s, as Ci , and uses no more than s additional bits.
6 Quantum Versions of Classical Computations
Step 2 Copy all of the output values, the values used in subsequent parts of the computation, to the output register, a set of no more than s additional bits. •
Step 3 Perform the sequence of gates used to carry out step 1, but this time in reverse order. In this way all bits, except those in the output register, are reset to their original values.
Specifically, all temporary bits are returned to 0, and we have recovered all of the input values.
The circuits R1 . . . Rr , when combined as in figure 6.7, perform the computation C in a reversible but space-inefficient way. The subcircuits Ri can be combined in a special way that uses space
more efficiently by uncomputing and reusing some of the bits. Uncomputing requires additional gates, so we must choose carefully when to uncompute in order to reduce the usage of space without
needing too many more gates. First, we show how to obtain a reversible version using O(t log2 3 ) gates and O(s log t) bits, and then we improve on this method to obtain O(t 1+ ) gates and O(s log t)
bit bounds. The basic principle for combining the r = t/s circuits Ri is indicated in figure 6.8. The idea is to uncompute and recompute parts of the state selectively to reuse the space. We
systematically modify the computation R1 R2 . . . Rr to reduce both the total amount of space used and to reset all the temporary bits to zero by the end of the computation. To simplify the analysis,
we take r to be a power of two, r = 2k . For 1 ≤ i ≤ k, let ri = 2i . We perform the following recursive transformation B that breaks a sequence into two equal-sized parts, recursively transforms the
parts, and then composes them in the way shown: −1 B(R1 , . . . , Rri+1 ) = B(R1 , . . . , Rri )B(R1+ri , . . . , Rri+1 ) B(R1 , . . . , Rri ) B(R) = R, where (B(R1 , . . . , Rti ))−1 acts on exactly
the same bits as B(R1 , . . . , Rti ) and so requires no additional space.
0 0 x Figure 6.7 Composing the circuits Ri to obtain a reversible, but inefficient, version of the circuit C.
6.2 Reversible Implementations of Classical Circuits
x Figure 6.8 Composition of reversible computation circuits Ri in a way that reclaims storage.
The transformed computation uncomputes all space except the output of the last step, so the additional space usage is bounded by s. Thus, B(R1 , . . . , Rri ) requires at most s more space than B(R1
, . . . , Rri−1 ). We can write the space S(i) required for each of the k = log2 r steps i in the recursion in terms of the space requirements of the previous step: S(i) ≤ s + S(i − 1) with S(1) ≤
2s. The recursion ends after k = log2 r steps, so the final computation B(R1 , . . . , Rr ) requires at most S(r) ≤ (k + 1)s = s(log2 r + 1) space. From the definition of B, it follows immediately
that T (i), the number of circuits Rj executed by the computation B(R1 , . . . , Rri ), is T (i) = 3T (i − 1) with T (1) = 1. By assumption r = 2k , so the reversible version of C we constructed uses
T (2k ) = 3T (2k−1 ) = 3k = 3log2 r = r log2 3 reversible circuit Ri , each of which requires fewer than 3s gates. Thus, any classical computation of t steps and s bits can be done reversibly in O(t
log2 3 ) steps and O(s log2 t) bits. To obtain the O(t 1+ ) bound, instead of using a binary decomposition, consider the following m-ary decomposition. To simplify the analysis, suppose that r is a
power of m, r = mk . For 1 ≤ i ≤ k, let ri = mi . Abbreviating R1+(x−1)ri , . . . , Rxri as Rx,i , then B(R1,i+1 ) = B(R1,i , R2,i , . . . Rm,i ) = B(R1,i ), B(R2,i ), . . . B(Rm−1,i ), B(Rm,i ), B
(Rm−1,i )−1 , . . . B(R2,i )−1 , B(R1,i )−1 B(R) = R. In each step of the recursion, each block is split into m pieces and replaced with 2m − 1 blocks. We may assume without loss of generality that r
= mk for some k, in which case we stop recursing after k steps. At this point r = mk subcircuits C1 have been replaced by (2m − 1)k reversible
6 Quantum Versions of Classical Computations
circuits Ri , so the total number of circuits Ri for the final computation is (2m − 1)k , which we rewrite in terms of r: (2m − 1)logm r = r logm (2m−1) ≈ r logm 2m = r
1+ log1 m 2 .
The number of primitive gates in Ri is bounded by 3s and r = t/s. The total number of gates for a reversible circuit of t gates is T (t) ≈ 3s
1+ 1 log2 m t 1+ 1 < 3t log2 m . s
Thus, for any > 0, it is possible to choose m sufficiently large that the number of gates required for the reversible computation is O(t 1+ ). The space bound remains the same as before, O(s log2 t).
Reversible versions of classical Boolean circuits constructed in this manner can be turned directly into quantum circuits consisting entirely of Toffoli and X gates. While our argument was given in
terms of Boolean circuits, Bennett used the same argument to show that any classical Turing machine can be turned into a reversible one. Based on these arguments, from any classical circuit for f ,
an implementation of Uf can be constructed of comparable number of gates and bits. The care needed in uncomputing and reusing bits generalizes to qubits where the need for uncomputing values is even
greater: uncomputing ensures that temporary qubits are no longer entangled with output qubits. This need to unentangle temporary values at the end of a computation is one of the differences between
classical and quantum implementations. Quantum transformations, being reversible, cannot simply reset qubits. Naively, one might think that temporary qubits could be reset by measuring the qubit and
then, depending on the measurement outcome, performing a transformation to set them to |0. But if the temporary qubits were entangled with qubits containing the desired result, or results used later
in the computation, measuring the temporary qubits may alter those results. Uncomputing temporary qubits disentangles them from the rest of the system without affecting the state of the rest of the
system. The circuits of section 6.4 contain a number of examples of uncomputing temporary qubits. The next section sets up the language for quantum implementations used in section 6.4 to describe
explicit implementations of certain arithmetic functions. These implementations are often more efficient than the general construction just given, but they all have analogous classical
implementations of comparable efficiency. Part II is devoted to truly quantum algorithms, algorithms with no classical analog. 6.3 A Language for Quantum Implementations
The quantum circuits we have discussed provide one way of describing a sequence of quantum gates acting on registers of qubits. We now give an alternate way of describing quantum circuits
6.3 A Language for Quantum Implementations
that is more compact and easier to reason about. We use this notation to describe quantum implementations for some specific arithmetic functions. When we talk about the efficiency of these
implementations, we simply count the number of simple gates in the quantum circuits they describe. This quantity is called the circuit complexity. We explain the relation of circuit complexity with
other notions of complexity in section 7.2. The notation up to this point is standard and frequently used in the literature. Here we describe a language that we developed for describing quantum
circuits or sequences of simple quantum gates. We use a program-like notation to give concise descriptions of quantum circuits that are cumbersome in graphical notation. Moreover, a single program in
this notation can describe precisely a whole class of circuits acting on variable numbers of qubits as input (and classes that depend on other varying classical parameters). For example, just as in
the classical case, a quantum circuit for adding 24-bit numbers differs from a circuit for adding 12-bit numbers, though they may be related. This programlike notation enables us to describe the
relation precisely, while the graphical notation, though it may be suggestive, remains imprecise. The notation uses both classical and quantum variables. Classical control structures such as
iteration, recursion, and conditionals are used to define the order in which quantum state transformations are to be applied. Classical information can be used in the construction of a quantum state
or as parameters of quantum state transformations, but quantum information cannot be used in classical control structures. The programs we write are simply classical prescriptions for sequences of
quantum gates that operate on a single global quantum register. 6.3.1 The Basics
Quantum variables are names for registers, subsets of qubits of a single global quantum register. If x is the variable name for an n-qubit register, we may write x[n] if we wish to make the number of
qubits in x explicit. We use xi to refer to the ith qubit of x, and xi · · · xk for qubits i through k of the register denoted by x. We will generally order the qubits of a register from highest
index to lowest index so that if register x contains a standard basis vector |b, then b = i xi 2i . If U is a unitary transformation on n qubits, and x, y, and z are names for registers with a
combined total of n qubits, then the program step U |x, y, z = U |x|y|z means “apply U to the qubits denoted by the register names in the order given.” It is illegal to use any qubit twice in this
notation, so the registers x, y, and z must be disjoint; this restriction is necessary because “wiring” different input values to the same quantum bit is not possible or even meaningful. We are
abusing the ket notation slightly here in that it is sometimes being used to stand for a placeholder, a qubit, that can contain a qubit value, a quantum state, and sometimes for the qubit value
itself, but context should keep the two uses clear. Example 6.3.1 The Toffoli gate with control bits b5 and b3 and target bit b2 has the following
graphical representation
6 Quantum Versions of Classical Computations
b5 b4 b3 b2 b1 b0
which is awkward to represent in the standard tensor product notation because the qubits it acts on are not adjacent. In our notation, this transformation can be written as T |b5 , b3 , b2 . The
notation T |b2 , b3 , b2 is not allowed, since it repeats qubits. The notation (T ⊗ Cnot ⊗ H )|x5 · · · x3 |x1 , x0 |x7 , for a transformation acting on six qubits of a ten-qubit register x = x9 x8 ·
· · x0 , is just another way of representing the transformation I ⊗ I ⊗ H ⊗ I ⊗ T ⊗ I ⊗ Cnot , where the separate kets indicate which qubits the transformation making up the tensor product is acting
upon; the Toffoli gate T acts on qubits x5 , x4 , and x3 , the CN OT on qubits x1 and x0 , and the Hadamard gate H on qubit x7 . The notation (T ⊗ Cnot ⊗ H )|x5 · · · x3 |x4 , x0 |x7 is illegal
because the first and second registers are not disjoint: they share qubit |x4 . Controlled operations are so frequently used that we give them their own notation; the notation |b control U |x, where
b and x are disjoint registers, means that on any standard basis vector the operator U is applied to the contents of register x only if all of the bits in b are 1. Writing ¬|b control U |x is a
convenient shorthand for the sequence X ⊗ · · · ⊗ X|b |b control U |x, y X ⊗ · · · ⊗ X|b. If we write a sequence of state transformations, they are intended to be applied in order. We allow programs
to declare local temporary registers using qubit t[n], provided that the program restores the qubits in these registers to their initial |0 state. This condition ensures that temporary qubits can be
reused for different executions of the program and that the overall storage requirement is bounded. Furthermore, it ensures that the temporary qubits do not remain entangled with the other registers.
6.3.2 Functions
We allow the introduction of new names for sequences of program steps. Unlike commands such as control , the command define does not do anything to the qubits; it simply defines
6.3 A Language for Quantum Implementations
Box 6.2 Language Summary
Terms U
Name for a unitary transform
U −1
Name for the inverse of U
Name for a register of qubits
x[k] qubit x[k] qubit t xi
Indicates number of qubits in register x Indicates x is a name for a register of temporary qubits initially set to |0 Indicates t is a name for a temporary qubit initially set to |0 Name for the ith
qubit of register x
A sequence of qubits of register x Indicates use of qubits named r
Statements U |r |b control ¬|b control |b1 |b0 control
( stands for an abstract statement) Apply U to qubits named r Controlled form of statement with control qubits b Statement controlled by negation of qubits b Statement controlled by two qubits named
b1 and b0
for i ∈ [a..b] (i) define Name|x[k] = 0 , 1 , . . . n Name|r
Perform the sequence of statements (a), (a + 1), . . . , (b), which depend on the classical parameter i
xi . . . xj
Name−1 |r
Introduce Name as a name for a statement that performs statements 0 through n to a k-qubit register x Applies the steps described in the definition of Name to register r. Applies the inverse of all
the steps described in the definition of Name in reverse order to register r. Since all quantum transformations are reversible, this transformation is always well defined.
a new function by telling the machine what sequence of commands a new function variable name represents. For example, addition modulo 2 with an incoming carry bit can be defined as Sum : |c, a, b → |
c, a, (a + b + c) mod 2 define Sum |c|a|b = |a control X|b |c control X|b. It operates on three single qubits by adding the value of a and the value of the carry c to the value of b. The program
would be drawn as the circuit
6 Quantum Versions of Classical Computations
a b c A corresponding carry operator is of the form Carry : |c, a, b, c → |c, a, b, c ⊕ C(a, b, c), where the carry C(a, b, c) is 1 if two or more of the bits a, b, c, are 1, that is, C(a, b, c) = (a
∧ b) ⊕ (c ∧ (a ⊕ b)). A program for Carry might look like define Carry |c, a, b, c = |a|b control X|c Compute a ∧ b in register c |a control X|b Compute a ⊕ b in register b |c|b control X|c Toggle
result c if c and current value of b |a control X|b Reset b to original value
(1) (2) (3) (4)
In this program, the register b temporarily holds, starting in step (2), the xor of the original values of a and b. In step (3), this value of register b means that c is toggled if c and exactly one
of the original values of a and b is 1. Register b is reset to its original value in step (4). Repetition and conditional execution of sequences of quantum state transformations can be controlled
using classical programming constructs. Only classical, not quantum, information can be used in the control structure. However, in quantum algorithms there is a choice as to which classical input
values are placed in quantum registers and which are used simply as part of the classical control structure. For instance, one program to add x to itself n times might take classical input n and use
it only as part of the classical control, while another might place n in an additional quantum register. The two programs would be of the form An : |x, 0 → |x, nx and A : |x, n, 0 → |x, n, nx
respectively. This distinction will be more important when we consider quantum algorithms that act on superpositions of input values; only input values placed in quantum registers, not input values
that are part of the classical control structure, can be in superposition. The definition of a new program may use the same program recursively provided that the recursion can be unwound classically:
recursive application of functions is allowed only as a shorthand for a classically specified sequence of quantum transformations. We can use the qubit t[n] construction recursively as long as the
recursion depth is bounded by a static classical constant.
6.4 Some Example Programs for Arithmetic Operations
6.4 Some Example Programs for Arithmetic Operations
The programs in this section implement quantum circuits for modular arithmetic and supporting operations. The operations shown are more general (though less efficient) than the modular arithmetic
implementations used as part of Shor’s algorithm; here the modulus, M, is placed in a quantum register so these algorithms can act on superpositions of different moduli as well as on superpositions
of other input. 6.4.1 Efficient Implementation of AND
We give a linear implementation of an m-way and computed into an output qubit using just one additional temporary qubit. First we define a supporting transformation Flip that generalizes the Toffoli
gate T . The transformation Flip acts on an m-qubit register a = |am−1 . . . a0 and an (m − 1)-qubit register b = |bm−2 . . . b0 and negates qubit bi exactly when the (i + 2)-conjunction i+1 j =0 aj
is true. We define Flip in terms of Toffoli gates T that perform bit flips on some of the qubits of register b depending on the contents of register a. define Flip |a[2]|b[1] = T |a1 |a0 |b
(base case m = 2)
define Flip |a[m]|b[m − 1] = T |am−1 |bm−3 |bm−2 F lip |am−2 . . . a0 |bm−3 . . . b0 T |am−1 |bm−3 |bm−2
(general case m ≥ 3)
An inductive argument shows that Flip, when defined in this way, behaves as described. The transformation Flip, when applied to an m-qubit register a and an (m − 1)-qubit register b, uses 2(m − 2) +
1 Toffoli gates T . Next we define an AndTemp operation on a (2m − 1)-qubit computational state that uses m − 2 additional qubits to compute an and on m bits. We will shortly use AndTemp to construct
an and operation that makes more efficient use of qubits. The operation AndTemp places the conjunction of the bits in register a in the single-qubit register b, making temporary use of the qubits in
register c. define AndT emp |a[2]|b[1] = T |a1 |a0 |b
(base case m = 2)
define AndT emp |a[m]|b[1]|c[m − 2] = (general case m ≥ 3) F lip |a (|b|c) Compute conjunction in b F lip |am−2 . . . a0 |c Reset c
(1) (2)
The parentheses in Flip |a (|b|c) indicate that Flip is applied to the m-qubit register a and the m − 1 qubit register that is the concatenation of registers b and c. By the definition of Flip,
6 Quantum Versions of Classical Computations
step (1) leaves the conjunction of the aj in b but changes the contents of c in the process. Step (2) undoes these changes to c. Since the first Flip uses 2(m − 2) + 1 Toffoli gates and the second
Flip uses 2(m − 3) + 1 Toffoli gates, AndTemp requires 4m − 8 gates. An attractive feature of this construction of AndTemp is that the m − 2 additional qubits in register c can be in any state at the
start of the computation, and they will be returned to their original states by the end of the computation so that we can use these to compute the m-way and if there are sufficiently many
computational qubits already (n ≥ 2m − 2). Clever use of this property of AndTemp will allow us to define an And on up to n qubits that uses only 1 additional temporary qubit. To construct the
conjunction using less space, we recursively use AndTemp on one half of the qubits, using the other half temporarily and vice versa. Thus, a general And operator that requires a single temporary
qubit can be defined as follows: Let k = m/2!, and j = k − 2 for even m, j = k − 1 for odd m. The operator And has the effect of flipping b if and only if all bits of a are 1. define And |a[1]|b[1] =
Cnot |a0 |b
Trivial unary case, m = 1
define And |a[2]|b[1] = T |a1 |a0 |b
Binary case, m = 2
define And |a[m]|b = General case, 3 ≤ m qubit t[1] use a temporary qubit AndT emp |am−1 . . . ak |t |aj . . . a0 AndT emp (|t|aj . . . a0 ) |b |ak+j −2 . . . ak AndT emp |am−1 . . . ak |t |aj . . .
(1) (2) (3)
Step (1) computes the conjunction of the high-order bits using the low-order bits temporarily. In step (2) we compute the conjunction of the low-order bits using the high-order bits temporarily.
Since AndTemp uses a linear number of gates, so does And. 6.4.2 Efficient Implementation of Multiply Controlled Single-Qubit Transformations
The linear implementation of And given in the last section enables a linear implementation of the multiply controlled single-qubit transformations ix Q of section 5.4.3. Given an m-bit bit string z,
let X(z) be the transformation X (z) = X ⊗ I · · · ⊗ X ⊗ X, which contains an X at any position where z has a 0 bit, and an I at any position where z has a 1 bit. We implement the transformation
Conditional(z, Q), which acts on qubit b with single-qubit transformation Q if and only if the bits of register a match bit string z. define Conditional(z, Q) |a[m]|b[1] = qubit t use a temporary
6.4 Some Example Programs for Arithmetic Operations
X(z) |a And |a|t |t control Q|b And |a|t X(z) |a
if a and z match, a becomes all 1’s and bits of a if a matched z, apply Q to b uncompute and uncompute match
(2) (3) (4) (5) (6)
This construction uses 2 additional qubits and only O(m) simple gates. When z is 11 . . . 1 and Q = X, then Conditional(z, Q) is simply the And operator of the previous section. 6.4.3 In-Place
We define an Add transformation that adds two n-bit binary numbers. The transformation Add : |c|a|b → |c|a|(a + b + c) mod 2n+1 , where a and c are n-qubit registers and b is an (n + 1)-qubit
register, adds two n-bit numbers, placed in registers a and b, and puts the result in register b when register c and the highest order bit, bn , of register b are initially 0. The implementation of
Add uses n recursion steps, where n is the number of bits in the numbers to be added. The ith step in the recursion adds the n − i highest bits, with the carry in the lowest of these n − i highest
bits having first been computed. The construction uses Sum and Carry defined in section 6.3.2. We consider the two cases n = 1 and n > 1: define Add |c|a|b[2] = Carry |c|a|b0 |b1 Sum |c|a|b0
base case n = 1 carry in high bit of b sum in low bit of b
define Add |c[n]|a[n]|b[n + 1] = Carry |c0 |a0 |b0 |c1 Add |cn−1 · · · c1 |an−1 · · · a1 |bn · · · b1 Carry −1 |c0 |a0 |b0 |c1 Sum |c0 |a0 |b0
general case n > 1 compute the carry for low bits (3) add n − 1 highest bits (4) uncompute the carry (5) compute the low order bit (6)
(1) (2)
Step (5) is needed to ensure that the carry register is reset to its initial value. The Carry−1 operator is implemented by running, in reverse order, the inverse of each transformation in the
definition of the Carry operator. 6.4.4 Modular Addition
The following program defines modular addition for n-bit binary numbers a and b. AddMod |a|b|M → |a|(b + a) mod M|M, where the registers a and M have n qubits and b is an n + 1-qubit register. When
the highest order bit, bn , of register b is initially 0, the transformation AddMod replaces the contents of register
6 Quantum Versions of Classical Computations
b with b + a mod M, where M is the contents of register M. The contents of registers a and M (and the temporaries c and t) are unchanged by AddMod. The construction makes use of the Add
transformation we defined in the previous section. define AddMod |a[n]|b[n + 1]|M[n] = qubit t use a temporary bit qubit c[n] storage for the n-bit carry Add |c|a|b add a to b Add −1 |c|M|b subtract
M from b toggle t when underflow |bn control X|t |t control Add |c|M|b when underflow, add M back to b subtract a again Add −1 |c|a|b reset t ¬|bn control X|t Add |c|a|b construct final result
(1) (2) (3) (4) (5) (6) (7) (8) (9)
Classically, steps (3) through (6) are all that are needed. In (4) if M > b, subtracting M from b causes bn to become 1. Steps (7) through (9) are needed to reset t. Note that each Add operation
internally resets |c back to its original value. The condition 0 ≤ a, b < M is necessary, since for values outside that range, an operation that sends |a, b, M to |a, b + a mod M, M is not reversible
and therefore not unitary. If this condition does not hold, for example if b ≥ M initially, then the final value of b may still be greater than M, since the algorithm subtracts M at most once. 6.4.5
Modular Multiplication
The TimesMod transformation multiplies two n-bit binary numbers a and b modulo another n-bit binary number M. The transformation TimesMod |a|b|M|p → |a|b|M|(p + ba) mod M is defined by the following
program that successively adds bi 2i a mod M to the result register p. It is assumed that a < M, but b can be arbitrary. Both a and p are (n + 1)-qubit registers; the additional high-order bit is
needed for intermediate results. The operation Shift simply cyclically shifts all bits by 1, which can easily be done by swapping bits ai+1 with ai for all i, starting with the high-order bits. Shift
acts as multiplication by 2, since the high-order bit of a will be 0. define TimesMod |a[n + 1]|b[k]|M[n]|p[n + 1] = qubit t[k] use k temporary bits qubit c[n] carry register for addition for i ∈ [0
. . . k − 1] iterate through bits of b Add −1 |c|M|a subtract M from a ti = 1 if M > a |an control X|ti add M to a if ti is set |ti control Add |c|M|a
(1) (2) (3) (4) (5) (6)
6.4 Some Example Programs for Arithmetic Operations
|bi control AddMod |an−1 · · · a0 |p|M Shif t |a for i ∈ [k − 1 . . . 0] Shif t −1 |a |ti control Add −1 |c|M|a |an control X|ti Add |c|M|a
add a to p if bi is set (7) multiply a by 2 (8) clear t and restore a (9) divide a by 2 (10) perform all steps in reverse (11) clear ith bit of t (12) add M to a (13)
Lines (4)–(6) compute the a mod M. The second loop, (9)–(13), undoes all the steps of the first one, (3)–(8), except the conditional addition to the output p (line 8). Note that modular
multiplication cannot be defined as an in-place operation because the transformation that sends |a, b, M to |a, ab mod M, M is not unitary: both |2, 1, 4 and |2, 3, 4 would be mapped to the same
state |2, 2, 4. 6.4.6 Modular Exponentiation
We implement modular exponentiation, ExpMod |a|b|M|0 → |a|b|M|a b mod M using O(n2 ) temporary qubits where a, b, and M are n-qubit registers. First, we define two transformations we will use in our
implementation of ExpMod, an n-bit copy and an n-bit modular squaring function. The Copy transformation Copy : |a|b → |a|a ⊕ b copies the contents of an n-bit register a to another n-bit register b
whenever the register b is initialized to 0. The operation Copy can be implemented as bitwise xor operations between the corresponding bits in registers a and b. define Copy |a[n]|b[n] = for i ∈
[0..n − 1] bit-wise xor a with b |ai control X|bi The modular squaring operation SquareMod SquareMod : |a|M|s → |a|M|(s + a 2 ) mod M places the result of squaring the contents of register a, modulo
the contents of register M, in the register s. define SquareMod |a[n + 1]|M[n]|s[n + 1] = qubit t[n] use n temporary bits copy n bits of a to t Copy |an−1 · · · a0 |t TimesMod |a|t|M|s compute a 2
mod M. Copy−1 |an−1 · · · a0 |t clear t
(1) (2) (3) (4)
6 Quantum Versions of Classical Computations
Finally, we can give a recursive definition of modular exponentiation with the signature ExpMod : |a|b|M|p|e → |a|b|M|p|e ⊕ (pa b ) mod M define ExpMod |a[n + 1]|b[1]|M[n]|p[n + 1]|e[n + 1] = base
case result is p ¬|b0 control Copy |p |e result is pa 1 mod M |b0 control T imesMod |a|p|M|e
(1) (2)
define ExpMod |a[n + 1]|b[k]|M[n]|e[n + 1] = general case k > 1 2 qubit u[n + 1] for a mod M qubit v[n + 1] for (p ∗ a b0 ) mod M v = pa 0 mod M ¬|b0 control Copy |p |v |b0 control T imesMod |a|p|M|e
e = pa 1 mod M SquareMod |a|M|u compute a 2 mod M in u compute v(a 2 )b/2 mod M ExpMod |u|bk−1 · · · b1 |M|v|e −1 uncompute u SquareMod |a|M|u |b0 control T imesMod −1 |a|p|M|e uncompute e uncompute
v ¬|b0 control Copy −1 |p |v
(3) (4) (5) (6) (7) (8) (9) (10) (11)
The program unfolds recursively k times, once for each bit of b. Steps (5)–(8) and the base case (1) and (2) perform the classical computation. The division b/2 in step (8) is integer division. Each
recursive step requires two temporary registers of size n + 1 that are reset at the end in steps (9) and (11). Thus, the algorithm requires a total of 2(k − 1)(n + 1) temporary qubits. The algorithm
for modular multiplication given in 6.4.5 requires O(n2 ) steps to multiply two n-bit numbers. Thus, the modular exponentiation requires O(kn2 ) steps. But more efficient multiplication algorithms
are possible and this complexity can be reduced to O(kn log n log log n) using the Schönhage-Strassen multiplication algorithm. 6.5 References
See Feynman’s Lectures on Computation [121] for an account of reversible computation and its relation to the energy of computation and information. In his 1980 paper [270], Tommaso Toffoli shows that
any (classical) function with finite domain and range can be realized as a reversible function using additional bits. To prove this theorem, he introduces a family of controlled gates θ (n) that we
write as n−1 X. The instance 2 X is generally known as the Toffoli gate. The Fredkin gate was first described as a billiard-ball gate in [124]. Reversible classical computations were first discussed
by Bennett in [39], where he constructs reversible Turing machines from nonreversible ones. In [40] Bennett discusses the recursive decomposition presented in section 6.2. Bennett’s argument uses
multitape Turing machines instead of registers.
6.6 Exercises
Deutsch [99] shows how to construct reversible quantum gates for any classically computable function. Deutsch defines, and Yao [287] and Bernstein and Vazirani [48] refine, the definition of a
universal quantum Turing machine. This construction assumes a sufficient supply of qubits that correspond to the finite but unbounded tape of a Turing machine. Section 7.2 discusses quantum Turing
machines briefly. The implementations of the m-way and and Conditional(z, Q) are due to Barenco et al. [31], who also describe a O(n2 )-gate circuit for Conditional(z, Q) that uses no additional
qubits. Vedral, Barenco, and Ekert [275] give a comprehensive definition of quantum circuits for arithmetic operations. In particular, they show how modular exponentiation a x mod M can be done with
fewer temporary qubits than the version presented here for the case where a and M are classical and relative prime. Fast multiplication was first described in Schönhage and Strassen’s paper [245].
Descriptions in English can be found in most books on algorithms such as [182]. 6.6 Exercises Exercise 6.1. Show that it is impossible to perform a reversible and operation with only two bits.
Exercise 6.2. a. Construct a classical Boolean circuit with three input bits and two output bits that computes
as a two-bit binary number the number of 1 bits in the input. b. Convert your circuit into a classical reversible one. c. Give an equivalent quantum circuit. Exercise 6.3. Given two-qubit registers |
c and |a and three-qubit register |b, construct the
quantum circuit that computes Add |c |a |b. Exercise 6.4. a. Define a quantum algorithm that computes the maximum of two n-qubit registers. b. Explain why such an algorithm requires one additional
qubit that cannot be reused, that is, the
algorithm will have to have 2n + 1 input and output qubits. Exercise 6.5. Show how to construct an efficient reversible circuit for every classical circuit along the lines of the construction of
section 6.2.2, but without the assumption that t is a power of 2. Give the time and space bounds for your construction.
Introduction to Quantum Algorithms
The previous chapter used quantum computers in an essentially classical manner; in each of the algorithms of part I, if the quantum computer starts in a standard basis state, the state after every
step of the computation is also a standard basis vector, not a superposition, so the computational state always has an obvious interpretation as a classical state. These algorithms do not make use of
the ability of qubits to be in superposition or of sets of qubits to be entangled. In part I, we showed that quantum computation is at least as powerful as classical computation: for any classical
circuit, there exists a quantum circuit that performs the same computation with similar efficiency. We now turn our attention to showing that quantum computation is more powerful than classical
computation. Part II is concerned with truly quantum algorithms, quantum computations that outperform classical ones. The algorithms in this part make use of the simple gates used in the quantum
analogs of classical computations of chapter 6, and they also use more general unitary transformations that have no classical counterpart. Geometrically, all quantum state transformations on n qubits
are rotations of 2n -dimensional complex state space. Nonclassical quantum computations involve rotations to nonstandard bases, whereas, as explained in section 6.1, the steps of any classical
computation merely permute the standard basis elements. Section 5.4.4 showed how any quantum transformation can be implemented in terms of simple gates. We now concentrate on quantum transformations
that can be implemented efficiently and how such transformations can be used to speed up certain types of computation. The key to designing a truly quantum algorithm is figuring out how to use these
nonclassical basic unitary gates to perform a computation more efficiently. In this and the next few chapters, all discussion is in terms of the standard circuit model of quantum computation we
described in section 5.6. We use the language introduced in 6.3 to specify general sequences of simple quantum gates as we did when we discussed quantum analogs of classical computations in chapter
6, but now we allow basic unitary transformations that have no classical counterpart. The way efficiency is computed in the quantum circuit model resembles the way it is computed classically, which
makes it easy to compare the efficiency of quantum and classical algorithms. Early quantum algorithms were designed in the circuit model, but it is not the only, or necessarily the best, model to use
for quantum algorithm design. Other models of quantum computation exist, and algorithms in these models have a different flavor. In chapter 13
7 Introduction to Quantum Algorithms
we describe alternative models that have been shown to be equivalent in terms of computational power to the standard circuit model of quantum computation. In addition to having led to new types of
quantum algorithms, these models underlie some promising efforts to build quantum computers. In the standard circuit model of quantum computation, the efficiency of a quantum algorithm is computed in
terms of the circuit complexity, the number of basic gates together with the number of qubits used, of the circuits used to implement the algorithm. Sometimes we are interested in the efficient use
of other resources, so we will measure, say, the number of bits or qubits transmitted between two parties to carry out a task, or the number of calls to a (usually expensive to compute) function.
Such functions are often called black box or oracle functions, since it is assumed that one does not have access to the inner workings of the computation of this function, only to the result of its
application. These various notions of complexity are discussed in section 7.2. Section 7.1 begins the chapter with a general discussion of computing with superpositions, including the notion of
quantum parallelism. Section 7.2 describes various notions of complexity including circuit complexity, query complexity, and communication complexity. Deutsch’s algorithm of section 7.3.1 provides
the first example of a truly quantum algorithm, one for which there is no classical analog. The quantum subroutines of section 7.4 pave the way for the description in section 7.5 of four simple
quantum algorithms, including Simon’s algorithm, which inspired Shor’s factoring algorithm. While the problems these algorithms solve are not so interesting, a study of the techniques they use will
aid in understanding Grover’s algorithm and Shor’s algorithm. Section 7.7 defines quantum complexity and describes relations between quantum complexity classes and classical complexity classes. The
final section of the chapter, section 7.8, discusses quantum Fourier transforms, which, in one form or another, are used in most of the algorithms described in this book. 7.1 Computing with
Many quantum algorithms use quantum analogs of classical computation as at least part of their computation. Quantum algorithms often start by creating a quantum superposition and then feeding it into
a quantum version Uf of a classical circuit that computes a function f . This setup, called quantum parallelism, accomplishes nothing by itself—any algorithm that stopped at this point would have no
advantage over a classical algorithm—but this construction leaves the system in a state that quantum algorithm designers have found a useful starting point. Both Shor’s algorithm and Grover’s
algorithm begin with the quantum parallelism setup. 7.1.1 The Walsh-Hadamard Transformation
Quantum parallelism, the first step of many quantum algorithms, starts by using the WalshHadamard transformation, a generalization of the Hadamard transformation, to create a superposition of all
input values. Recall from section 5.2.2 that the Hadamard transformation H applied to |0 creates a superposition state √12 (|0 + |1). Applied to n qubits individually, all in state |0, H generates a
superposition of all 2n standard basis vectors, which can be viewed as the binary
7.1 Computing with Superpositions
representation of the numbers from 0 to 2n − 1: (H ⊗ H ⊗ · · · ⊗ H )|00 . . . 0 1 = √ ((|0 + |1) ⊗ (|0 + |1) ⊗ · · · ⊗ (|0 + |1)) 2n 1 = √ (|0 . . . 00 + |0 . . . 01 + |0 . . . 10 + · · · + |1 . . .
11) 2n 2 −1 1 |x. =√ 2n x=0 n
Box 7.1 Hamming Weight and Hamming Distance
The Hamming distance dH (x, y) between two bit strings x and y is the number of bits in which the two strings differ. The Hamming weight dH (x) of a bit string x is the number of 1-bits in x, which
is equal to the Hamming distance between x and the bit string consisting of all zeros: dH (x) = dH (x, 0). For two bit strings x and y, x · y is the number of common 1 bits in x and y, x ⊕ y is the
bitwise exclusive-or, and x ∧ y is the bitwise and of x and y. The bitwise exclusive-or ⊕ can also be viewed as bitwise modular addition of the strings x and y, viewed as elements of Zn2 . We use ¬x
to denote the bit string that flips 0 and 1 throughout bit string x, so ¬x = x ⊕ 11 . . . 1. The following identities hold: x · y = dH (x ∧ y) (x · y mod 2) =
1 1 − (−1)x·y 2
x · y + x · z =2 x · (y ⊕ z) dH (x ⊕ y) =2 dH (x) + dH (y) where the notation x =2 y means equality modulo 2; it is shorthand for x mod 2 = y mod 2. Note that n −1 2
(−1)x·x = 0
since the successive (2i and 2i + 1) terms cancel. Finally, we note that n −1 2
(−1)x·y =
2n if y = 0 0 otherwise.
7 Introduction to Quantum Algorithms
The transformation W = H ⊗ H ⊗ · · · ⊗ H , which applies H to each of the qubits in an n-qubit state, is called the Walsh, or Walsh-Hadamard, transformation. Using N = 2n , we may write N−1 1 |x. W |
0 = √ N x=0
Another way of writing W is useful for understanding the effect of W in quantum algorithms. In the standard basis, the matrix for the n-qubit Walsh-Hadamard transformation is a 2n × 2n matrix W with
entries Wrs , such that 1 Wsr = Wrs = √ (−1)r·s , 2n where r · s is the number of common one-bits in s and r (see box 7.1) and both r and s range from 0 to 2n − 1. To see this equality, note that W
(|r) = Wrs |s. s
Let rn−1 . . . r0 be the binary representation of r, and sn−1 . . . s0 be the binary representation of s. W (|r) = (H ⊗ · · · ⊗ H )(|rn−1 ⊗ · · · ⊗ |r0 ) 1 = √ (|0 + (−1)rn−1 |1) ⊗ · · · ⊗ (|0 + (−1)
r0 |1) 2n 2 −1 1 =√ (−1)sn−1 rn−1 |sn−1 ⊗ · · · ⊗ (−1)s0 r0 |s0 2n s=0 n
2 −1 1 (−1)s·r |s. =√ 2n s=0 n
7.1.2 Quantum Parallelism
Any transformation of the form Uf = |x, y → |x, y ⊕ f (x) from section 6.1 is linear and therefore acts on a superposition ax |x of input values as follows: Uf : ax |x, 0 → ax |x, f (x). x
Consider the effect of applying Uf to the superposition of values from 0 to 2n − 1 obtained from the Walsh transformation: N−1 N −1 1 1 |x|0 → √ |x|f (x). Uf : (W |0) ⊗ |0 = √ N x=0 N x=0
7.2 Notions of Complexity
After only one application of Uf , the superposition now contains all of the 2n function values f (x) entangled with their corresponding input value x. This effect is called quantum parallelism.
Since n qubits enable us to work simultaneously with 2n values, quantum parallelism in some sense circumvents the time/space trade-off of classical parallelism through its ability to hold
exponentially many computed values in a linear amount of physical space. However, this effect is less powerful than it may initially appear. To begin with, it is possible to gain only limited
information from this superposition: these 2n values of f are not independently accessible. We can gain information only from measuring the states, but measuring in the standard basis will project
the final state onto a single input /output pair |x, f (x), and a random one at that. The following simple example uses the basic setup of quantum parallelism and illustrates how useless the raw
superposition arising from quantum parallelism is on its own, without performing any additional transformations. Example 7.1.1 The controlled-controlled-not (Toffoli) gate, T , of section 5.4.3
computes the
conjunction of two values: x
Take as input the superposition of all possible bit combinations of x and y together with a singlequbit register, initially set to |0, to contain the output. We use quantum parallelism to construct
this input state in the standard way: 1 1 W (|00) ⊗ |0 = √ (|0 + |1) ⊗ √ (|0 + |1) ⊗ |0 2 2 1 (|000 + |010 + |100 + |110). 2 Applying the Toffoli gate T to this superposition of inputs yields =
1 (|000 + |010 + |100 + |111). 2 This superposition can be viewed as a truth table for conjunction. The values of x, y, and x ∧ y are entangled in such a way that measuring in the standard basis will
give one line of the truth table. Computing the and using quantum parallelism, and then measuring in the standard basis, gives no advantage over classical parallelism: only one result is obtained
and, worse still, we cannot even choose which result we get. T (W |00 ⊗ |0) =
7 Introduction to Quantum Algorithms
7.2 Notions of Complexity
Complexity theory analyzes the amount of resources, most often time or space, asymptotically required to perform a computation. Turing machines provide a formal model of computation often used for
reasoning about computational complexity. Early work by Benioff, improved by David Deutsch, then Andrew Yao, then Ethan Bernstein and Umesh Vazirani, defined quantum Turing machines and enabled the
formalization of quantum complexity and comparison with classical results. In both quantum and classical settings, other methods, such as the circuit model, provide alternative means for formalizing
complexity notions. Because most research on quantum algorithms discusses complexity in terms of quantum circuit complexity, we have chosen to take that approach in this book. Another common
complexity measure used in the analysis of quantum algorithms is quantum query complexity, which will be discussed in section 7.2.1. Furthermore, there are a number of complexity measures used for
analyzing quantum communication protocols. Communication complexity will be discussed in section 7.2.2. A circuit family C = {Cn } consists of circuits Cn indexed by the maximum input size for that
circuit; the circuit Cn handles input of size n (bits or qubits). The complexity of a circuit C is defined to be the number of simple gates in the circuit, where the set of simple gates under
consideration must be specified. Any of the finite sets of gates discussed in section 5.5 may be used, or the infinite set consisting of all single qubit operations together with the Cnot may be
used. The circuit complexity, or time complexity, of a family of circuits C = {Cn } is the asymptotic number of simple gates in the circuits expressed as a function of the input size; the circuit
complexity for a circuit family C = {Cn } is O(f (n)) if the size of the circuit is bounded by O(f (n)): the function t (n) = |Cn | satisfies t (n) ∈ O(f (n)). Any of the simple gate sets mentioned
earlier give the same asymptotic circuit complexity. Circuit complexity models are nonuniform in that different, larger circuits are required to handle larger input sizes. Both quantum and classical
Turing machines, by contrast, propose a single machine that can handle arbitrarily large input. The nonuniformity of circuit models makes circuit complexity more complicated to define than Turing
machine models because of the following issue: complexity can be hidden in the complexity of constructing the circuits Cn themselves, even if the size of the circuits Cn is asymptotically bounded. To
get sensible notions of complexity, in particular to obtain circuit complexity measures similar to Turing machine based ones, a separate uniformity condition must be imposed. Both quantum and
classical circuit complexity use similar uniformity conditions. In addition to uniformity, a requirement that the behavior of the circuits Cn in a circuit family C behave in a consistent manner is
usually imposed as well. This consistency condition is usually phrased in terms of a function g(x), and says that all circuits Cn ∈ C that can take x as input give g(x) as output. This condition is
sometimes misunderstood to include restrictions on the sorts of functions g(x) a consistent circuit family can compute. For this reason, and to generalize easily to the quantum case, we phrase this
same consistency condition without explicit reference to a function g(x).
7.2 Notions of Complexity
Consistency Condition A quantum or classical circuit family C is consistent if its circuits Cn give
consistent results: for all m < n: applying circuit Cn to input x of size m must give the same result as applying Cm to that input. The most common uniformity condition, and the one we impose here,
is the polynomial uniformity condition. Uniformity condition A quantum or classical circuit family C = {Cn } is polynomially uniform if
there exists a polynomial-time classical algorithm that generates the circuits. In other words, C is polynomially uniform if there exists a polynomial f (n) and a classical program that, given n,
constructs the circuit Cn in at most O(f (n)) steps.
The uniformity condition means that the circuit construction cannot be arbitrarily complex. The relation between the circuit complexity of polynomially uniform, consistent circuit families and the
Turing machine complexity is understood for both the classical and quantum case. In the classical case, for any classical function g(x) computable on a Turing machine in time O(f (n)), there is a
polynomially uniform, consistent classical circuit family that computes g(x) in time O(f (n) log f (n)). Conversely, a polynomially uniform, consistent family of Boolean circuits can be simulated
efficiently by a Turing machine. In the quantum case, Yao has shown that any polynomial time computation on a quantum Turing machine can be computed by a polynomially uniform, consistent family of
polynomially sized quantum circuits. As in the classical case, demonstrating that any polynomially uniform, consistent family of quantum circuit can be simulated by a quantum Turing machine is
straightforward. Since we are not concerned with sublinear complexity differences, asymptotic differences of at most a polynomial in log(f (n), we discuss quantum complexity in terms of circuit
complexity with the polynomial uniformity condition instead of using quantum Turing machines. 7.2.1 Query Complexity
The earliest quantum algorithms solve black box, or oracle, problems. A classical black box out puts f (x) upon input of x. A quantum black box behaves like Uf , outputting x αx |x, f (x) ⊕ y upon
input of x αx |x|y. Black boxes are theoretical constructs; they may or may not have an efficient implementation. For this reason, they are often called oracles. The black box terminology emphasizes
that only the output of a black box can be used to solve the problem, not anything about its implementation or any of the intermediate values computed along the way; we cannot see inside it. The most
common type of complexity discussed with respect to black box problems is query complexity: how many calls to the oracle are required to solve the problem. Black box algorithms of low query
complexity, algorithms that solve a black box problem with few calls to the oracle, are only of practical use if the black box has an efficient implementation. The black box approach is very useful,
however, in establishing lower bounds on the circuit complexity of a problem. If the query complexity is (N )—in other words, at least (N ) calls to the oracle are required—then the circuit
complexity must be at least (N ).
7 Introduction to Quantum Algorithms
Black boxes have been used to establish lower bounds on the circuit complexity for quantum algorithms, but their first use in quantum computation was to show that the quantum query complexity of
certain black box problems was strictly less than the classical query complexity: the number of calls to a quantum oracle needed to solve certain problems is strictly less than the required number of
calls to a classical oracle to solve the same problem. The first few quantum algorithms solve black box problems: Deutsch’s problem (section 7.3.1), the Deutsch-Jozsa problem (section 7.5.1), the
Bernstein-Vazirani problem (section 7.5.2), and Simon’s problem √ (section 7.5.3). The most famous query complexity result is Grover’s: that it takes only O( N ) calls to a quantum black box to solve
an unstructured search problem over N elements, where as the classical query complexity of unstructured search is (N ). Grover’s algorithm, and the extent to which its superior query complexity
provide practical benefit, are discussed in chapter 9. 7.2.2 Communication Complexity
For communication protocols, common complexity measures include the minimum number of bits, or the minimum number of qubits, that must be transmitted to accomplish a task. Bounds on other resources,
such as the number of bits of shared randomness or, in the quantum case, the number of shared EPR pairs, may or may not be of interest as well. Various notions of communication complexities exist,
depending on whether the task requires quantum or classical information to be transmitted, whether qubits or bits can be sent, and what entanglement resources can be used. We have already seen some
examples of communication complexity results. The complexity notion of interest in dense coding is the number of qubits that must be sent in order to communicate n bits of information. While
classical protocols require the transmission of n bits, only n/2 qubits need to be sent in order to communicate n bits of information. The other resource used in dense coding, the number of EPR
pairs, sometimes called ebits in the communication protocol context, required in the setup is also n/2. Teleportation, by contrast, aims to transmit quantum information using a classical channel that
can only send bits not qubits. The relevant complexity notion is the number of bits needed to transmit n qubits worth of quantum information. Using quantum teleportation, 2n bits can be used to
transmit the state of n qubits. The number of ebits used to teleport n qubits is n. The distributed computation protocol described in section 7.5.4 does not require the transmission of any bits or
qubits, but it requires n ebits in order to accomplish a task concerning exponentially large bit strings, bit strings of length N = 2n . A classical solution to this problem requires a minimum of N/2
bits to be transmitted. Since this book is concerned primarily with quantum computation not quantum communication, we will not discuss quantum communication complexity again except briefly in section
13.5. 7.3 A Simple Quantum Algorithm
We are now in a position to describe our first truly quantum algorithm. This algorithm, due to David Deutsch in 1985, was the first result that showed that quantum computation could
7.3 A Simple Quantum Algorithm
outperform classical computation. The problem Deutsch’s algorithm solves is a black box problem. Deutsch showed that his quantum algorithm has better query complexity than any possible classical
algorithm: it can solve the problem with fewer calls to the black box than is possible classically. While the problem it solves is too trivial to be of practical interest, the algorithm contains
simple versions of a number of key elements of intrinsically quantum computation, including the use of nonstandard bases and quantum analogs of classical functions applied to superpositions, that
will recur in more complex quantum algorithms. 7.3.1 Deutsch’s Problem
Given a Boolean function f : Z2 → Z2 , determine whether f is constant.
Deutsch’s Problem
Deutsch’s quantum algorithm, described in this section, requires only a single call to a black box for Uf to solve the problem. Any classical algorithm requires two calls to a classical black box for
Cf , one for each input value. The key to Deutsch’s algorithm is the nonclassical ability to place the second qubit of the input to the black box in a superposition. The subroutine of section 7.4.2
generalizes this trick. Recall from 6.1 that Uf for a single bit function f takes two qubits of input and produces two qubits of output. On input |x|y, Uf produces |x|f (x) ⊕ y, so when |y = |0, the
result of applying Uf is |x|f (x). The algorithm applies Uf to the two-qubit state |+|−, where the first qubit is a superposition of the two values in the domain of f , and the third qubit is in the
superposition |− = √12 (|0 − |1). We obtain 1 (|0 + |1)(|0 − |1) Uf (|+|−) = Uf 2 =
1 (|0(|0 ⊕ f (0) − |1 ⊕ f (0)) + |1(|0 ⊕ f (1) − |1 ⊕ f (1))) . 2
In other words, 1 |x(|0 ⊕ f (x) − |1 ⊕ f (x)). 2 x=0 1
Uf (|+|−) =
When f (x) = 0,
√1 (|0 ⊕ f (x) − |1 ⊕ f (x)) becomes √1 (|0 − |1) = 2 2 √1 (|0 ⊕ f (x) − |1 ⊕ f (x)) becomes √1 (|1 − |0) = −|−. Therefore 2 2
|−. When f (x) = 1,
1 1 1 1 |x|− = √ (−1)f (x) |x|−. √ 2 x=0 2 x=0
For f constant, (−1)f (x) is just a physically meaningless global phase, so the state is simply |+|−. For f not constant, the term (−1)f (x) negates exactly one of the terms in the superposition so,
up to a global phase, the state is |−|−. If we apply the Hadamard transformation H to the first qubit and then measure it, with certainty we obtain |0 in the first case and |1 in the second
7 Introduction to Quantum Algorithms
case. Thus with a single call to Uf we can determine, with certainty, whether f is constant or not. We now have our first example of a quantum algorithm that outperforms any classical algorithm! It
may surprise readers that this algorithm succeeds with certainty; the most commonly remembered aspect of quantum mechanics is its probabilistic nature, so people often naively expect that anything
done with quantum means must be probabilistic, or at least that anything that exhibits peculiarly quantum properties must be probabilistic. We already know, from our study of quantum analogs to
classical computations, that the first of these expectations does not hold. The algorithm for Deutsch’s problem shows that even inherently quantum processes do not have to be probabilistic. 7.4
Quantum Subroutines
We now look at some useful nonclassical operations that can be performed on a quantum computer. The first subroutine, discussed in section 7.4.2, is commonly used; in particular, it is part of
Grover’s search algorithm as well as being used in most of the simpler quantum algorithms of section 7.5, including the Deutsch-Jozsa problem, a multiple bit generalization of Deutsch’s problem. To
illustrate further how to work with quantum superpositions, we describe a couple of other subroutines, though these subroutines are not used elsewhere in the book. 7.4.1 The Importance of
Unentangling Temporary Qubits in Quantum Subroutines
Chapter 6, when describing the constructions of section 6.2, emphasized the importance of uncomputing temporarily used bits to conserve space in classical computations. In quantum computation,
uncomputing qubits used temporarily as part of subroutines is crucial even when conserving space and reusing qubits is not an issue; failing to uncompute temporary qubits can result in entanglement
between the computational qubits and the temporary qubits, and in this way can destroy the calcula tion. More specifically, if a subroutine claims to compute state i αi |xi , it is not okay if it
actually computes i αi |xi |yi and throws away the qubits storing |yi unlessthere is no entanglement α |x ⊗ |y , which between the two registers. There is no entanglement if i αi |xi |yi = i i i i
can happen only if |yi = |yj for all i and j . In general, the states i αi |xi and i αi |xi |yi behave quite differently, even if we have access only to the first register of the second state.
Chapter 10, which discusses quantum subsystems, provides the means of talking about the differences between these two situations without looking at the consequences for computation. In this section,
we illustrate the difference by showing how using the first state when expecting the second can mess up computation. In particular, we show that if we replace the black box Uf used in Deutsch’s
problem with the black box for Vf that outputs Vf : |x, t, y → |x, t ⊕ x, y ⊕ f (x), Deutsch’s algorithm no longer works.
7.4 Quantum Subroutines
Begin with qubit |t in the state |0 and, as before, the first qubit in the state |+, and the third qubit in the state |−. Apply Vf to obtain 1 1 1 1 Vf (|+|0|−) = Vf √ |x|0|− = √ (−1)f (x) |x|x|−. 2
x=0 2 x=0 The first qubit is now entangled with the second qubit. Because of this entanglement, applying H to the first qubit and then measuring it no longer has the desired effect. For example, when
f is constant, the state is (|00 + |11)|−, and applying H ⊗ I ⊗ I results in the state 1 (|00 + |10 + |01 − |11)|−. 2 The second and fourth terms canceled before, but they do so no longer. Now there
is an equal chance of measuring the first qubit as |0 or |1. A similar calculation shows that when the function is not constant, there is also an equal chance of measuring the first qubit as |0 or |
1. Thus, we can no longer distinguish the two cases. Entanglement with the qubit |t has destroyed the quantum computation. Had Vf properly uncomputed t so that at the end of the calculation it was in
state |0, the algorithm would still work properly. For example, for f constant, we would have state 1 (|00 + |10 + |00 − |10)|− 2 in which case the appropriate terms would cancel to yield (|00)|−. If
a quantum subroutine claims to produce a state |ψ, it must not produce a state that looks like |ψ but is entangled with other qubits. In particular, if a subroutine makes use of other qubits, by the
end of the subroutine these qubits must not be entangled with the other qubits. For this reason, the following quantum subroutines are careful to uncompute any auxiliary qubits so that at the end of
the algorithm they are always in state |0. 7.4.2 Phase Change for a Subset of Basis Vectors
Change the phase of terms in a superposition |ψ = ai |i depending on whether i is in a subset X of {0, 1, . . . , N − 1} or not. More specifically, we wish to find an efficient implementation of the
quantum transformation
SX :
N−1 x=0
ax |x →
ax eiφ |x +
ax |x.
x ∈X
Section 5.4 explained how to realize an arbitrary unitary transformation without regard to φ efficiency. Applying that algorithm blindly would give an implementation of SX using more
7 Introduction to Quantum Algorithms
than N = 2n simple gates. This section shows how, for any efficiently computable subset X, the φ φ transformation SX can be implemented efficiently. An efficiently implementable SX is used in some of
the quantum algorithms we describe later that outperform classical ones. φ We can hope to implement SX efficiently only if there is an efficient algorithm for computing membership in X: the Boolean
function f : Z2n → Z2 , where 1 if x ∈ X f (x) = 0 otherwise must be efficiently computable, say polynomial in n. Most subsets X do not have this property. For subsets X with this property, the main
result of chapter 6 implies that there is an efficient φ quantum circuit for Uf . Given such an implementation for Uf , we can compute SX using a few additional steps. We use Uf to compute f in a
temporary qubit, use the value in that qubit to effect the phase change, and then uncompute f in order to remove any entanglement between the temporary qubit and the rest of the state. define Phasef
(φ)|x[k] = qubit a[1] a temporary bit compute f in a Uf |x, a K(φ/2)|a T (−φ/2)|a Uf−1 |x, a uncompute f Since T (−φ/2)K(φ/2) =
1 0 0 eiφ
(1) (2) (3) (4) (5)
where K and T are the single-qubit operations introduced in 5.4.1, together steps (3) and (4) shift the phase by eiφ if and only if bit a is one. Strictly speaking, we do not need to do step (3) at
all since it is a physically meaningless global phase shift: performing step (3) merely makes it easier to see that we get the desired result. Alternatively, we could replace steps (3) and (4) by a
single step 1 K(φ)|a|xi , where i can be any of the qubits in register x, since placing a phase in any term of the tensor product is the same as placing it in any other term. We need to uncompute Uf
in step (5) to remove the entanglement between register |x and the temporary qubit so that |x ends up in the desired state, no longer entangled with the temporary qubits. Special case φ = π The
important special case φ = π has an alternative, surprisingly simple implementation that generalizes the trick used in the algorithm for Deutsch’s problem. Given Uf as above, the transformation SXπ
can be implemented by initializing a temporary qubit b to |− = √1 (|0 − |1), and then using Uf to compute into this register: consider |ψ = x∈X ax |x + 2 a |x, and compute x ∈X x
7.4 Quantum Subroutines
Uf (|ψ ⊗ |−) = Uf
ax |x ⊗ |− + Uf ⎝
ax |x ⊗ |−⎠
x ∈X
ax |x ⊗ |− + ⎝
⎞ ax |x ⊗ |−⎠
x ∈X
= (SXπ |ψ) ⊗ |−. In particular, the following circuit, acting on the n-qubit state |0 together with an ancilla qubit in state |1 creates the superposition |ψX = (−1)f (x) |x:
Uf 1
For elegance, and to be able to reuse the ancilla qubit, we may want to apply a final Hadamard transformation to the ancilla qubit, in which case the circuit is 0
Uf 1
Geometrically, when acting on the N -dimensional vector space associated with the quantum system, the transformation SXπ is a reflection about the N − k-dimensional hyperplane perpendicular to the
k-dimensional hyperplane spanned by {|x|x ∈ X}: a reflection in a hyperplane sends any vector |v perpendicular to the hyperplane to its negative −|v. For any unitary transformation U , the
transformation U SXπ U −1 is a reflection in the hyperplane perpendicular to the hyperplane spanned by the vectors {U |x|x ∈ X}. Section 9.2.1 uses this geometric view of SXπ to build intuition for
Grover’s algorithm. We can write the result of applying SXπ to the superposition W |0 as
7 Introduction to Quantum Algorithms
1 (−1)f (x) |x √ N where f is the Boolean function for membership in X, 1 if x ∈ X f (x) = 0 otherwise. Conversely, given a Boolean function f , we define Sfπ to be SXπ where X = {x|f (x) = 1}. 7.4.3
State-Dependent Phase Shifts
Section 7.4.2 explained how to change efficiently the phase of all terms in a superposition corresponding to certain subsets of basis elements, but that construction performs the same phase change on
all of those terms. This section considers the problem of implementing different phase shifts in different terms; we wish to implement transformations in which the amount of the phase shift depends
on the quantum state. Aim Efficiently approximate to accuracy s the transformation on n-qubits that changes the phase of the basis elements by
|x → eiφ(x) |x where the function φ(x) that describes the desired phase shift angle φ for each term x has an associated function f : Zn → Zs that is efficiently computable, and the value of the ith
bit of f (x) is the ith term in the following binary expansion for φ(x): f (x) . 2s The implementation can be only as efficient as the function f . Given a quantum circuit that efficiently implements
Uf , we can perform the state-dependent phase shift in O(s) steps in addition to 2 uses of Uf . The ability to compute f efficiently is a strong one: most functions do not have this property. This
paragraph shows how to implement efficiently the subprogram that changes the phase of an s-qubit standard basis state |x by the angle φ(x) = 2π2sx . Let 1 0 P (φ) = T (−φ/2)K(φ/2) = 0 eiφ φ(x) ≈ 2π
be the transformation that shifts the phase in a qubit if that bit is 1 but does nothing if that bit is 0. The program define Phase |a[s] = for i ∈ [0 . . . s − 1] )|ai P ( 2π 2i performs the s-qubit
transformation Phase : |a → exp(i2π 2as )|a.
7.4 Quantum Subroutines
The Phase program is used as a subroutine in the following program that implements the n-qubit transformation Phasef : |x → exp(2π i f2(x) s )|x: define Phasef |x[k] = qubit a[s] an s-bit temporary
register Uf |x|a compute f in a P hase |a perform phase shift by 2π a/2s −1 Uf |x|a uncompute f
(1) (2) (3) (4)
After step (2), register a is entangled with x and contains the binary expansion of the angle φ(x) for the desired phase shift for the basis vector |x. Since registers a and x are entangled, changing
the phase in register a during step (3) is equivalent to changing the phase in register x. Step (4) uncomputes Uf to remove this entanglement so that the contents of register x end up in the desired
state, no longer entangled with the temporary qubits. 7.4.4 State-Dependent Single-Qubit Amplitude Shifts Aim Efficiently approximate, to accuracy s, rotating each term in a superposition by a
single-qubit rotation R(β(x)) (see Section 5.4.1), where the angle β(x) depends on the quantum state in another register. More specifically, we wish to implement a transformation that takes
|x ⊗ |b → |x ⊗ (R(β(x))|b) , where β(x) ≈ f (x) 2π and the approximating function f : Zn → Zs is efficiently computable. 2s From an efficient implementation of Uf , we can implement this
transformation in O(s) steps plus two calls to Uf . The subroutine uses an auxiliary transformation Rot that shifts the amplitude )|b where the contents in qubit b by the amount specified in register
a; |a ⊗ |b → |a ⊗ R(a 2π 2s of the s-qubit register a give the angle by which to rotate up to accuracy 2−s . Figure 7.1 shows a circuit that implements Rot. Using our program notation, this
transformation can be described more concisely by define Rot |a[s]|b[1] = for i ∈ [0 . . . s − 1] |ai control R( 2π )|b. 2i The desired rotation specified by the function f can be achieved by the
program define Rotf |x[k]|b[1] = qubit a[s] Uf |x|a Rot |a, b Uf−1 |x|a
an s-bit temporary register compute f in a perform rotation by 2π a/2s uncompute f
7 Introduction to Quantum Algorithms
R( 2s )
R( 2 )
Figure 7.1 Circuit for controlled rotation.
7.5 A Few Simple Quantum Algorithms
This section presents a few simple quantum algorithms. The first three problems are black box or oracle problems for which the quantum algorithm’s query complexity is better than the query complexity
of any conceivable classical algorithm. The fourth is a problem for which the communication complexity of a quantum protocol is better than the communication complexity for any possible classical
one. Like Deutsch’s problem, the problems are a bit artificial, but they have relatively simple quantum algorithms that can be proved to be more efficient than any possible classical approach. Like
Deutsch’s algorithm, these algorithms solve these problems with certainty. 7.5.1 Deutsch-Jozsa Problem
David Deutsch and Richard Jozsa present a quantum algorithm for the following problem, a multiple bit generalization of Deutsch’s problem of section 7.3.1. Deutsch-Jozsa Problem A function f is
balanced if an equal number of input values to the function return 0 and 1. Given a function f : Z2n → Z2 that is known to be either constant or balanced, and a quantum oracle Uf : |x|y → |x|y ⊕ f
(x) for f , determine whether the function f is constant or balanced.
The algorithm begins by using the phase change subroutine of section 7.4.2 to negate terms of the superposition corresponding to basis vectors |x with f (x) = 1: the subroutine returns the state N−1
1 |ψ = √ (−1)f (i) |i. N i=0
7.5 A Few Simple Quantum Algorithms
( The subroutine uses a temporary qubit in state |−. Just as section 7.4.1 showed for Deutsch’s algorithm, it is vital that the subroutine end with that qubit unentangled with any other qubits, so
that it can be safely ignored.) Next, apply the Walsh transform W to the resulting state |ψ to obtain ⎛ ⎞ N−1 N−1 1 ⎝ (−1)f (i) |φ = (−1)i·j |j ⎠ . N i=0 j =0 For constant f , the (−1)f (i) = (−1)f
(0) is simply a global phase and the state |φ is simply |0: ⎛ ⎞ 1 1 ⎝ (−1)i·j ⎠ |j = (−1)f (0) n (−1)i·0 |0 = (−1)f (0) |0 (−1)f (0) n 2 2 n n n j ∈Z2
because, as box 7.1 shows, i∈Zn (−1)i·j = 0 for j = 0. For f balanced 2 ⎛ ⎞ 1 ⎝ (−1)i·j − (−1)i·j ⎠ |j |φ = n 2 n i∈X i∈X j ∈Z2
i·j where X0 = {x|f (x) = 0}. This time, for j = 0, the amplitude is zero: j ∈X0 (−1) − i·j = 0. Thus, measurement of state |φ in the standard basis will return |0 with j ∈X0 (−1) probability 1 if f
is constant and will return a non-zero |j with probability 1 if f is balanced. This quantum algorithm solves the Deutsch-Jozsa problem with a single evaluation of Uf , while any classical algorithm
must evaluate f at least 2n−1 + 1 times to solve the problem with certainty. Thus, there is an exponential separation between the query complexity of this quantum algorithm and the query complexity
for any possible classical algorithm that solves that problem with certainty. There are, however, classical algorithms that solve this problem in fewer evaluations, but only with high probability of
success. (See exercise 7.4.) 7.5.2 Bernstein-Vazirani Problem
The problem is to determine the value of an unknown bit string u of length n where one is allowed only queries of the form q · u for some query string q. The best classical algorithm uses O(n) calls
to fu (q) = q · u mod 2. A quantum algorithm, closely related to the algorithm we just gave for the Deutsch-Jozsa problem, can find u in just a single call to Ufu : on a quantum computer it is
possible to determine u exactly with a single query (in superposition). Let fu (q) = q · u mod 2 and Ufu : |q|b → |q|b ⊕ fu (q). The following circuit (figure 7.2) solves this problem with certainty
using only one call to Ufu . To understand how this circuit works, recall from section 7.4.2 that in the special case φ = π , the phase change subroutine can be accomplished by the circuit
7 Introduction to Quantum Algorithms
Ufu 1
In this case, applying this circuit results in the state 1 1 (−1)fu (q) |q = √ (−1)u·q |q |ψX = √ 2n q 2n q in the first register. The next paragraph shows that applying the Walsh-Hadamard
transformation W to this state produces the state |u. Recall that W |x = √12n z (−1)x·z |z. Thus 1 u·q (−1) |q W |ψX = W √ 2n q 1 (−1)u·q W |q =√ 2n q 1 = n (−1)u·q (−1)q·z |z . 2 q z A fact from box
7.1 tells us that (−1)u·q+z·q = (−1)(u⊕z)·q . Furthermore, equation 7.1 tells us that the internal sum is 0 unless u ⊕ z = 0, which implies that the only term that remains is the u = z term. Thus,
Ufu 1
Figure 7.2 Circuit for the Bernstein-Vazirani algorithm.
7.5 A Few Simple Quantum Algorithms
1 u·q+z·q W (|ψX = n (−1) |z 2 z q = |u. Thus measurement in the standard basis gives |u with certainty. Using quantum parallelism to compute on all possible inputs at the same time, and then
cleverly manipulate the resulting superposition, is a common explanation for how quantum algorithms work. The description we gave for the Bernstein-Vazirani algorithm fits this framework. There is a
question, however, as to whether quantum parallelism is the right way of looking at algorithms. To illustrate this point, we give an alternative description, due to Mermin, of exactly this same
algorithm. The key to Mermin’s explanation of the algorithm is to look at the circuit in the Hadamard basis. To understand what the quantum black box for Ufu does in the Hadamard basis, recognize
that it behaves as if it contained a circuit consisting of Cnot operations from some of the qubits to the ancilla qubit: this circuit contains a Cnot from qubit i to the ancilla if and only if the
ith bit of u is 1 (see figure 7.3). Recall from section 5.2.4 that Hadamard operations reverse the control and target roles of the qubits:
A Simpler Explanation
The Bernstein-Vazirani algorithms consists of starting with the state |0 . . . 0|1 and applying Hadamard transformations to every qubit before and after the call to the black box for Ufu (see figure
7.2.) Thus, the Bernstein-Vazirani algorithm behaves as if it were a circuit consisting only of Cnot operations from the ancilla qubit to the qubits corresponding to 1-bits of u. (See figure
Figure 7.3 For u = 01101, the black box for Ufu behaves as if it contained this circuit, consisting of Cnot gates for each 1-bit of u.
7 Introduction to Quantum Algorithms
0 0 u
Figure 7.4 For u = 01101, the Bernstein-Vazirani algorithm behaves as if it were implemented by this simple circuit consisting of a Cnot for each 1-bit of u.
7.4.) From this view of the circuit, it is immediate that the qubits end up in the state |u, so this much simpler explanation, which does not speak of quantum parallelism or of “computing on all
possible inputs,” is the right way to look at the algorithm. 7.5.3 Simon’s Problem
Simon’s problem: Given a 2-to-1 function f such that f (x) = f (x ⊕ a) for all x ∈ Zn2 , find hidden string a ∈ Zn2 . Simon describes a quantum algorithm that can find a in only O(n) calls to Uf ,
followed by O(n2 ) additional steps, whereas the best a classical algorithm that can do is O(2n/2 ) calls to f . Simon’s algorithm suggested to Shor an approach to the factoring problem that is now
famous as Shor’s algorithm. As we will see in chapter 8, there are structural similarities between Shor’s algorithm and Simon’s algorithm. To determine a, create the superposition x |x|f (x).
Measuring the right part of the register projects the state of the left register to √12 (|x0 + |x0 ⊕ a), where f (x0 ) is the measured value. Applying the Walsh-Hadamard transformation W leads to 1 W
√ (|x0 + |x0 ⊕ a) 2 1 1 x0 ·y (x0 ⊕a)·y =√ ((−1) + (−1) )|y √ 2 2n y =√ =√
1 2n+1 2 2n+1
(−1)x0 ·y (1 + (−1)a·y )|y
y·a even
(−1)x0 ·y |y.
7.5 A Few Simple Quantum Algorithms
Measurement of this state results in a random y such that y · a = 0 mod 2, so the unknown bits ai of a must satisfy the equation y0 · a0 ⊕ · · · ⊕ yn−1 · an−1 = 0. This computation is repeated until
n linearly independent equations have been found. Each time the computation is repeated, the resulting equation has at least a 50 percentage chance of being linearly independent of the previous
equations obtained. After repeating the computation 2n times, there is a 50 percentage chance that n linearly independent equations have been found. These equations can be solved to find a in O(n2 )
steps. Thus, with high likelihood, the hidden string a will be found with O(n) calls to Uf , followed by O(n2 ) steps to solve the resulting set of equations. 7.5.4 Distributed Computation
This section describes a different type of quantum algorithm, one for which communication complexity is the concern. Like dense coding and teleportation, it uses entangled pairs that can be
distributed ahead of time, independent of the computation, so these qubits are not counted as qubits transmitted during the solution of the problem (though the exponential savings would remain even
if they were counted). The Problem Let N = 2n . Alice and Bob are each given an N -bit number, u and v respectively.
The objective is for Alice to compute an n-bit number a and Bob to compute an n-bit number b such that dH (u, v) = 0 → a = b dH (u, v) = N/2 → a = b else → no condition on a and b where dH (u, v) is
the Hamming distance between u and v. In other words, Alice and Bob need an algorithm that produces a and b from any u and v such that if u = v, then a = b; if u and v differ in half of their bits,
then a = b; and if the Hamming distance of u and v is anything other than 0 or N/2, a and b can be anything. This problem is nontrivial because u and v are exponentially larger than a and b. Given a
sufficient supply of entangled pairs, this problem can be solved without additional communication between Alice and Bob, while a classical solution requires communication of at least N/2 bits between
the two parties. Suppose Alice and Bob share n entangled pairs of particles (ai , bi ), each in state √12 (|00 + |11), where Alice can access particles ai and Bob can access particles bi . We write
the state of the 2n particles making up these n entangled pairs in order a0 , a1 , . . . , an−1 , b0 , b1 , . . . , bn−1 , so −1 the entire 2n-qubit state is written √1N N i=0 |i, i, where Alice can
manipulate the first n qubits and Bob can manipulate the last n qubits. The problem can be solved without additional communication as follows. Using the phase |i → (−1)ui |i folchange subroutine of
section 7.4.2, with f (i) = ui , Alice performs lowed by the Walsh transform W on her n qubits. Bob performs the same computation on his n qubits using f (i) = vi . Together their particles are now
in the common global state
7 Introduction to Quantum Algorithms
|ψ = W
N−1 1 ui ⊕vi (−1) |i|i . √ N i=0
Alice and Bob now measure their respective part of the state to obtain results a and b. We need to show that a and b have the desired properties. 2 The probability that measurement results in a = x =
b is | x, x|ψ| . We wish to show that this probability is 1 if u = v and 0 if dH (u, v) = N/2. Let us simplify the state as follows, where the superscript in W (l) indicates that W is acting on an l
qubit state: N−1 1 (−1)ui ⊕vi |i|i |ψ = W (2n) √ N i=0 N−1 1 =√ (−1)ui ⊕vi (W (n) |i ⊗ W (n) |i) N i=0
1 √
−1 N−1 N−1 N
N N
(−1)ui ⊕vi (−1)i·j (−1)i·k |j k.
i=0 j =0 k=0
x, x|ψ =
N−1 N −1 1 1 (−1)ui ⊕vi (−1)i·x (−1)i·x = √ (−1)ui ⊕vi . √ N N i=0 N N i=0
If u = v, then (−1)ui ⊕vi = 1 and x, x|ψ = √1N , so the probability | x, x|ψ| = N1 . The probability, summed over the N possible values of x, is 1, so when Alice and Bob measure they obtain, with
probability 1, states a and b with a = b = x for some bit string x. For dH (u, v) = N/2, the ui ⊕vi has an equal number of +1 and −1 terms, which cancel to sum x, x|ψ = N √1 N N−1 i=0 (−1) give x, x|
ψ = 0. Thus, in this case, Alice and Bob measure the same value with probability 0. 2
7.6 Comments on Quantum Parallelism
Because quantum parallelism’s role in quantum computation has often been misunderstood, we make a few comments to address some common misconceptions. The notation N−1 1 |x, f (x) √ N x=0
suggests that exponentially more computation is being done by the quantum operation Uf acting on the superposition x |x, 0 than by a classical computer computing f (x) from x. The next paragraph
explains how this view is misleading and how it does not explain the power of quantum computation. Similarly, the exponential size of the n-qubit quantum state space may seem to
7.6 Comments on Quantum Parallelism
suggest that an exponential speedup over the classical case can always be obtained using quantum parallelism. This statement is generally incorrect, although in certain special cases quantum
computation does provide such speedups. We elaborate briefly on each of these statements. As explained in section 7.1.2, only one input /output pair can be extracted by measurement in the standard
basis from the superposition generated quantum parallelism. It is not possible to extract more input /output pairs in any other way since, as section 4.3.1 explained, only m bits of information can
be extracted from an m-qubit state. Thus, while the 2n values of f (x) appear in the single superposition state, it still takes 2n computations of Uf to obtain them all, no better than the classical
case. This limitation leaves open the possibility that any classical algorithm that takes 2n steps to obtain n bits of output could be done in a single step on a quantum computer. While some
algorithms do give speedups of this magnitude over classical algorithms, the optimality of Grover’s algorithm proved in chapter 9.1 shows that there are problems of this form for which it is known
that no quantum algorithm can provide an exponential speedup. Furthermore, lower bound results exist that show that for many problems quantum computation cannot provide any speedup at all. Thus,
quantum parallelism and quantum computation do not, in general, provide the exponential speedup suggested by the notation. Furthermore, a superposition like √1N |x, f (x) is still only a single state
of the quantum state space. The n-qubit quantum state space is extremely large, so large that the vast majority of states cannot even be approximated by an efficient quantum algorithm. (The elegant
proof goes beyond the scope of this book. A reference is given in section 7.9.) Thus, an efficient quantum algorithm cannot even come close to most states in the state space. For this reason, quantum
parallelism does not, and efficient quantum algorithms cannot, make use of the full state space. As Mermin’s explanation of the Bernstein-Vazirani algorithm of section 7.5.2 illustrates, even when
quantum parallelism can be used to describe an algorithm, it is not necessarily correct to view it as key to the algorithm. Understanding where the power of quantum computation comes from remains an
open research question. The status of entanglement as one of the keys will be discussed in the introduction to chapter 10 and in section 13.9, which addresses this question explicitly. When
algorithms are described in terms of quantum parallelism, the heart of the algorithm is the way in which the algorithm manipulates the state generated by quantum parallelism. This sort of
manipulation has no classical analog and requires nontraditional programming techniques. We list a couple of general techniques: •
Amplify output values of interest. The general idea is to transform the state in such a way that values of interest have a larger amplitude and therefore have a higher probability of being measured.
Grover’s algorithm of chapter 9 exploits this approach, as do the many closely related algorithms. •
Find properties of the set of all the values of f(x). This idea is exploited in Shor’s algorithm of chapter 8, which uses a quantum Fourier transformation to obtain the period of f . The
7 Introduction to Quantum Algorithms
algorithms given in section 7.5 for the Deutsch-Jozsa problem, the Bernstein-Vazirani problem, and Simon’s problem all take this approach. 7.7 Machine Models and Complexity Classes
Computational complexity classes are defined in terms of a language and machines that recognize that language. In this section, the term machine refers to any quantum or classical computing device
that runs a single algorithm on which we can count the number of computation steps and storage cells used. A language L over an alphabet is a subset of the finite strings ∗ of elements from . A
language L is recognized by a machine M if, for each string x ∈ ∗ , the machine M can determine if x ∈ L. Exactly what determine means depends on the kind of machine we are considering. For example,
given input x, a classical deterministic machine may answerYes, x ∈ L, or No, x ∈ L, or it may never halt. Probabilistic and quantum machines might answer Yes or No correctly with certain
probabilities. We consider five kinds of classical machines, deterministic (D), nondeterministic (N), randomized (R), probabilistic (Pr), and bounded probability of error (BP). Each of these types of
classical machine has a quantum analog (EQ, NQ, RQ, PrQ, BQ). Of particular interest will be quantum deterministic (exact) machines (EQ), and quantum bounded probability of error machines (BQ).
Section 7.7.1 uses these types of machine to define numerous complexity classes of varying resource constraints. We now more rigorously describe exactly how the different kinds of machines recognize
a language. For each kind of machine M, there is a single language LM that M recognizes. For example, a machine is deterministic if whenever it sometimes answers Yes on a given input x it always
answers Yes on that input. A deterministic machine D recognizes the language LD = {x ∈ ∗ |D(x) = Yes} = {x|P (D(x) = Yes) = 1}. By definition of deterministic, for all x ∈ L, the probability P (D(x)
= Y es) is zero. As a second example, a bounded probability of error machine, acting on a given input x, either answers Yes with probability at least 1/2 + or with probability no more than 1/2 − .
Given a bounded probability of error machine BP, LBP = {x|P (BP(x) = Yes) ≥ 1/2 + }. For x ∈ LBP , P (BP (x) = Yes) ≤ 1/2 − . Amachine may not give an answer at all for some inputs. Table 7.1
summarizes the conditions for the various types of machines we consider. The quantum machine types recognize a language with the same probability as their classical counterparts. Figure 7.5
illustrates containment relations between the kinds of machines. Containment means that by definition each D machine, for example, is also an R machine. A language is recognized by a kind of machine
if there exists a machine of that kind that recognizes it. The set of languages recognized by the types of machines we have defined does not depend on the particular value of . For example, suppose
we are given a Pr machine M that answersYes for x ∈ L with probability P (x ∈ L) > 12 + . We can construct a new Pr machine M that runs M three times and answers Yes if M answers Yes at least two
times. Then M will accept
7.7 Machine Models and Complexity Classes
Table 7.1 Probability for a particular kind of machine to answer Yes when given an input x that is or is not and element of language L Prefix
Kind of machine
P (x ∈ L)
P (x ∈ L)
Classical D
Randomized (Monte Carlo)
Bounded probability of error
Quantum EQ
Quantum deterministic (exact)
Quantum bounded probability of error
=0 +
=0 ≤
=0 +
D Figure 7.5 Containment relations between kinds of machines. These relations hold for classical and quantum machines, and for time and space complexity.
x ∈ L with probability > 12 + 32 − 3 . Some authors use a fixed value such as = 1/4. The case P (x ∈ L) > 1/2 is quite different from P (x ∈ L) > 1/2 + , however, since in the former case no
polynomial number of repetitions can guarantee an increase in the success probability above a given threshold 12 + . 7.7.1 Complexity Classes
In addition to being concerned about the probability that a machine answer correctly, complexity theory is concerned about quantifying the amount of resources, particularly time and space, that a
machine uses to obtain its answers. A machine recognizes a language L in time O(f ) if, for
7 Introduction to Quantum Algorithms
any string x ∈ ∗ of length n, it answers Yes or No within t (n) steps and t ∈ O(f ). A machine recognizes a language L in space O(f ) if, for any string x ∈ ∗ of length n, it answers Yes or No using
at most s(n) storage units, measured in bits or qubits, where s ∈ O(f ). A complexity class is the set of languages recognized by a particular kind of machine within given resource bounds.
Specifically, for m ∈ {D, EQ, N, R, Pr, BP}, we consider the classes mTime(f ) and mSpace(f ). Language L is in complexity class mTime(f ) if there exists a machine M of kind m that recognizes L in
time O(f ). Language L is in complexity class mSpace(f ) if there exists a machine M of kind m that recognizes L in space O(f ). We are particularly interested in machines that use only a polynomial
amount of resources, and to a lesser extent in those that use only an exponential amount. For example, we are interested in the class P = DTime(nk ) of machines that respond to an input of length n
using only O(nk ) time for some k. The following shorthand notations are common: P EQP
DTime(nk ) EQTime(nk )
NTime(nk )
RTime(nk )
PrTime(nk )
BPTime(nk )
BQTime(nk )
DSpace(nk )
NSpace(nk )
DTime(k n )
For time classes, we can assume that machines always halt because the function f provides an upper bound on the possible runtimes. However, machines in the space complexity classes may never halt on
some inputs. Therefore, we define mH Space(f ) to be the class of languages that are recognized by a halting machine of type m in space O(f ). Obviously, mH Space(f ) ⊆ mSpace(f ). Note that in the
circuit model all computations will terminate. Analysis of the complexity of nonhalting space classes requires a different model of computation, such as quantum Turing machines. 7.7.2 Complexity:
Known Results
We give informal arguments for some of the containment relations involving quantum complexity classes. Figure 7.6 depicts the known containment relation involving classical and quantum time
complexity classes. Nothing is as yet known about the relation between BQP and NP or PP.
7.7 Machine Models and Complexity Classes
P Figure 7.6 Containment relation involving classical and quantum complexity classes.
P ⊆ EQP Any classical polynomial time computation can be performed by a polynomial size circuit family. This inclusion follows from the main result of chapter 6: all classical circuits can be done
reversibly with only a slight increase in time and space, and any reversible polynomial time algorithm can be turned into a polynomial time exact quantum algorithm. EQP ⊆ BQP This containment is
trivial since every exact quantum algorithm has bounded probability of error. BPP ⊆ BQP Any computation performed by a machine M in BPP can be approximated arbitrarily closely by an machine M˜ that
makes a single equiprobable binary decision at each step. Furthermore, this decision tree is of polynomial depth, so a sequence of choices can be encoded by a polynomial size bit string c. From M˜
one can construct a deterministic machine M˜ d that, when applied to c and x, will perform the same computation as M˜ applied to x making the random choices c. For the deterministic machine M˜ d
there is a polynomial time quantum machine M˜ q that can be applied to the superposition of all possible random choices c applied to x, c |c, x, 0, producing c |c, x, M˜ d (c, x). In effect, M˜ q
performs all possible computations of M˜ on x in parallel. The probability of reading an accepting answer from M˜ q is the same as the probability that M˜ would accept x. It is not known whether BPP
⊆ BQP is a proper inclusion. In fact, showing BPP = BQP would answer the open question as to whether BPP = PSpace. BQP ⊆ PSpace Consider a machine in BQP acting on an input of size n that starts from
a known state |ψ0 = |0 and proceeds for k steps followed by a measurement. We show that such
7 Introduction to Quantum Algorithms
a machine can be approximated arbitrarily closely, in the sense of computing any amplitude of the final state to a specified precision, in polynomial space. Let |ψi = j aij |j denote the state after
step i. Each state |ψi , i = 0 may be a superposition of an exponential (in n) number of basis vectors. Yet, using only space polynomial in n, it is possible to compute the amplitude akj of an
arbitrary basis vector in the final superposition |ψk . We may assume each step corresponds to a primitive quantum gate Ui that operates on at most d ≤ 3 quantum bits. For these transformations, we
show that the amplitude ai+1,j of basis vector |j in state |ψi+1 depends only on the amplitudes ai,j of the small number (2d ≤ 8) of basis vectors of the preceding state |ψi that differ from |j only
in the bits that are being operated on by the gate. Without loss of generality, assume that U = Ui+1 operates on the last d quantum bits. We will use the the shorthand x ◦ y to stand for 2d x + y and
let uqr = r|U |q for basis elements |r and |q in the standard basis for a 2d -dimensional space. |ψi+1 = (I n−d ⊗ U )|ψi aij (I n−d ⊗ U )|j = j
d −1 2n−d −1 2
ai,p◦q |p ⊗
ai,p◦q |p ⊗ U |q
d −1 2
uqr |r
d −1 2
⎞ uqr ai,p◦q ⎠ |p|r.
2d −1 uqr ai,p◦q depends only on 2d amplitudes ai,p◦q It follows that each amplitude ai+1,p◦r = q=0 of the preceding state. By induction, we argue that it requires storage of i2d amplitudes to
compute a single amplitude of state |ψi . Since we know |ψ0 , it takes no space to compute the amplitude j |ψ0 for any j . As we have just seen, the amplitude ai+1,j can be computed from 2d
amplitudes of |ψi . We can do this by computing each of these amplitudes in turn, which requires storing at most i2d amplitude values, storing the resulting 2d amplitudes, and computing ai+1,j .
Overall, this process requires storage of (i + 1)2d amplitude values. We take M to be the maximum precision required at any point in the computation to obtain the desired precision at the end. The
total accumulated error is no larger than the sum of the errors of individual steps. Thus, the number M grows only linearly in the number of steps needed, and any
7.8 Quantum Fourier Transformations
one amplitude value can be stored in space M and the amplitude of any basis vector of the final superposition after k steps can be computed in k2d M space. Since by assumption k is polynomial in n, d
is a constant no more than 3, and M only grows linearly with k, it takes only polynomial space to compute a single amplitude of the final state |ψk . To simulate the algorithm, choose a basis vector
|j randomly (or, if you prefer, in a specified order) and calculate the amplitude akj . Generate a random number between 0 and 1 and see if it is less than |akj |. If so, return |j . Otherwise free
all the space, choose another basis vector, and repeat. Repeat as often as necessary until a basis vector is returned (time is not an issue!). Thus, any computation in BQP can be simulated
classically in polynomial space. 7.8 Quantum Fourier Transformations
The quantum Fourier transformation (QFT) is the single most important quantum subroutine. It and its generalizations are used in many quantum algorithms that achieve a speedup over classical
algorithms. Appendix B.2.2 discusses generalizations of quantum Fourier transforms and shows that the Walsh-Hadamard transformation is a generalized quantum Fourier transform. The quantum Fourier
transformation (QFT) is based on the classical discrete Fourier transformation (DFT) and its efficient implementation, the fast Fourier transform (FFT). We briefly describe the classical discrete
Fourier transform (DFT) and the fast Fourier transform (FFT) before describing the quantum Fourier transform (QFT) and its surprisingly efficient quantum implementation. 7.8.1 The Classical Fourier
Transform Discrete Fourier Transform The discrete Fourier transform (DFT) operates on a discrete
complex-valued function to produce another discrete complex-valued function. Given a function a : [0, . . . , N − 1] → C, the discrete Fourier transform produces a function A : [0, . . . , N − 1] → C
defined by N−1 1 kx A(x) = √ . a(k) exp 2πi N N k=0 The discrete Fourier transform can be viewed as a linear transformation taking column vector (a(0), . . . , a(N − 1))T to (A(0), . . . , A(N − 1))T
with matrix representation F with entries ). The values A(0), . . . , A(N − 1) are called the Fourier coefficients of the Fxk = √1N exp(2π i kx N function a. Example 7.8.1 Let a : [0, . . . , N − 1]
→ C be the periodic function a(x) = exp(−2π i ux ) for N
some frequency u evenly dividing N. We assume that the function is not constant: 0 < u < N. The Fourier coefficients for this function are
7 Introduction to Quantum Algorithms
N−1 1 kx A(x) = √ a(k) exp 2π i N N k=0 N−1 1 kx uk =√ exp 2π i exp −2πi N N N k=0 N−1 1 k(x − u) =√ . exp 2π i N N k=0 −1 r It is a well-known fact that sums of the form N k=0 exp(2π ik N ) vanish
unless r = 0 mod N . (We prove a more general fact in appendix B.) Since u < N , A(x) = 0 unless x − u = 0: only A(u) will be non-zero. Any periodic complex-valued function a with period r and
frequency u = N/r can be approximated, using its Fourier series, as the sum of exponential functions whose frequencies are multiples of u. Since the Fourier transform is linear, the Fourier
coefficients A(x) of any periodic function will be the sum of the Fourier coefficients of the component functions. If r divides N evenly, the Fourier coefficients A(x) will be non-zero only for those
x that are multiples of u = N/r. If r does not divide N evenly, the result only approximates this behavior, with the highest values at the integers closest to multiples of u = N/r and low values at
integers far from these multiples. Fast Fourier Transform The fast Fourier transform (FFT) is an efficient implementation of the discrete Fourier transform (DFT) when N is a power of two: N = 2n .
The key to the implementation is that F (n) can be recursively decomposed in terms of Fourier transforms for lower powers of 2. i Let ω(n) be the Nth root of unity, ω(n) = exp( 2π ). The entries of
the N × N matrix F (n) for N n the N = 2 dimensional Fourier transform are simply ij
Fij(n) = ω(n) , where we index the entries of all N × N matrices by i ∈ {0, . . . , N − 1} and j ∈ {0, . . . , N − 1}. Let F (k) be the 2k × 2k matrix for the 2k -dimensional Fourier transform. Let I
(k) be the 2k × 2k identity matrix. Let D (k) be the 2k × 2k diagonal matrix with elements 2k −1 0 , . . . ω(k+1) . Let R (k) be the permutation shown in figure 7.7 that maps the vector entries at ω
(k+1) index 2i to position i and at index 2i + 1 to position i + 2k−1 . The entries of the 2k × 2k matrix for R (k) are given by ⎧ ⎨ 1 if 2i = j Rij(k) = 1 if 2(i − 2k ) + 1 = j ⎩ 0 otherwise.
7.8 Quantum Fourier Transformations
R 0 1 2 3 4 5 6 7 Figure 7.7 An example of the shuffle transform R.
The reader may verify (see exercise 7.7) that (k−1) (k−1) 1 D (k−1) F I F (k) = √ (k−1) (k−1) I −D 0 2
0 F (k−1)
R (k) .
The reader may consult any standard reference on the fast Fourier transform for an implementation based on this recursive decomposition that uses only O(nN ) steps. 7.8.2 The Quantum Fourier
The quantum Fourier transform (QFT) is a variant of the discrete Fourier transform, which, like the fast Fourier transform (FFT), assumes that N = 2n . The amplitudes ax of any quantum state x ax |x
can be viewed as a function of x, which we will denote by a(x). The quantum Fourier transform operates on a quantum state by sending a(x)|x → A(x)|x x
where the A(x) are the Fourier coefficients of the the discrete Fourier transform of a(x), and x ranges over the integers between 0 and N − 1. If the state were measured in the standard basis right
after the Fourier transform was performed, the probability that the resulting state would be |x
7 Introduction to Quantum Algorithms
would be |A(x)| . The quantum Fourier transform generalizes from a classical complex-valued function in quite a different way from how Uf generalizes a binary classical function f ; here the output
of the classical function is placed in the complex amplitudes of the final superposition state, and there is no need for an additional output register. Applying the quantum Fourier transform to a
state whose amplitudes are given by a periodic function a(x) = ax with period r, where r is a power of 2, would result in x A(x)|x, where A(x) is zero except when x is a multiple of Nr . Thus, were
the state measured in the standard basis at this point, the result would be one of the basis vectors |x with label a multiple of Nr , say |j Nr . The quantum Fourier transform behaves in only
approximately this way when the period is not a power of 2 (does not divide N = 2n ): states labeled with integers near multiples of Nr would be measured with high probability. The larger the power
of 2 used as a base for the transform, the closer the approximation. While the implementation of the quantum Fourier transform is based on that of the fast Fourier transform, the quantum Fourier
transform can be implemented exponentially faster, needing only O(n2 ) operations, not the O(nN ) operations needed for the fast Fourier transform. We will see in appendix B.2.2 that the quantum
Fourier transform is a special case of a more general class of efficiently implementable quantum transformations. 2
7.8.3 A Quantum Circuit for Fast Fourier Transform
We show how to implement efficiently the quantum Fourier transform UF (n) for N = 2n , defined by UF
N−1 1 2πikx )|x. : |k → √ exp( N N x=0
The quantum Fourier transform for N = 2 is the familiar Hadamard transformation: 1 0 1 UF (1) : |0 → √ e |x = √ (|0 + |1), 2 x=0 2 1
1 π ix 1 |1 → √ e |x = √ (|0 − |1). 2 x=0 2 1
Using the recursive decomposition of section 7.8.1, (k) 1 D (k) 0 UF (k) I (k+1) UF R (k+1) , =√ (k) −D (k) 0 UF (k) 2 I we can compute UF (n) . All of the component matrices are unitary (the
multiplicative factor in front goes with the first matrix). It remains to be shown how these components can be efficiently realized on a quantum computer.
7.8 Quantum Fourier Transformations
We proceed as follows: 1. We can write the rotation R (k+1) as R
k −1 2
|i 2i| + |i + 2k 2i + 1|.
It can be accomplished by a simple permutation of the k + 1 qubits: qubit 0 becomes qubit k and qubits 1 through k become qubits 0 through k − 1. This permutation can be implemented using k − 1 swap
operations of section 5.2.4. 2. The transformation UF (k) 0 = I ⊗ UF (k) 0 UF (k) can be implemented by recursively applying the quantum Fourier transform to qubits 0 through k. 3. For k ≥ 1, the 2k
× 2k -diagonal matrix of phase shifts D (k) can be recursively decomposed as 1 0 (k) (k−1) D =D . ⊗ 0 ω(k+1) Recursively decomposing D (k) in this way, the transformation D (k) can be implemented by
apply 1 0 to qubit i for 1 ≤ i ≤ k. Thus altogether D (k−1) can be implemented using k ing 0 ω(i+1) single-qubit gates. 4. Given this implementation of D (k) , then I (k) D (k) 1 √ 2 I (k) −D (k) can
be implemented with only k gates. (k) 1 1 1 D (k) I = √ (|0 + |1) 0| ⊗ I (k) + √ (|0 − |1) 1| ⊗ D (k) √ (k) (k) I −D 2 2 2 = (H |0 0|) ⊗ I (k) + (H |1 1|) ⊗ D (k) = (H ⊗ I (k) )(|0 0| ⊗ I (k) + |1 1|
⊗ D (k) ). The transformation (|0 0| ⊗ I (k) + |1 1| ⊗ D (k) ) applies D (k) to the low-order bits controlled by the high-order bit: it applies D (k) to bits 0 through k − 1 if bit k is one. This
controlled version of D (k) can be implemented as a sequence of k two-qubit controlled gates that apply each of the single-qubit operations making up D (k) to bit i controlled by bit k.
7 Introduction to Quantum Algorithms
0 1 2 k–2 k–1
UF (k)
D (k)
k–3 H
Figure 7.8 A recursive quantum circuit for Fourier transform.
Since D (k) and R (k) can both be implemented with O(k) operations, the kth step in the recursion adds O(k) steps to the implementation of UF (n) . Overall, UF (n) takes O(n2 ) gates to implement,
which is exponentially faster than the O(n2n ) steps required for classical fast Fourier transform. A circuit for this implementation of the quantum Fourier transform is shown in figure 7.8. A
recursive program for this implementation would be define QF T |x[1] = H |x QF T |x[n] = Swap |x0 |x1 · · · xn−1 QF T |x0 · · · xn−2 |xn−1 control D (n−1) |x0 · · · xn−2 H |xn−1 . 7.9 References
Classical circuit complexity is discussed in Goldreich [131] and Vollmer [279]. Watrous [281] provides an excellent and extensive survey of quantum complexity theory. An older survey by Cleve [85],
unlike Watrous, discusses quantum communication complexity as well as quantum computational complexity. Brassard [59] and de Wolf [97] both survey quantum communication complexity. Deutsch described
the solution to the 1-qubit version of his problem in [99]. The three subroutines were discussed in Hogg, Mochon, Polak, and Rieffel [154]. Deutsch and Jozsa presented the n-qubit version and its
solution in [102]. Simon’s problem with solution appeared in [256]. The Bernstein-Vazirani problem first appears in Bernstein and Vazirani [49] as part of a more complex algorithm. The simpler
explanation of the algorithm appears in Mermin [209]. Both Grover [144] and Terhal and Smolin [269] independently rediscovered the problem and quantum algorithms for its solution. The latter
reference contains a proof of the complexity of the best possible classical algorithm. The example of section 7.5.4 was presented by Brassard, Cleve, and Tapp [60] in the context of their study of
quantum communication complexity. Various notions of communication complexity are discussed in [155, 96, 74].
7.10 Exercises
Beals et al. [35] proved that for a broad class of problems quantum computation cannot provide any speedup. Their methods were used by others to provide lower bounds for other types of problems.
Ambainis [21] found another powerful method for establishing lower bounds. Bernstein and Vazirani [49] analyze the accumulation of errors, and their result implies that the needed precision for
simulating a quantum computation grows only linear with the number steps. Bennett et al. [41] provide another, more accessible, account. Yao [287] shows that any function computable in polynomial
time on a quantum Turing machine is computable by a polynomial quantum circuit. The same is true for classical Turing machines and Boolean circuits, and proofs can be found in the paper by Pippenger
and Fischer [229] or Papadimitriou’s book [223]. Papadimitriou’s book also contains a comprehensive definition of classical complexity classes, as does Johnson [164]. Boppana and M. Sipser [55]
discuss classical complexity for Boolean circuits. Formal proofs of the complexity results given here can be found in the papers of Berthiaume-Brassard [50] and Bernstein-Vazirani [49]. The idea of
Fourier transformation goes back to Joseph Fourier’s 1822 book The Analytical Theory of Heat [123]. The algorithm for fast Fourier transformation was proposed by Cooley and Tukey [88]; more
comprehensive treatments can be found in Brigham [66], Cormen et al. [90], Knuth [182], and Strang [264] . The quantum Fourier transform was developed independently by Shor [250] and Coppersmith
[89], and by Deutsch in an unpublished paper. Ekert and Jozsa [112] provide an attractive presentation of quantum Fourier transforms, including some of the circuit diagrams we give here. Approximate
implementations of the quantum Fourier transform are analyzed in Barenco et al. [32]. For instance, it is shown that for some applications approximate computations may lead to better performance.
Aharonov, Landau, and Makowsky [12], Yoran and Short [288], and Browne [67] show that quantum Fourier transforms can be simulated efficiently on a classical computer in the sense that there exist
efficient classical algorithms that provide a means of sampling from a distribution identical to that obtained by measuring the output of the quantum Fourier transform when the input is a product
state. Browne exhibits a method for efficient classical simulation of the quantum Fourier transform applied to a broader class of input states. It is not known how to simulate efficiently the output
distribution of the quantum Fourier transform for certain other input states. One such state is the output of the modular exponentiation circuit of section 6.4.6 when applied to a superposition of
all inputs. Were it possible classically and efficiently to simulate sampling from such a distribution, Shor’s algorithm, described in chapter 8, would be classically simulatable, yielding an
efficient classical solution to the factoring problem. For this reason, it is suspected that such simulation is impossible. 7.10 Exercises Exercise 7.1. In the standard circuit model of section 5.6,
the computation takes place by applying
quantum gates. Only at the end are measurements performed. Imagine a computation that proceeds instead as follows. Gates G0 , G1 , . . . , Gn are applied, then qubit i is measured in the standard
7 Introduction to Quantum Algorithms
basis and never used again. If the result of the measurement is 0, the gates G01 , G02 , . . . , G0k are applied. If the result is 1, then gates G11 , G12 , . . . , G1l are applied. Find a single
quantum circuit in the standard circuit model, with only measurement at the very end, that carries out this computation. Exercise 7.2. Prove equation 7.1: n −1 2
2n 0
if y = 0 otherwise.
Exercise 7.3. Let f and g be functions from the space of n-bit strings to the space of m-bit
strings. Design a quantum subroutine that changes the sign of exactly the basis states |x such that f (x) = g(x), and which is efficient if f and g have efficient implementations. Exercise 7.4. a.
Prove that any classical algorithm requires at least two calls to Cf to solve Deutsch’s problem. b. Prove that any classical algorithm requires 2n−1 + 1 calls to Cf to solve the Deutsch-Jozsa
problem with certainty. c. Describe a classical approach to the Deutsch-Jozsa problem that solves it with high probability
using fewer than 2n−1 + 1 calls. Calculate the success probability of your approach as a function of the number of calls. Exercise 7.5. Show that a classical solution to Simon’s problem requires O(2n
/2 ) calls to the
black box, and describe such a classical algorithm. Exercise 7.6. Show directly that, in the distributed computation algorithm of section 7.5.4, when 2 u = v, | x, y|ψ| = 0 for all x = y. Exercise
7.7. Fast Fourier transform decomposition. (k)
a. For k < l, write the entries Fij of the 2k × 2k matrix for the Fourier transform UF (k) in terms
of ω(l) . m+i i b. Find m in terms of k such that −ω(k) = ω(k) for all i ∈ Z.
c. Compute the product
I (k−1) I (k−1)
D (k−1) −D (k−1)
UF (k−1) 0
0 UF (k−1)
ultimately writing each entry as a power of ω(k) . d. Let A be any 2k × 2k matrix with columns Aj . The product matrix AR (k) is just a permutation
of the columns. Where does column Aj end up in the product AR (k) ?
7.10 Exercises
e. Verify that
1 UF (k) = √ 2
I (k−1) I (k−1)
D (k−1) −D (k−1)
UF (k−1) 0
0 UF (k−1)
R (k) .
Exercise 7.8. Even though we know little about quantum hardware, it makes sense that we may not
want to require multiple qubit transformations that involve physically distant qubits, since these may be difficult to implement. To avoid such transformations, we can modify the implementation we
gave very slightly. a. Give a quantum circuit like that of figure 7.8 for the Fourier transform that does not swap
qubits but changes the order of the output qubits instead. b. Give a complete quantum circuit for the Fourier transform UF (3) that contains only single-
qubit transformations and two-qubit transformations on adjacent qubits. You may want to use the two-qubit swap operator defined in section 5.2.4.
Shor’s Algorithm
In 1994, inspired by Simon’s algorithm, Peter Shor found a bounded probability polynomialtime quantum algorithm for factoring integers. Since the 1970s, researchers have searched for efficient
algorithms for factoring integers. The most efficient classical algorithm known today, the number field sieve, is superpolynomial in the size of the input. The input to the algorithm is M, the number
to be factored. The input M is given as a list M’s digits, so the size of the input is taken to be m = log M. The number field sieve requires O(exp(m1/3 )) steps. People were confident enough that
factoring could not be done efficiently that the security of many cryptographic systems, such as the widely used RSA algorithm, depends on the computational difficulty of this problem. Shor’s result
surprised the community at large, prompting widespread interest in quantum computing. Shor’s factoring algorithm provides a fast means for finding the period of a function. A standard classical
reduction of the factoring problem to the problem of finding the period of a certain function has long been known. Shor’s algorithm uses quantum parallelism to produce a superposition of all the
values of this function in one step; it then uses the quantum Fourier transform to create efficiently a state in which most of the amplitude is in states close to multiples of the reciprocal of the
period. With high probability, measuring the state yields information from which, by classical means, the period can be extracted. The period is then used to factor M. Section 7.8.2 covered the crux
of the quantum part of Shor’s algorithm: the quantum Fourier transform. The remaining complications are classical, particularly the extraction of the period from the measured value. Section 8.1
explains the classical reduction of factoring to the problem of finding the period of a function. Section 8.2 explains the details of Shor’s algorithm, and section 8.3 walks through Shor’s algorithm
in a specific case. Section 8.4 analyzes the efficiency of Shor’s algorithm. Section 8.5 describes a variant of Shor’s algorithm in which a measurement performed in the course of the algorithm is
omitted. Section 8.6 defines two problems that are solved by generalizations of Shor’s factoring algorithm: the discrete logarithm problem and the Abelian hidden subgroup problem. Appendix B
describes the generalizations of Shor’s algorithm that solve these problems and discusses the difficulty of the general hidden subgroup problem.
8 Shor’s Algorithm
8.1 Classical Reduction to Period-Finding
The order of an integer a modulo M is the smallest integer r > 0 such that a r = 1 mod M; if no such integer exists, the order is said to be infinite. Two integers are relatively prime if they share
no prime factors. As long as a and M are relatively prime, the order of a is finite. Consider the function f (k) = a k mod M. Because a k = a k+r mod M if and only if a r = 1 mod M, for a relatively
prime to M, the order r of a modulo M is the period of f . If a r = 1 mod M and r is even, we can write (a r/2 + 1)(a r/2 − 1) = 0 mod M. As long as neither a r/2 + 1 nor a r/2 − 1 is a multiple of
M, both a r/2 + 1 and a r/2 − 1 have nontrivial common factors with M. Thus, if r is even, a r/2 + 1 and a r/2 − 1 are likely to have a nontrivial common factor with M. This property suggests a
strategy for factoring M: •
Randomly choose an integer a and determine the period r of f (k) = a k mod M.
If r is even, use the Euclidean algorithm to compute efficiently the greatest common divisor of a r/2 + 1 and M.
Repeat if necessary.
In this way, factoring M has been converted to a different hard problem, that of computing the period of the function f (k) = a k mod M. Shor’s quantum algorithm attacks the problem of efficiently
finding the period of a function. 8.2 Shor’s Factoring Algorithm
Before giving the details of Shor’s factoring algorithm in sections 8.2.1 and 8.2.2, we give a high-level outline. Quantum computation is required only for parts 2 and 3; the other parts would most
likely be carried out on a classical computational device. 1. Randomly choose an integer a such that 0 < a < M. Use the Euclidean algorithm to determine whether a and M are relatively prime. If not,
we have found a factor of M. Otherwise, apply the rest of the algorithm. 2. Use quantum parallelism to compute f (x) = a x mod M on the superposition of inputs, and apply a quantum Fourier transform
to the result. Section 8.2.2 shows that it suffices to consider input values x ∈ {0, . . . , 2n − 1}, where n is such that M 2 ≤ 2n < 2M 2 . 3. Measure. With high probability, a value v close to a
multiple of
2n r
will be obtained.
4. Use classical methods to obtain a conjectured period q from the value v. 5. When q is even, use the Euclidean algorithm to check efficiently whether a q/2 + 1 (or a q/2 − 1) has a nontrivial
common factor with M. 6. Repeat all steps if necessary.
8.2 Shor’s Factoring Algorithm
Sections 8.2.1 and 8.2.2 describe Shor’s algorithm in more detail. Section 8.3 runs through an example with specific values of M and a. 8.2.1 The Quantum Core
After using quantum parallelism to create the superposition x |x, f (x), part 2 of Shor’s algorithm applies the quantum Fourier transform. Since f (x) = a x mod M can be computed efficiently
classically, the results of chapter 6 imply that the transformation Uf : |x|0 → |x|f (x) has an efficient implementation. (We discuss the efficiency of the entire algorithm in section 8.4.) We use
quantum parallelism with Uf to obtain the superposition 2 −1 1 |x|f (x). √ 2n x=0 n
The analysis simplifies slightly if we now measure the second register. Section 8.5 shows how the measurement can be omitted without affecting the efficiency or the result of the algorithm. Measuring
the second register randomly returns a value u for f (x) and the state becomes C g(x)|x|u, (8.2) x
g(x) =
if f (x) = u otherwise,
and C is the appropriate scale factor. The value of u is of no interest and, since the second register is no longer entangled with the first, we can ignore it. Because the function f (x) = a x mod M
has the property that f (x) = f (y) if and only if x and y differ by a multiple of the period, the values of x that remain in the sum, those with g(x) = 0, differ from each other by multiples of the
period. Thus, the function g has the same period as the function f . If we could somehow obtain the value of two successive terms in the sum, we would have the period. Unfortunately, the laws of
quantum physics permit only one measurement from which we can obtain only one random value of x. Repeating the process does not help because we would be unlikely to measure the same value u of f (x),
so the two values of x obtained from two runs would have no relation to each other. Applying the quantum Fourier transform to the first register of this state produces UF (C g(x)|x) = C G(c)|c, (8.3)
8 Shor’s Algorithm
where G(c) = x g(x) exp( 2π2icx n ). The analysis of section 7.8.2 tells us that when the period r of the function g(x) is a power of two, G(c) = 0 except when c is a multiple of 2n /r. When the
period r does not divide 2n , the transform approximates the exact case, so most of the amplitude n is attached to integers close to multiples of 2r . For this reason, measurement yields, with high n
probability, a value v close to a multiple of 2r . The quantum core of the algorithm has now been completed. The next section examines the classical use of v to obtain a good guess for the period.
8.2.2 Classical Extraction of the Period from the Measured Value
This section sketches a purely classical algorithm for extracting the period from the measured value v obtained from the quantum core of Shor’s algorithm. When the period r happens to be a power of
2, the quantum Fourier transform gives exact multiples of 2n /r, which makes the period n easy to extract. In this case, the measured value v is equal to j 2r for some j . Most of the time j and r
will be relatively prime, in which case reducing the fraction 2vn to its lowest terms will yield a fraction jr whose denominator is the period r. The rest of this section explains how to obtain a
good guess for r when it is not a power of 2. In general the quantum Fourier transform gives only approximate multiples of the scaled frequency, which complicates the extraction of the period from
the measurement. When the period is not a power of 2, a good guess for the period can be obtained from the continued fraction expansion of 2vn described in box 8.1. Shor shows that with high
probability v is within 12 of some multiple n n of 2r , say j 2r . The reason why n was chosen to satisfy M 2 ≤ 2n < 2M 2 becomes apparent when we try to extract the period r from the measured value
v. In the high-probability case that n v − j 2 < 1 r 2 for some j , the left inequality M 2 ≤ 2n implies that v −j< 1 ≤ 1 . 2n r 2 · 2n 2M 2 In general, the difference between two distinct fractions
is bounded: p p pq − p q 1 − = q q qq > M 2 . Thus, there is at most one fraction
p q
p q
p q
with denominators less than M
with denominator q < M such that 2vn − pq < n
1 . In the M2 p fraction q can be
high probability case that v is within 12 of j 2r , this fraction will be jr . The computed using a continued fraction expansion (see box 8.1). We take the denominator q of the obtained fraction. as
our guess for the period. This guess will be correct whenever j and r are relatively prime.
8.3 Example Illustrating Shor’s Algorithm
Box 8.1 Continued Fraction Expansion
The unique fraction with denominator less than M that is within 12 of 2vn can be obtained efficiently M from the continued fraction expansion of 2vn as follows. Let [x] be the greatest integer less
than x. Using the sequences v ! 2n
a0 =
v 0 = n − a0 2 # " 1 ai = i−1 i =
1 i−1
− ai
p0 = a0 p1 = a1 a0 + 1 pi = ai pi−1 + pi−2 q0 = 1 q 1 = a1 qi = ai qi−1 + qi−2 , compute the first fraction pqii such that qi < M ≤ qi+1 .
8.3 Example Illustrating Shor’s Algorithm
This section illustrates the operation of Shor’s algorithm as it attempts to factor the integer M = 21. Since M 2 = 441 ≤ 29 < 882 = 2M 2 , take n = 9. Since log M = m = 5, the second register
requires five qubits. Thus, the state 9
2 −1 1 |x|f (x) √ 29 x=0
is a 14-qubit state, with nine qubits in the first register and five in the second. Suppose the randomly chosen integer is a = 11 and that quantum measurement of the second register of the
superposition of equation 8.1
8 Shor’s Algorithm
0.012 0.0108 0.0096 0.0084 0.0072 0.006 0.0048 0.0036 0.0024 0.0012 0.0 0
Figure 8.1 Probabilities for measuring x when measuring the state C x∈X |x, 8 obtained in equation 8.2, where X = {x|211x mod 21 = 8}.
0.17 0.153 0.136 0.119 0.102 0.085 0.068 0.051 0.034 0.017 0.0 0
Figure 8.2 Probability distribution of the quantum state after Fourier transformation.
8.4 The Efficiency of Shor’s Algorithm
2 −1 1 |x|f (x) √ 29 x=0
produces u = 8. The state of the first register after this measurement is shown in figure 8.1, which clearly shows the periodicity of f . Figure 8.2 shows the result of applying the quantum Fourier
transform to this state; it is the graph of the fast Fourier transform of the function shown in Figure 8.1. In this particular example, the period of f does not divide 2n , which is why the
probability distribution has some spread around multiples of 2n /r instead of having a single spike at each of these values. Suppose that measurement of the state returns v = 427. Since v and 2n are
relative prime, we use the continued fraction expansion of box 8.1 to obtain a guess q for the period. The following table shows a trace of the continued fraction algorithm: i
0.8339844 0.1990632 0.02352941 0.5
The algorithm terminates with 6 = q2 < M ≤ q3 . Thus, q = 6 is our guess for the period of f. Since 6 is even, a 6/2 − 1 = 113 − 1 = 1330 and a 6/2 + 1 = 113 + 1 = 1332 are likely to have a common
factor with M. In this particular example, gcd(21, 1330) = 7 and gcd(21, 1332) = 3. 8.4 The Efficiency of Shor’s Algorithm
This section considers the efficiency of Shor’s algorithm, examining both the efficiency of each part in terms of the number of gates or classical steps needed to implement the part and the expected
number of times the algorithm would need to be repeated. The Euclidean algorithm on integers x > y requires at most O(log x) steps, so both parts 1 and 5 require O(log M) = O(m) steps. The continued
fraction algorithm used in part 4 is related to the Euclidean algorithm and also requires O(m) steps. Part 3 is a measurement of m qubits or, as section 8.5 shows, can be omitted altogether. Part 2
consists of the computation of Uf and the computation of the quantum Fourier transform. Section 7.8.2 showed that the quantum Fourier transform on m qubits requires O(m) steps. The algorithm for
modular exponentiation given in section 6.4 requires O(n3 ) steps could be used to implement Uf . The transformation Uf can be implemented more efficiently using an algorithm for modular
exponentiation, described by Shor, that is based on the most efficient classical method known, and runs in O(n2 log n log log n) time and O(n log n log log n) space. These results show that the
overall runtime of a single iteration of Shor’s algorithm is dominated by the computation of Uf , and that the overall time complexity for a single iteration of the algorithm is O(n2 log n log log
8 Shor’s Algorithm
To show that Shor’s algorithm is efficient, we also need to show that the parts do not need to be repeated too many times. Four things can go wrong: •
The period of f (x) = a x mod M could be odd.
Part 4 could yield M as M’s factor.
The value v obtained in part 3 might not be close enough to a multiple of n
2n . r
A multiple j 2r of 2r is obtained from v, but j and r could have a common factor, in which case the denominator q is actually a factor of the period, not the period itself.
The first two problems appear in the classical reduction, and standard classical arguments bound the probabilities as at most 1/2. For the case in which the period r divides 2n , problem 3 n does not
arise. Shor shows that, in the general case, v is within 1/2 of a multiple of 2r with high n probability. As for problem 4, when r divides 2n , it is not hard to see that every outcome v = j 2r is
equally likely: the state after taking the quantum Fourier transform is C
n −1 2
where G(c) =
2n /r
cx cry exp(2π i n ) = exp(2π i n ) 2 2 y=0
where Xu = {x|f (x) = u}. As we mentioned in section 7.8.1, the final sum is 1 when c is a multiple of 2n /r, and 0 otherwise. Thus, in this case, any j ∈ {0, . . . , r − 1} is equally likely. From j
, we obtain the period r exactly when r and j are relatively prime, gcd(r, j ) = 1. The number of positive integers less than r that are relatively prime to r is given by the famous Euler φ function,
which is known to satisfy φ(r) ≥ δ/ log log r for some constant δ. Thus we need to repeat the parts only O(log log r) times in order to achieve a high probability of success. The argument for the
general case in which r does not divide 2n is somewhat more involved but yields the same result. 8.5 Omitting the Internal Measurement
Part 3 of Shor’s algorithm, the measurement of the second register of the state in equation 8.1 to obtain u, can be skipped entirely. This section first describes the intuition for why this
measurement can be omitted and then gives a formal argument. If the measurement is omitted, the state consists of a superposition of several periodic functions, one for each value of f (x), all of
which have the same period. By the linearity of quantum transformations, applying the quantum Fourier transformation leads to a superposition of the Fourier transforms of these functions. The
different functions remain distinct parts of the superposition and do not interfere with each other because each one corresponds to a different value u of the
8.6 Generalizations
second register. Measuring the first register gives a value from one of these Fourier transforms, n which as before will be close to j 2r for some j and so can be used to obtain the period in the
same way as before. Seeing how this argument can be formalized illustrates some of the subtleties of working with quantum superpositions. Let Xu = {x|f (x) = u}. The state of equation 8.1 can be
written as 2 −1 1 1 |x|f (x) = √ |x|u √ 2n x=0 2n u∈R x∈Xu n
2n −1 1 =√ gu (x)|x |u, 2n u∈R x=0 where R is the range of f (x) and gu is the family of functions indexed by u such that 1 if f (x) = u gu (x) = 0 otherwise. The amplitudes in states with different
u in the second register can never interfere (add or cancel) with each other. The result of applying the transform UF ⊗ I to the preceding state can be written 2n −1 1 1 UF ⊗ I √ UF gu (x)|x |u = √
gu (x)|x |u 2n u∈R x=0 2n u∈R x n −1 2 Gu (c)|c |u, = C u∈R
where Gu (c) is the discrete Fourier transform of gu (x). This results is a superposition of the possible states of equation 8.3 over all possible u. Since the gu all have the same period, measuring
the first part of this state returns a c close to a multiple of 2n /r, just as happened when the second register was measured as part of the original algorithm. 8.6 Generalizations
Shor’s original paper contained not only a quantum factoring algorithm, but also a related algorithm for the discrete logarithm problem. Further generalizations of Shor’s quantum algorithms have been
obtained for problems falling in the general class of hidden subgroup problems. The next two sections, sections 8.6.1 and 8.6.2, require knowledge of group theory. Readers unfamiliar with group
theory should just skim these sections; the results they contain will not be used later in the book, apart from appendix B and the section of the final chapter that reviews more recent algorithmic
results. The basics of group theory are reviewed in boxes.
8 Shor’s Algorithm
8.6.1 The Discrete Logarithm Problem
The discrete logarithm problem is also of cryptographic importance; the security of Diffie-Hellman and El Gamal and elliptic curve public key encryption, for example, rest on the classical difficulty
of this problem. In fact, all standard public key encryption systems and digital signature schemes are based on either factoring or the discrete logarithm problem. Electronic commerce and
communication rely on public key encryption and digital signature schemes for their security and efficiency. It is currently unclear whether a public key encryption system believed to be secure
against classical and quantum attacks can be established before quantum computers are built. If quantum computers win this race, the practical implications will be substantial. Once quantum computers
become a reality, all currently accepted public key encryption systems will be completely insecure. Let Z∗p be the group of integers {1, . . . , p − 1} under multiplication modulo p, and let b be a
generator for this group (any b relatively prime to p − 1 will do). The discrete logarithm of y ∈ Z∗p with respect to base b is the element x ∈ Z∗p such that bx = y mod p. Discrete Logarithm Problem
Given a prime p, a base b ∈ Z∗p , and an arbitrary element y ∈ Z∗p ,
find an x ∈ Z∗p such that bx = y mod p.
For large p, this problem is computationally difficult to solve. The discrete logarithm problem can be generalized to arbitrary finite cyclic groups G, though for some large G it is is not difficult
to solve classically. The discrete logarithm is a special case of the Abelian hidden subgroup problem. Appendix B describes a general algorithm for the Abelian hidden subgroup problem that yields
essentially Shor’s original discrete logarithm algorithm in the special case. The next section discusses hidden subgroup problems. 8.6.2 Hidden Subgroup Problems
The hidden subgroup framework subsumes many of the problems and quantum algorithms we have discussed. Understanding this framework requires experience with group theory. The definition of a group is
reviewed in box 8.2, which also contains examples. Box 8.3 defines some properties of groups and subgroups. Box 8.4 discusses Abelian groups. The Hidden Subgroup Problem Let G be a group. Suppose a
subgroup H < G is implicitly
defined by a function f on G in that f is constant and distinct on every coset of H . Find a set of generators for H . The aim is to find a polylogarithmic algorithm that computes a set of generators
for H in O((log|G|)k ) steps for some k. The difficulty of the problem depends not only on G and F but also on what is meant by given a group G. Some useful properties may be expensive to compute
from certain descriptions of a group and immediate from others. For example, computing the size of a group from certain types of descriptions, such as a defining set of generators and relations, is
known to be computationally hard. Also, we can hope to find a solution in poly-log time only if f itself is computable in poly-log time.
8.6 Generalizations
Box 8.2 Groups
A group is a non-empty set G with an associative binary operation, denoted ◦, satisfying •
(closure) for any two elements g1 and g2 of G, the product g1 ◦ g2 is also in G, an identity element e ∈ G such that e ◦ g = g ◦ e = g, and
every element g ∈ G has an inverse g −1 ∈ G such that g ◦ g −1 = g −1 ◦ g = e.
The associative binary operation ◦ is generally referred to as the group’s product. Often the product is indicated simply by juxtaposition, with the ◦ omitted: g1 ◦ g2 is written simply as g1 g2 .
For some groups, other notation is used for the binary operation. Some examples of groups: • The integers {0, 1, . . . , n − 1} form a group under addition modulo n. This group is denoted Z , n with
binary operator +. •
The set of k-bit strings, Zk2 , forms a group under bitwise addition modulo 2.
For p prime, the set of integers {1, . . . , n − 1} forms a group Z∗p under multiplication modulo p.
The set U(n) of all unitary operators on an n-dimensional vector space V forms a group.
The Pauli group consisting of the eight elements ±I , ±X, ±Y , and ±Z forms a group.
The extended Pauli group consisting of the sixteen elements ωI , ωX, ωY , and ωZ, where ω ∈ {1, −1, −i, i}, forms a group.
Box 8.3 Properties of Groups and Subgroups
The number of elements |G| of a group is called its order. A group is said to be finite if its order is a finite number; otherwise it is an infinite group. A subset H of G that is a group in its own
right, under the restriction of G’s product to H , is called a subgroup of G. The subgroup relation is written H < G. For example, for any integer m dividing n, the set of multiples of m forms a
subgroup of Zn . Also, any subspace W of a vector space V is a subgroup of the group V under vector addition. The Pauli group is a subgroup of the unitary group U (n). The order of an element g is
the size of the subgroup of G that it generates. The order of an element must divide the order of a group. A set of generators of a group G is a subset of G such that all elements of G can be written
as a finite product of the generators and their inverses (in any order and allowing repeats). A set of generators of a group is independent if no generator can be written as a product of the other
generators. A group is finitely generated if a finite set of generators exists. If a group can be generated by a single element it is cyclic. The set of generators for a given group is not unique in
general. The centralizer, Z(H ), of a subgroup H of G is the set of elements of G that commute with all elements of H : Z(H ) = {g ∈ G|gh = hg for all h ∈ H }. For H < G, the centralizer Z(H ) of H
is a subgroup of G.
8 Shor’s Algorithm
Box 8.4 Abelian Groups
A group is Abelian if its group product ◦ is commutative: g1 ◦ g2 = g2 ◦ g1 . The group Zn is Abelian, but the set of unitary operators U(n) is not Abelian. The product G × H of two groups G and H ,
with products ◦G and ◦H respectively, is the set of pairs {(g, h)|g ∈ G, h ∈ H } with the product (g1 , h1 ) ◦ (g2 , h2 ) = (g1 ◦G g2 , h1 ◦H h2 ). The structure of finite Abelian groups is well
understood. Every finite Abelian group is isomorphic to a product of one or more cyclic groups Zni . For example, for the product n of two relatively prime integers p and q, the group Zn is
isomorphic to Zp × Zq . Any finite Abelian group A has a unique decomposition (up to the ordering of the factors) into cyclic groups of prime power order. The decomposition depends only on its order
|A|. Let |A| = i ci be the prime factorization of |A|, s where ci = pi i and the pi are distinct primes. Then A∼ = Zc1 × Zc2 × · · · × Zck .
While the general hidden subgroup problem remains unsolved, a polylogarithmic bounded probability quantum algorithm for the general case of finite Abelian groups, specified in terms of their cyclic
decomposition, exists. The cyclic decomposition for Abelian groups is described in box 8.4. Finite Abelian Hidden Subgroup Problem Let G be a finite Abelian group with cyclic decompo-
sition G = Zn0 × · · · × ZnL . Suppose G contains a subgroup H < G that is implicitly defined by a function f on G in that f is constant and distinct on every coset of H . Find a set of generators
for H . Example 8.6.1 Period-finding as a hidden subgroup problem. Period-finding can be rephrased as a hidden subgroup problem. Let f be a periodic function on ZN with period r that divides N. The
subgroup H < ZN generated by r is the hidden subgroup. Once a generator h for H has been found, the period r can be found by taking the greatest common divisor of h and N : r = gcd(h, N ).
In addition to period-finding, both Simon’s problem and the discrete logarithm problem are instances of the finite Abelian hidden subgroup problem. Recognizing how Simon’s problem can be viewed as a
hidden subgroup problem is relatively easy. Understanding how the discrete logarithm problem is a special case of the hidden subgroup problem requires some ingenuity. Example 8.6.2 The discrete
logarithm problem as a hidden subgroup problem. The discrete log problem asks: Given the group G = Z∗p , where p is prime, a base b ∈ G, and an arbitrary
8.7 References
element y ∈ G, find an x ∈ G such that bx = y mod p. Consider f : G × G → G where f (g, h) = b−g y h . The set of elements satisfying f (g, h) = 1 is the hidden subgroup H of G × G consisting of
tuples of the form (mx, m). From any generator of H , the element (x, 1) can be computed. Thus, solving this hidden subgroup problem yields x, the solution to the discrete logarithm problem. A
crucial ingredient of Shor’s algorithm is the quantum Fourier transform. The quantum algorithm for Simon’s problem also uses a quantum Fourier transform; quantum Fourier transforms can be defined for
all finite Abelian groups (and more generally all finite groups), and the quantum Fourier transform for the group Zn2 is the Walsh-Hadamard transformation W . The solution to the hidden subgroup
problem over an Abelian group G uses the quantum Fourier transform over the group G. The Fourier transformation over an general finite group G is defined in terms of the group representations of G.
These ingredients are described in appendix B, which also describes the general solution to the finite Abelian hidden subgroup problem. It makes use of deeper group theory results than the rest of
the book. No one knows how to solve the hidden subgroup problem over general non-Abelian groups. What progress has been made toward understanding the non-Abelian hidden subgroup problem is discussed
in chapter 13. 8.7 References
Lenstra and Lenstra [193] describes the best currently known classical factoring algorithm, the number field sieve, including its O(exp(n1/3 )) complexity. Some simpler but less efficient classical
factoring algorithms are described in Knuth [182]. Shor’s algorithm first appeared in 1994 [250]. Shor later published an expanded version [253] that contains a detailed analysis of the complexity
and the probability of success. The continued fraction expansion, and the approximations it gives, is described in detail in most standard number theory texts including Hardy and Wright [149]. Its
efficiency and relation to the Euclidean algorithm is discussed in Knuth [182]. The Euler φ function and its properties are also discussed in standard number theory books such as Hardy and Wright
[149]. Kitaev solved the general finite Abelian hidden subgroup problem [172]. Jozsa [165] and [112] provide accessible accounts of the quantum Fourier transform in the context of the hidden subgroup
problem. The general hidden subgroup problem was introduced by Mosca and Ekert in [214]. Koblitz and Menezes, in their 2004 survey [183], give a detailed overview of proposed public key encryption
schemes, including ones not based on factoring or the discrete logarithm problem, as well as the more standard public key schemes. Rieffel [242] discusses the practical implications of quantum
computing for security. There are conferences in the field of post-quantum cryptography. The book Post-Quantum Cryptography [47] contains a compilation of papers on the implications of quantum
computing for cryptography and overviews of some of the more
8 Shor’s Algorithm
promising directions. Perlner and Cooper [228] survey public key encryption and digital signature schemes that are not known to be vulnerable to quantum attacks and discuss design criteria that need
to be met if such a system were to be deployed in the future. 8.8 Exercises Exercise 8.1. Give the exact value of the scale factor C in equation 8.2 in terms of properties of
f and u. Exercise 8.2. Show that with high probability v, the value obtained from the quantum core of n Shor’s algorithm described in section 8.2.1 is within 12 of some multiple of 2r , Exercise 8.3.
Determine the efficiency of Shor’s algorithm in the general case when r does not
divide 2n . Exercise 8.4. Show that the probability that the period of f (x) = a x mod M is odd is at most
1/2. Exercise 8.5. Show that in the general case in which r does not divide 2n , the parts of Shor’s
algorithm need to be repeated only O(log log r) times in order to achieve a high probability of success. Exercise 8.6. Explain how Deutsch’s problem of section 7.3.1 is an instance of the hidden
subgroup problem. Exercise 8.7. Explain how Simon’s problem is an instance of the hidden subgroup problem.
Grover’s Algorithm and Generalizations
Grover’s algorithm is the most famous algorithm in quantum computing after Shor’s algorithm. Its status, however, differs from that of Shor’s in a number of respects. Shor’s algorithm solves a
problem with clear practical consequences, but its application is focused on a narrow, if important, range of problems. In contrast, Grover’s algorithm and its many generalizations can be applied to
a broad range of problems, although, as section 9.6 explains, there is debate as to how far-reaching the practical implications of Grover’s algorithm and its generalizations are. √ Grover’s algorithm
solves a black box problem. It succeeds in finding a solution with O( N ) calls to the oracle, whereas the best possible classical approaches require O(N ) calls. Thus, unlike Shor’s algorithm,
Grover’s algorithm is provably better than any possible classical algorithm. This query complexity improvement over the classical case translates to a speedup only under certain conditions; it
depends on the efficiency with which the black box can be implemented, and on whether there is additional structure to the problem that can be exploited by classical and quantum algorithms. This
issue will be discussed in section 9.6. Even when the query complexity result translates to √a time complexity improvement, the speedup is much less than for Shor’s algorithm. The O( N ) query
complexity of Grover’s algorithm is known to be optimal; no quantum algorithm can do better. This restriction is as important as the algorithm itself. It places a severe restriction on the power of
quantum computation. Although Grover’s algorithm is usually presented as succeeding with high probability, unlike for Shor’s algorithm, variations that succeed with certainty are known. Grover’s
algorithm is simpler and easier to grasp than Shor’s, and has an elegant geometric interpretation. Section 9.1 describes Grover’s algorithm and determines its query complexity. Section 9.2 covers
amplitude amplification, a generalization of Grover’s algorithm. It also provides a simple geometric view of the algorithm. The optimality of Grover’s algorithm is proved in section 9.3. Section 9.4
shows how to derandomize Grover’s algorithm while preserving its efficiency. Section 9.5 generalizes Grover’s algorithm to handle cases in which the number of solutions is not known. Section 9.6
discusses black box implementability, explains under what circumstances the query complexity results translate into a speedup, and evaluates the extent of practical potential applications for
Grover’s algorithm.
9 Grover’s Algorithm and Generalizations
9.1 Grover’s Algorithm
Grover’s algorithm uses amplitude amplification to search an unstructured set of N elements. The problem is usually stated in terms of a Boolean function, or predicate, P : {0, . . . , N − 1} → {0,
1} that captures the property being searched for. The goal of the problem is to find a solution, an element x with P (x) = 1. As in Simon’s problem and the Deutsch-Jozsa problem, the predicate P is
viewed an oracle, or black box, and we will concern ourselves with the query complexity, the number of calls made to the oracle P . Given a black box that outputs P (x) upon input of x, the best
classical approaches must, in the single solution case, inspect an average of N/2 values; it requires an average of N/2 evaluations of the predicate P (x). Given a quantum black box UP that outputs
cx |x|P (x) x
upon input of cx |x|0, x
√ Grover’s algorithm finds a solution with only O( N ) calls to UP in the single solution case. Grover’s algorithm iteratively increases the amplitudes cx of those values x with P (x) = 1, so that a
final measurement will return a value x of interest with high probability. For practical applications of Grover’s algorithm, the predicate P must be efficiently computable, but without enough
structure to enable classical methods to gain on the quantum algorithm. 9.1.1 Outline
Grover’s algorithm starts with an equal superposition |ψ = √1N x |x of all N values of the search space and repeatedly performs the same sequence of transformations: 1. Apply UP to |ψ. 2. Flip the
sign of all basis vectors that represent a solution. 3. Perform inversion about the average, a transformation that maps every amplitude A − δ to A + δ, where A is the average of the amplitudes. For
the case of a single solution, figure 9.1 illustrates how these steps increase the amplitude of the basis vector of a solution. We now look at this process in detail. 9.1.2 Setup
Without loss of generality, let N = 2n for some integer n, and let X be the state space generated by {|0, . . . , |N − 1}. Let UP be a quantum black box that acts as UP : |x, a → |x, P (x) ⊕ a, for
all x ∈ X and single-qubit states |a.
9.1 Grover’s Algorithm
Figure 9.1 The iteration step of Grover’s algorithm is achieved by (a) changing the sign of the good elements and (b) inverting about the average. The case of a single solution is illustrated.
Let G = {x|P (x)} and B = {x|¬P (x)} denote the good and bad values respectively, and let the number of good states be a small fraction of the total number of states: |G| << N. Let |ψG =
|G| x∈G
be an even superposition of all the good states, and 1 |x |ψB = |B | x∈B be an even superposition of the bad ones. Then |ψ = W |0, an equal superposition of all N values, can be written as a
superpositions of |ψG and |ψB 2 −1 1 |ψ = √ |x = g0 |ψG + b0 |ψB 2n x=0 where g0 = |G|/N and b0 = |B |/N . The core of Grover’s algorithm is the repeated application of a unitary transformation n
Q : gi |ψG + bi |ψB → gi+1 |ψG + bi+1 |ψB that increases the amplitude gi of good states (and decreases bi ) until a maximal value is reached. After applying the amplitude amplifying transformation Q
an appropriate number of times j ,
9 Grover’s Algorithm and Generalizations
almost all amplitude will have shifted to good states, so that |bj | % |gj |. At this point, measurement will return an √x ∈ G with high probability. The exact number of times Q needs to be applied
is on the order of N and depends on both N and |G|. Section 9.1.4 presents a detailed analysis. 9.1.3 The Iteration Step
The transformation Q is achieved by changing the sign of the good elements and then inverting about the average. This section describes the implementation of these two steps in detail. Both steps
take real amplitudes to real amplitudes, so we will refer only to real amplitudes throughout the argument.
those |x such that x ∈ G, apply 7.4.2 showed that
π . A sign SG
cx |x of exactly change is simply a phase shift by e = −1. Section
Changing the Sign of the Good Elements To change the sign in a superposition
π |ψ) ⊗ H |1. UP (|ψ ⊗ H |1) = (SG
Changing the sign of the good elements is accomplished by UP : (gi |ψG + bi |ψB ) ⊗ H |1 → (−gi |ψG + bi |ψB ) ⊗ H |1. The number of gates needed to change the sign on the good elements does not
depend on N , but rather on how many gates it takes to compute UP . Inversion About the Average Inversion about the average sends a|x to (2A − a)|x where A is the average of the amplitudes of all
basis vectors in the superposition. (See figure 9.1.) It is easy to see that the transformation N −1 i=0
ai |xi →
(2A − ai )|xi
is performed by the unitary transformation ⎞ ⎛ 2 2 2 −1 ... N N N 2 2 2 ⎟ ⎜ −1 ... N N N ⎟. D=⎜ ⎝ ... ... ... ... ⎠ 2 2 . . . N2 − 1 N N This paragraph shows how to implement this transformation with
O(n) = O(log2 (N )) quantum gates. Following Grover, define D = −W S0π W , where W is the Walsh-Hadamard transform and ⎛ ⎞ −1 0 . . . 0 ⎜ 0 1 0 ... ⎟ ⎟ S0π = ⎜ ⎝ 0 ... ... 0 ⎠ 0 ... 0 1
9.1 Grover’s Algorithm
is the phase shift by π of the basis vector |0 described in section 7.4.2. To see that D = −W S0π W , let ⎛ ⎞ 2 0 ... 0 ⎜ 0 0 0 ... ⎟ ⎟ R=⎜ ⎝ 0 ... ... 0 ⎠. 0 ... 0 0 Since S0π = I − R, −W S0π W = W
(R − I )W = W RW − I. Since Rij = 0 for i = 0 or j = 0, (W RW )ij = Wi0 R00 W0j =
2 N
and −W S0π W = W RW − I = D. Putting inversion about the average together with changing the sign of the good elements yields the iteration transformation π Q = −W S0π W SG .
9.1.4 How Many Iterations?
This section examines the result of multiple application of the iteration step Q, which combines changing the sign and inverting about the average, in order to determine the optimal number of times
to apply Q. It shows that Q is a fixed rotation and that the amplitude gi of good states varies periodically with the number of iterations. To find a solution with high probability, the number of
iterations i must be chosen carefully. To determine the correct number of iterations to use, we describe the result of applying Q in terms of recurrence relations on gi and bi . π transforms gi |ψG +
bi |ψB to gi+1 |ψG + bi+1 |ψB . First, The iteration step Q = DSG π SG : gi |ψG + bi |ψB → −gi |ψG + bi |ψB .
To compute the average amplitude, Ai , the term −gi |ψG contributes |G| amplitudes −gi | G| and bi |ψB contributes |B | amplitudes bi . |B | Thus, altogether |B |bi − |G|gi . Ai = N
9 Grover’s Algorithm and Generalizations
Inversion about the average transforms gi bi 2Ai + |x + 2Ai − |x D : −gi |ψG + bi |ψB → | G| |B | x∈G x∈B
= (2Ai |G| + gi )|ψG + (2Ai |B | − bi )|ψB = gi+1 |ψG + bi+1 |ψB where
gi+1 = 2Ai |G| + gi ,
bi+1 = 2Ai |B | − bi . Let t be the probability that a random value in {0, . . . , N − 1} satisfies P . Then t = |G|/N and 1 − t = |B |/N. Then
|B ||G|bi − |G|gi = t (1 − t)bi − tgi , Ai |G| = N
|B |bi − |B ||G|gi = (1 − t)bi − t (1 − t)gi . A i |B | = N The recurrence relation can be written in terms of t: gi+1 = (1 − 2t)gi + 2 t (1 − t)bi , bi+1 = (1 − 2t)bi − 2 t (1 − t)gi √ √ where g0 =
t and b0 = 1 − t. It is easy to verify that gi = sin((2i + 1)θ ) bi = cos((2i + 1)θ )
√ is a solution to these equations with sin θ = t = |G|/N . We are now ready to compute the optimum number of iterations of Q. To maximize the probability of measuring a good state, and thus finding
an element with the desired property P , we wish to choose i such that sin((2i + 1)θ ) ≈ 1 or (2i + 1)θ ≈ π/2. For |G| % N theangle θ becomes very small and |G|/N = sin θ ≈ θ . Thus, gi will be
maximal for i ≈ π N/|G|. 4 Additional iteration will reduce the success probability of the algorithm. This situation is in contrast to many classical algorithms in which the greater the number of
iterations the better the results. Using the equations for gi and bi , for t = 1/4, the optimum number of iterations is 1, and for t = 1/2, no amount of iteration will improve the situation.
9.2 Amplitude Amplification
Since every step of the iteration process has been written as a linear combination of |ψG and |ψB with real coefficients, Grover’s algorithm can be viewed as acting in the real twodimensional
subspace spanned by |ψG and |ψB . The algorithm simply shifts amplitude from |ψB to |ψG . This picture leads to an elegant geometric interpretation of Grover’s algorithm discussed in section 9.2.1.
First, we describe a generalization of Grover’s algorithm, amplitude amplification, to which this geometric picture also applies. 9.2 Amplitude Amplification π The first step of Grover’s algorithm
applies the iteration operator Q = −W S0π W SG to the initial state W |0. We can look at W as a trivial algorithm that maps |0 to all possible values and thus to a solution with probability |G|/N .
Suppose we have an algorithm U such that U |0 gives an initial solution with a higher probability. This section shows that the analysis of 9.1.4 generalizes directly to any algorithm U such that U |0
has some amplitude in the good states. Amplitude amplification π with generalizes Grover’s algorithm by replacing the iteration operator Q = −W S0π W SG π Q = −U S0π U −1 SG .
The rest of this section generalizes the argument of section 9.1.4 to obtain the same recurrence relations for this more general case. Let G and B be the subspaces spanned by {|x|x ∈ G} and {|x|x ∈
G} respectively, and let PG and PB be the associated projection operators. Let |ψ = U |0 be written as |ψ = g0 |ψG + b0 |ψB where |ψG and |ψB are the normalized projections of |ψ onto the good and
bad subspaces, |ψG =
1 PG |ψ, g0
and |ψB =
1 PB |ψ, b0
with g0 = |PG |ψ|, and b0 = |PB |ψ|. For U = W , |ψG , |ψB , g0 , and b0 are as in section 9.1.4. Here g0 and b0 are not determined by the number of solutions, but rather by the properties of U
relative to the good states. The states
9 Grover’s Algorithm and Generalizations
|ψG and |ψB need not be equal superpositions of the good and bad states respectively, but g0 and b0 are still real. Again, we let t = g02 with 1 − t = b02 , where t should be thought of as the
probability that measurement of the superposition U |0 yields a state that satisfies predicate P . The operator U can be viewed as a reversible algorithm that maps |0 to a set of solutions in G 2
with a probability t = |g0 | . π , recall from section 7.4.2 that S0π |ϕ can be To understand the effect of Q = −U S0π U −1 SG written as |ϕ − 2 0|ϕ|0. For an arbitrary state |ψ, U S0π U −1 |ψ = U U
−1 |ψ − 2 0|U −1 |ψ|0 = |ψ − 2 0|U −1 |ψU |0 = |ψ − 2 ψ|U |0U |0. π |ψG = −|ψG and SBπ |ψB = |ψB , Since SG π Q|ψG = −U S0π U −1 SG |ψG
= U S0π U −1 |ψG = |ψG − 2g0 U |0 = |ψG − 2g0 g0 |ψG − 2g0 b0 |ψB = (1 − 2t)|ψG − 2 t (1 − t)|ψB and Q|ψB = −|ψB + 2b0 U |0 = −|ψB + 2b0 g0 |ψG + 2b0 b0 |ψB g0 = −|ψB + 2(1 − t) |ψG + 2(1 − t)|ψB b0
= (1 − 2t)|ψB + 2 t (1 − t)|ψG . An arbitrary real superposition of |ψG and |ψB is transformed by Q as follows: Q(gi |ψG + bi |ψB ) = (gi (1 − 2t) + 2bi t (1 − t))|ψG + (bi (1 − 2t) − 2gi t (1 − t))|
ψB , which leads to the same recurrence relation as in the previous section, gi+1 = (1 − 2t)gi + 2 t (1 − t)bi bi+1 = (1 − 2t)bi − 2 t (1 − t)gi , with the solution
9.2 Amplitude Amplification
gi = sin((2i + 1)θ ) bi = cos((2i + 1)θ ) √ for sin θ = t = g0 . Thus, for small g0 , the amplitude gi will be maximal after i ≈ π4 g1 iterations. If the algorithm 0 U succeeds with probability t,
then simple classical repetition of U requires an average of 1/t iterations to find a solution. Amplitude amplification speeds up this process so that it takes only √ O( 1/t) tries to find a
solution. If U has no amplitude in the good states, g0 will be zero and amplitude amplification will have no effect. Furthermore, just as no amount of iteration in Grover’s algorithm improves the
probability if t = 1/2, if g0 is large, amplitude amplification cannot improve the situation. For this reason, amplitude amplification applied to an algorithm U that is the result of amplitude
amplification does not improve the results. 9.2.1 The Geometry of Amplitude Amplification
The reasoning behind amplitude amplification, including the optimal number of iterations of Q to perform, can be reduced to a simple argument in two-dimensional Euclidean geomeπ be as defined before.
This section shows that the try. Let |ψG , |ψB , and Q = −U S0π U −1 SG entire discussion of amplitude amplification, and Grover’s algorithm in particular, reduces to a simple geometric argument
about rotations in the two-dimensional real subspace generated by {|ψG , |ψB }. By the definition of |ψG and |ψB , the initial state U |0 = g0 |ψG + b0 |ψB has real amplitudes g0 and b0 , so is in
the two-dimensional real plane spanned by {|ψG , |ψB }. The smaller the success probability t, the closer U |0 is to |ψB . Let β be the angle between U |0 and |ψG illustrated in figure 9.2. The angle
β depends only on the probability t = g02 that the initial state U |0, if measured, gives a solution: cos(β) = ψG |U |0 = g0 . The rest of this section explains how each iteration of Grover’s
algorithm rotates the state by a fixed angle in the direction of the desired
U 0
g0 b0
Figure 9.2 The initial state U |0 in the basis {|ψG , |ψB }.
9 Grover’s Algorithm and Generalizations
U 0 B
1 Figure 9.3 π The transformation SG reflects |ψ0 about |ψB , result in the state |ψ1
state. To maximize the amplitude in the good states, we iterate until the state is close to |ψG . From the simple geometry of the situation, we can determine both the optimal number of iterations and
the probability that the run succeeds. Amplitude amplification, and Grover’s algorithm as the special case when U = W , consists π of repeated applications of Q = −U S0π U −1 SG . To understand this
transformation geometrically, π can be viewed as a reflection about the recall from section 7.4.2 that the transformation SG hyperplane perpendicular to |ψG . In the plane spanned by {|ψG , |ψB },
this hyperplane reduces π maps an arbitrary to the one-dimensional space spanned by |ψB . Figure 9.3 illustrates how SG π state |ψ0 in the {|ψG , |ψB } subspace to |ψ1 = SG |ψ0 . Similarly, the
transformation S0π is a reflection about the hyperplane orthogonal to |0. Since U S0π U −1 differs from S0π by a change of basis, it is a reflection about the hyperplane orthogonal to U |0. The
effect of this transformation on |ψ1 is shown in figure 9.4. The final negative sign reverses the direction of the state vector, shown in figure 9.5. (Strictly speaking, this negative sign is
unnecessary, since it does nothing to the quantum state: it is a global phase change, so it is physically irrelevant. However, since we are drawing our pictures in the plane, not in projective space,
the negative sign makes it easier to see what is going on.) Recall from Euclidean geometry that the concatenation of two reflections is a rotation of twice the angle between the axes of the two
reflections. The two axes of reflection in this case are perpendicular to U |0 and |ψG respectively, so the angle between the axes of reflection is −β where cos β = g0 as before. The two reflections
perform a rotation by −2β, and the final negation amounts to a rotation by π . Thus, each step Q performs a rotation by π − 2β. Let θ = π2 − β, the angle between U |0 and |ψB , so sin θ = g0 as it
did in the analyses of the previous sections. Each iteration of Q rotates the state by 2θ , so the angle after i steps is (2i + 1)θ.
9.2 Amplitude Amplification
U 0 B
Figure 9.4 The transformation U S0π U −1 reflects |ψ1 about a line perpendicular to U |0, resulting in the state |ψ2 .
U 0 B
1 2 Figure 9.5 The final negative sign maps |ψ2 to |ψ3 for a total rotation of π − 2β.
9 Grover’s Algorithm and Generalizations
As before, the amplitude in the good states after i steps is given by gi = sin((2i + 1)θ ). We solve for the optimal number of iterations just as we did at the end of section 9.1.4. 9.3 Optimality of
Grover’s Algorithm
As important as Grover’s algorithm itself is the proof that Grover’s algorithm is as good as any possible quantum algorithm for exhaustive search. Even before Grover discovered his algorithm,
researchers had proved a lower bound on the query complexity of any √ possible quantum algorithm for exhaustive search: no quantum algorithm can use fewer than ( N ) calls to the predicate UP . Thus,
Grover’s algorithm is optimal. This result places a severe limit on what quantum computers can ever hope to do. The exponential size of the quantum state space gives naive hope that quantum computers
could provide an exponential speedup for all computations; popular press accounts of quantum computers still widely make this claim. A less naive guess would be that quantum computers can provide
exponential speedup for any computation that can be parallelized and requires only a single answer output. But the optimality of Grover’s algorithm shows that even that hope is too optimistic;
exhaustive search is easily parallelized and requires only a single answer, but quantum computers can provide only a relatively small speedup. This section sketches a proof of optimality in the case
of a single solution x. The proof bounds the number of calls to the oracle UP . The argument generalizes to the case of multiple solutions. Section 7.4.2 shows how Sxπ can be computed from UP . We
use Sxπ as the interface to the oracle. We do not lose any generality in doing so; the process of computing Sxπ from UP is reversible, so any algorithm using Sxπ could be rewritten in terms of UP and
vice versa. Since the oracle UP provides us with the only way to access any information about the element x we are searching for, an arbitrary quantum search algorithm can be viewed as an algorithm
that alternates between unitary transformations independent of x and calls to Sxπ ; any quantum search algorithm can be written as |ψkx = Uk Sxπ Uk−1 Sxπ . . . U1 Sxπ U0 |0, where the Ui are unitary
transformations that do not depend on x. The argument does not change if we allow the use of additional qubits; we simply use I ⊗ Sxπ instead of Sxπ and, since N is now larger, the algorithm will be
less efficient. It is important to recognize that the algorithm must work no matter which x is the solution. For any particular x, there are transformations that find x very quickly. We want an
algorithm that finds x quickly no matter what x is. Any search algorithm worth the name must return x with reasonable probability for all possible values of x. We consider only quantum search
algorithms that return x with at least probability p = 1/2. It is easy for the reader to check that any value √ 0 < p < 1 results in an O( N ) bound, just with a different constant. More formally, we
will show that if the state |ψkx , obtained after k steps of the form Ui Sxπ , satisfies
9.3 Optimality of Grover’s Algorithm
| x|ψkx |2 ≥
√ for all x, then k must be ( N ). This paragraph describes the rough strategy and intuition behind the proof. The requirement that the algorithm work for any x means that if the oracle interface is
Sxπ , then the result of applying the algorithm Uk Sxπ Uk−1 Sxπ . . . U1 Sxπ U0 |0 must be a state |ψkx sufficiently close to |x that x will be obtained upon measurement with high probability. Since
two elements of the standard basis |x and |y cannot be closer than a certain constant, the final states of the algorithm for different Sxπ and Syπ must be sufficiently far apart. Since the Ui are all
the same, any difference in the result of running the algorithm arises from calls to Sxπ . The algorithms all start with the same state U0 |0, so if we can bound from above the amount each step
increases the distance between y |ψix and |ψi , then we can obtain a bound on k, the number of calls to the oracle interface Sxπ . In other words, we want to bound from above the amount this distance
can increase by applying y y x and Ui Syπ to |ψi−1 . To obtain this bound, we compare both |ψix and |ψi with Ui Sxπ to |ψi−1 π |ψi , the state obtained by applying U0 up through Ui without any
intervening calls √ to Sx . We first give the details of how to use inequalities based on these ideas to prove that ( N ) calls to the oracle are required, and then give detailed proofs of each of
the inequalities. 9.3.1 Reduction to Three Inequalities
The proof considers the relation between three classes of quantum states: the desired result |x, the state of the computation |ψkx after k steps, and the state |ψk = Uk Uk−1 . . . U1 U0 |0 obtained
by performing the sequence of transformations Ui without consulting the oracle. The analysis simplifies if we sometimes consider, instead of |x, a phase-adjusted version of |x, namely x x |xk = eiθk
|x, where eiθk = x|ψkx /| x|ψkx |. The phase adjustment is chosen so that xk |ψkx 2 is positive real for all k. Since |xk differs from |x only in a phase, whenever | x|ψkx | ≥ 12 , we have a similar
inequality for |xk , namely 1 2
| xk |ψkx |2 ≥ , in which case xk |ψkx ≥ √12 . We consider the distances between certain pairs of these states: dkx = ||ψkx − |ψk | akx = ||ψkx − |xk | ckx = ||xk − |ψk |. The proof establishes
bounds involving the sum, or average, of these distances squared: Dk =
1 2 d , N x kx
Ak =
1 2 a , N x kx
Ck =
1 2 c . N x kx
9 Grover’s Algorithm and Generalizations
The reason for considering the sum, or equivalently the average, is that any generally useful search algorithm must efficiently find x for all possible x. The proof relies on three inequalities
involving Dk , Ak , and Ck , which we will prove in section 9.3.2. Before proving the inequalities, we describe them and show how they imply a lower bound on the number of calls to the oracle. The
first inequality bounds from above Ak , the average squared distance between the state |ψkx obtained after k steps and and the phase adjusted solution state |xk ; section 9.3.2 shows 2 that in order
to obtain a success probability of | x|ψkx | ≥ 12 , the following inequality must hold: √ Ak ≤ 2 − 2. The second inequality bounds Ck , the sum of the squared distances between the vector |ψk and all
basis vectors |j , from below as long as N ≥ 4: Ck ≥ 1. The third inequality bounds the growth of Dk , the average squared distance between |ψkx and |ψk as k increases: Dk ≤
4k 2 . N
The three quantities dkx , akx , and ckx are related as follows: dkx = ||ψkx − |ψk | = ||ψkx − eiθx |x + eiθx |x − |ψk | ≥ akx − ckx . k
To relate the quantities Dkx , Akx , and Ckx , we use the Cauchy-Schwarz inequality (see box 9.1) to obtain Box 9.1 The Cauchy-Schwarz Inequality
We use the Cauchy-Schwarz inequality in two forms, the general form i
$⎛ ⎞⎛ ⎞ % % %⎝ ui v i ≤ & u2i ⎠ ⎝ vi2 ⎠, i
and a specialization for vi = 1 in an N dimensional space √ ' ui ≤ N u2i . i
9.3 Optimality of Grover’s Algorithm
1 2 d N x kx 1 2 2 ≥ akx − 2 akx ckx + ckx N x x x $ % 1 2 1 2 2% 2 2 & ≥ akx − akx ckx + c N x N N x kx x x ≥ Ak − 2 Ak Ck + Ck .
Dk =
Making use of this inequality and the three earlier ones, we bound
4k 2 N
from below by a constant:
4k 2 ≥ Dk N
≥ Ak − 2 Ak Ck + Ck 2 = Ck − Ak
√ 2 ≥ 1− 2− 2 ,
√ since1 ≥ 2 − 2 ≥ Ak . Thus, for N ≥ 4 (needed for the second inequality), and taking q = √ √ 2 1 − 2 − 2, at least k ≥ q2 N iterations are required for a success probability of | x|ψkx | ≥ 12 for
all x. We now turn to the proofs of the three inequalities. 9.3.2 Proofs of the Three Inequalities The inequality for Ak
and |xk ,
By assumption, | ψkx |x| ≥ 12 . By the choice of phase eiθk relating |x 2
ψkx |xk ≥ √ , 2 so 2 = ||ψkx − |xk | akx
= ||ψkx | − 2 xk |ψkx + ||xk | √ ≤ 2 − 2, 2
from which it follows that Ak =
√ 1 2 akx ≤ 2 − 2. N x
9 Grover’s Algorithm and Generalizations
2 Bound on sum squared distance to all basis vectors The terms ckx can be bounded as follows: 2 ckx = ||xk − |ψk |
= |eiθk |x − |ψk | x
= ||ψk | − eiθk ψk |x − eiθk ψk |x + ||x| 2
= 2 − 2Re(eiθk ψk |x) ≥ 2 − 2| x|ψk |. We can now bound the average of these terms: 1 2 c N x kx 2 ≥ 2− | x|ψk | N x ' 2 ≥ 2− √ | x|ψk |2 N x
Ck =
2 = 2− √ , N
(9.3) (9.4)
where inequality 9.3 follows from the Cauchy-Schwarz inequality (box 9.1), and equation 9.4 holds because |ψk is a unit vector and {|x} forms a basis. Thus, the second inequality Ck ≥ 1 holds as long
as N ≥ 4. As an aside, since this argument made no assumption about |ψk , the bound on the sum of the distances to all basis vectors holds for any quantum state: 2 1 ||x − |ψ|2 ≥ 2 − √ N x N for any
|ψ. First, we bound how much the distance between |ψkx and |ψk can increase each step. Consider the following relation between dkx and dk+1,x :
The inequality for Dk
x − |ψk+1 | dk+1,x = ||ψk+1
= |Uk+1 Sxπ |ψkx − Uk+1 |ψk | = |Sxπ |ψkx − |ψk | = |Sxπ (|ψkx − |ψk ) + (Sxπ − I )|ψk |
9.4 Derandomization of Grover’s Algorithm and Amplitude Amplification
≤ |Sxπ (|ψkx − |ψk )| + |(Sxπ − I )|ψk | = dkx + 2| x|ψk |. This inequality shows that with each step the distance between |ψkx and |ψk can increase by at most 2| x|ψk |. Using this bound, we prove
by induction that Dk =
1 2 4k 2 . dkx ≤ N x N
Base case For k = 0, for all x, we have |ψ0x = U0 |0 = |ψ0 , so d0x = 0 and therefore D0 = 0. Induction step 1 2 d Dk+1 = N x k+1,x ≤
1 (dkx + 2| x|ψk |)2 N x
1 2 4 4 dkx + | x|ψk |2 + dkx | x|ψk | N x N x N x
= Dk +
4 4 + dkx | x|ψk |. N N x
The Cauchy-Schwarz inequality gives $ ( % 1% 1 Dk 2 2 & dkx | x|ψk | ≤ dkx | x|ψk | = . N x N N x x Using the induction assumption Dk ≤ Dk+1
4k 2 , N
we have
( 4(k + 1)2 4 Dk ≤ . ≤ Dk + + 4 N N N
9.4 Derandomization of Grover’s Algorithm and Amplitude Amplification
Unlike Shor’s algorithm, Grover’s algorithm is not inherently probabilistic. With a little cleverness, Grover’s algorithm can be modified in such a way that it is guaranteed to find a solution while
still preserving the quadratic speedup. More generally, amplitude amplification can be derandomized. Brassard, Høyer, and Tapp suggest two approaches. In the first, each iteration rotates by an angle
that is slightly smaller than the one used in section 9.2.1, while the
9 Grover’s Algorithm and Generalizations
second changes only the last step to a smaller rotation. This section describes each approach in turn. 9.4.1 Approach 1: Modifying Each Step
Suppose the angle θ in Grover’s algorithm or amplitude amplification happened to be such that π π − 12 were an integer. In this case, after i = 4θ − 12 iterations, the amplitude gi would be 1 and the
4θ algorithm would output a solution with certainty. Recall from section 9.2 that θ satisfies sin θ = √ t = g0 . To derandomize amplitude amplification for algorithm U with success probability g0 ,
we modify U to obtain an algorithm U with success probability g0 < g0 such that, for θ satisfying sin θ = g0 , the quantity 4θπ − 12 is an integer. Intuitively, it seems as though it should not be
hard to modify an algorithm U so that it is less successful, but we must make sure that we can compute such a U efficiently from U . The trick is to allow the use of an additional qubit b. Given an
algorithm U with success probability g0 acting on an n-qubit register |s, define U to be the transformation U ⊗ B on an (n + 1)-qubit register |s|b, where B is the single-qubit transformation ' ' g0
g0 B = 1 − |0 + |1. g0 g0 Let G be the set of basis states |x ⊗ |b such that |x ∈ G and |b = |1. The reader may check that the initial success probability |PG U |0| is indeed g0 . Amplitude
amplification, now on an π π π −1 π SG , (n + 1)-qubit state, with U for U , SG for SG , and iteration operator Q = −U S0 (U ) π 1 succeeds with certainty after i = 4θ − 2 steps.
This modified algorithm obtains a solution with certainty, using O( 1t ) calls to the oracle, at the at the cost of a single additional qubit. 9.4.2 Approach 2: Modifying Only the Last Step
1 time with This approach is more complicated to describe, but results in a solution in O t π π certainty without the need for an additional qubit. The idea is to modify SG and S0 in the last step so
that exactly the desired final state is obtained. To this end, we begin by analyzing general properties of transformations of the form τ Q(φ, τ ) = −U S0 U −1 SG , φ
where φ and τ are both arbitrary angles and iφ e |x if |x ∈ X φ SX |x = |x if |x ∈ / X. φ
Section 7.4.2 showed how to implement SX efficiently. First, we show that for any quantum state |v, φ U S0 U −1 |v = |v − 1 − eiφ v|U |0U |0.
9.4 Derandomization of Grover’s Algorithm and Amplitude Amplification
Write |v =
v|U |iU |i + v|U |0U |0.
Then φ U S0 U −1 |v
φ U S0
v|U |i|i + v|U |0|0
v|U |i|i + v|U eiφ |0|0
v|U |iU |i + eiφ v|U |0U |0
= |v − 1 − eiφ v|U |0U |0. τ Using this result, we now can see the effect of Q(φ, τ ) = U S0 U −1 SG on any superposition |v = g|vG + b|vB in the subspace spanned by |vG and |vB . We have φ
Q(φ, τ )|v = g(−eiτ |vG + eiτ (1 − eiφ ) vG |U |0U |0) + b(−|vB + (1 − eiφ ) vB |U |0U |0). π After s = 4θ − 12 ! iterations of amplitude amplification we have the state |ψs = √ sin ((2s + 1)θ) |ψG +
cos ((2s + 1)θ ) |ψB , where sin θ = t = g0 . Applying Q(φ, τ ) to the states |ψG and |ψB , we obtain Q(φ, τ )|ψG = eiτ (1 − eiφ )g02 − 1 |ψG + eiτ (1 − eiφ )g0 b0 |ψB ), Q(φ, τ )|ψB = (1 − eiφ )b0
g0 |ψG + (1 − eiφ )b02 − 1 |ψB ).
So Q(φ, τ )|ψ = g(φ, τ )|ψG + b(φ, τ )|ψB , where g(φ, τ ) = sin ((2s + 1)θ ) eiτ (1 − eiφ )g02 − 1 + cos ((2s + 1)θ ) (1 − eiφ )b0 g0 b(φ, τ ) = sin ((2s + 1)θ ) eiτ (1 − eiφ )g0 b0 + cos ((2s + 1)θ
) (1 − eiφ )b02 − 1 . τ Our aim now is to show that there exist φ and τ such that if Q(φ, τ ) = U S0 U −1 SG is applied as a final step, a solution is obtained with certainty. To show that φ and τ
can be chosen so that Q(φ, τ )|ψ has all of its amplitude in the good states, we want b(φ, τ ) = 0 or φ
9 Grover’s Algorithm and Generalizations
sin ((2s + 1)θ ) eiτ (1 − eiφ )g0 b0 + cos ((2s + 1)θ ) (1 − eiφ )b02 − 1 = 0,
eiτ (1 − eiφ )g0 1 − g02 sin ((2s + 1)θ ) = 1 − (1 − eiφ )(1 − g02 ) cos ((2s + 1)θ ) ,
since b0 = 1 − g02 . Since the right-hand side equals 2 g0 (1 − eiφ ) + eiφ cos ((2s + 1)θ ) , we want φ and τ to satisfy cot ((2s + 1)θ ) =
eiτ (1 − eiφ )g0 1 − g02 g02 (1 − eiφ ) + eiφ
Once φ is chosen, we choose τ to make the right-hand side real. To find φ, compute the magnitude squared of the right-hand side of equation 9.5 g02 b02 (2 − 2 cos φ) . 4 g0 (2 − 2 cos φ) − g02 (2 − 2
cos φ) + 1 The maximum value of the magnitude squared, obtained when cos φ = −1, is 4g02 b02 4g02 b02 . = 4g04 − 4g02 + 1 (2g02 − 1)2 So the maximum magnitude is 2g0 b0 2g0 b0 = tan(2θ ), = 2 2g02 −
1 g0 − b02 √ where sin θ = t = g0 as before. Thus, φ and τ can be chosen to make the right-hand side of equation 9.5 any real number between [0, tan(2θ)]. By the geometric interpretation of section π
− 12 ! iterations, the state has been rotated to within 2θ of the desired state. 9.2.1, after s = 4θ Thus, we have shown that φ and τ can be chosen so that applying s iterations of Q, followed by one
application of Q(φ, τ ), yields a solution with certainty. 9.5 Unknown Number of Solutions
Grover’s algorithm requires that we know the relative number of solutions t = |G|/N in order to determine how many times we should apply the transformation Q. More generally, amplitude 2
amplification requires as input the success probability t = |g0 | of U |0. This section sketches two approaches to handling cases in which we do not know t. The first approach repeats Grover’s
algorithm multiple times, choosing a random number of iterations of Q in each run. While inelegant, this approach does succeed in finding a solution with high probability. The second
9.5 Unknown Number of Solutions
approach, called quantum √ counting, uses the quantum Fourier transform to estimate t. Both approaches require O( N ) calls to UP . 9.5.1 Varying the Number of Iterations
Consider Grover’s algorithm applied to a problem with tN solutions in a space of cardinality N. When t is unknown, a simple strategy is to repeatedly√execute Grover’s algorithm with a number of
iteration steps picked randomly between 0 and π4 N . For large values of t, this simple approach√is clearly not optimal. Nevertheless, as we show, this simple strategy succeeds with at most O( N )
calls to UP regardless of the value of t. The results of section 9.1.4 imply that the average probability of success for a run with i iterations of Q, where i is randomly chosen between 0 and r, is
given by 1 2 sin ((2i + 1)θ ), r i=0 r−1
P r(i < r) =
√ where sin θ = t as before. A plot of the average success probability for different values of r is shown in figure 9.6. The graph will be identical for all values of t as long as t % 1. For
comparison, the graph of the success probability after exactly r iteration steps of Grover’s algorithm is also given. It is easy to see from the graph of this function that there is a constant c such
that P r(i < r) > c √ π for all r ≥ 4 1t . For 1t ≤ N , guaranteeing at least one solution, if we choose r = π4 N , then √ P r(i < π/4 N ) ≥ c. Thus, a single run of the algorithm, where the number
of iterations of Q
Figure 9.6 The average success probability P√r(i < r) over runs with a random number of iterations chosen between 0 and r plotted as a function of r where sin θ = t as usual. For reference, the
dotted curve gives the success probability for a run with exactly r iterations.
9 Grover’s Algorithm and Generalizations
√ is chosen randomly between 0 and π/4 N , finds the solution with probability at least c. The √ expected number of calls to the oracle during such a run is therefore O( N ). For any probability c >
c, there is a constant K such that if Grover’s algorithm is run K times, with the number of iterations for each run chosen as above, then a solution will be found with probability c . Thus, for any c
, the √total number of times Q is applied, and therefore the total number of calls to the oracle, is O( N ). 9.5.2 Quantum Counting
Instead of repeating Grover’s algorithm with randomly varying numbers of iterations of Q, quantum counting takes a more quantum approach: create a superposition of results for different numbers of
applications of Q and then use the quantum Fourier transform on that superposition to obtain a good estimate for the relative number of solutions t. The same strategy can be used for the amplitude
amplification algorithm to estimate the success probability t of U |0. This approach √ also has query complexity O( N ). The algorithm itself is easy to describe, though determining the size of the
superposition needed is more involved. Let U and Q be as defined in the amplitude amplification algorithm of section 9.2. Define a transformation RepeatQ, with input |k and |ψ, that performs k
iterations of Q on |ψ: RepeatQ : |k ⊗ |ψ → |k ⊗ Qk |ψ. This transformation is more powerful than the classical ability to repeat Q because RepeatQ can be applied to a superposition. We apply RepeatQ
to a superposition of all k < M = 2m tensored with the state U |0 to obtain M−1 M−1 1 1 |k ⊗ U |0 → √ |k ⊗ (gk |ψG + bk |ψB ), √ M k=0 M k=0
where we ignore for the moment how M was chosen. A measurement of the right register in the standard basis produces a state |x that is either a good state (orthogonal to |ψB ) or a bad state
(orthogonal to |ψG ). Thus, the state of the left M−1 register collapses to either |ψ = C M−1 k=0 bk |k or |ψ = C k=0 gk |k. Let us suppose the former state |ψ is obtained; the reasoning for the
latter case is analogous. From section 9.2, bk = cos((2k + 1)θ ), so |ψ = C
cos((2k + 1)θ )|k.
Apply the quantum Fourier transform to this state to obtain F :C
M−1 k=0
bk |k →
M−1 j =0
Bj |j .
9.6 Practical Implications of Grover’s Algorithm and Amplitude Amplification
Section 7.8.1 explained that, for a cosine function of period πθ , most of the amplitude is in those . If we measure the state now, from the measured value |j Bj that are close the single value Mθ π
. Thus, with high we obtain, with high probability, a good approximation of θ by taking θ = πj M √ probability, the value t = sin θ is a good approximation for the ratio of solutions in the Grover’s
algorithm case, or the success probability of U |0 in the case of amplitude amplification. There is, of course, one issue remaining: we do not know a priori a proper value for M. This problem can be
addressed by repeating the algorithm for increasing M until a meaningful value for j is read. Since θ = Mj π, for a given θ we will likely read an integer value j ∼ θπM and j will be measured as 0
with high probability when M is chosen too small for the given problem. 9.6 Practical Implications of Grover’s Algorithm and Amplitude Amplification
The introduction to this chapter mentioned that there is debate as to the extent of the practical impact of Grover’s algorithm and its generalization, amplitude amplification. Although the quadratic
reduction in the query complexity provided by Grover’s algorithm and amplitude amplification over classical algorithms may seem minor compared to the superpolynomial speed up of Shor’s algorithm, a
quadratic speedup can be of practical importance. For example, even though the fast Fourier transform is only a quadratic speedup over the straightforward way of implementing the Fourier transform,
it is viewed as a significant improvement. That the speedup provided by Grover’s algorithm is no greater is the least of our concerns in terms of the practical impact of these algorithms. A major
concern is the efficiency with which √ UP can be computed for a given practical problem. Unless UP is efficiently computable, the O( N ) speedup of the search is swamped by the amount of time it
takes to compute UP . If UP takes O(N ) time to compute, which is true for a√generic P , then a run of Grover’s algorithm takes O(N ) time, even though it uses UP only O( N ) times. Furthermore,
there is no savings for multiple searches over the same space; the measurement at the end of the algorithm destroys the superposition, so UP must be computed afresh for each search. Another concern
is that most searches done in practice are over spaces with a lot of structure, which in many cases enables fast classical algorithms that amplitude amplification cannot improve upon. For example,
classical algorithms can find an element of an alphabetical list of N elements in O( log2 N ) time. Furthermore, that algorithm is not of a form amenable to speedup by amplitude amplification. There
are relatively few practical search problems for which the search space has no structure, so Grover’s algorithm on its own has few practical applications. Its generalization, amplitude amplification,
is more widely applicable in that it can be used to speed up certain, but by no means most, classes of heuristics. Grover’s algorithm applies to exhaustive search of a search space. Grover’s
algorithm is commonly called a database search algorithm, but that appellation is misleading. Grover’s search algorithm gives a speedup only over classical algorithms for unstructured search.
9 Grover’s Algorithm and Generalizations
which are generally highly structured, can be searched rapidly classically. Most databases, be they of employee records or experimental results, are structured and yet at the same time hard to
compute from first principles (the relevant UP is expensive to compute); for example, an alphabetical list of names is structured, yet computing it most likely cannot be done any faster than by
separately adding each entry, an O(N ) time operation. For this reason, it is an unfortunate historical accident that Grover’s algorithm was ever called a database search algorithm. Contrary to
popular claims, Grover’s algorithm will not aid standard database or Internet searches, since it takes longer to place the elements in a quantum superposition, which gets destroyed in each run of the
search, than it takes to perform the classical √ search in the first place: re-creating the superposition is often linear in N which negates the O( N ), benefit of the search algorithm. In fact,
Childs et al. showed that for ordered data, quantum computation can give no more than a constant factor improvement over optimal classical algorithms. When the candidate solutions to a problem can be
enumerated easily and there is an efficient test for whether a given value x represents a solution or not, UP can be computed efficiently, thus avoiding that concern. The amplitude amplification
technique used in Grover’s algorithm has been extended to provide small speedups for a number of problems, including approximating the mean of a sequence and other statistics finding collisions in
r-to-1 functions, string matching, and path integration. NP-complete problems also fall into the class of problems for which the relevant UP for such problems are efficiently computable.
Unfortunately, amplitude amplification gives only a quadratic speedup, so problems that require an exponential number of queries classically remain exponential for Grover’s algorithm. In particular,
Grover’s search does not provide a means to solve NP-complete problems efficiently. Moreover, NP-complete problems have structure that is exploited in classical heuristic algorithms, and only some of
these can be improved upon using amplitude amplification. 9.7 References
Grover’s search algorithm was first presented in [143]. Grover extended his algorithm to achieve quadratic speedup for other non-search problems, such as computing the mean and median of a function
[144]. Using similar techniques, Grover has also shown that certain search problems that classically run in O(log N ) can be solved in O(1) on a quantum computer [143]. Amplitude amplification can be
used as a subroutine in other quantum computations in light of a result of Biron et al. [52] that shows how amplitude amplification works with essentially any initial √ amplitude distribution while
still maintaining O( N ) complexity. Jozsa [166] provides a complementary description of the geometric interpretation of Grover’s algorithm and amplitude amplification. Bennett, Bernstein, Brassard,
and Vazirani [41] give the earliest proof of optimality of Grover’s algorithm. Boyer et al. [58] provide a detailed analysis of the performance of Grover’s algorithm and give a solution to the
recurrence relation of section 9.1.4. A tighter version of the optimality of Grover’s algorithm is given by Zalka [290].
9.8 Exercises
Boyer et al. [58] present the strategy for choosing a random number of iterations when t is unknown. They also present a more efficient algorithm for large t based on the same principle. The idea of
quantum counting is due to Brassard et al. [62]. Their paper contains a detailed analysis that includes a strategy for iterating to find M. The two modifications of Grover’s algorithm that show that
it is not inherently probabilistic are in this same paper. Childs et al. [81] showed that quantum computation can give no more than a constant factor improvement over optimal classical algorithms for
searches of ordered data. Both Viamontes et al. [277] and Zalka [291] discuss issues related to practical use of Grover’s search algorithm and its generalizations. Extensions to Grover include
approximating the mean of a sequence and other statistics [144, 216], finding collisions in r-to-1 functions [61], string matching [234], and path integration [271]. Grover’s algorithm has been
generalized to support nonbinary labelings [58], arbitrary initial conditions [52], and nested searches [77]. 9.8 Exercises Exercise 9.1. Verify that
gi = sin((2i + 1)θ ) bi = cos((2i + 1)θ ), √ with sin θ = t = |G|/N, is a solution to the recurrence relations of section 9.1.4. Exercise 9.2. Show that applying Grover’s algorithm in the case t = |G
|/N = 1/2 results in no
improvement. Exercise 9.3. What happens if we try to apply Grover’s algorithm to the case t = |G|/N = 3/4? Exercise 9.4. a. How many iterations should Grover’s algorithm use in order to find one item
among sixteen? b. If we apply one fewer than the optimal number of iterations and then measure, how does the
success probability compare to the optimal case? c. If we apply one more than the optimal number of iterations and then measure, how does the
success probability compare to the optimal case? Exercise 9.5. Suppose P : {0, . . . , N − 1} → {0, 1} is zero except at x = t, and suppose we are
given not only a quantum oracle UP , but also the information that the solution t differs √ from a known string s in exactly k bits. Exhibit an algorithm that finds the solution with O( 2k ) calls to
UP . Exercise 9.6. Suppose P : {0, . . . , N − 1} → {0, 1} is zero except at x = t, and suppose we are
given not only a quantum oracle UP , but also the information that all suffixes except 010 and 100
9 Grover’s Algorithm and Generalizations
have been ruled out. In other words, the solution t must end with either 010 and 100. Exhibit an algorithm that finds the solution with fewer calls to UP than Grover’s algorithm. Exercise 9.7.
Suppose P : {0, . . . , N − 1} → {0, 1} is zero except at x = t, and suppose we are
given not only a quantum oracle UP , but also the information that the solution t differs from a known string s in at most√ k bits. Exhibit an algorithm that is more efficient, in terms of the number
of calls needed, than O( 2n )? Exercise 9.8. Suppose P : {0, . . . , N − 1} → {0, 1} is zero except at x = t, and suppose we are
given a quantum oracle UP and told that with probability 0.9 the first n/2 bits of the solution t are zero. How can we take advantage of this information √ to obtain an algorithm that is more
efficient, in terms of the number of calls needed, than O( 2n )? Exercise 9.9. Given a quantum black box for a function f : {0, . . . , N − 1} → {0, . . . , N − 1},
√ design a quantum algorithm that finds the minimum with O( N log N ) queries, where N = 2n .
Exercise 9.10. Suppose there is an error in the initial state, so that instead of starting with |00 . . . 0
we run Grover’s algorithm, starting with the state √
1 1 + 2
(|00 . . . 0 + |11 . . . 1).
How does this error affect the results of Grover’s algorithm? Exercise 9.11. Why does applying amplitude amplification to the output of a first application of
amplitude amplification not result in an additional square root reduction in the query complexity? Exercise 9.12. Prove the optimality of Grover’s algorithm in the multiple solution case. Exercise
9.13. For the quantum counting procedure of section 9.5.2, show how the estimate of t
is obtained in the case that a bad state is measured.
Quantum Subsystems and Properties of Entangled States
The power of quantum computation is most often attributed to entanglement. Indeed, Jozsa and Linden [167] have shown that any quantum algorithm that achieves exponential speedup over classical
algorithms must make use of entanglement between a number of qubits that increases with the size of the input to the algorithm. Nevertheless, as section 13.9 explains, the exact role of entanglement
in quantum computation, and in quantum information processing more generally, remains unclear. While entanglement is generally viewed as an important resource for quantum computation, entanglement,
particularly multipartite entanglement, is still only poorly understood. This chapter surveys some of what is known about entanglement, particularly multipartite entanglement, entanglement between
three or more subsystems. It also illustrates some of the complexities that make developing a deeper understanding of entanglement challenging. For example, there are many distinctly different types
of multipartite entanglement; for entanglement between four (or more) subsystems, the different types of entanglement are uncountably infinite! Much work remains to be done to understand which types
of entanglement are useful, and for what. As chapters 3 and 4 emphasize, the notion of entanglement is well defined only with respect to a particular tensor decomposition of the system into
subsystems. A deeper understanding of entanglement comes from studying these quantum subsystems. Of particular interest will be entanglement between the n subsystems consisting of the single qubits
making up a n qubit system and entanglement between registers of a quantum computer during a computation. Density operators, introduced in section 10.1, are used to model quantum systems, and more
particularly how a state appears when only one subsystem is accessible. Density operators are also useful for describing measurements yet to be performed, or for which the outcome is not yet known.
The mathematics of density operators enables a more detailed examination of entanglement, including issues related to quantifying how much entanglement a state contains and distinguishing different
types of entanglement. Section 10.2 uses the density operator formalization to quantify bipartite entanglement and to examine properties of multipartite entanglement. How density operators model
measurements is the concern of section 10.3, and section 10.4 discusses transformations of quantum systems. Properties of quantum subsystems give insight not only into entanglement, but also into
robustness issues in quantum computation. From a practical standpoint, any quantum system such as
10 Quantum Subsystems and Properties of Entangled States
a quantum computer is always really a quantum subsystem: any experimental setup is never completely isolated from the rest of the universe, so any experiment is rightly viewed as only one part of a
larger quantum system. The final section of this chapter, section 10.4.4, shows how decoherence, errors caused by interaction with the environment, can be modeled. This model forms the foundation for
the discussion of quantum error correction and fault-tolerant quantum computation in chapters 11 and 12. 10.1 Quantum Subsystems and Mixed States
It commonly arises that we have access to, or interest in, only one part of a larger system. This section develops concepts and notation to support the study of quantum subsystems and entanglement
between subsystems. Some of the issues in modeling quantum subspaces are illustrated by an EPR pair distributed between two parties. Imagine that Alice has the first qubit of an EPR pair √12 (|00 + |
11), and Bob has the second. How would Alice describe her qubit? It is not in a single-qubit quantum state, a quantum state of the form a|0 + b|1. Were Alice to measure her qubit in the standard
basis, she would have a 50 percent chance of seeing |0 or |1. So it might appear that her qubit’s state must be an even superposition of |0 and |1. But if she measured it in the basis {|+, |−}, she
would have a 50 percent chance of seeing |+ or |−. In fact, in any basis whatsoever, it appears to be an even superposition of the two basis states. But no single-qubit state has this property. For
example, the state √12 (|0 + |1) is an even superposition in the standard basis but is an uneven superposition in most bases and is deterministic in the basis {|+, |−}. So what can Alice say about
her qubit? To answer that question, it is worth looking carefully at what is meant by a state of a system. A state captures all information that could conceivably be learned about the system. Since
information can only be gained by measurement, and measurement changes the quantum state, imagine an infinite supply of identically prepared quantum systems. The quantum state encapsulates all
information that could be gained from any number of measurements on this infinite supply of identical quantum systems. Another way of saying that most states of a multiple-qubit quantum system cannot
be described in terms of the states of each of its single-qubit subsystems separately is that a single qubit of a multiple-qubit system is generally not in a well-defined quantum state. Alice’s qubit
of the entangled pair is such a case. An n-qubit quantum state captures all of the information that could conceivably be learned from measurements on an infinite supply of identically prepared
quantum systems. For an infinite supply of m-qubit subsystems of identically prepared n-qubit systems, it is interesting to ask what can be learned from measurements of the m-qubit subsystem alone.
The structure that encapsulates that information is called the mixed state of the m-qubit subsystem, and it will be modeled by the mathematics of density operators. So far we have considered only
systems that are universes unto themselves; the states of such systems, all the states we have studied so far, are called pure states.
10.1 Quantum Subsystems and Mixed States
The meaning of state in mixed state should be interpreted with care. That a subsystem always has a well-defined mixed state should not be interpreted to mean that when a state of a system is
entangled with respect to a decomposition into subsystems, the states of the subsystems are well defined after all; mixed states are not quantum states in the conventional sense, the sense we have
used up to now. Knowing the mixed states of all the subsystems making up a system does not enable us to know the state of the entire system; many different states of a system give the same set of
mixed states on the subsystems. Knowing the mixed states of all the subsystems gives full knowledge of the state of the whole system precisely when the state of the entire system is unentangled with
respect to that subspace decomposition. In exactly this case, the mixed states of the subsystems can be viewed as pure states. The relationship of a mixed state for a subsystem to the pure state of
the whole system is analogous to the relationship of a marginal distribution to a joint distribution. This analogy can be made precise; see appendix A. The following section develops the mathematics
of density operators for modeling mixed states. It concludes with a description of Alice’s qubit. 10.1.1 Density Operators
For an m-qubit subsystem A of a larger n-qubit system X = A ⊗ B, the mixed state for subsystem A must capture all possible results of measurement by operators of the form O ⊗ I , where O is a
measurement operator on just the m qubits of A and I is the identity on the n − m qubits of B. Let |x be a state of the entire n-qubit system. The next few paragraphs culminate in the description of
an operator on the 2m -dimensional complex vector space A, called the density operator ρxA : A → A, that captures all of the information that can be gained about |x from measurements on the m-qubit
subsystem A alone. For this reason, density operators are used to model mixed states. Let M = 2m and L = 2n−m . Given bases {|α0 , . . . , |αM−1 } and {|β0 , . . . , |βL−1 } for A and B,
respectively, {|αi ⊗ |βj } is a basis for X = A ⊗ B. A state |x of X can be written |x =
L−1 M−1
xij |αi |βj .
i=0 j =0
Measurements on system A alone are modeled by observables O A with associated projectors {PiA }, 0 ≤ i < 2m . On the whole space X, such measurements have the form O A ⊗ I B with projectors PiA ⊗ I B
. For any particular projector P A , x|P A ⊗ I |x gives the probability that measurement of |x by O A ⊗ I B results in a state in the subspace associated to P A . Writing this probability in terms of
the bases {|α0 , . . . , |αM−1 } and {|β0 , . . . , |βL−1 } yields ⎛ ⎞ A A xij αi | ⊗ βj |⎠ (P ⊗ I ) xkl |αk ⊗ |βl
x|P ⊗ I |x = ⎝ ij
ij kl
xij xkl αi |P A |αk βj |βl ,
10 Quantum Subsystems and Properties of Entangled States
where indices i and k are summed over [0 . . . M − 1], and j and l are summed over [0 . . . L − 1]. Since βj |βl = δlj , each term is zero except those for which j = l, so the probability that the
measurement outcome is the one associated with P A can be written more concisely as
x|P A ⊗ I |x = xij xkj αi |P A |αk . (10.1) ij k
This formula, together with facts about the partial trace found in box 10.3, yields the density operator that encapsulates all information that can be gained by measurements of the form O A ⊗ I B .
Since {|αu } is a basis for A, M−1
|αu αu | = I
is the identity operator on A. We write
x|P A ⊗ I |x = xij xkj αi |P A |αk ij k
xij xkj αi |P
|αu αu | |αk
xij xkj αu |αk αi |P A |αu
αu | ⎝
⎞ xij xkj |αk αi |P A ⎠ |αu
tr(ρxA P A ),
where we define xij xkj |αk αi | ρxA = ik
and call ρxA the density operator for |x on subsystem A. By box 10.3, ρxA = trB (|x x|).
Since O A is a general observable on A, and P A a general projector associated with O A , this calculation shows that all information from measurements on subsystem A alone can be gained from the
density operator ρxA . Thus the density operator ρxA models the mixed state corresponding to the part of |x in A. This definition of a density operator of |x is physically reasonable only if it does
not depend on the choice of basis {|αi }, since physically no basis is preferred. The next two paragraphs show that density operators are well defined in the sense that calculating ρxA in different
bases gives
10.1 Quantum Subsystems and Mixed States
Box 10.1 The Trace of an Operator
To define the trace of an operator O acting on a vector space V , we first define the trace of a matrix for O and then show that the trace is basis independent and therefore a property of an
operator, not of the specific matrix representation used. The trace of a matrix M for O : V → V is the sum of its diagonal elements:
vi |M|vi tr(M) = i
where {|vi } is the basis for V with respect to which the matrix M is written. The following identities are easily verified: tr(M1 + M2 ) = tr(M1 ) + tr(M2 ), tr(αM) = αtr(M), tr(M1 M2 ) = tr(M2 M1
). The last equality implies tr(C −1 MC) = tr(M) for any invertible matrix C, which means that the trace is invariant under basis change. Thus the notion of a trace of an operator is independent of
basis, so we can simply talk about tr(O) without specifying a basis. Useful fact: For any |ψ1 and |ψ2 in a space V , and any operator O acting on V ,
ψ1 |O|ψ2 = tr(|ψ2 ψ1 |O).
The proof of this fact illustrates a common way of reasoning about traces. For any basis {|αi } for V , tr(|ψ2 ψ1 |O) =
αi |ψ2 ψ1 |O|αi i
ψ1 |O|αi αi |ψ2 i
= ψ1 |O ⎝
⎞ |αi αi |⎠ |ψ2 .
i |αi αi | is the identity matrix, the result follows.
the same operator. We first prove the result for density operators of pure states and then use that result to prove the general case. Suppose the subsystem under consideration is the whole system, A
= X. The system is in a pure state |x, written as |x = i xi |ψi in the basis {|ψi } for X. The density operator (equation 10.2) becomes xi xk |ψk ψi | = |x x|. ρxX = ρxA = ik
10 Quantum Subsystems and Properties of Entangled States
Box 10.2 Restricting Operators to Subsystems
Corresponding to any operator OAB on A ⊗ B, there is a family of operators on subsystem A that is parametrized by pairs of elements from B. Any pair of states |b1 and |b2 in B defines an operator on
A denoted by b1 |OAB |b2 . We first define the operator b1 |OAB |b2 in terms of a basis {|αi } for A, and then show that it is independent of basis, so any basis for A defines the same operator.
Operator b1 |OAB |b2 acts as follows:
b1 |OAB |b2 : A → A |x →
αi | b1 |OAB |x|b2 |αi .
This notation takes some getting used to. It may help the reader to begin by writing the operator
b1 |OAB |b2 as _, b1 |OAB |_, b2 . To prove basis independence, let {|aj } be another basis for A with |aj = i aij |αi . Then
b1 |OAB |b2 |a =
aj | b1 |OAB |b2 |a |aj j
⎛ ⎞ ⎛ ⎞ ⎝ = aij αi |⎠ b1 |OAB |a|b2 ⎝ akj |αk ⎠ j
aij akj αi | b1 |OAB |a|b2 |αk
αi | b1 |OAB |a|b2 |αi i
where the last line follows because {|αi } is a basis so j aij akj = δik . These restricted operators are useful for defining the partial trace (box 10.3), the canonical restriction of OAB to
subsystem A, and the operator sum decomposition discussed in section 10.4.
Thus, the density operator of a pure state ρxX = |x x| is independent of the basis for X. As with any operator, a matrix representation for the operator does depend on the basis. In basis {|ψi }, the
ij th entry of the matrix for ρxX is xj xi . The diagonal elements xi xi of the matrix have special meaning for measurements in the basis {|ψi }: the probability that |x will be measured as being in
the basis state |ψi with projector Pi = |ψi ψi | is
x|Pi |x = x|ψi ψi |x = xi xi . In the general case (A = X), let X = A ⊗ B, and let {|αi } and {|βj } be bases for A and B respectively. The matrix for the density operator ρxX of the state |x = xij |
αi |βj of X in the basis {|αi βj } has entries xij xkl :
10.1 Quantum Subsystems and Mixed States
ρxX =
M−1 L−1
xij xkl |αk |βl αi | βj |.
i,k=0 j,l
To obtain the density matrix ρxA , we use equation 10.3, which says that ρxA is simply the partial trace over B of ρxX (see box 10.3): Box 10.3 The Partial Trace
For any operator OAB on A ⊗ B, the partial trace of OAB with respect to subsystem B is an operator trB OAB on subsystem A defined by
βi |OAB |βi , trB OAB = i
where {|βi } is a basis for B. The operators βi |OAB |βi were defined in box 10.2. The partial trace trB OAB is basis independent by an argument similar to the one given for β1 |OAB |β2 in box 10.2.
In terms of bases {|αi } and {|βj } for A and B respectively, the matrix for trB OAB has entries (trB OAB )ij =
αi | βk |OAB |αj |βk ,
so the matrix for trB OAB is ⎞ ⎛ N−1 M−1 ⎝
αi | βk |OAB |αj |βk ⎠ |αi αj | trB OAB = i,j =0
where N and M are the dimensions of A and B respectively. In the special case in which OAB = |x x|, let xij x¯kl be the entries of OAB in the basis |αi |βj , so OAB =
xij |αi |βj
xkl αk | βl |
xij xkl |αi |βj αk | βl |.
ij kl
Then trB (OAB ) = trB (|x x|) =
N−1 M−1
xij xkj |αi αk |.
i,k=0 j =0
In the special case in which an operator is the tensor product of operators on the separate subsystems, the partial trace has the simple form trB (OA ⊗ OB ) = OA tr(OB ).
10 Quantum Subsystems and Properties of Entangled States
ρxA = trB (ρxX ) ⎞ ⎛ L−1 M−1 xij xkl |αk |βl αi | βj |⎠ = trB ⎝ i,k=0 j,l
⎛ ⎞ ⎞ ⎛ L−1 L−1 M−1 ⎝ αu | βw | ⎝ xij xkl |αk |βl αi | βj |⎠ |αv |βw ⎠ |αu αv | w
L−1 M−1
i,k=0 j,l
xvw xuw |αu αv |.
u,v=0 w
Since the partial trace is basis independent, so is the density operator. Example 10.1.1 Let us return to Alice, who controls the first qubit of the EPR pair |ψ = √1 (|00 + |11) 2
A ⊗ B is
while Bob controls the second. The density matrix for the pure state |ψ ∈
ρψ = |ψ ψ| 1 (|00 00| + |00 11| + |11 00| + |11 11|) 2 ⎛ ⎞ 1 0 0 1 ⎟ 1⎜ ⎜ 0 0 0 0 ⎟ = ⎜ ⎟. 2⎝ 0 0 0 0 ⎠ =
The mixed state of Alice’s qubit, which encapsulates all information that could be obtained from any sequence of measurements on Alice’s qubit alone on a sequence of identical states |ψ, is modeled
by the density matrix ρψA obtained from ρψ by tracing over Bob’s qubit, ρψA = trB ρψ . The four entries a00 , a01 , a10 , and a11 for a matrix representing ρψA in the standard basis can be computed
separately: a00
1 1 1 =
0| j | |ψ ψ| |0|j = +0 = , 2 2 j =0
a01 =
0| j | |ψ ψ| |1|j = (0 + 0) = 0, j =0
10.1 Quantum Subsystems and Mixed States
a10 =
1| j | |ψ ψ| |0|j = (0 + 0) = 0,
j =0
a11 =
1| j | |ψ ψ| |1|j = 0 + = . 2 2 j =0
So ρψA =
By symmetry, the density operator for Bob’s qubit is 1 1 0 ρψB = . 2 0 1 In general, it is not possible to recover the state of the entire system from the set of density operators for all of the
subsystems; information has been lost. For example, for a two-qubit system, 1 0 1 0 1 1 and ρψB = 2 , if the density matrices for each of the two qubits are ρψA = 2 0 1 0 1 the state of the two-qubit
system as a whole could be √12 (|00 + |11), as in example 10.1.1, or it could be
√1 (|00 − |11) 2
√1 (|01 + |10) 2
among other possibilities.
10.1.2 Properties of Density Operators
Any density operator ρxA satisfies 1. ρxA is Hermitian (self-adjoint), 2. tr(ρxA ) = 1, and 3. ρxA is positive. Property (1) follows immediately from the definition (equation 10.2). Since |x is a
unit vector, tr(ρxA ) = xij xij = 1. An operator O : V → V being positive means that v|O|v is real with
v|O|v ≥ 0 for all |v in V . To show that ρxA : A → A is positive, let |v ∈ A. Then
v|ρxA |v =
v|( xij xkj |αk αi |)|v ik
xij αi |vxkj v|αk
xij v|αi
xkj v|αk
10 Quantum Subsystems and Properties of Entangled States
2 = xij v|αi j
≥ 0. Positivity implies that all of the eigenvalues of ρxA are real and non-negative: if λ is an eigenvalue of ρxA with eigenvector |vλ , then λ = vλ |ρxA |vλ is real and non-negative. It follows
from these properties that in any (orthonormal) eigenbasis {|v0 , . . . , |vM−1 } for ρxA , the matrix for ρxA is a diagonal matrix with non-negative real entries λi that sum to 1. Thus ρxA = i λi |
vi vi |. In this way the mixed state with density operator ρxA may be viewed as a mixture of pure states |vi vi | or, more precisely, as a probability distribution over these states. It turns out
that any operator satisfying (1), (2), and (3) is a density operator; in some expositions, that is how density operators are first defined. To establish this equivalence, we need to show that, for
any operator ρ : A → A satisfying these conditions, there is a pure state |ψ of a larger system A ⊗ B such that trB (|ψ ψ|) = ρ. The state |ψ is called a purification of ρ. Let ρ be any operator
acting on a subsystem A of dimension M = 2m that satisfies (1), (2), and (3). These properties mean that in its eigenbasis {|ψ0 , |ψ1 , . . . , |ψM−1 }, ρ is diagonal with non-negative real
eigenvalues λi that sum to 1. Thus, for any ρ, ρ = λ0 |ψ0 ψ0 | + · · · + λM−1 |ψM−1 ψM−1 |, for some {|ψ0 , |ψ1 , . . . , |ψM−1 }. Let B be a quantum system with associated vector space of dimension
2n > M, and let {|0, . . . , |M − 1} be the first M elements of a (orthonormal) basis for B. Then the pure state |x ∈ A ⊗ B |x = λ0 |ψ0 |0 + λ1 |ψ1 |1 + · · · + λM−1 |ψM−1 |M − 1 satisfies ρxA = ρ.
For a pure state |x, the density operator ρxX = |x x| has a particularly simple form in terms of a basis that contains |x as its ith element: it is a matrix with all entries 0 except for a single 1
on the diagonal in the ith spot. It follows that the density operator of a pure state is a projector: ρxX ρxX = ρxX . Conversely, any density operator that is a projector corresponds to a pure state:
projection operators have only 0 and 1 as eigenvalues, and to obtain trace 1 the density operator must have only a single 1-eigenvector, which is the corresponding pure state. Another nice property
of density operators of pure states is that the non-uniqueness of the representation of states due to the global phase disappears. Let |x = eiθ |y. The density operator corresponding to |x is ρx = |x
x|, which is also equal to |y y|, since ρx = |x x| = eiθ |y y|e−iθ = |y y|. Thus, any two vectors that differ by a global phase have the same density operator. It is important not to confuse mixed
states with superpositions. The mixed state that is an even probabilistic combination of |0 and |1 is not the same as the pure state superposition |+ = √12 (|0 + |1). Their density operators are
different: in the standard basis, the density matrix for the former is
10.1 Quantum Subsystems and Mixed States
ρME =
whereas the density matrix for the latter is 1 1 1 ρ+ = . 2 1 1 The latter gives deterministic results when measured in an appropriate basis, whereas the former gives probabilistic results in all
bases. Mixed states are not viewed as true quantum states, but rather as a way of describing a subsystem whose state is not well defined, being only a mixed state, or a probabilistic mixture of well
defined pure states. Therefore, state or quantum state will mean a pure state unless it is prefaced with the word mixed. Furthermore, when it is clear which subsystem is being talked about, we drop
the superscript and just say ρx . 10.1.3 The Geometry of Single-Qubit Mixed States
The Bloch sphere (section 2.5.2) can be extended in an elegant way to include single-qubit mixed states. Mixed states are convex combinations of pure states, linear combinations of pure states with
non-negative coefficients that sum to 1, so it is not surprising that single-qubit mixed states can be viewed as lying in the interior of the Bloch sphere. The precise connection with the geometry
uses the fact that density operators are Hermitian (self-adjoint) operators with trace 1. Any self-adjoint 2 × 2-matrix is of the form a c − id , c + id b where a, b, c, and d are real parameters.
Requiring that the matrix have trace 1 means there are only 3 real parameters. Such matrices can be written as 1 1 + z x − iy , 2 x + iy 1 − z where x, y, and z are real parameters. Thus, any density
matrix for a single-qubit system can be written as 1 (I + xσx + yσy + zσz ), 2 0 1 0 −i 1 0 , σy = −iY = , and σz = Z = are the Pauli spin 1 0 i 0 0 −1 matrices. (The Pauli spin matrices are related
to the Pauli group elements X, Y , and Z of section 5.2.1 by σx = X, σy = −iY , and σz = Z.)
σx = X =
10 Quantum Subsystems and Properties of Entangled States
The determinant of a single-qubit density operator ρ = 12 (I + xσx + yσy + zσz ) has geometric meaning; it is easily computed to be 1 (1 − r 2 ) 4
2 2 2 where r = |x | + |y | + |z| is the radial distance from the origin in x, y, z coordinates. Since the determinant of ρ is the product of its eigenvalues, which for a density operator must be
non-negative, det(ρ) ≥ 0. So 0 ≤ r ≤ 1. Thus, with x, y, and z acting as coordinates, the density matrices of single-qubit mixed states ρ = 12 (I + xσx + yσy + zσz ) all lie within a sphere of radius
1. The density matrices for states on the boundary of the sphere have det(ρ) = 0; one of their eigenvalues must be 0. Since density operators have trace 1, the other eigenvalue must be 1. Thus,
density operators on the boundary of the sphere are projectors, which means they are pure states. We have recovered the boundary of the Bloch sphere discussed in section 2.5 as the boundary of a ball
for which the Pauli spin matrices provide the coordinates. This entire ball is called the Bloch sphere (though it ought to be called the Bloch ball). The following table gives the density matrices,
in the standard basis, and the Bloch sphere coordinates for some familiar states and mixed states. det(ρ) =
(x, y, z)coordinate
state vector
density matrix
(1, 0, 0)
1 (I 2
(0, 1, 0)
(0, 0, 1)
+ σx ) =
(0, 0, 0)
−i 1 2 0 1 1 (I + σz ) = 2 2 0 0 1 0 1 1 ρ0 = 2 I = 2 0 1 1 (I 2
+ σy ) =
1 i
The set of all density operators for mixed states of an n-qubit system with n ≥ 2 also forms a convex set, but its geometry is significantly more complicated than the simple Bloch sphere picture. As
one example, in the single-qubit case the boundary of the Bloch sphere contains exactly the pure states, where as for n ≥ 2, the boundary of the set of all mixed states contains both pure and mixed
states. The reader may easily check that this statement must be true by computing the dimension of the space of n-qubit mixed states to be 22n − 1 and comparing it to the dimension of the space of
pure states, which is only 2n+1 − 2. 10.1.4 Von Neumann Entropy
The density matrix of one qubit of an EPR pair, 1 1 0 ρME = , 2 0 1
10.1 Quantum Subsystems and Mixed States
corresponds to the point (0, 0, 0) in the center of the sphere, farthest from the boundary. In a technical sense, this state is the least pure single-qubit mixed state possible: it is the maximally
uncertain state in that no matter in what basis it is measured, it gives the two possible answers with equal probability. In contrast, for any pure state, there is a basis in which measurement gives
a deterministic result. For no state, mixed or pure, do measurements in two different bases give deterministic results, so pure states are as certain as possible. This notion of uncertainty can be
quantified for general n-qubit states by an extension of the classical information theoretic notion of entropy. The von Neumann entropy of a mixed state with density operator ρ is defined to be S(ρ)
= −tr(ρ log2 ρ) = − λi log2 λi , i
where λi are the eigenvalues of ρ (with repeats). As is done for classical entropy, take 0 log(0) = 0. The von Neumann entropy is zero for pure states; since the density operator ρx for a pure state
|x is a projector, it has a single 1-eigenvalue with n − 1 0-eigenvalues, so S(ρx ) = 0. Observe that the maximally uncertain single qubit mixed state ρME has von Neumann entropy S(ρ) = 1. More
generally, a maximally uncertain n-qubit state has a density operator that is diagonal with entries all 2−n ; a maximally uncertain n-qubit state ρ has von Neumann entropy S(ρ) = n. For a
single-qubit state with density operator ρ, the von Neumann entropy S(ρ) is related to the distance between the point in the Bloch sphere corresponding to ρ and the center of the Bloch sphere. Let λ1
and λ2 be the eigenvalues of ρ. Since density operators have trace 1, λ2 = 1 − λ1 . The von Neumann entropy of ρ can be deduced from its determinant: det(ρ) = λ1 λ2 , so det(ρ) = λ1 (1 − λ1 ), so λ21
− λ1 + det(ρ) = 0, which has solutions √ 1 + 1 − 4 det ρ λ1 = 2 and λ2 =
1 − 4 det ρ . 2
Using det(ρ) = 14 (1 − r 2 ) from section 10.1.3, we see that λ1 =
1+r 2
and 1−r . 2 So, for single-qubit mixed states, the entropy is simply a function of the radial distance r: 1+r 1−r 1+r 1−r S(ρ) = − log2 + log2 . (10.6) 2 2 2 2
λ2 =
10 Quantum Subsystems and Properties of Entangled States
10.2 Classifying Entangled States
The concepts and notation for quantum subsystems support a deeper study of entanglement. Ever since the beginning of the field, entanglement has been recognized as a fundamental resource for quantum
information processing and a key to what distinguishes it from classical processing. Nevertheless, it is still only poorly understood. Only in the simplest case, that of pure states of a bipartite
system A ⊗ B, is entanglement well understood. What is known about multipartite entanglement is that it is complicated; there are many distinct types of multipartite entanglement whose utility and
relation are only beginning to be understood. Even for bipartite mixed states there are distinct measures of entanglement. Each gives insight into the entanglement resources needed for various
quantum information processing tasks; no single measure of entanglement will do. Recall that entanglement is not an absolute property of a quantum state, but depends on the tensor decomposition of
the system into subsystems. A (pure) state |ψ of a quantum system with associated vector space V , is separable with respect to the tensor decomposition V = V1 ⊗ · · · ⊗ Vn if it can be written as |ψ
= |ψ1 ⊗ · · · ⊗ |ψn , where |ψi is contained in Vi . Otherwise, |ψ is said to be entangled with respect to this decomposition. For n-qubit systems, we will generally speak of entanglement with
respect to the decomposition into the n single-qubit systems. Thus, when we say that a state is entangled without further qualification, we mean that it is entangled with respect to this
decomposition into individual qubits. For bipartite pure states, it is possible to quantify the amount of entanglement a state contains. Any reasonable measure of entanglement should satisfy certain
properties. For example, any measure of entanglement should take its minimal value, usually zero, on unentangled states. Furthermore, performing any sequence of operations, including measurements, on
the subsystems individually should not increase the value of an entanglement measure. Even allowing the result of a measurement on one subsystem to influence which operations are performed on another
subsystem should not increase the value. Imagine different people in control of each subsystem, with only classical communication channels between them. The restricted set of operations they can
perform is often abbreviated LOCC, for local operations with classical communication. The LOCC requirement for any reasonable measure of entanglement means that nothing these people can do can
increase the value of the entanglement measure. 10.2.1 Bipartite Quantum Systems
To find a good measure of entanglement for pure states of a bipartite system X = A ⊗ B, let us look at the simplest of bipartite systems: two-qubit systems. The state |ψ = √12 (|00 + |11) is
maximally entangled in the sense that, when looked at separately, the state of each qubit is as uncertain as possible. Tracing over each qubit gives the mixed state ρME = 12 I . This state has
maximal von Neumann entropy among all two-qubit states. Similarly, unentangled states are the
10.2 Classifying Entangled States
least entangled states possible in the sense that, when looked at separately, the state of each qubit is as certain as possible. Tracing over each qubit gives a pure state, a state with zero von
Neumann entropy. These examples suggest that the von Neumann entropy of the partial trace with respect to one of the subsystems might make a good measure of entanglement in bipartite systems. In
order for this approach to make sense, the von Neumann entropy of the partial trace should be the same whether we look at subsystem A or subsystem B. The proof that the two quantities are the same
relies on the Schmidt decomposition. The Schmidt decomposition also leads directly to a coarse measure of entanglement. For any pure state |ψ of a bipartite system A ⊗ B, there exist orthonormal sets
of states {|ψiA } and {|ψiB } such that |ψ =
λi |ψiA ⊗ |ψiB
for some positive real λi such that i=1 λ2i = 1. Exercises 10.8 and 10.9 step through a proof that Schmidt decomposition exists for every state |ψ. The λi are called the Schmidt coefficients, and K,
the number of λi , is called the Schmidt rank or Schmidt number of |ψ. For unentangled states, the Schmidt rank is 1. We now use the Schmidt decomposition to check that trA ρ = trB ρ. Let |ψ be a
state in a bipartite system X = A ⊗ B, where A and B are general multiple-qubit systems. Let ρ = |ψ ψ|. Let |ψ =
λi |ψiA ⊗ |ψiB
be a Schmidt decomposition for |ψ. Then ρ = |ψ ψ| =
K−1 K−1
λi λj |ψiA ψjA | ⊗ |ψiB ψjB |,
i=0 j =0
so trB ρ =
λ2i |ψiA ψiA |
and trA ρ =
λ2i |ψiB ψiB |.
Since {|ψiA } is an orthonormal set, it follows that S(trA ρ) = −
K−1 i=0
λ2i log2 λ2i .
10 Quantum Subsystems and Properties of Entangled States
Similarly, S(trB ρ) = −
λ2i log2 λ2i .
Thus S(trA ρ) = S(trB ρ). The amount of entanglement between the two parts of a pure state |ψ of a bipartite system X = A ⊗ B with density operator ρ = |ψ ψ| is defined to be S(trA ρ), or
equivalently S(trB ρ). We compute this quantity for a variety of bipartite states. To begin with, it is zero on unentangled states. Example 10.2.1 For |x =
√1 (|00 + |11), 2
recall from example 10.1.1 that ρx1 = tr2 |x x| =
ρME = 12 I . Thus, by the formula for the von Neumann entropy for single-qubit mixed states, equation 10.6, the amount of entanglement is S(ρME ) = 1. If we work out the density matrices for the
first qubit of states √12 (|01 + |10) and √12 (|00 − i|11) we will find that these too are equal to ρME . Such states are among the maximally entangled two-qubit states.
Example 10.2.2 Let |x =
7 1 1 7 |00 + 10 |01 + 10 |10 + 10 |11 with density operator ρx = |x x|. 10 1 operator ρx = tr2 |x x|, trace over the second qubit. The four terms that
To obtain the density make up matrix ρx1 in the standard basis are: 2 1 1 7 2 1 |0 0| = |0 0|,
0| j ||x x||0|j |0 0| = + 10 10 2 j =0 1 7 1 1 7 7
0| j ||x x||1|j |0 1| = + |0 1| = |0 1|, 10 10 10 10 50 j =0 1 7 1 7 1 7 + |1 0| = |1 0|,
1| j ||x x||0|j |1 0| = 10 10 10 10 50 j =0
1| j ||x x||1|j |1 1| = + |1 1| = |1 1|. 10 10 2 j =0
10.2 Classifying Entangled States
So 7 7 1 1 |0 0| + |1 0| + |0 1| + |1 1| 2 50 50 2 1 1 14 50 14 = = (I + X) 100 14 50 2 50
ρx1 =
corresponding to the point (14/50, 0, 0) in the Bloch sphere. To compute S(ρx1 ), we note that 9 and 25 , so {|+, |−} is the eigenbasis of ρx1 with eigenvalues 16 25 16 9 9 16 log2 − log2 = 0.942 . .
. . 25 25 25 25 More directly, we could have used equation 10.6 of section 10.1.4 to compute the eigenvalues from the distance r = 14 of ρx1 from the center of the sphere. 50 S(ρx1 ) = −
Example 10.2.3 Let |y =
second qubit, we obtain ρ 1 (y) = tr2 (ρy ) =
√ i 99 |00 + |11 10 10
with density operator ρy = |y y|. Tracing over the
1 49 (I − Z) 2 50
corresponding to the point (0, 0, 49/50) in the Bloch sphere. Using the relation between r = and the eigenvalues given by equation 10.6 of section 10.1.3, we obtain S(ρy1 ) = −
1 99 99 1 log2 − log2 = 0.0807 . . . . 100 100 100 100
To underscore how strongly the notion of entanglement depends on the subsystem decomposition under consideration, we give an example of a state with widely different von Neumann entropies with
respect to two different system decompositions. Example 10.2.4 The amount of entanglement in the four-qubit state
|ψ =
1 1 (|00 + |11 + |22 + |33) = (|0000 + |0101 + |1010 + |1111) 2 2
differs greatly for two different bipartite system decompositions. First, consider the decomposition into the first and third qubits and the second and fourth qubits. Trace over the first subsystem.
The state ρψ24 = tr1,3 (|ψ ψ|) has von Neumann entropy 0 since, by example 3.2.3, |ψ can be written as the tensor product of pure states in each of these subsystems. Thus, with respect to this
decomposition, the state |ψ is unentangled. Now consider the decomposition into the first and second qubits and the third and fourth qubits. Tracing over the second system yields
ρψ12 = tr34 (|ψ ψ|) =
10 Quantum Subsystems and Properties of Entangled States
j | k||ψ ψ||i|k|j i|.
i,j =0 k=0
The coefficient of |j i| is 14 δij , so ρψ12 is the 4 × 4 diagonal matrix with diagonal entries all 1/4, so S(tr1,2 (|ψ ψ|)) = 2. The maximum value possible for the von Neumann entropy of states of a
two-qubit system is 2. Thus, with respect to this decomposition, the state |ψ is maximally entangled. While the von Neumann entropy of the partial trace with respect to one of the subsystems is the
most common measure of entanglement for bipartite pure states, the Schmidt rank K is also a useful measure of entanglement. Both are nonincreasing under local operations and classical communication
(LOCC). The Schmidt rank is a much coarser measure of entanglement than the von Neumann entropy of the partial trace. For two-qubit systems, the Schmidt rank merely distinguishes between unentangled
states, with Schmidt rank 1, and entangled states, with Schmidt rank 2. For bipartite systems A ⊗ B, where A and B are multiple-qubit systems, the Schmidt rank is more interesting than in the
single-qubit case, but it is still a coarser measure than the von Neumann entropy of the partial trace. 10.2.2 Classifying Bipartite Pure States up to LOCC Equivalence
A state |ψ ∈ X can be converted to a state |φ ∈ X by local operations and classical communication (LOCC) with respect to a tensor decomposition X = X1 ⊗ · · · ⊗ Xn if there exists a sequence of
unitary operators and measurements on separate Xi that when applied to |ψ are guaranteed to result in the state |φ. Which transformations are applied is allowed to depend on the outcomes of previous
measurements, but is otherwise deterministic. Two states |ψ and |φ are said to be LOCC equivalent with respect to the decomposition X = X1 ⊗ · · · ⊗ Xn if |ψ can be converted to |φ via LOCC and vice
versa. An unentangled state cannot be converted to an entangled one using only LOCC. Example 10.2.5 The Bell states
√1 (|00 + |11) 2
simply apply X to the second qubit.
√1 (|01 + |10) 2
are LOCC equivalent;
can be converted to |00 via LOCC, but not vice versa: measure the first qubit in the standard basis to obtain either |00 or |11, and if the result was |1 apply X to each of the qubits. Example 10.2.6
The Bell state
√1 (|00 + |11) 2
Nielsen provides an elegant classification of pure states of bipartite systems up to LOCC equivalence, in terms of majorization of the sets of eigenvalues of the density operators of the subsystems.
Let a = (a1 , . . . , am ) and b = (b1 , . . . , bm ) be two vectors in Rm . Let a ↓ be the
10.2 Classifying Entangled States
reordered version of a such that ai ≥ ai+1 for all i. We say that b majorizes a, written b ' a, ↓ ↓ ↓ if for each k, 1 ≤ k ≤ m, kj =1 aj ≤ kj =1 bj with the additional requirement that kj =1 aj = k ψ
ψ ↓ ψ j =1 bj when k = m. For a pure state |ψ of a bipartite system A ⊗ B, let λ = (λ1 , . . . , λm ) be the eigenvalues of trB |ψ ψ|. Nielsen has proved that the state |ψ can be transformed to |φ by
LOCC if and only if λψ is majorized by λφ . Thus, |ψ and |φ are LOCC equivalent if and only if λψ ' λφ and λφ ' λψ . In the case of a bipartite system consisting of two qubits, the majorization
condition reduces to a simple one. Let |ψ and |φ be two states of a two-qubit system with λψ = (λ, 1 − λ) and λφ = (μ, 1 − μ), where λ ≥ 1/2 and μ ≥ 1/2. Then λψ ' λφ if and only if λ ≥ μ. It follows
that λψ ' λφ if and only if S(tr2 |ψ ψ|) ≤ S(tr2 |φ φ|). Thus, |φ can be converted to |ψ via LOCC if and only if |φ is more entangled than |ψ. Similarly |φ and |ψ are LOCC equivalent if and only if
the von Neumann entropies of the density operators for the partial trace over one of the subsystems are equal. Observe that there are infinitely many LOCC equivalence classes, and that these classes
are parametrized by a continuous variable, 1/2 ≤ λ ≤ 1. For bipartite systems with subsystems larger than single-qubit systems, the classification is more complicated in that there are incomparable
states. For example, if A and B are both two-qubit systems, the states √ 3 2 2 1 |ψ = |0|0 + |1|1 + |2|2 + |3|3 4 4 4 4 and
√ √ 1 1 2 2 6 |0|0 + |1|1 + |2|2 + |3|3 |φ = 4 4 4 4 are incomparable because ψ
λ1 =
1 9 φ > = λ1 , 16 2
but 13 14 φ φ < = λ1 + λ2 . 16 16 Nevertheless, in any bipartite system, no matter how large, the vector for any unentangled state majorizes all others. Furthermore, in any bipartite system there are
maximally entangled states |ψ for which λψ is majorized by λφ for all states |φ. Let X be a bipartite system X = A ⊗ B where A and B have dimensions n and m respectively, with n ≥ m. Let |ψ be a
state of the form ψ
λ1 + λ2 =
1 A |ψ = √ |φ ⊗ |φiB m i=1 i m
where the {|φiA } and {|φiB } are orthonormal sets, and since m is the dimension of B, the set {|φiB } is a basis for B. The vector λψ is majorized by λφ for all states |φ ∈ A ⊗ B. Furthermore, as
one would expect, these maximally entangled states have maximal Schmidt rank, and the
10 Quantum Subsystems and Properties of Entangled States
von Neumann entropy after tracing over either subsystem is the maximum possible value. These states fulfill our current expectations for maximally entangled states in every way. We shall see,
however, that for multipartite states it is nevertheless highly unclear what maximally entangled should mean. 10.2.3 Quantifying Entanglement in Bipartite Mixed States
Before discussing entanglement in multipartite quantum systems, we take a brief look at the meaning of entanglement for mixed states. A mixed state ρ of a quantum system V1 ⊗ · · · ⊗ Vn is separable
with respect to this tensor decomposition if it can be written as a probabilistic mixture of unentangled states: ρ is separable if it can be written as ρ=
pj |φj(1) φj(1) | ⊗ · · · ⊗ |φj(n) φj(n) |,
j =1
where |φj(i) ∈ Vi and pi ≥ 0 with i pi = 1. For a given i, the various |φj(i) need not be orthogonal. If a mixed state ρ cannot be written as above, it is said to be entangled. This definition may
appear more complicated than expected; why not say a mixed state ρ is entangled if it cannot be written as ρ1 ⊗ · · · ⊗ ρn ? The more involved definition distinguishes entanglement from mere
classical correlation. For example, the mixed state ρcc = 1 |00 00| + 12 |11 11| is classically correlated (it cannot be written as ρ1 ⊗ · · · ⊗ ρn ), but is 2 not entangled. The state ρ+ = 12 (|00 +
|11)( 00| + 11|) is entangled. Appendix A discusses quantum entanglement versus classical correlations in more detail. If a mixed state ρ can be written as a probabilistic mixture of entangled
states, it is not necessarily entangled; it still may be separable. For example, consider ρ=
1 + 1 | + | + |− − |, 2 2
√ √ where |+ and |− are the Bell states |+ = 1/ 2(|00 + |11) and |− = 1/ 2(|00 − |11). We defined the mixed state ρ as a probabilistic mixture of maximally entangled states, but it is easy to check
that it can also be written as 1 1 |00 00| + |11 11|, 2 2 a probabilistic mixture of product states, so ρ is actually separable. There are a number of useful measures of entanglement for mixed
bipartite states, all of which coincide with the standard measure of entanglement on pure states, the von Neumann entropy of the density operator for one of the subsystems. We give a rough
description of a few of these measures. The amount of distillable entanglement contained in a mixed state ρ is the asymptotic ratio m/n of the maximum number m of maximally entangled states ρME that
can be obtained from n copies of ρ by LOCC. Conversely, the entanglement cost is the asymptotic ratio m/n of the minimum number n of copies of a maximally entangled state ρME needed to produce m
copies of ρ using only LOCC. The relative entropy of entanglement can be thought of as measuring how ρ=
10.2 Classifying Entangled States
close ρ is to a separable state ρS , and is defined to be inf tr[ρ(log ρ − log ρS )],
ρS ∈S
where the infinum is over all separable states ρS . It is known that not only is the distillable entanglement never greater than the cost of entanglement, but also for most mixed states the
distillable entanglement is strictly less than the cost of entanglement. In particular, there exist bound entangled states from which no entanglement can be distilled, but whose entanglement cost is
non-zero. The study of entanglement in mixed bipartite states is a rich area of research, with many known results not described here, but also with many remaining open questions. Even the
relationship between the measures we just described is not fully understood. 10.2.4 Multipartite Entanglement
Researchers are continuing to develop new measures of entanglement and explore properties of states entangled with respect to tensor decompositions into more than two subsystems. For quantum
computation, we are particularly interested in properties of n-qubit states for large n, and measures of entanglement for these states with respect to the decomposition into the individual qubit
systems. In spite of broad recognition that understanding multipartite entanglement is crucial for understanding the power and limitations of quantum computation, much remains unknown. Entangled
states provide a fundamental resource for other types of quantum information processing, such teleportation and dense coding, as well as quantum computation. Which types of entangled states are most
useful for which types of quantum information processing tasks, is an active area of research. Even for pure states of the simplest multipartite systems, three-qubit systems, quantifying entanglement
is complicated. We just saw that for two-qubit systems there are infinitely many LOCC equivalence classes. However, relaxing the LOCC condition simplifies the picture somewhat. A state |ψ can be
converted to |φ by stochastic local operations and classical communication (SLOCC) if there is a sequence of local operations with classical communications that with non-zero probability turns |φ
into |ψ. States |ψ and |φ are SLOCC equivalent if |ψ can be converted to |φ by SLOCC and vice versa. Under SLOCC equivalence, the two-qubit case reduces to two classes: entangled states and
unentangled states. For a three-qubit system X = A ⊗ B ⊗ C, the SLOCC classification of states with respect to the decomposition into the three systems, has six distinct SLOCC classes: •
unentangled states,
A-BC decomposable states,
B-AC decomposable states,
C-AB decomposable states,
SLOCC equivalent to |GH Z3 =
SLOCC equivalent to |W3 =
√1 (|000 + |111), 2
√1 (|001 + |010 + |100). 3
10 Quantum Subsystems and Properties of Entangled States
GHZ3 equivalent
W3 equivalent
unentangled Figure 10.1 Partial order of SLOCC classes for a three-qubit system, where states in upper classes can be converted to states in lower classes using SLOCC, but the two uppermost classes
cannot be converted to each other.
A state |ψ being in the A-BC decomposable class means that it can be written as |ψ = |ψA ⊗ |ψBC for |ψA ∈ A and |ψBC ∈ B ⊗ C, but it cannot be fully decomposed into |ψ = |ψA ⊗ |ψB ⊗ |ψC , where |ψA ∈
A, |ψB ∈ B and |ψC ∈ C. A partial order on these six classes is shown in figure 10.1: a state |ψ is contained in a class above that of state |φ if there exists an SLOCC sequence taking |ψ to |φ but
not vice versa. There are two inequivalent classes of states at the top of the hierarchy; |GH Z3 cannot be converted to |W3 or the other way around. It is not clear whether |GH Z3 or |W3 should be
considered more entangled; each appears to be highly entangled in some ways and less so in others. To illustrate the distinct types of entanglement these states embody, we look at these states in
terms of the persistency and connectedness of their entanglement. Persistency of entanglement The persistency of entanglement of |ψ ∈ V ⊗ · · · ⊗ V is the min-
imum number of qubits, Pe , that need to be measured to guarantee that the resulting state is unentangled. Maximal connectedness A state |ψ ∈ V ⊗ · · · ⊗ V is maximally connected if for any two
there exists a sequence of single-qubit measurements on the other qubits that when performed guarantee that the two qubits end up in a maximally entangled state. Let |GH Zn be the n-qubit state 1 |GH
Zn = √ (|00 . . . 0 + |11 . . . 1), 2 and |Wn be the n-qubit state 1 |Wn = √ (|0 . . . 001 + |0 . . . 010 + |0 . . . 100 + · · · + |1 . . . 000). n Because only one qubit needs to be measured to
reduce |GH Zn to an unentangled state, the persistency of entanglement of |GH Zn is only 1, so in this sense it is not very entangled. On
10.2 Classifying Entangled States
the other hand, |GH Zn is maximally connected. It is relatively easy to check that the states |Wn are not maximally connected. Yet, they do have high persistency: Pe (|Wn ) = n − 1. Thus, whether |GH
Zn or |Wn should be considered more entangled depends on what properties of entanglement one is interested in. For n ≥ 4, the situation becomes far more complicated. For n ≥ 4 there are infinitely
many SLOCC equivalence classes, and these classes are parametrized by continuous parameters. As n increases, it becomes less and less clear which states should be considered maximally entangled.
Cluster States A class of n-qubit entangled states, cluster states, combine properties of both
|GH Zn and |Wn states. The |GH Zn states are maximally connected but have persistency of only 1. The persistency of the |Wn states increases with n, but they are not maximally connected. Cluster
states are maximally connected and have persistency increasing with n. Cluster states form a universal entanglement resource for quantum computation that is the basis for cluster state, or one-way,
quantum computing, an alternative model of quantum computing discussed in chapter 13. Let G be any finite graph whose vertices are qubits. The neighborhood of any vertex v ∈ G, nbhd(v) is the set of
vertices w connected to v by an edge of the graph. An operator O stabilizes a state |ψ if O|ψ = |ψ. The graph states |G corresponding to a graph G is the state stabilized by the set of operators, one
for each vertex of G, ) Zi , (10.7) Xv ⊗ i∈nbhd(v)
where X = |1 0| + |0 1| and Z = |0 0| − |1 1| are the familiar Pauli operators, and the superscript on these operators indicates to which qubit the operator is applied. If the graph G is a
d-dimensional rectangular lattice, then |G is called a cluster state (see figure 10.2). There is some discrepancy in terminology in the literature; sometimes cluster states is taken to be synonymous
with graph states. Graph states, including cluster states, can be constructed as follows. For each vertex, begin with a qubit in state |+. Then for each edge in the graph apply the controlled phase
operator CP = |00 00| + |01 01| + |10 10| − |11 11|. Since the controlled phase operator is symmetric on the qubits, and the applications of the controlled phase all commute with each other, it does
not matter in which order the operators are applied. Here we consider only * the states stabilized by the operators X v ⊗ i∈nbhd(v) Z i , but some expositions consider all states which are joint
eigenstates of these operators. Example 10.2.7 Construction of the cluster state for a 1 × 2 lattice. Apply CP to |+|+ to obtain
the cluster state |φ2 =
1 1 1 (|00 + |01 + |10 − |11) = √ (|+|0 + |−|1) = √ (|0|+ + |1|−). 2 2 2
This state is LOCC equivalent to a Bell state.
10 Quantum Subsystems and Properties of Entangled States
Z Z
X Z
Figure 10.2 A 4 × 5 rectangular lattice. Sample operators that define a cluster state are shown, one for an internal node, one for a node on the boundary but not at a corner, and one for a corner
node. A cluster state is a state that is a simultaneous eigenstate of all such operators, one for each node of the lattice.
Example 10.2.8 Cluster state for a 1 × 3 lattice. The operator (CP ⊗ I )(I ⊗ CP ) applied to |+|+|+ results in the cluster state
1 |φ3 = (CP ⊗ I )( √ (|+(|0|+ + |1|−)) 2 1 (|0|0|+ + |0|1|− + |1|0|+ − |1|1|−) 2 1 = (|+|0|+ + |−|1|−). 2 This state is LOCC equivalent to |GH Z3 . =
Example 10.2.9 Cluster state for a 1 × 4 lattice.
1 (|0|+|0|+ + |1|−|0|+ + |0|−|1|− + |1|+|1|−) 2 1 = (|+|0|+|0 + |−|0|−|0 + |+|0|−|1 + |−|1|+|1) 2
|φ4 =
10.3 Density Operator Formalism for Measurement
The reader should check that each of these states is stabilized by all of the operators of equation 10.7. Briegel and Raussendorf give a straightforward proof that all cluster states are maximally
connected. A more involved argument shows that cluster states |φn have persistency n/2!. Thus, while the persistency of |φn is not as great as the persistency of |Wn , the persistency of cluster
states does increase linearly with the number of qubits, and unlike |Wn , cluster states are maximally connected. Thus cluster states combine entanglement strengths of both the |GH Zn states and the
|Wn states. In section 13.4.1, we briefly return to cluster states to describe the use of their entanglement as a quantum computational resource. The following table summarizes the situation:
|GH Zn |φn |Wn
max connected
1 n/2! n−1
10.3 Density Operator Formalism for Measurement
An analysis of a quantum algorithm or protocol that involves measurement must take into account all possible outcomes of any measurement. Up to now we have had only an awkward way of describing the
result of a future measurement: listing the possible outcomes and their respective probabilities. Density operators provide a compact and elegant way to model the probabilistic outcomes of a
measurement yet to be performed or of a measurement for which the outcome is unknown. Density operators provide a means of compactly expressing a probability distribution over quantum states or the
statistical properties of an ensemble of quantum states. If the reader has not yet read appendix A on the relations between probability theory and quantum mechanics, now would be a good time to do
so. The following game motivates this use of density operators in this context. Keep in mind the definition of a state given in section 10.1: a state encapsulates “all information about the system
that can be gained from any number of measurements on a supply of identical quantum systems." Suppose you are told you will be sent a sequence of qubits and that either all members of the sequence
are the first qubit of a Bell state √1 (|00 + |11) or that a random sequence of |0 and |1 are sent, with |0 and |1 having equal 2 probability. Your job is to determine which type of sequence you are
receiving. What is your strategy? It is impossible to do better than guessing randomly; without access to more information, there is no way to distinguish the two sequences. If you were given access
to the second qubit of each Bell pair in the first case and a second copy of each qubit in the second, a winning strategy is possible. But without access to the second qubit, the two sequences are
indistinguishable. To see why, recall from section 10.1.1 that the density operator for one qubit of a Bell pair is 12 I , and that
10 Quantum Subsystems and Properties of Entangled States
1 0 0 0 and |1 1| = respectively. 0 0 0 1 From appendix A, the density operator ρ for a 50–50 probability distribution over the states |0 and |1 is 1 1 0 1 0 0 ρ= + 2 0 0 2 0 1
the density operators for |0 and |1 are |0 0| =
1 1 |0 0| + |1 1| 2 2 1 = I. 2 =
So the density operator of one qubit of a Bell pair is the same as the mixed state of a 50-50 probability distribution over the states |0 and |1. More generally, a probability distribution over
quantum states where |ψi has probability pi is represented by the density operator ρ=
pi |ψi ψi |.
This representation works even if the states |ψi are not mutually orthogonal. Probability distributions over quantum states have appeared frequently in this book to describe the possible outcomes of
a measurement. Density operators provide a concise representation, one that can be manipulated directly to see the effects of subsequent unitary transformations and measurements. Given an orthogonal
set {|xi } of the possible outcomes of a measurement of a specific state |x, with pi being the probability of each outcome, the density operator representing this probability distribution over
quantum states is ρ= pi |xi xi |. It is easy to check that ρ is Hermitian, trace 1, and positive, so ρ is a density operator. The density operator ρ = pi |xi xi | summarizes the possible results of a
measurement as a probabilistic mixture of the density operators for the possible resulting pure states weighted by the probability of the outcomes. 10.3.1 Measurement of Density Operators
This section discusses the meaning of and notation for measurement of density operators. The measurement of mixed states directly generalizes that of pure states. First, we write the familiar
measurement of pure states in terms of density operators. Let |x be an element of a N = 2n dimensional vector space X with corresponding density operator ρx = |x x|. Measuring |x with an operator O
that has K associated projectors Pj , yields with probability pj = x|Pj |x the state
10.3 Density Operator Formalism for Measurement
Pj |x
1 = √ Pj |x. pj
|Pj |x|
The density operator for each of these states is ρxj =
1 1 † † Pj |x x|Pj = Pj ρx Pj , pj pj
so the density operator ρxO summarizing the possible outcomes of the measurement is † ρxO = pj ρxj = Pj ρx Pj . j
When ρx is written in an eigenbasis for the measuring operator O, the result ρxO of measurement P |x with O is particularly easy to see. Let {|αi } be an eigenbasis for O that contains the vectors |
Pjj |x| as the first K elements of an N-element basis. In this basis, |x =
K−1 j =0
N−1 Pj |x = xi |αi , |Pj |x| i=0
where xi =
pj for i < K, and xi = 0 for i ≥ K. So ⎞† N−1 ⎛N−1 xi |αi ⎝ xj |αj ⎠ = xi xj |αi αj |; ρx = |x x| = j =0
the ij th entry of the matrix for ρx in basis {|αk } is x¯i xj . The density operator ρxO is † xj xj |αj αj | = Pj |x x|Pj , ρxO = j
so is obtained from ρx by removing all the cross terms; the matrix for ρxO in the basis {|αi } is the matrix ρx with the off-diagonal entries replaced with zeros. Measurement of mixed states is
easily derived from that of pure states. Let ρ be a density operator. Using results from section 10.1.1, ρ can be viewed as a probabilistic mixture ρ = i qi |ψi ψi | of of pure states |ψi . Measuring
the mixed state ρ can be viewed as measuring |ψi ψi | with probability qi , so the measurement outcomes are encapsulated by the density operator ρ , a probabilistic mixture of the density operators
representing the possible outcomes of measuring each |ψi ψi |: † Pj |ψi ψi |Pj . ρi = ρxO
Thus, the density operator ρ for the possible outcomes of measuring the mixed state ρ is † † † ρ = qi Pj |ψi ψi |Pj = Pj qi |ψi ψi | Pj = Pj ρPj . i
10 Quantum Subsystems and Properties of Entangled States
The term Pj ρPj is not a density operator in general; it is positive and Hermitian, but its trace may be less than one. Because the trace of a positive, Hermitian operator is zero only if the
operator is the zero operator, ρ may be viewed as a probabilistic mixture of density operators ρj =
Pj ρPj
tr(Pj ρPj† )
with weighting pj = tr(Pj ρPj ):
ρ =
p j ρj =
Pj ρPj
tr(Pj ρPj )
where we ignore the zero terms. For a pure state |ψ with density operator ρ = |ψ ψ|, ρ =
Pj ρPj
ψ|Pj |ψ
because †
tr(Pj |ψ ψ|Pj ) = ψ|Pj Pj |ψ = ψ|Pj |ψ by the trace trick of box 10.1 and the properties of projection operators. Both measurements with known outcome and measurements that have yet to be performed
or for which the outcome is not known can be concisely represented by density operators. Suppose we P |x measure |x with operator O and obtain outcome |ψ = |Pjj |x| with density operator ρ = |ψ ψ|.
There are two different representations for the result of this measurement, ρψ and ρxO . Which should we use? If we do not know the measurement outcome, we must use ρxO . If we do know the outcome,
we should use ρψ . We can use the density operator ρxO , but ρψ encapsulates more of the information we know. If we were to use ρxO , the outcome of the measurement must be kept track of separately,
and since ρxO allows for more possibilities, using it means performing unnecessary calculations involving possibilities that did not happen. The same distinction arises when sampling from a
probability distribution; before the sample is taken, or if the sample is taken but the outcome is unknown, the best model for the sample is the probability distribution itself. But once the outcome
is known, the sample is best modeled by the known value. Appendix A discusses such relations between the classical and quantum situations. While issues with measurement connect with the deepest
issues in quantum mechanics, the distinction between these two models for measurement outcomes is not one of these issues. The deeper questions involve when and how a measurement outcome becomes
known and by whom. We do not elaborate on these quantum mechanical issues here. 10.4 Transformations of Quantum Subsystems and Decoherence
Density operators were introduced to enable us to better discuss quantum subsystems. In the preceding section, we used it fruitfully to gain insight into entanglement. So far we have only used it to
discuss static situations. We turn now to dynamics. In the first two parts of the book, we
10.4 Transformations of Quantum Subsystems and Decoherence
discussed quantum systems modeled by pure states that are acted upon by unitary operators. As we saw in section 10.1, to discuss quantum subsystems, we needed to expand from considering only pure
states to considering density operators. Similarly, to discuss the dynamics of quantum subsystems, we need to expand from considering only unitary operators to a more general class of operators.
Section 10.4.1 develops superoperators, this more general class of operators, by considering unitary operators on the entire system and looking to see what can be understood about their effect on a
subsystem. Section 10.4.2 describes a decomposition that gives insight into superoperators. Section 10.4.3 discusses superoperators corresponding to measurements. Section 10.4.4 makes use of the
superoperator formalism to discuss decoherence, errors caused by the interaction of the quantum system under consideration with the environment. This discussion of decoherence provides the setting
for the discussion of quantum error correction in chapter 11. 10.4.1 Superoperators
This section considers the dynamics of subsystems. Section 10.1 first considers the case in which the subsystem A is the whole system (A = X), and then considers the general case. Here, first
consider a unitary operator acting on a system X. In the original notation for pure states, the unitary operator U applied to X takes |ψ to U |ψ. The density operator for a pure state |ψ is ρ = |ψ ψ
|, so U takes ρ to U |ψ ψ|U † = UρU † . The general case, in which A is a subsystem of X = A ⊗ B, is more complicated. Suppose |ψ ∈ X = A ⊗ B and U : X → X. Then the density operator ρA = trB |ψ ψ|
is sent to ρA = trB (U |ψ ψ|U † ). When U = UA ⊗ UB , ρA † can be deduced from just ρA and U , and it will be ρA = UA ρA UA . For a general unitary operator U , however, it is not possible to deduce
ρA from only U and ρA ; the density operator ρA depends on the original state |ψ of the whole system. Two examples illustrate this point. Example 10.4.1 Let X = A ⊗ B, where A and B are both
single-qubit systems. Suppose ρA =
|0 0|, and U = Cnot where B is the control qubit and A the target: U = |00 00| + |11 01| + |10 10| + |01 11|. The density operator ρA for subsystem A is consistent with many possible states of the
entire system X, including |ψ0 = |00, |ψ1 = |01, and |ψ2 = √12 |0(|0 + |1). What is the density operator ρA for system A after U has been applied? If the state of the entire system is |ψ0 = |00, then
ρA = |0 0|. But if it were |ψ1 = |01, then ρA = |1 1|, or if it were |ψ2 = √12 |0(|0 + |1), then ρA = 12 I .
In fact, the resulting mixed state ρA may have no relation with the initial mixed state ρA . Example 10.4.2 Consider the unitary operator
USwitch = |00 00| + |10 01| + |01 10| + |11 11|
10 Quantum Subsystems and Properties of Entangled States
acting on single-qubit systems A and B. The transformation exchanges the states of the two systems. Suppose system A is originally in state ρA = |ψ ψ| and system B is in state |0 0|. After applying U
, the resulting state of system A is |0 0| no matter what |ψ is. Let DA be the set of all density operators for subsystem A. When initially subsystem A is not entangled with subsystem B, and
subsystem B is in state |φB , a unitary operator U : X → X φ induces a transformation SUB : DA → DA . Specifically, the unitary transformation U :X→X |ψ → U |ψ induces φ
SUB : DA → DA ρA → ρA , where ρA = trB |ψ ψ| and ρA = trB U |ψ ψ|U † . Induced transformations such as SUB are called superoperators. Superoperators are linear: the effect of a superoperator S on any
density operator ρ that is a probabilistic mixture of other density operators, ρ = i pi ρi , is the sum of the superoperator applied to each of the components: S : ρ → pi S(ρi ). φ
10.4.2 Operator Sum Decomposition
Given a superoperator S : DA → DA , it would be handy to describe it just in terms of system A and formalisms we already have for operators on A. General superoperators, however, are not of the form
UρU † for some unitary operator. They are not even reversible φ in general: from example 10.4.2, for U = USwitch and |φ = |0, SU takes ρA = |ψ ψ| to ρA = |0 0| for all |ψ. Furthermore, most
superoperators are not even of the form AρA† for some linear operator A. However, it turns out that every superoperator is the sum of operators of this form; for every superoperator S, there exist
linear operators A1 , . . . , AK such that S(ρ) =
Ai ρAi .
Such a representation is known as the operator sum decomposition for S. The operator sum decomposition for a given superoperator S is not, in general, unique. φ To obtain an operator sum
decomposition for SU , let {|βi } be a basis for B and let Ai : A → A be the operator Ai = βi |U |φ defined in equation 10.5 of box 10.2. Then
10.4 Transformations of Quantum Subsystems and Decoherence
SU (ρ) = trB (U (ρ ⊗ |φ φ|)U † ) =
βi |U (ρ ⊗ |φ φ|)U † |βi
βi |U |φρ φ|U † |βi
Ai ρAi .
To see how the third line follows from the second, first consider the pure state case ρ = |ψ ψ|, from which the general case ρ, a mixture of pure states, follows. For a given superoperator there are
many possible operator sum decompositions; the operator sum decomposition depends on which basis is used. The next two examples give the operator sum decomposition in the standard basis for the
operators of examples 10.4.1 and 10.4.2. Example 10.4.3 Operator sum decomposition for Cnot and |φ =
√1 (|0 + |1. The Cnot 2
operator U of example 10.4.1 can be written U = X ⊗ |1 1| + I ⊗ |0 0|. Suppose that initially the two systems are unentangled and system A is in state ρ = |ψ ψ| and B is in state ρ = |φ φ|. Suφ (ρ) =
trB (U (ρ ⊗ |φ φ|)U † ) †
= A0 ρA0 + A1 ρA1 where A0 = 0|U |φ and A1 = 1|U |φ. Then, using the definition of Ai found in equation 10.5 of box 10.2, A0 |ψ =
αi | 0|U |ψ|φ |αi
= 0| 0|(X ⊗ |1 1| + I ⊗ |0 0|)|ψ|φ |0 + 1| 0|(X ⊗ |1 1| + I ⊗ |0 0|)|ψ|φ |1 = ( 0| 0|(X ⊗ |1 1|)|ψ|φ + 0| 0|(I ⊗ |0 0|)|ψ|φ )|0 + ( 1| 0|(X ⊗ |1 1|)|ψ|φ + 1| 0|(I ⊗ |0 0|)|ψ|φ )|1. Because 0|1 = 0,
the first and third terms are zero, so A0 |ψ = 0| 0|I ⊗ |0 0|)|ψ|φ |0 + 1| 0|I ⊗ |0 0|)|ψ|φ |1 = 0|ψ 0|φ|0 + 1|ψ 0|φ|1 = 0|φ|ψ.
10 Quantum Subsystems and Properties of Entangled States
Since |φ =
√1 (|0 + |1), 2
1 A0 |ψ = √ |ψ, 2 so 1 A0 = √ I. 2 Similar reasoning shows that A1 |ψ = 0| 1|X ⊗ |1 1|)|ψ|φ |0 + 1| 1|X ⊗ |1 1|)|ψ|φ |1 = 0|X|ψ 1|φ |0 + 1|X|ψ 1|φ |1 = 1|φ(X|ψ) 1 = √ X|ψ, 2 so 1 A1 = √ X. 2
Example 10.4.4 Operator sum decomposition for USwitch and |φ = |0. Let
USwitch = |00 00| + |10 01| + |01 10| + |11 11| and |φ = |0. Suφ (ρ) = trB (Uρ ⊗ |φ φ|U † ) †
= A0 ρA0 + A1 ρA1 where A0 = 0|U |φ and A1 = 1|U |φ. 1 A0 |ψ =
αi | 0|U |ψ|φ |αi i=0
= 00||φ|ψ |0 + 10||φ|ψ |1 = 0|φ 0|ψ 0| + 1|φ 1|ψ 1|. Since |φ = |0, A0 |ψ = 0|ψ|0 and
10.4 Transformations of Quantum Subsystems and Decoherence
A0 = |0 0|. Similar reasoning gives A1 = |0 1|. †
Each term Ai ρAi in the operator sum decomposition is Hermitian and positive, but generally †
does not have trace one. Since tr(Ai ρAi ) ≥ 0, the operator
Ai ρAi
is Hermitian, positive, and tr(Ai ρA†i ) has trace one, and therefore is a density operator. Furthermore, since the trace of a Hermitian, † φ positive operator is zero only if the operator is zero,
and 1 = tr(SU (ρ)) = K i=1 tr(Ai ρAi ), †
SU (ρ) is a probabilistic mixture of the operators φ
SU (ρ) =
Ai ρAi
tr(Ai ρA†i )
Ai ρAi
tr(Ai ρAi )
where pi = tr(Ai ρAi ) and we have ignored any zero terms. Operator sum decompositions for superoperators S on subsystem A of system X = A ⊗ B, and their dependence on the basis chosen for B can be
understood in terms of measurement. It is not a coincidence that equation 10.8 is reminiscent of the equation ρ =
Pj ρPj
tr(Pj ρPj )
that encapsulates the possible outcomes of measurement of ρ by operator O with associated φ projectors Pj . Let Ai be the operator obtained in the operator sum decomposition for SU when using basis
{|bi } for B. Suppose that after U : A ⊗ B → A ⊗ B was applied to ρ, subsystem B were measured with respect to the projectors Pi = |bi bi | for the K = 2k basis elements |bi for B. The best
description of subsystem A after this measurement is a probabilistic mixture of mixed states ρ = i pi ρi where ⎛ ⎞ † † (I ⊗ Pi )U (ρ ⊗ |φ φ|)U (I ⊗ Pi ) ⎠ ρi = trB ⎝ † tr (I ⊗ Pi )U (ρ ⊗ |φ φ|)U † (I
⊗ Pi ) and
† pi = tr (I ⊗ Pi )U (ρ ⊗ |φ φ|)U † (I ⊗ Pi ) . Since trB (I ⊗ |βi βi |)Uρ ⊗ |φ φ|U † (I ⊗ |βi βi |) = βi |Uρ ⊗ |φ φ|U † |βi , φ the density operator ρ = i pi ρi is identical to the density operator
SU (ρ).
10 Quantum Subsystems and Properties of Entangled States
10.4.3 A Relation Between Quantum State Transformations and Measurements
Section 10.3.1 showed that the density operator representing the probabilistic mixture of outcomes of a measurement O with associated projectors {Pj } of a system A initially represented by the mixed
state ρ is
ρ =
Pj ρPj
tr(Pj ρPj )
For any measurement O, the map SO : DA → DA ρ → ρ can also be obtained in a different way, as the superoperator coming from a unitary transformation on a larger system. More specifically, for any
observable O of system A, there is a larger system φ X = A ⊗ B, a unitary operator U : X → X, and a state |ψ of B such that SU = SO . To prove this statement, suppose O has M distinct eigenvalues.
Let B be a system of dimension M with basis {|βi }, and suppose that B is initially in the state |φ = |β0 . Let U be any unitary operator on X = A ⊗ B that maps |ψ|β0 →
Pi |ψ|βi .
Then for ρ = |ψ ψ|, φ
SU (ρ) = trB (U (ρ ⊗ |φ φ|)U † ) =
Ai ρAi
Ai |ψ ψ|Ai
where Ai = βi |U |φ. Since |φ = |β0 ,
αj | βi |U |ψ|β0 |αj Ai |ψ = j
M =
αj | βi | Pk |ψ|βk |αj j
αj |Pi |ψ |αj = j
= Pi |ψ.
10.4 Transformations of Quantum Subsystems and Decoherence
So φ
SU (ρ) =
Pi |ψ ψ|Pi
M i=1
Pi |ψ ψ|Pi
tr(Pi |ψ ψ|Pi ) †
where pi = tr(Pi |ψ ψ|Pi ). There is debate within the quantum physics community as to the extent to which this relationship between unitary operators and measurement clarifies various issues in the
foundations of quantum mechanics. We do not elaborate on these issues here. 10.4.4 Decoherence
In practice, it is impossible to isolate a quantum computer completely from its environment. Because all physical qubits interact with their environment, the computational qubits of a quantum
computer are properly viewed as a subsystem of a larger system consisting of the computation qubits and their environment. By an environment we mean a subsystem over which we have no control: we
cannot gain information from it by measurement or apply gates to it. In some cases, the effect of an environmental interaction on the computational subsystem is reversible by transformations on the
subsystem alone. But in other cases, decoherence occurs. In decoherence, information about the state of the computational subsystem is lost to the environment. Such errors are serious because the
environment is beyond our computational control. The next two chapters develop quantum error correction and fault-tolerant techniques to counteract errors due to decoherence as well as other sorts of
errors, such as those stemming from imperfections in the implementations of quantum gates. This section lays a foundation for that discussion by setting up an error model for errors due to
interaction with the environment. The operator sum decomposition provides a means for describing the effect on the computational subsystem of an interaction with another subsystem in terms of
operations on the computational subsystem alone. Using the operator sum decomposition, the effect on the computational subsystem of any interaction with the environment can be viewed as a mixture of
K †
errors resulting in the K mixed states
Ai ρAi
. tr(Ai ρA†i ) Common error models suppose that the environment interacts separately with different parts of the computational subsystem. For example, a common error model consists of errors that are
both local and Markov:
local each qubit interacts only with its own environment, and
Markov the state of a qubit’s environment, and its interaction with the qubit, is independent of the state of the environment in previous time intervals.
More precisely, under a local error model, the errors to which an n-qubit system is subjected can be modeled by interaction with an environment E = E1 ⊗ · · · ⊗ En such that the environment Ei
10 Quantum Subsystems and Properties of Entangled States
interacts with only the ith qubit of X; the errors can be modeled by unitary transforms of the form U = U1 ⊗ · · · ⊗ Un such that Ui acts on Ei and the ith qubit of X is given by superoperators of
the form SU = SU1 ⊗ · · · ⊗ SUn , where SUi acts on only the ith qubit of X. A reasonable way to think of the Markov condition is that each qubit’s environment is renewed (or replaced) at each
computational time step. More concretely, under a local and Markov error model, the computational subsystem X at a given time t interacts with an environment E t = E1t ⊗ · · · ⊗ Ent in such a way
that the only interactions are between Eti and the ith qubit of system X, and the current state of the environment E, and its interaction with X, is independent of the state of the environment at any
previous time s. Most of the quantum error correcting codes and fault-tolerant techniques discussed in chapters 11 and 12 are designed to handle local and Markov errors. Techniques to handle other
error models have been developed, some of which are briefly described in section 13.3. 10.5 References
Jozsa and Linden [167] show that any quantum algorithm that achieves exponential speedup over classical algorithms must entangle an increasing number of qubits. Their proof applies only to algorithms
run in isolation in which the state is always in a pure state. The results of section 10.4 show that any mixed state algorithm can be viewed as a pure state algorithm on a larger system. The result
of Jozsa and Linden still applies in this more general setting, except that the entanglement could involve noncomputational qubits of the larger system; it is not required to be between the
computational qubits. Efficient classical simulations of certain quantum systems have be found by Vidal and others [278, 204]. Meyer discusses the lack of entanglement throughout the
Bernstein-Vazirani algorithm and related results [213]. Bennett and Shor’s “Quantum information theory" [38] discusses various entanglement measures for mixed states of bipartite systems, including
some examples and a distillation protocol. It is generally a good overview of topics in quantum information theory, including a number of interesting topics we will not cover in this book. Bruss’s
“Characterizing entanglement" [69] is an excellent fifteen-page overview of many of the most significant results about entanglement to date. Myhr’s master’s thesis, “Measures of entanglement in
quantum mechanics" [215], gives a readable and more detailed and account of many of these results. Nielsen’s majorization results is found in Nielsen [217]. The SLOCC classification of 3-qubit states
was first described by Dür, Vidal, and Cirac in [107]. Briegel and Raussendorf define persistency of entanglement and maximal connectedness in [65], as well as introducing cluster states. 10.6
Exercises Exercise 10.1. Show that the definition of the partial trace is basis independent.
10.6 Exercises
Exercise 10.2. Show that trB (OA ⊗ OB ) = OA tr(OB ). Exercise 10.3. a. Find the density operators for the whole system and both qubits of | − = b. Find the density operators for the whole system and
both qubits of |+ =
√1 (|00 − |11). 2 √1 (|01 + |10). 2
Exercise 10.4. Distinguishing pure and mixed states. a. Show that a density operator ρ represents a pure state if and only if ρ 2 = ρ. In other words, ρ
is a projector. b. What can be said about the rank of the density operator of a pure state? Exercise 10.5. We showed that any density operator can be viewed as a probability distribution
over a set of orthogonal states. Show by example that some density operators have multiple associated probability distributions, so that in general the probability distribution associated to a
density operator is not unique. Exercise 10.6. Geometry of Bloch regions. a. Show that the Bloch region, the set S of mixed states of an n-qubit system, can be parametrized
by 22n − 1 real parameters. b. Show that S is a convex set. c. Show that the set of pure states of an n qubit system can be parametrized by 2n+1 − 2 real param-
eters, and therefore the set of density matrices corresponding to pure states can be parametrized in this way also. d. Explain why for n > 2 the boundary of the set of mixed states must consist of
more than just
pure states. e. Show that the extremal points, those that are not convex linear combinations of other points,
are exactly the pure states. f. Characterize the non-extremal states that are on the boundary of the Bloch region. Exercise 10.7. Give a geometric interpretation for R(θ ) and T (φ) of Section 5.4.1
by determining their behavior on the set of mixed states viewed as points of the Bloch sphere. Exercise 10.8. The Schmidt decomposition. Every m × n matrix M, with m ≤ n, has a singular
value decomposition M = U DV where D is an m × n diagonal matrix with non-negative real entries, and U and V are m × m and n × n unitary matrices. Let |ψ ∈ A ⊗ B, where A has dimension m and B has
dimension n, with m ≤ n. Let {|i} be a basis for A and {|j } be a basis for B, then for some choice of mij ∈ C |ψ =
n−1 m−1 i=0 j =0
aij |i|j .
10 Quantum Subsystems and Properties of Entangled States
Let M be the m × n matrix with entries aij . Use the singular value decomposition (SVD) for M to find sets of orthonormal unit vectors {|αi } ∈ A and {|βj } ∈ B such that |ψ =
λi |αi |βj
where λi is non-negative. The λi are called the Schmidt coefficients, and K, the number of λi , is called the Schmidt rank or Schmidt number of |ψ. Exercise 10.9. Singular value decomposition. Let A
be an n × m matrix. a. Let |uj be unit length eigenvectors of A† A with eigenvalues λj . Explain how we know that
λj is real and non-negative for all j . b. Let U be the matrix with |uj as its columns. Show that U is unitary. c. For all eigenvectors with non-zero eigenvalues define |vi =
|vi as columns. Show that V is unitary.
A|xi √ . λi
Let V be the matrix with
d. Show that V † AU is diagonal. e. Conclude that A = V DU † for some diagonal D. What is D? Exercise 10.10. For |ψ ∈ A ⊗ B, show that |ψ is unentangled if and only if S(trB ρ) = 0, where ρ = |ψ ψ|.
Exercise 10.11. a. Show that the states
√1 (|01 + |10) 2
√1 (|00 − i|11) 2
are maximally entangled.
b. Write down two other maximally entangled states. Exercise 10.12. What is the maximum possible amount of entanglement, as measured by the von Neumann entropy, over all pure states of a bipartite
quantum system A ⊗ B where A has dimension n and B has dimension m with n ≥ m? Exercise 10.13. Claim: LOCC cannot convert an unentangled state to an entangled one. a. State the claim in more precise
language. b. Prove the claim. Exercise 10.14. Show that the four Bell states | ± and |± are all LOCC equivalent. Exercise 10.15. a. Show that any two-qubit state can be converted to |00 via LOCC. b.
Show that any n-qubit state can be converted to a state unentangled with respect to the tensor decomposition into the n qubits. Exercise 10.16. Show that the vector of ordered eigenvalues λψ for the
density operator of any
unentangled state |ψ of a bipartite system majorizes the vectors for any other state of the bipartite system.
10.6 Exercises
Exercise 10.17. Maximally entangled bipartite states. Let |ψ be a state of the form
1 A |ψ = √ |φ ⊗ |φiB m i=1 i m
where the {|φiA } and {|φiB } are orthonormal sets. Show that the vector λψ is majorized by λφ for all states |φ ∈ A ⊗ B. Exercise 10.18. Classify all two-qubit states up to SLOCC equivalence.
Exercise 10.19. Show that |GH Z3 can be converted via SLOCC to any A-BC decomposable
state. Exercise 10.20. Show that the states |GH Zn are maximally connected. Exercise 10.21. Show that the states |Wn are not maximally connected. Exercise 10.22. a. If |ψ has persistency n and |φ has
persistency m, what is the persistency of |ψ ⊗ |φ? b. Show by induction that the persistency of |Wn is n − 1. (Hint: You may want to use (a).) Exercise 10.23. a. Check that each of the cluster states
of examples 10.2.7, 10.2.8, and 10.2.9 is stabilized by the
operators of equation 10.7. b. Find the cluster state for the 1 × 5 lattice. c. Find the cluster state for the 2 × 2 lattice. Exercise 10.24. Maximal connectedness of cluster states. a. Show by
induction that for the qubits corresponding to the ends of the chain in the cluster state
|φn for the 1 × n lattice, there is a sequence of single-qubit measurements that place these qubits in a Bell state. b. Show that for any two qubits q1 and q2 in a graph state, there exists a
sequence of single-qubit
measurements that leave these qubits as the end qubits of a cluster state of a 1 × r lattice. Conclude that graph states are maximally connected. Exercise 10.25. Persistency of cluster states. For
the cluster state |φN corresponding to the 1 × N
lattice for N even, give a sequence of N/2 single-qubit measurements that result in a completely unentangled state. Exercise 10.26. Show that if {|xi } is the set of possible states resulting from a
measurement and pi is the probability of each outcome, then ρ = pi |xi xi | is Hermitian, trace 1, and positive. Exercise 10.27. For initial mixed state ρA ⊗ ρB , find the mixed state of A after the
U = |00 00| + |10 01| + |01 10| + |11 11| has been applied.
10 Quantum Subsystems and Properties of Entangled States
Exercise 10.28. Suppose that subsystem A = A1 ⊗ A2 and that U : A ⊗ B → A ⊗ B behaves as
the identity on A1 . In other words, suppose U = I ⊗ V where I acts on A1 and V acts on A2 ⊗ B. φ Show that for any state |φ of system B, the superoperator SU can be written as I ⊗ S for some
superoperator S on subsystem A2 alone. Exercise 10.29. a. Give an alternative operator sum decomposition for example 10.4.3. b. Give an alternative operator sum decomposition for example 10.4.4. c.
Give a general condition for two sets of operators {Ai } and {A j } to give operator sum
decompositions for the same superoperator. Exercise 10.30. a. Describe a strategy for determining which sequence was sent in the game of section 10.3 if both qubits are received. More specifically,
you receive a sequence of pairs of qubits. Either all pairs are randomly chosen from {|00, |11} or all pairs are in the state √12 (|00 + |11). Describe a strategy for determining which sequence was
sent. b. For each sequence, write down the density operator representing that sequence.
Quantum Error Correction
For practical quantum computers to be built, techniques for handling environmental interactions that muddle the quantum computations are required. Shor’s algorithms, while universally acclaimed, were
initially thought by many to be of only theoretical interest; estimates suggested that unavoidable interactions with the environment were many orders of magnitude too strong to be able to run Shor’s
factoring algorithm on a number that was of practical interest, and no one had any idea as to how to perform error correction for quantum computation. Given the impossibility of copying an unknown
quantum state, a straightforward application of classical methods to the quantum case is not possible, and it was far from obvious what else to do. Results such as the no-cloning theorem made many
experts believe that robust quantum computation might be impossible. It turns out, however, that an elegant and surprising use of classical techniques forms the foundation of sophisticated quantum
error correction techniques. Quantum error correction is now one of the most extensively developed areas of quantum computation. It was the discovery of quantum error correction, as much as of Shor’s
algorithms, that turned quantum information processing into a significant field in its own right. In the classical world, error correcting codes are primarily used in data transmission. Quantum
systems, however, are difficult to isolate sufficiently from environmental interactions while retaining the ability to perform computations. In any quantum system used to perform quantum information
processing, the effects of interaction with the environment are likely to be so pervasive that quantum error correction will be used at all times. We begin in section 11.1 with a few simple examples
to give a sense for the workings of quantum error correction, particularly purely quantum aspects such as how quantum superpositions of both errors and states are handled. A general framework for
quantum error correction is given in section 11.2. This framework has similarities to the framework for classical codes but is considerably more complicated. Quantum error correcting codes must
handle the infinite variety of single-qubit states and the peculiarly quantum ways in which qubits can interact with each other. In section 11.3, Calderbank-Shor-Steane (CSS) codes are presented.
Then, in section 11.4, the more general class of stabilizer codes is described. Most of the specific quantum
11 Quantum Error Correction
error correcting codes we consider are designed to correct all errors on k or fewer qubits. Such codes work well for systems subject to independent single-qubit, or few-qubit, errors. While that sort
of error behavior is expected in many situations, other reasonable error models exist. Throughout the chapter we pretend that quantum error correction can be carried out perfectly. Chapter 12
discusses fault-tolerant methods that enable quantum error correction to work even when carried out imperfectly. Other approaches to robust quantum computation are discussed in chapter 13. 11.1 Three
Simple Examples of Quantum Error Correcting Codes
Classical error correcting codes map message words into a code space, consisting of longer words, in a redundant way that allows detection and correction of errors. Quantum error correcting codes
embed the vector space of message states, called words, into a subspace of a larger vector space, the code space. A quantum algorithm that logically operates on n-qubits is implemented as an
algorithm operating on the much larger m-qubit system in which the n-qubits are encoded. To detect and correct an error, computation into ancilla qubits is performed and the ancilla are measured.
Error correcting transformations are applied according to the result of that measurement. To preserve superpositions, the encoding and measurements must be carefully designed so that these
measurements give information only about what error occurred and not about the encoded state of the computation. To give a general sense for quantum error correction, particularly its use of
measurement and its ability to correct superpositions of correctable errors, we first describe a simple code that corrects only single-qubit bit-flip errors, then a code that corrects only
single-qubit phase errors, and finally a code that corrects all single-qubit errors. 11.1.1 A Quantum Code That Corrects Single Bit-Flip Errors
A single-qubit bit-flip error applies X to one of the qubits of the quantum computer. The following simple code is a quantum version of the classical [3, 1] repetition code, which will be described
more formally in section 11.2. It detects and corrects any of the three single bit-flip errors {X2 = X ⊗ I ⊗ I, X1 = I ⊗ X ⊗ I, X0 = I ⊗ I ⊗ X}, where Xi means the tensor product of X applied to the
ith qubit with the identity on all other qubits. In brief, the [3, 1] repetition code encodes each bit in three bits as 0 → 000 1 → 111.
11.1 Three Simple Examples of Quantum Error Correcting Codes
Decoding is done by majority rules ⎫ 000 ⎪ ⎪ ⎬ 001 → 0 010 ⎪ ⎪ ⎭ 100 ⎫ 011 ⎪ ⎪ ⎬ 101 → 1. 110 ⎪ ⎪ ⎭ 111 To implement majority rules, first determine if an error has occurred by comparing the first
bit with each of the other bits. More formally, to make the comparisons, use two additional bits, called ancilla, to hold the computation of b2 ⊕ b1 and b2 ⊕ b0 respectively. This computation is
called the syndrome computation. The syndrome values of b2 ⊕ b1 and b2 ⊕ b0 determine which error correcting transformation should be applied, as shown in table 11.1. The first line of the table says
that if b2 ⊕ b1 and b2 ⊕ b0 are both zero, do nothing. The second lines says that if b2 = b1 but b2 = b0 , flip b0 so that it agrees with b2 and b1 , the majority. Similarly, if b2 = b1 and b2 = b0 ,
flip b1 . Finally, if b2 = b1 and b2 = b0 , then b1 and b0 must agree, so flip b2 to make it agree with the majority. No matter what happened previously, this procedure results in a codeword.
However, if more than one error has occurred it will correct to the wrong word. For example, if the original string was 000 and two bit-flip errors occur, one on the first qubit and one on the third,
the resulting string, 101, will be “corrected" to 111 under this procedure. The [3, 1] repetition code can correct only single bit-flip errors. More powerful codes, such as the [n, 1] repetition
codes that encode one bit in n bits and decode by majority rules, can correct more errors. Both classical and quantum error correction spread the information we want to protect across several qubits
so that individual errors have less of an effect. The [3, 1] repetition code encodes 0 and 1 as the bit strings 000 and 111 respectively. In the quantum setting, let CBF be the subspace spanned by {|
000, |111}. This quantum code encodes the state |0 in the state |000 and |1 in the state |111. Linearity of the code and these relations define a general encoding cBF of single-qubit Table 11.1
Syndrome and corresponding error correcting transformations for the classical [3, 1] repetition code. b 2 ⊕ b1
b2 ⊕ b0
Error correcting transformation
identity flip b0 flip b1 flip b2
11 Quantum Error Correction
states into the subspace CBF of the state space for a three-qubit system: cBF :
|0 ⊗ |00 → |000 |1 ⊗ |00 → |111,
˜ so a|0 + b|1 maps to a|000 + b|111. In general, for a quantum code, we use the notation |0 ˜ = |000 and |1 ˜ = |111. for the encoding of |0 and likewise for other states. For this code, |0 ˜ + b|1
˜ = a|000 + b|111 is a two-dimensional vector space, so it may The set of states a|0 be considered a qubit in its own right. It is called the logical qubit to distinguish it from the three
computation qubits whose tensor product forms the entire eight-dimensional code space. States such as |101 that are not logical qubit values are not legitimate computational states. Legitimate
states, the possible values of the logical qubits, are called codewords. On a logical qubit a|000 + b|111, single bit-flip errors no longer take legitimate computational states to legitimate
computational states, but to states that are not codewords. For example, a bit-flip error on the first qubit results in the state a|100 + b|011, which is not a codeword because it is not in CBF . The
goal of an error correction scheme is to detect non-codeword states and transform them back to codewords. To detect an error, we compute the XOR of the first and second qubits into one ancilla qubit,
and the XOR of the first and third qubits into another. More formally, UBF : |x2 , x1 , x0 , 0, 0 → |x2 , x1 , x0 , x2 ⊕ x1 , x2 ⊕ x0 . The transformation UBF is called the syndrome extraction
operator and has quantum circuit
x2 x1 x0 0
a0 .
The ancilla qubits are then measured in the standard basis, and the error syndrome is obtained. The use of the syndrome parallels that of the classical [3, 1] repetition code. In addition to
correcting all single bit-flip errors, the code must not corrupt correct states, so for convenience we also consider I ⊗ I ⊗ I as an “error" we can correct. The information we gain from measuring the
ancilla enables us to choose the right transformation to apply to correct the error. Since X = X−1 , the correcting transformation in this case is the same as the error transformation that occurred.
The following table gives the transformation to apply given the measurement of the ancilla:
11.1 Three Simple Examples of Quantum Error Correcting Codes
Bit flipped
Error correction
none 0 1 2
|00 |11 |10 |01
none X2 = I ⊗ I ⊗ X X1 = I ⊗ X ⊗ I X0 = X ⊗ I ⊗ I.
Because of its close parallel with the classical [3, 1] code, it is not surprising that this procedure ˜ = |000 and |1 ˜ = |111. corrects any single bit-flip errors on the encoded standard basis
states |0 In addition, it corrects single bit-flip errors on superpositions of codewords. Example 11.1.1 Correcting a bit-flip error on a superposition. A general superposition |ψ =
a|0 + b|1 is encoded as ˜ + b|1 ˜ = a|000 + b|111. ˜ = a|0 |ψ ˜ is subject to the single bit-flip error X2 = X ⊗ I ⊗ I , resulting in Suppose |ψ ˜ = a|100 + b|011. X2 |ψ ˜ ⊗ |00 results in the state
Applying the syndrome extraction operator UBF to X2 |ψ ˜ ⊗ |00) = a|100|11 + b|011|11 UBF ((X2 |ψ) = (a|100 + b|011)|11 Measuring the two ancilla qubits yields |11, and the state is now (a|100 + b|
011) ⊗ |11. The error can be removed by applying the inverse error operator X2 , corresponding to the measured syndrome |11, to the first three qubits. Doing so reconstructs the original encoded
state ˜ + b|1 ˜ = a|000 + b|111). |ψ = a|0 The intuition behind why this procedure does not irreparably disturb the quantum state, even though it includes measurement, is that measurement of the
ancilla by the syndrome extraction operator tells us nothing about individual computational qubit states, only about what errors ˜ + b|1, ˜ the result of occurred. If the syndrome extraction operator
is applied to a codeword a|0 the measurement of the ancilla will be the syndrome 00 regardless of whether the codeword is ˜ |1, ˜ or some superposition of the two. Similarly, if error X2 = X ⊗ I ⊗ I
has occurred, the |0, syndrome will be in state |11 regardless of whether the computational qubits are in state |100, |011, or a superposition of the two. Thus measuring the ancilla qubits gives no
11 Quantum Error Correction
about the states of the computational qubits, but it does give information about what error has occurred. Measuring the ancilla qubits gives information about the error without disturbing the
computation, even when the initial state is a superposition a|000 + b|111. Unlike in the classical case, linear combinations of quantum errors are also possible. This same procedure also corrects
linear combinations of bit-flip errors. Example 11.1.2 Correcting a linear combination of bit-flip errors. Suppose the state |0 has been
˜ = |000 and an error E = αX ⊗ I ⊗ I + βI ⊗ X ⊗ I , a linear combination of the encoded as |0 two single bit-flip errors X2 and X1 , occurs, yielding
˜ = α|100 + β|010. E|0 ˜ ⊗ |00 results in the state Applying the syndrome extraction operator UBF to (E|0) ˜ ⊗ |00) = α|100|11 + β|010|10. UBF ((E|0) Measuring the two auxiliary qubits of this state
yields either |11 or |10. If the measurement produces the former, the state is now |100. The measurement has the almost magical effect of causing all but one summand of the error to disappear. The
remaining part of the error can be removed by applying the inverse error operator X2 = X ⊗ I ⊗ I , corresponding to the mea˜ = |000. If instead the sured syndrome |11. Doing so reconstructs the
original encoded state |0 syndrome measurement yields |10, we would apply X1 to |010 to recover the original state ˜ = |000. |0 While linear combinations of single bit-flip errors can be corrected in
this way, multiple bit-flip errors cannot be corrected by this code. The distinction between linear combinations of single bit-flip errors and multiple bit-flip errors is that in the former case any
term in the superposition representing a computational state contains only one error, but in the second case a single term may contain multiple errors that will be misinterpreted by the syndrome. In
the classical case, the [3, 1] code corrects all possible single bit errors. The quantum code CBF , while based on the [3, 1] code, does not correct all single-qubit errors. In the classical case,
bit flips are the only possible errors; in the quantum case, there is an infinite continuum of possible single-qubit errors. The code CBF does not even detect, let alone correct, phase errors.
Example 11.1.3 Undetected phase error. Suppose the quantum state |+, encoded as
1 ˜ = √ (|000 + |111), |+ 2
11.1 Three Simple Examples of Quantum Error Correcting Codes
˜ becomes the error state is subjected to a phase error E = Z ⊗ I ⊗ I . The state |+ 1 ˜ = √ (|000 − |111). E|+ 2 ˜ ˜ The syndrome extraction operator UBF applied to E|+|00 results in E|+|00, so no
error is detected, let alone corrected. It is easy to construct a code that corrects all single-qubit phase-flip errors, but does not correct single-qubit bit-flip errors. The next section describes
such a code. To obtain a code that corrects all single-qubit errors requires more cleverness. It turns out that, by carefully combining codes that correct bit-flip and phase-flip errors, a code
correcting all single-qubit errors can be constructed. Such a code is given in section 11.1.3. 11.1.2 A Code for Single-Qubit Phase-Flip Errors
Consider the three single-qubit phase-flip errors Z2 , Z1 , Z0 of a three-qubit system, where {Z2 = Z ⊗ I ⊗ I, Z1 = I ⊗ Z ⊗ I, Z0 = I ⊗ I ⊗ Z}. Phase-flip errors, Zi , in the standard basis are
bit-flip errors X = H ZH in the Hadamard basis {|+, |−}, and vice versa. This observation suggests that appropriate modifications to the bit-flip code CBF of section 11.1.1 will result in a code CP F
that corrects phase-flip errors instead. To obtain the logical qubits for the code CP F , apply the Walsh-Hadamard transformation W (3) = ˜ = |+ + + H ⊗ H ⊗ H to the logical qubits of the code CBF ;
the logical qubits for CP F are |0 ˜ = |− − −. and |1 The phase-flip error Z2 sends |+ + + to |− + + and |− − − to |+ − −. To detect such errors, the syndrome extraction operator UP F for CP F can be
obtained from UBF by changing basis from the standard basis to the Hadamard basis. Since, in the Hadamard basis, phase flips appear as bit flips, applying UBF from code CBF detects the error. Once
the syndrome has been obtained by measuring the ancilla qubits in the standard basis, the error can be corrected by applying the bit-flip operator corresponding to the syndrome for code CBF and then
applying W to change back to the original basis. Instead, because H X = ZH , the error may be corrected by first applying W , and then the appropriate error correction transformation from the
following table: Bit shifted
Error correction
none 0 1 2
|00 |11 |10 |01
none Z2 = Z ⊗ I ⊗ I Z1 = I ⊗ Z ⊗ I Z0 = I ⊗ I ⊗ Z.
11 Quantum Error Correction
Thus UP F = W UBF W , with implementation
a1 a0 The code CP F corrects all single-qubit relative phase errors, not just Z, because any single-qubit phase error is a linear combination of Z and I up to an irrelevant global phase factor: φ φ 1
0 i φ2 I − i sin Z . = e cos 0 eiφ 2 2 The code CP F does not correct bit-flip errors, let alone general single-qubit errors. 11.1.3 A Code for All Single-Qubit Errors
Section 11.2.11 shows that a quantum error correcting code C that can correct all Xi and all Zi errors can also correct all Yi errors. Section 11.2.9 shows that any superposition (linear combination)
of correctable errors is correctable. Section 11.2.9 also shows that the Pauli errors I , X, Y , and Z form a basis for all single-qubit errors. So if we can design a code that corrects all Xi and Zi
errors, the code will actually correct all single-qubit errors. To construct such a code, it is natural to try to combine CBF and CP F . First encoding a qubit using CP F and then encoding each
resulting qubit using CBF leads to the nine-qubit code ˜ = √1 (|000 + |111) ⊗ (|000 + |111) ⊗ (|000 + |111), |0 → |0 8 ˜ = √1 (|000 − |111) ⊗ (|000 − |111) ⊗ (|000 − |111), |1 → |1 8 known as Shor’s
nine-qubit code. For convenience, we often write these states as ˜ = √1 (|000 + |111)⊗3 |0 → |0 8 ˜ = √1 (|000 − |111⊗3 . |1 → |1 8
11.2 Framework for Quantum Error Correcting Codes
To perform error correction, first use UBF on each block of three qubits to correct for possible X errors in each block separately. At this point the bit values of the state are correct — in any term
of the superposition, the three qubits of each block now have the same bit value — but the relative phases may be wrong. To correct phase errors, a variant of UP F is used, essentially an expansion
of UP F to nine qubits instead of three. More details are given in section 11.3. The term code, in both the classical and quantum setting, refers to the set of codewords. The mapping of the original
strings or states into the codewords is not of great importance; a different mapping allows exactly the same set of errors to be corrected. Moreover, the encoding ˜ + b|1 ˜ should be viewed map is
not generally implemented. The mapping a|0 + b|1 to a|0 as an abstract mapping; we do not start with qubits of the form a|0 + b|1 and then encode them. Rather, we define the logical qubits of a
system in this way, and we design gates and interpret measurements in terms of these logical qubits. For example, for Shor’s code, instead of computing directly on n single qubits, each qubit is
encoded in 9 qubits, totaling 9n qubits altogether. All quantum computation takes place on the n logical qubits, each consisting of nine qubits. It is on the 2n -dimensional subspace containing the
logical qubits, not on the full 29n dimensional space, that we compute. Error correction returns states to this subspace, and it is on this 2n -dimensional subspace, not on the full 29n -dimensional
space, that we need a universal set of gates. Sections 11.2.8 and 11.4.4, and then much of chapter 12, concern the design of such gates. Later sections describe codes that correct multiple-qubit
errors and codes that correct all singlequbit errors using fewer than nine qubits. Before discussing those codes, we need to develop more systematic ways of thinking about and describing codes. 11.2
Framework for Quantum Error Correcting Codes
As section 10.4.4 explained, errors on the computational system due to interactions with the environment are linear, but not necessarily unitary. Because unitary transformations are invertible, if we
can figure out what unitary error has occurred, we can correct it. But general errors may not have inverse transformations, so if such an error occurs, even if we have been able to determine which
error has occurred, it is not obvious how to correct it. At first glance we might guess that such errors cannot be corrected without access and control over the part of the environment that
interacted with the system. It is true that these errors cannot be corrected by applying unitary quantum transformations to the computational system alone. By measuring the system, however, or by
entangling the system with auxiliary qubits, nonunitary errors can be corrected. When a system has been subjected to decoherence under which it undergoes a nonunitary transformation, information
about the original state of the system has been lost. For example, decoherence could swap a qubit in the environment with a qubit of the computational system, resulting in a complete loss of
information about that qubit, except what can be deduced from
11 Quantum Error Correction
other qubits. If the qubit’s state was completely uncorrelated with the other qubits’ states, all information about the state of that qubit is lost. The idea behind any sort of scheme for protecting
information stored in quantum states is to embed the quantum states we care about in highly correlated states of a larger quantum system. To correct against general quantum errors, this correlation
must be quantum; these states must be highly entangled states. The art of designing quantum error correcting codes is to choose the embedding of k logical qubits into an n-qubit system in such a way
that measurements can correct the most common errors to which the system is likely to be subjected. Generally, this embedding is taken to be linear: it is given by a linear map between the 2k
-dimensional vector space of the logical system and the 2n -dimensional vector space of the larger system. We consider only linear codes here. Quantum codes have been designed for many types of
errors. The most frequently considered family of errors consists of all errors on t or fewer qubits. We concentrate on this family of errors after presenting a general framework for quantum error
correction. As physical implementations of quantum computers are developed it will be possible to determine to which sorts of errors a given physical device is most subject and to design error
correcting codes or other forms of error protection to guard most efficiently and effectively against those errors. Linear quantum codes are closely related to classical block codes. For each concept
in quantum error correction, we first review related concepts from classical codes. For this reason, this section alternates between short subsections describing classical error correction and
subsections describing quantum error correction. This exposition is most suitable for readers who have some familiarity with classical error correcting codes; readers new to error correcting codes
may wish to read all of the classical sections first to get a feel for the general strategies employed in error correction. Both classical and quantum error correction rely heavily on group theory.
Boxes containing brief reviews of groups, subgroups, and Abelian groups can be found in section 8.6.1 and section 8.6.2. A few more boxes are interspersed throughout this chapter. Readers new to
group theory will need to study the relevant sections of a text devoted to group theory. Suggested texts are given in the reference section at the end of this chapter. This section describes a
general nonconstructive framework for linear quantum error correcting codes, specifying properties that all linear quantum error correcting codes must satisfy. This framework pays no attention to
whether or how a code can be efficiently implemented. This issue is crucial to whether the code is useful or not and will be dealt with more carefully later in this chapter and in chapter 12. 11.2.1
Classical Error Correcting Codes
A classical [n, k] block code C is a size 2k subset of the 2n possible n-bit strings. The set of n-bit strings is a group, written Zn2 , under bitwise addition modulo 2. If the 2k size subset C is a
subgroup of Zn2 , then the code is said to be an [n, k] linear block code. When a code is used,
11.2 Framework for Quantum Error Correcting Codes
Box 11.1 Group Homomorphisms
A homomorphism f from a group G to a group H is a map f : G → H that satisfies, for any elements g1 and g2 of G, f (g1 ◦ g2 ) = f (g1 ) ◦ f (g2 ). The product used on the right-hand side is the
product for group G, while on the left-hand side it is the product for group H . An isomorphism from group G to group H is a homomorphism that is both one-to-one and onto. If there is an isomorphism
between H and G, H and G are isomorphic, written H ∼ = G. The kernel of a homomorphism f : G → H is the set of elements of G that are mapped to the identity element eH of H .
a specific encoding function c : Zk2 → Zn2 is chosen, where c is an isomorphism between Zk2 , the message space, the set of all k-bit strings, and C, the code space: c : Zk2 → C ⊂ Zn2 . In general,
for any code C, there are many possible encoding functions. It may seem odd that the code is defined purely in terms of the subgroup C, not in terms of an encoding function. The reason for this
convention is that no matter which encoding function is chosen, exactly the same set of errors can be corrected. To encode a length mk message, each of the m blocks of length k are separately encoded
using c to obtain a ciphertext of length mn. For this reason these codes are called block codes. The encoding function c can be represented by an n × k generator matrix G that takes a message word,
an element of Zk2 viewed as a length k column vector, to a codeword, an element of C ⊂ Zn2 : the generator matrix G multiplied with a message word gives the corresponding codeword. The k columns of G
form a linearly independent set of binary words. Example 11.2.1 The [3, 1] repetition code. The [3, 1] repetition code is defined to be the subset C = {000, 111} of all 3-bit strings. This subset is
a subgroup of Z32 under bitwise addition modulo 2. The standard encoding function sends
0 → 000 1 → 111 and the associated generator matrix is ⎛ ⎞ 1 G = ⎝ 1 ⎠, 1
11 Quantum Error Correction
which acts on bit strings viewed as column vectors: ⎛ ⎞ ⎛ ⎞ 0 1 ⎝ 0 ⎠ = ⎝ 1 ⎠ (0) 0 1 ⎛ ⎞ ⎛ ⎞ 1 1 ⎝ 1 ⎠ = ⎝ 1 ⎠ (1). 1 1 A more interesting code is the [7, 4] Hamming code. A widely used quantum
code, the Steane code, is built using special properties of the [7, 4] Hamming code. The Steane code will be introduced in section 11.3.3 and is a member of some major code families, including CSS
codes and stabilizer codes, which are the subjects of sections 11.3 and 11.4 respectively. Example 11.2.2 The [7, 4] Hamming code. The [7, 4] Hamming code C encodes 4-bit strings,
elements of Z42 , in 7-bit strings, elements of Z72 . The code C is the subgroup of Z72 generated by {1110100, 1101010, 1011001, 1111111}. The reasoning behind this construction will become clear in
section 11.2.5. One encoding function for C sends 1000 → 1110100 0100 → 1101010 0010 → 1011001 0001 → 1111111 These relations, this encoding is ⎛ 1 1 ⎜ 1 1 G = ⎜ ⎝ 1 0 1 1
together with linearity, fully define the encoding. The generator matrix G for
⎞T 0 0 ⎟ ⎟ . 1 ⎠ 1
An alternative encoding function sends 1000 → 1000111 0100 → 0100110 0010 → 0010101 0001 → 0001011
11.2 Framework for Quantum Error Correcting Codes
with generator matrix G ⎛ ⎞ 1 0 0 0 ⎜ 0 1 0 0 ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ 0 0 1 0 ⎟ ⎜ ⎟ ⎟ G=⎜ ⎜ 0 0 0 1 ⎟. ⎜ 1 1 1 0 ⎟ ⎜ ⎟ ⎜ ⎟ ⎝ 1 1 0 1 ⎠ 1 0 1 1
11.2.2 Quantum Error Correcting Codes
A [[n, k]] quantum block code C is a 2k -dimensional subspace C of the vector space V associated with the state space of an n-qubit system. The double square brackets are used to distinguish [[n, k]]
quantum codes from [n, k] classical codes. View W , the k-qubit message space, as the subspace of V that has as basis the subset of the standard basis consisting of all strings in which the first n −
k elements are 0. Any unitary transformation UC : V → V that takes W to C is a possible encoding operator for code C. In most cases we do not care how UC behaves on states outside W , so frequently
when we define an encoding operator UC we will specify only its behavior on W and not on all of V . Elements |w ∈ W are called message words, and elements of C are called codewords in analogy with
the classical case. This terminology should not be taken too literally; neither message words in W nor codewords in C are bit strings, but rather quantum states of k and n qubits, respectively. Just
as in the classical case, it is the subspace C, not the encoding function, that defines the code; the same set of errors can be corrected no matter which encoding function is used. Given an encoding
function and any state represented by |w ∈ W , the image UC (|w) = |w ˜ of |w is an n-qubit state referred to as the logical k-qubit state corresponding to |w. Example 11.2.3 The bit-flip code
revisited. The code C is the subspace spanned by {|000, |111}. The standard encoding operator is
UC : |0 → |000 |1 → |111. ˜ = |000 and |1 ˜ = |111. So |0 Strictly speaking, we should write UC : |000 → |000 |001 → |111
11 Quantum Error Correction
and define UC on the rest of V , but we will generally define encoding functions in this way, since we do not care how the encoding behaves on states outside W , and the function definition is easier
to read if we leave off the initial prefix string of zeros. Example 11.2.4 The Shor code revisited. The Shor code is a [[9, 1]] code, where C is the two-
dimensional subspace spanned by 1 √ (|000 + |111)⊗3 8 and 1 √ (|000 − |111⊗3 . 8 The standard encoding operator used with this code sends ˜ = √1 (|000 + |111)⊗3 |0 → |0 8 ˜ = √1 (|000 − |111⊗3 , |1 →
|1 8 but any other function mapping |0 and |1 to two orthogonal vectors within the subspace C would also be a legitimate encoding function. In practice it is not necessary to implement the encoding
and decoding functions. At the beginning of a computation we simply construct the valid starting state, and at the end we interpret the classical information obtained from measurement to deduce
information about the final logical state. Sections 11.2.8 and 11.4.4, and then much of chapter 12, discuss how to compute directly on the encoded data. 11.2.3 Correctable Sets of Errors for
Classical Codes
A classical error may be viewed as an n-bit string e ∈ Zn2 that acts on code words through bitwise addition ⊕, flipping a subset of the code bits. Any code C corrects some sets of errors and not
others. A set of errors E is said to be correctable by code C if, for a w in Zn2 , there is at most one error that could result in w: for all e1 , e2 ∈ E and c1 , c2 ∈ C, e1 ⊕ c1 = e2 ⊕ c2 .
This condition is called the disjointness condition for classical error correction. Usually E is taken to be a group under bitwise addition modulo 2, so E contains the identity element, the non-error
00 · · · 0. The disjointness condition for e1 = 00 · · · 0 means that a correctable error
11.2 Framework for Quantum Error Correcting Codes
cannot take a codeword to a different codeword. For any code C, there are many possible sets of correctable errors. Some correctable sets of errors are better than others from a practical point of
view. Example 11.2.5 Correctable error sets for the [3, 1] repetition code.
The set E = {000, 001, 010, 100} is a correctable set of errors for the [3, 1] repetition code C. The set E = {000, 011, 101, 110} is also a correctable set of of errors for C. The union of E and E
is not a correctable set for C.
11.2.4 Correctable Sets of Errors for Quantum Codes
For classical error correction, it suffices to consider bit-flip errors, a simple discrete set of errors. For quantum error correction, neither the encoded states nor the possible errors form a
discrete set. For this reason, specifying correctable sets of errors for a quantum code C is more complicated than for a classical code. Fortunately, it is simpler than we might at first fear. Let BC
= {|c1 , . . . , |ck } be a (orthonormal) basis for C. A finite set E = {E1 , E2 , . . . , EL } of unitary transformations Ei : V → V is said to be a correctable set of errors for code C if there
exists a matrix M with entries mij such that †
ca |Ei Ej |cb = mij δab
for all |ca , |cb ∈ C and Ei , Ej ∈ E. The next few paragraphs clarify the meaning and motivation for this definition. Just as in the classical case, there are many possible sets of correctable
errors for a code C. Furthermore, there is no maximal correctable set, but some sets are more useful than others from a practical point of view. To perform error correction, one set of correctable
errors is chosen, and the error correction procedures are designed with respect to that set. In the quantum case, the set of errors corrected by these procedures is much larger than the original
correctable set E; section 11.2.9 shows that if there is a procedure for a code C that corrects a set of errors E = {E1 , E2 , . . . , EL }, then any superposition or mixture of errors in E can also
be corrected by code C. It is this property that enables the correction of the general errors, discussed in section 10.4, that can be modeled as probabilistic mixtures of linear transformations.
Since unitary errors E are easily corrected by applying the inverse transform E † , the errors of a correctable set have a clear error correction procedure once the error is known. The next two
paragraphs give intuitive justification for the Correctable Error Set Condition (equation 11.2). Just as in the classical case, there is no hope of correctly recovering from a set of errors E that
contains a pair of error transformations that take two different codewords to the same state. The quantum case has a stronger requirement along these lines: any two distinct errors in E must take
orthogonal codewords to orthogonal states. The reason for this requirement is that in order to
11 Quantum Error Correction
determine which error is likely to have occurred, we need to make measurements, and two states can be distinguished with certainty if and only if the two states are orthogonal. This condition
guarantees that the images of two different codewords under errors in E are distinguishable if the original codewords are distinguishable. This condition is written
c|Ei Ej |c = 0 †
for all Ei , Ej ∈ E, and all |c, |c ∈ C such that c|c = 0. This orthogonality condition is the analog of the disjointness condition, equation. 11.1, for classical error correction. In the quantum
case, in order for error correction not to destroy the quantum computation, an additional condition is needed. Measurements made to determine the error must not give any information about the logical
state, since otherwise superpositions may be destroyed, making the quantum computation useless. For this reason, we require †
ca |Ei Ej |ca = cb |Ei Ej |cb
for all |ca , |cb ∈ C and Ei , Ej ∈ E. This requirement means that for every pair of indices i and j , there is a value mij such that †
ca |Ei Ej |ca = mij . Putting conditions 11.3 and 11.4 together results in the original equation 11.2: †
ca |Ei Ej |cb = mij δab for all |ca , |cb ∈ C and Ei , Ej ∈ E, where a significant part of the meaning of this formula is that mij is independent of a and b. Condition 11.2 holds if †
ca |Ei Ej |cb = 0
for all |ca , |cb ∈ C and Ei , Ej ∈ E such that i = j , but this condition is stronger than necessary. If two different errors E1 and E2 take a state |ψ to the same state |ψ , no matter which error †
† occurred, applying E1 (or equally well E2 ) corrects the error. Condition 11.5 holds for many quantum codes, but not for some important codes. A code that does not satisfy this condition is called
a degenerate code for error set E. Shor’s code is degenerate, for example: a relative phase error acting on the first qubit will have the same effect as a relative phase error acting on the second
qubit. The existence of degenerate codes complicates matters. There is no classical analog for degenerate quantum codes. The unitarity of the Ei means that Ei C has dimension 2k for all errors Ei .
Since there can be at most 2n−k mutually orthogonal subspaces of dimension 2k in a space of dimension 2n , the maximum size of a set E of correctable errors for a nondegenerate code is 2n−k . For
degenerate codes, the size of a maximal set of correctable errors can be greater than 2n−k .
11.2 Framework for Quantum Error Correcting Codes
Example 11.2.6 The bit-flip code revisited. The set of errors E = {Eij } with
E00 = I ⊗ I ⊗ I, E01 = X ⊗ I ⊗ I, E10 = I ⊗ X ⊗ I, E11 = I ⊗ I ⊗ X is a correctable error set for the bit-flip code. The set of errors E = {Eij } with = I ⊗ I ⊗ I, E01 = I ⊗ X ⊗ X, E10 = X ⊗ I ⊗ X,
E11 = X⊗X⊗I E00
is a different correctable error set for the bit-flip code. In this case, the code corrects all two-qubit flip errors, but none of the single bit-flip errors. Of course, this set of correctable
errors is of little practical value, since single bit-flip errors are generally more likely than pairs of bit-flip errors. But it is conceivable that in certain physical implementations, bit-flip
errors are more likely to appear in pairs.
11.2.5 Correcting Errors Using Classical Codes
Let C be a classical [n, k] linear block code, and suppose E is a correctable set of errors for C. Suppose w = e ⊕ c for some codeword c ∈ C and error e ∈ E. We wish to correct w to c. To find e and
c, it is helpful to consider cosets of the code C. This paragraph shows that there is a unique error associated with each coset. Let H be the set of cosets of C in Zn2 . An error e ∈ E changes a code
word c into e ⊕ c, an element of some coset of C. Given errors e1 = e2 and codewords c1 and c2 , by disjointness condition 11.1, e1 ⊕ c1 and e2 ⊕ c2 are in two different cosets. To see this, suppose
e1 ⊕ c1 and e2 ⊕ c2 were in the same coset. Then there would exist a c3 ∈ C such that e1 ⊕ c1 ⊕ c3 = e2 ⊕ c2 . Box 11.2 Cosets
Given a subgroup H < G, for each a ∈ G, the set aH = {ah|h ∈ H } is called a (left) coset of H in G. (Right cosets are analogously defined, but we do not need to consider them here, so we will simply
refer to left cosets as cosets.) For a and b in G, either aH = bH or aH ∩ bH = ∅, so the cosets partition G. Thus, the order of a subgroup must divide the order of the group and, similarly, the
number of distinct cosets must divide the order of the group. The index of H in G is the number of distinct cosets of H in G, and is denoted by [G : H ]. For example, let G = Zn , and let H = mZn be
the set of multiples of m for some integer m dividing n. The order of G is n, the order of H is n/m, and the number of distinct cosets is [G : H ] = |G|/|H | = m. If K < H < G, then [G : K] = |G|/|K|
= (|G|/|H |)(|H |/|K|) = [G : H ][H : K].
11 Quantum Error Correction
But c1 ⊕ c3 is in C, which violates the disjointness condition 11.1 that says that two distinct correctable errors cannot take two codewords to the same word. Thus, knowing to which coset the word e
⊕ c belongs, tells us which error e has occurred. Let us make this more precise. Because Zn2 is Abelian, the set of all cosets forms a group, H . It is of size 2n−k . Since H is Abelian and
nontrivial, and all elements of H have order 2, H is isomorphic to Zn−k 2 . Let be an isomorphism. The map σ : H → Zn−k 2 ∼ h : Zn2 → Zn−k =H 2 w → σ (w ⊕ C) sends all elements of C to the zero
element of Zn−k 2 ; the kernel of h is C. The element h(w) characterizes each coset since h(w) = h(w ) if and only if w and w are in the same coset. By the previous paragraph, there is a unique error
e ∈ E associated with this coset. Since h(w) characterizes the coset, it also characterizes this error. For this reason, h(w) is called the error syndrome, or simply syndrome. More concretely, h can
be realized by an (n − k) × n matrix P . To construct a concrete P , find n − k linearly independent elements pi of Zn2 such that pi · c = 0 mod 2 for all c ∈ C, and take these as the rows of the
matrix: ⎛
p1T ⎜ P = ⎝ ...
⎞ ⎟ ⎠
T pn−k
For a given code C, there are many possible matrices P (just as there are many possible isomorphisms σ ). The matrix P , acting on w ∈ Zn2 viewed as a column vector, produces an n − k length binary
column vector P w, the syndrome, that characterizes the coset of C containing w. Each of these n − k values is the inner product (mod 2) of w with a row of P . For this reason, the rows pi are called
parity checks and P is called a parity check matrix for code C. The parity check matrix P distinguishes between distinct correctable errors ei and ej since P (ei ) = P (ej ). If G is a generator
matrix for the codewords of C, and P is an arbitrary (n − k) × n matrix, the (n − k) × k product matrix P G is 0 if and only if P is a parity check matrix for C. The code C is both the image of Zk2
in Zn2 under G, and the kernel of P , the set of elements of Zn2 sent to 00 · · · 0 under P . Hamming codes are among the simplest classical codes and are used as the basis for many quantum codes.
There is a Hamming code Cn for every integer n ≥ 2. A parity check matrix for Hamming codes has columns consisting of all the non-zero n-bit strings. Since the parity check matrix for the Hamming
code Cn is a n × (2n − 1) matrix, the generator matrix for Cn is therefore a (2n − 1) × (2n − n − 1) matrix, and the Hamming code Cn is a [2n − 1, 2n − n − 1] code. All Hamming codes correct single
bit-flip errors.
11.2 Framework for Quantum Error Correcting Codes
Example 11.2.7 The Hamming code C2 . The [3, 1] repetition code is also the Hamming code C2 ,
the code with parity check matrix 0 1 1 P = . 1 0 1 A different parity check matrix for the same code is 1 1 0 P = . 1 0 1 The matrix P has form (A|I ). By exercise 11.2, ( AI ) is a generator matrix
for the code. The generator matrix obtained in this way from P is ⎛ ⎞ 1 G = ⎝ 1 ⎠. 1 The code C2 is called a repetition code, since 0 → 000 and 1 → 111.
Example 11.2.8 The Hamming code C3 . The Hamming code C3 is a [7, 4] code. Section 11.3.3 uses C3 to define the quantum Steane code. A parity check matrix for the [7, 4] Hamming code is ⎛ ⎞ 0 0 0 1 1
1 1 P = ⎝ 0 1 1 0 0 1 1 ⎠; 1 0 1 0 1 0 1
its columns are exactly the seven non-zero 3-bit strings. Our next task is to find a generator matrix G for C. Since each row of P contains an even number of 1s, each row is orthogonal to itself.
Furthermore, these elements are orthogonal to each other, that is, P P T = 0, so we may take as the first three columns of G the transposes of the rows of P . We need to find one other vector
orthogonal to and linearly independent of these columns. The vector T 1 1 1 1 1 1 1 satisfies both conditions. So a generator matrix for the [7, 4] Hamming code is ⎛ ⎞T 0 0 0 1 1 1 1 ⎜ 0 1 1 0 0 1 1
⎟ ⎟ G = ⎜ ⎝ 1 0 1 0 1 0 1 ⎠ . 1 1 1 1 1 1 1
11 Quantum Error Correction
Alternatively, the [7, 4] Hamming code can be defined in terms of a more convenient parity check matrix of the form (A|I ), ⎛ ⎞ 1 1 1 0 1 0 0 P = ⎝ 1 1 0 1 0 1 0 ⎠. 1 0 1 1 0 0 1 By exercise 11.2, is
( AI ), ⎛ 1 0 0 ⎜ 0 1 0 ⎜ ⎜ 0 0 1 ⎜ ⎜ G=⎜ 0 0 0 ⎜ ⎜ 1 1 1 ⎜ ⎝ 1 1 0 1 0 1
a generator matrix corresponding to a parity check matrix of this form 0 0 0 1 0 1 1
⎞ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟. ⎟ ⎟ ⎟ ⎠
11.2.6 Diagnosing and Correcting Errors Using Quantum Codes
This section describes a procedure for correcting errors handled by nondegenerate quantum codes. Let C be an [[n, k]] quantum code that is nondegenerate with respect to a correctable error set E =
{Ei }, where 0 ≤ i < M. Suppose |w = Es |v for some Es ∈ E and |v ∈ C. Because C is nondegenerate with respect to E, the subspaces Ei C and Ej C are orthogonal for all i = j , so Es |v is the only
way to obtain |w from a codeword in C and an error in E: the elements Es and |v are unique. Thus, if we can determine in which subspace Es C the state |w lives, † from among the M subspaces {Ei C},
we can correct the error by applying Es to |w. To make this determination, we must measure the state |w. The standard model of quantum computation allows only single-qubit measurements in the
standard basis. Any other measurement can be carried out by computing into ancilla qubits and measuring each of these in the standard basis, but only some measurements can be efficiently carried out
in this way. This section presents a general framework. Later sections of this chapter and the next consider implementation issues with respect to specific codes. The aim of the measurement is to
determine in which error subspace the state |w lies. Let Wi = Ei C, and W =
M−1 +
Wi .
Let W ⊥ be the possibly empty subspace of the computational space V orthogonal to W ; vectors in W ⊥ are orthogonal to all codewords and also to all states that are images of codewords under a
correctable error Ei ∈ E. For notational convenience, define WM = W ⊥ . Since |w is the
11.2 Framework for Quantum Error Correcting Codes
result of an error Ei applied to a codeword, by definition of WM , |w does not lie in WM . Since the Wi are mutually orthogonal, there is an observable O with eigensubspaces exactly the Wi . Let Pi
be the projector onto the subspace Wi . Let m = log2 M, and let UP be a unitary operator on n + m qubits such that UP : |w|0 →
bj |wj |j ,
j =0
where |w = M−1 j =0 bj |wj is written in terms of its components bj |wj = Pj |w. Measuring the m auxiliary qubits in the standard basis gives the error syndrome, the subspace index j . By definition
of WM , the index M cannot occur. After measurement, the state of the first n qubits is † in the subspace Wj = Ej C, so applying the operator Ej corrects the error. The operator UP is called a
syndrome extraction operator since it plays a similar role to the syndrome in classical error correction. The notation UP is meant to suggest a unitary operator that plays the role of the parity
check matrix P in classical error correction. Since the labels for the subspaces can be arbitrarily chosen, many different unitary operators can serve as a syndrome extraction operator for a given
code C and error set E. Measuring a single qubit l of the m auxiliary qubits on its own corresponds to a binary observable with two 2n−1 -dimensional eigensubspaces, the subspace spanned by all of
the Wi for which the lth bit of the binary representation of its index i is 0, and the subspace spanned by all of the Wi for which the lth bit of the binary representation of its index i is 1. In
this way, the syndrome extraction operator can be viewed as a set of m observables. Example 11.2.9 The bit-flip code revisited. Consider the bit-flip code C and the set of correctable
errors E = {Eij } with E00 = I ⊗ I ⊗ I, E01 = X ⊗ I ⊗ I, E10 = I ⊗ X ⊗ I, E11 = I ⊗ I ⊗ X. More simply, E00 = I , E01 = X2 , E10 = X1 , and E11 = X0 , where Xi is the operator X applied to the i th
qubit. The orthogonal subspaces corresponding to this error set are W00 = E00 C, W01 = E01 C, W10 = E10 C and W11 = E11 C with bases B00 = {|000, |111}, B01 = {|100, |011}, B10 = {|010, |101} and B11
= {|001, |110}, respectively. The operator UP : |x2 , x1 , x0 , 0, 0 → |x2 , x1 , x0 , b1 = x1 ⊕ x0 , b0 = x2 ⊕ x0 serves as a syndrome extraction operator for C with error set E. Measuring bit b1 in
the standard basis distinguishes between the eigenspaces spanned by subspaces {W00 , W01 } and {W10 , W11 } respectively. Similarly, measuring b0 distinguishes between errors in the spaces spanned by
{W00 , W10 } and {W01 , W11 }. Measuring b1 and b0 as i and j projects the state into Wij = Eij C.
11 Quantum Error Correction
The error can be corrected by applying Eij . If, for example, measuring the ancilla b1 and b0 yields 0 and 1 respectively, we apply the transformation X2 . Measuring b0 (resp. b1 ) directly, without
the use of ancilla bits, can be done using the observable Z ⊗ I ⊗ Z (resp. I ⊗ Z ⊗ Z). Compare the classical parity check matrix 1 0 1 P = 0 1 1 for the [3, 1] code with the array Z I
I Z
Z , Z
where the factors of the two observables have been placed in the rows. In the classical case, the parity check matrix multiplied by a word will be 0 if the word is a codeword. At least one of the
rows of the parity check matrix when multiplied by a non-codeword will be non-zero. In the quantum case a codeword is in the +1-eigenspace for all the observables, and non-codewords are in the
−1-eigenspace for at least one of the observables. The stabilizer codes of section 11.4 exploit this connection. Before turning to another example, we use this example to illustrate an alternative to
the syndrome measurement. The use of ancilla qubits in quantum error correction to correct general errors is one of the most elegant and surprising aspects of quantum computing. By computing
information into ancilla qubits and measuring them, nonunitary errors can be converted to unitary errors. When the result of the measurement tells us which unitary error remains, we can correct it by
applying the inverse unitary operator. Alternatively, and equivalently, instead of measuring the ancilla after computing into them, a controlled operation from the ancilla qubits to the computational
qubits can correct the error. In general, for E = {Es }, Instead of measuring after applying UP to the computational system and the ancilla, apply the following controlled operation with the ancilla
as the control bits: Es† ⊗ |s s|. VP = s
In this way, errors can be corrected without measurement. Example 11.2.10 Bit-flip code CBF correction by controlled operations. After applying UBF
in example 11.1.2, instead of measuring, a controlled operation VP from the ancilla qubits to the computational qubits can be performed, one that applies each of the three error correction
transformations when the ancilla qubits are in the corresponding state: VP = I ⊗ |00 00| + X2 ⊗ |01 01| + X1 ⊗ |10 10| + X0 ⊗ |11 11|.
11.2 Framework for Quantum Error Correcting Codes
The circuit for this controlled operation is b2 b1 b0 a1 a0
Suppose an error E = αX2 + βX1 has occurred. Applying this circuit to the state ˜ ⊗ |00) = α|100|11 + β|010|10 UBF (E|0 results in α|000|11 + β|000|10 = |000(α|11 + β|10). We may wish to measure the
two ancilla qubits in order to transform them to |00 so that they can be reused in a later error correction step, but measurement is not required to achieve quantum error correction.
Example 11.2.11 The phase-flip code revisited. Recall that the relative phase code of section
11.1.2 is dual to the bit-flip code of example 11.2.9 through the transformation W = H ⊗ H ⊗ H . Applying W to all states and replacing all transformations T used in example 11.2.9 with W T W results
in an error correction procedure for the relative phase code. Since X = H ZH , the observables corresponding to the syndrome operator UP are X ⊗ I ⊗ X and I ⊗ X ⊗ X, which have corresponding array X
I X
X , X
which is related to the classical parity check matrix 1 0 1 P = . 0 1 1 In this case, errors can be corrected without measurement using VP = I ⊗ |00 00| + Z2 ⊗ |01 01| + Z1 ⊗ |10 10| + Z0 ⊗ |11 11|.
11 Quantum Error Correction
11.2.7 Quantum Error Correction Across Multiple Blocks
Just as the classical [n, k] block codes of section 11.2.1 encode length mk bit-strings as a length mn bit-string by encoding each of the m blocks of k bits, a quantum [[n, k]] code C encodes mk
logical qubits in mn computational qubits by encoding each of the m blocks of k logical bits using C. A logical superposition such as |ψ = αij (|wi ⊗ |wj ) i
is encoded as ˜ = |ψ αij (|ci ⊗ |cj ), i
where |ci = UC |wi and UC is an encoding function for C. Quantum block codes must be able to correct errors on such superpositions. Furthermore, if C can correct errors Ei ∈ E, then C applied
blockwise must be able to correct errors of the form Ei1 ⊗ · · · ⊗ Eim on the encoded state. The rest of this section illustrates, in the two-block case, quantum error correction on superpositions
and across multiple blocks. ˜ = i j αij (|ci ⊗ |cj ) were subject to error Ea ⊗ Eb , where Suppose the encoded state |ψ Ea and Eb are both correctable errors for code C. Applying the syndrome
extraction operator UP for C to each block separately, measuring the ancilla for each block, and applying the appropriate ˜ correcting operators will restore the state |ψ: ˜ ⊗ |0|0) = UP ⊗ UP ((Ea ⊗
Eb |ψ) αij (UP (Ea |ci |0) ⊗ (UP (Eb |cj |0)) ij
αij (Ea |ci |a ⊗ Eb |cj |b),
where we have reordered the qubits for clarity. Measurement of the two ancilla yields |a and |b respectively, with the computation qubits in state |φ = ij αij (Ea |ci ⊗ Eb |cj ). The syndrome †
|a|b indicates that the error can be corrected by applying Ea ⊗ Eb . Applying Ea ⊗ Eb does indeed correct the error: † ˜ Ea† ⊗ Eb |φ = αij (|ci ⊗ |cj ) = |ψ. i
11.2.8 Computing on Encoded Quantum States
For error correcting codes to be useful for quantum computation, we must still be able to perform computation on the states after they are encoded. Let C ⊂ V be an [[n, k]] quantum code and let UC be
an encoding function UC : W → C. In order to perform general computation on encoded states, for any unitary operator U : W → W , we must find an analogous unitary operator U˜ acting on the encoded
states, one that for all |w ∈ W sends UC (|w) to UC (U |w). Because
11.2 Framework for Quantum Error Correcting Codes
we do not care how U˜ behaves outside C, there are many unitary operators acting on V that have this property. For a given unitary operator U˜ , there are many ways to implement it in terms of basic
gates. Furthermore, for a given U : W → W with two logical analogs U˜ : V → V and U˜ : V → V , one of U˜ and U˜ may be more efficiently implementable than the other, and some implementations have
better robustness properties than others. One such operator can be constructed using the encoding operator. Let UC be the unitary † coding function that sends |w ⊗ |0 to |w. ˜ The transformation UC
sends a valid codeword |w˜ † to |w ⊗ |0. The operator U˜ = UC (U ⊗ I )UC acts as desired on the code space; U˜ is the logical equivalent to U on the encoded states. In general, however, this
construction yields a U˜ with poor † robustness properties: after applying UC , the state is unencoded, making it extremely vulnerable to any errors that occur during this time. Chapter 12 takes a
careful look at how logical operations are best implemented on encoded states. 11.2.9 Superpositions and Mixtures of Correctable Errors Are Correctable
Section 10.4 showed that general errors E can be modeled as probabilistic mixtures of linear transformations, and that these linear error transformations Ai are not necessarily unitary: E : ρ →
Ai ρAi .
This section shows that errors that are non-zero complex linear combinations of elements of a correctable error set E for a code C can be corrected by this code. The term set of correctable errors
refers to a set of errors the code can correct via a unitary transformation, but the set of errors the code corrects is much larger: all linear combinations of such errors. Measurement is used to
project a linear combination of errors onto one of the correctable errors, and it is also used to detect which error remains after measurement so that the corresponding unitary error transformation
can be applied. As in the classical case, there are many possible maximal sets of correctable errors for a given code, and some of these distinct maximal sets of correctable errors generate distinct
subspaces. Let error E = m mixture of errors, a linear combination of errors i=0 αi Ei be a probabilistic Ei from a correctable set E such that i |αi |2 = 1. The error E may or may not be unitary, so
we consider the general case and show that, if E takes a codeword |c, with density operator ρ = |c c|, to a mixed state ρ = EρE † , we can correct for the error. The mixed state ρ can be written ρ =
|αi |2 Ei |c c|Ei† . Since the Ei |c are mutually orthogonal and |αi |2 = 1, ρ has trace 1 and is a mixed state. Thus ρ is a probability distribution over the orthogonal pure states Ei |c. Consider
the observable O = i λi Pi , where the λi are distinct and Pi is the projector onto the subspace Ei C. Using the † † definitions in section 10.3, measurement with O results in the state Pi ρ Pi = Ei
|c c|Ei with
11 Quantum Error Correction
probability |αi | . Thus, after measurement, we have a pure state Ei |c. The measurement result, † λi , tells us in which subspace Ei C the state resides. Applying Ei corrects the state. 2
11.2.10 The Classical Independent Error Model
In both the quantum and classical case, a general error correction strategy consists of three parts: detecting non-codewords, determining the most likely error, and applying a transformation to
correct that error. Determining the most likely error requires an error model. A common family of error models for classical computation is the independent error model in which each bit has
probability p ≤ 1/2 of flipping. In this model, the chance of any of the single bit-flip errors 100 · · · 0, 010 · · · 0, . . . , 000 · · · 1 is p(1 − p)n−1 , the chance of the two-bit error 110 · ·
· 0 occurring is probability p2 (1 − p)n−2 , and the chance of no error occurring is (1 − p)n . This error model guides the error correction strategy. Since under this model no error is more likely
than any error, if a codeword is received, our best bet is to assume that no error occurred. Suppose w is a non-codeword we wish to correct. Let c be an element of C that is closest to w in the
Hamming distance. If the closest element is unique, the most likely error to have occurred under the independent error model is e = c ⊕ w. Let w be another element of the coset containing w, so w = w
⊕ k for some k ∈ C. If c is the closest element in C to w, then c = c ⊕ k must be the closest element in C to w . The most likely error resulting in w is also e, because w ⊕ c = w ⊕ k ⊕ c ⊕ k = w ⊕ c
= e. Thus, all elements of a coset are equally close to C in the Hamming distance. By definition of c, the most likely error e is the element of the coset with the lowest Hamming weight. Once the
syndrome computation tells us the coset, we correct by applying the lowest weight element e of that coset. If the actual error was a different one, we have “corrected" to the wrong word, but no
better strategy exists. In particular, if we receive a codeword, we do nothing. In general, error correcting codes cannot correct errors that take codewords to codewords. Furthermore, if there is
more than one closest element to w in C, it is unclear how best to correct the error. For this reason, when working under the independent error model, the set of correctable errors is usually taken
to be Et , the set of all words of Hamming weight t or less, where t is as large as possible without introducing ambiguity or, equivalently, violating the disjointness condition (equation 11.1) for a
set of correctable errors. The minimum Hamming distance between any pair of codewords is called the distance of the code. An [n, k, d] code is one that uses n-bit words to encode k-bit message words
and has distance d. For each codeword c, let et (c) = {v|dH (v, c) ≤ t} be the set of words no more than Hamming distance t away from c. The set et (c) contains exactly words v obtained from c by an
error of weight at most t. If the sets et (c) are disjoint for all pairs
11.2 Framework for Quantum Error Correcting Codes
of code words c and c , the code can correct any weight t error by mapping words in et (c) to the codeword c. The sets et (c) are disjoint if and only if d ≥ 2t + 1. So an [n, k, d]-code can !. For a
distance d code C, correct at most all errors with weight less than or equal to t = d−1 2 the maximum possible t satisfies 2t + 1 ≤ d, since otherwise two codewords could be mapped to the same error
word under two different t-bit errors, and the disjointness condition would not hold. 11.2.11 Quantum Independent Error Models
For a given quantum code C, some sets of correctable errors for C are better than others from a practical point of view. As in the classical case, which correctable sets are better depends on which
errors are more probable. Because there is a richer class of quantum errors, there is a greater variety of quantum error models to choose from. The most common quantum error models assume, as the
classical independent error model does, that errors on separate qubits occur independently and that, with probability p, a given qubit is subject to an error. The error model we describe is motivated
by the local and Markov assumptions discussed in section 10.4.4. Because unitary errors are easily corrected by applying the inverse transformation, sets of correctable errors are chosen to contain
only unitary error transformations. It is particularly common to choose a correctable set of errors containing only elements of the generalized Pauli group Gn . The generalized Pauli group Gn
consists of n-fold tensor products of Pauli group elements: all elements of Gn are of the form μA1 ⊗ A2 ⊗ · · · ⊗ An where Ai ∈ {I, X, Y, Z} and μ ∈ {1, −1, i, −i}. The commutation relations, the
relations between group products gi gj and gj gi , for the Pauli group imply that every element of Gn can be written as μ(Xa1 ⊗ · · · ⊗ Xan )(Z b1 ⊗ · · · ⊗ Z bn ), where the ai and bi are binary
values. Section 10.4.4 showed that any error can be expressed as a mixture of linear transformations
. †
tr(Ai ρAi ) The generalized Pauli group Gn forms a basis for the vector space of linear transformations acting on the vector space associated with an n-qubit system. Thus, a general error E on an
n-qubit quantum register can be expressed as linear combination j ej Ej where Ej ∈ Gn . All linear transformations arising in an operator sum decomposition can be written not only in terms of unitary
operators, but also in terms of generalized Pauli operators. By results of section 11.2.9, a mixture of errors, each of which is corrected by a procedure, is also corrected by that procedure.
11 Quantum Error Correction
We write Xi for the transform that applies X to the ith qubit and leaves the others alone: I, ⊗ ·-. · · ⊗ I/ ⊗X ⊗ I, ⊗ ·-. · · ⊗ I/ . i
The meaning of Yi and Zi is similar. The weight of a Pauli error is the number of nonidentity terms in its tensor product expression. The weight of an error is defined only for Pauli errors, not for
general errors. The generalized Pauli group has a number of convenient properties. For example, the stabilizer codes of section 11.4 make heavy use of the fact that any two elements g1 and g2 in the
Pauli group either commute (g1 g2 = g2 g1 ) or anticommute (g1 g2 = −g2 g1 ). Another convenient property is that if the set of all single-qubit bit-flip and phase-flip errors Xi and Zi for all i is
a correctable set E for a code C, then E can be expanded to contain the Yi errors for all i. The orthogonality condition for E and C says that if Xi and Zi are correctable errors, then for all i, the
following four expressions are zero: †
c1 |Xi Zi |c2 = c1 |Zi Xi |c2 = c1 |I Zi |c2 = c1 |I Xi |c2 = 0. To show that the Yi are compatible correctable errors, it suffices to show that for all i and j and for all orthonormal |c1 = |c2 ∈ C,
c1 |Xj Yi |c2 = 0 †
c1 |Zj Yi |c2 = 0
c1 |I Yi |c2 = 0, and for all j = i †
c1 |Yj Yi |c2 = 0. These equalities follow immediately from multiplication in the Pauli group. For example, because †
Xi Yi = −Xi Xi Zi = −I Zi , †
c1 |Xi Yi |c2 = − c1 |I Zi |c2 = 0 Thus, any code that corrects all bit-flip errors X and all phase-flip errors Z also corrects all Y errors. Let t be the maximum weight for which the set of Pauli
group elements of weight t or less satisfies correctable error set condition (equation. 11.2). Any nondegenerate [[n, k]]-quantum code cannot correct errors of more than weight t. Section 11.2.4
showed that the maximum number of elements in a correctable set for a nondegenerate code is 2n−k . The number of elements of n . Thus, any nondegenerate code that corrects all errors with weight t or
less weight t is 3t t
11.2 Framework for Quantum Error Correcting Codes
must satisfy the quantum Hamming bound t n i ≤ 2n−k . 3 i i=0
A nondegenerate code that obtains equality in the quantum Hamming bound is called a perfect code. The classical Hamming Bound is discussed in box 11.3. The quantum Hamming bound does not apply to
degenerate codes. All classical codes satisfy the classical Hamming bound. That the quantum Hamming bound does not apply to all codes provides an example of how the existence of degenerate codes
complicates the quantum picture. Just as in the classical case, the term perfect should not be taken to imply that perfect codes are necessarily the best ones to use in practice. The quantum Hamming
bound quantifies the best trade-off in terms of code expansion (ratio of size of encoded state to original message state) and the strength of the error correction in terms of the number of
single-qubit errors the code can correct. A third quantity is also of great practical interest: the efficiency with which errors can be detected. There are many codes that come close to the quantum
Hamming bound but that do not have efficient error detection schemes, as measured in terms of the number of gates needed for syndrome extraction and the number of qubits that need to be measured.
Both in the quantum and the classical cases, significant structure must be in place in order for efficient error detection schemes to be possible. The design of classical, as well as quantum, error
correction schemes with efficient error detection and good trade-offs between data expansion and strength is a continuing area of research. Stabilizer codes provide this structure; for this reason,
nearly all Box 11.3 The Classical Hamming Bound
For any [n, k, d]-code, there are
|Et (c)| =
n t
weight t errors, so the cardinality of Et (c) is
t n . i i=0
Since there are 2k codewords, the sets Et (c) can be disjoint only if |Et (c)|2k ≤ 2n . Thus, any [n, k] code that corrects all errors of weight t or less must satisfy the following bound: t n ≤ 2n−k
. i i=0
This condition is called the (classical) Hamming bound. A code for which equality holds is called a perfect code, since it uses the minimum size n to encode k-bit message words in such a way that all
weight t errors can be corrected. This bound on t is independent of d.
11 Quantum Error Correction
quantum error correction codes are stabilizer codes. CSS codes, a subset of stabilizer codes, have the advantage that they can be built from pairs of classical codes that are related to each other in
a special way. 11.3 CSS Codes
Shor’s code encodes a single qubit into three qubits to correct bit-flip errors and then re-encodes the resulting logical qubits to correct phase-flip errors. Recall from section 11.1.2 that Xi = H
Zi H , so bit-flip errors Xi are closely related to phase-flip errors Zi ; bit-flip errors in the standard basis {|0, |1} are phase-flip errors in the Hadamard basis {|+, |−} and vice versa.
Calderbank and Shor, and separately Steane, recognized that by using this relation they could construct quantum codes from pairs of classical codes that satisfy a certain duality relation. These
codes, called CSS codes after their founders, have a number of advantages. For example, by encoding only once to correct both phase- and bit-flip errors, the number of qubits required to correct t
qubit errors can be reduced: the most famous CSS code, Steane’s [[7, 1]] code requires seven qubits to correct all single qubits, as opposed to the nine qubits needed in Shor’s code. 11.3.1 Dual
Classical Codes
Two classical codes C1 and C2 are dual to each other, C1 = C2⊥ , if a generator matrix for one is the transpose of a parity check matrix for the other: G1 = P2T . Two sets of words V and W are said
to be orthogonal if for all v ∈ V and w ∈ W , the inner product, v † w = 0 mod 2, where v and w are viewed as vectors. Let C and C ⊥ be dual codes with generator matrices and parity check matrices
{G, P } and {G⊥ , P ⊥ }, respectively. The codewords C ⊥ are orthogonal to the codewords of C because G⊥ = P T ; v ∈ C ⊥ and w ∈ C means that there exist x and y such that v = G⊥ x and w = Gy, so v †
w = (G⊥ x)T Gy = (P T x)T Gy = x T P Gy = 0.
Example 11.3.1 The dual code to the [7, 4] Hamming code is the is the [7, 3] code C ⊥ with
generator matrix ⎛
⎞T 0 0 ⎠ 1
and parity check matrix ⎛ 1 1 1 ⎜ 1 1 0 P ⊥ = GT = ⎜ ⎝ 1 0 1 1 1 1
⎞ 0 0 ⎟ ⎟. 1 ⎠ 1
1 G⊥ = P T = ⎝ 1 1
11.3 CSS Codes
Since the rows of P are a subset of those of P ⊥ , it follows that C contains its own dual: C ⊥ ⊂ C. The eight codewords of C ⊥ are the linear combinations of the columns of G⊥ : C ⊥ = {0000000,
1110100, 1101010, 0011110, 1011001, 0101101, 0110011, 1000111}. The sixteen codewords of C are those of C ⊥ plus those obtained by adding 1111111 to all of the codewords of C ⊥ . For any [n, k]
classical code C, k if x ∈ C ⊥ 2 (−1)c·x = 0 otherwise.
This identity may be established by relating it to the identity N −1 0 for x = 0 y·x (−1) = N = 2n for x = 0 y=0
from box 7.1. Because x · Gy = GT x · y, the inner product of the two n-bit strings x and Gy is equal to the inner product of the two k-bit strings GT x and y, so 2 −1 (−1)c·x = (−1)Gy·x k
k −1 2
2k 0
if GT x = 0 otherwise
Identity 11.7 follows, since GT x = P ⊥ x = 0 precisely when x ∈ C ⊥ . 11.3.2 Construction of CSS Codes from Classical Codes Satisfying a Duality Condition
Identity 11.7 enables the construction of states that are superpositions of codewords from a classical code C when viewed in the standard basis and are superpositions of dual codewords w ∈ C ⊥ when
viewed in the Hadamard basis. More precisely, we construct states |ψg that are superpositions of codewords from C, and show that they have amplitude only in the states |hi where i ∈ C ⊥ and the |hi
are elements of the n-qubit Hadamard basis: |hi = W |i = H ⊗ · · · ⊗ H |i. After constructing these states, this section shows how this property enables the correction of both phase-flip and bit-flip
11 Quantum Error Correction
Box 11.4 Quotient Groups
If H < G and the conjugacy condition, gHg −1 = H , holds for all elements of g ∈ G, then the cosets of H form a group, called a quotient group G/H of the group G. Let us be more precise. Let S = g1 H
and T = g2 H be cosets of H in G. Then ST = g1 H g2 H = g1 g2 Hg2−1 g2 H = g1 g2 H is another coset R = g1 g2 H of H in G. Let f : G → H be a homomorphism. Let K be the set of elements of G that are
sent to the identity in H . Then, K is a subgroup of G that satisfies the conjugacy condition, so the cosets of K form a quotient group G/K. If f is onto, the quotient group G/K is isomorphic to H .
Let C1 and C2⊥ be [n, k1 ] and [n, k2 ] classical codes respectively, and suppose both codes correct t errors. Furthermore, suppose C2⊥ ⊂ C1 . There are 2k1 −k2 distinct cosets of C2⊥ in C1 ; every c
∈ C1 defines a coset c ⊕ C2⊥ = {c ⊕ c |c ∈ C2⊥ } and c ⊕ C2⊥ = d ⊕ C2⊥ if and only if k c ⊕ d ∈ C2⊥ . The set of cosets forms a group, the quotient group G = C1 /C2⊥ . Since C1 ≡ Z21 k k −k and C2⊥ ≡
Z22 , the quotient group G ≡ Z21 2 . For each element g ∈ G, define a quantum state 1 |cg ⊕ c, |ψg = √ 2k2 c∈C ⊥ 2
where cg is any element of C1 contained in the coset of C2⊥ labeled by g. The 2k1 −k2 -dimensional subspace spanned by the |ψg for all g ∈ G defines a [[n, k1 − k2 ]] quantum code C, the CSS code CSS
(C1 , C2 ). This paragraph shows that |ψg , when viewed in the Hadamard basis, only has amplitude in the codewords of C2 . The components of |ψg in the Hadamard basis are
hi |ψg |hi = i|W |ψg W |i. Therefore it suffices to show that W |ψg is a superposition of codewords |c ∈ C2 : recall from section 7.1.1 that N−1 1 (−1)y·x |x. W |y = √ N x=0
So N−1 1 1 (−1)(cg ⊕c)·x |x W |ψg = √ √ 2k2 c∈C ⊥ 2n x=0 2
1 2n+k2
(−1)x·c |x
11.3 CSS Codes
1 2n+k2 1 2n−k2
(−1)x·cg (2k2 )|x
x∈(C2⊥ )⊥
(−1)x·cg |x,
where line 3 follows from line 2 by identity 11.7. Now we turn to how the error correction is carried out. Since each |ψg is a linear combination of codewords in C1 , a quantum version of the
syndrome for C1 can be used to correct all t bit-flip errors. More specifically, each row of the parity check matrix P1 tests whether the sum (mod 2) of the corresponding set of bits is even or odd.
If a row of the parity check matrix reads b = bn−1 b2 . . . b1 , then the observable that makes the analogous check on quantum states is Z bn−1 ⊗ · · · ⊗ Z b1 , the operator with a Z in every place
the parity check has a 1 and an I every place the parity check has a 0. More generally, for any single-qubit unitary transformation Q, let Qb be the tensor product Qbn−1 ⊗ · · · ⊗ Qb1 ⊗ Qb0 . Let b ∈
P mean that b appears as a row in P . To realize these observables in terms of single-qubit measurement, each row b ∈ P1 corresponds to a component of a quantum circuit on n + 1 qubits, the n
computational qubits plus an ancilla qubit. The component has a Cnot between the i th qubit and the ancilla wherever the i th entry of the row has a 1. To see how the code handles phase errors, we
first confirm that phase-flip errors become bit-flip errors under W . Let e be the bit string indicating the location of the phase-flip errors. Under this error, |ψg becomes 1 (−1)e·(cg ⊕c) |cg ⊕ c √
2k2 c∈C ⊥ 2
which, after applying W , becomes √
1 2n+k2
(−1)e·(cg ⊕c)
(−1)x·(cg ⊕c) |x
N−1 1 =√ (−1)(e⊕x)·cg (−1)(e⊕x)·c |x 2n+k2 x=0 c∈C ⊥ 2
1 (−1)(e⊕x)·cg |x =√ n−k 2 2 x⊕e∈C2 1 =√ (−1)y·cg |y ⊕ e. n−k 2 2 y∈C2 This state differs from W |ψg by exactly the bit-flip error corresponding to the string e.
11 Quantum Error Correction
Since applying W to |ψg yields a linear combination of elements of C2 , and under W phase-flip errors become bit-flip errors, a quantum version of the syndrome for C2 can be used to correct phase
errors. Following the construction for bit-flip error detection, for each row in the parity matrix P2 , construct a component of a quantum circuit that has a Cnot operator between qubit i and the
ancilla qubit if and only if there is a 1 in the i th entry of the row. A variant of this construction gives an even more direct connection between the quantum syndrome computation and the parity
check matrices. The following two circuits, involving a computational qubit and an ancilla, have the same effect on states |ψ|0 where |ψ is any single-qubit state. H
H X
Z 0
Measuring the ancilla in either case results in the same state with the same probability. So instead of applying the Walsh-Hadamard transformation to the computational qubits and then Cnot from
computational qubits to the ancilla, simply apply a Hadamard gate to the ancilla qubit and use it to control phase flips on the computational qubits. A similar argument implies that to correct bit
flips we can apply a Hadamard transformation to the ancilla and use it to control bit flips on the computational qubits. Studying stabilizer codes, a generalization of CSS codes, will illuminate why
correct computational states remain undisturbed by the syndrome computation. Altogether, the CSS code has n − k1 + n − k2 = 2n − k1 − k2 observables, n − k1 that contain only Z and I terms and
correct bit-flip errors, and n − k2 that contain only X and I terms and correct phase-flip errors. Instead of constructing CSS codes starting with superpositions of classical codewords, we could have
begun the construction with the observables corresponding to the parity check matrices for the codes C1 and C2 . This approach will be pursued in section 11.4 on stabilizer codes. 11.3.3 The Steane
Steane’s [[7, 1]] code C is based on the [7, 4] Hamming code. We revisit this code multiple times, first in section 11.4 as an example of stabilizer codes, and then in chapter 12 as the running
example illustrating the design of fault-tolerant procedures.
11.3 CSS Codes
Recall from example 11.3.1 that C ⊥ = {0000000, 1110100, 1101010, 0011110, 1011001, 0101101, 0110011, 1000111}, and that C contains sixteen codewords, those of C ⊥ plus those obtained by adding
1111111 to all of the codewords of C ⊥ . Since C contains its own dual, the conditions for the CSS construction are satisfied by taking C1 = C and C2 = C. Following the CSS construction, ˜ = √1 |0 →
|0 |c 8 c∈C ⊥ 1 = √ (|0000000 + |1110100 + |1101010 + |0011110+ 8 |1011001 + |0101101 + |0110011 + |1000111) and ˜ = √1 |1 → |1 |c 8 c∈C, c∈C ⊥ / 1 = √ (|1111111 + |0001011 + |0010101 + |1100001+ 8 |
0100110 + |1010010 + |1001100 + |0111000). A syndrome extraction operator UP for the Steane code is based on a parity check matrix P for the [7, 4] Hamming code, ⎛ ⎞ 1 1 1 0 1 0 0 ⎜ ⎟ P = ⎝ 1 1 0 1 0
1 0 ⎠. 1
The six observables for the Steane code are (a circuit for S1 is shown in figure 11.1): S1 = Z ⊗ Z ⊗ Z ⊗ I ⊗ Z ⊗ I ⊗ I S2 = Z ⊗ Z ⊗ I ⊗ Z ⊗ I ⊗ Z ⊗ I S3 = Z ⊗ I ⊗ Z ⊗ Z ⊗ I ⊗ I ⊗ Z S4 = X ⊗ X ⊗ X ⊗ I
⊗ X ⊗ I ⊗ I
S5 = X ⊗ X ⊗ I ⊗ X ⊗ I ⊗ X ⊗ I S6 = X ⊗ I ⊗ X ⊗ X ⊗ I ⊗ I ⊗ X We postpone discussion of how to compute on the encoded states until after developing stabilizer codes, a general class of codes that
contains CSS codes.
11 Quantum Error Correction
Figure 11.1 One of six component circuits for a syndrome extraction operator UP for the Steane code.
11.4 Stabilizer Codes
The stabilizer code construction generalizes from the construction of CSS codes described in section 11.3. The construction begins by recognizing that certain [[n, k]] codes, 2k -dimensional
subspaces of a 2n -dimensional space, can be defined in terms of the set of operators that stabilize the subspace. Example 11.4.1 The Steane code is stabilized by the six observables S1 , S2 , S3 ,
S4 , S5 , S6 of ˜ and |1 ˜ of the Steane code are +1-eigenvectors of all six observables. equations 11.8; the states |0
Section 11.4.1 explains how codes are defined by their stabilizers. It looks at the case of binary observables serving as stabilizers for a code C. All of the observables used in the CSS code
construction have only two eigenvalues, −1 and +1. Section 11.4.1 uses properties of these observables to determine conditions on a set of correctable errors for C. Section 11.4.2 further restricts
from binary observables to elements of the generalized Pauli group, yielding more specific conditions on a set of correctable errors. This setup prepares for a full development of stabilizer code
error correction in section 11.4.3. Section 11.4.4 explains how computation is done on the logical qubits of a stabilizer code using a new code, the [[5, 1]] stabilizer code, as a running example.
11.4.1 Binary Observables for Quantum Error Correction
A subspace W of a vector space V is stabilized by an operator S : V → V if for all |w ∈ W , S|w = |w. In other words, W is stabilized by S if |w is a +1-eigenstate of S for all |w ∈ W .
11.4 Stabilizer Codes
The stabilizer of a subspace W ⊂ V is the set of all operators that stabilize W . Let S be the set of all binary observables on V with only +1 and −1 as eigenvalues. Any set of observables {Si } ⊂ S
defines a subspace C, the largest subspace stabilized by all elements Si . Sometimes the code C is an attractive quantum error correcting code, while in other cases C is not. For example, for some
sets of observables, C is simply the zero vector. Our next task is to understand for what sets of observables we get an interesting code by learning how to determine a correctable set of errors for C
from the set of observables that defines C. Suppose S stabilizes |v and T anticommutes with S; in other words, ST = −T S. Then, ST |v = −T S|v = −T |v, so T |v is a −1-eigenvector for S. If a code C
is stabilized by S, then for all |v ∈ C, the state T |v cannot be a codeword of C. That T |v is not a codeword can be detected by measurement with S. This fact enables us express to the condition on
a set E of unitary errors, equation 11.2 of section 11.2.4 †
ca |Ei Ej |cb = mij δab ,
in terms of the set of stabilizers. Let C be the code defined defined by the r stabilizers S1 , . . . , Sr . † Suppose that for all pairs Ei and Ej of distinct elements, either Ei Ej stabilizes C or
there is at † least one Sl that anticommutes with Ei Ej . The next paragraph shows that such a E is a correctable set of errors for C. Box 11.5 Stabilizers and Groups Acting on Sets
A group G acts on a set S if for all elements g, g1 , g2 ∈ G and element s ∈ S •
it is meaningful to talk about applying g to s to obtain another element gs of S,
the identity e of G takes any s to itself, es = s, and
(g1 g2 )s = g1 (g2 s).
A group may act on a set in many different ways, so it is important to define which action is being talked about. We give some examples. • For any H < G, the group G acts on the set of cosets of H in
a canonical way: an element g ∈ G 1 acts on a coset g2 H taking it to the coset (g1 g2 )H . • The group of unitary operators U : V → V acts on the vector space V , viewed as a set, by sending |v ∈ V
to the vector U |v.
For any s ∈ S, the set of group elements that stabilize s is a subgroup, Hs = {g ∈ G|gs = s}, called the stabilizer of s.
11 Quantum Error Correction
If Ei Ej stabilizes C, †
ca |Ei Ej |cb = ca |cb = δab . †
On the other hand, suppose Ei Ej anticommutes with a stabilizer Sl . Then †
ca |Ei Ej |cb = ca |Ei Ej Sl |cb = − ca |Sl Ei Ej |cb = − ca |Ei Ej |cb , from which it follows that †
ca |Ei Ej |cb = 0. Since the Ei are unitary, †
ca |Ei Ei |cb = δab for all i, a, and b. These equations show that E satisfies the quantum error condition, equation 11.2. † If, for some i and j , the transformation Ei Ej stabilizes C, the code C
is degenerate with respect † to E. Otherwise, if for all i = j each Ei Ej anticommutes with at least one Sl , then the code C is nondegenerate. 11.4.2 Pauli Observables for Quantum Error Correction
The observations of section 11.4.1 suggest a general mechanism for constructing a code C with correctable error set E from a set of operators satisfying certain relations. Because of the generalized
Pauli group’s commutation relations, it is relatively easy to find sets of generalized Pauli operators satisfying these relations. Because Y † = −Y , X† = X and Z † = Z, any element of the
generalized Pauli group Gn that contains an even number of Y terms and arbitrarily many X and Z terms is Hermitian, and so can be viewed as an observable. Let S be an Abelian subgroup of Gn that does
not contain −I . All elements of the generalized Pauli group Gn square to either ±I . Since S is a subgroup that does not contain −I , all elements of S square to I which means they can only have ±1
as eigenvalues. Because S is an Abelian group in which all elements square to the identity, S must be isomorphic to Zk2 for some k. Let S1 , . . . , Sr be generators for S. Let C be the subspace
stabilized by S: C = {|v ∈ V | Sa |v = |v, ∀Sa ∈ S}. The next paragraph shows that C has dimension 2n−r . Let Ci be the subspace stabilized by the first i stabilizers: Ci = {|v ∈ V | Sj |v = |v, ∀0 <
j ≤ i}. Because all nonidentity elements Sα of Gn have trace 0, and +1 and −1 are the only eigenvalues, the +1 eigenspace of Sα must have half the dimension of V . Thus, the subspace C1 stabilized by
S1 must have dimension half that of V : the subspace C1 has dimension 2n−1 . For all i, the operator Pi = 12 (I + Si ) is a projector onto the +1-eigenspace of Si , so C1 = P1 V . Since S2 P1 = 12 (I
+ S1 )S2 has trace zero, exactly half of C1 is in the +1-eigenspace of S2 . Thus C2
11.4 Stabilizer Codes
has dimension 2n−2 . Since C = Cr , induction yields dim C = 2n−r . To find C explicitly, for any element Sα ∈ S, the set of elements {Sα Sβ |Sβ ∈ S} = S because S is a group. Thus, 1 Sα |ψ |S | Sα
∈S is stabilized by S, where |ψ is any n-qubit state. † Let E ∈ Gn be a set of errors {Ei } such that for all i and j , either Ei Ej is in the stabilizer of S or anticommutes with at least one
element of S. In other words, †
/ Z(S) − S, Ei Ej ∈ where Z(S) is the centralizer of S, the subgroup of Gn that contains elements that commute with † † all elements of S. As per section 11.4.1, if Ei Ej stabilizes C, then ca |Ei Ej
|cb = δab , and if † † Ei Ej anticommutes with an Sl , then ca |Ei Ej |cb = 0 unless i = j and a = b. Thus, any E † / Z(S) − S is a correctable set of errors for code C. Of such that all Ei , Ej ∈ E
satisfy Ei Ej ∈ particular interest is the maximal t such that all errors Ei and Ej on t or fewer qubits satisfy † Ei Ej ∈ Z(S) − S. The distance d of a stabilizer code is the minimum weight of an
element in Z(S) − S. A [[n, k, d]] quantum code represents k-qubit message words in n-qubit codewords and has distance d. We use double brackets to distinguish quantum from classical codes. An [[n,
k, d]]-quantum code is able to correct all errors of weight t or less if d ≥ 2t + 1. 11.4.3 Diagnosing and Correcting Errors
Let C be a stabilizer code with stabilizers S given in terms of an independent generating set S1 , . . . , Sr . Because S is Abelian, measurements by different Si do not affect each other; the
probability that a state |v ∈ V is measured and determined to be in the −1-eigenspace of Si is the same no matter what other Sj have been measured before. Measurement of all r observables Si
distinguishes 2r subspaces {Ve } of V , each of dimension 2n−r . Each subspace has a unique signature e, a length r bit-string whose i th bit ei indicates whether Ve is in the +1 or −1-eigenspace of
Si : 0 Ve = (−1)ei -eigenspace of Si . i
Any error E ∈ Gn either commutes or anticommutes with each Si . The discussion of stabilizer codes started with the observation that for any |v stabilized by Si , the state E|v is in the +1eigenspace
of Si if E and Si commute, and in the −1-eigenspace of Si if they anticommute. Since both EC and Ve have dimension 2n−r , the subspace EC = Ve for some e. Recall from section † 11.4.1 that if E is a
correctable error set for code C, then for all Ei and Ej in E, either Ei Ej † anticommutes with S or is in S. If Ei Ej anticommutes with S, then Ei C and Ej C are orthogonal † subspaces. If Ei Ej is
in S, then Ei |v = Ej |v for all |v ∈ C, and Ei C = Ej C. In the first case, measurement by the r observables Si distinguishes Ei C from Ej C. In the second case, while the measurement cannot
determine whether error Ei or error Ej has occurred, it is not necessary to † † know; applying either Ei or Ej returns the state to the correct original. Every Ei in E is associated
11 Quantum Error Correction
Figure 11.2 Indirect measurement for operators M that are both Hermitian and unitary. Measurement of the ancilla qubit in the standard basis yields the same state on the quantum register with the
same probability as would have been obtained by direct measurement of the register with M.
with a unique signature e. When the Si are measured and an r bit string e is obtained, applying † Ei for any of the Ei with signature e will return the state to the correct one no matter which error
Ej ∈ E occurred. Let M be any Hermitian unitary operator on n qubits. Because M is both Hermitian and unitary, an indirect measurement may be performed with an additional ancilla qubit and the
circuit of figure 11.2, where a gray circle means “measure according to the Hermitian operator encircled.” In this case, measure the ancilla qubit with operator Z, a measurement in the standard
basis. The remainder of this section explains how this circuit achieves an indirect measurement according to M. Because M is both unitary and Hermitian, its only possible eigenvalues are +1 and −1.
The circuit uses the fact that, for any |ψ, the state c(|ψ + M|ψ) is a +1-eigenvector of M, and the state c (|ψ − M|ψ) is a −1-eigenvector of M, where c and c are the normalization factors c = 1/||ψ
+ M|ψ| and c = 1/||ψ − M|ψ|: if we write |ψ in terms of eigenstates for M, we see that the −1-eigenvectors cancel in |ψ + M|ψ, leaving the +1-eigenvectors. Let P + be the projection onto the
+1-eigenspace of M, so 12 (|ψ + M|ψ) = P + |ψ, and P − the projection onto the −1-eigenspace of M, so 12 (|ψ − M|ψ) = P − |ψ. Direct measurement of |ψ according to +
with probability ψ|P + |ψ and |PP − |ψ with probability ψ|P − |ψ. M yields |PP + |ψ |ψ| |ψ| This paragraph shows that the circuit of figure 11.2 yields these same states with the same probability.
Prior to measurement, the state is 1 1 √ (|+|ψ + |−M|ψ) = ((|0 + |1)|ψ + (|0 − |1)M|ψ) 2 2 1 = (|0(|ψ + M|ψ) + |1(|ψ − M|ψ)) 2 1 1 1 = ( |0(c(|ψ + M|ψ)) + |1(c (|ψ − M|ψ))). 2 c c
Measurement of the ancilla qubit with Z yields 0 with probability p+ = ψ|P + |ψ and results in the n-qubit state c(|ψ + M|ψ). Similarly, the measurement yields 1 with probability p− =
11.4 Stabilizer Codes
ψ|P − |ψ resulting in c (|ψ − M|ψ) as the state of the n qubit register. As an alternative argument, since M = P + − P − and I = P + + P − , c(|ψ + M|ψ) = c((P + + P − )|ψ + (P + − P − )|ψ) = cP + |
ψ. Thus, the circuit of figure 11.2 has the same effect on the n computational qubits as a direct measurement by M. To measure each of the Si for 1 ≤ i ≤ m, we need m such circuits and m ancilla
qubits. Measurement of these qubits yields the string e. 11.4.4 Computing on Encoded Stabilizer States
For stabilizer codes, certain operators U , including those in the Pauli group, have logically equivalent operators U˜ that are simple to obtain. Chapter 12 shows that stabilizer codes also have good
error handling properties. Again we work backward; instead of defining an encoding function and then finding logical operators for that encoding, we find candidate logical operators and then use them
to define the encoding. In particular, we find logical single-qubit Pauli operators Z˜1 , . . . , Z˜k and use them to define an encoding function. In general, a state can be defined in terms of
operators for which it is an eigenstate; for example, all of the standard basis elements for a k-qubit system can be defined in terms of whether they are +1 of −1-eigenvectors for each of the
operators Z1 , . . . , Zk . As another example, section 10.2.4 defined cluster states in terms of the operators that stabilize them. Here, the encoding function takes any standard basis vector |b1 .
. . bk to the unique state in the code C that is a (−1)bi -eigenstate of Z˜i for all i. The rest of this section describes this program in more detail, developing the five-qubit code as a running
example. Example 11.4.2 The set of observables
S0 = X ⊗ Z ⊗ Z ⊗ X ⊗ I S1 = Z ⊗ Z ⊗ X ⊗ I ⊗ X S2 = Z ⊗ X ⊗ I ⊗ X ⊗ Z S3 = X ⊗ I ⊗ X ⊗ Z ⊗ Z, defines a [[5, 1]] code. The four observables are independent, so each of the four observables divides the
25 -dimensional code space into two eigenspaces, leaving a space of 25 /24 = 21 dimensions of codewords. This code satisfies the quantum Hamming bound, and so is a perfect code. Consider an element A
∈ Z(S). For any |v ∈ C, the state A|v is also in C: Si A|v = ASi |v = A|v, so A|v is a +1-eigenstate of all the Si . If A is in S, then A|v = |v, but for all A in Z(S) − S, A acts nontrivially on C.
If A1 = A2 Sa for some Sa ∈ S, then A1 and A2 behave in the same way
11 Quantum Error Correction
on C. All the elements of the quotient group Z(S)/S act in distinct ways on C. To understand how they act on C, we need to know more about the structure of the centralizer Z(S). The symplectic view
of Gn illuminates the structure of Z(S). Recall from section 11.2.11 that any element of Gn can be written uniquely as μ(X a1 ⊗ · · · ⊗ Xan )(Z b1 ⊗ · · · ⊗ Z bn ), so to each element of Gn there is
an associated 2n-bit string (a|b) = a1 . . . an b1 . . . bn . Moreover h : Gn → Z2n 2 is a group homomorphism where (a|b) · (a |b ) = (a ⊕ a |b ⊕ b ). The homomorphism h is four-to-one and loses the
phase information contained in μ. Since S does not contain −I , and therefore does not contain iI or −iI , no two elements of S map to the same string, so on S the homomorphism h is one-to-one. The
elements S1 , . . . , Sr of Gn are independent, meaning that none of them can be written as a product of the others, if and only if the corresponding bit strings (a|b) are linearly independent. Two
elements g and g of Gn commute if and only if ab + a b = 0 mod 2, where ab is the the usual inner product, the sum of bitwise multiplication of the corresponding bits, and (a|b) = h(g) and (a |b ) =
h(g ). The expression ab + a b mod 2 is called the symplectic inner product. Example 11.4.3 The stabilizer group generated by the four observables of example 11.4.2 has
sixteen elements Sα : S i Si = I S1 S3 S0 S2 S1 S2 S2 S3 S 0 S1 S3 S 1 S2 S3
= = = = = = = =
I ⊗I ⊗I ⊗I ⊗I Z⊗Z⊗X⊗I ⊗X X⊗I ⊗X⊗Z⊗Z −Y ⊗ Y ⊗ Z ⊗ I ⊗ Z −I ⊗ Y ⊗ X ⊗ X ⊗ Y −Y ⊗ X ⊗ X ⊗ Y ⊗ I −Z ⊗ I ⊗ Z ⊗ Y ⊗ Y −X ⊗ Y ⊗ I ⊗ Y ⊗ X
S0 S2 S 0 S1 S 0 S3 S 1 S3 S0 S1 S2 S0 S2 S3 S 0 S1 S2 S3
= = = = = = = =
X⊗Z⊗Z⊗X⊗I Z⊗X⊗I ⊗X⊗Z −Y ⊗ I ⊗ Y ⊗ X ⊗ X −I ⊗ Z ⊗ Y ⊗ Y ⊗ Z −Y ⊗ Z ⊗ I ⊗ Z ⊗ Y −X ⊗ X ⊗ Y ⊗ I ⊗ Y −Z ⊗ Y ⊗ Y ⊗ Z ⊗ I I ⊗ X ⊗ Z ⊗ Z ⊗ X.
The following table shows the error syndrome for the code defined by these observables. For single-qubit errors X, Y , and Z on qubits 0 through 4, the corresponding column shows the result of
measurement with observable Si after that single error occurred on a codeword. The + and − indicate whether the result of measurement with Si on that qubit is +1 and −1 respectively. The results of
the four measurements identify the error uniquely. Counting + as 0 and − as 1, the last row shows a unique decimal value, coming from measurement of all four observables.
11.4 Stabilizer Codes
bit 0
S0 S1 S2 S3
bit 1
bit 2
bit 3
bit 4
+ − − +
− + + −
− − − −
− − + +
+ + − +
− − − +
− + + +
+ − + −
− − + −
+ + + −
− + − +
− + − −
+ + − −
+ − + +
+ − − −
Let S1 , . . . , Sr be an independent generating set for S. Form the r × 2n binary matrix ⎞ ⎛ (a|b)1 ⎜ (a|b)2 ⎟ ⎟ ⎜ M=⎜ ⎟ .. ⎠ ⎝ . (a|b)r with the (a|b)i = h(Si ) as the rows. Because the Si are
independent, so are the rows of M, and so M has rank r. The matrix M acts on a 2n-bit string (a|b), viewed as a column vector ab , to produce a length r vector in which the ith entry is the
symplectic inner product of (a|b) with (a|b)i . The matrix M has a kernel of dimension 2n − r. Elements of this kernel correspond to elements of Gn that commute with all elements of the stabilizer.
Thus, there are 4 · 22n−r elements in Z(S), where the factor of 4 comes from the four possible values of μ; these values of μ will not be relevant for the remaining discussion because elements of S
are uniquely determined by the corresponding string (a|b). For an [[n, k]] stabilizer code, the size of the stabilizer subgroup is 2n−k . So for an [[n, k]] code, there are 22n−r = 2n+k elements in Z
(S). Take Z˜ 1 to be any element of Z(S) that is independent of S1 , . . . , Sr . Form the (r + 1) × 2n binary matrix M1 by adding as an additional row to M the 2n-bit string corresponding to Z˜ 1 .
The matrix M1 has full rank r+ 1. Let C1 be the size 22n−(r+1) = 2n+k−1 set of binary strings, viewed as column vectors ab , that are in the kernel of M1 . Let Z˜ 2 be any element of Z(S) that
corresponds to a bit string in C1 . We can continue this process k times to obtain operators Z˜ 1 , . . . Z˜ k that commute with each other and with all elements of S. The kernel of Mk will be S.
Consider unencoded standard basis vectors for a moment. The k-qubit state |00 . . . 0 is the unique +1-eigenstate of the Z1 , . . . , Zk . More generally, the standard basis vector |b1 . . . bk is
the unique state that is a (−1)bi -eigenstate of Zi for all i. For any k-bit string b1 . . . bk , there is a unique element of the code C that is a (−1)bi -eigenstate of Zi for all i; the argument
that there is a unique element is similar to the argument that established the dimension of C. We define an encoding function UC for the code C that takes standard basis elements to elements of C
that have k −1 ax |x analogous eigenstate relations with the logical versions Z˜ i of the Zi . A k-qubit state 2x=0 is encoded as follows:
UC :
11 Quantum Error Correction
k −1 2
ax |x →
k −1 2
ax |x ˜
where |x ˜ is the unique element of C that is in the (−1)xi -eigenspace of Z˜i for all 0 ≤ i ≤ k. The (r + k) × 2n = n × 2n matrix Mk has rank; therefore for any i, there is a bit string bfull (a|b)
that, when viewed as a column vector a , yields the n-bit string ei which has a 1 in the i th place and 0 elsewhere: in particular, there is a 2n-bit string (a|b) that satisfies b = e1 . Mk a Let X˜1
be the element of Z(S) with bit string (a|b) that yields e1 when multiplied by Mk . Construct Mk+1 by adding as a row to Mk the bit string corresponding to X˜1 . Let X˜2 be such that its bit string
(a|b) satisfies b = e2 . Mk+1 a We can continue in this way until we obtain X˜1 , . . . , X˜k . By construction, X˜ i anticommutes with Z˜i , and commutes with all of S, all X˜j , and all the Z˜j for
j = i. Example 11.4.4 For the [[5, 1]] code of example 11.4.2, the binary matrix corresponding to the
independent generating set {Si } is ⎛ 1 0 0 1 0 0 1 ⎜ 0 0 1 0 1 1 1 M=⎜ ⎝ 0 1 0 1 0 1 0 1 0 1 0 0 0 0
⎞ 0 0 ⎟ ⎟. 1 ⎠ 1
The bit string (a|b) = (11111|00000) is independent of the rows m ∈ M and satisfies Mb = 0, so we may take Z˜ = Z ⊗ Z ⊗ Z ⊗ Z ⊗ Z and, since (b|a) is orthogonal to (a|b) and all rows of M, we may
take X˜ = X ⊗ X ⊗ X ⊗ X ⊗ X. Let |e˜i be the unique state in C that is a −1-eigenstate of Z˜i but a +1-eigenstate for all the Z˜j with j = i. For j = i, Z˜j X˜ i |e˜i = X˜ i Z˜j |e˜i = X˜ i |e˜i , so
X˜ i |e˜i is a +1-eigenstate of Z˜j for j = i. For Z˜i , Z˜i X˜ i |e˜i = −X˜ i Z˜i |e˜i = −X˜ i |e˜i ,
11.5 CSS Codes as Stabilizer Codes
so X˜ i |e˜i is in the +1-eigenstate of Z˜i as well. This calculation suggests that X˜ i is the logical analog of Xi for C with encoding UC of equation 11.10. A full proof is straightforward. Example
11.4.5 For the [[5, 1]] code of example 11.4.2, the +1-eigenspace of Z˜ is spanned by
˜ to be the set of standard basis states with an even number of 1s. Thus, we may take |0 ˜ = 1 Sα |00000 |0 |S | Sα ∈S =
1 (|00000 + |10010 + |00101 + |01010 4 + |10100 − |10111 − |11000 − |00110 − |01111 − |10001 − |11110 − |11101 − |00011 − |01100 − |11011 + |01001),
˜ to be and |1 ˜ = 1 Sα |11111, |1 |S | Sα ∈S a superposition of all basis vectors with an odd number of ones. The construction of logical versions of other single-qubit gates and multiple-qubit
gates for a stabilizer code C is more complicated. Chapter 12 looks at this issue in more detail and provides constructions for a universal approximating set of logical gates for the Steane code.
11.5 CSS Codes as Stabilizer Codes
Let C1 and C2 be [n, k1 ] and [n, k2 ] classical codes respectively, and suppose both correct t errors. Furthermore, suppose C2⊥ ⊂ C1 . These codes satisfy the condition required for the construction
of an [[n, k1 − k2 ]] CSS code. This section describes an alternative to the CSS code construction of section 11.3, one that uses the stabilizer viewpoint. Let P1 (resp. P2 ) be the parity check
matrix for the code C1 (resp. C2 ). For each row of P1 , viewed as a bit string b = b1 . . . bn , construct an observable Xb = X b1 ⊗ · · · ⊗ Xbn . These n − k1 observables are independent, since the
rows of P1 are linearly independent. For each row of P2 , also construct an observable Z b = Z b1 ⊗ · · · ⊗ Z bn . These n − k2 observables are also independent, and, since X and Z are independent,
the entire set of 2n − k1 − k2 observables is independent. The group S generated by these observables does not contain −I , so S defines a stabilizer code if and only if it is Abelian.
11 Quantum Error Correction
This paragraph shows that the CSS condition implies that S is Abelian. Since X and I commute, all elements of {Xa |a ∈ P1 } commute. Similarly, all elements of {Z b |b ∈ P2 } commute. Since X and Z
anticommute, the group elements Xa and Z b commute if and only if a · b is even. So the elements of S commute if abT = 0 mod 2 for all rows a of P1 and rows b of P2 . This equality ⊥ holds if P1 P2T
= 0 mod 2. Because C2⊥ ⊂ C1 , the generator matrix G⊥ 2 satisfies P1 G2 = 0. Since ⊥ T T G2 = P2 , we have P1 P2 = 0. Therefore, S is Abelian and the subspace C, stabilized by S, is a stabilizer
code. The code CSS(C1 , C2 ) of section 11.3.2 is stabilized by S. Since S has n − k1 + n − k2 independent generators, it stabilizes a subset of dimension n − (2n − k1 − k2 ) = k1 + k2 − n. Since CSS
(C1 , C2 ) has dimension k1 + k2 − n, it is the stabilizer code for S. Example 11.5.1 The Steane code revisited. The parity check matrix
1 P =⎝ 1 1
⎞ 0 0 ⎠ 1
defines the [7, 4] Hamming code. The Steane code takes the [7, 4] Hamming code as both C1 and C2 . To obtain stabilizers for the Steane code, define an operator in G7 for each row in the parity check
matrix that has a Z in every place a 1 occurs and an I for every 0: Z⊗Z⊗Z⊗I ⊗Z⊗I ⊗I Z⊗Z⊗I ⊗Z⊗I ⊗Z⊗I Z ⊗ I ⊗ Z ⊗ Z ⊗ I ⊗ I ⊗ Z. For each row in the parity check matrix, also define an operator that
has an X wherever a 1 occurs: X⊗X⊗X⊗I ⊗X⊗I ⊗I X⊗X⊗I ⊗X⊗I ⊗X⊗I X⊗I ⊗X⊗X⊗I ⊗I ⊗X These six observables stabilize exactly the Steane code C, so the Steane code is a [[7, 1]] stabilizer code.
11.6 References
Hungerford’s Abstract Algebra: An Introduction [159] includes a chapter on classical error correction, as well as giving a thorough treatment of the group theory and linear algebra involved. Wicker’s
Error Control Systems for Digital Communication and Storage [283] discusses classical error correction and includes chapters on the relevant algebra.
11.7 Exercises
The nine-qubit code of section 11.1.3 was originally proposed by Shor [251]. The seven-qubit code of section 11.3 was originally proposed by Steane [259]. The theory of stabilizer codes and
fault-tolerant implementation of them are discussed at length in Daniel Gottesman’s thesis [135]. 11.7 Exercises Exercise 11.1. For the code C : Z12 → Z32 defined by generator matrix
⎞ 1 G=⎝ 1 ⎠ 1 give •
the set of code words
two distinct parity check matrices
Computing a parity check matrix for a code specified by a generator matrix. a. Show that adding a column of a generator matrix G for a code C to another column produces an alternative generator
matrix G for the same code C. A b. Show that for any [n, k] code there is a generator matrix of the form , where A is a I (n − k) × k matrix and I is the k × k identity matrix. A , then the (n − k) ×
k matrix P = (I |A), where I is the (n − k) × c. Show that if G = I (n − k) identity matrix, is a parity check matrix for the code C. I is a generator d. Show that if a parity check matrix P has the
form (A|I ), then G = A matrix for the code. Exercise 11.2.
Exercise 11.3. Show that the code CP F of section 11.1.2 corrects all linear combinations of ˜ + b|1. ˜ single-qubit phase-flip errors {I, Z2 , Z1 , Z0 } on any superposition a|0 Exercise 11.4. Show
that the code CP F of section 11.1.2 does not correct bit-flip errors. Exercise 11.5. Show that if an [[n, k, d]]-quantum code is able to correct all errors of weight t or less, d ≥ 2t + 1. Exercise
Show that all Hamming codes have distance 3 and so correct single bit-flip errors.
Exercise 11.7. Show that Shor’s code is a degenerate code. Exercise 11.8. Alternative Steane code constructions. a. Find a parity check matrix of the form (I |A) for the Steane code.
11 Quantum Error Correction
b. Construct an alternative circuit, based on the parity check matrix found in (a), that can serve as a syndrome extraction operator for the Steane code. Exercise 11.9. Show that the generalized set
of Pauli elements forms a basis for the linear transformations on the vector space associated with an n-qubit system. Exercise 11.10. a. Show that for all i and j and for all orthonormal |c1 = |c2 ∈
C, †
c1 |Zj Yi |c2 = 0
c1 |I Yi |c2 = 0, b. Show that for all j = i and for all orthonormal |c1 = |c2 ∈ C, †
c1 |Yj Yi |c2 = 0. Exercise 11.11. Describe how the Shor code can be used to correct single-qubit errors without
making any measurements. Exercise 11.12. Show that if two blocks encoded according to code C are subjected to an error E that is a superposition of errors E = Ea ⊗ Eb + Ec ⊗ Ed , where Ea , Eb , Ec ,
and Ed are all elements of a correctable set of errors E for C, then E can be corrected. Exercise 11.13. Suppose a single qubit |ψ = a|0 + b|1 has been encoded using the Steane √ code and that the
error E = 12 X2 + 23 Z3 has occurred. Write down a. the encoded state, b. the state after the error has occurred, c. for each phase of the error correction, the syndrome and the resulting state, and
d. each error correcting transformation applied and the state after each of these applications. Exercise 11.14. Show that for a [[n, k]] quantum stabilizer code there is, for any k bit string
b1 . . . bk , a unique element of the code C that is a (−1)bi -eigenstate of Zi for all i. Exercise 11.15. Show that the subspaces Ve of section 11.4.3 are of dimension 2n−r .
Show that the operators X˜ i , as defined for stabilizer codes in section 11.4.4, act as a logical analog of the gates Xi for the logical states obtained from the encoding c.
Exercise 11.16.
Exercise 11.17. Show that the [[9, 1]] Shor code is a stabilizer code. Exercise 11.18. Find alternative ways of implementing operations corresponding to X and Z on
the logical qubits of the five-qubit stabilizer code. Exercise 11.19. Let [[n, k, d]] be any nondegenerate code. Such a code can correct t =
d−1 ! 2
errors. Show that tracing any codeword over any n − t qubits results in the totally mixed state ρ = 21t I on the remaining t qubits. Thus, all codewords are highly entangled states.
Fault Tolerance and Robust Quantum Computing
Quantum error correction by itself is not sufficient for robust quantum computation. Robust computation means that arbitrarily long computations can be run to whatever accuracy desired. The analyses
of quantum error correction techniques in chapter 11 supposed that the error correcting procedures were carried out perfectly, an unrealistic assumption. Also, even if the environment interacts with
the system only in ways that can be handled by the error correcting code, gates used as part of the computation may propagate errors in ways that produce errors the code cannot correct. To achieve
robust quantum computation, quantum error correction must be combined with fault-tolerant techniques. This chapter presents one approach to robust quantum computation: error correction coupled with
fault-tolerant procedures. Other approaches to robust quantum computation exist, both for quantum computation in the standard circuit model and for alternative models of computation. These
alternative approaches will be touched on briefly in sections 13.3 and 13.4 respectively. The chapter concludes with a threshold theorem for one error model. Threshold theorems prove that as long as
the error rate is below a certain threshold, a quantum computer can run arbitrary long computations to arbitrarily high accuracy. This chapter uses a simple error model sufficient to illustrate a
general approach to fault-tolerant quantum computation: the strategy is to replace a circuit with an expanded circuit that is more robust; if the original circuit’s chance of failing was O(p), the
expanded circuit fails with only probability O(p 2 ). Given a general method for obtaining such expanded circuits, arbitrary low probabilities of failure can be achieved by concatenation; the
expanded circuit can be replaced with a yet larger and more robust circuit, and we can repeat this process, called concatenated coding, until the desired level of accuracy is achieved. A key feature
of concatenated coding is that only polynomial resources are required to obtain exponentially low probabilities of failure. Like quantum error correction, fault-tolerant quantum computing is a richly
developed field. A variety of approaches have been developed, and threshold theorems for a variety of error models and codes have been proved. Fault-tolerant quantum computation remains an active
area of research, and like quantum error correction, it will evolve as quantum information processing devices are built, more realistic error models are learned, and more sophisticated quantum
computer architectures are developed.
12 Fault Tolerance and Robust Quantum Computing
As in chapter 11, this chapter concentrates on quantum error correcting codes that correct errors of weight t or less. The most important issue in ensuring that a circuit can always be replaced with
a more robust expanded circuit is the control of spread; if in the course of computation a single error propagates to additional qubits before we are able to correct it, then the probability that our
computation becomes subject to an error that we cannot correct is much higher. Faulttolerant quantum computing methods aim to eliminate the propagation of errors from small numbers of qubits to a
large numbers of qubits. All aspects of quantum computation must be made fault-tolerant: error correction, the gates themselves, initial state preparation, and measurement. The sections of the
chapter gradually peel away at assumptions of perfection, adding mechanisms to handle each source of errors, to arrive at a full program of fault-tolerant procedures that support robust quantum
computation. The chapter uses Steane’s seven-qubit code as a running example. Section 12.1 discusses the setting in which we describe fault-tolerant techniques, including the error model and when
error correction steps are applied. Section 12.2 addresses fault-tolerant quantum error correction. Section 12.2 examines the design of a full set of faulttolerant procedures for performing arbitrary
computations on qubits encoded with the Steane code. Section 12.3 describes concatenated coding leading to a threshold result. 12.1 Setting the Stage for Robust Quantum Computation
To simplify the presentation, we consider only [[k, 1]] quantum codes. Given a specific set of universal gates and a specific [[k, 1]] quantum error correcting code, fault-tolerant techniques aim to
take any circuit composed of those gates and produce a circuit on encoded qubits such that the probability of a faulty computation is reduced even though more qubits and more operations are involved.
Fault-tolerant techniques for a given code address the question of how to implement logical procedures on the computational qubits, syndrome extraction operators, measurements, state preparation, and
error correcting transformations in such a way that the resulting computation is more robust than the original one; roughly, if the original circuit failed with probability O(p), then the expanded
circuit fails with probability O(p2 ). Before describing fault-tolerant techniques, we need to discuss when quantum error correcting operations are applied, and how we will model the errors. Let Q0
be a quantum circuit for a computation we wish to make robust. We partition time into intervals in which at most one gate acts on any qubit (figure 12.1). This partitioning is not unique: for the
circuit of figure 12.1, the single-qubit gate applied to the first qubit could have been placed in the second time interval instead, or the first time interval could have been split in two with, for
example, the single-qubit operations performed in the first interval and the two-qubit operation in the second. Given a partitioned circuit Q0 , we will define an expanded circuit Q1 , in which every
qubit expands to a block of k qubits, and each time interval expands into two parts, one in which procedures implementing the logical gates are carried out, and a second in which the syndrome is
measured and error correcting transformations are applied (figure 12.2). Both of these procedures
12.1 Setting the Stage for Robust Quantum Computation
Figure 12.1 The original circuit Q0 for a computation we wish to make robust partitioned into time intervals in which at most one gate acts on a single qubit.
Figure 12.2 Schematic diagram showing the general structure, including the subpartitioned time intervals, for an expanded circuit for a [[7, 1]] quantum code. The expanded circuit alternates between
carrying out logical procedures and performing error correction (EC).
12 Fault Tolerance and Robust Quantum Computing
may require ancilla qubits. The expanded circuit is then subpartitioned into time intervals in which no qubit is acted on by more than one gate. We could have chosen to apply error correction less
often. We have chosen to apply it as often as possible: at the end of each logical procedure. A number of choices remain. Which procedures are used to implement the logical gates, and how the
syndrome and error correcting transformations are performed, determine whether the expanded circuit Q1 is more robust than the original circuit Q0 or not. For the purposes of describing
fault-tolerant procedures, we use a model in which errors only take place at the beginning of time intervals. We model imperfect single-qubit gates as a single-qubit error followed by a perfect gate.
Similarly, we model imperfect Cnot gates as two single-qubit errors followed by a perfect gate. Correlations between these errors can be ignored because quantum error correction is applied separately
to each block and, within our fault-tolerant procedures, we will allow Cnot transformations only between qubits in different blocks. Errors due to interactions with the environment are modeled as
occurring only at the beginning of time intervals. For our initial discussion, we use the local and Markov error model of section 10.4.4. This model means that each qubit, in each time interval,
interacts only with its own environment at the beginning of each time step (figure 12.3), and that the state of the environment at the beginning of each time interval is uncorrelated with the state
at all previous times. The threshold theorem discussed in section 12.3.2 uses a more general error model.
Figure 12.3 Schematic diagram of the error model with environmental interactions at the beginning of each time interval. The boxes representing the environmental interactions show each qubit
interacting with its own environment of arbitrary size.
12.2 Fault-Tolerant Computation Using Steane’s Code
12.2 Fault-Tolerant Computation Using Steane’s Code
In medicine, doctors take an oath, “first, do no harm.” While the quantum error correction methods described in the previous chapter enable the correction of errors during the course of a quantum
computation, they also increase the chance of errors, since they require more qubits and more gates. Analysis of quantum error correction in chapter 11 made the unrealistic assumption that the error
correction steps were carried out perfectly. In fact, as we will see shortly, if we used the quantum error correction schemes of the last chapter exactly as described, imperfections in the process
would likely introduce more errors than the process corrects. These schemes are not fault-tolerant. Fortunately, these schemes can be modified so that they do not introduce more errors than they
correct. We illustrate fault-tolerant quantum error correction techniques by demonstrating how to make the Steane seven-qubit code fault tolerant. Fault-tolerant techniques put in safeguards so that
a single error never propagates to multiple qubits, since multiple errors cannot be corrected by Steane’s code. The strategy is to replace parts that fail under a single qubit error with an ensemble
of parts that fails only in the presence of two or more errors, so that, if originally a part fails with probability p, the ensemble replacing it fails only with probability cp 2 . Section 12.2.1
illustrates ways in which the quantum error correction techniques of 11.3.3 fail to be fault tolerant. Section 12.2.2 shows how to perform error correction in a fault-tolerant way using the Steane
code as the example. Section 12.2.3 develops fault-tolerant logical gates for the Steane code that limit the propagation of errors, and sections 12.2.4 and 12.2.5 deal with fault-tolerant measurement
and fault-tolerant state preparation. Together, these procedures make the quantum computation robust against errors in the system and ancilla qubits, in the gates, in measurement, and in state
preparation. 12.2.1 The Problem with Syndrome Computation
The computation of the syndrome is potentially dangerous to the computational state. Consider the first of the six parity check circuits we gave for the Steane code, the one shown in figure 12.4, b1
b2 b3 b5
a Figure 12.4 One of the six syndrome computation circuits for Steane’s seven-qubit code.
12 Fault Tolerance and Robust Quantum Computing
which acts on the first, second, third, and fifth qubits of the encoded state and on the first ancilla qubit. Steane’s code was designed to correct any single-qubit error on any one of the qubits. We
want to make sure that imperfections in carrying out the error correction scheme do not make things worse. In particular, we want to make sure that a single error in carrying out the quantum error
correction does not result in multiple errors on the encoded qubits. Suppose a bit-flip error occurs on an ancilla qubit leading to the “correction” of a nonexistent error. Such an occurrence is
annoying but not too serious; the “correction” only introduces a single-qubit error on the coding qubits, which, as long as another error does not occur, will be corrected by the next round of error
correction. (Since the ancilla qubit will be used again only if it is first reset to |0, its error does not propagate further.) There is a worse possibility, one that results in multiple errors on
the coding qubits, something that will not be corrected by subsequent rounds of error correction even if no other error occurs. Take a minute to see if you can see the problem; spotting the problem
is a good test of whether you are thinking of quantum circuits in a quantum or classical fashion. Syndrome extraction operators for quantum codes commonly use controlled gates. On the face of it,
controlled operations seems perfectly safe since how could computing from the computational qubits to the ancilla qubits adversely affect the state of the computational qubits? However, as we saw in
caution 2 of section 5.2.4: the notions of from and to are basis dependent; in the Hadamard basis the control and target qubits of a Cnot are reversed, and phase flips become bit flips and vice
versa. Consider for example what happens if the coding qubits b1 , b2 , b3 , b5 are in the state |+ and a ZH error occurs on the ancilla qubit before the syndrome computation has begun. The error
places the ancilla qubit in the state |− so that when each Cnot is performed it acts as the control bit, with the result that all four qubits b1 , b2 , b3 , b5 have been flipped to the |− state. In
this way, a single error on the ancilla qubit propagates to multiple errors on the coding qubits. 12.2.2 Fault-Tolerant Syndrome Extraction and Error Correction
The example of section 12.2.1 suggests that, to obtain fault-tolerant error correction, an ancilla qubit should be connected with at most one coding qubit. To implement Steane’s code in a
fault-tolerant manner, we must use a circuit of the following form:
12.2 Fault-Tolerant Computation Using Steane’s Code
If we measure the ancilla qubits, however, we are in danger of gaining information about the quantum state of the coded qubits, not just the error, which means the measurement is likely to ˜ + |1) ˜
affect the state of the encoding qubits. For example, suppose the encoded state was √12 (|0 before a single-qubit error occurs on qubit b5 . Measurement of the ancilla qubits in the standard basis
will tell us that an error occurred on qubit b5 , but it will also destroy the superposition, so ˜ or |1 ˜ instead of the correct state that the error correcting operation will “restore" the state to
|0 ˜ + |1). ˜ √1 (|0 2
The trick to avoid gaining too much information is to initialize the ancilla qubits in a state from which it is impossible to learn anything about the computational state. The four ancilla qubits
replace a single qubit in the non-fault-tolerant circuit, so from measuring all four qubits of the ancilla, only one bit of information needs to be gained: the value of the corresponding syndrome
operator. Exercise 5.9 suggests how to achieve this result; a carefully designed initial starting state |φ0 for the ancilla that becomes a second state |φe under a single-qubit bit-flip error on any
one of the four qubits will yield only one bit of information. Consider 1 |x, |φ0 = √ 2 2 d (x) even H
where the sum is over all strings with even Hamming weight, and 1 |x. |φe = √ 2 2 d (x) odd H
Under errors in the encoded qubits that would have resulted in syndrome state |0 in the original syndrome computation, the ancilla remains in state |φ0 . Under errors that would have resulted in |φ1
, the ancilla ends up in state |φe . These two states are distinguished by a measurement in the standard basis that yields a random even-weighted string in the no-error case and a random odd-weighted
string in the error case. This measurement provides only one bit of information. One final problem needs to be addressed before a fault-tolerant implementation of the Steane code syndrome measurement
is obtained. The solution given above requires the preparation of the state |φ0 . We must make sure that we can prepare |φ0 in a fault-tolerant way. Our strategy is not to use a state we prepare if
it deviates too much from |φ0 . In particular, we want to make sure that a faulty preparation does not produce errors in multiple coding qubits. Applying the Walsh-Hadamard transformation to the cat
state √12 (|0000 + |1111 produces the state |φ0 . The circuit of figure 12.5 constructs the cat state in a non-fault-tolerant way. To see how to turn this construction into a fault-tolerant one, let
us look at what errors may occur. A single error in any one of the cat state qubits must not propagate to an error in more than one of the coding qubits. Bit-flip errors in the overall construction
of the ancilla state, even multiple bit-flip errors resulting from a single error, are not a concern; the worst they do is cause an error in the syndrome, which results in at most a single-qubit
error when the “correction" corresponding to this syndrome is carried out. Multiple phase errors resulting from a single error must be avoided, since such errors could result in multiple errors in
the coding qubits. Before the final Hadamard
12 Fault Tolerance and Robust Quantum Computing
Figure 12.5 Non-fault-tolerant construction of a cat state.
Figure 12.6 Fault-tolerant cat state ancilla preparation. The Z measurement test whether the first and fourth qubits have the same value. If this measurement fails, the state is discarded, and the
state preparation is repeated until the test is passed.
transformations are applied, phase errors were bit-flip errors, so we must avoid bit-flip-error propagation in the first part of the circuit. In the circuit of figure 12.5, a bit-flip error in either
the second or third qubit can propagate to the successive qubits. However, either of these bit flips would mean that the first and fourth qubit have opposite values, whereas in the error-free case
the first and fourth qubits have the same value. If we insert a check for equality of these values, we can discard the state and redo the preparation if the check fails. This single-qubit test
suffices. Figure 12.6 shows cat state ancilla preparation that includes this test. 12.2.3 Fault-Tolerant Gates for Steane’s Code
To perform arbitrary quantum computations on the logical qubits of the Steane code, a universal set of fault-tolerant logical gates that can approximate any unitary operator on the logical qubits
must be available. Even implementations of logical single-qubit gates may not be fault-tolerant, since they may propagate a single error to multiple qubits. For example, the most obvious, though far
from optimal, way to carry out a logical single-qubit operation is to decode the logical qubit, apply a true single-qubit operation to the resulting single qubit, and then re-encode. Such an
implementation is clearly not fault-tolerant; if an error occurs to the single qubit after the decoding,
12.2 Fault-Tolerant Computation Using Steane’s Code
upon re-encoding the error will propagate to all seven of the encoding qubits. Furthermore, for logic gates involving more than one logical qubit, fault-tolerant implementations must not propagate a
single error in one block to multiple errors in another. For the Steane code, it is easy to find fault-tolerant implementation for some gates, including X, H , and Cnot . For most other gates, it is
challenging to find a fault-tolerant implementation, and for some gates, including the Toffoli gate and π/8-gate, the only fault-tolerant implementations known require auxiliary qubits. For a
fault-tolerant implementation of the logical X˜ operation, ˜ is an evenly weighted superposition of all recall from section 11.3.3 that the logical qubit |0 ˜ elements of C, and that |1 is an evenly
weighted superposition of all elements of C ⊥ − C. Recall further that the elements of C that are not in C ⊥ are those obtained from elements of C by adding 1111111. Thus, applying X to every qubit
in the seven-qubit block performs the logical X˜ gate ˜ to |1, ˜ and |1 ˜ to |0. ˜ Expanding on this reasoning using relations such as “adding any taking |0 element of C to any element of C ⊥ results
in an element of C ⊥ ” shows that the logical C not may be implemented by applying Cnot operators between the corresponding qubits of the two blocks, as shown in figure 12.7. Both of these
implementations are fault-tolerant because a single error cannot create multiple errors either in its own block or in another. Unfortunately, the transversal strategy applied in these examples, where
gates are applied only between corresponding qubits in the blocks, does not work in most cases. When the transversal strategy does not work, it can be highly nontrivial to find a fault-tolerant
implementation. The construction of fault-tolerant procedures depends on the code used. For some codes it is not known how to construct fault-tolerant implementations of some logical gates. Even for
the Steane code, most single-qubit operations cannot be implemented transversally. Applying the phase gate P π2 = |0 0| + i|1 1| of section 5.5 to all seven qubits of the Steane code results ˜ 0| ˜ −
˜ 1|, ˜ which isn’t quite P˜ π . In this case, applying |0 0| − i|1 1| to in the logical gate |0
Cnot x y ~
Figure 12.7 Fault-tolerant Cnot .
12 Fault Tolerance and Robust Quantum Computing
≠T x y z
Figure 12.8 The transversal approach does not result in a logical Toffoli gate T˜ .
each qubit results in a fault tolerant implementation of P˜ π2 , but in other cases, there is no easy π fix. For example, not only does applying P π4 = |0 0| + ei 4 |1 1| to each qubit not result in
P˜ π4 , but it is not possible to implement P˜ π4 in any transversal way. No transversal implementation of P˜ π4 is known. For P˜ π4 , the only fault-tolerant implementations known require ancilla
qubits. The logical Toffoli gate T˜ cannot be implemented by the application of Toffoli gates to corresponding qubits in the three blocks as shown in figure 12.8 (see exercise 12.4). Like the P˜ π4
gate, only nontransversal implementations of T˜ are possible for the Steane code. In order to show that fault-tolerant computation can be done on data encoded using the Steane code, we need to show
that all logical unitary operators can be approximated arbitrarily closely by the application of a sequence of fault-tolerant gates. We give fault-tolerant versions of the logical operations for the
universally approximating set of gates described in section 5.5: the Hadamard gate H , the phase gate P π2 , the controlled-not gate Cnot , and the π/8-gate P π4 . Fault-tolerant ˜ implementations
for P˜ π and C not have already been described. The logical Hadamard gate H 2
can be implemented transversally by applying H to each of the qubits in the block. Finding a fault-tolerant implementation of P˜ π4 is more work. A number of fault-tolerant implementations use the
same key idea: many transforms that can be implemented using a fault-tolerantly prepared ancilla state do not have a direct fault-tolerant
12.2 Fault-Tolerant Computation Using Steane’s Code
Figure 12.9 A circuit that forms the basis for a fault-tolerant implementation for P π4 . Application of Pπ/2 X is conditional on the outcome of the measurement with Z.
implementation. The trick is to use measurement. We illustrate these techniques by developing a fault-tolerant implementation of the π/8-gate. It is perhaps unsurprising that the state |π/4 = π |0 +
ei 4 |1 can be used to implement the π/8-gate P π4 . The circuit of figure 12.9 performs the π/8-gate P π4 on any input state |ψ. Since we already know fault-tolerant implementations of the to
realize this Cnot , P π2 , and X, we must find a fault-tolerant preparation of the encoded state |π/4 circuit. We first consider fault-tolerant measurement, which will be used as part of
fault-tolerant state preparation. 12.2.4 Fault-Tolerant Measurement
Recall from section 11.4.3 that if M is both Hermitian and unitary, an indirect measurement may be performed with an additional ancilla qubit using the following circuit:
This construction is far from fault-tolerant, since a single error in the ancilla qubit could propagate to all n qubits. To make this construction fault-tolerant, we use a cat state as we did for the
faulttolerant syndrome measurement of section 12.2.2. For the present fault-tolerant construction we need an n-qubit cat state and, just as for fault-tolerant quantum error correction, we must
perform checks on the cat state we construct and discard any states that fail those tests. Indirect measurement by M has a fault-tolerant implementation whenever a version of M controlled correctly
by the cat state can be constructed. If M has a transversal implementation in terms of single-qubit operators, a controlled version is easy to obtain: control each single qubit operator with the
corresponding qubit in the cat state, so that either all single-qubit operators or none at all are performed. The use of this construction is illustrated in the fault-tolerant preparation of the ˜
described in the next section. state |π/4
12 Fault Tolerance and Robust Quantum Computing
12.2.5 Fault-Tolerant State Preparation of |π/4
˜ fault-tolerantly, it suffices to find an efficiently and fault-tolerantly impleTo prepare a state |φ ˜ is an eigenstate. Any fault-tolerantly prepared mentable measurement operator M˜ for which |φ
˜ When ˜ ˜ state that is not orthogonal to |φ yields |φ with positive probability when measured by M. an incorrect eigenstate is obtained after such a measurement, the process can be repeated until
the correct state is obtained or, in many cases, a fault-tolerant gate can be used to transform the obtained eigenstate into the desired one. state, we begin To obtain an efficiently and
fault-tolerantly implementable M˜ for the |π/4 with a general observation about the operators Pθ = |0 0| + eiθ |1 1| √1 (|0 + eiθ |1). Since X has eigenvectors |+ and |−, with eigenvalues 1 2 and −1
respectively, Pθ XPθ−1 has eigenvectors Pθ |+ = √12 (|0 + eiθ |1) and Pθ |− = √12 (|0 − iθ
and the states |θ =
e |1) with eigenvalues 1 and −1 respectively. At first, this fact does not seem useful; yes, |π/4 −1 , but it is Pπ/4 we are trying to implement in the first place. is an eigenstate of M = Pπ/4 XPπ/4
However, the commutation relation XPθ−1 = e−iθ Pθ X implies that π
−1 M = Pπ/4 XPπ/4 = e−i 4 Pπ/4 Pπ/4 X = e−i 4 Pπ/2 X,
˜ and X˜ fault-tolerantly. and we know how to implement Pπ/2 For the indirect measurement construction to work, we do not need to implement full controlled versions of these gates; instead, we need
only to implement versions that are correctly controlled by the cat state used to fault tolerantly implement the measurement, a much easier task. To obtain π the logical analog of indirect
measurement by M, apply a controlled e−i 4 phase gate between the first qubit of the cat state and the first qubit of the ancilla followed by seven controlled Pπ/2 X gates, between the seven pairs of
corresponding qubits of the cat state and the ancilla, implements ˜ X˜ (see figure 12.10). The cat state construction is then undone and the remaining qubit Pπ/2 is obtained. measured in the standard
basis. If the measurement result is 0, the desired state |π/4 1 i π4 ˜ ˜ √ If the measurement result is 1, the resulting state is 2 (|0 − e |1) and the desired state can be ˜ obtained by applying Z.
˜ let us consider what happens at each stage. To see that this circuit performs the measurement M, ˜ The Hadamard transformation together with the six Cnot operations result in the state |φ0 |ψ. The
next eight gates perform M˜ on the computational qubits controlled by the cat state, which results in the state 1 ˜ ψ). ˜ ˜ + |1⊗7 M| √ (|0⊗7 |ψ 2 The six Cnot result in the state 1 ˜ ψ). ˜ + |1|0⊗6
M| ˜ √ (|0⊗7 |ψ 2
12.3 Robust Quantum Computation
U U U
/4 U
P –
U U Figure 12.10 π ˜ Fault-tolerant construction of |π/4, where R = e−i 4 and U = P π2 X. The first set constructs a cat state, the next set applies M˜ controlled by the cat state, then the cat state
construction is undone and the first qubit of the cat state register ˜ or P π |− is obtained. In the latter case, a Z˜ operator could be applied to obtain |π/4. ˜ is measured. Either |π/4 2
The final Hadamard transformation results in the state 1 ˜ ψ), ˜ + |−|0⊗6 M| ˜ (|+|0⊗6 |ψ 2 which is equal to 1 ˜ ψ) ˜ ψ)). ˜ + M| ˜ + |1|0⊗6 (|ψ ˜ − M| ˜ √ (|0|0⊗6 (|ψ 2 2 Just as in section 11.4.3,
one or the other of the eigenstates of M˜ is obtained when the first qubit is measured in the standard basis. 12.3 Robust Quantum Computation
Section 12.3.1 describes concatenated coding that iteratively replaces a circuit with a larger and more robust one. That section also analyzes how many levels of concatenation are required to obtain
a given accuracy to show that a polynomial increase in resources (qubits and gates) can achieve an exponential increase in accuracy. With these tools in hand, section 12.3.2 describes a threshold
12 Fault Tolerance and Robust Quantum Computing
12.3.1 Concatenated Coding
Let Q0 be a time-partitioned circuit (section 12.1) for a computation we wish to make robust. Suppose we want the computation to succeed with probability at least 1 − . Let Qi+1 be the
time-partitioned circuit obtained by encoding each of the qubits of circuit Qi with Steane’s code, replacing all of the basic gates used in Qi with fault-tolerant logical equivalents, and performing
fault-tolerant error correction after the completion of the logical equivalent of each of Qi ’s time intervals. In other words, the circuit Qi is obtained from Q0 by i rounds of concatenated coding.
Figure 12.11 schematically represents two levels of concatenated coding. In circuit Qi there are i different levels of error correction: error correction on blocks of seven qubits are done most
often, and error correction on blocks corresponding to the final logical qubits least often. This hierarchical application of error correction enables an exponential level of robustness to be
achieved with only polynomially many resources, qubits and gates. This paragraph provides a rough heuristic argument for how polynomially many resources suffice to obtain exponential accuracy. When
encoding qubits using an error correcting code, we can think of it roughly as having replaced parts that fail under a single-qubit error with an ensemble of parts that fails only in the presence of
two or more errors. If, within a given time period, the probability of a part failing is p, the ensemble fails with probability cp 2 . Suppose a machine M0 that is composed of N parts, each of which
fails with probability p in a single time interval, runs for T time intervals. The chance that M operates without fault is (1 − p)N T . Suppose a new machine, M1 , is created in which each of the N
parts is replaced by K parts that together perform the operation of the original part, and that while each part still fails with probability p, the ensemble of K parts fails to perform the desired
operation with probability only cp2 for some constant c < 1/p. The basic parts of machine M1 now can be replaced with ensembles of K parts. After continuing in this way i times, the hierarchical
ensembles in machine Mi , making the equivalent of a single part in machine M0 , fail to perform the desired operation with probability i i i i only c2 −1 p 2 , so overall the machine Mi succeeds
with probability (1 − c2 −1 p 2 )N T . The number of parts K i in the hierarchical ensemble corresponding to a single part in machine M0 increases only exponentially in i, while the accuracy
increases doubly exponentially in i. Thus, for the r ensemble to achieve a failure rate of no more than (1/2) , we need encode only O(log2 r) times: For any i > log2 i > log2
log2 c−r log2 (cp)
log2 c − r log2 (cp)
, the failure rate is less that (1/2)r because
implies that 2i >
r − log2 c . − log2 (cp)
The denominator − log2 (cp) is positive, since cp < 1, so
12.3 Robust Quantum Computation
Figure 12.11 Schematic diagram with circuits Q0 , Q1 , and Q2 showing two levels of concatenated coding. The number of qubits is only suggestive: for the Steane code, the circuit Q2 would use
forty-nine qubits for each qubit of Q0 .
12 Fault Tolerance and Robust Quantum Computing
−2i log2 (cp) > r − log2 c, which implies
−r > 2 log2 (cp) − log2 c = log2 i
(cp)2 c
Thus, i
2−r >
(cp)2 , c 2i
where (cp)c = c2 −1 p2 is the failure rate we computed for an ensemble in machine Mi that replaces a single part in the original machine M0 . i
12.3.2 A Threshold Theorem
This section begins by stating a threshold theorem and explaining the meaning of the concepts used in the statement of the theorem. It then briefly describes more general threshold theorems, and
numerical estimates for thresholds obtained so far. A Threshold Theorem For any [[n, 1, 2t + 1]] quantum error correcting code that has a full set of fault-tolerant procedures, there exists a
threshold pT with the following properties. For any > 0, and any ideal circuit C, there exists a fault-tolerant circuit C that, under local stochastic noise of error rate p < pT , produces output
that is within , in the statistical distance metric, from the output of C, and fewer than a|C| qubits and time steps are used in C , where the factor a is , where |C| is the number of locations in C.
polylogarithmic in |C|
An error correcting code has a full set of fault-tolerant procedures if it has fault-tolerant procedures for a set of universal logical gates, error correction steps, state preparation, and
measurement. Suppose a circuit C has been divided into time steps in which each qubit is subjected to at most one preparation, gate, or measurement. A location in C is a gate, a preparation, a
measurement, or a wait (the identity transformation storing the qubit for the next step). A fault-tolerant protocol based on an error correcting code with a full set of fault-tolerant procedures
replaces each location with a fault-tolerant procedure followed by an accompanying fault-tolerant error correcting procedure. The circuit C in the threshold theorem is obtained by applying the
fault-tolerant protocol iteratively, in each stage replacing the gates, preparations, and measurements contained in the fault-tolerant procedures making up the circuit obtained in the previous round,
as explained in section 12.3.1. The number of iterations that need to be carried out depends on the accuracy desired. In a local stochastic error model, the probability of errors at all locations in
a set of locations during one time step decreases exponentially with the size of the set. More precisely, for every subset S of locations in a circuit C, the total probability of having faults at
every location in S 1 (and possibly outside S as well) is at most i pi , where pi is the fixed probability of having
12.3 Robust Quantum Computation
a fault at location Li . If the error probability pi for all locations is less than p, the error rate is said to be less than p. The type of error that occurs at these locations is not specified; for
analysis, it is helpful to imagine that the error is chosen by an adversary who is trying to disrupt the computations as much as possible. Stochastic means that the locations of the faults are chosen
randomly. The statistical distance between two probability distributions P and Q, with probabilities pi and qi respectively for the N outcomes i, is the L1 -distance = ||P − Q||1 =
|pi − qi |.
In our case, let P be the probability distribution over the measurement outcomes obtained if the ideal circuit were executed perfectly and the resulting state were measured in the standard basis. Let
Q be the probability distribution obtained by applying C under local stochastic noise of rate p and then measuring the logical qubits. Prior to measurement, the output state of C in the ideal case,
and the output state of C in the noisy case, can be written as density operators ρ and σ respectively. The statistical distance between the distributions obtained by measuring ρ and σ in the standard
basis is the same as the trace distance between the Hermitian operators ρ and σ . The trace distance or trace metric comes from the trace norm √ for Hermitian operators: the trace norm ||A||T r of A
is defined as ||A||T r = tr|A|, where |A| = A† A is the positive square root of the operator A† A. Let ρ and ρ be two density operators. The trace metric dT r (ρ, ρ ) on density operators is defined
to be = dT r (ρ, ρ ) = ||ρ − ρ ||T r = tr|ρ − ρ |. Threshold theorems have been obtained for other error models, including more general models. For example, threshold theorems exist for error models
in which each basic gate is replaced with one that interacts with the environment but remains close to the original gate. More precisely, each basic gate, acting perfectly, is modeled as U ⊗ I ,
where U is the basic gate acting on the computational system and I is the identity acting on the environment. Each perfect gate has a noisy counterpart modeled by a unitary operator V acting on the
computational system, together with the environment where V is constrained to be within ηT of U ⊗ I , ||V − U ⊗ I ||T r < ηT for some threshold value ηT . This noise model is quite general. In
particular, it subsumes the local stochastic error model. Estimates for threshold values such as pT or ηT have been obtained for a variety of codes, faulttolerant procedures, and error models. Early
results had thresholds on the order of η = 10−7 , and these have been improved to η = 10−3 . Further improvements are needed to reach η = 10−2 , a value that begins to be in reach of implementation
experiments. Better understanding of realistic noise models, the development of more advanced error codes, and improved fault-tolerant techniques and analyses will improve these values.
12 Fault Tolerance and Robust Quantum Computing
12.4 References
The first paper on fault tolerance was by Shor [252]. Preskill [233] surveys issues and results related to fault tolerance. Both Aliferis [18] and Gottesman [138] give detailed, rigorous, but
nevertheless highly readable accounts of fault tolerance, including threshold theorem proofs. Early threshold results were proved by Aharonov and Ben-Or [9, 10], Kitaev [173], and Knill, Laflamme,
and Zurek [180, 181]. Improved error thresholds on the order of 10−3 have been found; see, for example, Steane [263], Knill [178], and Aliferis, Gottesman, and Preskill [19]. Threshold results have
been found for a variety of noise models, including non-Markovian noise [268]. Threshold results have also been found for alternative models of computation [219, 220] such as cluster state quantum
computing, which will be discussed in section 13.4. Steane [260] estimates realistic decoherence times. Steane [261] overviews a number of different universally approximating finite sets of gates
from the point of view of fault-tolerant quantum computing. Eastin and Knill [108] showed that no code admits a universal transversal gate set. Cross, DiVincenzo, and Terhal [92] provide a comparison
of many quantum error correcting codes from the point of view of fault tolerance. 12.5 Exercises Exercise 12.1. Show that a single-qubit gate followed by a single-qubit error is equivalent to a
(possibly different) single-qubit error followed by the same gate. Exercise 12.2. Why do we consider the preparation of the cat state in figure 12.6 to be fault-
tolerant, even though it includes two Cnot gates from the qubit on which the Z measurement is made? Exercise 12.3. What effect does applying P π to each of the qubits in the Steane seven-qubit 4
encoding have? Show that the transversal circuit shown in figure 12.8 does not implement the ˜ ⊗ |0 ˜ ⊗ |0. ˜ Toffoli gate. Consider the effect of the circuit on |1 Exercise 12.4.
Exercise 12.5.
Design a fault-tolerant version of the Toffoli gate for the Steane code.
Further Topics in Quantum Information Processing
This chapter gives brief overviews of topics that we were not able to discuss fully. Section 13.1 surveys more recent results in quantum algorithms. Known limitations of quantum computation are
discussed in section 13.2. Other approaches to robust quantum computation, as well as a few of the many advances in quantum error correction, are described in section 13.3. Section 13.4 briefly
describes alternative models of quantum computation, including cluster state quantum computation, adiabatic quantum computation, holonomic quantum computation, and topological quantum computation,
and their implications for quantum algorithms, robustness, and approaches to building quantum computers. Section 13.5 makes a quick tour of the extensive area of quantum cryptography, and touches
upon quantum games, quantum interactive protocols, and quantum information theory. Insights from quantum information processing that led to breakthroughs in classical computer sciences are discussed
in section 13.6. Section 13.7 briefly surveys approaches to building quantum computers, starting with criteria for scalable quantum computers. This discussion leads into the consideration of
simulations of quantum systems in section 13.8. Section 13.9 discusses the still poorly understood question of where the power of quantum computation comes from, with an emphasis on the status of
entanglement. Finally, section 13.10 discusses computation in theoretical variants of quantum theory. This overview is not meant to be complete. In an area advancing as quickly as this one, there are
new results every day. Exploring the quantum physics section of the e-print archive (http://arXiv.org/archive/quant-ph) is an excellent way to discover additional topics and to keep up with the
latest developments in the field (but be aware that the papers there are not refereed). 13.1 Further Quantum Algorithms
After Grover’s algorithm, there was a hiatus of more than five years before a significantly different algorithm was found. The field advanced during this time, with researchers finding variants on
the techniques of Shor and Grover to provide algorithms for a wider range of problems, but no algorithmic breakthroughs occurred. Grover and others extended his techniques to provide small speedups
for a number of problems, as mentioned in section 9.6. Shor’s algorithms were extended to provide solutions to the hidden subgroup problem over a variety of non-Abelian groups that are
13 Further Topics in Quantum Information Processing
close to being Abelian [244, 162, 161, 29], including a solution for the hidden subgroup problem for normal subgroups of arbitrary finite groups [147, 141] and groups that are almost Abelian in the
sense that the intersection of the normalizers for all subgroups is large [141]. On the negative side, Grigni et al. [141] showed in 2001 that for most non-Abelian groups and their subgroups, the
standard Fourier sampling method used by Shor and successors could yield only exponentially little information about the hidden subgroup. On the other hand, Ettinger et al. [114] showed in 2004 that
there is no information theoretic barrier to solving this problem; they showed that the query complexity of the general non-Abelian hidden subgroup problem is polynomial. Most researchers expect that
quantum computers cannot solve NP-complete problems in polynomial time. There is no proof (a proof would imply P = NP). As section 13.10 discusses in more detail, Aaronson goes so far as to suggest
that this limit on computational power be viewed as a principle governing any reasonable physical theory capable of describing our universe. A lot of focus has been given to candidate NP-intermediate
problems, problems that are in NP, not in P, and are not NP complete. Ladner’s theorem says that if P = NP, then there exist NP intermediate problems. Factoring and the discrete logarithm problem are
both candidate NP-intermediate problems. Other candidate problems include graph isomorphism, the gap shortest lattice vector problem, and many hidden subgroup problems [254, 13]. While polynomial
time quantum algorithms have been found for a few hidden subgroup problems, particularly cases that are close to Abelian, these problems remain some of the most important open questions in the field
of quantum computation. Two special cases of the hidden subgroup problem have received the most attention: the symmetric group Sn , the full permutation group of n elements, and the dihedral group Dn
, the group of symmetries of a regular n-sided polygon. An early result of Beals [34] provided a quantum Fourier transform for the symmetric group, but a solution to the hidden subgroup problem for
the symmetric group continues to elude researchers. This problem is of particular interest since a solution would yield a solution to the graph isomorphism problem. The hidden subgroup problem for
the dihedral group attracted even more attention when Regev [237] showed in 2002 that any efficient algorithm to the dihedral hidden subgroup problem that uses Fourier sampling, a generalization of
Shor’s technique, would enable the construction of an efficient algorithm for the gap shortest vector problem, a problem of cryptographic interest. In 2003, Kuperberg found a subexponential (but
still superpolynomial) algorithm for the dihedral group [189], which Regev improved upon by reducing the space requirements to polynomial while retaining the subexponential time complexity [239].
Alagic et al. have extended these techniques to a solution of Simon’s problem for general non-Abelian groups [17]. Lamont surveys hidden subgroup results and techniques in [191]. In 2002, Hallgren
found an efficient quantum algorithm for solving Pell’s equation [146]. Solving Pell’s equation is believed to be harder than factoring or the discrete log problem. The security of the
Buchmann-Williams classical key exchange and the Buchmann-Williams public key cryptosystem is based on the difficulty of solving Pell’s equation. So even the Buchmann-Williams public key
cryptosystem, which was believed to have a stronger security guarantee than standard public key encryption algorithms, is now known to be insecure in a world with quantum computers. In 2003, van Dam,
Hallgren, and Ip found an efficient quantum algorithm for the shifted Legendre
13.2 Limitations of Quantum Computing
symbol problem [272]. The shifted Legendre symbol problem is the basis for the security of some algebraically homomorphic cryptosystems that are used, for example, in certain cryptographicgrade
random number generators. The existence of van Dam et al.’s algorithm means that quantum computers can predict these random number generators, thus rendering them insecure. In 2007, Farhi, √
Goldstone, and Gutmann [115] found a quantum algorithm for evaluating NAND trees in O( N ), a problem that had puzzled quantum computing researchers for many years. In the past five years, a new
family of quantum algorithms has been discovered that uses techniques of quantum random walks to solve a variety of problems. Childs et al. [80] solve a black box graph traversal problem in
polynomial time that cannot be solved in subexponential time classically. Magniez et al. [201] prove a Grover-type speedup result for a different graph problem using a quantum random walk approach.
Magniez and Nayak [200] apply quantum random walks to the problem of testing commutativity of a group, Buhrman and Špalek [75] to matrix product verification, and Ambainis [23] to element
distinctness. Krovi and Brun [186] study hitting times of quantum walks on quotient graphs. Both Ambainis [22] and Kempe [169] provide overviews of quantum walks and quantum walk–based algorithms.
Quantum learning theory [70, 246, 132, 160, 27] provides a conceptual framework that unites Shor’s algorithm and Grover’s algorithm. Quantum learning is part of computational learning theory that is
concerned with concept learning. A concept is modeled by a membership function, a Boolean function c : {0, 1}n → {0, 1}. Let C = {ci } be a class of concepts. Generally, a quantum learning problem
involves querying an oracle Oc for one of the concepts c in C, and the job is to discover the concept c. The types of oracles vary. A common one is a membership oracle, which upon input of x outputs
c(x). Common models include exact learning and probably approximately correct (PAC) learning. In the quantum case, oracles output a superposition upon input of a superposition of inputs. Servedio and
Gortler [132] establish a negative result, that the number of classical and quantum queries required for any concept class does not differ by more than a polynomial in either the exact or the PAC
model. On the positive side, the same paper shows that for computational efficiency, rather than query complexity, the story is quite different. In the exact model, the existence of any classical
one-way function guarantees the existence of a concept class that is polynomial-time learnable in the quantum case but not in the classical. For the PAC model, a slightly weaker result is known in
terms of a particular one-way function. 13.2 Limitations of Quantum Computing
Beals and colleagues [35] proved that, for a broad class of problems, quantum computation can provide at most a small polynomial speedup. Their proof established lower bounds on the number of time
steps any quantum algorithm must use to solve these problems. Their methods were used by others to provide lower bounds for other types of problems. Ambainis [21] found another powerful method for
establishing lower bounds. In 2002, Aaronson answered negatively the question of whether there could be efficient quantum algorithms for the collision problem [1]. His results were generalized by Shi
and himself [248, 6].
13 Further Topics in Quantum Information Processing
This result was of great interest because it showed that there could not exist a generic quantum attack on all cryptographic hash functions. Aaronson’s result says that any attack must use specific
properties of the hash function under consideration. Shor’s algorithms break some cryptographic hash functions, and quantum attacks on others may yet be discovered. Section 9.3 showed that Grover’s
search algorithm is optimal. In 1999, Ambainis [20], building on work of Buhrman and de Wolf [73] and Farhi, Goldstone, Gutmann, and Sipser [117], showed that for searching an ordered list, quantum
computation can give no more than a constant factor improvement over the best possible classical algorithms. Childs and colleagues [81, 82] improved estimates for this constant. Aaronson [5] provides
a high-level overview of the limits of quantum computation. 13.3 Further Techniques for Robust Quantum Computation
While quantum error correction is one of the most advanced areas of quantum information processing, many open questions remain. As more quantum information processing devices are built, finding
quantum codes or other robustness methods optimized for the particular errors to which the devices are most vulnerable will remain a rich area of research. For transmitting quantum information,
either as part of quantum communication protocols or to move information around inside a quantum computer, not only are efficient error detection and the trade-off between data expansion and strength
of the code important, but the decoding efficiency is as well. One longtime frustration has been the difficulty of using certain classical codes with efficient decoding properties, such as
low-density parity check (LDPC) codes, as the basis for constructing quantum codes with similarly efficient decoding. The duality constraint in the CSS code construction was too much of a barrier for
these codes, and no one knew what else to do. In 2006, Brun, Devetak, and Hsieh realized that by using a side resource of entanglement between the sender and the receiver, quantum versions of many
more classical codes, including LDPC codes, could be obtained [68, 137]. This construction may also be useful beyond quantum communication. Instead of encoding the states so that we can detect and
correct common errors, we may be able to place the states in subspaces unaffected by these errors. Such approaches, complementary to the error correcting codes we have seen, go under the various
headings of error avoiding codes, noiseless quantum computation, or, most commonly, decoherence-free subspaces. Under certain conditions, we expect a system to be subject to systematic errors
affecting all the qubits of the system. The quantum codes we have seen, while effective on errors involving small numbers of qubits, are not effective on systematic errors affecting all qubits. Lidar
and Whaley provide a detailed review of decoherence-free subspaces in [195]. Operator error correction [184, 185] provides a framework that unifies quantum error correcting codes and decoherence-free
subspaces. Quantum computers built according the topological model of quantum computation (described in section 13.4.4) would have robustness built in from the start. Here we give a few simple
examples to illustrate the general approach of decoherence-free subspaces.
13.3 Further Techniques for Robust Quantum Computation
Example 13.3.1 Systematic bit-flip errors. Suppose a system tends to be subject to errors that perform a quantum bit flip on all qubits of the system. Bit-flip errors have no effect on the states |++
and |−− (or any linear combination of them). For example, a bit-flip error takes
|−− =
1 (|0 − |1)(|0 − |1) 2
to 1 (|1 − |0)(|1 − |0) = |−−. 2 If we encode every |0 and |1 as two qubit states |++ and |−− respectively, we will have succeeded in protecting our computational states from all systematic bit-flip
errors by embedding them in states of a 2n-qubit system that are immune from these errors.
Example 13.3.2 Systematic phase errors. Suppose a system tends to be subject to errors that perform the same relative phase shift E = |0 0| + eiφ |1 1| on all qubits of the system. If we encode each
single-qubit state |0 and |1 as the two qubit states
1 |ψ0 = √ (|01 + |10) 2 and 1 |ψ1 = √ (|01 − |10), 2 the error becomes a physically irrelevant global phase, so the computational states are entirely protected from these errors: 1 1 (E ⊗ E) √ (|01 ±
|10) = √ (|0 ⊗ eiφ |1 ± eiφ |1 ⊗ |0) 2 2 1 = eiφ √ (|01 ± |10) 2 1 ∼ √ (|01 ± |10). 2 Thus, the two-dimensional space spanned by {|ψ0 , |ψ1 } can be used as a binary quantum system that is error-free
within an environment that produces only systematic relative phase errors.
13 Further Topics in Quantum Information Processing
We would like to combine these approaches to obtain an encoding that protects against all systematic qubit errors. A subspace that is immune to both systematic X and Z errors is certainly immune to Y
= ZX errors, and therefore to any systematic single-qubit error, since it is immune to any linear combination of these errors. The following example is due to Zanardi and Rasetti [292]. This example
was designed for the error environment of qubits encoded in photon polarization as affected by a quartz crystal. This decoherence-free subspace method has been experimentally verified by Kwiat et al.
[190]. Example 13.3.3 Systematic single-qubit errors. The reader can check that all quantum states
represented by the elements of the two-dimensional space spanned by the vectors |ϕ0 = 12 (|1001 − |0101 + |0110 − |1010) |ϕ1 = 12 (|1001 − |0011 + |0110 − |1100) are left invariant by systematic X
and Z errors. Since |ϕ0 and |ϕ1 are not orthogonal, we cannot encode |0 and |1 as these two vectors. By using the Gram-Schmidt process, we can find orthonormal vectors: we can replace |ϕ1 with |ϕ1 ,
the normalized component of |ϕ1 perpendicular to |ϕ0 , by taking |ϕ2 = |ϕ1 − ϕ0 |ϕ1 |ϕ0 , and then normalizing to obtain |ϕ1 = √
1 |ϕ2 ,
ϕ2 |ϕ2
which by construction is perpendicular to |ϕ0 . By encoding all |0 and |1 as |ϕ0 and |ϕ1 , we can protect against all systematic X and Z errors and therefore against all systematic single-qubit
errors. Thus, by embedding the states of an n-qubit system in the states of a 4n-qubit system, we have obtained a computational subspace immune to all systematic single-qubit errors. Decoherence-free
subspace approaches have been developed for a variety of complex situations. See [195] for a survey. 13.4 Alternatives to the Circuit Model of Quantum Computation
The circuit model of quantum computing of section 5.6 is well designed for comparisons between quantum algorithms and classical algorithms. We have seen its use in comparing the efficiency of quantum
algorithms to classical algorithms and for showing that any classical computation can be done on a quantum computer in comparable time. Other models rival the circuit model for inspiring the
discovery of new quantum algorithms or for giving insight into the limitations of quantum computation. Furthermore, other models better support certain promising approaches toward ways of physically
realizing quantum computers and understanding the robustness of these implementations.
13.4 Alternatives to the Circuit Model of Quantum Computation
Two significant alternatives to the circuit model have been developed so far: cluster state quantum computing and adiabatic quantum computing. The next four subsections briefly describe these two
models and their applications, holonomic quantum computation, a hybrid of adiabatic quantum computing and the standard circuit model, and topological quantum computing, which is related to holonomic
quantum computation. 13.4.1 Measurement-Based Cluster State Quantum Computation
The elegant cluster state model of quantum computation [235, 218] makes exceptionally clear use of quantum entanglement, quantum measurement, and classical processing. In contrast to the standard
circuit model, cluster state quantum computation makes no use of unitary operations in its processing of information; all computations are accomplished by measurement of qubits in a cluster state,
the maximally connected, highly persistent entangled states of section 10.2.4. In a cluster state algorithm, the order in which the qubits are measured is set; only the basis in which each of the
qubits is measured is determined by the results of previous measurements. The initial cluster state is independent of the algorithm to be performed; it depends only on the size of the problem to be
solved. All of the processing, including input and output, takes place entirely by a series of single-qubit measurements, so the entanglement between the qubits can only decrease in the course of the
algorithm. For this reason, cluster state quantum computation is sometimes called one-way quantum computation. In cluster state quantum computation, the entanglement creation and the computational
stages of a quantum computation are cleanly separated. Cluster state quantum computation has been shown to be computationally equivalent to the standard circuit model of quantum computation. Cluster
states, therefore, provide a universal entanglement resource for quantum computation. The proof of computational equivalence relies on a mapping of the time sequence of quantum gates in a quantum
circuit to a spatial dimension of the 2-D lattice in which the cluster state lives. The processing proceeds from left to right, with the input placed in states on the far left of the cluster, and the
output appearing in the states on the far right of the cluster once the algorithm is complete. A single qubit in the quantum circuit model is mapped to a row of qubits in the cluster state; thus, the
single qubits of the cluster state are distinct from the logical qubits being processed by the computation. Many qubits in the cluster are not associated with any qubit in the circuit model. These
qubits connect the qubit rows and, together with measurement, enable quantum gates to be carried out as the qubits of the cluster are measured from left to right. The measurements of qubits in a
single column can be carried out in parallel. General cluster state computations use more general structures than those arising as analogs of quantum circuits. For example, the measurements do not
necessarily proceed from left to right, and rows in the cluster state may have no obvious meaning. There is no reason for them to represent a logical qubit, a concept from the circuit model of
quantum computation that does not have an analog in general cluster state quantum computation. Any computation in the cluster state model partitions the cluster into sets of qubits Q1 , Q2 , . . . ,
QL . The qubits within a set can be measured in any order; in particular, they may be measured in parallel. All qubits in the set Qi must be measured before any qubit of Qi+1 is measured. How a qubit
in Qi+1 is measured may
13 Further Topics in Quantum Information Processing
depend on the results of measurements of qubits in Q1 , Q2 , . . . , QL . The interpretation of the final result may depend on measurements results obtained at earlier states. Raussendorf, Browne,
and Briegel [235] define the logical depth of a computation to be the minimum number of sets Qi needed to carry out the computation. Both the interpretation of the final results and the decision of
what basis to use for a measurement given the previous results require classical computation that must be taken into account in terms of the efficiency of the algorithm. For some computations, the
logical depth is surprisingly low. For example, take any quantum circuit consisting entirely of elements of the Clifford group, the group generated by the Cnot , Hadamard, and π/2-phase shift gates.
While the corresponding computation in the cluster state model proceeds by measuring columns of qubits from right to left, it turns out that for all cluster computations corresponding to Clifford
group circuits, one can simply measure all the qubits at once. Thus, the logical depth of computations using only Clifford gates is 1; there are no dependencies between the measurements needed to
accomplish the computation. This result implies that the only computation going on is the classical interpretation of the results and determination of intermediate measurements. Thus, a quantum
circuit consisting of entirely of Clifford gates has a classical analog of equivalent efficiency. This result, known as the Gottesman-Knill theorem [133], is not trivial in that, for example, the
Walsh-Hadamard transformation is contained in the Clifford group. The cluster state model provides a particularly simple proof of this theorem. The cluster state model is of great theoretical
interest since it clarifies the role of entanglement in quantum computation and provides means of analyzing quantum computation. It has also had substantial impact on approaches to building quantum
computers, particularly optical quantum computers. It will be discussed again in section 13.7 in that context. Furthermore, as will be discussed in section 13.9, it has clarified the role of
entanglement in quantum competition in surprising ways. 13.4.2 Adiabatic Quantum Computation
To describe adiabatic quantum computation, we must first describe the Hamiltonian framework for quantum mechanics on which it rests. Quantum systems evolve by unitary operators, so the state of any
system, initially in state | 0 , as it evolves over time t can be described by | t = Ut | 0 , where Ut is a unitary operator for each t. Furthermore, the evolution must be continuous and additive:
Ut1 +t2 = Ut2 Ut1 for all times t1 and t2 . Any unitary operator U can be written U = e−iH for some Hermitian H . Any continuous and additive family of unitary operators can be written as Ut = e−itH
for some Hermitian operator H called the Hamiltonian for the system. Schrödinger’s equation provides an equivalent formulation: the Hamiltonian H must satisfy i
d | (t) = H | (t) dt
using units in which Planck’s constant = 1. Let λ0 be the smallest eigenvalue of H . Any λ0 eigenstate of H is called a ground state of H . The Hamiltonian framework and Schrödinger’s equation can be
found in any quantum mechanics book.
13.4 Alternatives to the Circuit Model of Quantum Computation
To solve a problem using adiabatic quantum computation, an appropriate Hamiltonian H1 must be found, one for which a solution to the problem can be represented as the ground state of the Hamiltonian.
An adiabatic algorithm begins with the system in the ground state of a known and easily implementable Hamiltonian H0 . A path Ht is chosen between the initial Hamiltonian and the final Hamiltonian H
= H1 , and the Hamiltonian is gradually perturbed to follow this path. The theory of adiabatic quantum computation rests on the adiabatic theorem [210], which says that as long as the path is
traversed slowly enough the system will remain in the ground state, and thus at the end of computation it will be in the solutions state, the ground state of H1 . How slowly the path must be
traversed depends on the eigengap, the difference between the two lowest eigenvalues. In general, it is hard to obtain bounds on this gap, so the art of designing an adiabatic algorithm is first in
finding a mapping of the problem to an appropriate Hamiltonian, and then in finding a short path for which one can show that the eigengap never becomes too narrow. Adiabatic quantum computation was
introduced by Farhi, Goldstone, Gutmann, and Sipser [118]. Childs, Farhi, and Preskill [79] show that adiabatic quantum computation has some inherent protection against decoherence, which means that
it may be a particularly good model both for designing robust implementations of quantum computers and robust algorithms [79]. Roland and Cerf [243] show how to recapture Grover’s algorithm, and the
optimality proof, within the adiabatic context. Aharonov et al. [16] develop a model for adiabatic quantum computation and prove that it is computationally equivalent to universal quantum computation
in the circuit model. Other models of adiabatic computation exist. Some are equivalent in power only to classical computation [63], while for others, the extent of their power in not yet understood.
This situation complicates not only discussions of adiabatic quantum computing but also implementation efforts. For example, some small adiabatic devices have been built for which it has not been
possible to determine whether they perform universal quantum computation or not. Aharonov and Ta-Shma’s wide-ranging paper [15], after developing tools for adiabatic quantum computation, investigates
the use of adiabatic models for understanding which states, particularly superpositions of states drawn from probability distributions, can be efficiently generated. Initial interest centered on the
possibility of using adiabatic methods to develop a quantum algorithm to solve NP-complete problems [116, 78, 153], because adiabatic algorithms were not subject to the lower bound results proven for
other approaches. Vazirani and van Dam [273] and Reichardt [240] rule out a variety of adiabatic approaches to solving NP-complete problems in polynomial time. 13.4.3 Holonomic Quantum Computation
Holonomic, or geometric, quantum computation [293, 76] is a hybrid between adiabatic quantum computation and the standard circuit model in which the quantum gates are implemented via adiabatic
processes. Holonomic quantum computation makes use of non-Abelian geometric phases that arise from perturbing a Hamiltonian adiabatically along a loop in its parameter space. The phases depend only
on topological properties of the loop, and so are insensitive to perturbations.
13 Further Topics in Quantum Information Processing
This property means that holonomic quantum computation has good robustness with respect to errors in the control driving the Hamiltonian’s evolution. Early experimental efforts have been carried out
using a variety of underlying hardware. 13.4.4 Topological Quantum Computation
In 1997, prior to the development of holonomic quantum computation, Kitaev proposed topological quantum computing, a more speculative approach to quantum computing that also has excellent robustness
properties [174, 125, 233, 87]. Kitaev recognized that topological properties are totally unaffected by small perturbations, so encoding quantum information in topological properties would give
intrinsic robustness. The type of topological quantum computing Kitaev proposed makes use of the Aharonov-Bohm effect, in which a particle that travels around a solenoid acquires a phase that depends
only on how many times it has encircled the solenoid. This topological property is highly insensitive to even large disturbances in the particle’s path. Kitaev defined quantum computation in this
model and showed that, by using non-Abelian Aharonov-Bohm effects, such a quantum computer would be universal in the sense of being able to simulate computations in the quantum circuit model without
a significant loss of efficiency. However, only a few non-Abelian Aharonov-Bohm effects have been found in nature, and all of these are unsuitable for quantum computation. Researchers are working to
engineer such effects, but even the most basic building blocks of topological quantum computation have yet to be realized experimentally in the laboratory. In the long term, the robustness properties
of topological quantum computing may enable it to win out over other approaches. In the meantime, it is of significant theoretical interest. For example, it led to a novel type of quantum algorithm
that provides a polynomial time approximation of the Jones polynomial [11]. 13.5 Quantum Protocols
The most famous quantum protocols are quantum key distribution schemes, such as those of sections 2.4 and 3.4. Quantum key distribution was the first example of a quantum cryptographic protocol.
Since then, quantum approaches to a wide variety of cryptographic and communication tasks have been developed. Some quantum cryptographic protocols, such as the quantum key distribution schemes we
described, use quantum means to secure classical information. Others secure quantum information. Many are unconditionally secure in that their security is based entirely on properties of quantum
mechanics. Others are only quantum computationally secure in that their security depends on a problem being computationally intractable for quantum computers. For example, unconditionally secure bit
commitment is known to be impossible to achieve through either classical or quantum means [205, 197, 93]. Weaker forms of bit commitment exist. In particular, quantum computationally secure bit
commitments schemes exist as long as there exist quantum one-way functions [8, 106]. Kashefi and Kerenidis discuss the status of quantum one-way functions [168].
13.6 Insight into Classical Computation
Closely related to quantum key distribution schemes are protocols for unclonable encryption [136]. Uncloneable encryption is a symmetric key encryption scheme that guarantees that an eavesdropper
cannot even copy an encrypted message, say for later attempts at decryption, without being detected. In addition to providing a stronger security guarantee than most symmetric key encryption systems,
the keys can be reused as long as eavesdropping is not detected. Uncloneable encryption has strong ties with quantum authentication [33]. One type of authentication is digital signatures. Shor’s
algorithms break all standard digital signature schemes. Quantum digital signature schemes have been developed [139], but the keys involved can be used only a limited number of times. In this respect
they resemble classical schemes such as Merkle’s one-time digital signature scheme [207]. Some quantum secret sharing protocols protect classical information in the presence of eavesdroppers [151].
Others protect a quantum secret. Cleve et al. [86] provide quantum protocols for (k, n) threshold quantum secrets. Gottesman et al. [134] provide protocols for more general quantum secret sharing.
There is a strong tie between quantum secret sharing and CSS quantum error correcting codes. Quantum multiparty function evaluation schemes exist [91]. Fingerprinting is a mechanism for identifying
strings such that equality of two strings can be determined with high probability by comparing their respective fingerprints. It has been shown √ that classical fingerprints for bit strings of length
n need to be of at least length O( n). Buhrman et al. [72] show that a quantum fingerprint of classical data can be exponentially smaller; they can be constructed with only O(log(n)) qubits. In 2005,
Watrous [280] was able to show that many classical zero-knowledge interactive protocols are zero knowledge against a quantum adversary. A significant part of the challenge was to find a reasonable
and sufficiently general definition of quantum zero knowledge. The problems on which statistical zero-knowledge protocols are generally based are candidate NP-intermediate problems such as graph
isomorphism, so for this reason also zero-knowledge protocols are of interest for quantum computation. Aharonov and Ta-Shma [15] detail intriguing connections between statistical zero-knowledge and
adiabatic state generation. There is a close connection between quantum interactive protocols and quantum games. An introduction to this field is provided by [192]. Early work in this area includes a
discussion of a quantum version of the prisoner’s dilemma [110]. See Meyer [212] for a lively discussion of other quantum games. Gutoski and Watrous [145] tie quantum games to quantum interactive
proofs. 13.6 Insight into Classical Computation
A number of classical algorithmic results have been obtained by taking a quantum information processing viewpoint. Kerenidis and de Wolf [170] and Wehner et al. [282] use quantum arguments to prove
lower bounds for locally decodable codes, Aaronson [2] for local search, Popescu et al. [230] for the number of gates needed for a classical reversible circuit, and de Wolf [98] for matrix rigidity.
Aharonov and Regev [14] “dequantize" a quantum complexity result for a lattice problems to obtain a related classical result. The usefulness of the complex perspective
13 Further Topics in Quantum Information Processing
for evaluating real valued integrals is sometimes used as an analogy to explain this phenomenon. Drucker and de Wolf survey these and other results in [105]. We know of two additional examples that
were not included in their survey. One is the Gentry result [127] discussed at the end of the next paragraph. Another is an early example due to Kuperberg, his proof of Johansson’s theorem [187]. We
describe a couple of examples in greater detail. Cryptographic protocols usually rely on the empirical hardness of a problem for their security; it is rare to be able to prove complete, information
theoretic security. When a cryptographic protocol is designed based on a new problem, the difficulty of the problem must be established before the security of the protocol can be understood.
Empirical testing of a problem takes a long time. Instead, whenever possible, reduction proofs are given that show that if the new problem were solved it would imply a solution to a known hard
problem; the proofs show that the solution to the known problem can be reduced to a solution of the new problem. Regev [238] designed a novel, purely classical cryptographic system based on a certain
problem. He was able to reduce a known hard problem to this problem, but only by using a quantum step as part of the reduction proof. Thus, he has shown that if the new problem is efficiently
solvable in any way, there is an efficient quantum algorithm for the old problem. But it says nothing about whether there would be a classical algorithm. This result is of practical importance; his
new cryptographic algorithm is a more efficient lattice-based public key encryption system. Lattice-based systems are currently the leading candidate for public key systems secure against quantum
attacks. Four years after Regev’s original result, Peikert provided a completely classical reduction [224]. At the same conference, however, Gentry presented his spectacular result, a fully
homomorphic encryption system [128], answering a thirty-year open question. As part of his work, he uses a related, but different, quantum reduction argument for an otherwise completely classical
result [127]. In another spectacular, if less practical, result, Aaronson found a new solution to a notorious conjecture about a purely classical complexity class PP [4]. From 1972 until 1995, this
question remained open. Aaronson defines a new quantum complexity class PostBQP, an extension of the standard quantum complexity class BQP, motivated by the use of postselection in certain quantum
arguments. It takes him a page to show that PostBQP=PP, and then only three lines to prove the conjecture. The original 1995 proof, while entirely classical, was significantly more complicated. Thus,
it seems, for this question at least, the right way to view the classical class PP is through the eyes of quantum information processing. 13.7 Building Quantum Computers
DiVincenzo developed widely used requirements for the building of a quantum computer. Obtaining n qubits does not suffice, just like n bits, say n light switches, does not make a classical computer;
the bits or qubits must interact in a controllable fashion. It is relatively easy to obtain n qubits, but it is hard to get them to interact with each other and with control devices, while preventing
them from interacting with anything else. DiVincenzo’s criteria [104] are, roughly:
13.7 Building Quantum Computers
Scalable physical system with well-characterized qubits,
Ability to initialize the qubits in a simple state,
Robustness to environmental noise: long decoherence times, much longer than the gate operation time,
Ability to realize high fidelity universal quantum gates,
High-efficiency, qubit-specific measurements.
Two other criteria were added later in recognition of the need for flying qubits used to transmit information between different parts of a quantum computer: •
Ability to interconvert stationary and flying qubits,
Faithful transmission of flying qubits between specified locations.
DiVincenzo’s criteria are rooted in the standard circuit model of quantum computation. PérezDelgado and Kok [227] give more general criteria, including formal operational definitions of a quantum
computer, that are meant to encompass alternative models of quantum computation. There are daunting technical difficulties in actually building such a machine. Research teams around the world are
actively studying ways to build practical quantum computers. The field is changing rapidly. It is impossible even for experts to predict which of the many approaches are likely to succeed. Both [295]
and [157] contain detailed evaluations of the various approaches. No one has yet made a detailed proposal that meets all of the DiVincenzo criteria, let alone realize it in a laboratory. A
breakthrough will be needed to go beyond tens of qubits to hundreds of qubits. The earliest small quantum computers [176] used liquid NMR [129]. NMR technology was already highly advanced due to its
use in medicine. The NMR approach uses the nuclear spin state of atoms. Many copies of one molecule are contained in a macroscopic amount of liquid. A quantum bit is encoded in the average spin state
of a large number of nuclei. Each qubit corresponds to a particular atom of the molecule, so the atoms for one qubit can be distinguished from those of other qubits by their nuclei’s characteristic
frequency. The spin states can be manipulated by magnetic fields, and the average spin state can be measured with NMR techniques. NMR quantum computers work at room temperature. However, liquid NMR
has severe scaling problems—the measured signal scales as 1/2n with the number of qubits n—so liquid NMR appears unlikely to lead implementation efforts much longer, let alone achieve a scalable
quantum computer. As an example of how hard it is to predict which approaches are most likely to lead to a scalable quantum computer, in 2000 optical approaches were considered unpromising. Optical
methods were recognized as the unrivaled approach for quantum communications applications such as quantum key distribution, and also as flying qubits sending information between different parts of a
quantum computer, because photons do not interact much with other things and so have long decoherence times. This same trait, however, means that it is difficult to get photons to interact with each
other, which made them appear unsuitable as the fundamental qubits on which computation
13 Further Topics in Quantum Information Processing
would be done. While nonlinear optical materials induce some photon-photon interactions, no known material has a strong enough nonlinearity to act as a Cnot gate, and scientists doubt that such a
material will ever be found. Knill, Laflamme, and Milburn’s 2001 paper [179] showed how, by clever use of measurement, Cnot gates could be achieved, avoiding the issue of nonlinear optical elements
altogether. While this result, known as the KLM approach, was a huge breakthrough for the field of optical quantum computing, major difficulties remained. The overhead required by these methods was
enormous. In 2004, Nielsen showed how this overhead could be greatly reduced by combining the KLM approach with cluster state quantum computing. O’Brien [222] gives a brief but insightful overview of
optical approaches to quantum computers, now viewed as one of the more promising approaches in spite of the many hurdles that remain. Ion trap approaches are currently the most advanced approach that
appear possibly scalable. The field has made steady progress. In an ion trap quantum computer [84, 258], individual ions, each representing a qubit, are confined by electric fields. Lasers are
directed at individual ions to perform single-qubit quantum gates and two-qubit operations between adjacent ions. All operations necessary for quantum computation have been demonstrated in the
laboratory for small numbers of ions. To scale this technology, proposed architectures include quantum memory and processing elements where qubits are moved back and forth either through physical
movement of the ions [171] or by using photons to transfer their state [262]. More recently, architectural designs for quantum computers have begun to be studied. Van Meter and Oskin [211] survey
architectural issues and approaches for quantum computers. Many other approaches exist, including cavity QED, neutral atom, and various solid state approaches. See [295] and [157] for descriptions of
these approaches, their experimental status at the time the reports were written, and their perceived strengths and weaknesses. Hybrid approaches are also being pursued. Of particular interest are
interfaces between optical qubits and qubits in some of these other forms. Once a quantum information processing device is built, it must be tested to determine if it works as expected and to learn
what sorts of errors occur. Finding good, efficient methods of testing is a far from trivial task, given the exponentially large state space, and that measurement affects the state. Quantum state
tomography studies methods for experimentally characterizing a quantum state by examining multiple copies of the state. Quantum process tomography aims to characterize experimentally sequences of
operations performed by a device. Early work includes Poyatos et al. [231, 232] and Chuang and Nielsen [83]. D’Ariano et al. provide a review of quantum tomography [94]. While a full characterization
of an n-qubit system requires exponentially many probes of the system, some features can be determined with less. Of particular interest is determining the decoherence to which a process is
subjected. A recent breakthrough by Emerson et al. provides a symmetrization process that reduces the number of probes needed to characterize the decoherence to only polynomially many [113, 28]. The
efforts and success in creating highly entangled states for use in quantum information processing devices have found a number of other applications, and they have enabled deeper experimental
exploration of quantum mechanics [157, 295]. Highly entangled states, and the
13.8 Simulating Quantum Systems
improvements in quantum control, have been used in quantum microlithography to affect matter at scales below the wavelength limit and in quantum metrology to achieve extremely accurate sensors.
Applications include clock accuracy beyond that of current atomic clocks, which are limited by the quantum noise of atoms, optical resolution beyond the wavelength limit, ultrahigh resolution
spectroscopy, and ultraweak absorption spectroscopy. 13.8 Simulating Quantum Systems
A major application of quantum computers is to the simulation of quantum systems. Long before we have quantum computers capable of simulating any quantum system, special-purpose quantum devices
capable of simulating small quantum systems will be built. The simulations run on these special purpose devices will have applications in fields ranging from chemistry to biology to material science.
They will also support the design and implementation of yet larger special purpose devices, a process that ideally leads all the way to the building of scalable general-purpose quantum computers.
Early work on quantum simulation of quantum systems includes [285, 196, 289]. Somma et al.’s overview [257] discusses what types of physical problems simulation on quantum computers could solve.
Clearly, a simulation cannot efficiently output the amplitudes of the state, as expressed in the standard basis, at all times, since even at just one point in time this information can be exponential
in the size of the system. What is meant by a full simulation of a quantum system by a quantum computer is an algorithm that gives a measurement outcome with the same probability as an analogous
measurement on the actual system no matter when or what measurement is performed. Even on a universal quantum computer, there are limits to what information can be gained from a simulation. For some
quantities of interest, it is not obvious how to extract efficiently that information from a simulation; for some quantities there may be an information theoretic barrier, for others algorithmic
advances are needed. Many quantum systems can be efficiently simulated classically. After all, we live in a quantum world but nevertheless have been able to use classical methods to simulate a wide
variety of natural phenomena effectively. Some entangled quantum systems can be efficiently simulated classically [278]. The question of which quantum systems can be efficiently simulated classically
remains open. New approaches to classical simulation of quantum systems continue to be developed, many benefiting from the quantum information processing viewpoint [249, 204]. The quantum information
processing viewpoint has also lead to improvements in a commonly used classical approach to simulating quantum systems, the DMRG approach [276]. While universal quantum computers will be able to
simulate a wide variety of quantum systems, they cannot efficiently simulate some theoretical quantum systems, systems that satisfy Schrödinger’s equation but have not been found in nature. They
cannot simulate efficiently, even approximately, most quantum systems in the theoretical sense, abstract systems whose dynamics are described by e−itH for some Hamiltonian H . The proof of this fact
follows directly from the fact most unitary operators are not efficiently implementable. It is conjectured [99], but not
13 Further Topics in Quantum Information Processing
known, that all physically realizable quantum systems are efficiently simulatable on a quantum computer. If it turns out that this conjecture is wrong and a natural phenomenon is discovered that is
not efficiently simulatable on quantum computers as we have defined them, then we will have to revise our notion of a quantum computer to incorporate this phenomenon. But we would also have
discovered an additional, potentially powerful, computational resource. 13.9 Where Does the Power of Quantum Computation Come From?
Entanglement is the most common answer given as to where the power of quantum computation comes from. Other common answers include quantum parallelism, the exponential size of the state space, and
quantum Fourier transforms. Section 7.6 discussed the inadequacy of quantum parallelism and the size of the state space as answers. Quantum Fourier transforms, while central to most quantum
algorithms, cannot be the answer in light of the result, mentioned in the reference section of chapter 7, that quantum Fourier transforms can be efficiently simulated classically. The rest of this
section is devoted to explaining why the answer entanglement is also unsatisfactory, followed by a challenge to our readers to contribute to ongoing efforts to understand what Vlatko Vedral [274]
terms “the elusive source of quantum effectiveness." One reason entanglement is so often cited as the source of quantum computing’s power is Jozsa and Linden’s result [167] that any pure state
quantum algorithm achieving an exponential speedup over classical algorithms must make use of entanglement between a number of qubits that increases with the size of the input to the algorithm. In
the same paper, however, Jozsa and Linden speculate that, in spite of this result, entanglement should not be viewed as the key resource for quantum computation. They suggest that similar results can
be proved for other properties quite different from entanglement. For example, the Gottesman-Knill theorem, discussed in section 13.4.1, implies that states that do not have polynomially sized
stabilizer descriptions are also essential for quantum computation. This property is distinct from entanglement. Since the Clifford group contains the Cnot , this set of states includes certain
entangled states. An analog of Jozsa and Linden’s result does not hold for less dramatic improvements over the classical case. In fact, improvements can be obtained with no entanglement whatsoever;
Meyer [213] shows that in the course of the Bernstein-Vazirani algorithm, which achieves an n to 1 reduction in the number of queries required, no qubits become entangled. More obviously, there exist
other applications of quantum information processing that require no entanglement. For example, the BB84 quantum key distribution protocol makes no use of entanglement. Looking at the question from
the opposite side, many entangled systems have been shown to be classically simulatable [278, 204]. The cluster state model of quantum computation, on the other hand, suggests the centrality of
entanglement to quantum computation. Other closely related models with other types of highly entangled initial states have been shown to be universal for quantum computation. While it was known that
these states are, in some measures of entanglement, far from maximally entangled,
13.10 What if Quantum Mechanics Is Not Quite Correct?
many researchers conjectured that in theory most classes of sufficiently entangled quantum states could be used as the basis of universal one-way quantum computation, but that finding measurement
strategies for many of these classes might be prohibitively difficult. This conjecture, however, turns out to be false. Two groups of researchers [142, 64] showed that most quantum states are too
entangled to be useful as a substrate for universal one-way quantum computation. For a few months, it was thought that perhaps these results would not apply to efficiently constructable quantum
states, but Low [199] quickly exhibited classes of efficiently constructable quantum states that were too entangled to be useful as the basis for one-way quantum computation. Most of these states,
however, are useful for quantum information processing applications such as quantum teleportation. These observations prompt two questions: what types of entanglement are useful, and for what. As
mentioned in chapter 10, multipartite entanglement remains only poorly understood. Another intriguing challenge is to find a view of quantum information processing √ that makes obvious its
limitations. For example, is there a vantage point from which the ( N ) lower bound on quantum algorithms for exhaustive search, proved in section 9.3, becomes a one-line observation? The route
toward understanding what aspects of quantum mechanics are responsible for the power of quantum information processing is even less obvious. We hope readers of this book will contribute toward an
improved understanding of these fundamental questions. 13.10 What if Quantum Mechanics Is Not Quite Correct?
Quantum mechanics may be wrong. Physicists have not yet understood how to reconcile quantum mechanics with general relativity. A complete physical theory would need to make modifications to one of
general relativity or quantum mechanics, possibly both. Any modifications to quantum mechanics would have to be subtle, however; quantum mechanics is one of the most tested theories of all time, and
its predictions hold to great accuracy. Most of the predictions of quantum mechanics will continue to hold, at least approximately, once a more complete theory is found. Since no one knows how to
reconcile the two theories, no one knows what, if any, modifications would be necessary. Once the new physical theory is known, its computational power can be analyzed. In the meantime, theorists
have looked at what computational power would be possible if certain changes in quantum mechanics were made. So far these changes imply greater computational power rather than less; computers built
on those principles could do everything a quantum computer could do and substantially more. For example, Abrams and Lloyd [7] showed that if quantum mechanics were nonlinear, even slightly,
computation using that nonlinearity could solve all problems in the class #P, a class that contains all NP problems and substantially more, in polynomial time. Aaronson [4] showed that if a certain
exponent in the axioms of quantum mechanics were anything other than 2, all PP problems, another class substantially larger than NP, would be solvable in polynomial time. These results mean that
modification to quantum mechanics would not necessarily destroy the power obtained
13 Further Topics in Quantum Information Processing
by computers making use of these physical principles; in fact, in many cases it would increase the power. With these results in mind, Aaronson [5] suggests that limits on computational power should
be considered a fundamental principle guiding our search for physical theories of the universe, much as is done for the laws of thermodynamics. Many intriguing questions as to the extent and source
of the power of quantum computation remain, and they are likely to remain for many years while we humans struggle to understand what Nature allows us to compute efficiently and why.
Some Relations Between Quantum Mechanics and Probability Theory
The inherently probabilistic nature of quantum mechanics is well known, but the close relationship between the formal structures underlying quantum mechanics and probability theory is surprisingly
neglected. This appendix describes standard probability theory in a somewhat nonstandard way, in a language closer to the standard way of describing quantum mechanics. This rephrasing illuminates the
parallels and differences between the two theories. Probability theory helps in understanding quantum mechanics, not only by placing structures such as tensor products in a more familiar context, but
also because the mathematical formalisms underlying quantum theory can be precisely and usefully viewed as an extension of probability theory. This view clarifies relationships between quantum theory
and probability theory, including differences between entanglement and classical correlation. A.1 Tensor Products in Probability Theory
Tensor products are rarely mentioned in probability textbooks, but the tensor product is as much a part of probability theory as of quantum mechanics. The tensor product structure inherent in
probability theory should be stressed more often; one of the sources of mistaken intuition about probabilities is a tendency to try to impose the more familiar direct product structure on what is
actually a tensor product structure. Let A be a finite set of n elements. A probability distribution μ on A is a function μ : A → [0, 1] such that a∈A μ(a) = 1. The space P A of all probability
distributions over A has dimension n − 1. We can view P A as the (n − 1)-dimensional simplex σn−1 = {x ∈ Rn |xi ≥ 0, x1 + x2 + · · · + xn = 1}, which is contained in the n-dimensional space RA , the
space of all functions from A to R, RA = {f : A → R} (see figure A.1). For n = 2, the simplex σn−1 is the line segment from (1, 0) to (0, 1). Each vertex of the simplex corresponds to an element a ∈
A in that it represents the probability distribution
Appendix A Some Relations Between Quantum Mechanics and Probability Theory
Figure A.1 Simplex σ2 , which corresponds to the set of all probability distributions over a set A of three elements.
that is 1 on a and 0 for all other elements of A. An arbitrary probability distribution μ maps to the point in the simplex x = (μ(a1 ), μ(a2 ), . . . , μ(an )). Let B be a finite set of m elements.
Let A × B be the Cartesian product A × B = {(a, b)|a ∈ A, b ∈ B}. What is the relation between P A×B , the space of all probability distributions over A × B, and the spaces P A and P B ? The tempting
guess is not correct: P A×B = P A × P B . The following dimension check shows that this relationship cannot hold. First, consider the relationship between RA×B and RA and RB . Since A × B has
cardinality |A × B| = |A||B| = nm, RA×B has dimension nm, which is not equal to n + m, the dimension of RA × RB . Since dim(P A ) = dim(RA ) − 1, dim(P A×B ) = nm − 1, which is not equal to n + m −
2, the dimension of P A × P B . Thus, P A×B = P A × P B . Instead, RA×B is the tensor product RA ⊗ RB of RA and RB , and P A×B ⊂ RA ⊗ RB . Before showing that this relationship holds, we give an
example to help build intuition. Example A.1.1 Let A0 = {00 , 10 }, A1 = {01 , 11 }, and A2 = {02 , 12 }. Let 10 and 00 correspond
to whether or not the next person you meet is interested in quantum mechanics, A1 to whether she knows the solution to the Monty Hall problem, and A2 to whether she is at least 5 6 tall. So 10 11 02
corresponds to someone under 5 6 who is interested in quantum mechanics and knows the solution to the Monty Hall problem. We often write 110 instead of 10 11 02 ; the subscripts are implied by the
position. A probability distribution over the set of eight possibilities, A0 × A1 × A2 , has form p = (p000 , p001 , p010 , p011 , p100 , p101 , p110 , p111 ).
A.1 Tensor Products in Probability Theory
More generally, a probability distribution over A0 × A1 × · · · × Ak , where the Ai are all 2 element sets, is a vector of length 2k . We always order the entries so that the binary subscripts
increase. Thus, the dimension of the space of probability distributions over the Cartesian product of n two-element sets increases exponentially with n. This paragraph shows that RA×B = RA ⊗ RB .
Given functions f : A → R and g : B → R, define the tensor product f ⊗ g : A × B → R by (a, b) → f (a)g(b). The reader should check that this definition satisfies the axioms for a tensor product.
Furthermore, the linear combination of functions in RA×B is a function in RA×B . Thus RA ⊗ RB ⊆ RA×B . Conversely, we must show that any function h ∈ RA×B can be written as a linear combination of
functions fi ⊗ gi where fi ∈ RA and gi ∈ RB . Define a family of functions fbA ∈ RA , one for each b ∈ B, by fbA : A → R a → h(a, b). Similarly, for each a ∈ A, define gaB : B → R b → h(a, b).
Furthermore, define the probability distributions δaA : A → R 1 if a = a a → 0 otherwise and δbB : B → R 1 if b = b b → 0 otherwise. Then h(a, b) = a ∈A δaA (a)gaB (b), so h = a ∈A δaA ⊗ gaB .
Therefore, h ∈ RA ⊗ RB . For completeness, we mention that by symmetry h = b ∈B fbA ⊗ δbB . Now let us restrict our attention to probability distributions. If μ and ν are probability distributions,
then so is μ ⊗ ν: (μ ⊗ ν)(a, b) = μ(a)ν(b) (a,b)∈A×B
ν(b) = 1.
Appendix A Some Relations Between Quantum Mechanics and Probability Theory
Furthermore, the linear combination of probability distributions is a probability distribution as long as the linear factors sum to 1. Conversely, we show that any probability distribution η ∈ P A×B
is the linear combination of tensor products of probability distributions in P A and P B with linear factors summing to 1. Define a family of probability distributions, one for each a ∈ A, hBa : B →
R η(a, b) . b ∈B η(a, b )
b → Let ca =
η(a, b) =
η(a, b) and cb =
η(a, b). Observe that
a∈A ca
= 1. Then
ca hBa δaA
a ∈A
ca δaA ⊗ hBa .
a ∈A
Since δaA is a probability distribution in P A , every probability distribution over A × B is in PA ⊗ PB. A joint distribution μ ∈ P A×B is independent or uncorrelated with respect to the
decomposition A P ⊗ P B if it can be written as a tensor product μA ⊗ μB of distributions μA ∈ P A and μB ∈ P B . The vast majority of joint distributions do not have this form, in which case they
are correlated. For any joint distribution μ ∈ P A×B , define a marginal distribution μA ∈ P A by μA : a → μ(a, b). b∈B
An uncorrelated distribution is the tensor product of its marginals. Other distributions cannot be reconstructed from their marginals; information has been lost. One of the sources of mistaken
intuition about probabilities is a tendency to try to impose the more familiar direct product structure, which does support reconstruction, on what is actually a tensor product structure; the
relationship between a distribution and its marginals properly understood only within a tensor product structure. A distribution μ : A → R that is concentrated entirely at one element is said to be
pure; on a set A of n elements there are exactly n pure distributions μa : A → [0, 1], one for each element of A, where 1 if a = a μa : a → 0 otherwise. These are exactly the distributions that
correspond to the vertices of the simplex. All other distributions are said to be mixed. When an observation is made, the probability distribution is updated accordingly. All states incompatible with
the observation are ruled out, and the remaining probabilities are normalized
A.1 Tensor Products in Probability Theory
to sum to 1. Much noise is made about the collapse of the state due to quantum measurement. But this collapse occurs in classical probability; it is known as updating a probability distribution in
light of new information. Example A.1.2 Suppose your friend is about to toss two fair coins. The probability distribution
for the four outcomes HH, HT, TH, and TT is pI = (1/4, 1/4, 1/4, 1/4). After she tosses the two coins, she tells you that the two coins agreed. To compute the new probability distributions, the
possibilities compatible with your friend’s observation, HT and TH, are ruled out, and the remaining possibilities are normalized to sum to 1, resulting in the probability distribution pF = (1/2, 0,
0, 1/2).
Example A.1.3 Let us return to the example of the traits for the next person you meet. Unless
you know all of these traits, the distribution pI = (p000 , . . . , p111 ) is a mixed distribution. When you meet the person you can observe her traits. Once you have made these observations, the
distribution collapses to a pure distribution. For example, if the person is interested in quantum mechanics, does not know the solution to the Monty Hall problem, and is 5 8 , the collapsed
distribution is pF = (0, 0, 0, 0, 0, 1, 0, 0). The true surprise in quantum mechanics is that quantum states cannot generally be modeled by probability distributions — the content of Bell’s theorem.
Overly simplified versions of the EPR paradox, in which only one basis is considered, reduce to an unsurprising classical result that instant, faster-than-light knowledge of a faraway state may be
possible upon the observation of a local state. Example A.1.4 Suppose someone prepares two sealed envelopes with identical pieces of paper and sends them to opposite sides of the universe. Half the
time, both envelopes contain 0; half the time, 1. The initial distribution is pI = (1/2, 0, 0, 1/2). If someone then opens one of the envelopes and observes a 0, the state of the contents of the
other envelope is immediately known — known faster than light can travel between the envelopes — and the distribution after the observation is pF = (1, 0, 0, 0).
To understand fully the relationship between quantum mechanics and probability theory, it is useful to view probability distributions as operators. Consider the set of linear operators MA = {M : RA →
RA }. To every function f : A → R, there is an associated operator Mf : RA → RA given by Mf : g → f g. In particular, for any probability distribution μ on A, there is an associated operator Mμ : RA
→ RA . An operator M is said to be a projector if M 2 = M. The set of probability distributions μ whose corresponding operators Mμ are projectors is exactly the set
Appendix A Some Relations Between Quantum Mechanics and Probability Theory
of pure distributions. The matrix for the operator corresponding to a function is always diagonal. For a probability distribution, this matrix is trace 1 as well as diagonal. For example, the
operator corresponding to the probability distribution pI = (1/2, 0, 0, 1/2) has matrix ⎛ ⎞ 1/2 0 0 0 ⎜ 0 0 0 0 ⎟ ⎜ ⎟ ⎝ 0 0 0 0 ⎠. 0
Updating the probability distribution with information from an observation involves setting some of the matrix entries to zero and renormalizing the diagonal to sum to 1. Example A.1.5 The matrix for
the initial probability distribution in example A.1.2 is
⎞ 1/4 0 0 0 ⎜ 0 1/4 0 0 ⎟ ⎜ ⎟. ⎝ 0 0 1/4 0 ⎠ 0 0 0 1/4 The matrix for the updated probability distribution after the measurement involves setting the probabilities of H T and T H to 0 and
renormalizing the matrix to obtain a trace 1 matrix: ⎛ ⎞ 1/2 0 0 0 ⎜ 0 0 0 0 ⎟ ⎜ ⎟ ⎝ 0 0 0 0 ⎠. 0 0 0 1/2
Example A.1.6 The matrix for the initial probability distribution in example A.1.3 is
1/2 0 0 ⎜ 0 0 0 ⎜ ⎝ 0 0 0 0 0 0
⎞ 0 0 ⎟ ⎟. 0 ⎠ 1/2
The matrix for the updated probability distribution after the envelope has been opened involves setting the probability of both envelopes containing 1 to 0 and renormalizing the matrix to obtain a
trace 1: ⎛ ⎞ 1 0 0 0 ⎜ 0 0 0 0 ⎟ ⎜ ⎟ ⎝ 0 0 0 0 ⎠. 0 0 0 0
A.2 Quantum Mechanics as a Generalization of Probability Theory
A.2 Quantum Mechanics as a Generalization of Probability Theory
The remainder of this appendix relies on the density operator formalism and the notions of pure and mixed quantum from section 10.1. This section describes how pure and mixed quantum states
generalize the classical notion of pure and mixed probability distributions. This viewpoint helps clarify the distinction between quantum entanglement and classical correlations in mixed quantum
states. Let ρ be a density operator. Section 10.1.2 showed that every density operator ρ can be written as a probability distribution over pure quantum states i pi |ψi ψi |, where the |ψi are
mutually orthogonal eigenvectors of ρ, and the pi are the eigenvalues, with pi ∈ [0, 1] and pi = 1. Conversely, any probability distribution μ over a set of orthogonal quantum states |ψ1 , |ψ2 , . .
. , |ψL with μ : |ψi → pi has a corresponding density operator ρμ = i pi |ψi ψi |. In the basis {|ψi }, the density operator ρμ is diagonal: ⎞ ⎛ p1 ⎟ ⎜ p2 ⎟ ⎜ ⎟. ⎜ .. ⎠ ⎝ . pL Thus, a probability
distribution over a set of orthonormal quantum states {|ψi } can be viewed as a trace 1 diagonal matrix acting on RL . Under the isomorphism between RL and the subspace of V generated by |ψ1 , |ψ2 ,
. . . , |ψL , the density operator ρμ realizes the operator Mμ of section A.1; a probability distribution over a set of orthonormal quantum states {|ψi } can be viewed as a trace 1 diagonal matrix
acting on RL . In this way, density operators are a direct generalization of probability distributions. Although every density operator can be viewed as a probability distribution over a set of
orthogonal quantum states, this representation is not unique in general. More importantly, for most pairs of density operators ρ1 and ρ2 , there is no basis over which both ρ1 and ρ2 are diagonal.
Thus, although each density operator of dimension N can be viewed as a probability distribution over N states, the space of all density operators is much larger than the space of probability
distributions over N states; the space of all density operators contains many different overlapping copies of the space of probability distributions over N states, one for each orthonormal basis. Let
ρ : V → V be a density operator. By exercise 10.4 a density operator ρ corresponds to a pure state if and only if it is a projector. This statement is analogous to that for probability distributions;
the pure states correspond exactly to rank 1 density operators, and mixed states have rank greater than 1. As explained in section 10.3, density operators are also used to model probability
distributions over pure states, particularly probability distributions over the possible outcomes of a measurement yet to be performed. This use is analogous to the classical use of probability
distributions to model the probabilities of possible traits before they can be observed.
Appendix A Some Relations Between Quantum Mechanics and Probability Theory
A pure quantum state |ψ is entangled with respect to the tensor decomposition into single qubits if it cannot be written as the tensor product of single-qubit states. For a mixed quantum state, it is
important to determine if all of its correlation comes from being a mixture in the classical sense or if it is also correlated in a quantum fashion. A mixed quantum state ρ : V ⊗ W → V ⊗ W is said to
be uncorrelated with respect to the decomposition V ⊗ W if ρ = ρV ⊗ ρW for some density operators ρV : V → V and ρW : W → W . Otherwise ρ is said to be correlated. A mixed quantum state ρ is said to
be separable if it can be written ρ = Lj=1 pj |ψjV ψjV | ⊗ |φjW φjW | where |ψjV ∈ V and |φjW ∈ W . In other words, ρ is separable if all the correlation comes from its being a classical mixture of
uncorrelated quantum states. If a mixed state ρ is not separable, it is entangled. For example, the mixed state ρcc = 12 (|00 00|) + (|11 11|) is classically correlated but not entangled, whereas the
Bell state |+ + | = 12 (|00 + |11)( 00| + 11|) is entangled. The marginals of a pure distribution are always pure, but the analogous statement is not true for quantum states; all of the partial
traces of a pure state are pure only if the original pure state was not entangled. The partial traces of the Bell state |+ , a pure state, are not pure. Most pure quantum states are entangled,
exhibiting quantum correlations with no classical analog. All pure probability distributions are completely uncorrelated. Classical and quantum analogs: Classical probability
Quantum mechanics
probability distribution μ viewed as operator Mμ
density operator ρ
pure distribution: Mμ is a projector
pure state: ρ is a projector
simplex: σn−1 = {x ∈ Rn |xi ≥ 0, x1 + · · · + xn = 1}
Bloch region: set of trace 1 positive Hermitian operators
marginal distribution
partial trace
A distribution is uncorrelated if it is the tensor product of its marginals
A state is uncorrelated if it is the tensor product of its partial traces
Key difference: Classical
pure distributions are always uncorrelated
pure states contain no classical correlation but can be entangled
A marginal of a pure distribution is a pure distribution
The partial trace of a pure state may be a mixed state
A.4 Exercises
A.3 References
The view of quantum mechanics as an extension of probability theory is discussed in many quantum mechanics references, particularly those concerned with the deeper mathematical aspects of the theory.
Aaronson gives a playful account [3]. Rieffel treats this subject in [241]. In their chapter on quantum probability, Kitaev et al. outline parallels between quantum mechanics and probability theory
[175]. Kuperberg’s A Concise Introduction to Quantum Probability, Quantum Mechanics, and Quantum Computation also serves as an excellent reference [188]. Sudbery [267] gives a brief account in his
section of Statistical Formulations of Classical and Quantum Mechanics. An early account of some of these ideas can be found in Mackey’s Mathematical Foundations of Quantum Mechanics. Strocchi’s An
Introduction to the Mathematical Structure of Quantum Mechanics gives a detailed and readable account [266]. A number of papers by Summers, including [236], address relations and distinctions between
quantum mechanics and probability theory. A.4 Exercises Exercise A.1. Show that an independent joint distribution is the tensor product of its marginals. Exercise A.2. Show that a general
distribution cannot be reconstructed from its marginals. Exhibit three distinct distributions with the same marginals. Exercise A.3. a. Show that the tensor product of a pure distribution is pure. b.
Show that any distribution is a linear combination of pure distributions. Conclude that the set
of distributions on a finite set A is convex. c. Show that any pure distribution on a joint system A × B is uncorrelated. d. A distribution is said to be extremal if it cannot be written as a linear
combination of other
distributions. Show that the extremal distributions are exactly the pure distributions. Exercise A.4. Show that the probability distributions μ whose corresponding operators Mμ are
projectors are exactly the pure distributions. Exercise A.5. For each of the states |0, |−, and |i =
√1 (|0 + i|1), 2
give the matrix for the corresponding density operator in the standard basis, and write each of these states as a probability distribution over pure states. For which of these states is this
distribution unique? Exercise A.6. a. Give an example of three density operators no two of which can be simultaneously diagonalized
in that there does not exist a basis with respect to which both are diagonal. b. Show that if a set of density operators commute, then they can be simultaneously diagonalized.
Appendix A Some Relations Between Quantum Mechanics and Probability Theory
Exercise A.7. Show that the binary operator f ⊗ g : (a, b) → f (a)g(b) for f ∈ RA and g ∈ RB
satisfies the relations defining a tensor product structure on RA×B given in section 3.1.2. Exercise A.8. Show that a separable pure state must be uncorrelated. Exercise A.9. Show that if a density
operator ρ ∈ V ⊗ W is uncorrelated with respect to the tensor decomposition V ⊗ W , then it is the tensor product of its partial traces with respect to V and W .
Solving the Abelian Hidden Subgroup Problem
This appendix covers the solution to the Abelian hidden subgroup problem using a generalization of Shor’s factoring algorithm. Recall from box 8.4 that any finite Abelian group can be written as the
product of cyclic groups. Let G be a finite Abelian group with cyclic decomposition G ∼ = Zn0 × · · · × ZnL . Suppose G contains a subgroup H < G that is implicitly defined by a function f on G in
that f is constant and distinct on every coset of H . Find a set of generators for H . This appendix shows that, for finite Abelian groups, if Finite Abelian Hidden Subgroup Problem
Uf : |g|0 → |g|f (g) can be computed in poly-log time, then generators for H can be computed in poly-log time. This appendix makes use of deeper aspects of group theory, such as group
representations, than the rest of the book. Basic elements of group theory were reviewed in the boxes accompanying section 8.6. Section B.1 reviews group representations of finite Abelian groups,
including Schur’s lemma. Section B.2 defines quantum Fourier transforms over finite Abelian groups. Section B.3 explains how these quantum Fourier transforms enable the solution of the Abelian hidden
subgroup problem. Section B.4 looks at Simon’s problem and Shor’s factoring algorithm as instances of this general solution to the Abelian hidden subgroup problem. The appendix concludes in section
B.5 with a few remarks on the non-Abelian hidden subgroup problem. B.1 Representations of Finite Abelian Groups
A representation of an Abelian group G is a group homomorphism χ from G to the multiplicative group of complex numbers C: χ : G → C.
Appendix B Solving the Abelian Hidden Subgroup Problem
More generally, representations of groups are group homomorphisms into the space of linear operators on a vector space. However, in the Abelian case it suffices to consider only characters, the
representations into the multiplicative group of complex numbers. For the additive group Zn , the homomorphism condition implies that any representation χ of Zn must send 0 → 1, and the generator 1
of Zn must map to one of the n roots of unity since n
implies n 2
χ (1) = χ (0) = 1.
Since χ (1) determines the image of all other elements in Zn there can be at most n representations. Any nth root of unity works, so the n representations χj 2πi jx χj : x → exp n for all j ∈ Zn form
the complete set of representations of Zn . Many of the representations are not one-to-one: for example the trivial representation that we have labeled by 0 ∈ Zn sends all group elements to 1. We
have labeled the representations by group elements j ∈ Zn in one way. We use this labeling as our standard labeling throughout this appendix. Other labelings by group elements are possible. More
generally, for any Abelian group, the homomorphism condition χ (gh) = χ (g)χ (h) implies that χ (e) = 1, χ (g −1 ) = χ (g), and that every χ (g) is a kth root of unity, where k is the order of g. An
Abelian group of order |G| has exactly |G| distinct representations χi . Example B.1.1 The two representations for Z2 are χi (j ) = −1ij or
χ0 (x) = 1 χ1 (x) =
1 −1
if x = 0 if x = 1.
Example B.1.2 The four representations χi (j ) = exp(2π i 4 ) of Z4 are given in the following ij
B.1 Representations of Finite Abelian Groups
χ0 χ1 χ2 χ3
1 i −1 −i
1 −1 1 −1
1 −i −1 i
The representations of a product Zn × Zm can be defined in terms of the representation of each of its factors. Let χi be the n different representations of Zn and χj be the m different
representations of Zm . Then χˆ ij ((g, h)) = χi (g)χj (h) are all nm distinct representations of Zn × Zm . We have labeled these representations by group elements (i, j ) ∈ Zn × Zm . Example B.1.3
The 2n representations of Zn2 have a particularly nice form. If we write each
element b of Zn2 as b = (b0 , b1 , . . . , bn−1 ), where each bi is a binary variable, then the group representation χb is the n-way product of the two representations χ0 and χ1 for Z2 , χb (a) = χb0
(a0 ), . . . , χbn−1 (an−1 ) = (−1)a·b ,
where a · b is the standard dot product of the vectors a and b. Since any finite Abelian group is isomorphic to a finite product Zn0 × · · · × Znk of cyclic groups, the definition of χi , together
with the result about representations for product groups, provides an effective way to construct all of the representations for any finite Abelian group. These representation may be labeled by group
elements as before. ˆ where For Abelian groups, the set of representations itself forms a group denoted by G •
the representation χ (g) = 1 for all g ∈ G is the identity,
the product χ = χi ◦ χj of two representations χi and χj defined by χ (g) = χi (g)χj (g) for all g ∈ G is itself a representation, and •
the inverse of any representation χ is defined by
χ −1 (g) = 1/χ (g) = χ (g) for all g ∈ G. For a subgroup H < G, let H ⊥ = {g ∈ G|χg (h) = 1, ∀h ∈ H }. Since G is Abelian, the set of cosets of H in G forms a group G/H , the quotient group of G
modulo H , of order [G : H ] = |G|/|H |. The [G : H ] representations of G/H are in one-to-one correspondence with
Appendix B Solving the Abelian Hidden Subgroup Problem
representations of G that map all elements of H to 1. Thus, there are exactly [G : H ] representations in H ⊥ . The set H ⊥ forms a group that has representations in its own right. Since H ⊥ has size
[G : H ], there are exactly [G : H ] distinct representations of the group H ⊥ . An element g ∈ G acts as a representation χg of H ⊥ in the following way: g : H ⊥ → C χg → χg (g ). Not all of these
representations are distinct, however. All h ∈ H act as the trivial representation on H ⊥ : h : H⊥ → C χg → χg (h) = 1. ⊥
The group H ⊥ = {g ∈ G|χg (g) = 1, ∀g ∈ H ⊥ } has size |G|/[G : H ] = |H |. By definition of H ⊥ and χg , ⊥
H ⊥ = {g ∈ G|g (χg ) = 1, ∀g ∈ H ⊥ } = {g ∈ G|χg (g ) = 1, ∀g ∈ H ⊥ }. ⊥
Thus, all elements of H are contained in H ⊥ . Since |H ⊥ | = |H |, ⊥
H ⊥ = H. Chapter 11 discusses groups C that are classical error correcting codes. The dual group C ⊥ to a classical code C is defined in the way we just discussed. Classical codes and their duals
form the basis for the construction of the quantum CSS codes discussed in section 11.3. Example B.1.4 Any subgroup H of G = Zn2 is isomorphic to Zk2 for some k. Since there are
[G : H ] = 2n−k elements of H ⊥ < G, H ⊥ is isomorphic to Zn−k 2 . Using the expression for the representations of Zn2 of example B.1.3, the elements of H ⊥ are the elements b such that χb (a) = (−1)
a·b = 1 for all a ∈ H . Thus, H ⊥ = {b|a · b = 0 mod 2, ∀a ∈ H }. To define the quantum Fourier transform for a general Abelian group, we need a technical result, Schur’s lemma, that is a
generalization of identity 11.7 for the Walsh-Hadamard transformation. B.1.1 Schur’s Lemma Schur’s lemma Let χi and χj be representations of an Abelian group G. Then,
χi (g)χi (g) = |G|,
B.2 Quantum Fourier Transforms for Finite Abelian Groups
and g∈G
χi (g)χj (g) = 0 for χi = χj .
The first case follows by observing that ωω = 1 for any root of unity. For i = j , χi (g)χj (g) = χi (h)χi (g)χj (g) χi (h) g∈G
χi (hg)χj (h−1 hg)
χi (g)χj (h−1 g)
χi (g)χj (h)χj (g)
= χj (h)
χi (g)χj (g).
Since χi (h) = χj (h) for some h, it follows that g∈G χi (g)χj (g) = 0. If we think of χi as a complex vector of n elements (χi (g0 ), . . . χi (gn−1 )), then Schur’s lemma says that χi has length |G
| and any two different vectors χi and χj are orthogonal. Schur’s lemma for subgroups A simple corollary of Schur’s lemma holds for representations χ
of G restricted to subgroups: |H | if χ (h) = 1, ∀h ∈ H χ (h) = 0 otherwise.
Since any representation χ of G is a representation of H when restricted to H , we can apply Schur’s lemma directly to χ viewed as a representation of H to obtain this equality. B.2 Quantum Fourier
Transforms for Finite Abelian Groups
This section defines quantum Fourier transforms over finite Abelian groups. Section B.2.1 defines the Fourier basis for an Abelian group. This basis is used in the definition of the quantum Fourier
transform over an Abelian group given in section B.2.2. B.2.1 The Fourier Basis of an Abelian Group
To an Abelian group G with |G| = n, we associate an n-dimensional complex vector space V by labeling a basis for the vector space with the n elements of the group {|g0 , . . . |gn−1 }. The Fourier
transform of section 7.8 takes elements of this basis to another, the Fourier basis. As the first step to generalizing the Fourier transform to general Abelian groups, this section defines
Appendix B Solving the Abelian Hidden Subgroup Problem
the Fourier basis for V corresponding to the basis {|g0 , . . . |gn−1 }. The Fourier basis is defined in terms of the set of group representations χg of G. A group G acts in a natural way upon
itself: for every group element g ∈ G, there is a map from G to G that sends a to ga for all elements a ∈ G. This map can be viewed as a unitary transform Tg acting on V that takes Tg : |a → |ga. The
transformation Tg is unitary for any g because it is reversible, Tg−1 Tg = I , and maps basis states to basis states. The Fourier basis of G with respect to a particular labeling χg of the
representations of G consists of all {|ek |k ∈ G}, where 1 χk (g)|g. |ek = |G| g∈G From Schur’s lemma and the fact that g |g = 0 for g = g , it is easy to see that this set forms a basis, since ⎞⎛ ⎞
⎛ 1 ⎝
ej |ek = χj (g ) g |⎠ ⎝ χk (g)|g⎠ | G| g∈G g ∈G
1 = χ (g )χk (g) g |g |G| g∈G j g ∈G
1 = χ (g)χk (g) |G| g∈G j = δj k . For each k ∈ G, the vector |ek is an eigenvector of Tj = |h → |j h with eigenvalue χk (j ): Tj |ek = = =
|G| g∈G 1
|G| g∈G 1
|G| g∈G
= χk (j )
χk (g)Tj |g χk (g)|jg χk (j −1 )χk (jg)|jg
|G| h∈G
= χk (j )|ek .
χk (h)|h
B.2 Quantum Fourier Transforms for Finite Abelian Groups
Example B.2.1 The Fourier basis for Z2 is
|e0 = |e1 =
√1 (χ0 (0)|0 + χ0 (1)|1) 2 √1 (χ1 (0)|0 + χ1 (1)|1) 2
Similarly, for Z4 we get |e0 = 12 3i=0 χ0 (i)|i = |e1 = 12 3i=0 χ1 (i)|i = |e2 = 12 3i=0 χ2 (i)|i = |e3 = 12 3i=0 χ3 (i)|i =
= =
√1 (|0 + |1) 2 √1 (|0 − |1). 2
1 (|0 + |1 + |2 + |3) 2 1 (|0 − i|1 − |2 + i|3) 2 1 (|0 − |1 + |2 − |3) 2 1 (|0 + i|1 − |2 − i|3). 2
B.2.2 The Quantum Fourier Transform Over a Finite Abelian Group
The quantum Fourier transform for an Abelian group G is the transformation F that maps |eg to |g, |g eg |. F= g∈G
Consider the effect of F on a group element |h. With ek | =
ek |h =
|G| g∈G
χk (g) g|h =
| G|
1 G
| |
χk (g) g| we get
χk (h)
and thus, 1 F|h = |g eg |h = χg (h)|g. |G| g∈G g∈G It follows that the matrix for F in the standard basis has entries χg (h) . Fgh = g|F|h = | G| The inverse Fourier transform is F −1 = |eg g|. g∈G
With F −1 |h = |eh = χh (g) −1 Fgh = . | G|
√1 |G|
χh (g)|g, the matrix for F −1 in the standard basis has entries
Appendix B Solving the Abelian Hidden Subgroup Problem
Suppose that FG and FH are Fourier transforms for G and H , respectively. If the elements (g, h) ∈ G × H are encoded as |g|h, then FG×H = FG ⊗ FH is a Fourier transform for G × H . Example B.2.2 The
Hadamard transformation H is the Fourier transform for Z2 :
1 F2 = √ 2
χ0 (0) χ1 (0)
χ0 (1) χ1 (1)
1 =√ 2
1 −1
= H.
The k-bit Walsh-Hadamard transform W is the Fourier transform for Zk2 . In standard labeling, the representations for Zk2 are of the form χi (j ) = (−1)i·j . For instance, F2×2 , for Fourier
transform for Z2 × Z2 is ⎛ ⎞ 1 1 1 1 1 ⎜ 1 −1 1 −1 ⎟ ⎟. F2×2 = H ⊗ H = ⎜ 2 ⎝ 1 1 −1 −1 ⎠ 1 −1 −1 1 By comparison, F4 , the Fourier transform for Z4 is ⎛ 0 0 0 0 ⎞ ⎛ i i i i 1 1 1 1 0 1 2 3 ⎟ ⎜ 1 i 1⎜
1 i i i i −1 −i ⎟= ⎜ F4 = ⎜ 2 ⎝ i0 i2 i4 i6 ⎠ 2 ⎝ 1 −1 1 −1 1 −i −1 i i0 i3 i6 i9
⎞ ⎟ ⎟. ⎠
The quantum Fourier transform can be defined for non-Abelian groups as well. The definition is in terms of group representations, but the set of representations for non-Abelian groups is much more
complicated than for the Abelian case. All of these quantum Fourier transforms have efficient implementations. Even in the Abelian case, some of the implementations are simpler than others. One
useful property is that if U1 and U2 are two quantum algorithms implementing the quantum Fourier transforms for groups G1 and G2 respectively, then U1 ⊗ U2 implements the quantum Fourier transform
for G1 × G2 . Section 7.8 gave an O(n2 ) implementation for quantum Fourier transforms over the groups Z2n . Section B.6 gives pointers to papers on efficient implementations for quantum Fourier
transforms over other groups. We now turn to the use of quantum Fourier transforms in solving the hidden subgroup problem for Abelian groups. B.3 General Solution to the Finite Abelian Hidden
Subgroup Problem
This section explains how to solve the finite Abelian hidden subgroup problem. Suppose a group G, with cyclic decomposition G ∼ = Zn0 × · · · × ZnL , contains a subgroup H < G that is implicitly
defined by a function f : G → G in that f is constant and distinct on every coset of H . Suppose further that Uf can be computed in polylogarithmic time with respect to the size of the group G.
B.3 General Solution to the Finite Abelian Hidden Subgroup Problem
This section shows how, with high probability, generators for H can be found in polylogarithmic time. A general procedure used to solve the Abelian hidden subgroup problem consists of four steps
followed by a final measurement. This procedure is repeated a number of times that depends on the desired level of certainty 1 − . 1 initialization: |0 |G| g∈G
FG :
Uf :
|G| g∈G
|g|f (g)
|H | h∈H 1
|gh ˜
|G||H | g∈G
χg (g) ˜
χg (h) |g.
A measurement of this state returns with equal probability a g ∈ H ⊥ such that χg (h) = 1 for all h ∈ H. We now go through these steps in more detail. After computing Uf on the superposition of all
group elements, ⎛ ⎞ 1 1 Uf ⎝ |g|0⎠ = |g|f (g), |G| g∈G |G| g∈G a measurement of the second register randomly yields a single f (g) ˜ for some g˜ ∈ G. Since f (g) ˜ = f (gh) ˜ for all h ∈ H , and by
assumption f is different on every coset, f (g) ˜ is the value of f on all elements of the coset gH ˜ and on no others. After this measurement, we have the state 1 |ψ = |gh, ˜ |H | h∈H a
superposition over only elements of the coset gH ˜ . Each coset is equally likely to be the result of this measurement, so measuring |ψ at this point yields a random element g ∈ G with equal
probability. The key insight is that the Fourier transform of the state |ψ eliminates the constant g˜ and allows us to extract information about H . 1 The state |ψ is the image of the state √|H h∈H |
h under the transformation | Tg˜ : G → G Tg˜ : |g → |gg. ˜
Appendix B Solving the Abelian Hidden Subgroup Problem
Apply the quantum Fourier transform to |ψ: 1 1 |gh ˜ = χg (gh)|g ˜ F |H | h∈H |G||H | h∈H g∈G = =
|G||H | h∈H g∈G 1
|G||H | g∈G
χg (g)χ ˜ g (h)|g
χg (g) ˜
χg (h) |g.
Schur’s lemma for subgroups says h∈H χg (h) = 0 if and only if χg (h) = 1 for all h ∈ H . It follows that measuring this state returns the index g of some representation χg that is constant (1) on H
. For product groups G = G0 × · · · × GL , the element g of H ⊥ returned is in the form g = (g0 , g1 , . . . , gL ), where gi is an element of Gi . To obtain a complete set of generators for H ⊥ , we
repeat the preceding algorithm a number of times that depends on our desired level of certainty 1 − . If the first n − 1 group elements returned do not yet generate all of H ⊥ , the next run through
the algorithm has at least a 50 percent chance of returning an element of H ⊥ not generated by the previous elements, since any proper subgroup has index at least 2 in the whole group. Thus, by
repeating this procedure the appropriate number of times, we can obtain any desired level of certainty 1 − . We have now completed the quantum part of the solution. From a sufficient number of
elements of H ⊥ , classical methods can efficiently find a full set of generators for H . B.4 Instances of the Abelian Hidden Subgroup Problem B.4.1 Simon’s Problem
Simon’s problem works with the group G = Zn2 that has representations χx (y) = (−1)x·y . The function f (g ⊕ a) = f (g) defines a subgroup A = {0, a}. The measurement at the end of one run through
the four-step procedure for solving Abelian hidden subgroup problems returns an element xj ∈ A⊥ = {x|(−1)x·y = 1 for all y ∈ A}. The element xj must satisfy xj · y = 0 mod 2 for all y ∈ A. With
sufficiently many values xj , we can solve for a. In this problem, we know that we have found a solution when there is a unique non-zero solution for a.
B.6 References
B.4.2 Shor’s Algorithm: Finding the Period of a Function
For simplicity, assume that r divides n (see section 8.2.1 for the general case) and work with the group G = Zn . The periodic function f has the property f (x + r) = f (x), which defines the
subgroup H = {kr|k ∈ [0, . . . n/r)}. The problem is to find the generator r of the subgroup. In the standard labeling of representations for Zn , gh χg (h) = exp 2πi n and
xh = 1 for all h ∈ H } H ⊥ = {x| exp 2πi n = {x|xkr = 0 mod n for all k ∈ [0, . . . n/r)}. Measurement after one round of the four-step procedure yields x ∈ H ⊥ . The element x satisfies xkr = 0 mod
n for all k ∈ [0, . . . n/r). In particular, xr = 0 mod n, so x is a multiple of n/r. The period r can now be computed as in section 8.2.1. B.5 Comments on the Non-Abelian Hidden Subgroup Problem
No one knows how to solve the general hidden subgroup problem. Quantum Fourier transforms can be defined over non-Abelian groups. In fact, efficient implementations of quantum Fourier transforms over
all finite groups are known. It is not known, however, how to use the quantum Fourier transformation to extract information about the generators of hidden subgroups for most non-Abelian groups. Worse
still, researchers have proved that Fourier sampling, a general technique based on Shor’s technique, cannot be used to solve the general hidden subgroup problem. Section 13.1 briefly describes more
recent progress in understanding quantum approaches to the non-Abelian hidden subgroup problem. B.6 References
Kitaev [172] presents a solution for the Abelian stabilizer problem and relates it to factoring and discrete logarithms. The general hidden subgroup problem as presented in this appendix and
Appendix B Solving the Abelian Hidden Subgroup Problem
its solution were introduced by Mosca and Ekert [214]. Ekert and Jozsa [112] and Jozsa [165] analyze the quantum Fourier transform in the context of the hidden subgroup problem. Hallgren [148]
studies extensions to the non-Abelian case. Grigni et al. [141] showed in 2001 that for most non-Abelian groups, Fourier sampling yields only exponentially little information about the hidden
subgroup. B.7 Exercises Exercise B.1. Let G and H be finite graphs. A map f : G → H is a graph isomorphism if it is one-to-one and f (g1 ) and f (g2 ) have an edge between them if and only if g1 and
g2 do. An automorphism of G is a graph isomorphism from G to itself, f : G → G. A graph automorphism of G is a permutation of its vertices. The graph isomorphism problem is to find an efficient
algorithm for determining whether there is an isomorphism between two graphs or not. a. Show that the set Aut(G) of automorphisms of a graph G forms a group, a subgroup of the permutation group Sn ,
where n = |G|. b. Two graphs G1 and G2 are isomorphic if there exists at least one automorphism in Aut(G1 ∪
G2 ) < S 2n that maps nodes of G1 to G2 and vice versa. Show that if G1 and G2 are nonisomorphic connected graphs, then Aut(G1 ∪ G2 ) = Aut(G1 ) × Aut(G2 ). c. Show that if Aut(G1 ∪ G2 ) is strictly
bigger than Aut(G1 ) × Aut(G2 ), then there must be an
element of Aut(G1 ∪ G2 ) that swaps G1 and G2 .
d. Express the graph isomorphism problem as a hidden subgroup problem. Exercise B.2. Write out the algorithm that solves Simon’s problem using the hidden subgroup framework of section B.3. Exercise
B.3. Write out an algorithm that finds the period of a function using the hidden subgroup framework of section B.3. Exercise B.4. Find an efficient algorithm that solves the discrete logarithm
[1] Scott Aaronson. Quantum lower bounds for the collision problem. In Proceedings of STOC ’02, pages 635–642, 2002. [2] Scott Aaronson. Lower bounds for local search by quantum arguments. In
Proceedings of STOC ’04, pages 465–474, 2004. [3] Scott Aaronson. Are quantum states exponentially long vectors? arXiv:quant-ph/0507242, 2005. [4] Scott Aaronson. Quantum computing, postselection,
and probabilistic polynomial-time. Proceedings of the Royal Society A, 461:3473–3482, 2005. [5] Scott Aaronson. The limits of quantum computers. Scientific American, 298(3):62–69, March 2008. [6]
Scott Aaronson and Yaoyun Shi. Quantum lower bounds for the collision and the element distinctness problems. Journal of the ACM, 51(4):595–605, 2004. [7] Daniel S. Abrams and Seth Lloyd. Nonlinear
quantum mechanics implies polynomial-time solution for NP-complete and #P problems. Physical Review Letters, 81:3992–3995, 1998. [8] Mark Adcock and Richard Cleve. A quantum Goldreich-Levin theorem
with cryptographic applications. In Proceedings of STACS ’02, pages 323–334, 2002. [9] Dorit Aharonov and Michael Ben-Or. Fault-tolerant quantum computation with constant error. In Proceedings of
STOC ’97, pages 176–188, 1997. [10] Dorit Aharonov and Michael Ben-Or. Fault-tolerant quantum computation with constant error rate. arXiv:quant-ph/ 9906129, 1999. [11] Dorit Aharonov, Vaughan Jones,
and Zeph Landau. A polynomial quantum algorithm for approximating the Jones polynomial. In Proceedings of STOC ’06, pages 427–436, 2006. [12] Dorit Aharonov, Zeph Landau, and Johann Makowsky. The
quantum FFT can be classically simulated. Los Alamos Physics Preprint Archive, http://xxx.lanl.gov/abs/quant-ph/0611156, 2006. [13] Dorit Aharonov and Oded Regev. A lattice problem in quantum NP. In
Proceedings of FOCS ’03, pages 210–219, 2003. [14] Dorit Aharonov and Oded Regev. Lattice problems in NP ∩ coNP. Journal of the ACM, 52(5):749–765, 2005. [15] Dorit Aharonov and Amnon Ta-Shma.
Adiabatic quantum state generation and statistical zero knowledge. In Proceedings of STOC ’03, pages 20–29, 2003. [16] Dorit Aharonov, Wim van Dam, Julia Kempe, Zeph Landau, Seth Lloyd, and Oded
Regev. Adiabatic quantum computation is equivalent to standard quantum computation. SIAM Journal on Computing, 37(1):166–194, 2007. [17] Gorjan Alagic, Cristopher Moore, and Alexander Russell.
Quantum algorithms for Simon’s problem over general groups. In Proceedings of SODA ’07, pages 1217–1224, 2007. [18] Panos Aliferis. Level reduction and the quantum threshold theorem. Ph.D. thesis,
Caltech, 2007. [19] Panos Aliferis, Daniel Gottesman, and John Preskill. Quantum accuracy threshold for concatenated distance-3 codes. Quantum Information and Computation, 6(2):97–165, 2006. [20]
Andris Ambainis. A better lower bound for quantum algorithms searching an ordered list. In Proceedings of FOCS’99, pages 352–357, 1999.
[21] Andris Ambainis. Quantum lower bounds by quantum arguments. In Proceedings of STOC ’00, pages 636–643, 2000. [22] Andris Ambainis. Quantum walks and their algorithmic applications. International
Journal of Quantum Information, 1:507–518, 2003. [23] Andris Ambainis. Quantum walk algorithm for element distinctness. In Proceedings of FOCS’02, pages 22–31, 2004. [24] Alain Aspect, Jean Dalibard,
and Gérard Roger. Experimental test of Bell’s inequalities using time-varying analyzers. Physical Review Letters, 49:1804–1808, 1982. [25] Alain Aspect, Philippe Grangier, and Gérard Roger.
Experimental tests of realistic local theories via Bell’s theorem. Physical Review Letters, 47:460–463, 1981. [26] Alain Aspect, Philippe Grangier, and Gérard Roger. Experimental realization of
Einstein-Podolsky-Rosen-Bohm gedanken experiment: A new violation of Bell’s inequalities. Physical Review Letters, 49:91–94, 1982. [27] Alp Atici and Rocco Servedio. Improved bounds on quantum
learning algorithms. Quantum Information Processing, 4(5):355–386, 2005. [28] Dave Bacon. Does our universe allow for robust quantum computation? Science, 317(5846):1876, 2007. [29] Dave Bacon,
Andrew Childs, and Wim van Dam. From optimal measurement to efficient quantum algorithms for the hidden subgroup problem over semidirect product groups. In Proceedings of FOCS ’05, 2005. [30] Paul
Bamberg and Shlomo Sternberg. A Course in Mathematics for Students of Physics, volume 2. Cambridge University Press, 1990. [31] Adriano Barenco, Charles H. Bennett, Richard Cleve, David P.
DiVincenzo, Norman H. Margolus, Peter W. Shor, Tycho Sleator, John A. Smolin, and Harald Weinfurter. Elementary gates for quantum computation. Physical Review A, 52(5):3457–3467, 1995. [32] Adriano
Barenco, Artur K. Ekert, Kalle-Antti Suominen, and Päivi Törmä. Approximate quantum Fourier transform and decoherence. Physical Review A, 54(1):139–146, July 1996. [33] Howard Barnum, Claude Crépeau,
Daniel Gottesman, Adam Smith, and Alain Tapp. Authentication of quantum messages. In Proceedings of FOCS ’02, pages 449–458, 2002. [34] Robert Beals. Quantum computation of Fourier transforms over
the symmetric group. In Proceedings of STOC ’97, pages 48–53, 1997. [35] Robert Beals, Harry Buhrman, Richard Cleve, Michele Mosca, and Ronald de Wolf. Quantum lower bounds by polynomials. Journal of
the ACM, 48(4):778–797, 2001. [36] John S. Bell. On the Einstein-Podolsky-Rosen paradox. Physics, 1:195–200, 1964. [37] C. H. Bennett, F. Bessette, G. Brassard, L. Salvail, and J. Smolin.
Experimental quantum cryptography. Journal of Cryptology, 5(1):3–28, 1992. [38] C. H. Bennett and P. W. Shor. Quantum information theory. IEEE Transactions on Information Theory, 44(6):2724– 2742,
1998. [39] Charles H. Bennett. Logical reversibility of computation. IBM Journal of Research and Development, 17:525–532, 1973. [40] Charles H. Bennett. Time/space trade-offs for reversible
computation. SIAM Journal on Computing, 18(4):766–776, 1989. [41] Charles H. Bennett, Ethan Bernstein, Gilles Brassard, and Umesh V. Vazirani. Strengths and weaknesses of quantum computing. SIAM
Journal on Computing, 26(5):1510–1523, 1997. [42] Charles H. Bennett and Gilles Brassard. Quantum cryptography: Public key distribution and coin tossing. In Proceedings of IEEE International
Conference on Computers, Systems, and Signal Processing, pages 175–179, 1984. [43] Charles H. Bennett and Gilles Brassard. Quantum public key distribution reinvented. SIGACT News, 18, 1987. [44]
Charles H. Bennett, Gilles Brassard, Claude Crépeau, Richard Jozsa, A. Peres, and William K. Wootters. Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels.
Physical Review Letters, 70:1895– 1899, 1993. [45] Charles H. Bennett, Gilles Brassard, and Artur K. Ekert. Quantum cryptography. Scientific American, 267(4):50, October 1992.
[46] Charles H. Bennett and Stephen J. Wiesner. Communication via one- and two-particle operators on EinsteinPodolsky-Rosen states. Physical Review Letters, 69:2881–2884, 1992. [47] Daniel Bernstein,
Johannes Buchmann, and Erik Dahmen. Post-Quantum Cryptography. Springer Verlag, 2009. [48] Ethan Bernstein and Umesh V. Vazirani. Quantum complexity theory. In Proceedings of STOC ’93, pages 11–20,
1993. [49] Ethan Bernstein and Umesh V. Vazirani. Quantum complexity theory. SIAM Journal on Computing, 26(5):1411–1473, 1997. [50] André Berthiaume and Gilles Brassard. The quantum challenge to
structural complexity theory. In Proceedings of the Seventh Annual Structure in Complexity Theory Conference, pages 132–137, 1992. [51] J. Bienfang, A. J. Gross, A. Mink, B. J. Hershman, A. Nakassis,
X. Tang, R. Lu, D. H. Su, C. W. Clark, D. J. Williams, E. W. Hagley, and J. Wen. Quantum key distribution with 1.25 gbps clock synchronization. Optics Express, 12:2011–2016, 2004. [52] David Biron,
Ofer Biham, Eli Biham, Markus Grassel, and David A. Lidar. Generalized Grover search algorithm for arbitrary initial amplitude distribution. In Selected Papers from QCQC ’98, pages 140–147, 1998.
[53] Arno Bohm. Quantum Mechanics: Foundations and Applications. 3rd ed. Springer Verlag, 1979. [54] David Bohm. The paradox of Einstein, Rosen, and Podolsky. Quantum Theory, pages 611–623, 1951.
[55] Ravi B. Boppana and Michael Sipser. The complexity of finite functions. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, volume A, pages 757–804. Elsevier, 1990. [56] D.
Boschi, S. Branca, F. De Martini, L. Hardy, and S. Popescu. Experimental realization of teleporting an unknown pure quantum state via dual classical and Einstein-Podolski-Rosen channels. Physical
Review Letters, 80:1121–1125, 1998. [57] Dirk Bouwmeester, Jian-Wei Pan, Klaus Mattle, Manfred Eibl, Harald Weinfurter, and Anton Zeilinger. Experimental quantum teleportation. Nature, 390:575, 1997.
[58] Michel Boyer, Gilles Brassard, Peter Høyer, and Alain Tapp. Tight bounds on quantum search. In Proceedings of PhysComp ’96, pages 36–43, 1996. [59] Gilles Brassard. Quantum communication
complexity (a survey). arXiv:quant-ph/0101005, 2001. [60] Gilles Brassard, Richard Cleve, and Alain Tapp. The cost of exactly simulating quantum entanglement with classical communication. Physical
Review Letters, 83:1874–1877, 1999. [61] Gilles Brassard, Peter Høyer, andAlain Tapp. Quantum algorithm for the collision problem. SIGACT News, 28:14–19, 1997. [62] Gilles Brassard, Peter Høyer, and
Alain Tapp. Quantum counting. Lecture Notes in Computer Science, 1443:820–831, 1998. [63] Sergey Bravyi and Barbara Terhal. Complexity of stoquastic frustration-free Hamiltonians. arXiv:0806.1746,
2008. [64] Michael J. Bremner, Caterina Mora, and Andreas Winter. Are random pure states useful for quantum computation? Physical Review Letters, 102:190502, 2009. [65] Hans Briegel and Robert
Raussendorf. Persistent entanglement in arrays of interacting particles. Physical Review Letters, 86(5):910–913, 2001. [66] E. Oran Brigham. The Fast Fourier Transform. Prentice-Hall, 1974. [67] D.
E. Browne. Efficient classical simulation of the quantum Fourier transform. New Journal of Physics, 9:146, 2007. [68] Todd A. Brun, Igor Devetak, and Min-Hsiu Hsieh. Correcting quantum errors with
entanglement. Science, 314(5798):436–439, 2006. [69] Dagmar Bruss. Characterizing entanglement. Journal of Mathematical Physics, 43(9):4237–4250, 2002. [70] Nader H. Bshouty and Jeffrey C. Jackson.
Learning DNF over the uniform distribution using a quantum example oracle. SIAM Journal on Computing, 28:1136–1142, 1999. [71] Jeffrey Bub. Interpreting the Quantum World. Cambridge University Press,
1997. [72] Harry Buhrman, Richard Cleve, John Watrous, and Ronald de Wolf. Quantum fingerprinting. Physical Review Letters, 87(16), 2001. [73] Harry Buhrman and Ronald de Wolf. A lower bound for
quantum search of an ordered list. Information Processing Letters, 70(5):205–209, 1999.
[74] Harry Buhrman and Ronald de Wolf. Communication complexity lower bounds by polynomials. In Proceedings of CCC ’01, pages 120–130, 2001. [75] Harry Buhrman and Robert Špalek. Quantum verification
of matrix products. In Proceedings of SODA ’06, pages 880–889, 2006. [76] Angelo C. M. Carollo and Vlatko Vedral. Holonomic quantum computation. arXiv:quant-ph/0504205, 2005. [77] Nicolas J. Cerf,
Lov K. Grover, and Colin P. Williams. Nested quantum search and structured problems. Physical Review A, 61(3):032303, 2000. [78] Andrew Childs, Edward Farhi, Jeffrey Goldstone, and Sam Gutmann.
Finding cliques by quantum adiabatic evolution. Quantum Information and Computation, 2(181):181–191, 2002. [79] Andrew Childs, Edward Farhi, and John Preskill. Robustness of adiabatic quantum
computation. Physical Review A, 65:012322, 2001. [80] Andrew M. Childs, Richard Cleve, Enrico Deotto, Edward Farhi, Sam Gutmann, and Daniel A. Spielman. Exponential algorithmic speedup by a quantum
walk. In Proceedings of STOC ’03, pages 59–68, 2003. [81] Andrew M. Childs, Andrew J. Landahl, and Bablo A. Parrilo. Improved quantum algorithms for the ordered search problem via semidefinite
programming. Physical Review A, 75:032335, 2007. [82] Andrew M. Childs and Troy Lee. Optimal quantum adversary lower bounds for ordered search. Lecture Notes in Computer Science, 5125:869–880, 2008.
[83] Isaac L. Chuang and Michael Nielsen. Prescription for experimental determination of the dynamics of a quantum black box. Journal of Modern Optics, 44:2567–2573, 1997. [84] J. Ignacio Cirac and
Peter Zoller. Quantum computations with cold trapped ions. Physical Review Letters, 74:4091– 4094, 1995. [85] Richard Cleve. An introduction to quantum complexity theory. arXiv:quant-ph/9906111v1,
1999. [86] Richard Cleve, Daniel Gottesman, and Hoi-Kwong Lo. How to share a quantum secret. Physical Review Letters, 83(3):648–651, 1999. [87] Graham P. Collins. Computing with quantum knots.
Scientific American, 294(4):56–63, April 2006. [88] James W. Cooley and John W. Tukey. An algorithm for the machine calculation of complex Fourier series. Mathematics of Computation, 19(90):297–301,
1965. [89] Don Coppersmith. An approximate Fourier transform useful in quantum factoring. Research Report RC 19642, IBM, 1994. [90] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and
Clifford Stein. Introduction to Algorithms. MIT Press, 2001. [91] Claude Crépeau, Daniel Gottesman, and Adam Smith. Secure multi-party quantum computation. In Proceedings of STOC ’02, pages 643–652,
2002. [92] Andrew Cross, David P. DiVincenzo, and Barbara Terhal. A comparative code study for quantum fault tolerance. arXiv:quant-ph/0711.1556v1, 2007. [93] G. M. D’Ariano, D. Kretschmann, D.
Schlingemann, and R. F. Werner. Reexamination of quantum bit commitment: The possible and the impossible. Physical Review A, 76(3):032328, 2007. [94] G. Mauro D’Ariano, Matteo G. A. Paris, and
Massimiliano F. Sacchi. Quantum tomography. Advances in Imaging and Electron Physics, 128:205–308, 2003. [95] Christopher M. Dawson and Michael Nielsen. The Solovay-Kitaev algorithm. Quantum
Information and Computation, 6:81–95, 2006. [96] Ronald de Wolf. Characterization of non-deterministic quantum query and quantum communication complexity. In Proceedings of CCC ’00, pages 271–278,
2000. [97] Ronald de Wolf. Quantum communication and complexity. Theoretical Computer Science, 287(1):337–353, 2002. [98] Ronald de Wolf. Lower bounds on matrix rigidity via a quantum argument.
Lecture Notes in Computer Science, 4051:299–310, 2006. [99] David Deutsch. Quantum theory, the Church-Turing principle and the universal quantum computer. Proceedings of the Royal Society of London
Ser. A, A400:97–117, 1985. [100] David Deutsch. Quantum computational networks. Proceedings of the Royal Society of London Ser. A, A425:73–90, 1989.
[101] David Deutsch, Adriano Barenco, and Artur K. Ekert. Universality in quantum computation. Proceedings of the Royal Society of London Ser. A, 449:669–677, 1995. [102] David Deutsch and Richard
Jozsa. Rapid solution of problems by quantum computation. Proceedings of the Royal Society of London Ser. A, A439:553–558, 1992. [103] P. A. M. Dirac. The Principles of Quantum Mechanics. 4th ed.
Oxford University Press, 1958. [104] David P. DiVincenzo. The physical implementation of quantum computation. Fortschritte der Physik, 48:771–784, 2000. [105] Andrew Drucker and Ronald de Wolf.
Quantum proofs for classical theorems. arXiv:0910.3376, 2009. [106] Paul Dumais, Dominic Mayers, and Louis Salvail. Perfectly concealing quantum bit commitment from any quantum one-way permutation.
Lecture Notes in Computer Science, 1807:300–315, 2000. [107] Wolfgang Dür, Guifre Vidal, and J. Ignacio Cirac. Three qubits can be entangled in two inequivalent ways. Physical Review A, 62:062314,
2000. [108] Bryan Eastin and Emanuel Knill. Restrictions on transversal encoded quantum gate sets. Physical Review Letters, 102(11)110502, 2009. [109] Albert Einstein, Boris Podolsky, and Nathan
Rosen. Can quantum-mechanical description of physical reality be considered complete? Physical Review, 47:777–780, 1935. [110] Jens Eisert, Martin Wilkens, and Maciej Lewenstein. Quantum games and
quantum strategies. Physical Review Letters, 83, 1999. [111] Artur K. Ekert. Quantum cryptography based on Bell’s theorem. Physical Review Letters, 67(6):661–663, August 1991. [112] Artur K. Ekert
and Richard Jozsa. Quantum algorithms: Entanglement enhanced information processing. In Proceedings of Royal Society Discussion Meeting “Quantum Computation: Theory and Experiment.” Philosophical
Transactions of the Royal Society of London Ser. A, 1998. [113] J. Emerson, M. Silva, O. Moussa, C. Ryan, M. Laforest, J. Baugh, D. Cory, and R. Laflamme. Symmetrized characterization of noisy
quantum processes. Science, 317(5846):1893–1896, 2007. [114] M. Ettinger, P. Høyer, and E. Knill. The quantum query complexity of the hidden subgroup problem is polynomial. Information Processing
Letters, 91(1):43–48, 2004. [115] E. Farhi, J. Goldstone, and S. Gutmann. A quantum algorithm for the Hamiltonian nand tree. arXiv:quantph/0702144, 2007. [116] Edward Farhi, Jeffrey Goldstone, Sam
Gutmann, Joshua Lapan, Andrew Lundgren, and Daniel Preda. A quantum adiabatic evolution algorithm applied to instances of an NP-complete problem. Science, 292:5516, 2001. [117] Edward Farhi, Jeffrey
Goldstone, Sam Gutmann, and Michael Sipser. A limit on the speed of quantum computation for insertion into an ordered list. arXiv:quant-ph/9812057, 1998. [118] Edward Farhi, Jeffrey Goldstone, Sam
Gutmann, and Michael Sipser. Quantum computation by adiabatic evolution. arXiv:quant-ph/0001106, January 2000. [119] Richard Feynman. Simulating physics with computers. International Journal of
Theoretical Physics, 21(6–7):467– 488, 1982. [120] Richard Feynman. Quantum mechanical computers. Optics News, 11, 1985. [121] Richard Feynman. Feynman Lectures on Computation. Addison-Wesley, 1996.
[122] Richard P. Feynman, Robert B. Leighton, and Matthew Sands. Lectures on Physics, Vol. III. Addison-Wesley, 1965. [123] Joseph Fourier. Théorie analytique de la chaleur. Firmin Didot, 1822. [124]
Edward Fredkin and Tommaso Toffoli. Conservative logic. International Journal of Theoretical Physics, 21:219– 253, 1982. [125] Michael H. Freedman, Alexei Kitaev, Michael J. Larsen, and Zhenghan
Wang. Topological quantum computation. Bulletin of the American Mathematical Society, 40(1):31–38, 2001. [126] Murray Gell-Mann. Questions for the future. In The Nature of Matter; Wolfson College
Lectures 1980. Clarendon Press, 1981. [127] Craig Gentry. A fully homomorphic encryption scheme. Ph.D. thesis, Stanford University, 2009. [128] Craig Gentry. Fully homomorphic encryption using ideal
lattices. In Proceedings of STOC ’09, pages 169–178, 2009.
[129] Neil A. Gershenfeld and Isaac L. Chuang. Bulk spin resonance quantum computing. Science, 275:350–356, 1997. [130] Nicolas Gisin, Gregoire Ribordy, Wolfgang Tittel, and Hugo Zbinden. Quantum
cryptography. Reviews of Modern Physics, 74(1):145–195, January 2002. [131] Oded Goldreich. Computational Complexity. Cambridge University Press, 2008. [132] Steven Gortler and Rocco Servedio.
Equivalences and separations between quantum and classical learnability. SIAM Journal on Computing, 33(5):1067–1092, 2004. [133] Daniel Gottesman. The Heisenberg representation of quantum computers.
arXiv:quant-ph/9807006, July 1998. [134] Daniel Gottesman. On the theory of quantum secret sharing. Physical Review A, 61, 2000. [135] Daniel Gottesman. Stabilizer codes and quantum error correction.
Ph.D. thesis, Caltech, May 2000. [136] Daniel Gottesman. Uncloneable encryption. Quantum Information and Computation, 3:581–602, 2003. [137] Daniel Gottesman. Jump-starting quantum error correction
with entanglement. Science, 314:427, 2006. [138] Daniel Gottesman. An introduction to quantum error correction and fault-tolerant quantum computation. arXiv:0904.2557, 2009. [139] Daniel Gottesman
and Isaac L. Chuang. Quantum digital signatures. arXiv:quant-ph/0105032, November 2001. [140] George Greenstein and Arthur G. Zajonc. The Quantum Challenge. Jones and Bartlett, 1997. [141]
Michelangelo Grigni, Leonard Schulman, Monica Vazirani, and Umesh V. Vazirani. Quantum mechanical algorithms for the nonabelian hidden subgroup problem. In Proceedings of STOC ’01, pages 68–74, 2001.
[142] D. Gross, S. T. Flammia, and J. Eisert. Most quantum states are too entangled to be useful as computational resources. Physical Review Letters, 102:190501, 2009. [143] Lov K. Grover. Quantum
computers can search arbitrarily large databases by a single query. Physical Review Letters, 79(23):4709–4712, 1997. [144] Lov K. Grover. A framework for fast quantum mechanical algorithms. In
Proceedings of STOC ’98, pages 53–62, 1998. [145] Gus Gutoski and John Watrous. Toward a general theory of quantum games. In Proceedings of STOC ’07, pages 565–574, 2007. [146] Sean Hallgren.
Polynomial-time quantum algorithms for Pell’s equation and the principal ideal problem. In Proceedings of STOC ’02, pages 653–658, 2002. [147] Sean Hallgren, Alexander Russell, and Amnon Ta-Shma.
Normal subgroup reconstruction and quantum computing using group representations. In Proceedings of STOC ’00, pages 627–635, 2000. [148] Sean Hallgren, Alexander Russell, and Amnon Ta-Shma. The
hidden subgroup problem and quantum computation using group representations. SIAM Journal on Computing, 32(4):916–934, 2003. [149] G. H. Hardy and E. M. Wright. An Introduction to the Theory of
Numbers. Oxford University Press, 1979. [150] Anthony J. G. Hey. Feynman and Computation. Perseus Books, 1999. [151] Mark Hillery, Vladimir Buzek, and Andre Berthiaume. Quantum secret sharing.
Physical Review A, 59:1829–1834, 1999. [152] Kenneth M. Hoffman and Ray Kunze. Linear Algebra. 2nd ed. Prentice Hall, 1971. [153] Tad Hogg. Adiabatic quantum computing for random satisfiability
problems. Physical Review A, 67:022314, 2003. [154] Tad Hogg, Carlos Mochon, Wolfgang Polak, and Eleanor Rieffel. Tools for quantum algorithms. International Journal of Modern Physics, C10:1347–1362,
1999. [155] Peter Høyer and Ronald de Wolf. Improved quantum communication complexity bounds for disjointness and equality. In Proceedings of STACS ’02, pages 299–310, 2002. [156] R. J. Hughes, J. E.
Nordholt, D. Derkacs, and C. G. Peterson. Practical free-space quantum key distribution over 10km in daylight and at night. New Journal of Physics, 4:43.1–43.14, 2002. [157] Richard Hughes et al.
Quantum cryptography roadmap, version 1.1. http://qist.lanl.gov, July 2004. [158] Thomas A. Hungerford. Algebra. Springer Verlag, 1997. [159] Thomas W. Hungerford. Abstract Algebra: An Introduction.
Saunders College Publishing, 1997. [160] Markus Hunziker, David A. Meyer, Jihun Park, James Pommersheim, and Mitch Rothstein. The geometry of quantum learning. arXiv:quant-ph/0309059, 2003.
[161] Yoshifumi Inui and Franc¸ois LeGall. An efficient quantum algorithm the non-Abelian hidden subgroup problem over a class of semi-direct product groups. Quantum Information and Computation, 7
(5):559–570, 2007. [162] Gabor Ivanyos, Frederic Magniez, and Miklos Santha. Efficient quantum algorithms for some instances of the non-Abelian hidden subgroup problem. International Journal of
Foundations of Computer Science, 14(5):723–740, 2003. [163] Thomas Jennewein, Christoph Simon, Gregor Weihs, Harald Weinfurter, and Anton Zeilinger. Quantum cryptography with entangled photons.
Physical Review Letters, 84:4729–4732, 2000. [164] David S. Johnson. A catalog of complexity classes. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, volume A, pages 67–162.
Elsevier, 1990. [165] Richard Jozsa. Quantum algorithms and the Fourier transform. Proceedings of the Royal Society of London Ser. A, pages 323–337, 1998. [166] Richard Jozsa. Searching in Grover’s
algorithm. arXiv:quant-ph/9901021, 1999. [167] Richard Jozsa and Noah Linden. On the role of entanglement in quantum computational speed-up. Proceedings of the Royal Society of London Ser. A,
459:2011–2032, 2003. [168] Elham Kashefi and Iordanis Kerenidis. Statistical zero knowledge and quantum one-way functions. Theoretical Computer Science, 378(1):101–116, 2007. [169] Julia Kempe.
Quantum random walks—an introductory overview. Contemporary Physics, 44(4):307–327, 2003. [170] Iordanis Kerenidis and Ronald de Wolf. Exponential lower bound for 2-query locally decodable codes via
a quantum argument. In Proceedings of STOC ’03, pages 516–525, 2003. [171] David Kielpinski, Christopher R. Monroe, and David J. Wineland. Architecture for a large-scale ion-trap quantum computer.
Nature, 417:709–711, 2002. [172] Alexei Kitaev. Quantum measurements and the Abelian stabilizer problem. arXiv: quant-ph/9511026, 1995. [173] Alexei Kitaev. Quantum computations: Algorithms and error
correction. Russian Mathematical Surveys, 52(6):1191–1249, 1997. [174] Alexei Kitaev. Fault-tolerant quantum computation by anyons. Annals of Physics, 303:2, 2003. [175] Alexei Kitaev, Alexander
Shen, and Mikhail N. Vyalyi. Classical and Quantum Computation. American Mathematical Society, 2002. [176] E. Knill, R. Laflamme, R. Martinez, and C.-H. Tseng. A cat-state benchmark on a seven bit
quantum computer. arXiv:quant-ph/9908051, 1999. [177] Emanuel Knill. Approximation by quantum circuits. arXiv:quant-ph/9508006, 1995. [178] Emanuel Knill. Quantum computing with realistically noisy
devices. Nature, 434:39–44, 2005. [179] Emanuel Knill, Raymond Laflamme, and Gerard Milburn. A scheme for efficient quantum computation with linear optics. Nature, 409:46–52, 2001. [180] Emanuel
Knill, Raymond Laflamme, and Wojciech H. Zurek. Resilient quantum computation. Science, 279:342– 345, 1998. [181] Emanuel Knill, Raymond Laflamme, and Wojciech H. Zurek. Resilient quantum
computation: Error models and thresholds. Proceedings of the Royal Society London A, 454:365–384, 1998. [182] Donald E. Knuth. The Art of Computer Programming, volume 2: Seminumerical Algorithms.
Addison-Wesley, 2nd edition, 1981. [183] Neal Koblitz and Alfred Menezes. A survey of public-key cryptosystems. SIAM Review, 46:599–634, 2004. [184] David W. Kribs, Raymond Laflamme, and David
Poulin. A unified and generalized approach to quantum error correction. Physical Review Letters, 94:199–218, 2005. [185] David W. Kribs, Raymond Laflamme, David Poulin, and Maia Lesosky. Operator
quantum error correction. Quantum Information and Computation, 6:382–399, 2006. [186] Hari Krovi and Todd A. Brun. Quantum walks on quotient graphs. arXiv:quant-ph/0701173, 2007. [187] Greg
Kuperberg. Random words, quantum statistics, central limits, random matrices. Methods and Applications of Analysis, 9(1):101–119, 2002. [188] Greg Kuperberg. A concise introduction to quantum
probability, quantum mechanics, and quantum computation. Unpublished notes, available at www.math.ucdavis.edu/greg/intro.pdf, 2005.
[189] Greg Kuperberg. A subexponential-time quantum algorithm for the dihedral hidden subgroup problem. SIAM Journal on Computing, 35(1):170–188, 2005. [190] Paul C. Kwiat, Andrew J. Berglund, Joseph
B. Altepeter, and Andrew G. White. Experimental verification of decoherence-free subspaces. Science, 290:498–501, 2000. [191] Chris Lamont. The hidden subgroup problem: Review and open problems.
arXiv:quant-ph/0411037, 2004. [192] Steven E. Landsburg. Quantum game theory. Notices of the American Mathematical Society, 51(4):394–399, 2004. [193] Arjen Lenstra and Hendrik Lenstra, editors. The
Development of the Number Field Sieve, volume 1554 of Lecture Notes in Mathematics. Springer Verlag, 1993. [194] Richard L. Liboff. Introductory Quantum Mechanics. 3rd ed. Addison-Wesley, 1997. [195]
Daniel A. Lidar and K. Birgitta Whaley. Decoherence-free subspaces and subsystems. In Irreversible Quantum Dynamics, volume 622, pages 83–120, 2003. [196] Seth Lloyd. Universal quantum simulators.
Science, 273:1073–1078, 1996. [197] Hoi-Kwong Lo and H. F. Chau. Why quantum bit commitment and ideal quantum coin tossing are impossible. Physics D, 120(1–2):177–187, 1998. [198] Hoi-Kwong Lo and H.
F. Chau. Unconditional security of quantum key distribution over arbitrarily long distances. Science, 283:2050–2056, 1999. [199] Richard A. Low. Large deviation bounds for k-designs. arXiv:0903.5236,
2009. [200] Frederic Magniez and Ashwin Nayak. Quantum complexity of testing group commutativity. Algorithmica, 48(3):221–232, 2007. [201] Frederic Magniez, Miklos Santha, and Mario Szegedy. Quantum
algorithms for the triangle problem. SIAM Journal on Computing, 37(2):413–424, 2007. [202] Yuri I. Manin. Computable and uncomputable. Sovetskoye Radio, Moscow (in Russian), 1980. [203] Yuri I.
Manin. Mathematics as Metaphor: Selected Essays of Yuri I. Manin. American Mathematical Society, 2007. [204] Igor Markov and Yaoyun Shi. Simulating quantum computation by contracting tensor networks.
arXiv:quant-ph/ 0511069, 2005. [205] Dominic Mayers. Unconditionally secure quantum bit commitment is impossible. Physical Review Letters, 78(17):3414–3417, 1997. [206] Dominic Mayers. Unconditional
security in quantum cryptography. Journal of the ACM, 48:351–406, 2001. [207] Ralph C. Merkle. A certified digital signature. In CRYPTO ’89: Proceedings on Advances in Cryptology, pages 218–238,
1989. [208] N. David Mermin. Hidden variables and the two theorems of John Bell. Reviews of Modern Physics, 65:803–815, 1993. [209] N. David Mermin. Copenhagen computation: How I learned to stop
worrying and love Bohr. IBM Journal of Research and Development, 48:53, 2004. [210] Albert Messiah. Quantum Mechanics, Vol. II. Wiley, 1976. [211] Rodney Van Meter and Mark Oskin. Architectural
implications of quantum computing technologies. Journal on Emerging Technologies in Computing Systems, 2(1):31–63, 2006. [212] David A. Meyer. Quantum strategies. Physical Review Letters,
82:1052–1055, 1999. [213] David A. Meyer. Sophisticated quantum search without entanglement. Physical Review Letters, 85:2014–2017, 2000. [214] Michele Mosca and Artur Ekert. The hidden subgroup
problem and eigenvalue estimation on a quantum computer. Lecture Notes in Computer Science, 1509:174–188, 1999. [215] Geir Ove Myhr. Measures of entanglement in quantum mechanics. arXiv:quant-ph/
0408094, August 2004. [216] Ashwin Nayak and Felix Wu. The quantum query complexity of approximating the median and related statistics. In Proceedings of STOC ’99, pages 384–393, 1999. [217] Michael
Nielsen. Conditions for a class of entanglement transformations. Physics Review Letters, 83(2):436–439, 1999. [218] Michael Nielsen. Cluster-state quantum computation. arXiv:quant-ph/0504097, 2005.
[219] Michael Nielsen and Christopher M. Dawson. Fault-tolerant quantum computation with cluster states. Physical Review A, 71:042323, 2004. [220] Michael Nielsen, Henry Haselgrove, and Christopher
M. Dawson. Noise thresholds for optical quantum computers. Physical Review A, 96:020501, 2006. [221] Michael Nielsen, Emanuel Knill, and Raymond Laflamme. Complete quantum teleportation using nuclear
magnetic resonance. Nature, 396:52–55, 1998. [222] Jeremy L. O’Brien. Optical quantum computing. Science, 318(5856):1567–1570, 2008. [223] Christos H. Papadimitriou. Computational Complexity.
Addison-Wesley, 1995. [224] Chris Peikert. Public-key cryptosystems from the worst-case shortest vector problem: Extended abstract. In Proceedings of STOC ’09, pages 333–342, 2009. [225] Roger
Penrose. The Emperor’s New Mind. Penguin Books, 1989. [226] Asher Peres. Quantum Theory: Concepts and Methods. Kluwer Academic, 1995. [227] Pérez-Delgado and Pieter Kok. What is a quantum computer,
and how do we build one? arXiv:0906.4344, 2009. [228] Ray A. Perlner and David A. Cooper. Quantum resistant public key cryptography: A survey. In IDtrust ’09: Proceedings of the 8th Symposium on
Identity and Trust on the Internet, pages 85–93, 2009. [229] Nicholas Pippenger and Michael J. Fischer. Relations among complexity measures. Journal of the ACM, 26(2):361– 381, 1979. [230] Sandu
Popescu, Berry Groisman, and Serge Massar. Lower bound on the number of Toffoli gates in a classical reversible circuit through quantum information concepts. Physical Review Letters, 95:120503, 2005.
[231] J. F. Poyatos, R. Walser, J. I. Cirac, P. Zoller, and R. Blatt. Motion tomography of a single trapped ion. Physical Review A, 53(4):1966–1969, 1996. [232] Juan Poyatos, J. Ignacio Cirac, and
Peter Zoller. Complete characterization of a quantum process: The two-bit quantum gate. Physical Review Letters, 78(2):390–393, 1997. [233] John Preskill. Fault-tolerant quantum computation. In H.-K.
Lo, S. Popescu, and T. P. Spiller, editors, Introduction to Quantum Computation and Information, pages 213–269. World Scientific, 1998. √ √ [234] H. Ramesh and V. Vinay. On string matching in o( ˜ n
+ m) quantum time. Journal of Discrete Algorithms, 1(1):103–110, 2001. [235] Robert Raussendorf, Daniel Browne, and Hans Briegel. Measurement-based quantum computation with cluster states. Physical
Review A, 68:022312, 2003. [236] Miklos Redei and Stephen J. Summers. Quantum probability theory. arXiv:hep-th/0601158, 2006. [237] Oded Regev. Quantum computation and lattice problems. In
Proceedings of FOCS ’02, pages 520–529, 2002. [238] Oded Regev. On lattices, learning with errors, random linear codes, and cryptography. In Proceedings of STOC ’05, pages 84–93, 2005. [239] Oded
Regev. A subexponential-time quantum algorithm for the dihedral hidden subgroup problem with polynomial space. arXiv:quant-ph/0406151, 2005. [240] Ben W. Reichardt. The quantum adiabatic optimization
algorithm and local minima. In Proceedings of STOC ’04, pages 502–510, 2004. [241] Eleanor Rieffel. Certainty and uncertainty in quantum information processing. In Proceedings of the AAAI Spring
Symposium 2007, pages 134–141, 2007. [242] Eleanor Rieffel. Quantum computing. In The Handbook of Technology Management, pages 384–392. Wiley, 2009. [243] Jeremie Roland and Nicholas Cerf. Quantum
search by local adiabatic evolution. arXiv:quant-ph/0107015, 2001. [244] Markus Rötteler and Thomas Beth. Polynomial-time solution to the hidden subgroup problem for a class of nonAbelian groups.
arXiv:quant-ph/9812070, 1998. [245] Arnold Schönhage and Volker Strassen. Schnelle Multiplikation grosser Zahlen [Fast multiplication of large numbers]. Computing, 7(3–4):281–292, 1971. [246] Rocco
A. Servedio. Separating quantum and classical learning. Lecture Notes in Computer Science, 2076:1065– 1080, 2001. [247] Ramamurti Shankar. Principles of Quantum Mechanics. 2nd ed. Plenum Press, 1980.
[248] Yaoyun Shi. Quantum lower bounds for the collision and the element distinctness problems. In Proceedings of FOCS ’02, pages 513–519, 2002. [249] Yaoyun Shi, Luming Duan, and Guifre Vidal.
Classical simulation of quantum many-body systems with a tree tensor network. Physical Review A, 74(2):022320, August 2006. [250] Peter W. Shor. Algorithms for quantum computation: Discrete log and
factoring. In Proceedings of FOCS’94, pages 124–134, 1994. [251] Peter W. Shor. Scheme for reducing decoherence in quantum memory. Physical Review A, 52, 1995. [252] Peter W. Shor. Fault-tolerant
quantum computation. In Proceedings of FOCS ’96, pages 56–65, 1996. [253] Peter W. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Journal
on Computing, 26(5):1484–1509, 1997. [254] Peter W. Shor. Progress in quantum algorithms. Quantum Information Processing, 3(1–5):5–13, 2004. [255] Peter W. Shor and John Preskill. Simple proof of
security of the BB84 quantum key distribution protocol. Physical Review Letters, 85:441–444, 2000. [256] David R. Simon. On the power of quantum computation. SIAM Journal on Computing, 26
(5):1474–1483, 1997. [257] Rolando D. Somma, Gerardo Ortiz, Emanuel Knill, and James Gubernatis. Quantum simulations of physics problems. In Quantum Information and Computation, volume 5105, pages
96–103, 2003. [258] Andrew Steane. The ion trap quantum information processor. Applied Physics B, 64:623–642, 1996. [259] Andrew Steane. Multiple particle interference and quantum error correction.
Proceedings of the Royal Society of London Ser. A, 452: 2551–2573, 1996. [260] Andrew Steane. Quantum computing. Reports on Progress in Physics, 61(2):117–173, 1998. [261] Andrew Steane. Efficient
fault-tolerant quantum computing. Nature, 399:124–126, 1999. [262] Andrew Steane and David M. Lucas. Quantum computing with trapped ions, atoms and light. Fortschritte der Physik, 48:839–858, 2000.
[263] Andrew M. Steane. Overhead and noise threshold of fault-tolerant quantum error correction. Physical Review A, 68(4):042322, 2003. [264] Gilbert Strang. Introduction to Applied Mathematics.
Wellesley-Cambridge Press, 1986. [265] Gilbert Strang. Linear Algebra and its Applications. Harcourt Brace Jovanovich, 1988. [266] Franco Strocchi. An Introduction to the Mathematical Structure of
Quantum Mechanics. World Scientific, 2005. [267] Anthony Sudbery. Quantum Mechanics and the Particles of Nature. Cambridge University Press, 1986. [268] Barbara Terhal and Guido Burkard.
Fault-tolerant quantum computation for local non-Markovian noise. Physical Review A, 71:012336, 2005. [269] Barbara M. Terhal and JohnA. Smolin. Single quantum querying of a database. Physical Review
A, 58(3):1822–1826, 1998. [270] Tommaso Toffoli. Reversible computing. In J. W. de Bakker and Jan van Leeuwen, editors, Automata, Languages and Programming, pages 632–644. Springer Verlag, 1980.
[271] Joseph F. Traub and Henryk Wo´zniakowski. Path integration on a quantum computer. Quantum Information Processing, 1(5):365–388, 2002. [272] Wim van Dam, Sean Hallgren, and Lawrence Ip. Quantum
algorithms for some hidden shift problems. In Proceedings of SODA ’03, pages 489–498, 2003. [273] Wim van Dam and Umesh V. Vazirani. Limits on quantum adiabatic optimization, 2003. [274] Vlatko
Vedral. The elusive source of quantum effectiveness. arXiv:0906.2656, 2009. [275] Vlatko Vedral, Adriano Barenco, and Artur K. Ekert. Quantum networks for elementary arithmetic operations. Physical
Review A, 54(1):147–153, 1996. [276] Frank Verstraete, Diego Porras, and J. Ignacio Cirac. DMRG and periodic boundary conditions: A quantum information perspective. Physical Review Letters,
93:227205, 2004. [277] George F. Viamontes, Igor L. Markov, and John P. Hayes. Is quantum search practical? Computing in Science and Engineering, 7(3):62–70, 2005. [278] Guifre Vidal. Efficient
classical simulation of slightly entangled quantum computations. Physical Review Letters, 91:147902, 2003.
[279] Heribert Vollmer. Introduction to Circuit Complexity. Springer, 1999. [280] John Watrous. Zero-knowledge against quantum attacks. In Proceedings of STOC ’06, pages 296–305, 2006. [281] John
Watrous. Quantum computational complexity. arXiv:0804.3401, 2008. [282] Stephanie Wehner and Ronald de Wolf. Improved lower bounds for locally decodable codes and private information retrieval.
Lecture Notes in Computer Science, 3580:1424–1436, 2005. [283] Stephen B. Wicker. Error Control Systems for Digital Communication and Storage. Prentice-Hall, 1995. [284] Stephen Wiesner. Conjugate
coding. SIGACT News, 15:78–88, 1983. [285] Stephen Wiesner. Simulations of many-body quantum systems by a quantum computer. arXiv:quant-ph/9603028, March 1996. [286] William K. Wootters and Wojciech
H. Zurek. A single quantum cannot be cloned. Nature, 299:802, 1982. [287] Andrew Yao. Quantum circuit complexity. In Proceedings of FOCS ’93, pages 352–361, 1993. [288] Nadav Yoran and Anthony J.
Short. Efficient classical simulation of the approximate quantum Fourier transform. Physical Review A, 76(4):060302, 2007. [289] Christof Zalka. Simulating quantum systems on a quantum computer.
Royal Society of London Proceedings Series A, 454:313–322, 1998. [290] Christof Zalka. Grover’s quantum searching algorithm is optimal. Physical Review A, 60(4):2746–2751, 1999. [291] Christof Zalka.
Using Grover’s quantum algorithm for searching actual databases. Physical Review A, 62(5):052305, 2000. [292] Paolo Zanardi and Mario Rasetti. Noiseless quantum codes. Physical Review Letters, 79
(17):3306–3309, 1997. [293] Paolo Zanardi and Mario Rasetti. Holonomic quantum computation. Physical Review A, 264:94–99, 1999. [294] Anton Zeilinger. Quantum entangled bits step closer to IT.
Science, 289:405–406, 2000. [295] Peter Zoller et al. Quantum information processing and communication: Strategic report on current status, visions and goals for research in Europe. http://
qist.ect.it, 2005.
Notation Index
Standard Notation |x| [x..y]
absolute value closed interval
≈ e i exp(x)
approximately equal 2.718281 . . . √ −1 ex
log logm
logarithm base e logarithm base m
traditional vector notation
vT aij
transpose of a vector or matrix element i,j of matrix A
det A λ
determinant of A generic eigenvalue
U −1 U†
inverse of a unitary transformation, quantum algorithm conjugate transpose
C R
the complex numbers the real numbers
Rn Z
n dimensional real space the natural numbers
Z2 Zn2
the natural numbers modulo 2 group of n-bit strings under bitwise addition modulo 2
|G| ˆ G
order of a group the group of representations of G
χ H
group homomorphism subgroup relation
o ∼ =
generic group operation isomorphism
the centralizer of subgroup S
Notation Index
General Concepts ⊕ a¯ ∗ O(f (n)) (f (n)) (f (n))
bit-wise exclusive or operation complex conjugate a set of symbols (alphabet) the set of words over alphabet measures of complexity
Page p. 100 p. 14 p. 148 p. 148 p. 107 p. 107 p. 107
Section 6.1 2.2 7.7 7.7 6.2.2 6.2.2 6.2.2
p. 14 p. 14 p. 15 p. 14 p. 47 p. 248 p. 16 p. 33 p. 14 p. 127 p. 32
2.2 2.2 2.2 2.2 4.2 11.1.1 2.3 3.1 2.2 7.1.1 3.1.1
Linear Algebra Vectors |v ||v|
a|b |a b| x˜ ⊥ ⊗ a·b
dH (v, c) dH (x) Matrices ||A||Tr (A|B) I (k) D (k) dTr (ρ, ρ ) δij |A| tr trA
quantum state vector labeled v length or norm of a vector conjugate transpose of |v inner product of a| and |b outer product of a| and |b the label for |x in the code space used as superscript,
signifying orthogonality right Kronecker product inner product on bit vectors or binary vector/matrix multiplication sometimes scalar multiplication Hamming distance
p. 127
Hamming weight
p. 127
trace norm of a matrix or operator composition of two matrices 2k × 2k identity matrix
p. 309 12.3.2 p. 263 11.2.5 p. 81 5.3
2k × 2k diagonal matrix trace metric
p. 154 7.8.1 p. 309 12.3.2
Kronecker delta √ positive square root A† A trace partial trace
p. 14 2.2 p. 309 12.3.2 p. 209 10.1.1 p. 211 10.1.1
Quantum States Transformations, Operators ∧Q controlled transformation Q ∧k Q Q controlled by k control qubits single qubit transformation Q, controlled by a pattern ∧ix Q Uf unary transformation for
(classical) function f H
Hadamard transformation
p. 78
p. 87 p. 88
5.4.3 5.4.3
p. 100 p. 76
6.1 5.2.2
Notation Index
W K(δ) R(β) T (α) X, Y, Z, I σX , σY , σZ UF(n) ρ ρA ε Gn ∼
Walsh transformation a phase shift by δ a rotation by β a phase rotation by α elements of the Pauli group
Page Section p. 128 7.1.1 p. 84 5.4.1 p. 84 5.4.1 p. 84 5.4.1 p. 75 5.2.1
elements of the Pauli group quantum Fourier transform on 2n qubits density operator density operator of a designated subsystem correctable set of errors generalized Pauli Group equivalence state
vectors modulo a global phase
p. 215 p. 156 p. 207 p. 208 p. 259 p. 271 p. 21
10.1.3 7.8.1 10.1.1 10.1.1 11.2.4 11.2.11 2.5.1
p. 14 p. 22
2.2 2.5.1
p. 226
Specific Quantum States |0, |1 single qubit standard basis |−, |+ single qubit Hadamard basis |GH Zn entangled n-qubit state |Wn |± | ±
entangled n-qubit state Bell states
p. 226 10.2.4 p. 39 3.2
Bell states
p. 39
complex projective space of dimension one
p. 22
probability distributions over A set of mappings from A to R
p. 331 A.1 p. 331 A.1
set of mappings on distributions
p. 335 A.1
dimension of a space direct sum of vector spaces direct product (same as direct sum for finite dimension)
p. 32 3.1.1 p. 32 3.1.1 p. 174 8.6.2
product of multiple spaces tensor product on spaces
p. 33
number of cosets of H in G stands for 2d x + y (Ch 7)
p. 261 11.2.5 p. 152 7.7.2
Sets and Spaces CP1 PA RA MA dim V ⊕ × 1n i=1
Groups [G : H ] xoy
Abelian group, 174 addition, 113 carry, 114 adiabatic quantum algorithms, 319 adiabatic quantum computation, 318 adiabatic state generation, 321 adjoint operator, 50 amplitude, 12, 147 amplitude
amplification, 183 ancilla, 247, 248 anticommute, 281 automorphism, 352 BB84 quantum key distribution protocol, 18 Bell, John, 62 Bell basis, 36 Bell state, 36, 38 Bell’s inequality, 62, 64 Bell’s
theorem, 62 big O notation O(f (n)), 107 bit quantum, 9 black box, 131 Bloch sphere, 25, 216 block code, 254, 257 bound entanglement, 225 bra, 15 bra / ket notation, 47 Buchmann-Williams public key
cryptosystem, 312 cat state, 95, 299 centralizer of a subgroup, 173 characters, 342 circuit quantum, 74 time partitioned, 294 circuit complexity, 111 circuit complexity quantum, 130 circuit model, 93
classical bits, 83 Clifford group, 318 cloning, impossibility of, 73 cluster state, 227 cluster state quantum computation, 317, 324 code block, 254, 257 distance of, 270 dual, 274 linear, 254 code
space, 255 codewords, 248, 257 coding concatenated, 306 collapse, 335 collision problem, 313 communication complexity, 132, 145 complex conjugate, 14 complex projective space, 22 complexity
communication, 132, 145 query, 130, 131 computation reversible, 73, 99 computational qubits, 248 concatenated coding, 306 concept learning, 313 condition correctable error set, 259 conjugate
transpose, 15, 50 consistency condition for circuits, 130 controlled gates, 78 controlled-not, 77 controlled-controlled-not, 101, 129 correctable error set condition, 259 correctable set of errors,
259 correlation, 331, 338 coset, 261 cryptography, 163 quantum, 320
database search, 199 decoherence, 239 decoherence-free subspaces, 314 degenerate codes, 260, 282 dense coding, 81 density operator, 207, 208 density operators trace metric, 309 DFT (discrete Fourier
transform), 153 Diffie-Hellman protocol, 172 dihedral hidden subgroup problem, 312 Dirac, Paul, 25 Dirac’s bra / ket notation, 47 direct sum, 31, 32 discrete Fourier transform, 153 discrete
logarithm, 172 discrete logarithm problem, 18, 171, 172 disjointness condition for error correction, 258 distance of a code, 270 distillable entanglement, 224 distribution mixed, 334 pure, 334
DiVincenzo criteria, 322 dot product, 15 dual codes, 274 eavesdropping, 20 ebit, 132 eigenspace, 53 eigenspace decomposition of Hermitian operators, 54 eigenvalue, 53 eigenvector, 53 Einstein,
Albert, 60 El Gamal encryption, 172 elliptic curves, 172 encoding function, 255 entangled mixed states, 224 entangled states, 31, 34, 38, 39, 129, 218 entanglement, 76, 205, 331 bound, 225
distillable, 224 maximally connected, 226 persistency, 226 relative entropy of, 224 entanglement cost, 224 environment, 239 EPR pair, 38, 60 EPR paradox, 60 error model local stochastic, 308 error
rate, 308 error syndrome, 265 errors correctable, 258 correctable set, 259
Euclidean algorithm, 164 exhaustive search, 188, 199 fast Fourier transform (FFT), 154 fault-tolerance, 297 flying qubits, 323 Fourier coefficients, 153 Fourier transform, discrete, 153 Fredkin gate,
102 gap shortest lattice vector problem, 312 generalized Pauli group, 271 generate, 14 generator matrix, 255 generators independent, 173 set of, 173 global phase, 21 Gottesman-Knill theorem, 318
graph isomorphism, 312, 352 graph isomorphism problem, 352 graph states, 227 Gray code, 89 ground state, 318 group, 173 Abelian, 174 finitely generated, 173 homomorphism, 255 isomorphic, 255
isomorphism, 255 product, 174 quotient, 276 set of generators for, 173 subgroup, 173 group representation, 341 Hadamard transformation, 76 Hadamard basis, 17, 22 Hamiltonian, 318 Hamming bound, 273
Hamming codes, 262 Hermitian operator, 53 Hermitian operators, eigenspace decomposition, 54 hidden subgroup problem dihedral, 312 hidden-variable theories local, 61 Hilbert space, 25 homomorphism,
255 independent group elements, 286 independent error model, 270 index of a subgroup, 261 inner product, 14, 15, 34 inversion about the average, 178
ion trap quantum computers, 324 irrational rotation, 92 isomorphism, 255 kernel, 255 ket, 14 Ladner’s theorem, 312 lattice-based public key encryption, 322 length, 14 linear code, 254 linear
combination, 12, 14 linear transformation, 47 local stochastic error model, 308 location, 308 LOCC, stochastic, 218, 225 LOCC equivalence, 222 logical qubit, 248, 257 lower bound, 131, 188, 313 lower
bound results, 147, 159 majorize, 223 Markov assumption, 239 maximally connected entangled states, 226 maximally entangled states, 223 measurement, 16, 47 measurement syndrome, 250 message space, 255
message words, 257 mixed distribution, 334 mixed states, 206, 224 NMR quantum computers, 323 no-cloning principle, 73 nonlinear optics, 323 norm, 14 NP-intermediate problems, 312, 321 observables, 55
omega notation (f (n)), 107 one-way quantum computation, 317 operator Hermitian, 53 projection, 49 trace, 209 operator error correction, 314 operator sum decomposition, 234 optical quantum computers,
323, 324 oracle, 188 order mod M, 164 of a group, 173 of an element, 173 orthogonal, 14 orthogonality condition, 260 orthonormal basis, 14 outer product, 47
PAC learning, 313 parity check matrix, 262 partial trace, 211 particles, 21 Pauli group, generalized, 271 Pell’s equation, 312 perfect code, 273 persistency of entanglement, 226 phase global, 21
relative, 22 photon, 11 photon polarization, 11 polarization, 11 polaroid (polarization filter), 9 prisoner’s dilemma, 321 product of groups, 174 outer, 47 projection, 55 projection operator, 49
projective space, 37 projectors, 50 public key encryption elliptic curve, 172 Buchmann-Williams, 312 lattice based, 322 pure distribution, 334 pure states, 206 purification, 214 quantum algorithms
adiabatic, 319 collision problem, 313 dihedral hidden subgroup problem, 312 Pell’s equation, 312 quantum random walk based, 313 shifted Legendre symbol problem, 313 quantum bit, 9, 13, 18 quantum
circuit complexity, 130 quantum circuits, 74 quantum computation adiabatic, 318 quantum computers ion trap, 324 NMR, 323 optical, 323 quantum computing cluster state, 317, 324 topological, 320
quantum counting, 197 quantum cryptography, 320 quantum error correction, 245, 314 quantum Fourier transform, 155 quantum games, 321 quantum gate, 74
quantum key distribution B92, 28 BB84, 18 quantum learning theory, 313 quantum parallelism, 129 quantum process tomography, 324 quantum query complexity, 130 quantum random walks, 313 quantum state
tomography, 324 quantum state, 206 quantum tomography, 324 qubit, 9, 13 ancilla, 248 flying, 323 logical, 248 qudits, 16 query complexity, 131 query complexity quantum, 130 quotient group, 276
qutrits, 16 rational rotation, 92 relative entropy of entanglement, 224 relative phase, 22 relatively prime, 164 representation group, 341 reversible computation, 73, 99 reversible computation and
energy, 120 reversible transformations, 73 rotation irrational, 92 rational, 92 Schmidt coefficients, 219, 242 Schmidt decomposition, 219 Schmidt number, 219, 242 Schmidt rank, 219, 242 Schrödinger
equation, 25, 318 search, exhaustive, 188, 199 separable, 218, 224 shifted Legendre symbol problem, 313 Shor, Peter, 163 Shor code, 252 Shor’s algorithm, 147 simulation of quantum systems classical,
325 quantum, 325 SLOCC, 225 span, 14 stabilizer, 281 standard basis, 14, 34 standard circuit model, 93 state cat, 299
cluster, 227 mixed, 208 state space, 13, 21, 32 states graph, 227 maximally entangled, 223 mixed, 206 pure, 206 statistical distance, 309 Steane code, 310 subgroup, 173 index of, 261 superoperators,
234 superposition, 12, 15, 18, 40, 129 swap circuit, 79 symplectic inner product, 286 syndrome, 247, 248, 262 syndrome error, 265 syndrome extraction operator, 248, 265 teleportation, 81, 93 tensor
product, 31–33, 58 time partitioned circuit, 294 Toffoli gate, 87, 101, 129 tomography quantum, 324 quantum process, 324 quantum state, 324 topological quantum computation, 320 topological quantum
computing, 320 trace, 209 partial, 211 trace distance, 309 trace metric, 309 transformation reversible, 73 unitary, 72 transversal gates, 301 two-state quantum systems, 13 unclonable encryption, 321
uncorrelated, 338 uniformity condition on circuits, 130 unitary transformation, 72 vector length, 14 norm, 14 wave functions, 25 zero-knowledge protocols, 321
Scientific and Engineering Computation William Gropp and Ewing Lusk, editors; Janusz Kowalik, founding editor
Unstructured Scientific Computation on Scalable Multiprocessors, edited by Piyush Mehrotra, Joel Saltz, and Robert Voigt, 1992 Parallel Computational Fluid Dynamics: Implementation and Results,
edited by Horst D. Simon, 1992 The High Performance Fortran Handbook, Charles H. Koelbel, David B. Loveman, Robert S. Schreiber, Guy L. Steele Jr., and Mary E. Zosel, 1994 PVM: Parallel Virtual
Machine-A Users’ Guide and Tutorial for Network Parallel Computing,Al Geist, Adam Beguelin, Jack Dongarra, Weicheng Jiang, Bob Manchek, and Vaidy Sunderam, 1994 Practical Parallel Programming,
Gregory V. Wilson, 1995 Enabling Technologies for Petaflops Computing, Thomas Sterling, Paul Messina, and Paul H. Smith, 1995 An Introduction to High-Performance Scientific Computing, Lloyd D.
Fosdick, Elizabeth R. Jessup, Carolyn J. C. Schauble, and Gitta Domik, 1995 Parallel Programming Using C++, edited by Gregory V. Wilson and Paul Lu, 1996 Using PLAPACK: Parallel Linear Algebra
Package, Robert A. van de Geijn, 1997 Fortran 95 Handbook, Jeanne C. Adams, Walter S. Brainerd, Jeanne T. Martin, Brian T. Smith, and Jerrold L. Wagener, 1997 MPI—The Complete Reference: Volume 1,
The MPI Core, Marc Snir, Steve Otto, Steven HussLederman, David Walker, and Jack Dongarra, 1998 MPI—The Complete Reference: Volume 2, The MPI-2 Extensions, William Gropp, Steven HussLederman, Andrew
Lumsdaine, Ewing Lusk, Bill Nitzberg, William Saphir, and Marc Snir, 1998 A Programmer’s Guide to ZPL, Lawrence Snyder, 1999 How to Build a Beowulf, Thomas L. Sterling, John Salmon, Donald J. Becker,
and Daniel F. Savarese, 1999 Using MPI: Portable Parallel Programming with the Message-Passing Interface, second edition, William Gropp, Ewing Lusk, and Anthony Skjellum, 1999 Using MPI-2: Advanced
Features of the Message-Passing Interface, William Gropp, Ewing Lusk, and Rajeev Thakur, 1999
Beowulf Cluster Computing with Linux, 2nd edition, William Gropp, Ewing Lusk, and Thomas Sterling, 2003 Beowulf Cluster Computing with Windows, Thomas Sterling, 2001 Scalable Input/Output: Achieving
System Balance, Daniel A. Reed, 2003 Using OpenMP: Portable Shared Memory Parallel Programming, Barbara Chapman, Gabriele Jost, and Ruud van der Pas, 2008 Quantum Computing Without Magic, Zdzislaw
Meglicki, 2008 | {"url":"https://epdf.tips/quantum-computing-a-gentle-introduction.html","timestamp":"2024-11-05T06:02:29Z","content_type":"text/html","content_length":"950622","record_id":"<urn:uuid:ad2ba463-5c8d-4509-939b-5722871ba50c>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00632.warc.gz"} |
Modeling and Simulation of Pressure Equalization Step Between a Packed Bed and an Empty Tank in Pressure Swing Adsorption Cycles
All published articles of this journal are available on ScienceDirect.
Modeling and Simulation of Pressure Equalization Step Between a Packed Bed and an Empty Tank in Pressure Swing Adsorption Cycles
It is of paramount importance to pay a great attention when modeling pressure equalization step of pressure swing adsorption cycles for its notable effect on the accurate prediction of the whole
cycle performances. Studies devoted to pressure equalization between an adsorption bed and a tank have been lacking in the literature. Many factors could affect the accuracy of the dynamic simulation
of pressure equalization between a bed and an empty tank.
The method used for the equilibrium pressure evaluation has a significant impact on simulation results. The exact equilibrium pressure (P[eq]) obtained when connecting an adsorption column and an
empty tank could only be obtained by numerical simulation given the complexity of the set of partial differential equations.
It has been shown that, with some simplifying assumptions, one can analytically determine Peq with satisfactory precision. The analytical solution proposed permits to assess rapidly the equilibrium
pressure and the equilibrium mole fraction of the adsorbable species in the tank (Y[eq]) and without the need to resort to a cubersome modeling.
Based on the comparisons presented, one can conclude that the agreement between the experimental and numerical results relative to P[eq] and Y[eq] is very satisfactory.
Keywords: Pressure equalization step, Equilibrium pressure, Empty tank, Modeling, Dynamic simulation, Pressure swing adsorption.
The modeling of varying pressure steps of pressure swing adsorption cycles have received a great deal of attention because of the great effect of these steps on the accurate prediction of whole cycle
performances. These steps comprise pressurization, blowdown and equalization. However, until recently, the main attention has been given to pres- surization and depressurization. The incorporation of
a pressure equalization step has been proposed for the first time by Berlin in 1966 [1]. The pressure equalization step has two purposes: To save the mechanical energy contained in the gas of a high
pressure bed, and to recover a part of the product that would otherwise be lost in the blowdown. One of the ways to do this is to use the high-pressure gas removed during the cocurrent
depressurization step to repressurize other adsorber by pressure equalizations. It is done by connecting the ends of two beds at a particular stage of the cycle. Thus, pressure equalization steps
enable gas separations to be realized economically on a large scale.
Very few articles entirely devoted to the study of the equalization step of PSA cycles have been published. Apart from some adsorption simulators, capble of simulating the step with no simplifying
assumptions such as ASPEN Adsorption, the equalization step has been generally treated as an ordinary depressurization for the high pressure bed (from P[H] to P[eq] ) and as an ordinary
pressurization for the low pressure bed (from P[L] to P[eq] ) with a constant pressure at the open end of the bed (P[eq] ). The equilibrium pressure P[eq] reached at the end of the equalization step
is evaluated after considering simplifying assumptions and rough approximations [2-5]. In many approaches, the pressure history during equalization is assumed to follow some simple law, e.g.
exponential or linear, similarly to the conventional depressurization or pressurization steps.
Warmuzinski [6] has developed a simple analytical formula to measure energy savings resulting from the pressure equalization step. Warmuzinski and Tanczyki [7], have also proposed a formula for
calculating the equilibrium pressure P[eq] in the case of linear uncoupled isotherms and no breakthrough of the more strongly adsorbed component from the bed undergoing depressurization. Delgado and
Rodrigues [8] have analyzed the effect of three types of boundary conditions on the time and spatial profiles of the pressure equalization step of a Skarstrom PSA cycle. Chahbani and Tondeur [9] have
simulated the dynamics of the pressure equalization step and evaluated the equilibrium pressure numerically and analytically.They have shown that in some simple cases only the pressure equalization
step can be decomposed into independent pressurization and depressurization steps and that the constant pressure at the open end of the pressurized or depressurized bed should be accurately estimated
prior to this decomposition. Yavary et al. [11] have found that the analytical solution, proposed by Chahbani and Tondeur for the evaluation of the equilibrium pressure, provides a more realistic
mathematical procedure with respect to the use of an arithmetic mean for the calculation of the final pressure for the pressure equalization steps. Yavary et al. [12] have also shown that the number
of pressure equalization steps affects significantly both purity and recovery of product. Therefore, the number of pressure equalization steps must be considered as an important process parameter in
evaluating process performance.
In 1964, prior to Berlin’s improvement of the Skarstrom cycle, Marsh et al. [10] have suggested another idea for reducing blowdown loss. Besides the two adsorption beds, an empty tank is used. At the
end of the high-pressure adsorption step but well before breakthrough, the feed flow is stopped and the product end of the high-pressure bed is connected to the empty tank where a portion of the
compressed gas, rich in the product, is stored. The blowdown of the high-pressure bed is completed by venting to the atmosphere in the reverse-flow direction. The stored gas is then used to purge the
bed after which the bed is finally purged with product gas. The product comsumption during purge is reduced, thereby increasing the recovery. Tondeur and Wankat [13] have described a number of PSA
cycles that implement empty tanks, and have shown that the sequences in complex multi-column, multi-step cycles can be emulated by a system comprising a single column and a multiplicity of empty
tanks. It is therefore of general interest to revisit the pressure equalization between a column and an empty tank.
In the work previously cited, we have studied in detail pressure equalization between two packed columns. In this work, we will try to extend the previous study to pressure equalization between a
column and a tank in the sense that studies devoted to pressure equalization with a tank have been lacking in the literature.
2. MODELING
The simulation of a fixed-bed adsorber requires the numerical solution of the governing partial differential equations: mass, heat and momentum balances as well as mass transfer kinetics.
2.1. Mass Balance for the Packed Bed
A differential fluid phase mass balance for the component i is given by the following axially dispersed plug flow equation:
with ϵ[t] = ϵ + (1 − ϵ) ϵ[p], being the total porosity.
The overall mass balance for the bulk gas is given by:
wherein u is the interstitial fluid velocity, ϵ the bed or the interparticle void fraction,ϵ[p] the intraparticle void fraction or porosity, Z being the compressibility factor of the gas mixture.
2.2. Heat Balance for the Packed Bed
A heat balance for the bed can be written as:
For completeness, this equation accounts for heat transfer between the column and some environment at a temperature T[e]. However, in the numerical application, only the adiabatic case will be
considered. In the following, the heat capacities of adsorbed species (cp^i[a] ) are supposed to be equal to those in gas phase (cp^i[g] ).
2.3. Momentum Balance
Ergun’s law is used to estimate locally the bed pressure drop.
where µ is the gas mixture viscosity, ρ the gas density and d[p] the particle diameter.
2.4. Mass Balance for the Empty Tank
A fluid phase mass balance for the component i in the tank is given by the following equation:
2.5. Heat Balance for the Empty Tank
A heat balance for the tank can be written as:
with S[tank] the heat transfer surface of the tank.
2.6. Numerical Method
The foregoing models require the simultaneous solution of a set of partial differential and algebraic equations (overall and component mass balance equations, heat balance equation and momentum
balance equation). The above equations are easily rewritten using dimensionless variables, (P/P[ref] , u/u[ref] ,T /T[ref], ............). The well-mixed cells method is used to discretize the
system. The resulting system of ordinary differential and algebraic equations are solved by the DDASSL integration algorithm of Petzold [14] DASSL is designed for the numerical solution of implicit
systems of differential/algebraic equa-tions written in the form F(t,y,y’)=0, where F, y, and y’ are vectors, and initial values for y and y’ are given. The time derivatives are approximated by the
Gear formula BDF (Backward Differentiation Formula) and the resulting nonlinear system at each time-step is solved by Newton’s method. The full description of the code can be found in reference [14].
The runtime of the code is less than one minute for an ordinary computer equipped with Intel Core 2 Duo Processor E4700 (2M Cache, 2.60 GHz, 800 MHz FSB).
2.7. Boundary and Initial Conditions
As shown in Fig. (1), the boundary conditions are as follows:
The whole step can be considered as a depressurization of the bed and a pressurization of the tank. The pressure at the outlet of the bed or at the inlet of the tank varies with time.
The initial conditions for the bed and the tank should also be defined. The pressure is P[H] in the bed and P[L] in the tank. The axial profiles of composition and temperature at t = 0 in the bed
correspond to those of the final state attained during the step preceding the equalization step.
The effluent of the column is the feed of the tank. Thus, pressure, temperature and composition at the outlet of the bed are identical to those prevailing at the inlet of the tank.
This paper will only focus on the assessment of the equilibrium pressure obtained when connecting an adsorption column and a tank and will not deal with energy consumption optimization and product
recovery. In fact, one of the goals of the equalization step is to collect the portion of product-rich gas ahead of the concentration wave (in the mass transfer zone). The amount of gases transferred
to the tank strongly depends on the initial location of the concentration wave. In a real situation, the optimized tank volume is related to the initial location of the concentration wave front in
the bed. The best situation would be that the adsorbable species is just about to breakthrough when the pressure is equalized. The packed bed is initially uniformly loaded. The reason of starting the
simulation from a uniformly loaded bed is that it would be easy, as we shall see later when the bed is initially uniformly loaded, to validate the experiments carried out without having to worry
about the precise position of the front in the column before the pressure equalization step. The validation is simply done through the comparison of the values of the equilibrium pressure and the
molar fraction of the adsorbable gas in the tank at the end of the step obtained numerically and experimentally.
In practical operations, the column and the tank are separated by valves and tubings, which could in principle be included in the modeling. The work presented herein considers that the pressure drop
through the valves and tubings is negligible. This does not affect the value of equilibrium pressure attained at the end of the step, it does only modify the dynamics of the pressure equalization. in
fact, the presence of significant pressure drop tends to increase the duration of the step.
When flow goes through a valve or any other restricting device, it loses some energy. The flow coefficient (c[v] ) is a designing factor which relates pressure drop (∆P) across the valve with the
flow rate (Q). Each valve has its own flow coefficient. This depends on how the valve has been designed to let the flow going through the valve. Therefore, the main differences between different flow
coefficients come from the type of valve, and of course the opening position of the valve. At same flow rate, higher flow coefficient means lower drop pressure across the valve. Depending of
manufacturer and type of valve, the flow coefficient can be expressed in several ways.
Simulating of varying pressure steps without the incorporation of a valve equation shows a notable disparity with experimental results as can be seen in Fig. (2) for depressurization. This is why it
is indispensible to take into account the significant pressure drop across the valve in the modeling. The two following valve models could be used:
1. Q = c[u] ∆P [15].
2. Q = c[u]16].
• C concentration (mol/m^3)
• Q molar flow rate (mol.s^−^1)
• ∆P pressure drop across the valve (Pa)
Figs. (3 and 4) show that the incorporation of a valve equation in the modeling permits to obtain a good agreement between experimental results and simulations. The values of the flow coefficients
given by the two models are different. However, it must be noted that the values of flow coefficients obtained herein are only valid for the experimental PSA setup studied and they can not be used
for simulating different theoretical PSA systems.
However, it should be mentioned that, in the following, a valve equation will be only used when comparing simulation and experimental results. As previously mentioned, the assessed equilibrium
pressure is not affected by the presence or absence of a valve equation in the modeling.
2.8. Analytical Solution for the Equilibrium Pressure
As presented above, the model can not be analytically solved. However, if the following simplifying assumptions are considered, an analytical solution can be obtained. For the purpose of comparison,
we shall examine the results of this analytical approximation as well as the full numerical solutions:
1. The process is isothermal
2. The column and the tank are considered homogeneous at the initial and final states.
3. The gaseous and solid phases are in equilibrium at the initial and final states.
4. The equalization time is large.
The fourth condition implies that the column and the tank become identical at the end of the equalization step. This approximation may appear very rough, but as will be seen later, the value of the
equalization pressure obtained considering these hypotheses is more accurate than the arithmetic or geometric mean.
For the sake of simplicity, the resolution is restricted to the case of a gas mixture composed of two species, one of which is inert (I) whereas the other is adsorbable (A).
The total number of moles in each bed at the initial state are
Where ϵ[t] is the total porosity (ϵ + (1 − ϵ)ϵ[p]). The first term on the right hand side represents the moles adsorbed. In equations (7) and (8), the subscript 0 refers to the initial state.
The constitutive equations of the model are written as follows:
-Conservation of the total number of moles in the the column
Where the total number of moles in the system n[0] is given by the sum of Equations (7) and (8). The subscript eq refers to the end of the equalization step.
-Conservation of the total number of moles of inert component I
-Langmuir adsorption equilibrium in column 1 after equalization
-Summation of mole fractions in the column
-Summation of mole fractions in the tank
The system of 5 equations (9) to (13) relates 6 unknowns (the four mole fractions y, the adsorbed quantity q^col[eq] and P[eq] ). By substituting equations (11) to (13) into Equations (9) and (10),
three of these variables can be eliminated, leaving a system of two equations with three unknowns, P[eq] , y^col[eqI] and y^tank[eqI] for example. To resolve analytically, an additional assumption
needs to be made. We shall assume here that the column and the tank have identical gas phase compositions at the end of equilibration (thus, y^col[eqI] = y^tank[eqI] ), leaving only two unknowns. The
analytical resolution then leads to a quadratic equation whose positive root is
The inert species molar fraction is:
The adsorbable species molar fraction is
In the case of more than one adsorbable species, a similar set of conservation and equilibrium equations may be written, and with the assumption of equal gas-phase compositions of the columns and the
tank in the end state, this set determines the end pressure Peq. However, a compact analytical solution is probably impossible, and the solution for Peq has to be found numerically.
In the case where the column and the tank initially contain only one species (adsorbable or inert), one can easily obtain the following formula for the equilibrium pressure
If the column and the tank have the same volume P[eq] becomes:
The numerical simulations presented herein concern methane uptake from hydrogen by using activated carbon. H[2] is supposed to be a non-adsorbed species. The adsorption equilibrium isotherm of CH[4]
on activated carbon is given by the Langmuir model
The parameters Q[m] and b are function of temperature:
Table 1 gives the values of k[i] parameters and adsorption heat used in simulations.
Table 1.
Langmuir parameters
k[1] 7.063 10^3 mol/m^3
k[2] 13.610 mol/(m^3K)
k[3] 3.07110^−^8 1/Pa
k[4] 1574.1 K
Adsorption heat (∆H) 20.0 kJ/mol
The model requires the assessment of the physical properties of the gas mixture. The compressibility factor is calculated following the method of Lee-Keesler [17]. The viscosity of each pure gas is
estimated by Lucas method [17] whereas the viscosity of the mixture is evaluated by the Reichenberg method [17]. The compressibility factor (Z), the mixture viscosity (µ) as well as the gas density (
ρ) vary with temperature, pressure and composition, therefore they are calculated at every computation step.
The adsorbent physical properties are given in Table 2. The heat capacities of the various gases are calculated by using an equation of the following form:
Table 2.
Apparent density (ρ[p]) 830 kg/m^3
Intraparticle porosity (ϵ[p]) 0.6
Particule diameter (dp)
0.1 10^−^3 m
Heat capacity (cp[s]) 1.050 kJ/(kg K)
The constants a, b, c and d for the two gases are listed in Table 3.
Table 3.
a b c d
H[2] 27.14 9.274 10^-3
-1.381 10^-5 7.645 10^-8 CH[4] 19.25 5.213 10^-2
1.197 10^-5 -1.132 10^-8
Table 4 gives the operating conditions used for computations. All the following simulations are in the adiabatic case (h=0).
Table 4.
Initial pressure (P[0])
Bed 5.0 10^5 Pa
Tank 1.0 10^5 Pa
Initial temperature 298 K
Initial CH[4] mole fraction
In the bed 0.5
In the tank 0.0 (pure hydrogen)
Bed length (L) 2.0 m
Interparticle porosity (ϵ) 0.4
In what follows, we will consider a fully saturated column at a high pressure P[H] and a tank at a low pressure P[L] containing only inert (pure product). The volume of the tank is equal to that of
the column. Fig. (5) gives the evolution of the pressure in the column and the tank during time. The pressure in the tank, as mentioned previously, is uniform. X-coordinates (z/L) varying from 0.0 to
1.0 correspond to the column, and x-coordinates from 1.0 to 2.0 are relative to the tank. This does not mean that the tank has the same length as the column.This representation is chosen to show both
the pressure variation in the column and the tank, since the numerical solution gives only a single pressure value in the tank and not a longitudinal profile.
The reference pressure is taken as P[ref] = P[H] . The reference velocity u[ref] is calculated using Ergun’s correlation as follows:
The pressure in the tank is uniform and is nearly equal to that prevailing at the column exit.
Fig. (6) shows the change with time of axial velocity profile for the column. These profiles are similar to those obtained in the cases of depressurization of a column and pressure equalization
between two columns (Chahbani and Tondeur, 2010).
Given that the pressure at the open end of the column is variable in time, it would not be judicious to treat pressure equalization step just as a simple depressurization step with a constant
pressure P[eq] at the open end of the column even if one succeeds to accurately estimate P[eq] as previously mentioned. This procedure was successfully done for pressure equalization between two
packed beds in some cases (Chahbani and Tondeur, 2010), thus allowing substantial savings in computational time besides the simplification of modeling.
Consequently, for pressure equalization between a bed and an empty tank, the whole step could not be decomposed into two independent steps, namely depressurization of the bed and pressurization of
the tank with a constant pressure at the open end (P[eq] ). When connecting together a bed and an empty tank having different initial pressures, it is expected to get always a varying pressure at the
junction point.
Fig. (7) gives the evolution of the adsorbable species molar fraction along the bed and in the tank during equalization. One can note that in the vicinity of the open end of the column, the adsorbed
quantity q decreases drastically just after the beginning of the operation and then begins to increase gradually as shown in Fig. (8). This explains clearly the evolution of the adsorbable species
molar fraction near the open end of the bed. In fact, it increases at the beginning of the equalization step due to desorption as the pressure diminishes notably at the open end of the column. The
adsorbable species molar fraction then decreases regularly as the pressure starts to increase continuously tending towards the equilibrium pressure P[eq] . After the phase of desorption, occuring
during the first moments of the pressure equalization in the vicinity of the open end of the column, an adsorption phase follows due to the increase of the partial pressure of the adsorbable gas.
The substantial reduction of pressure at the open end of the column at the beginning of the pressure equalization step inevitably leads to an increase in desorption. This results in a temperature
decrease as can be seen from Fig. (9) giving the change with time of axial profiles of temperature in the bed and in the tank. The pressures at the open end of the column and in the tank evolve
simultaneously from 1 bar to Peq. The pressure in the tank continuously increases from 1 bar to Peq while at the open end of the column it drops rapidly from 10 bar (initial value of pressure in the
column) to 1 bar (initial value of pressure in thank) and then begins to increase as the pressure in the tank increases. Inside the column, the decrease of temperature is continuous due to continuous
desorption caused by the decrease of pressure. The temperature increase at the open end of the column which follows a temperature decrease observed at the beginning of the step is of course due to
the adsorption (see Fig. 8) caused by the increase in pressure as can be seen in (Fig. 5).
The equilibrium pressure P[eq] is not sensitive to the initial state of the column as is the case of pressure equalization between two columns. From Fig. (10), giving P[eq] variation in function of
the initial adsorbable species molar fraction in the column for several values of P[H] , one can see that for a given initial pressure P[H] , the difference between P[eq] values for all initial
states is of the order of 1 bar. In addition, the equilibrium pressure is not affected by the initial state of the tank. In fact, the value of P[eq] obtained is the same whether the tank is filled
with inert or adsorbable gas. This can be explained by the fact that the phenomenon of adsorption does not intervene in the tank.
It is clear that, given the complexity of the set of partial differential equations, it is very difficult to analytically determine with satisfactory precision the final pressure P[eq] attained and
it therefore seems necessary to resort to numerical simulation to get reliable results. A reliable value of P[eq] is of paramount importance when assessing pressure swing adsorption cycles
performances. P[eq] is usually calculated by a trial and error procedure. A reasonably good initial guess of P[eq] can speed up notably the whole procedure.
Fig. (11) shows a comparison between the variations of P[eq] with Y[i] obtained by numerical simulation and analytical solution proposed herein for P[H] = 20 bars .
It is interesting to note that both solutions provide the same trend of evolution of P[eq] with Y[i]. In both cases, the equilibrium pressure increases initially with Y[i] to attain a maximum value
and then begins a slight decrease. The difference between the two solutions does not exceed 0.5 bar despite the rough approximations considered for the analytical solution. The value of P[eq] given
by the analytical solution is always greater than the one given by the numerical solution. For Y[i] = 0 (the column contains only an inert gas), note the coincidence of the numerical value of P[eq]
with that obtained analytically (from equations 17 or 18). One can often resort to comparison between numerical and analytical solutions in special cases to prove the reliability of modelling and
numerical resolution. It is the case herein.
To validate the simulation results, many pressure equalization experiments have been carried out and compared to simulations. Figs. (12 and 13) show the experimental variation with time of pressure
at the closed end of the column and inside the tank during equalization respectively for many different initial values of P[H] (between 2 and 20 bars with an increment of 2 bars). The initial
pressure prevailing in the tank is always P[L]=1 bar. The column and the tank contain initially pure hydrogen. For the latter experiments, the tank is an empty column having the same length and
diameter as the packed one (D=0.05 m, L=1 m). The arrow indicates direction of increasing initial P[H] value. Pressure at the closed end of the bed decreases while the pressure in the tank increases
during equalization until reaching the equilibrium pressure. For the different experiements, the value of P[eq] obtained is very close to the one given by the equation (17) which is valid for a non
adsorbable gas. Fig. (14) gives a comparison between time change of pressure at the closed end of the column obtained both experimentally and numerically. It can be seen that the simulation allows to
model the experimental results in a satisfactory way. The slight difference that can be noticed is due to the fact that the volume of tubings relating the column and the tank is not taken into
account in modeling. This is why the value of equilibrium pressure obtained by simulation is slightly greater than the experimental one. The same remarks are valid for time variation of pressure in
the tank.
Let us now compare numerical and experimental results when the packed bed initially contains a binary gas, one of which is adsorbable (methane). Fig. (15) shows the experimental change with time of
pressure at the closed end of the column and inside the tank during equalization for many initial states of the column. In fact, prior to pressure equalization, the column is in equilibrium with
different binary mixtures of H[2] and CH[4] (for y[CH][4] =0,0.1, 0.2 and 1.0) at P[H] =10 bars whereas the tank is filled with H[2] at P[L]=1 bar.
The arrow indicates direction of increasing initial methane molar fraction in the column. The end of the equalization step is obtained when the pressures in the bed and the tank become equal. It can
be seen that the equalization time t[eq], corresponding to obtaining the same pressure in the column and the tank, increases with CH[4] molar fraction. The highest and lowest values of t[eq] are
obtained for columns which have been initially in equilibrium with pure methane and pure H[2] respectively. The experimental value of P[eq] obtained when connecting a column initially saturated with
pure CH[4] with the tank is P[eq] =6.56 bars, this value is not far from P[eq] =6.47 bars given by simulation, but very far from P[eq] =4.88 bars if equation (17) is used.
Fig. (16) gives a comparison between the values of the equilibrium pressure P[eq] obtained experimentally, numerically and analytically (from equation (14)). The values of P[H] and P[L] are 10 bars
and 1 bar respectively. The initial methane molar fraction in the packed bed varies from 0 to 1. The experimental values are slightly underestimated by the numerical values and slightly overestimated
by the analytical values. Indeed, the maximum difference does not exceed 0.1 bar when comparison is made with the experimental values. For the experiment with a column containing initially pure
hydrogen (y[CH][4] =0), the experimental value of P[eq] obtained is 4.8 bars whereas the value given both by simulation and analytical solution is 4.88 bars. As previously mentioned, this slight
difference is attributed to the fact of neglecting the volume of tubings connecting the packed bed and the tank. Indeed, the minimum value can not be lower than 4.88 bar obtained by the analytical
solution (equation (22)) for a column containing only an inert gas or by simulation for the same case. The sole source of the discrepancies is to be attributed to the impact of the tubing volume. It
is highly unlikely that the accuracy of the pressure measuring instrument is the cause of this deviation insofar as the value of the equilibrium pressure is obtained both by a manometer (pressure
gauge) installed on the column (observable visually) and by a pressure transducer (electrical signal), the two values obtained are identical.
Fig. (17) shows a comparison between the values of the methane molar fraction Y[eq] in the tank at the end of the equalization step obtained experimentally, numerically and analytically (from
equation (16)). It has to be mentioned that the experimental value of methane molar fraction in the tank is obtained by averaging the values for five samples of gas recovered from the tank after
disconnecting it from the packed bed. The gas analysis is done by infrared absorption spectroscopy. The experimental values are close to the ones obtained by numerical simulation.The values given by
the analytical solution are somewhat less precise, for example, the values obtained experimentally, numerically and analytically are 0.82, 0.84 and 0.91 respectively for the case of a bed initially
saturated with pure methane (y[CH][4] =1). Hence, the numerical simulation of the equalization step is very satisfactory and allows to obtain reliable results either for the equilibrium pressure or
mole fraction in the tank. The analytical solution gives acceptable results despite the simplifying assumptions considered, it permits to assess rapidly P[eq] and Y[eq] without the need to resort to
a cubersome modeling.
One of parameters, among others, which contributes enormously to the optimization of the operation of pressure equalization is the tank volume V[tank] . It should be borne in mind that the goals of
this step are to maximize the pure product transfer from the column to the tank for increasing product recovery and to conserve mechanical energy.
In both cases, it is important that the adsorbable species molar fraction of the gas transferred to the tank at the end of pressure equalization is as low as possible. This would allow a better
partial purge if the tank gas is used to partially purge a column just after the depressurization step, on one hand, and the preservation of the capacity of the adsorption column if the tank gas is
used to pressurize a column at the end of a low pressure purge step on the other hand. It then becomes important to optimize the pressure equalization step so that the gas transferred from the column
to the tank is minimally contaminated by the adsorbable species.
Fig. (18) shows the variation of P[eq] in function of the tank volume. The equilibrium pressure P[eq] decreases notably with V[tank] . Thus, as an example, P[eq] =4.75 bars for V[tank] =5.0V[col] and
P[eq] =13.8 bars for V[tank] =0.5 V[col]. It follows that increasing the volume of the tank presents the disadvantage of lowering enormously pressure. If the gas collected in the tank is intended for
a column pressurization, one will have to choose the right volume of the tank so as to reach the desired pressure in the column to be pressurized at the end of the equalization step.
It is issential to pay a great attention when choosing models to simulate pressure swing processes. The pressure equalization step must be treated with special care since its impact on the assessment
of the overall performance of the process is notable. If the equilibrium pressure is not accurately evaluated, this could lead inevitably to erroneous simulations in the case where the PSA cycle
comprises a pressure equalization step. In fact, if rough approximations are considered, estimated P[eq] may differ from the real value leading to inaccurate simulation results. The analytical
solution proposed herein for assessing the final pressure when connecting a bed and an empty tank could be considered acceptable despite the simplifying assumptions considered. Based on the
comparisons presented, one can conclude that the agreement between the experimental and numerical results relative to the equilibrium pressure and the equilibrium mole fraction of the adsorbable
species in the tank is very satisfactory, thus simulation results could be considered reliable and used safely so as to optimize PSA cycles. If the gas transferred to the tank is destined for
subsequent column pressurization, the choice of the tank volume will depend on the value of the desired pressure to be reached in the column to be pressurized at the end of the equalization step.
b =parameter of Langmuir isotherm, P a^−^1
C =bulk phase concentration, mol/m^3
cp =heat capacity, J /(molK) or J/(kgK)
c[p] =mean intra-particle gas phase concentration, mol/m^3
D[ax] =mass axial dispersion coefficient, m^2/s
D[col] =column diameter, m d[p] : Particle diameter, m
D[p] =pore diffusivity, m^2/s
D[s] =surface diffusivity, m^2/s
h =heat transfer coefficient, W /(m^2s)
H =enthalpy, J /mol L: Bed length, m
N[g] =number of species in the gas mixture
P =total pressure, P a
Q =molar flow rate, mol/s
Q[m] =parameter of Langmuir isotherm, mol/kg
q =adsorbed phase concentration, mol/m^3 or mol/kg
R =gas constant or particle radius, J /(molK) or m
S =cross section of the column, m^2
u =interstitial velocity, m/s
U =internal energy, J /mol
V =volume, m^3
T =temperature, K t: Time, s
Z =compressibility factor of the gas mixture
z =axial coordinate in the bed, m
∆H =heat of adsorption, J /mol
ϵ =interparticle porosity
ϵ[p] =intraparticle porosity
ϵ[t] =total porosity
µ =fluid viscosity, kg/(m s)
ρ =fluid or bed density, kg/m^3
τ =particle tortuosity factor
^∗ =equilibrium
i =refers to species i
a =refers to adsorbed phase
A =refers to the adsorbable species
b =refers to the bed
col =refers to column
e =refers to surroundings
eq =refers to the equilibrium state
f eed =at the bed entrance
g =refers to gas phase
H =refers to the high value
i =refers to species i
I =refers to the inert
L =refers to the low value
out =at the bed exit
p =refers to adsorbent particle
s =refers to solid phase
tank =refers to tank
0 =initial condition
Not applicable.
The authors declare no conflict of interest, financial or otherwise.
Declared none.
N.H. Berlin, U.S. Paten, vol. 280, no. 3, p. 536, 1966. [to Exxon Research and Engineering.].
R. Banerjee, K.G. Narayankhedkar, and S.P. Sukhatme, "Exergy analysis of pressure swing adsorption processes for air separation",
Chem. Eng. Sci.,
vol. 45, no. 2, pp. 467-475, 1990.[
CrossRef Link
A. Bossy, and D. Tondeur, "A non-linear equilibrium analysis of blowdown policy in pressure swing adsorption separation",
Chem. Eng. J.,
vol. 48, pp. 173-182, 1992.[
CrossRef Link
A.S. Chiang, "An analytical solution to equilibrium PSA cycles",
Chem. Eng. Sci.,
vol. 51, no. 2, pp. 207-216, 1996.[
CrossRef Link
O.J. Smith, and A.W. Westerberg, "The optimal design of pressure swing ad- sorption systems",
Chem. Eng. Sci.,
vol. 46, no. 12, pp. 2967-2976, 1991.[
CrossRef Link
K. Warmuzinski, "Effect of pressure equalization on power requirements in PSA systems",
Chem. Eng. Sci.,
vol. 57, pp. 1475-1478, 2002.[
CrossRef Link
K. Warmuzinski, and M. Tanczyk, "Calculation of the equalization pressure in PSA systems",
Chem. Eng. Sci.,
vol. 58, pp. 3285-3289, 2003.[
CrossRef Link
J.A. Delgado, and A.E. Rodrigues, "Analysis of the boundary conditions for the simulation of the pressure equalization step PSA cycles",
Chem. Eng. Sci.,
vol. 63, pp. 4452-4463, 2008.[
CrossRef Link
Chahbani M.H., and D. Tondeur, "Predicting the final pressure in the equalization step of PSA cycles", Sep. Pur. Technol., vol. 71, pp. 225-232.
M. Yavary, H. Ale-Ebrahim, and C. Falamaki, "The effect of number of pressure equalization steps on the performance of pressure swing adsorption process",
Chem. Eng. Prog.,
vol. 87, pp. 35-44, 2015.[
CrossRef Link
W.D. Marsh, R.C. Hoke, F.S. Pramuk, and C.W. Skarstrom, "Pressure equalization depressuring in heallen adsorption", U.S. Patent, vol. 3, pp.142-547, July 28, 1964.
M. Yavary, H. Ale-Ebrahim, and C. Falamaki, "The effect of reliable prediction of final pressure during pressure equalization steps on the performance of PSA cycles",
Chem. Eng. Sci.,
vol. 66, pp. 2587-2595, 2011.[
CrossRef Link
D. Tondeur, and C. Wankat Phillip, "Gas Purification by Pressure Swing Ad- sorption",
Separ. Purif. Rev.,
vol. 14, no. 2, pp. 157-212, 1985.[
CrossRef Link
L.R. Petzold, A Description of DASSL: a Differential / Algebraic System Solver., Sandia National Laboratories: Livermore, California, 1982.
Chahbani M.H., "Separation de gaz par adsorption modulee en pression", PhD thesis, ENSIC, INPL, France, 1996.
E. Alpay, C.N. Kenney, and D.M. Scott, "Simulation of rapid pressure swing adsorption and reaction processes",
Chem. Eng. Sci.,
vol. 48, no. 18, pp. 3173-3186, 1993.[
CrossRef Link
J.M. Prausnitz, R.C. Reid, and B.E. Poling, The Properties of Gases and Liquids., McGraw-Hill: New York, 1987. | {"url":"https://www.openchemicalengineeringjournal.com/VOLUME/11/PAGE/33/FULLTEXT/","timestamp":"2024-11-07T11:55:39Z","content_type":"text/html","content_length":"440786","record_id":"<urn:uuid:71e45787-e012-4e2f-9b16-018fb997ca2c>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00209.warc.gz"} |
om Online Grader
You are given a set $P$ with $n$ points in the 2-dimensional plane. We want to know if a point $w$ can be written as a linear combination of points from $R$, where $R \subseteq P$, that is:
$w = \displaystyle\sum_{i=1}^{|R|} {\mu}_i \cdot R_i$
such that ${\mu}_i \ge 0$ for $i = 1,2,...,|R|$ and $\displaystyle\sum_{i=1}^{|R|} {\mu}_i = 1$.
For each point $w$ you should find a set $R$ with at most 5 elements ($|R| \le 5$) such that coefficients satisfy previous rule, or tell if such representation doesn't exist..
• Let $p = (x, y)$ be a point and $c$ a real number. The product $c \cdot p$ is defined as point $s = (c \cdot x, c \cdot y)$.
• Let $p_1 = (x_1, y_1)$ and $p_2 = (x_2, y_2)$ be two points. The sum $p_1 + p_2$ is defined as point $s = (x_1 + x_2, y_1 + y_2)$.
First line of input contains an integer $n$ $(1 \leq n \leq 10^5)$, the number of points in $P$.
Next $n$ lines contains two integers $x_i, y_i$ $(0 \le x_i, y_i \le 10^9)$, the coordinates of the $i$-th point.
Next line contains an integer $q$ $(1 \le q \le 10^4)$, representing the amount of points $w$ to answer.
The remaining $q$ lines contains two integers $x_w, y_w(0 \le x_w, y_w \le 10^9)$, the coordinates of each point $w$.
For each point $w$:
• If there is no way to represent $w$ as described, print the word "impossible" (without the quotes).
• If there is a solution, print a line with an integer $m$, indicating the amount of elements in $R$.
Then print $m$ lines, each of them with an integer $i$ indicating that the $i$-th point of $P$ belongs to $R$, followed by a real number $\mu_i$ indicating its coefficient.
Note: Your solution will be considered correct if it follows all the described restrictions and the squared distance between $w$ and the point $w' = \displaystyle\sum_{i=1}^{|R|} {\mu}_i \cdot R_i$
obtained does not exceed $10^{-5}$. | {"url":"https://matcomgrader.com/problem/9688/new-combination/","timestamp":"2024-11-03T04:04:56Z","content_type":"text/html","content_length":"27900","record_id":"<urn:uuid:de15a0ea-f123-4151-9a77-65a2aa1df884>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00195.warc.gz"} |
Adi Adiredja
• Associate Professor
• Member of the Graduate Faculty
• Ph.D. Mathematics Education
□ University of California, Berkeley, Berkeley, California, United States
□ Leveraging Students’ Intuitive Knowledge About the Formal Definition of a Limit
• M.A. Mathematics
□ University of California, Berkeley, Berkeley, California, United States
□ The Weirstrass Approximation Theorem and the Positivity of Kernel
• B.A. Mathematics
□ University of California, Berkeley, Berkeley, California, United States
□ The Relation of Mathematical Relevance to Student Opposition in Low-track Mathematics Classrooms
• A.A. General Studies
□ Irvine Valley College, Irvine, California, United States
Work Experience
• University of Arizona, Tucson, Arizona (2015 - Ongoing)
• Oregon State University, Corvallis, Oregon (2014 - 2015)
• A Feature Article in Diversity in Action
□ Diversity in Action, Fall 2021
• JRME Outstanding Reviewer
□ Journal for Research in Mathematics Education, Fall 2021
• APIDA Faculty Spotlight
□ The University of Arizona Faculty Affairs, Spring 2021
• Editors’ Pick 2021
□ PRIMUS: Problems, Resources, and Issues in Mathematics Undergraduate Studies, Spring 2021
• Distinguished Early Career Teaching Award
□ College of Science, the University of Arizona, Summer 2020
• Editors’ Pick 2020
□ The International Journal for Research in Undergraduate Mathematics Education, Spring 2020
• Spirit of ASEMS Program Award
□ Arizona’s Science, Engineering, & Mathematics Scholars (ASEMS) – The University of Arizona, Spring 2019
• Meritorious Citation – Proceedings of the 18th Annual Conference on Research in Undergraduate Mathematics Education
□ SIGMAA on RUME, Spring 2015
My research lies in the intersections between advanced mathematics, equity and cognition. I investigate ways that students make sense of challenging mathematical topics in undergraduate curriculum
with a particular interest in the role intuitive knowledge in learning formal mathematics. I explore ways that our views of epistemology and learning determine what kind of knowledge and what kind of
students get privileged in the classroom. I am also interested in broader equity and diversity issues in undergraduate mathematics education.
I am interested in the instruction of calculus and linear algebra. In particular, I am interested in finding ways to make the content of these courses accessible and coherent for students. I always
look for ways to leverage students' intuitions from everyday experiences in learning formal mathematics.
2024-25 Courses
• Dissertation
MATH 920 (Fall 2024)
• Independent Study
MATH 599 (Fall 2024)
• Math of Bio-Systems
MATH 119A (Fall 2024)
2023-24 Courses
• Independent Study
MATH 599 (Spring 2024)
• Intro Num Thry+Mod Alg
MATH 315 (Spring 2024)
• Independent Study
MATH 599 (Fall 2023)
• Math of Bio-Systems
MATH 119A (Fall 2023)
2022-23 Courses
• Independent Study
MATH 599 (Spring 2023)
• Intro Num Thry+Mod Alg
MATH 315 (Spring 2023)
• Dissertation
MATH 920 (Fall 2022)
• Independent Study
MATH 599 (Fall 2022)
2021-22 Courses
• Dissertation
MATH 920 (Spring 2022)
• Independent Study
MATH 599 (Spring 2022)
• Dissertation
MATH 920 (Fall 2021)
• Rsrch Lrng/Mathematics
MATH 506A (Fall 2021)
• Synthesis/Math Concepts
MATH 407 (Fall 2021)
2020-21 Courses
• Dissertation
MATH 920 (Spring 2021)
• Honors Thesis
MATH 498H (Spring 2021)
• Intro Num Thry+Mod Alg
MATH 315 (Spring 2021)
• Honors Thesis
MATH 498H (Fall 2020)
• Independent Study
MATH 599 (Fall 2020)
• Intro to Linear Algebra
MATH 313 (Fall 2020)
• Synthesis/Math Concepts
MATH 407 (Fall 2020)
2019-20 Courses
• Independent Study
MATH 599 (Spring 2020)
• Intro Num Thry+Mod Alg
MATH 315 (Spring 2020)
• Intro to Linear Algebra
MATH 313 (Fall 2019)
• Synthesis/Math Concepts
MATH 407 (Fall 2019)
• Thesis
MATH 910 (Fall 2019)
2018-19 Courses
• Intro Num Thry+Mod Alg
MATH 315 (Spring 2019)
• Thesis
MATH 910 (Spring 2019)
• Rsrch Lrng/Mathematics
MATH 506A (Fall 2018)
• Synthesis/Math Concepts
MATH 407 (Fall 2018)
2017-18 Courses
• Intro Num Thry+Mod Alg
MATH 315 (Spring 2018)
• Synthesis/Math Concepts
MATH 407 (Fall 2017)
2016-17 Courses
• Directed Research
MATH 392 (Summer I 2017)
• Intro Num Thry+Mod Alg
MATH 315 (Spring 2017)
• Intro to Linear Algebra
MATH 313 (Fall 2016)
• Synthesis/Math Concepts
MATH 407 (Fall 2016)
2015-16 Courses
• Intro to Linear Algebra
MATH 313 (Spring 2016)
Scholarly Contributions
• Adiredja, A. P. (2021).
Cognition, Interdisciplinarity, and Equity
. In Handbook of the Mathematics and the Arts Sciences(pp 1-26). Cham, Switzerland: Springer. doi:10.1007/978-3-319-57072-3_73
• Adiredja, A. (2020). Cognition, interdisciplinarity, and equity. In Handbook of the Mathematics and the Arts Sciences. Cham, Switzerland: Springer. doi:https://doi.org/10.1007/
• Adiredja, A. P. (2018). Building on “Misconceptions” and Students’ Intuitions in Advanced Mathematics. In Toward Equity and Social Justice in Mathematics Education(pp 59-76). Cham, Switzerland:
Springer, Cham. doi:10.1007/978-3-319-92907-1_4
More info
The goal of this chapter is to challenge deficit perspectives about students and their knowledge. I argue that predominant reliance on formal procedural knowledge in most undergraduate
mathematics curricula and the oftentimes focus on students’ misconceptions contribute to the racialized and gendered inequities in mathematics education. I discuss my design of an instructional
tool to learn the formal limit definition in calculus called the Pancake Story. The story builds on a misconception and student’s everyday intuitions. A successful sensemaking episode by a
Chicana student illustrates the utility of everyday intuitions leveraged in the story and the inaccuracy and harm of the notion of “misconceptions.” Recognizing misconceptions as students’
attempts to make sense of mathematics, solidifying such knowledge by finding an appropriate context for it, and leveraging other knowledge resources are explicit ways to challenge dominant power
structures in our practice.
• Adiredja, A. P. (2018). The politics of intuitive knowledge in advanced mathematics. In Toward Equity and Social Justice in Mathematics Education.
• Adiredja, A., Leyva, L., Seashore, K., & Zavala, M. (2017). Equity in Practice. In Mathematical Association of America Instructional Practice Guide(pp 157-170).
• Louie, N., Adiredja, A., & Jessup, N. (2021). Teacher noticing from a sociopolitical perspective: The FAIR framework for anti-deficit noticing. ZDM Mathematics Education, 53, 95-107. doi:https://
• Adiredja, A. (2021). Students’ “Struggles” with Temporal Order in the Limit Definition: Uncovering Resources Using Knowledge in Pieces. International Journal for Mathematical Education in Science
and Technology, 52(9), 1295-1321. doi:https://doi.org/10.1080/0020739X.2020.1754477
• Adiredja, A. P. (2020).
Everyday Examples in Linear Algebra: Individual and Collective Creativity
. Journal of Humanistic Mathematics, 10(2), 40-75. doi:10.5642/jhummath.202002.05
• Adiredja, A. P. (2020). Students’ struggles with temporal order in the limit definition: uncovering resources using knowledge in pieces. International Journal of Mathematical Education in Science
and Technology, 52(9), 1295-1321. doi:10.1080/0020739x.2020.1754477
More info
A few case studies have suggested students’ “struggles” with the temporal order of epsilon and delta in the formal limit definition. This study problematizes this hypothesis by exploring
students’ claims in different contexts and uncovering productive resources from students to make sense of the critical relationship between epsilon and delta. A three-step analysis supports these
aims. The analysis starts by investigating the generalizability and specificity of the struggle with the temporal order. Then, analyzing students’ justifications reveals dominant ideas supporting
students’ claims. Finally, attending to the foci of the justifications reveals the potential resources to make sense of the temporal order. This study illustrates the productivity of the
principles context sensitivity, cueing priority, and reliability priority from Knowledge in Pieces in understanding students’ “struggles.” The study offers the three-step analysis as a method to
approach students’ understanding from an anti-deficit perspective.
• Adiredja, A. P., & Louie, N. (2020).
Untangling the Web of Deficit Discourses in Mathematics Education.
. For the Learning of Mathematics, 40(1), 42-46..
• Adiredja, A., & Louie, N. (2020). Understanding the web of deficit discourse in mathematics education.. For the Learning of Mathematics, 40(1), 42-46.
• Adiredja, A., & Zandieh, M. (2020). Everyday examples in linear algebra: Individual and collective creativity. Journal of Humanistic Mathematics, 10(2), 40-75. doi:https://doi.org/10.5642/
• Adiredja, A., & Zandieh, M. (2020). The lived experience of linear algebra: A Counter-story about women of color in mathematics.. Educational Studies in Mathematics, 104(2), 239-260. doi:https://
• Adiredja, A. (2018). Anti-deficit narratives: Politics of mathematical sense making. Journal for Research in Mathematics Education.
• Adiredja, A. (2021). The pancake story and the epsilon-delta definition. PRIMUS, 31(6), 662-677. doi:https://doi.org/10.1080/10511970.2019.1669231
• Adiredja, A. P. (2019).
The Pancake Story and the Epsilon–Delta Definition
. PRIMUS: Problems, Resources, and Issues in Mathematics Undergraduate Studies, 31(6), 662-677.. doi:10.1080/10511970.2019.1669231
• Adiredja, A. P. (2019). Anti-Deficit Narratives: Engaging the Politics of Research on Mathematical Sense Making. Journal for Research in Mathematics Education, 50(4), 401-435. doi:10.5951/
More info
This article identifies a self-sustaining system of deficit narratives about students of color as an entry point for studies of cognition to engage with the sociopolitical context of mathematical
learning. Principles from sociopolitical perspectives and Critical Race Theory, and historical analyses of deficit thinking in education research, support the investigation into the system. Using
existing research about students' understanding of a limit in calculus as context, this article proposes a definition of a deficit perspective on sense making and unpacks some of its tenets. The
data illustration in this article focuses on the mathematical sense making of a Chicana undergraduate student. The analysis uses an anti-deficit perspective to construct a sensemaking
counter-story by a woman of color. The counter-story challenges existing deficit master-narratives about the mathematical ability of women of color. The article closes with a proposal for an
anti-deficit method for studying the sense making of students of color.
• Adiredja, A. P., Bélanger-Rioux, R., & Zandieh, M. (2019).
Everyday Examples About Basis From Students: An Anti-Deficit Approach in the Classroom
. PRIMUS: Problems, Resources, and Issues in Mathematics Undergraduate Studies, 30(5), 520-538. doi:10.1080/10511970.2019.1608609
• Adiredja, A., Belanger-Rioux, R., & Zandieh, M. (2020). Everyday examples about basis from students: An anti-deficit approach in the classroom. PRIMUS, 30(5), 520-538. doi:https://doi.org/10.1080
• Zandieh, M., Adiredja, A. P., & Knapp, J. (2019).
Exploring everyday examples to explain basis from eight German male graduate STEM students.
. ZDM Mathematics Education, 51, 1153–1167. doi:10.1007/s11858-019-01033-z
• Zandieh, M., Adiredja, A., & Knapp, J. (2019). Exploring Everyday Examples to Explain Basis: Insights into Student Understanding from Students in Germany. ZDM Mathematics Education.
• Adiredja, A. P., & Andrews-Larson, C. (2017).
Taking the Sociopolitical Turn in Postsecondary Mathematics Education Research
. International Journal for Research in Undergraduate Mathematics Education, 3(3), 444-465. doi:10.1007/s40753-017-0054-5
• Adiredja, A. P., & Andrews-Larson, C. (2017). Taking the sociopolitical turn in postsecondary mathematics education research. International Journal for Research in Undergraduate Mathematics
Education, 3(3), 444-465.
• Karunakaran, S. S., & Adiredja, A. P. (2016). Dual Analyses Examining Proving Process: Grounded Theory and Knowledge Analysis.. International Group for the Psychology of Mathematics Education.
Proceedings Publications
• Adiredja, A. (2020). Mathematical limitations as opportunities for creativity: An anti-deficit perspective. In S. S. Karunakaran, Z. Reed, & A. Higgins (Eds.),. In Proceedings of the 23rd Annual
Conference on Research in Undergraduate Mathematics Education, 814-819.
• Adiredja, A. P. (2019, April).
Synthesizing Community Cultural Wealth With STEM Communities of Practice for Students of Color
. In American Educational Research Association (AERA) Annual Meeting.
• Adiredja, A., & Louie, N. (2019, January). An ecological perspective on the reproduction of deficit discourses in mathematics education.. In Mathematics Education in Society, In J. Subramanian
(Ed.). Proceedings of the Tenth International Mathematics Education and Society Conference, Paper 63.
• Adiredja, A., & Zandieh, M. (2017, February). Using intuitive examples from women of color to reveal nuances about basis.. In (Eds.) A. Weinberg, C. Rasmussen, J. Rabin, M. Wawro, and S. Brown,
Proceedings of the 20th Annual Conference on Research in Undergraduate Mathematics Education, 346-359.
• Adiredja, A. P., & Karunakaran, S. (2016, Nov). Dual analysis examining proving process: Grounded theory and knowledge analysis. In The 38th annual meeting of the North American Chapter of the
International Group for the Psychology of Mathematics Education, 1573-1580.
• Adiredja, A. P. (2015). Exploring Roles of Cognitive Studies in Equity: A Case for Knowledge in Pieces.. In Psychology of Mathematics Education North American Chapter (PME-NA) Annual Conference.
• Adiredja, A. P. (2015, Nov). Exploring roles of cognitive studies in equity: a case for knowledge in pieces.. In The 37th annual meeting of the North American Chapter of the International Group
for the Psychology of Mathematics Education, 1269-1276.
• Adiredja, A. (2021, April). Deficit perspectives on students’ mathematical thinking and white supremacy. In N. Nishi (Organizer), Racism in Mathematics Education. Symposium conducted at the 2021
American Educational Research Association Annual [Virtual] Meeting.American Educational Research Association.
• Adiredja, A. (2021, April). One brick at a time: Dismantling structural inequities in math education with the anti-deficit perspective.. California State University at Long Beach Mathematics
Department Colloquium. Long Beach, CA.: California State University at Long Beach.
• Adiredja, A. (2021, June). Deficit perspectives on students’ mathematical thinking and white supremacy. Ignite Presentation at the TODOS: Mathematics for All [Virtual] ConferenceTODOS Mathematics
for All.
• Adiredja, A., & Rios, J. (2021, April). Culturally affirming engagement in undergraduate mathematics. In L. Leyva (Organizer), Equity Perspectives on Research in Undergraduate Teaching and
Learning. Symposium conducted at the 2021 American Educational Research Association Annual [Virtual] MeetingAmerican Educational Research Association.
• Adiredja, A., & Zandieh, M. (2021, July). Creativity in linear algebra through interactions.. The 14th International Congress on Mathematics Education. Shanghai, China.: International Congress on
Mathematics Education.
• Adiredja, A. (2020, January). Not Singing the Quadratic Formula. Mathematics Educator Appreciation Day (MEAD) Conference. Tucson, AZ: Center for Recruitment and Retention.
• Adiredja, A. (2020, June). Challenging deficit thinking about students and their thinking. Pedagogy Bootcamp. Tucson, AZ: The College of Veterinary Medicine at the University of Arizona.
• Adiredja, A. (2020, June). “Those who can’t do, teach”: Examining deficit narratives. Center for Recruitment and Retention Summer Institute. Tucson, AZ: Center for Recruitment and Retention,
Mathematics Department, University of Arizona.
• Adiredja, A. (2020, October). An anti-deficit perspective on the mathematical thinking of minoritized students: From counter-narratives to creative thinking (Plenary). The 7th International
[Virtual] Conference on Mathematics, Science, and Education 2020. Semarang, Indonesia: Faculty of Science and Natural Sciences of Universitas Negeri Semarang.
• Adiredja, A. (2020, October). Focusing on deficits in students’ mathematical work: A norm or a form of racism?. The Arizona Mathematical Association of Two-Year Colleges (ArizMATYC) [Virtual]
Conference. Phoenix, AZ: The Arizona Mathematical Association of Two-Year Colleges.
• Adiredja, A. (2020, Spring). Focusing on deficits in students’ mathematical work: A norm or a form of racism? (Plenary). The Third Annual Mathematics Equity in Southern California (MESCal)
Unconference on Equity and Inclusivity in the Mathematical Sciences. Pomona, CA: Mathematics Equity in Southern California.
• Adiredja, A. (2020, Spring). “Those who can’t do, teach”: Examining deficit narratives.. Mathematics Educator Appreciation Day (MEAD) Conference. Tucson, AZ.
• Adiredja, A., & Zandieh, M. (2020, Spring). Mathematical limitations as opportunities for creativity: An anti-deficit perspective. The 23rd Annual Conference on Research in Undergraduate
Mathematics Education. Boston, MA: Mathematical Association of America.
• Adiredja, A. (2019, April). An Anti-deficit Framework for Mathematical Sense-making by Women of Color. Annual Conference of the American Educational Research Association (AERA). Toronto, Canada:
More info
This paper is presented as part of a symposium organized by Luis Leyva, titled "Exploring Equity in Undergraduate Mathematics Education through Different Dimensions of Historically Marginalized
Students’ Experiences"
• Adiredja, A. (2019, April). Critical perspectives and Knowledge in Pieces (KiP) Paper Session Organizer. Annual Conference of the American Educational Research Association (AERA). Toronto,
• Adiredja, A., & Louie, N. (2019, January). An ecological perspective on the reproduction of deficit discourses in mathematics education. Mathematics Education and Society. Hyderabad, India:
Mathematics Education and Society.
• Adiredja, A., & Rios, J. (2019, April). Synthesizing Community Cultural Wealth with STEM Communities of Practice for Students of Color. Annual Conference of the American Educational Research
Association (AERA). Toronto, Canada: AERA.
• Yeh, C., Louie, N., Kokka, K., Jong, C., Eli, J., Chao, T., & Adiredja, A. (2019, April). Growing Against the Grain: Counterstories of Asian American Mathematics Education Scholars. Annual
Conference of the American Educational Research Association (AERA). Toronto, Canada: AERA.
• Adiredja, A. (2018, January). Social justice and teaching in undergraduate mathematics. Joint Mathematics Meetings/ MAA Project NExT session on incorporating social justice projects into the
college mathematics curriculum. San Diego, CA: AMS/MAA.
• Adiredja, A. (2018, November). Developing counter-narratives through anti-deficit teaching. National Math Summit and American Mathematical Association of Two-Year Colleges (AMATYC) Conference.
Orlando, Florida: AMATYC.
• Adiredja, A. (2018, October). Constructing and analyzing everyday examples about basis: An anti-deficit approach in teaching. American Mathematical Society Sectional Meeting. San Francisco:
American Mathematical Society.
• Adiredja, A., & Franco, M. (2018, February). Impact of critical conversations about race and gender in a calculus workshop on student success. Critical Issues in Mathematics Education (CIME)
Conference. Berkeley, CA: The Mathematical Research Institute (MSRI).
• Adiredja, A., & Zandieh, M. (2018, January). Resources from women of color to understand basis: A counter-narrative.. Joint Math Meetings/ MAA Invited Paper Session on Research in Undergraduate
Mathematics Education: Highlights from the 2017 Annual SIGMAA on RUME Conference. San Diego, CA: AMS/MAA.
• Adiredja, A., Leyva, L., & Mendoza, J. (2018, February). Impacts of peer mentorship in a calculus workshop on the mentors’ identities and academic experiences in undergraduate STEM. Annual
Conference on Research in Undergraduate Mathematics Education. San Diego: Mathematics Association of America.
• Adiredja, A., Leyva, L., & Mendoza, J. (2018, February). Impacts of peer mentorship in a calculus workshop on the mentors’ identities and academic experiences in undergraduate STEM. The 21st
Annual Conference on Research in Undergraduate Mathematics Education. San Diego, CA: The Special Interest Group of the Mathematical Association of America (SIGMAA) on Research for Undergraduate
Mathematics Education (RUME).
• Adiredja, A. (2017, March). Considering Equity in Undergraduate Mathematics. Department Colloquium. San Francisco, CA: San Francisco State University.
• Adiredja, A. (2017, March). Intersection of Power and Cognition in Undergraduate Mathematics. Mathematical Sciences Research Institute (MSRI) Critical Issues in Mathematics Education (CIME) 2017.
Berkeley, CA: Mathematical Sciences Research Institute (MSRI).
• Adiredja, A., & Zandieh, M. (2017, February). Using women of color’s intuitive examples to reveal nuances about basis. Conference for Research in Undergraduate Mathematics Education. San Diego:
Mathematical Association of America.
• Adiredja, A., Schumacher, C., & Washington, T. (2017, July). IBL diversity: Yesterday, today, and tomorrow. MathFest/ Panelist at the Inquiry Based Learning (IBL) Mini Conference. Conference:
Chicago, IL.: Mathematical Association of America.
• Adiredja, A. P. (2016, Jan). Using the pancake story to make sense of the epsilon delta definition.. Joint Mathematics Meetings. Seattle, WA: Mathematical Association of America.
• Adiredja, A. P., & Karunakaran, S. (2016, Nov). Dual analysis examining proving process: Grounded theory and knowledge analysis. The 38th annual meeting of the North American Chapter of the
International Group for the Psychology of Mathematics Education. Tucson, AZ: University of Arizona.
• Adiredja, A. P. (2015, Nov). Exploring roles of cognitive studies in equity: a case for knowledge in pieces. The 37th annual meeting of the North American Chapter of the International Group for
the Psychology of Mathematics Education. East Lansing, MI: Michigan State University.
Poster Presentations
• Louie, N., Jessup, N., & Adiredja, A. (2021, April). Reframing students, math, and interactions for anti-deficit noticing. In M. T. Kisa, & E. van Es (Organizers), L. Leyva (Organizer),
Conceptualizing Teacher Noticing in Research on Teaching and Teacher Learning. Structured poster session conducted at the 2021 American Educational Research Association Annual [Virtual]
Meeting.American Educational Research Association.
• Knapp, J., Zandieh, M., & Adiredja, A. (2018, February). Using Everyday Examples to Understand the Concept of Basis. Annual Conference on Research in Undergraduate Mathematics Education. San
Diego: Mathematical Association of America. | {"url":"https://profiles.arizona.edu/person/adiredja","timestamp":"2024-11-05T10:41:44Z","content_type":"text/html","content_length":"87073","record_id":"<urn:uuid:460259a0-5e81-478c-a48d-ec72a26689a9>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00429.warc.gz"} |
Optimizing the number of robots for web search engines for Telecommunication Systems
Telecommunication Systems
Optimizing the number of robots for web search engines
View publication
Robots are deployed by a Web search engine for collecting information from different Web servers in order to maintain the currency of its data base of Web pages. In this paper, we investigate the
number of robots to be used by a search engine so as to maximize the currency of the data base without putting an unnecessary load on the network. We use a queueing model to represent the system. The
arrivals to the queueing system are Web pages brought by the robots: service corresponds to the indexing of these pages. The objective is to find the number of robots, and thus the arrival rate of
the queueing system, such that the indexing queue is neither starved nor saturated. For this, we consider a finite-buffer queueing system and define the cost function to be minimized as a weighted
sum of the loss probability and the starvation probability. Under the assumption that arrivals form a Poisson process, and that service times are independent and identically distributed random
variables with an exponential distribution, or with a more general service function, we obtain explicit/numerical solutions for the optimal number of robots to deploy. © 2001 Kluwer Academic | {"url":"https://research.ibm.com/publications/optimizing-the-number-of-robots-for-web-search-engines","timestamp":"2024-11-04T09:14:40Z","content_type":"text/html","content_length":"74194","record_id":"<urn:uuid:2c6b9610-2b63-47d9-80e4-d812e26c7e60>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00405.warc.gz"} |
The Mathematical Principles of Natural Philosophy
The Mathematical Principles of Natural Philosophy, Volume 1
Isaac Newton
Isaac Newton's The Mathematical Principles of Natural Philosophy translated by Andrew Motte and published in two volumes in 1729 remains the first and only translation of Newton's Philosophia
naturalis principia mathematica, which was first published in London in 1687. As the most famous work in the history of the physical sciences there is little need to summarize the contents.--J.
Norman, 2006.
Section 10 168
Section 11 176
Section 12 177
Section 13 196
Section 14 218
Section 15 291
Section 16 292
Section 17 310
Bibliographic information | {"url":"https://books.google.co.uk/books/download/The_Mathematical_Principles_of_Natural_P.pdf?id=b3RgvRUi2FsC&hl=en&capid=AFLRE72_RIcVABnny4iur-_EpRvC2-XdsmsfhMO2c5JX4EHLsqNyHRkk9b_DxqZUS-dByrN8_ycawW9Zgl8g0cegzMdvzNcABw&continue=https://books.google.co.uk/books/download/The_Mathematical_Principles_of_Natural_P.pdf%3Fid%3Db3RgvRUi2FsC%26output%3Dpdf%26hl%3Den&output=html_text","timestamp":"2024-11-03T07:51:19Z","content_type":"text/html","content_length":"59102","record_id":"<urn:uuid:053a5c96-884a-46f1-903b-e916c3395589>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00659.warc.gz"} |
Difference in DB Depreciation answers
07-01-2018, 03:02 AM
Post: #1
Carsen Posts: 206
Member Joined: Jan 2017
Difference in DB Depreciation answers
The HP Prime gives a different answer for following declining-balance depreciation problem when compared to the HP-12C. The problem is on page 141 of the HP-12C user's guide, except I'm using a
factor of 200% instead of 150% . Here are the problem's variables...
Cost = $50,000
Salvage = $8,000
Useful Life = 6 years
Factor = 200%
First Used in the beginning of September.
The Prime and 12C give similar answers for depreciation and the remaining depreciation for years 1, 2, 3, & 4. Years 5 and 6 are when the answers are way off. This is the answers each calculator
provides for the depreciation for that year.
Year 5:
Year 6:
My Prime has the latest firmware and I have checked what I have entered into my calculators more than twice. I even re-entered my partial year declining-balance depreciation program into my 12C. I
always doubt the user before I begin to suspect the calculator. So this means there is no input error. What do you all think is going on?
07-01-2018, 11:33 AM
Post: #2
roadrunner Posts: 450
Senior Member Joined: Jun 2015
RE: Difference in DB Depreciation answers
If you use DB with SL crossover:
You get the same answer as the 12c for years 5 and 6:
07-01-2018, 11:36 AM
Post: #3
Tim Wessman Posts: 2,293
Senior Member Joined: Dec 2013
RE: Difference in DB Depreciation answers
So the next question... is the SL crossover what the 12C is doing nomrally or is Prime calculating the wrong one for the setting?
Although I work for HP, the views and opinions I post here are my own.
07-01-2018, 11:58 AM
Post: #4
roadrunner Posts: 450
Senior Member Joined: Jun 2015
RE: Difference in DB Depreciation answers
I don't know, but calculator soup here:
gives the same answer as the prime using sl crossover:
07-01-2018, 05:29 PM
Post: #5
Carsen Posts: 206
Member Joined: Jan 2017
RE: Difference in DB Depreciation answers
For this particular problem, it seems that both declining balance and declining balance with straight line crossover have the same answers. This will why the Prime yields the same answer as the
partial declining balance program on the 12C
The 12C has two programs. The first one is a partial declining balance depreciation program that takes 37 steps total. The other program is declining balance with a Straight line crossover, which
takes 95 steps. Both programs generate the same answers for the problem.
I wonder if there this is caused by my lack of knowledge about these depreciation types? I'll keep looking into it. Let me know if there is anything else you all find. Thanks
07-02-2018, 05:59 AM
Post: #6
cyrille de brébisson Posts: 1,047
Senior Member Joined: Dec 2013
RE: Difference in DB Depreciation answers
Is the "First USe" set to 1 on Prime?
Although I work for the HP calculator group, the views and opinions I post here are my own. I do not speak for HP.
07-02-2018, 01:42 PM
(This post was last modified: 07-02-2018 01:44 PM by Claudio L..)
Post: #7
Claudio L. Posts: 1,885
Senior Member Joined: Dec 2013
RE: Difference in DB Depreciation answers
(07-01-2018 11:36 AM)Tim Wessman Wrote: So the next question... is the SL crossover what the 12C is doing nomrally or is Prime calculating the wrong one for the setting?
From the table roadrunner shows, there's no SL crossover happening:
Depreciation% = (1/Years)*Factor = 1/6*200/100 = 1/3
First Year:
DB = $50000*1/3 = $16666.67 for a full year, since it started in September (only 4 months out of 12):
DB = $16666.67 *4/12 = $5555.56 (in agreement with HP Prime)
Value = $50000 - 5555.56 = $44444.44
Now for the second row...
DB = $44444.44/3 = $14814.81
Value = $44444.44-$14814.81 = $29629.62
And so on. Basically this whole application merely computes the initial Depreciation% then does << DUP Depreciation% * - >> on each row.
Keep proceeding the same way and you'll find all the values in the table are correct. The last year you can only depreciate the difference you have left over those $8000 salvage value, hence only
$779 and change.
Since the procedure didn't change, those results are consistent with no SL crossover.
To do it with SL crossover, at each row you compute how much the straight line depreciation would be with the years you have left. If it's more than the other method, then you switch to straight line
for the rest of the years.
In this case:
First year, DB = (50000-8000)/6 = $7000 (since we got $16666.67, use the other method).
2nd year, DB = ($44444.44 - $8000)/5 = $7288.88 vs $14k and change, so keep using the other method
3rd year: DB = (29629.62-8000)/4 =$5407.41 vs $9876, stick to same method
and if you keep going, you'll find the straight line method is always less so there's no crossover happening.
Now if the Prime gives different results with and without SL crossover, there's something fishy going on.
07-02-2018, 04:42 PM
Post: #8
Carsen Posts: 206
Member Joined: Jan 2017
RE: Difference in DB Depreciation answers
(07-02-2018 05:59 AM)cyrille de brébisson Wrote: Is the "First USe" set to 1 on Prime?
Yes. When I come across an error with a calculator, I always doubt the user. Hence, I have triple checked all of my inputs. I put the number 9 into First Use to tell the calculator that the item was
first used in the beginning of September.
(07-02-2018 01:42 PM)Claudio L. Wrote: Now if the Prime gives different results with and without SL crossover, there's something fishy going on.
Pictures are worth a thousand words. So here is the pictures of my results using the declining balance.
User(s) browsing this thread: 1 Guest(s) | {"url":"https://hpmuseum.org/forum/thread-10984-post-99831.html","timestamp":"2024-11-09T20:34:27Z","content_type":"application/xhtml+xml","content_length":"40789","record_id":"<urn:uuid:dfa893ab-c81a-4dfa-94bb-5746853cff37>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00239.warc.gz"} |
Cluster Gaussian Mixture Data Using Soft Clustering
This example shows how to implement soft clustering on simulated data from a mixture of Gaussian distributions.
cluster estimates cluster membership posterior probabilities, and then assigns each point to the cluster corresponding to the maximum posterior probability. Soft clustering is an alternative
clustering method that allows some data points to belong to multiple clusters. To implement soft clustering:
1. Assign a cluster membership score to each data point that describes how similar each point is to each cluster's archetype. For a mixture of Gaussian distributions, the cluster archetype is
corresponding component mean, and the component can be the estimated cluster membership posterior probability.
2. Rank the points by their cluster membership score.
3. Inspect the scores and determine cluster memberships.
For algorithms that use posterior probabilities as scores, a data point is a member of the cluster corresponding to the maximum posterior probability. However, if there are other clusters with
corresponding posterior probabilities that are close to the maximum, then the data point can also be a member of those clusters. It is good practice to determine the threshold on scores that yield
multiple cluster memberships before clustering.
This example follows from Cluster Gaussian Mixture Data Using Hard Clustering.
Simulate data from a mixture of two bivariate Gaussian distributions.
rng(0,'twister') % For reproducibility
mu1 = [1 2];
sigma1 = [3 .2; .2 2];
mu2 = [-1 -2];
sigma2 = [2 0; 0 1];
X = [mvnrnd(mu1,sigma1,200); mvnrnd(mu2,sigma2,100)];
Fit a two-component Gaussian mixture model (GMM). Because there are two components, suppose that any data point with cluster membership posterior probabilities in the interval [0.4,0.6] can be a
member of both clusters.
gm = fitgmdist(X,2);
threshold = [0.4 0.6];
Estimate component-member posterior probabilities for all data points using the fitted GMM gm. These represent cluster membership scores.
For each cluster, rank the membership scores for all data points. For each cluster, plot each data points membership score with respect to its ranking relative to all other data points.
n = size(X,1);
[~,order] = sort(P(:,1));
legend({'Cluster 1', 'Cluster 2'})
ylabel('Cluster Membership Score')
xlabel('Point Ranking')
title('GMM with Full Unshared Covariances')
Although a clear separation is hard to see in a scatter plot of the data, plotting the membership scores indicates that the fitted distribution does a good job of separating the data into groups.
Plot the data and assign clusters by maximum posterior probability. Identify points that could be in either cluster.
idx = cluster(gm,X);
idxBoth = find(P(:,1)>=threshold(1) & P(:,1)<=threshold(2));
numInBoth = numel(idxBoth)
hold on
legend({'Cluster 1','Cluster 2','Both Clusters'},'Location','SouthEast')
title('Scatter Plot - GMM with Full Unshared Covariances')
hold off
Using the score threshold interval, seven data points can be in either cluster.
Soft clustering using a GMM is similar to fuzzy k-means clustering, which also assigns each point to each cluster with a membership score. The fuzzy k-means algorithm assumes that clusters are
roughly spherical in shape, and all of roughly equal size. This is comparable to a Gaussian mixture distribution with a single covariance matrix that is shared across all components, and is a
multiple of the identity matrix. In contrast, gmdistribution allows you to specify different covariance structures. The default is to estimate a separate, unconstrained covariance matrix for each
component. A more restricted option, closer to k-means, is to estimate a shared, diagonal covariance matrix.
Fit a GMM to the data, but specify that the components share the same, diagonal covariance matrix. This specification is similar to implementing fuzzy k-means clustering, but provides more
flexibility by allowing unequal variances for different variables.
gmSharedDiag = fitgmdist(X,2,'CovType','Diagonal', ...
Estimate component-member posterior probabilities for all data points using the fitted GMM gmSharedDiag. Estimate soft cluster assignments.
[idxSharedDiag,~,PSharedDiag] = cluster(gmSharedDiag,X);
idxBothSharedDiag = find(PSharedDiag(:,1)>=threshold(1) & ...
numInBoth = numel(idxBothSharedDiag)
Assuming shared, diagonal covariances among components, five data points could be in either cluster.
For each cluster:
1. Rank the membership scores for all data points.
2. Plot each data points membership score with respect to its ranking relative to all other data points.
[~,orderSharedDiag] = sort(PSharedDiag(:,1));
legend({'Cluster 1' 'Cluster 2'},'Location','NorthEast')
ylabel('Cluster Membership Score')
xlabel('Point Ranking')
title('GMM with Shared Diagonal Component Covariances')
Plot the data and identify the hard, clustering assignments from the GMM analysis assuming the shared, diagonal covariances among components. Also, identify those data points that could be in either
hold on
legend({'Cluster 1','Cluster 2','Both Clusters'},'Location','SouthEast')
title('Scatter Plot - GMM with Shared Diagonal Component Covariances')
hold off
See Also
fitgmdist | gmdistribution | cluster
Related Topics | {"url":"https://au.mathworks.com/help/stats/cluster-gaussian-mixture-data-using-soft-clustering.html","timestamp":"2024-11-10T04:54:44Z","content_type":"text/html","content_length":"81482","record_id":"<urn:uuid:092796d6-5c27-41cc-ab96-6ec116b4935e>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00316.warc.gz"} |
logic puzzles Archives | Fundamentals of Mathematics and Physics
Update: Scroll to the bottom of this post to see the solution to Smullyan’s logic puzzle discussed below. Raymond Smullyan has written many books. What is the Name of This Book?, published in 1978,
is a collection of logic puzzles and paradoxes that culminate in a development of Gödel‘s incompleteness theorem. The first page of Chapter … Read more | {"url":"https://fomap.org/tag/logic-puzzles/","timestamp":"2024-11-11T14:49:39Z","content_type":"text/html","content_length":"52757","record_id":"<urn:uuid:3b216e73-20f1-4cb5-a55f-02ba2f42dc3a>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00452.warc.gz"} |
Light Speed to Centimeters Per Minute
Light Speed to Centimeters Per Minute
Convert Centimeters Per Minute to Light Speed (cm/min to ls) ▶
Conversion Table
light speed to centimeters per minute
ls cm/min
1 ls 1798750000000 cm/min
2 ls 3597500000000 cm/min
3 ls 5396250000000 cm/min
4 ls 7195000000000 cm/min
5 ls 8993750000000 cm/min
6 ls 10792500000000 cm/min
7 ls 12591250000000 cm/min
8 ls 14390000000000 cm/min
9 ls 16188750000000 cm/min
10 ls 17987500000000 cm/min
11 ls 19786250000000 cm/min
12 ls 21585000000000 cm/min
13 ls 23383750000000 cm/min
14 ls 25182500000000 cm/min
15 ls 26981250000000 cm/min
16 ls 28780000000000 cm/min
17 ls 30578750000000 cm/min
18 ls 32377500000000 cm/min
19 ls 34176250000000 cm/min
20 ls 35975000000000 cm/min
How to convert
1 light speed (ls) = 1.79875E+12 centimeter per minute (cm/min). Light Speed (ls) is a unit of Speed used in Metric system. Centimeter Per Minute (cm/min) is a unit of Speed used in Metric system.
Light Speed
Definition of Light Speed
Light speed, commonly denoted c, is a universal physical constant that is exactly equal to 299,792,458 metres per second (approximately 300,000 kilometres per second; 186,000 miles per second; 671
million miles per hour). It is the speed at which light waves propagate through vacuum, and also the upper limit for the speed at which any form of matter or energy can travel through space. Light
speed is an essential parameter in the theories of relativity and electromagnetism, and has relevance beyond the context of light and electromagnetic waves.
How to Convert Light Speed
To convert light speed to other units of speed, we need to multiply or divide by the corresponding conversion factors. For example, to convert light speed to kilometers per hour, we need to multiply
by 3,600, since there are 3,600 seconds in one hour. To convert light speed to miles per hour, we need to multiply by 2.2369362920544, since there are 2.2369362920544 miles in one kilometer.
Here are some examples of how to convert light speed to other units of length in the US Standard system and the SI system:
• To convert c to kilometers per hour (km/h), we multiply by 3,600: c x 3,600 = 1,079,252,848.8 km/h
• To convert c to miles per hour (mph), we multiply by 2.2369362920544: c x 2.2369362920544 = 670,616,629.384 mph
• To convert c to feet per second (fps), we multiply by 3.2808398950131, since there are 3.2808398950131 feet in one meter: c x 3.2808398950131 = 983,571,056.43 fps
• To convert c to knots (kn), we multiply by 1.9438444924406, since there are 1.9438444924406 nautical miles in one kilometer: c x 1.9438444924406 = 582,749,918.284 kn
• To convert c to meters per second (m/s), we use the exact value: c = 299,792,458 m/s
• To convert c to meters per minute (m/min), we multiply by 60, since there are 60 seconds in one minute: c x 60 = 17,987,547,480 m/min
Where Light Speed Is Used
Light speed is used in various fields of science and technology where the properties and behavior of light and electromagnetic waves are studied or applied. For example:
• In astronomy and cosmology, light speed is used to measure astronomical distances and time scales, such as light-years and parsecs. It also determines the observable size and age of the universe
and the effects of gravity on light such as gravitational lensing and gravitational redshift.
• In physics and engineering, light speed is used to calculate the energy and momentum of particles and fields using the famous equation E = mc2. It also sets the limit for causality and
information transfer in physical systems.
• In communication and navigation, light speed is used to determine the delay and bandwidth of signals transmitted through various media such as optical fibers or radio waves. It also affects the
accuracy and precision of measurements based on time-of-flight or Doppler effect methods.
History of Light Speed
The concept of light speed has a long history that spans across different cultures and disciplines. Some of the milestones in its development are:
• In ancient times, many philosophers and scientists assumed that light traveled instantaneously or infinitely fast.
• In the late 17th century, Danish astronomer Ole Romer was the first to demonstrate that light had a finite speed by observing the apparent motion of Jupiter’s moon Io. He estimated that light
took about 22 minutes to cross the diameter of Earth’s orbit.
• In the early 18th century, English astronomer James Bradley discovered the aberration of starlight caused by Earth’s motion around the Sun. He used this phenomenon to calculate that light
traveled about 10 thousand times faster than Earth’s orbital speed.
• In the late 19th century, French physicist Hippolyte Fizeau and American physicist Albert Michelson conducted various experiments using rotating mirrors or interferometers to measure the speed of
light more accurately in air or vacuum.
• In the early 20th century, German-born physicist Albert Einstein proposed the special theory of relativity, which postulated that light speed was constant and independent of the motion of the
source or the observer. He also showed that light speed was the maximum speed for any form of matter or energy in the universe.
• In the late 20th century, various methods and standards were developed to define and measure light speed more precisely and consistently. In 1983, the International System of Units (SI) adopted
the exact value of 299,792,458 metres per second as the definition of light speed in vacuum.
Example Conversions of Light Speed to Other Units
Here are some examples of how to convert light speed to other units of speed, using the conversion factors given above:
• To convert c to kilometers per hour, we multiply by 3,600: c x 3,600 = 1,079,252,848.8 km/h
• To convert c to miles per hour, we multiply by 2.2369362920544: c x 2.2369362920544 = 670,616,629.384 mph
• To convert c to feet per second, we multiply by 3.2808398950131: c x 3.2808398950131 = 983,571,056.43 fps
• To convert c to knots, we multiply by 1.9438444924406: c x 1.9438444924406 = 582,749,918.284 kn
• To convert c to meters per second, we use the exact value: c = 299,792,458 m/s
• To convert c to meters per minute, we multiply by 60: c x 60 = 17,987,547,480 m/min
• To convert c to centimeters per second, we multiply by 100: c x 100 = 29,979,245,800 cm/s
Light speed also can be marked as c and speed of light.
Centimeters per minute: A unit of speed
Centimeters per minute (cm/min) is a unit of speed or velocity in the International System of Units (SI). It measures how fast an object is moving by calculating the distance traveled in centimeters
divided by the time taken in minutes. For example, if a snail travels 6 centimeters in 3 minutes, its speed is 2 cm/min.
How to convert centimeters per minute
Centimeters per minute can be converted to other units of speed or velocity by using simple conversion factors. Here are some common units and their conversion factors:
• Meters per second (m/s): To convert from cm/min to m/s, divide by 6000. To convert from m/s to cm/min, multiply by 6000. For example, 1 cm/min is equal to 0.000166667 m/s, and 10 m/s is equal to
60000 cm/min.
• Kilometers per hour (km/h): To convert from cm/min to km/h, divide by 100000. To convert from km/h to cm/min, multiply by 100000. For example, 1 cm/min is equal to 0.0006 km/h, and 50 km/h is
equal to 833333.333 cm/min.
• Miles per hour (mph): To convert from cm/min to mph, multiply by 0.000372823. To convert from mph to cm/min, divide by 0.000372823. For example, 1 cm/min is equal to 0.000372823 mph, and 30 mph
is equal to 80467.2 cm/min.
• Knots (kn): To convert from cm/min to kn, multiply by 0.000323974. To convert from kn to cm/min, divide by 0.000323974. For example, 1 cm/min is equal to 0.000323974 kn, and 15 kn is equal to
46296.296 cm/min.
• Feet per minute (ft/min): To convert from cm/min to ft/min, divide by 30.48. To convert from ft/min to cm/min, multiply by 30.48. For example, 1 cm/min is equal to 0.0328084 ft/min, and 10 ft/min
is equal to 304.8 cm/min.
• Inches per minute (in/min): To convert from cm/min to in/min, divide by 2.54. To convert from in/min to cm/min, multiply by 2.54. For example, 1 cm/min is equal to 0.393701 in/min, and 20 in/min
is equal to 50.8 cm/min.
Where centimeters per minute are used
Centimeters per minute are mainly used in science and engineering to measure the flow rate of fluids and gases.
For example, the standard cubic centimeter per minute (SCCM) is a unit used to quantify the flow rate of a fluid at standard conditions of temperature and pressure.
The flow rate of blood through a capillary is often measured in centimeters per minute.
The flow rate of air through a wind tunnel is often measured in centimeters per second, which is equivalent to centimeters per minute divided by 60.
Definition of centimeters per minute
According to the SI definition, one centimeter per minute is the speed of a body that covers a distance of one centimeter in a time of one minute.
Mathematically, it can be expressed as:
where v is the speed or velocity in centimeters per minute, s is the distance traveled in centimeters, and t is the time taken in minutes.
History of centimeters per minute
The concept of speed or velocity has been studied since ancient times by philosophers and scientists such as Aristotle, Galileo, Newton, etc.
The centimeter was originally derived from the French centimeter which was defined as one hundredth of a meter.
The meter was originally derived from the French meter which was defined as one ten-millionth of the distance from the equator to the North Pole.
The minute was originally derived from the Babylonian sexagesimal system which divided an hour into sixty minutes.
The combination of these two units resulted in the centimeter per minute as a unit of speed or velocity.
The centimeter per minute was officially adopted as part of the SI system in 1960.
Example conversions of centimeters per minute to other units
Here are some examples of converting centimeters per minute to other units of speed or velocity:
1 cm/min = 0.000166667 m/s = 0.0006 km/h = 0.000372823 mph = 0.000323974 kn = 0.0328084 ft/min = 0.393701 in/min
2 cm/min = 0.000333333 m/s = 0.0012 km/h = 0.000745645 mph = 0.000647948 kn = 0.0656168 ft/min = 0.787402 in/min
5 cm/min = 0.000833333 m/s = 0.003 km/h = 0.00186411 mph = 0.00161987 kn = 0.164042 ft/min = 1.9685 in/min
10 cm/min = 0.00166667 m/s = 0.006 km/h = 0.00372823 mph = 0.00323974 kn = 0.328084 ft/min = 3.93701 in/min
20 cm/min = 0.00333333 m/s = 0.012 km/h = 0.00745645 mph = 0.00647948 kn = 0.656168 ft/min = 7.87402 in/min
50 cm/min = 0.00833333 m/s = 0.03 km/h = 0.0186411 mph = 0.0161987 kn = 1.64042 ft/min = 19.685 in/min
Centimeters per minute also can be marked as cm/min and Centimetres per minute (alternative British English spelling in UK).
Español Russian Français
Related converters:
Light Speed to Kilometers Per Second
Light Speed to Knots
Light Speed to Kilometers Per Hour
Light Speed to Mach
Light Speed to Miles Per Second
Light Speed to Miles Per Hour
Light Speed to Meters Per Second
Centimeters Per Minute to Centimeters Per Second
Kilometers Per Second to Kilometers Per Hour
Knots to Kilometers Per Hour
Knots to Miles Per Hour
Kilometers Per Hour to Kilometers Per Second
Kilometers Per Hour to Knots
Kilometers Per Hour to Light Speed
Kilometers Per Hour to Mach
Kilometers Per Hour to Miles Per Second
Kilometers Per Hour to Miles Per Hour
Kilometers Per Hour to Meters Per Second
Light Speed to Kilometers Per Hour
Light Speed to Miles Per Hour
Mach to Kilometers Per Hour
Mach to Miles Per Second
Mach to Miles Per Hour
Miles Per Second to Kilometers Per Hour
Miles Per Second to Mach
Miles Per Hour to Knots
Miles Per Hour to Kilometers Per Hour
Miles Per Hour to Light Speed
Miles Per Hour to Mach
Miles Per Hour to Meters Per Second
Meters Per Second to Kilometers Per Hour
Meters Per Second to Miles Per Hour | {"url":"https://metric-calculator.com/convert-light-speed-to-centimeters-per-minute.htm","timestamp":"2024-11-06T05:40:57Z","content_type":"text/html","content_length":"32778","record_id":"<urn:uuid:cc42cd6f-bea7-4ac4-bca7-863b0149b9f0>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00111.warc.gz"} |
Present Value of Lump Sum Calculator
Present Value of Lump Sum Calculator is a tool that helps calculate the present value of lump sum based on the fixed interest rate per period
Present Value Of Lump Sum:
What does Lump Sum mean?
It refers to maturity amount of a present value lump sum investment, or a one-time investment, after a specified number of years.
PV = FV / (1 + r) ^ t
□ PV = present value of lump sum
□ FV = future value of lump sum
□ r = interest rate per period
□ t = number of compounding periods
A person wants to purchase a house in two years. He expects that he would need to have $20,000 at that time to use as a down payment. The certificate of deposit pays 5% per year.
In this case, we know that the future value is $20,000 and time frame (2 years) and the interest rate (5% per year). Therefore, apply the above formula:-
□ PV = 20,000 / (1.05) ^ 2 = 18,140.59
Jun 11, 2018
Tool Launched | {"url":"https://toolslick.com/finance/loan/present-value-of-lump-sum","timestamp":"2024-11-13T12:24:32Z","content_type":"text/html","content_length":"50733","record_id":"<urn:uuid:210761b5-ca65-4fd8-96ab-138f6c00c2d2>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00196.warc.gz"} |
RudKnow - RudKnow
For all of the proofs, click the link below for each proof and art piece:
Finding The Square Roots: Faces
Mathematics can be beautiful. The following proofing is expressed as an art piece for my latest EP: Teacher Training.
The subsequent geometric proof details how the artwork was created using the free geogebra app. While the color base is red, blue, and yellow; the shapes are derived from manipulating the values of
the radius of a circle along with the altering of the squared function. First, the proof will be explored. Then, the final picture will be shown.
I hope you enjoy...
For Math remediation and guidance, check out these short papers:
Interactive Proof:
Infinite Geometric Series: Concentric Circles and the Squared Function
This infinite geometric series are derived from geometric proofs applied to the coordinate plane. The main concept of this series is to find the intersect points of the parent functions: y=x, y=x^2,
y=x^(1/2), y=n^x, and y=1/x; and the circle, y^2+x^2=r; with the value of r as an element in the set of positive real numbers.
This first exploration involves the intersection of the functions y=x^2, y=-x^2, x=y^2, and x=-y^2; and y^2+x^2=r; with the value of r as an element in the set of positive real numbers.
Protocol for intersect points for a value of r.
1. Draw the unit circle: y^2+x^2=1
2. Draw a square with the intersect points of
the unit circle and the x and y axis.
3. Draw triangles at the intersect points of
the unit circle and the functions:
y=x^2, y=-x^2, x=y^2, and x=-y^2
This is a division of the unit circle based on the intersection of the functions y=x^2, y=-x^2, x=y^2, and x=-y^2; and the unit circle.
The series become an infinite series once the value of r is derived from the set of positive real numbers. At certain values of r, the intersect points between the functions and the unit circle
invert. For instance at r=2^(1/2), the intersection of the squared functions and the circle create an equivalent square to resultant square of the intersection of the x and y axis and the circle at r
=2^(1/2). This is due to the functions relationship with the points (1,1), (1,-1), (-1,1), and (-1,-1); which is where the squared functions themselves intersect. This reduces the number of intersect
points of the circle at r=2^(1/2) and the squared functions from 8 total to 4 total.
Protocol for intersect points for a value of r.
1. Draw the unit circle: y^2+x^2=2
2. Draw a square with the intersect points of
the unit circle and the x and y axis.
3. Draw triangles at the intersect points of
the unit circle and the functions:
y=x^2, y=-x^2, x=y^2, and x=-y^2
The circle, y^2+x^2=2, can act as the base of this particular series. Meaning, at r=2^(1/2) is the only point where the circle is divided evenly through the axis and squared functions
For each circle in the series, the intersection of the squared functions and respective circle can be elements of a sub set of the set of the intersect points and resultant divisions of the circle
when r is an element of the set of positive real numbers. The two sub groups are when r>2^(1/2) and when r<2^(1/2). The protocol for each individual proof occurs when an element is selected from the
set of positive real numbers. This selection of the element does not effect the iterations of the parent function, y=x^2.
How does this proof relate to the artwork?
If there are concentric circles created through selecting elements from the set of positive real numbers, then one can create the divisions of the circles based on the intersection at the altered
squared functions. This process creates the artwork below | {"url":"https://www.rudknow.org/math/rudknow","timestamp":"2024-11-04T07:19:35Z","content_type":"text/html","content_length":"115378","record_id":"<urn:uuid:61e2a28b-dfbe-4370-abda-14af615571b5>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00657.warc.gz"} |
Force Analysisof Heavy Duty Transmission Drum - The Home of Engineering and Sciences
Engineering Software
Force Analysisof Heavy Duty Transmission Drum
Force Analysisof Heavy Duty Transmission Drum
As one of the main components of the belt conveyor, the design of the transmission roller often uses the empirical formula method, and adopts a higher safety factor to ensure the reliability of the
roller. The disadvantage of this method is that the roller structure is too large, the mass is increased, and the cost is greatly increased. This paper uses Ansys software to analyze the roller
assembly model, find out the stress-strain distribution law, and analyze the stress-strain cloud map to facilitate the optimization of the roller.
1 3D modeling of roller assembly
The transmission roller consists of roller shell, spoke plate and wheel hub, sleeve and roller shaft. According to its load-bearing capacity, it can be divided into three categories: light roller,
medium roller and heavy roller. According to the surface structure of the roller, it can be divided into smooth roller and ceramic roller, etc. According to the function, it can be divided into
transmission roller, redirection roller, surface increase roller and unloading roller. The roller type in this paper is a heavy transmission roller. The spoke plate and the wheel hub are cast and
welded into one. The roller shaft and the wheel hub are connected by expansion sleeves, which can withstand greater loads and are easy to disassemble and assemble.
There are two ways to model the roller assembly. One is to create a model in 3D software and then import it into Ansys software through the interface between Ansys and 3D software; the other is to
model directly in Ansys. This paper adopts the latter modeling method. The roller shaft, expansion sleeve, spoke plate and wheel hub, roller shell, etc. are modeled from bottom to top in Ansys
software, as shown in Figure 1. In order to improve the calculation rate and accuracy, the following simplifications were performed when establishing the model:
1) The expansion sleeve was regarded as a unified solid body without considering the internal structure;
2) Some small features such as chamfers and fillets of each component were ignored;
3) The constraint of the bearing seat on the roller shaft was simplified to a simply supported beam form;
4) Minor components such as the bearing seat, screws used for pre-tightening the expansion sleeve, and screw holes were omitted.
Figure 1 3D model of transmission roller
The main parameters of the transmission roller are: roller diameter D = 1,250 mm, conveyor belt width B = 1,800 mm, roller length L = 2,000 mm, cylinder thickness t = 25 mm, shaft length 3,000 mm,
shaft diameter at the expansion sleeve is 400 mm, shaft diameter at the bearing is 360 mm, expansion sleeve type and size are ZT9300×375, and the wrap angle is 210°.
2 Finite element model of roller and definition of contact pairs
The roller solid model is meshed, different parts are assigned different unit types and unit properties, and different parts are meshed differently. In this paper, the surface of the shaft is firstly
meshed intelligently using Mesh 200, and then the Solid 185 unit is used to rotate to obtain the meshing of the shaft. The other parts are all meshed by sweeping, but the number of units for meshing
is set differently. Since the shaft is the main force-bearing component, the meshing is relatively fine. After the meshing is completed, contact pairs are established between the shaft and the
expansion sleeve, and between the expansion sleeve and the hub. The analysis of the shaft and the expansion sleeve, and between the expansion sleeve and the hub belongs to nonlinear analysis. This
paper takes this part into consideration to obtain more reliable results. The finite element model of the roller is shown in Figure 2, and the contact pairs established between the roller shaft and
the expansion sleeve are shown in Figure 3. The finite element analysis of the contact pairs between the expansion sleeve and the hub is similar.
Figure 2 Finite element model of transmission drum
Figure 3 Contact pairs established between the roller shaft and the expansion sleeve
3. Applying constraints
After dividing the mesh, the loads must be set before applying the loads to the finite element model. The loads in the finite element include boundary constraints, displacement constraints, and force
constraints. Generally, loads are divided into 6 categories: freedom constraints, force loads, surface loads, volume loads, inertia forces, and coupled field loads. This paper applies constraints at
the bearings to limit the axial (Z axis) and radial (X axis) degrees of freedom of movement of the shaft. At the same time, the shaft is also restricted by the coupling, and the degree of freedom of
rotation of the torque input end of the transmission roller shaft around the axis must be set.
4 Determination of load
The force analysis of the drive roller is shown in Figure 4. The drive roller is the main component for transmitting power. In order to transmit the necessary traction, there must be sufficient
friction between the conveyor belt and the roller. According to the Euler formula S in ≤ S out eμα
Where: Sin is the tension of the conveyor belt at the winding end, Sout is the tension of the conveyor belt at the winding end, e is the base of the natural logarithm, and μ is the friction factor
between the conveyor belt and the roller.
In the Euler formula, the ratio of Sin/Sout must be less than or equal to eμα. Less than means that the wrap angle is not fully utilized, so there must be a utilization arc α N of the wrap angle.
Figure 4 Force analysis of the driving roller
Using arcs to express Euler’s formula
The tension diagram of the conveyor belt along the arc expressed in polar coordinates is based on the spiral logarithm, as shown in Figure 5. For any φ <α N, the general expression is
The static arc represents a reserve of circumferential force, which is used to overcome the resistance and unestimated resistance that occurs during starting. Therefore, it can also be regarded as a
safety factor. The static arc generally occurs when the conveyor is stable. If S in and S out reach their maximum values, the static arc disappears, and the entire wrap angle is used for power
transmission. At this time, S in = S out eμα.
According to the friction drive theory, the winding end and the winding end of the roller follow the Euler formula. The wrap angle of the roller is 230°, the static arc is 30°, and the utilization
arc is 200°. In the static arc, there is no sliding between the conveyor belt and the roller, but there is static friction. The utilization arc is completely different. In this arc segment, the force
on the roller conforms to the Euler formula from the winding end to the winding end and gradually increases, that is, the tension of the conveyor belt changes along the circumferential direction on
the roller surface, which conforms to the Euler formula. At the same time, it is also subject to the friction force tangent to the roller surface, which
is simulated by the surface unit. According to theoretical analysis, the force on the axial roller surface is not constant, but distributed in a semi-sine function. In order to simplify the
calculation, this paper assumes that the force on the roller along the axial direction is constant, which has little effect on the results.
Figure 5 Load application
5 Solving and post-processing analysis of the stress condition of the transmission drum
After completing the previous series of work, the drum is solved and post-processed in the Ansys solution and post-processing module, and the stress distribution diagram of the drum shell and the
deformation diagram of the drum shell are obtained as shown in Figures 6 and 7. In theory, after the transmission drum is subjected to the force of the conveyor belt, the main stress-bearing parts
are the shaft and bearing, the expansion sleeve and the hub, and the contact part between the spoke plate and the inner wall of the drum. The solution of Ansys can list the stress components,
principal stresses, displacements, etc. of the unit nodes, and can also use other methods to display displacements and stresses. These can describe the distribution of the transmission drum model as
a whole and determine which part is subjected to the greatest stress and which part is the most dangerous.
Figure 6 Stress cloud diagram
Figure 7 Strain cloud diagram
The analysis of the heavy-duty transmission roller in this paper takes into account the nonlinear analysis of the roller shaft and the expansion sleeve, as well as the nonlinear analysis of the hub
and the expansion sleeve. Through the analysis of the stress deformation cloud map, it can be seen that the maximum displacement coordinates are (300.86, 412.89, 1 688.91), which appears in the
middle of the transmission roller shell. The maximum value is 0.342 3. From the stress distribution diagram of the transmission roller, it can be seen that the actual force of the roller is almost
the same as the force of the theoretical analysis, that is, they all appear at the contact point between the shaft and the bearing, with coordinates of (136.08, 25.430,
880.72), and the maximum stress is 43.23 MPa. According to the strength theory, the shaft is made of 45# steel, and the allowable strength can reach 65 MPa after quenching and tempering. The
transmission roller meets the strength requirements, and the allowable strength of the shaft is much greater than the actual stress of the shaft, so there is still a lot of room for optimization.
6 Conclusion
1) Use Ansys software to better judge the stress condition of the transmission roller, view the results from the deformation diagram and stress diagram, and provide a better basis for optimization.
2) Analyze the deformation diagram and stress distribution diagram of the roller, and obtain the actual stress and deformation of the roller. The roller has a small deformation and its strength is
much smaller than the actual bearing strength of the shaft. It can be seen that the size and quality of the roller can also be optimized.
3) This article simplifies the modeling process. The actual roller stress condition may be more complicated. Due to limited conditions, the actual roller modeling analysis was not performed. | {"url":"https://www.electromds.com/engineering-software/force-analysisof-heavy-duty-transmission-drum/","timestamp":"2024-11-13T19:31:59Z","content_type":"text/html","content_length":"142693","record_id":"<urn:uuid:c98d493a-b405-446d-916e-6fff55215b53>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00134.warc.gz"} |
Finite Automata
Behavior and Synthesis
• 1st Edition - February 12, 1984
• Paperback ISBN:
9 7 8 - 1 - 4 9 3 3 - 0 5 0 8 - 7
• eBook ISBN:
9 7 8 - 1 - 4 8 3 2 - 9 7 2 9 - 3
This dictionary supplies associations which have been evoked by certain words, signs, etc. in Western civilization in the past, and which may float to the surface again tomorrow;… Read more
Save 50% on book bundles
Immediately download your ebook while waiting for your print delivery. No promo code needed.
This dictionary supplies associations which have been evoked by certain words, signs, etc. in Western civilization in the past, and which may float to the surface again tomorrow; for however
'daringly new' a modern use of imagery may look, it generally appears to have roots in what has been said and done in the past. No fine distinctions have been made between symbols (in the limited
sense), allegories, metaphors, signs, types, images, etc. (not to mention 'ascending' and 'descending' symbols), since such subtle distinctions, however sensible from a scientific point of view, are
useless to a person struggling with the deeper comprehension (and thus appreciation) of a particular 'symbol'.
Chapter 0. Introduction
0.1. The Concept of an Automaton
0.2. Types of Automata
0.3. Automata and Graphs
0.4. Terminological Clarifications
0.5. Survey of the Contents of Chapters I to V
Chapter I. Behavior of Outputless Automata
I.1. Representation of Languages and ω-Languages in Automata
I.2. Interchangeability
I.3. Distinguishability of Words and ω-Words
I.4. Decidability of Properties of Finite Automata
I.5. Projections, Sources, Macrosources
I.6. Operations on Sources (Macrosources) and on the Languages (ω-Languages) Represented by them
I.7. Determinization of Sources. Operations Preserving Representability of Languages in Finite Automata
I.8. Determinization of Macrosources. Operations Preserving Representability of ω-Languages in Finite Automata
I.9. Proof of the Concatenation Theorem (Theorem 1.11)
I.10. Proof of the Strong Iteration Theorem (Theorem 1.12)
I.11. Probabilistic Automata
I.12. Grammars and Automata
Supplementary Material, Problems, Examples
Chapter II. Behavior of Automata with Output
II.1. Anticipation
II.2. Memory (Weight)
II.3. Equivalent Automata
II.4. Comparison of the Weight of an Operator with the Weight of an Automaton Realizing it
II.5. Representation of Languages (ω-Languages) and Realization of Operators. The Uniformization Problem
II.6. More About Decision Problems of Finite Automata
II.7. Games, Strategies and Nonanticipatory Operators
II.8. Game-Theoretic Interpretation of the Uniformization Problem
II.9. Proof of the Fundamental Theorem on Finite-State Games—Intuitive Outline
II.10. Proof of the Fundamental Theorem on Finite-State Games
II.11. Spectra of Accessibility and Distinguishabihty
II.12. Spectra of Operators and of Automata Defining them
II.13. Parameters of a Finite Automaton and its Behavior
Supplementary Material, Problems
Chapter III. Metalanguages
III.1. Preliminary Examples and Problems
III.2. Discussion of the Examples. Statement of the Problem
III.3. The Metalanguages of Sources (Macrosources), Trees and Grammars
III.4. The Metalanguage of Regular Expressions
III.5. The Metalanguage of ω-Regular Expressions
III.6. The Logical Metalanguage I
III.7. Expressive Power of the Logical Metalanguage I
III.8. Normal Form
III.9. Synthesis of an Automaton Representing the ω-Language Defined by an I-Formula
III.10. Synthesis of an Automaton According to Conditions Imposed on an Operator or a Language
III.11. Cases without a Synthesis Algorithm
Supplementary Material, Problems
Chapter IV. Automaton Identification
IV.1. Introduction
IV.2. Identification of Relative Black Boxes
IV.3. Frequency Criteria. Complexity of Identification of Almost All Relative Black Boxes
IV.4. General Remarks on Identification of Absolute Black Boxes
IV.5. Iterative Algorithms
IV.6. Identification of Absolute Black Boxes by Multiple Algorithms, with Arbitrary Preassigned Frequency
IV.7. Bound on the Complexity of Uniform Identification
IV.8. Bound on the Complexity of (Nonuniform) Identification. Statement of the Fundamental Results
IV.9. Proof of Theorem 4.8
IV.10. Identification of Absolute Black Boxes by Simple Algorithms with Arbitrary Preassigned Frequency
IV.11. Bounds on the Complexity of (Nonuniform) Identification by Simple Algorithms
Supplementary Material, Problems
Chapter V. Statistical Estimates for Parameters and Spectra of Automata
V.1. Uniform Statistical Estimate of Degree of distinguishability
V.2. Uniform Statistical Estimate of the Saturation Spectrum
V.3. Stochastic Procedure Generating Automaton Graphs
V.4. Statistical Estimate of the Accessibility Spectrum for Automaton Graphs
V.5. Statistical Estimate of the Diameter. Statement of the Fundamental Result
V.6. Auxiliary Propositions from Probability Theory
V.7. Proof of the Fundamental Lemma
V.8. Statistical Estimate from Below for the Height of Automaton Graphs
V.9. Statistical Estimate for Accessibility Spectrum, Degree of Accessibility and Degree of Reconstructibility of Automata
Subject Index
• Published: February 12, 1984
• Imprint: Elsevier Science
• Paperback ISBN: 9781493305087
• eBook ISBN: 9781483297293 | {"url":"https://shop.elsevier.com/books/finite-automata/de-vries/978-0-7204-8021-4","timestamp":"2024-11-04T21:05:13Z","content_type":"text/html","content_length":"182583","record_id":"<urn:uuid:8af00eea-b372-49d9-885c-b99fcf4d3f9f>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00361.warc.gz"} |
loop group
stub for loop group
Two days back I had started adding some details to loop group. My plan had been to write out essentially all the statements from Segal’s talk Loop groups. But then I got interrupted before I had
gotten very far and now I am looking into something else. Maybe later. | {"url":"https://nforum.ncatlab.org/discussion/2768/","timestamp":"2024-11-06T20:28:50Z","content_type":"application/xhtml+xml","content_length":"38472","record_id":"<urn:uuid:9421e2f5-39c9-4d01-9273-358db9a2a3b7>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00384.warc.gz"} |
Programming Math is about combining the creativity of people and the computational power of programming to solve math problems. We explore a wide range of math topics with the help of JavaScript, the
lingua franca of the web.
James Taylor is a math PhD with a quantum mechanics specialty who is fascinated with programming. He teaches mathematics at Johns Hopkins University.
Thomas Park is graduate student of human-computer interaction, researching how people learn basic web development and ways of supporting them. He is motivated by the idea that regular people can use
programming to improve their lives.
This site benefits from the amazing work of other projects. MathJax is used to display math and SyntaxHighlighter to display code. jsFiddle is used as an environment for experimenting with | {"url":"http://programmingmath.com/about/","timestamp":"2024-11-12T19:22:14Z","content_type":"application/xhtml+xml","content_length":"12144","record_id":"<urn:uuid:85606b4d-6fbf-4bae-a604-22ec5555a779>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00570.warc.gz"} |
In testing for correlation of the errors in regression models, the power of tests can be very low for strongly correlated errors. This counterintuitive phenomenon has become known as the “zero-power
trap.” Despite a considerable amount of literature devoted to this problem, mainly focusing on its detection, a convincing solution has not yet been found. In this article, we first discuss
theoretical results concerning the occurrence of the zero-power trap phenomenon. Then, we suggest and compare three ways to avoid it. Given an initial test that suffers from the zero-power trap, the
method we recommend for practice leads to a modified test whose power converges to 1 as the correlation gets very strong. Furthermore, the modified test has approximately the same power function as
the initial test and thus approximately preserves all of its optimality properties. We also provide some numerical illustrations in the context of testing for network generated correlation.
Bibliographical note
Publisher Copyright:
© The Author(s), 2021. Published by Cambridge University Press. | {"url":"https://research.wu.ac.at/en/publications/how-to-avoid-the-zero-power-trap-in-testing-for-correlation","timestamp":"2024-11-10T07:52:11Z","content_type":"text/html","content_length":"38524","record_id":"<urn:uuid:c39b0222-4a30-49f1-8889-bbbcce8e34ea>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00554.warc.gz"} |
Tangency and Discriminants
Published on Jul 19, 2019141 Views
The discriminant of a polynomial with prescribed monomials is an irreducible polynomial in the coefficients vanishing when the corresponding polynomial has multiple roots. It is an important tool in
Tangency and Discriminants00:00
Natural concept02:27
The discriminants of univariate polynomials04:13
Algebra vs Geometry06:12
The definition of discriminant - 109:52
The definition of discriminant - 212:03
Example 114:31
Geometry - 115:42
Geometry - 219:47
Projective duality - 120:58
Projective duality - 223:54
Polar geometry:the degree and dimension of the discriminant27:07
Toric projective duality = A-discriminants30:00
Can a discriminant govern multiple roots of systems of polynomials?35:12
Example - 137:11
Tangential intersections37:45
The mixed discriminant39:09
Example - 240:36
One more example: The distance to a variety41:59
Consider now a plane curve43:12
This proves:44:34
Singular intersection of Quadric Surfaces46:53
Two Main Questions47:38
Towards an answer to question 148:24
Answer to question 152:36
Answer to question 254:31
Thank you55:26 | {"url":"https://videolectures.net/videos/FPSAC2019_di_rocco_tangency_and_discriminants","timestamp":"2024-11-12T04:04:13Z","content_type":"text/html","content_length":"129740","record_id":"<urn:uuid:b050c59e-c667-480f-9152-5ab2c5bae789>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00602.warc.gz"} |
How to Find the Intersection Of Two Lists In Haskell?
To find the intersection of two lists in Haskell, you can use the intersect function from the Data.List module. This function takes two lists as arguments and returns a new list containing only the
elements that are present in both input lists. Additionally, you can also use list comprehensions or manually iterate through the lists to find the intersection.
What is the difference between finding the intersection and the union of two lists in Haskell?
In Haskell, finding the intersection of two lists means finding the elements that are common to both lists. The intersect function from the Data.List module can be used for this purpose.
1 intersect [1,2,3,4] [3,4,5,6] ==> [3,4]
On the other hand, finding the union of two lists means combining all the elements from both lists while removing duplicates. The union function from the Data.List module can be used for this
1 union [1,2,3,4] [3,4,5,6] ==> [1,2,3,4,5,6]
How to find the intersection of two lists in Haskell using a fold function?
To find the intersection of two lists in Haskell using a fold function, you can apply the following steps:
1. Define a function that takes two lists as input and returns their intersection. This function will use a fold function to iterate over one of the lists and check if each element is present in the
other list.
2. Use the foldr function to iterate over one list, checking if each element is present in the other list using the elem function.
3. Use a helper function within the fold function to accumulate the intersection of the two lists.
4. Return the accumulated list as the final result.
Here is an example implementation of finding the intersection of two lists in Haskell using the foldr function:
1 intersection :: Eq a => [a] -> [a] -> [a]
2 intersection xs ys = foldr (\x acc -> if x `elem` ys then x:acc else acc) [] xs
4 main = do
5 print $ intersection [1,2,3,4,5] [3,4,5,6,7] -- Output: [3,4,5]
In this example, the intersection function takes two lists xs and ys as input and uses foldr to iterate over the list xs. For each element x in xs, it checks if it is present in the list ys using the
elem function. If the element x is found in ys, it is accumulated in the result list acc. The final result is the intersection of the two input lists.
What is the implication of finding the common elements of two lists in Haskell?
Finding the common elements of two lists in Haskell implies that you are identifying the elements that appear in both lists at least once. This can be useful in a variety of scenarios, such as
comparing two sets of data, removing duplicates, or determining shared values between different data structures. The common elements can be extracted and used for further processing or analysis,
depending on the specific requirements of the problem at hand.
How to use the Set data type to find the intersection of two lists in Haskell?
To find the intersection of two lists using the Set data type in Haskell, you can follow these steps:
1. Import the necessary module for using sets in Haskell:
1 import qualified Data.Set as Set
1. Define the two lists that you want to find the intersection of:
1 list1 = [1, 2, 3, 4, 5]
2 list2 = [3, 4, 5, 6, 7]
1. Convert the lists to sets using the fromList function:
1 set1 = Set.fromList list1
2 set2 = Set.fromList list2
1. Find the intersection of the two sets using the intersection function:
1 intersectionSet = Set.intersection set1 set2
1. Convert the intersection set back to a list using the toList function:
1 intersectionList = Set.toList intersectionSet
1. Print the intersection list:
Putting it all together, your Haskell code to find the intersection of two lists using the Set data type would look like this:
1 import qualified Data.Set as Set
3 list1 = [1, 2, 3, 4, 5]
4 list2 = [3, 4, 5, 6, 7]
6 set1 = Set.fromList list1
7 set2 = Set.fromList list2
9 intersectionSet = Set.intersection set1 set2
11 intersectionList = Set.toList intersectionSet
13 print intersectionList
What is the relationship between finding the intersection of two lists in Haskell and set theory?
In Haskell, finding the intersection of two lists can be done using the intersect function from the Data.List module. This function takes two lists as input and returns a new list containing the
elements that are present in both input lists.
In set theory, the intersection of two sets is a new set that contains only the elements that are common to both input sets. The intersection of sets is a fundamental concept in set theory and is
denoted by the symbol ∩.
Therefore, the relationship between finding the intersection of two lists in Haskell and set theory is that they both involve finding the common elements between two collections of items. The
intersect function in Haskell operates in a similar way to the intersection operation in set theory, by determining which elements are present in both input collections and returning them as the
What is the most efficient way to find the intersection of two lists in Haskell?
The most efficient way to find the intersection of two lists in Haskell is to use the built-in intersect function from the Data.List module. This function takes two lists as input and returns a new
list containing only the elements that are present in both input lists. The intersect function has a time complexity of O(n*m), where n and m are the lengths of the two input lists. This is the most
efficient way to find the intersection of two lists in Haskell because it is implemented in a way that optimizes performance and reduces unnecessary computations. | {"url":"https://topminisite.com/blog/how-to-find-the-intersection-of-two-lists-in","timestamp":"2024-11-03T23:06:39Z","content_type":"text/html","content_length":"430857","record_id":"<urn:uuid:18c54f07-8ce8-4ac3-847e-b07777b38a1e>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00311.warc.gz"} |