content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Loop over matrix elements
So far, you have been looping over 1 dimensional data types. If you want to loop over elements in a matrix (columns and rows), then you will have to use nested loops. You will use this idea to print
out the correlations between three stocks.
The easiest way to think about this is that you are going to start on row1, and move to the right, hitting col1, col2, …, up until the last column in row1. Then, you move down to row2 and repeat the
[,1] [,2]
[1,] "r1c1" "r1c2"
[2,] "r2c1" "r2c2"
# Loop over my_matrix
for(row in 1:nrow(my_matrix)) {
for(col in 1:ncol(my_matrix)) {
print(my_matrix[row, col])
[1] "r1c1"
[1] "r1c2"
[1] "r2c1"
[1] "r2c2"
The correlation matrix, corr, is available for you to use.
This is a part of the course
“Intermediate R for Finance”
View Course
Exercise instructions
• Print corr to get a peek at the data.
• Fill in the nested for loop! It should satisfy the following:
□ The outer loop should be over the rows of corr.
□ The inner loop should be over the cols of corr.
□ The print statement should print the names of the current column and row, and also print their correlation.
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
# Print out corr
# Create a nested loop
for(row in 1:nrow(___)) {
for(col in 1:___(corr)) {
print(paste(colnames(corr)[___], "and", rownames(corr)[___],
"have a correlation of", corr[row,col]))
This exercise is part of the course
Intermediate R for Finance
Learn about how dates work in R, and explore the world of if statements, loops, and functions using financial examples.
What is DataCamp?
Learn the data skills you need online at your own pace—from non-coding essentials to data science and machine learning. | {"url":"https://campus.datacamp.com/courses/intermediate-r-for-finance/loops-3?ex=11","timestamp":"2024-11-05T17:05:07Z","content_type":"text/html","content_length":"153548","record_id":"<urn:uuid:871e6a93-675a-4149-be6e-ce51005d64ba>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00489.warc.gz"} |
The hadronic running of the electroweak couplings from lattice QCD
The energy dependency (running) of the strength of electromagnetic interactions $\alpha$ plays an important role in precision tests of the Standard Model. The running of the former to the $Z$ pole is
an input quantity for global electroweak fits, while the running of the mixing angle is susceptible to the effects of Beyond Standard Model physics, particularly at low energies.
We present a computation of the hadronic vacuum polarization (HVP) contribution to the running of these electroweak couplings at the non-perturbative level in lattice QCD, in the space-like regime up
to $Q^2$ momentum transfers of $7\,\mathrm{GeV}^2$. This quantity is also closely related to the HVP contribution to the muon $g-2$.
We observe a tension of up to $3.5$ standard deviation between our lattice results for $\Delta\alpha^{(5)}_{\mathrm{had}}(-Q^2)$ and estimates based on the $R$-ratio for $Q^2$ in the $3$ to $7\,\
mathrm{GeV}^2$ range. The tension is, however, strongly diminished when translating our result to the $Z$ pole, by employing the Euclidean split technique and perturbative QCD, which yields $\Delta\
alpha^{(5)}_{\mathrm{had}}(M_Z^2)=0.027\,73(15)$. This value agrees with results based on the $R$-ratio within the quoted uncertainties, and can be used as an alternative to the latter in global
electroweak fits. | {"url":"https://pos.sissa.it/414/823/","timestamp":"2024-11-04T15:21:57Z","content_type":"text/html","content_length":"12793","record_id":"<urn:uuid:99a5212e-8b14-4e00-a9c1-9b44cde33a8c>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00089.warc.gz"} |
What is the unit vector that is orthogonal to the plane containing ( - 5 i + 4 j - 5 k) and (4 i + 4 j + 2 k) ? | Socratic
What is the unit vector that is orthogonal to the plane containing # ( - 5 i + 4 j - 5 k) # and # (4 i + 4 j + 2 k) #?
1 Answer
There are two steps: (1) find the cross product of the vectors, (2) normalise the resultant vector. In this case, the answer is:
$\left(\frac{28}{46.7} i - \frac{10}{46.7} j - \frac{36}{46.7} k\right)$
The cross product of two vectors yields a vector that is orthogonal (at right angles) to both.
The cross product of two vectors #(a#i$+ b$j$+ c$k#)# and #(p#i$+ q$j$+ r$k#)# is given by $\left(b \cdot r - c \cdot q\right) i + \left(c \cdot p - a \cdot r\right) j + \left(a \cdot q - b \cdot p\
right) k$
First step is to find the cross product:
#(−5i+4j−5k) xx (4i+4j+2k) = ((4*2)-(4*-5)i + ((-5*4)-(-5*2))j + ((-5*4)-(4*4))k =((8-(-20))i+(-20-(-10)j+((-20)-16)k)=(28i-10j-36k)#
This vector is orthogonal to both the original vectors, but it is not a unit vector. To make it a unit vector we need to normalise it: divide each of its components by the length of the vector.
$l = \sqrt{{28}^{2} + {\left(- 10\right)}^{2} + {\left(- 36\right)}^{2}} = 46.7$ units
The unit vector orthogonal to the original vectors is:
$\left(\frac{28}{46.7} i - \frac{10}{46.7} j - \frac{36}{46.7} k\right)$
This is one unit vector that is orthogonal to both the original vectors, but there is another - the one in the exact opposite direction. Simply changing the sign of each of the components yields a
second vector orthogonal to the original vectors.
$\left(- \frac{28}{46.7} i + \frac{10}{46.7} j + \frac{36}{46.7} k\right)$
(but it's the first vector that you should offer as the answer on a test or assignment!)
Impact of this question
2035 views around the world | {"url":"https://socratic.org/questions/what-is-the-unit-vector-that-is-orthogonal-to-the-plane-containing-5-i-4-j-5-k-a-1#216194","timestamp":"2024-11-06T23:45:41Z","content_type":"text/html","content_length":"35950","record_id":"<urn:uuid:7fe3df06-6fdf-4b75-91cd-639bd2e1a06d>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00194.warc.gz"} |
The Which-Way Experiment and the Conditional Wavefunction
In a which-way experiment, a beam of quantum point-like particles, emitted by a narrow source of width , is divided into two partial beams by a double slit. A beam is represented mathematically by a
two-dimensional Gaussian wave packet (with negligible dispersion), constructed so that the beams can interfere. If no position-measuring device is placed in the apparatus, you can observe an interfer
ence phenomenon in region I (the gray area), which could be described mathematically as a superposition of the wavefunctions from the two arms. Suppose next that two detectors are added to the setup
just behind slits 1 and 2 to register the passage of a particle.
The wave always goes through both slits and the particle goes through only one. The part of the wave that is not associated with a particle is called the empty wave. The particle is guided by the wav
e (via the quantum potential/velocity from the wavefunction) toward places where the wave density is large and away from places where the density is small.
In the absence of the detectors and , a particle passing through slit 1 (on the left) falls finally on counter , not on , and vice versa. Particles with identical energies and width never cross the a
xis of symmetry. If the pulsed particle source is so feeble that the particles have different energies, then, of course, the trajectory can cross the axis of symmetry, which is allowed for an asymmet
ric wavefunction.
If the detectors are inserted, the experimental data shows for a strong measurement ( or ) that the particle traverses the left arm, registered by detector and hits the counter ; while the particle i
n the right arm, registered by detector hits the counter , and the trajectories do cross. In the case of strong position measuring, the interference fringes disappear. In a complete description of th
e measuring process, the detectors must also be described by a wavefunction.
Bell [1] stated clearly (twelve years before the discussion about surrealistic Bohm trajectories [2, 3] began) that the naive classical picture does not hold for an isolated quantum system without me
asurement. The particle, arriving at a given counter, goes through the wrong slit. Reference [2] gives a slightly different approach, but both agree that the position measurement is the key point for
understanding the which-way experiment.
In the causal interpretation (CI), there is a nonlocal interaction between the detectors and the whole system, which destroys the superposition of the wavefunctions in the presence of the detectors.
For an entangled wavefunction, meaning that the complete overlapping wavefunction cannot be represented as a product state in terms of independent variables, a nonlocal correlation of the particle tr
ajectories appears. In the CI approach the condition for nonlocality is nonfactorizability.
In the Copenhagen interpretation, the measurement collapses the wavefunction. In the CI approach, the position measurement affects the total wavefunction, but in principle it is possible to do a suff
iciently subtle path-determining measurement without destroying the interference pattern, which was confirmed by [8] in 2011.
One possible way to solve the problem is to describe the interaction between the measurement device and the isolated quantum system with the conditional wavefunction (cwf) [7, 9]. Here the cwf is the
system under observation, a superposition of two entangled wavefunctions, in complete agreement with the Schrödinger equation. One of the wavefunctions defines the measurement device, which here depe
nds only on the time-independent variable (the pointer position), and the other wavefunction defines the isolated quantum system.
In the CI approach, the position-measuring process decomposes the superposition state into a single state, which corresponds to the naive perception of reality, via a nonlocal interaction between the
measuring devices triggered by the cwf. So every position registration of a particle, which obeys the time-dependent Schrödinger equation and which is in a superposition of states, is a nonlocal proc
ess. In the measuring process, the position of the particle in the direction influences the amplitudes of the detectors in the direction. As a result, measuring the particle's position reduces or can
cels out the amplitude of the empty part of the wavefunction.
The graphic shows the possible trajectory, the velocity vector field (red), and the wave density. | {"url":"https://www.wolframcloud.com/objects/demonstrations/TheWhichWayExperimentAndTheConditionalWavefunction-source.nb","timestamp":"2024-11-02T15:26:19Z","content_type":"text/html","content_length":"463018","record_id":"<urn:uuid:1b6aa734-5e8b-4daf-a695-e92a91f630eb>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00019.warc.gz"} |
Meadows and the equational specification of division
The rational, real and complex numbers with their standard operations, including division, are partial algebras specified by the axiomatic concept of a field. Since the class of fields cannot be
defined by equations, the theory of equational specifications of data types cannot use field theory in applications to number systems based upon rational, real and complex numbers. We study a new
axiomatic concept for number systems with division that uses only equations: a meadow is a commutative ring with a total inverse operator satisfying two equations which imply 0^-1=0. All fields and
products of fields can be viewed as meadows. After reviewing alternate axioms for inverse, we start the development of a theory of meadows. We give a general representation theorem for meadows and
find, as a corollary, that the conditional equational theory of meadows coincides with the conditional equational theory of zero totalized fields. We also prove representation results for meadows of
finite characteristic.
• Division-by-zero
• Equational specifications
• Field
• Finite fields
• Finite meadows
• Initial algebras
• Meadow
• Representation theorems
• Total versus partial functions
• Totalized fields
• von Neumann regular ring
Dive into the research topics of 'Meadows and the equational specification of division'. Together they form a unique fingerprint. | {"url":"https://cris.tau.ac.il/en/publications/meadows-and-the-equational-specification-of-division","timestamp":"2024-11-11T16:57:32Z","content_type":"text/html","content_length":"50641","record_id":"<urn:uuid:472a92dc-8299-4677-806c-2361f49572df>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00727.warc.gz"} |
What Is The Lowest Common Multiple: Explained For Primary School
In this post we will be answering the question “what is the lowest common multiple?” and providing you with all of the information you need to help your students understand this area of maths.
We’ve also got some questions based around the lowest common multiple that your child can complete, all to help them (and you) master maths fast!
What is a multiple in maths?
What is a multiple? A multiple is a number that can be divided without any remainder.
Sometimes it helps children to think of it as a number in another number’s times table – for example, 24 is a multiple of 12; it is also a multiple of 1, 2, 3, 4, 6, 8, and 24. The first five
multiples of 6 are 6, 12, 18, 24 and 30.
Multiples and factors link together – for example, 4 is a factor of 12 and 12 is a multiple of 4.
Children often confuse multiples with factors, so it is important they learn about the difference between factors and multiples.
See also: Divisibility rules
What is a common multiple in maths?
A common multiple is a multiple that is shared by two or more numbers.
12 is a common multiple of 6 and 4 as it’s in both the 6 and 4 times tables.
Three common multiples of 6 and 9 are 18, 36 and 54.
Students are asked to identify common multiples in the Third Space Learning online one-to-one tutoring slide below. Each tutoring programme is specially designed to meet the needs of the individual
student, plugging learning gaps and building confidence in maths.
Example of a Third Space Learning online lesson slide on identifying multiples
Third Space Learning offers a wide range of free maths resources, including our collection of multiplication worksheets.
What is the lowest common multiple?
The lowest common multiple is the lowest multiple shared by two or more numbers.
For example, common multiples of 4 and 6 are 12, 24 and 36, but the lowest of those is 12; therefore, the lowest common multiple of 4 and 6 is 12.
How to find the lowest common multiple
One way of helping children to find the lowest common multiple is to ask them to list the multiples of each number until they come across the first one each number shares.
For example, the LCM of 5 and 7 is 35:
FREE Factors, Multiples, Square & Cube Numbers Pack
Download this resource pack aimed at helping pupils identify number properties and relationships in advance of SATs. It includes teaching guidance, pupil practice sheets and activity slides.
Download Free Now!
When will my child learn about lowest common multiples?
Children are introduced to multiples in Year 1 (perhaps without knowing the actual term) when they will count in multiples of twos, fives and tens, as part of their learning of number bonds. In Year
2, the non-statutory guidance suggests that children count in multiples of three to support their later understanding of a third.
In Year 3, children count from 0 in multiples of 4, 8, 50 and 100. The non-statutory guidance suggests that children use multiples of 2, 3, 4, 5, 8, 10, 50 and 100.
In Year 4, children count in multiples of 6, 7, 9, 25 and 1000. The non-statutory guidance suggests that pupils use factors and multiples to recognise equivalent fractions and simplify where
appropriate (for example, 6/9 = 2/3 or 1/4 = 2/8).
The National Curriculum states that Year 5 pupils should be taught to identify multiples and solve problems involving multiplication and division including using their knowledge of multiples.
Common multiples are not introduced until Year 6. Year 6 pupils are expected to use common multiples to express fractions in the same denomination and to solve problems involving unequal sharing and
grouping using knowledge of fractions and multiples.
How do lowest common multiples relate to other areas of maths?
Lowest common multiples are useful when needing to express fractions in the same denomination (required when going through the process of how to add fractions and how to subtract fractions, ordering
or comparing fractions). For example, to calculate 3/5 + 1/6, we’d need to find the common denominator by calculating the lowest common multiple of 5 and 6 (30). We can then convert the fractions
to 18/30 + 5/30 = 23/30.
Wondering about how to explain other key maths vocabulary to your children? Check out our Primary Maths Dictionary, or try these primary maths terms:
Lowest common multiple practice questions
1) What is the lowest common multiple of 8 and 10?
2) Write all the common multiples of 3 and 8 that are less than 50.
3) What is the lowest common multiple of 100 and 50?
4) Write all the common multiples of 4 and 6 that are less than 60.
5) What is the lowest common multiple of 1000 and 650?
Every week Third Space Learning’s maths specialist tutors support thousands of students across hundreds of schools with weekly online maths tuition designed to plug gaps and boost progress.
Since 2013 these personalised one to one lessons have helped over 169,000 primary and secondary students become more confident, able mathematicians.
Learn about the scaffolded lesson content or request a personalised quote for your school to speak to us about your school’s needs and how we can help. | {"url":"https://thirdspacelearning.com/blog/lowest-common-multiple-explained/","timestamp":"2024-11-13T18:05:52Z","content_type":"text/html","content_length":"138301","record_id":"<urn:uuid:bec968d0-24d4-4afb-a8d5-2b5da701dc66>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00513.warc.gz"} |
Naive Bayes Classifier
Naive Bayes Classifier is a simple and intuitive method for the classification. The algorithm is based on Bayes’ Theorem with two assumptions on predictors: conditionally independent and equal
importance. This technique mainly works on categorical response and explanatory variables. But it still can work on numeric explanatory variables as long as it can be transformed to categorical
This post is my note about Naive Bayes Classifier, a classification teachniques. All the contents in this post are based on my reading on many resources which are listed in the References part.
• Name : Naive Bayes Classifier
• Data Type :
□ Reponse Variable : Categorical
□ Explanatory Variable : Categorical and Numeric
(The numeric variables need to be discretized by binning or using probability density function.)
The below picture originated from : here
• Assumptions :
1. All the predictors have equal importance to the response variable.
In other words, all predictors will be put in the algorithm even though some of them may not be as influential as others.
2. All the predictors are conditionally independent to each other given in any class. That means, we always treat them conditionally independent no matter it is true or not. For example, suppose
we have two datasets. In the first dataset, \(X1\) and \(X2\) given \(Type=A\) is kind of independent to each other, but in the second datset, they are completely dependent to each other.
Nevertheless, they all have they same contingency table and will have the same result of Naive Bayes Classifier.
• Bayes’ Theorem
\[ P(A|B) &= \frac{P(A \cap B)}{P(B)} \\[5pt] &= \frac{P(B|A)P(A)}{P(B)} \\[5pt] &= \frac{P(B|A)P(A)}{\sum_{i}^{}P(B|{A}_{i})P({A}_{i})} \]
• Algorithm
Given a class variable \[Y= \{ 1, 2,..., K \}, K\geq2\] and explanatory variables, \[X=\{ X_1, X_2,..., X_p \}, \] the Bayes’ Theorem can be written as: \[ P(Y=k|X=x) &= \frac{P(X=x|Y=k)P(Y=k)}{P
(X=x)} \\[5pt] &= \frac{P(X=x|Y=k)P(Y=k)}{\sum_{i=1}^{K}P(X=x|Y=i)P(Y=i)} \]
The Naive Bayes Classifier is a function, \[C \colon \mathbb{R}^p \rightarrow \{ 1, 2,..., K \}\] defined as
\[ C(x) &=\underset{k\in \{ 1, 2,..., K \} }{\operatorname{argmax}}P(Y=k|X=x) \\[5pt] &= \underset{k\in \{ 1, 2,..., K \} }{\operatorname{argmax}}P(X=x|Y=k)P(Y=k) \\[5pt] \quad &(\text{by
assuming that } X_1,...,X_p \text{ are conditionally independent when given } Y=k, \forall k\in \{ 1, 2,..., K \}) \\[5pt] &= \underset{k\in \{ 1, 2,..., K \} }{\operatorname{argmax}}P(X_1=x_1|Y=
k)P(X_2=x_2|Y=k)\cdots P(X_p=x_p|Y=k)P(Y=k) \]
• Strengths and Weaknesses
□ Strengths:
1. simple and effective
□ Weaknesses:
1. hard to meet the assumptions of eaqual important and mutual independence on predictors.
2. not good to deal with many numeric predictors.
• A Simple Example
Suppose we have a contingency table like this:
Q : And, what will be our guess on type if we have a data has X1=“Yes” and X2=“Unsure”?
A : Our guess is Type B.
\[ P(A|X_1=\text{"Yes"}, X_2=\text{"Unsure"}) &\propto P(X_1=\text{"Yes"}, X_2=\text{"Unsure"}|A)P(A) \\[5pt] &= P(X_1=\text{"Yes"}|A)P(X_2=\text{"Unsure"}|A)P(A) \\[5pt] &= \frac{10}{50} \cdot \frac
{30}{50} \cdot \frac{50}{150} \\[5pt] &= \frac{1}{25} \\[10pt] P(B|X_1=\text{"Yes"}, X_2=\text{"Unsure"}) &\propto P(X_1=\text{"Yes"}, X_2=\text{"Unsure"}|B)P(B) \\[5pt] &= P(X_1=\text{"Yes"}|B)P(X_2
=\text{"Unsure"}|B)P(Type=B) \\[5pt] &= \frac{70}{100} \cdot \frac{10}{100} \cdot \frac{100}{150} \\[5pt] &= \frac{14}{300} \\[10pt] C(X_1=\text{"Yes"}, X_2=\text{"Unsure"}) &= \underset{k\in \{ A, B
\} }{\operatorname{argmax}}P(Y=k|X_1=\text{"Yes"}, X_2=\text{"Unsure"}) \\[5pt] &= B \]
• Further topics
□ Laplace Estimator (Machine Learning with R, Chapter 4)
Adding a small number to the frequency table to avoid zero probability for the Naive Bayes Classifier. | {"url":"https://yintingchou.com/posts/2017-02-12-naive-bayes-classifier/","timestamp":"2024-11-13T14:15:33Z","content_type":"text/html","content_length":"13483","record_id":"<urn:uuid:bd3c85d9-2d93-4f76-8b41-d7a0f43e02f4>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00376.warc.gz"} |
A Brief History of Time
A Brief History of Time: From the Big Bang to Black Holes
Bantam Books, 1988 - 198 páginas
"Stephen W. Hawking has achieved international prominence as one of the great minds of the twentieth century. Now, for the first time, he has written a popular work exploring the outer limits of our
knowledge of astrophysics and the nature of time and the universe. The result is a truly enlightening book: a classic introduction to today's most important scientific ideas about the cosmos, and a
unique opportunity to experience the intellect of one of the most imaginative, influential thinkers of our age. From the vantage point of the wheelchair where he has spent the last twenty years
trapped by Lou Gehrig's disease, Professor Hawking himself has transformed our view of the universe. His groundbreaking research into black holes offers clues to that elusive moment when the universe
was born. Now, in the incisive style which is his trademark, Professor Hawking shows us how mankind's "world picture evolved from the time of Aristotle through the 1915 breakthrough of Albert
Einstein, to the exciting ideas of today's prominent young physicists. Was there a beginning of time? Will there be an end? Is the universe infinite? Or does it have boundaries? With these
fundamental questions in mind, Hawking reviews the great theories of the cosmos - and all the puzzles, paradoxes and contradictions still unresolved. With great care he explains Galileo's and
Newton's discoveries. Next he takes us step-by-step through Einstein's general theory of relativity (which concerns the extraordinarily vast) and then moves on to the other great theory of our
century, quantum mechanics (which concerns the extraordinarily tiny). And last, he explores the worldwide effort to combine the two into a single quantum theory of gravity, the unified theory, which
should resolve all the mysteries left unsolved - and he tells why he believes that momentous discovery is not far off. Professor Hawking also travels into the exotic realms of deep space, distant
galaxies, black holes, quarks, GUTs, particles with "flavors" and "spin," antimatter, the "arrows of time" - and intrigues us with their unexpected implications. He reveals the unsettling
possibilities of time running backward when an expanding universe collapses, a universe with as many as eleven dimensions, a theory of a "no boundary" universe that may replace the big bang theory
and a God who may be increasingly fenced in by new discoveries - who may be the prime mover in the creation of it all. A BRIEF HISTORY OF TIME is a landmark book written for those of us who prefer
words to equations. Told by an extraordinary contributor to the ideas of humankind, this is the story of the ultimate quest for knowledge, the ongoing search for the secrets at the heart of time and
space." --
Dentro del libro
Resultados 1-3 de 31
Página 22
... measured vertically , and the distance from the observer is measured horizontally . The observer's path through ... measure time more accurately than length . In effect , the meter is defined to
be the distance traveled by light in ...
Página 24
From the Big Bang to Black Holes Stephen Hawking, Carl Sagan. any measure of time . In relativity , there is no real ... measuring the position of a point on the earth in miles north of Piccadilly
and miles west of Piccadilly , one ...
Página 55
... measure the position of the particle , the less accurately you can measure its speed , and vice versa . Heisenberg showed that the uncertainty in the position of the particle times the
uncertainty in its velocity times the mass of the ...
Our Picture of the Universe 1
Space and Time 15
The Expanding Universe 35
Derechos de autor
Otras 13 secciones no mostradas
Términos y frases comunes
Información bibliográfica | {"url":"https://books.google.com.uy/books?id=BdEPAQAAMAAJ&q=measure&dq=editions:ISBN0553380168&output=html_text&source=gbs_word_cloud_r&cad=5","timestamp":"2024-11-13T04:51:09Z","content_type":"text/html","content_length":"55410","record_id":"<urn:uuid:3c8f4b19-e5d2-4883-8d0d-90f46aa44f9b>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00771.warc.gz"} |
These are some questions I've been thinking about lately. The bounty for answering any of them is your choice of $5, a nice cup of coffee, or a portrait drawn by me.
Ideas may be misguided and/or obviously wrong. Please get in touch if that's the case! Or if you're interested in talking about any of them with me.
Geometric group theory is the study of the relationship between the algebraic, geometric, and combinatorial properties of finitely generated groups. Here, we add to the dictionary of correspondences
between geometric group theory and computational complexity. We then use these correspondences to establish limitations on certain models of computation.
In particular, we establish a connection between read-once oblivious branching programs and growth of groups. We then use Gromov's theorem on groups of polynomial growth to give a simple argument
that if the word problem of a group \(G\) is computed by a non-uniform family of read-once, oblivious, polynomial-width branching programs, then it is computed by an \(O(n)\)-time uniform algorithm.
That is, efficient non-uniform read-once, oblivious branching programs confer essentially no advantage over uniform algorithms for word problems of groups.
We also construct a group which faithfully encodes reversible circuits and note the correspondence between certain proof systems for proving equations of circuits and presentations of groups
containing this group. We use this correspondence to establish a quadratic lower bound on the proof complexity of such systems, using geometric techniques which to our knowledge are new to complexity
theory. The technical heart of this argument is a strengthening of the now classical theorem of geometric group theory that groups with linear Dehn function are hyperbolic. The proof also illuminates
a relationship between the notion of quasi-isometry and models of computation that efficiently simulate each other.
This is a first stab at what a categorical theory of cryptography might look like. It essentially gives a generalized definition of a cryptosystem in categorical terms in terms of a category of
interactive computations.
One interesting thing is that the definition given is a common generalization both of cryptosystems and error-correcting codes.
The document is pretty incomplete and doesn't cover some other ideas I had about how to formalize interactive proof systems and one-way functions in this setting.
A short expository article aimed at non-mathematicians explaining a bit about the result that IP = PSPACE, and how it would allow you to ensure that an AI is giving you good advice.
A tool for walking around the plane endowed with different Riemannian metrics. Includes the ability to write down custom metrics.
Talking About Combinatorial Objects Seminar
This was an introductory talk on expanders and some of their basic pseudorandomness/mixing properties.
Here is a little visualization of expanders I showed at the talk. Click to make it expand!
ICFP 2015
A talk on my senior thesis work applying string diagrams to program synthesis. | {"url":"https://math.berkeley.edu/~izaak/research.html","timestamp":"2024-11-03T03:16:06Z","content_type":"text/html","content_length":"13227","record_id":"<urn:uuid:11779e50-5d26-4886-bf73-9c4d6ecd8b11>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00085.warc.gz"} |
What are Character Sets?
This is the first in a short series on character sets and character encodings. I'm rewriting this as a precursor to some other pieces I've got coming up. As the planned pieces need a basic
understanding of character sets and encodings I thought I need to cover those first. This first article looks at character sets.
Back in the mists of time...
Back in the mists of time there was ASCII and ASCII was all that mattered. Originally it was designed from the ideas of telegraph codes.
If you are blanking on "telegraph code", that means a code to send information over a telegraph system - yes those things they have in Westerns were some poor unfortunate sits in a cubicle at the
railway station and receives the message that the gunslingers are coming into town and everyone better hide. They found it convenient to run the telegraph lines next to the railway lines, and so the
telegraph posts were often situated in the town's railway station. High Noon and all that...
For a while, the main telegraph code was Morse code, where characters are encoded as a series of dots and dashes. There were other telegraph codes beside Morse code. There was even a telegraph code
for Chinese characters!
Anyway, with the very early computers of the 1960s it was realized that Morse code wasn't going to cut it and so ASCII was developed. But what exactly is ASCII?
At its most basic ASCII is a table of characters. The table starts at 0 and goes up to 127, giving 128 entries. For example, SPC is table position 32. A is 65. z is 122.
The so called "printable characters" are in the range 32 to 127 decimal. There are also a bunch of "non-printable" characters - things like BEL at 7 which was "Bell". BS at number 8 is Backspace.
The thing about the printable characters in the ASCII character set was they were all English characters. The rest of the world didn't exist - at least as far as ASCII was concerned. England also
kind of didn't exist too - there is no £ in ASCII - American Standard for Information Interchange - the clue is in the name. $ is there of course at position 36. There are a total of 128 characters
supported in ASCII (0 to 127 decimal) - so only 7-bits are required to encode each character.
ASCII is also strange in that it is a character set and an encoding. A character set lays out a table that maps numbers (or more formally code points) to a character. The encoding says how that
character is actually stored or transmitted. In the case of ASCII the position in the table is how the character is encoded. For example, position (code point) 37 decimal in the table (character set)
is the % character. But % is also stored and transmitted (encoded) as 37. In other words the code point and the ultimate representation of the character are both '37' or '0100101' in binary. I will
go into more details on encoding in the next article.
Things soon got more complicated, especially with the advent of the IBM PC (and DOS) and a growing number of computer users for whom English was not their first language. There were people who used,
shock horror, accented characters, and even squiggly characters!
While ASCII was a concrete standard at this point, it only covered the first 7 bits of a byte. This meant the most significant bit of a byte was available to extend the table of characters (character
set) from 128 (0 to 127) entries to 256 (0 to 255). In other words, the table could be expanded from 128 to 255 to include some of those accented and squiggly characters, and a few other weird things
like box drawing characters. The box drawing characters were used in DOS type applications (before Windows) to create dialogs. You could create dialogs with different frame types, buttons, and even
simulate shadows with the box drawing characters.
The table entries from 128 to 255 became something of a "wild west" for characters. While 0 to 127 was the ASCII standard, 128 to 255 was a free-for-all, with PC OEM manufacturers adopting different
entries according to their markets. ANSI was an attempt to bring some order to the chaos of the 128 to 255 zone. The key idea was that of code pages, where you could switch out the 128 to 255 area
with different sets of characters, depending on market. For example, if you were targetting the Russian market you could switch in a code page that supported Russian characters. Each code page had a
number. So the Greek code page was Microsoft OEM DOS CP 737. Hebrew was Microsoft DOS CP 862. There were also IBM code pages for Japanese, Korean, and a very limited set of Chinese characters -
burgeoning markets for the IBM PC and DOS-based clones (which was the main OS on PCs at the time).
MS-DOS even had a command for selecting the code page. For example, chcp 850 would select the CP-850 code page for all devices in the PC that supported it.
ANSI, while collecting together these assorted character sets, was still a one-byte encoding per code point system.
And then Windows came along
Windows 1.0 rocked up on the scene sometime around 1985, and brought a whole new bunch of code pages with it. One of the most well known (at least here in the West) was Windows CP-1252. Windows
CP-1252 was still an 8-bit character set. CP-1252 went on to become one of the most popular 8-bit character sets in the world (and still is). Windows CP-1252 is sometimes referred to as Windows Latin
Another popular character set still found in the wild is ISO-8859-1 and family. This is still a single byte character set, with single byte encoding. This character set was also the default method of
dealing with documents (typically web pages) delivered via HTTP where a MIME type of text/ was specified. Now, in HTML 5 this has changed to Windows-1252. ISO-8859-1 is also known as Latin1. There
are other character sets in the family sequentially numbered up to ISO-8859-15. Bizzarely, ISO-8859-15 is also known as Latin9 and sometimes Latin0! Confused? The main thing to remember is that
information coded in ISO-8859-1 is out there are needs to be handled from time to time. It is also deemed to be superceded by Windows-1252 for web standards.
Unicode - Beyond the Byte
The main problem with these systems so far is they only allowed for a character set consisting of 256 characters (one byte). They attempted to solve this limited number of characters by using the
concept of code pages to switch in the required character set to extend ASCII as we already saw. It's still limited though. Take Chinese for example. In the simplified Chinese alphabet there are
2,235 characters. Oops - hello code page mayhem!
Obviously, things had to go beyond the limitations of ASCII, ANSI and the byte.
Enter Unicode...
Unicode is a standard that creates a character set table for (at least) every character on the planet. It is essentially an unlimited character set table in that there is no hard limit to the number
of code points. Typically though 32-bits is far enough. 32-bits is actually enough to create a character set table of up to 4,294,967,296 code points (0 to 4,294,967,295 decimal). Which is - a lot of
The Unicode character set includes made-up languages, such as Klingon and Elvish. Yes, Elvish is a proposed part of the Unicode standard. Unicode also includes all sorts of dingbats, emojis and
whatnots, including the infamous Pile of Poo emoji. Cute little chap. The most recent version of Unicode, 12.1 includes 137,994 characters. There's plenty of room in those 32-bits we were talking
16-bits can store 65,536 code points (0 to 65,535 decimal), and while not enough to represent all code points, it covers most of the useful ones.
NOTE: Unicode is also known as ISO-10646
Unicode complexity
Unicode is an extremely complex character set, with planes and blocks and standardized subsets. For example, the first 128 entries (0 to 0x7F) of the Unicode character set is the standardized subset
known as Basic Latin, and corresponds to ASCII. The set from 0x80 to 0xFF is known as Latin-1 Supplement. I'm not going to cover these additional complexities in this piece, but thought you should at
least be aware of them.
Unicode also has what you can think of as ready-made characters, so called precomposed characters. For example, there is a single code point for e-acute (as used in French). Precomposed characters
are provided in Unicode mainly for backwards compatibility with older character sets.
Unicode also supports so-called decomposed characters. Here, you could for example combine an acute accent code point with the e code point to create an e-acute character.
This provide immense flexibility and efficiency, especially for dealing with complex writing schemes. Rather than having to have a large numb er of precomposed characters (and associated fonts), you
can create these characters using simpler subsets. For example, you could create all French 'e characters' for a single 'e' character and then combine that with accent code units (grave, acute,
circumflex etc.) as required. Similarly you could do the same for the 'a' character. This drastically reduces code points and other resources required.
For this reason, and the fact that "characters" might be things like emoticons, Unicode doesn't associate code points with "characters", but rather each code point is associated with a code unit. In
fact a code unit might not be a character in the way we think of it.
Even though in formal Unicode parlance a code point is associated with a code unit, I do sometimes use the term character instead of a code unit. Mostly this is when I have a specific renderable
character in mind, such as 'A'.
How are Unicode code units represented in the real world?
If we had a series of Unicode code units (a Unicode string), such as "HELLO" it might look like this as a series of Unicode code points:
U+0048 U+0045 U+004C U+004C U+004F
Note each "character" here is specified using the Unicode U+code point notation.
What would this look like in memory or on disk? A guess might be:
But what if we stored big-endian? What if we used four bytes per character? Also, in our representation above there's a total of 10 bytes (two bytes per character). "HELLO" in ASCII is five bytes, so
there's some wastage of memory/space/bandwidth there.
We are now into the topic of how strings of characters are actually stored in memory and on disk, or transmitted over the Internet, and there are various encodings that can be used.
The sample Unicode string above is encoded with UCS-2 encoding, which specifies two bytes per character. The number matches the code point in this encoding. I will get into the weeds on character
encodings in the next article.
So at this stage we hopefully have some idea of what character set is, and that there are various character sets out there, Unicode being the most important one in use today. Many of the older
character sets have been assimilated into Unciode as subsets, for example the first 128 entries in Unicode are the same as ASCII.
In the next article I will look at character encodings - that is, how character strings are stored and transmitted in the real world, rather than when considered as a code point in a character set. | {"url":"https://coffeeandcode.neocities.org/articles/character-sets","timestamp":"2024-11-03T18:34:08Z","content_type":"text/html","content_length":"14632","record_id":"<urn:uuid:0b926549-8ae1-4d54-9cc5-dc8c3b03f5cc>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00333.warc.gz"} |
Some excellent simulations are available here, please see the movies: Visualization with the TU-Dresden 3D Traffic Simulator This movie shows traffic jam… if a shock wave makes an angle of 90 with
the flow direction, it becomes a normal shock wave. A bow wake, such as the one in (Figure), is created when the wave source moves faster than the wave propagation speed. 214-226, Chapter 3 pp.
However, I dont quite understand which type of shockwave forms the sonic boom heard on the ground. 9.12. This means that a very fast particle can actually exceed the speed of light in a material.
tersect. stationary normal shocks, expansion fans and Mach waves. 88-125, Anderson: Chapter 5 pp. While most people have probably experienced plenty of traffic congestion first hand, it is useful to
see it systematically from three different perspectives: (1) That of the driver (with which most people are familiar), (2) a birdseye view, and (3) a helicopter view. The area of the shockwave relies
on the variety in the cross-sectional stream zone of the conduit, and also on the upstream and downstream limit conditions. > Why is the normal shock wave normal? Normal Shock Waves Under the
appropriate conditions, very thin, highly irreversible discontinuities can Under the appropriate conditions, very thin, highly irreversible discontinuities can The airflow behind the shock wave
breaks up into a turbulent wake, increasing drag. CCBY-NC-ND. I also understand that normal shockwaves form on the top of wings past the critical mach number. Some common examples are supernovae
shock waves or blast waves travelling through the interstellar medium, the bow shock caused by the Earth's magnetic field colliding with the solar wind and shock waves caused by galaxies colliding
with each other. is the same on both sides of the shock wave.The equations 1.6, 1.12 and 1.13 are called Rankine-Hugonit equations for normal shock waves. 1.2.1.1 Bow shock wave • suddenly raises
density, temperature and pressure of shocked air; consider normal shock in ideal air – ρo = 1.16 kg/m3 → ρs = 6.64 kg/m3 (over five times as dense!!) > Prandtl's relation for the normal shock waves is
derived by applying the governing equations to an infinitesimally small volume. p t1 = p t1 p 1 p 1 = 1 0.2335 (20) = 85.7 psia p t2 = p t2 p 2 p 2 = 1 0.5075 (41.7) = 82.2 psia If we looked at the
normal-shock problem and computed stagnation pressures on the basis of the normal Mach numbers, we would have The truck enters the roadway and Show the figure of normal shock waves as below. Oblique
shock relations The effective equivalence between an oblique and a normal shock allows re-use of the already derived normalshock jump relations. A shock wave (or simply "shock") is a type of
propagating disturbance.Like an ordinary wave, it carries energy and can propagate through a medium (solid, liquid or gas) or in some cases in the absence of a physical medium, through a field such
as the electromagnetic field.Shock waves are characterized by an abrupt, nearly … Why does this approach results in a formula for supersonic flows? – To = 300 K → Ts = 6,100 K (hot as the sun’s
surface !!) When an object travels faster than the speed of sound in a medium, a cone shaped region of high pressure called a shock wave trails behind it. Unlike ordinary sound waves, the speed of a
shock wave varies with its amplitude. See Figure 1.1. moving observer sees a normal shock with velocities u1, and u2. What circumstances normal shock wave wo… If the back pressure (the pressure
outside of the nozzle) is l… Result of the flow start to reach M = 1 at the throat. The shock wave that formed on … duct can accommodate without a modification of the duct geometry. the only
explanation i found on the internet is : it is called normal because the wave is perpendicular to the flow direction ! Note that it is the special case of oblique shock wave i.e. Most sources state
the (double) sonic boom is formed by a shockwave from the nose of the aircraft and from the rear of the tail. Frequencies in Shockwave … For a detached shock wave around a blunt body or a wedge, a
normal shock wave exists on the stagnation streamline; the normal shock is followed by a strong oblique shock, then a weak oblique shock, and finally a Mach wave, as shown in Fig. Example • Given: CD
nozzle designed to produce Me=3 for isentropic flow • Find: What range of back ... (normal shock at M=3, shock relations/tables) M1=3 M2=Me pes,sup x p/po 1 x M 1 Mes=3 Mes,sub pes,sub pe,sh Me,sh b
es,sup()2 1 o e,sh 2 M 3 2 1 M 3 p p p p 28.1%p M M 0.475 and p p 10.33 4.12 Detached Shock Wave in Front of a Blunt Body. 1.The state of a gas (γ=1.3,R =0.469 KJ/KgK.) an example of an external flow.
Any blunt-nosed body in a supersonic flow will develop a curved bow shock, which is normal to the flow locally just ahead of the stagnation point. Piston-Generated Shock Wave Up: One-Dimensional
Compressible Inviscid Flow Previous: Sonic Flow through a Normal Shocks As previously described, there is an effective discontinuity in the flow speed, pressure, density, and temperature, of the gas
flowing through the diverging part of an over-expanded Laval nozzle. velocity components normal and tangential, respectively, to oblique shock wave speed of flow maximum speed obtainable by expanding
to zero absolute temperature external work performed per unit mass angle of attack ratio of specific heats, angle of flow deflection across an oblique shock wave shock-wave angle measured from …
upstream of normal shock wave is given by the following data: Mx =2.5, Px =2 bar. For example, it may occur through constant area nozzle or a diverging duct, or in front of a blunt-nosed body.
Section 5: Non-Isentropic 1-D Flow, Normal Shock Waves, Heat Addition, Measurement of Airpseed (Anderson: Chapter 3 pp. An example of a normal 15mm (0.7mm convex head) can be seen below: Whereas a
concave head should allow some focusing of the energy as seen below: This means the depth at which the highest amount of energy applied can be varied. The static fluid properties p, ρ, h, a are of
course, remains fixed since flow. ’ s surface!! may be expressed in another form using Rankine-Hugoniot equation this approach in... I also understand that normal shockwaves form on the top of wings
past critical! Tubes the strength of shock wave 4.2 oblique shock wave may be expressed in another using... Far, we have only studied waves under steady state conditions, i.e nozzle... Gas ( γ=1.3, R
=0.469 KJ/KgK. of 90 with the flow is choked, upstream! If a shock wave forms somewhere downstream of the throat, as illustrated in curve ( D ) % its... Top of wings past the critical mach number a
normal shock waves a slow moving truck drives along roadway... Shock is normal generating a region of subsonic flow in front of the shock wave therapy be! Course, remains fixed since the flow
direction, it may occur through constant area nozzle or diverging. Wave is perpendicular to the flow direction it is called normal because the wave is perpendicular to the wave... Direction, it may
occur through constant area nozzle or a diverging duct, or in front of wing! The detached shock wave makes an angle of 90 with the flow direction it is called normal... ( γ=1.3, R =0.469 KJ/KgK.
which type of shockwave forms the sonic boom heard on internet... A formula for supersonic flows of oblique shocks is not possible and instead we will introduced. The strength of shock wave appears
in many types of supersonic flows exceed the speed of sound, a of. Wave if the shock waves as below across this normal shock γ=1.3, R =0.469 KJ/KgK. so far we! And a normal shock flux area travels
about 67 % of its normal speed flow choked... Are one example of a blunt body, Generation of oblique shocks not. Therapy can be an effective therapeutic strategy, executed after a careful diagnostic
evaluation why this. Object, the detached shock wave is given by the following data: Mx =2.5 Px! May be expressed in another form using Rankine-Hugoniot equation wave.The equations 1.6, 1.12 and 1.13
are called Rankine-Hugonit for. Fans and mach waves of oblique shocks is not possible and instead we be. Supersonic flows to subsonic across this normal shock wave is always detached a! Called
Rankine-Hugonit equations for normal shock wave in front of the already derived normalshock jump.! Fixed since the flow is choked, and upstream conditions have not changed and a normal shock wave
oblique... Minimally invasive shock wave may be expressed in another form using Rankine-Hugoniot equation boom!!! and instead we will be introduced to unsteady waves illustrated in curve ( D ).. The
conditions in example 7.3, compute the stagnation pressures and tem-peratures for a downstream! Explanation i found on the ground =0.469 KJ/KgK. a broader phenomenon called wakes., as illustrated in
curve ( D ) mach waves the critical mach number an effective therapeutic,... Types of supersonic flows shock relations the effective equivalence between an oblique and a normal shock allows of!
Detached shock wave 11 the throat, as illustrated in curve ( D.! ; PROBLEMS region of subsonic flow in front of normal shock wave examples throat, as illustrated in curve ( ). ( D ) state of a blunt
object leading edge a formula for supersonic flows you are standing … 7.4... Curved shock wave is always detached on a blunt object is given by following... I found on the internet is: it is called
normal because the wave is given the..., a shock normal shock wave examples forms somewhere downstream of the object flux area duct, or front. Get a detached bow shock the object, the detached shock
wave 11 be expressed in form... The object, the detached shock wave is perpendicular to the flow direction, it may occur through constant nozzle. Found on the top of wings past the critical mach
number we only... In example 7.3, compute the stagnation pressures and tem-peratures boom heard on the is... That a very fast particle can actually exceed the speed of light in a formula
supersonic... Careful diagnostic evaluation travels about 67 % of its normal speed angle for a downstream! A slow moving truck drives along the roadway and the airflow behind the shock equations...,
the detached shock wave therapy can be an effective therapeutic strategy, executed after a careful diagnostic evaluation very. Therapeutic strategy, executed after a careful diagnostic evaluation
truck enters the roadway 10. The energy flux area wave appears in many types of supersonic flows waves are one example of a object... Subsonic across this normal shock allows re-use of the shock wave
forms just ahead of the shock equations... Is always detached on a blunt body, Generation of shock waves perpendicular to the shock wave.The 1.6! If you are standing … example 7.4 for the conditions
in example,. Forms the sonic boom heard on the ground makes an angle of 90 with the flow direction is! 4.12 detached shock wave may be expressed in another form using Rankine-Hugoniot equation fans.
Called normal because the wave is directly proportional to ; PROBLEMS course, remains fixed since the flow choked. Internet is: it is called normal because the wave is perpendicular to the flow
direction a turbulent,! The airflow behind the shock wave 4.3 Curved shock wave 4.2 oblique shock relations the effective equivalence between an and. An angle of 90 with the flow direction, it may
occur through constant area nozzle a! Most energy is referred to as the energy flux area shockwaves form on the internet is it. Type of shockwave forms the sonic boom heard on the ground are 4.1. We
came to know strength of shock waves as below this approach results a. The only explanation i found on the internet is: it is called a normal shock allows re-use of wing... To unsteady waves wave
therapy can be an effective therapeutic strategy, after! S surface!! the shock wave is always detached on a object! Flux area 1.6, 1.12 and 1.13 are called Rankine-Hugonit equations for normal shock
waves as.!, Px =2 bar a material is not possible and instead we will get a detached bow.... Careful diagnostic evaluation the already derived normalshock jump relations have only studied under! For
supersonic flows shock relations the effective equivalence between an normal shock wave examples and normal. Which type of shockwave forms the sonic boom heard on the ground fast particle can actually
exceed the of. From this equation we came to know strength of shock wave 11 the stagnation pressures tem-peratures. This equation we came to know strength of shock wave forms just ahead of the object
show the figure normal. 1.13 are called Rankine-Hugonit equations for normal shock wave if the shock wave appears in many types of supersonic....... for example, if you are standing … example 7.4 for
the in... Of glass, only travels about 67 % of its normal speed on... Wave 4.2 oblique shock relations the effective equivalence normal shock wave examples an oblique and a shock! Another form using
Rankine-Hugoniot equation throat, as illustrated in curve ( D ) sound, a of... The only explanation i found on the ground detached on a blunt body Generation! 7.4 for the conditions in example 7.3,
compute the stagnation pressures and tem-peratures many of!: Mx =2.5, Px =2 bar and upstream conditions have not changed a diverging duct, or in of. ( hot as the sun ’ s surface!! already derived
normalshock relations! Its normal speed studied waves under steady state conditions, i.e Generation of oblique shocks is not possible instead... Minimally invasive shock wave 11, h normal shock wave
examples a shock wave is perpendicular the... Supersonic to subsonic across this normal shock wave 11 of a gas γ=1.3..., remains fixed since the flow direction and a normal shock wave therapy can be
effective... Formula for supersonic flows subsonic flow in front of a blunt-nosed body directly to. Heard on the top of wings past the critical mach number shock waves as.... We will be introduced to
unsteady waves invasive shock wave may be expressed in another form using Rankine-Hugoniot.! 'S leading edge 1.12 and 1.13 are called Rankine-Hugonit equations for normal shock waves curve D! Throat,
as illustrated in curve ( D ) shocks is not possible and instead we get! Wave may be expressed in another form using Rankine-Hugoniot equation which type of shockwave forms the sonic boom on...
Object, the detached shock is normal generating a region of subsonic flow in front of the wing leading! Downstream condition introduced to normal shock wave examples waves area nozzle or a diverging
duct, or front. The airplane exceeds the speed of sound, a shock wave in front of the 's! We will be introduced to unsteady waves is always detached on a blunt body example 7.3, compute stagnation.
To as the sun ’ s surface!! stagnation pressures and tem-peratures, the detached shock is normal a! An oblique and a normal shock Generation of oblique shocks is not possible and instead we will a.
And instead we will get a detached bow shock normal generating a of. Equations 1.6, 1.12 and 1.13 are called Rankine-Hugonit equations for normal shock wave is to... Figure of normal shock allows
re-use of the object, the detached shock wave the... Directly proportional to ; PROBLEMS, executed after a careful diagnostic evaluation of flow. Can actually exceed the speed of sound, a shock wave
11, and upstream have... | {"url":"http://clinicadermacare.com.br/puccini-for-wgfy/e5cd77-normal-shock-wave-examples","timestamp":"2024-11-13T02:13:39Z","content_type":"text/html","content_length":"25067","record_id":"<urn:uuid:76ef2b41-0df1-4ade-8d9b-338a53fc7fa3>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00084.warc.gz"} |
Worst additive noise: An information-estimation view
The "worst additive noise" problem is considered. The problem refers to an additive channel in which the input is known to some extent. It is further assumed that the noise consists of an additive
Gaussian component and an additive component of arbitrary distribution. The question is: what is the distribution over the additive noise that will minimize the mutual information between the input
and the output? Two settings for this problem are considered. In the first setting a Gaussian input with a given covariance matrix is considered and it is shown that the problem can be handled in the
framework of the Guo, Shamai and Verdú I-MMSE relationship. This framework gives a simple derivation of Diggavi and Cover's result, that under a covariance constraint the "worst additive noise"
distribution is Gaussian, meaning that Gaussian noise minimizes the input-output mutual information given that the input is Gaussian. The I-MMSE framework also shows that given that the input is
Gaussian distributed, for any constraint on the distribution of the noise, which does not prohibit a Gaussian distribution, the "worst" distribution is a Gaussian distribution complying with the
constraint. In the second setting it is assumed that the input contains a codeword from an optimal point-to-point codebook (i.e., it achieves capacity) and it is shown, for a subset of SNRs, that the
minimum mutual information is obtained when the additive signal is Gaussian-like up to a given SNR.
Publication series
Name 2014 IEEE 28th Convention of Electrical and Electronics Engineers in Israel, IEEEI 2014
Other 2014 28th IEEE Convention of Electrical and Electronics Engineers in Israel, IEEEI 2014
Country/Territory Israel
City Eilat
Period 12/3/14 → 12/5/14
All Science Journal Classification (ASJC) codes
• Electrical and Electronic Engineering
Dive into the research topics of 'Worst additive noise: An information-estimation view'. Together they form a unique fingerprint. | {"url":"https://collaborate.princeton.edu/en/publications/worst-additive-noise-an-information-estimation-view","timestamp":"2024-11-14T12:38:04Z","content_type":"text/html","content_length":"54595","record_id":"<urn:uuid:329e5606-25bb-445c-9d82-37055ea35161>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00093.warc.gz"} |
Gossiping -- from Wolfram MathWorld
Gossiping and broadcasting are two problems of information dissemination described for a group of individuals connected by a communication network. In gossiping, every person in the network knows a
unique item of information and needs to communicate it to everyone else. In broadcasting, one individual has an item of information which needs to be communicated to everyone else (Hedetniemi et al.
A popular formulation assumes there are
Gossiping (which is also called total exchange or all-to-all communication) was originally introduced in discrete mathematics as a combinatorial problem in graph theory, but it also has applications
in communications and distributed memory multiprocessor systems (Bermond et al. 1998). Moreover, the gossip problem is implicit in a large class of parallel computing problems, such as linear system
solving, the discrete Fourier transform, and sorting. Surveys are given in Hedetniemi et al. (1988) and Hromkovic et al. (1995).
In the case of one-way communication ("polarized telephones"), e.g., where communication is done by letters or telegrams, the graph becomes a directed graph and the minimum number of calls becomes | {"url":"https://mathworld.wolfram.com/Gossiping.html","timestamp":"2024-11-02T08:18:47Z","content_type":"text/html","content_length":"62017","record_id":"<urn:uuid:1648ed55-f330-43f0-aa1d-aaecd906a3fc>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00586.warc.gz"} |
Proof evaluation
This is a check list intended for
• proof writers to use before checking a proof into a repository and
• proof reviewers to use during the code review of a pull request containing a proof.
This check list is intended to ensure clear answers to two questions:
• What properties are being checked by the proof?
• What assumptions are being made by the proof?
and that these answers can be found in one of three places:
• The proof harness,
• The proof makefiles (and, with the starter kit, these makefiles are the proof Makefile, the project Makefile-project-defines, and the project Makefile.common), and perhaps
• The proof readme file.
The best practices for writing a proof are described in Write a good proof. Reviewers should keep these best practices in mind when reading a proof. We recommend that any deviations from best
practices be explained in the readme file.
Check the following:
• All of the standard property-checking flags are used:
* --bounds-check
* --conversion-check
* --div-by-zero-check
* --float-overflow-check
* --malloc-fail-null
* --malloc-may-fail
* --nan-check
* --pointer-check
* --pointer-overflow-check
* --pointer-primitive-check
* --signed-overflow-check
* --undefined-shift-check
* --unsigned-overflow-check
Note that the starter kit uses these flags by default. The properties checked by these flags is documented on the CPROVER website. Note, however, that a developer may disable any one of these
flags by editing project Makefile.common or by setting a makefile variable to the empty string (as in CBMC_FLAG_MALLOC_MAY_FAIL = ) in the project Makefile-project-defines or a proof Makefile.
These are the places to look for deviations.
• All deviations from the standard property-checking flags are documented.
There are valid reasons to omit flags either for a project or for an individual proof. But the decision and the reason for the decision must be documented either in a project readme or a proof
readme file.
CBMC checks assertions in the code. This is understood and need not be documented.
Check the following:
• All nontrivial data structures have an ensure_allocated function as described in the training material.
Feel free to use any naming scheme that makes sense for your project --- some projects use allocate_X in place of ensure_allocated_X --- but be consistent.
• All nontrivial data structures have an is_valid() predicate as described in the training material for every nontrivial data structure.
• All definitions of ensure_allocated functions and is_valid predicates appear in a common location.
These definitions are most commonly stored in the proofs/sources subdirectory of the starter kit. Definitions are stored here and used consistently in the proofs.
• All pointers passed as input are allocated on the heap with malloc.
One common mistake is to allocate a buffer buf on the stack and to pass &buf to the function under test in the proof harness. This prevents the proof from considering the case of a NULL pointer.
• All instances of __CPROVER_assume appear in a proof harness.
Note that some exceptions are required. For example, it may be necessary in an ensure_allocated to assume length < CBMC_MAX_OBJECT_SIZE before invoking malloc(length) to avoid a false positive
about malloc'ing a too-big object. But every instance of __CPROVER_assume in supporting code should be copied into the proof harness. The goal is for all proof assumptions to be documented in one
• All preprocessor definitions related to bounds on input size or otherwise related to proof assumptions appear in the proof Makefile.
In particular, do not embed definitions in the supporting code or header files. The goal is for all proof assumptions to be documented in one place.
• Confirm that all stubs used in the proof are acceptable abstractions of the actual code.
Acceptable could mean simply that every behavior of the original code is a behavior of the abstraction.
Look at the report in the checks attached to the pull request.
• Confirm that the coverage is acceptable and confirm that the readme file explains the reason for any lines not covered.
• Confirm that the list of missing functions is acceptable.
• Confirm that there are no errors reported.
• Consider writing function contracts for the function under test as described in Write a good proof. The check list above ensures that the properties (including the assumptions about the input)
that must be true before function invocation are clearly stated in the proof harness. Consider adding a statement of what properties must be true after function invocation as assertions at the
end of the proof harness.
• Consider adding the assumptions made by the proof harness for a function under test to the source code for the function in the form of assertions in the code. This will validate that the
assumptions made by the proof of a function are satisfied by each invocation of the function (at least during testing). | {"url":"https://model-checking.github.io/cbmc-training/management/Code-review-for-proofs.html","timestamp":"2024-11-10T16:15:34Z","content_type":"text/html","content_length":"21261","record_id":"<urn:uuid:b4cfdc3c-07b6-4db9-8bde-523fbd243b40>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00892.warc.gz"} |
lattice polytope
We consider the problem of optimizing a linear function over a lattice polytope P contained in [0,k]^n and defined via m linear inequalities. We design a simplex algorithm that, given an initial
vertex, reaches an optimal vertex by tracing a path along the edges of P of length at most O(n^6 k log k). The … Read more
On the diameter of lattice polytopes
In this paper we show that the diameter of a d-dimensional lattice polytope in [0,k]^n is at most (k – 1/2) d. This result implies that the diameter of a d-dimensional half-integral polytope is at
most 3/2 d. We also show that for half-integral polytopes the latter bound is tight for any d. Citation University … Read more | {"url":"https://optimization-online.org/tag/lattice-polytope/","timestamp":"2024-11-10T17:53:47Z","content_type":"text/html","content_length":"84565","record_id":"<urn:uuid:138ccace-4d92-4fa0-bbc3-c8b9311f7e50>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00609.warc.gz"} |
Riemann curvature tensor part I: derivation from covariant derivative commutator
In our previous article Local Flatness or Local Inertial Frames and SpaceTime curvature, we have come to the conclusion that in a curved spacetime, it was impossible to find a frame for which all of
the second derivatives of the metric tensor could be null.
We have also mentionned the name of the most important tensor in General Relativity, i.e. the tensor in which all this curvature information is embedded: the Riemann tensor - named after the
nineteenth-century German mathematician Bernhard Riemann - or curvature tensor. In other words, the vanishing of the Riemann tensor is both a necessary and sufficient condition for Euclidean - flat -
In this article, our aim is to try to derive its exact expression from the concept of parallel transport of vectors/tensors.
Bernhard Riemann
Parallel transport
Say you start at the north pole holding a javelin that points horizontally in some direction, and you carry the javelin to the equator, always keeping the javelin pointing "in as same a direction as
possible", subject to the constraint that it point horizontally, i.e., tangent to the earth. (The idea is that we're taking "space" to be the 2-dimensional surface of the earth, and the javelin is
the "little arrow" or "tangent vector", which must remain tangent to "space".)
Parallel transport of a vector around a closed loop
After marching down to the equator, march 90 degrees around the equator, and then march back up to the north pole, always keeping the javelin pointing horizontally and "in as same a direction as
possible" along the meridian.
By the time you get back to the north pole, the javelin is pointing a different direction! That's because the surface of the earth is curved. In fact, if we parallel transport a vector around an
infinitesimal loop on a manifold, the vector we end up wih will only be equal to the vector we started with if the manifold is flat.
Actually, "parallel transport" has a very precise definition in curved space: it is defined as transport for which the covariant derivative - as defined previously in Introduction to Covariant
Differentiation - is zero.
So holding the covariant at zero while transporting a vector around a small loop is one way to derive the Riemann tensor.
But there is also another more indirect way using what is called the commutator of the covariant derivative of a vector.
Covariant derivative commutator
In this usage, "commutator" refers to the difference that results from performing two operations first in one order and then in the reverse order. So if one operator is denoted by A and another is
denoted by B, the commutator is defined as [AB] = AB - BA. Thus if the sequence of the two operations has no impact on the result, the commutator has a value of zero.
To get the Riemann tensor, the operation of choice is covariant derivative. That's because as we have seen above, the covariant derivative of a tensor in a certain direction measures how much the
tensor changes relative to what it would have been if it had been parallel transported. The commutator of two covariant derivatives, then, measures the difference between parallel transporting the
tensor first one way and then the other, versus the opposite.
In flat space the order of covariant differentiation makes no difference - as covariant differentiation reduces to partial differentiation -, so the commutator must yield zero. Inversely, any
non-zero result of applying the commutator to covariant differentiation can therefore be attributed to the curvature of the space, and therefore to the Riemann tensor.
Derivation of the Riemann tensor
So, our aim is to derive the Riemann tensor by finding the commutator
or, in semi-colon notation,
We know that the covariant derivative of V[a] is given by
Also, taking the covariant derivative of this expression, which is a tensor of rank 2 we get:
This section of the article is only available for our subscribers. Please click here to subscribe to a subscription plan to view this part of the article.
We define the expression inside the brackets on the right-hand side to be the Riemann tensor, meaning
Remark 1: The curvature tensor measures noncommutativity of the covariant derivative as those commute only if the Riemann tensor is null.
Remark 2: The curvature tensor involves first order derivatives of the Christoffel symbol so second order derivatives of the metric, and therfore can not be nullified in curved space time. We recalll
from our article Local Flatness or Local Inertial Frames and SpaceTime curvature that if the surface is curved, we can not find a frame for which all of the second derivatives of the metric could be
Remark 3: Having four indices, in n-dimensions the Riemann curvature tensor has n^4 components, i.e 2^4 = 16 in two-dimensional space, 3^4=81 in three dimensions and 4^4=256 in four dimensions (as in | {"url":"https://einsteinrelativelyeasy.com/index.php/general-relativity/61-the-riemann-curvature-tensor","timestamp":"2024-11-10T02:21:35Z","content_type":"text/html","content_length":"71647","record_id":"<urn:uuid:823392b1-23b9-424b-8f3e-0714dd66649d>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00180.warc.gz"} |
Numerical Relativity in Spherical Polar Coordinates – Thomas W. Baumgarte
Thu. February 12th, 2015, 11:30 am-12:30 pm
Rockefeller 221
Numerical relativity simulations have made dramatic advances in recent years. Most of these simulations adopt Cartesian coordinates, which have some very useful properties for many types of
applications. Spherical polar coordinates, on the other hand, have significant advantages for others. Until recently, the new coordinate singularities in spherical polar coordinates have hampered the
development of numerical relativity codes adopting such coordinates, at least in the absence of symmetry assumptions. With a combination of different techniques – a reference-metric formulation of
the relevant equations, a proper rescaling of all tensorial quantities, and a partially-implicit Runge-Kutta method – we have been able to solve these problems. In this talk I will start with a brief
review of numerical relativity, including the 3+1 decomposition of Einstein’s equations, I will then explain the above techniques for applications in spherical polar coordinates, and will finally
show some tests – both for vacuum black hole spacetimes, and including relativistic hydrodynamics. | {"url":"https://physics.case.edu/events/numerical-relativity-in-spherical-polar-coordinates-thomas-w-baumgarte/","timestamp":"2024-11-02T23:35:09Z","content_type":"text/html","content_length":"157662","record_id":"<urn:uuid:1477bc88-1ad2-40b2-a784-aa7355345239>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00512.warc.gz"} |
Education and career
Florida State named Case the Olga Larson Professor in 2004.^[1] In 2012, Florida State created the Bettye Anne Case Scholarship in Actuarial Science to recognize Case for her work in establishing the
actuarial sciences program at Florida State in the 1990s.^[5] Florida State also established the Bettye Anne Case Actuarial Science Award to honor Case. ^[6] In 2016 the Association for Women in
Mathematics presented Case a Lifetime Service Award in recognition for her many decades of service to the AWM, particularly as Meetings Coordinator and long time member of the Executive Committee.^[7
] ^[8] In 2018 she was honored as one of the inaugural Fellows of the Association for Women in Mathematics.^[9] Florida State University has a scholarship in Actuarial Science named after her.^[10]
1. ^ ^a ^b ^c ^d Curriculum vitae, April 1, 2013, retrieved 2018-02-09
2. ^ ^a ^b Kenschaft, Patricia C. (2005), Change is Possible: Stories of Women and Minorities in Mathematics, American Mathematical Society, p. 146, ISBN 9780821837481
3. ^ Reviews of Complexities: Women in Mathematics:
□ Henson, Shandelle M. (January 2006), American Mathematical Monthly, 113 (1): 91–93, doi:10.2307/27641858, JSTOR 27641858{{citation}}: CS1 maint: untitled periodical (link)
□ Davis, A. E. L. (November 2006), Mathematical Gazette, 90 (519): 548–549, doi:10.1017/S0025557200180672, JSTOR 40378234, S2CID 185914638{{citation}}: CS1 maint: untitled periodical (link)
□ Voolich, Erica (July 2007), "Complexities: Women in Mathematics", Convergence
□ Spencer, Gwen (September 2007), Math Horizons, 15 (1): 29–30, doi:10.1080/10724117.2007.11974728, JSTOR 25678709, S2CID 125589195{{citation}}: CS1 maint: untitled periodical (link)
□ Kidwell, Peggy Aldrich (September 2007), Minerva, 45 (3): 353–356, doi:10.1007/s11024-007-9053-z, JSTOR 41821420, S2CID 144078467{{citation}}: CS1 maint: untitled periodical (link)
□ Korten, Marianne (May 2009), The Mathematical Intelligencer, 31 (3): 48–49, doi:10.1007/s00283-009-9052-z, S2CID 120987714{{citation}}: CS1 maint: untitled periodical (link)
4. ^ Bettye Anne Case at the Mathematics Genealogy Project
5. ^ "Giving back: Alum Courtney White and wife Shari honor Professor Bettye Anne Case and lend actuarial students a hand" (PDF). Florida State University. Retrieved 15 April 2021.
6. ^ "Department News Spring 2013". Florida State University. Retrieved 15 April 2021.
7. ^ "Past AWM Service Award Winners". Association for Women in Mathematics. Archived from the original on 29 May 2016. Retrieved 14 February 2018.
8. ^ "Press Release: Bettye Anne Case Receives an AWM Life Time Service Award". Association for Women in Mathematics. Retrieved 14 February 2018.
9. ^ 2018 Inaugural Class of AWM Fellows, Association for Women in Mathematics, retrieved 2018-02-09
10. ^ "Bettye Anne Case Scholarship in Actuarial Science - FS4U". fsu.academicworks.com. Retrieved 2020-03-07. | {"url":"https://www.knowpia.com/knowpedia/Bettye_Anne_Case","timestamp":"2024-11-10T09:42:41Z","content_type":"text/html","content_length":"89623","record_id":"<urn:uuid:81aab0c7-86e1-4460-b75b-efebbc6b7274>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00120.warc.gz"} |
Fuzzy Approaches in Financial Modeling - WorldQuant
Data Science
Fuzzy Approaches in Financial Modeling
Fuzzy logic and fuzzy set theory can be used to model financial uncertainty and challenge the use of probability theory. They can even be used to rethink Black-Scholes.
Aristotle, the father of classical logic, taught the concept of deductive reasoning through a syllogism, often attributed to him, about the teacher of his teacher Plato: “All men are mortal. Socrates
is a man. Therefore, Socrates is mortal.” But approaches that use Aristotle’s logic to solve real-life problems often struggle. A far less famous Greek philosopher, Eubulides of Miletus, who invented
several paradoxes that called into question the work of his contemporary Aristotle, first pointed out the limitations of classical logic. Eubulides is best known for the Sorites paradox, which asks,
“At what point does a heap [sorites in Greek] of sand turn into only a handful if we take away sand grains one at a time under the assumption that a heap remains a heap after a single grain is
removed?” Classical logic would conclude that the pile of sand will always be a heap because each time we remove a grain, we still have a heap — but this is obviously a false conclusion. The heap
contains a finite number of sand grains; eventually, as we continue to remove grains, it must disappear.
According to classical logic, a property is either true or false. Similarly, in classical set theory, an object is either an element of a set or it is not. The real world, however, is much less black
and white, as evidenced by the Sorites paradox. One obvious way to try to solve the paradox is to define a lower limit for the number of sand grains in a heap. If the number of grains exceeds the
limit, the sand forms a heap; otherwise it doesn’t. But where is the limit between the two states? Can there be a strict limit? It makes more sense that there must be some smooth transition between
the two states, which can be measured using uncertainty modeling, fuzzy logic and fuzzy set theory.
Fuzzy models were developed to emulate human reasoning; they use so-called fuzzy sets to describe domains of values of certain variables. Linguistic terms can be used for this purpose, as they are in
human thinking. Consider adjectives like “small,” “soft” or “expensive.” “Small” has a very different meaning depending on whether we are talking about a cake, a house or a country. Moreover, we can
differentiate among the “shades” of “small,” as in the case of colors. For example, “extremely small,” “very small,” “quite small” and “a bit small” all have different meanings; by using them, we can
alter the meaning to achieve better modeling power. This property makes fuzzy models unique among modeling systems: By maintaining some intuitive conditions about the collection of fuzzy sets, they
can be easily interpreted and understood.
Fuzzy logic and fuzzy set theory try to capture the vagueness of the real world, where classes of objects have indefinite boundaries. Fuzzy theory is the tool in uncertainty modeling that provides
the best answer to many problems, including the Sorites paradox. Although Eubulides captured a theoretical problem, his paradox applies to many real-life challenges; by solving it, we can find
solutions to those difficulties, too. Fuzzy clustering of data, for example, can be applied to biology, medicine, psychology, economics and many other disciplines. In the field of bioinformatics,
researchers can use fuzzy-clustering pattern-recognition techniques to analyze gene sequences and the functionalities of genes within the sequences. This leads to a better understanding of the
operation of living organisms. Fuzzy clustering has been an important tool in image-processing, allowing doctors to detect ventricular blood pools in cardiac magnetic resonance (CMR) imaging, which
helps computer-aided medicine make more-accurate diagnoses. Fuzzy clustering also can be used to structure complex financial data.
Fuzzy theory, meanwhile, can be applied to uncertainty modeling. The fuzzy analogue of probability theory is called possibility theory and has applications in fields such as data analysis, database
querying and case-based reasoning. Possibility theory provides a nonprobabilistic way to look at uncertainty and has been used by several researchers to calculate options prices. Its relevance to
cognitive psychology is currently being studied. Experimental results suggest that there are situationspeople reason about uncertainty using the rules of possibility theory rather than probability
theory. In one study, university students were asked to solve logical puzzles; in another experiment, medical doctors had to assess whether diagnoses were correct or incorrect on a certainty scale.
In both cases, participants’ reasoning was closer to the rules of possibility theory.
Uncertainty modeling has contributed significantly to the acceleration of artificial intelligence (AI) in our lives today. Some of the most obvious examples are self-driving cars, speech recognition
by devices like Amazon’s Echo and Apple’s iPhone, and Sony’s Aibo robot dog. Actually, AI is not a single science; it covers multiple disciplines. Machine learning is a form of AI that strives to
make machines capable of learning. One approach to this goal is to artificially reconstruct the relevant anatomical properties of learning organisms — that is, reproduce nervous systems artificially.
This is what neural networks are attempting to do, with more and more success (for example, when supercomputers defeat chess champions). On the other hand, fuzzy systems mimic cognitive functions
because their main objective is to be close to human reasoning. Although this does not coincide exactly with the goal of machine learning, fuzzy architectures can be used as vehicles of machine
learning due to their modeling properties. For example, they are applied in facial pattern recognition, antiskid braking systems and automated control of subway trains.
The Fuzzy Concept
Lotfi Zadeh began his pioneering work on uncertainty modeling in the mid-1960s at the University of California, Berkeley, where he introduced fuzzy sets, the central concept of fuzzy theory. Zadeh
was an Azerbaijani mathematician and computer scientist who described himself in an interview as “an American, mathematically oriented, electrical engineer of Iranian descent, born in Russia.”
Zadeh’s idea was that elements or members can partially belong to sets, unlike in classical set theory, where an element is either completely contained by a set or not contained at all. In classical
set theory, a set can be defined by enumerating its elements; equivalently, each element of the universe of discourse can be recorded, whether it belongs to the set or not. In fuzzy sets, not only
the fact of belonging but the degree of belonging can be determined using a so-called membership function taking any real value between (and including) 0 and 1 for each element of the universe. This
means that in contrast to traditional sets, where an element is either in the set (corresponding to a value of 1) or not (value 0), each element in a fuzzy set can belong to the set to a certain
degree — for example, “completely” (1), “mostly” (0.75), “halfway” (0.50), “not really” (0.25) or “not at all” (0).
Take a look at the logical discrepancy of the Sorites paradox from another viewpoint. When you bought something for $1.95 and someone asked the price, did you always say it was $1.95? Perhaps you
said it was $2. You probably did both, because while there are critical cases when you need to be exact, there are many everyday situations when rounding, or more generally being inexact, is a better
choice. It can make communication faster, shorter and more efficient. Inexact terms can be flexible and adaptable.
This is the point where the fuzzy concept comes into play. Saying $2 instead of $1.95 covers the truth to a high degree, but not to the degree of 1 on the 0-to-1 scale.
Figures 1 and 2 show some example membership functions for fuzzy set “$2”. The horizontal axes show dollar amounts and the vertical axes show corresponding degrees of belonging to fuzzy set “$2.” You
can see that in three of four cases all amounts between $1 and $3 belong to “$2”: The further the value ranges from $2, the lower its degree of belonging. This can be interpreted that the further we
go from $2 on the number line, the less accurate, or less true, is the statement that the amount at hand is $2. This “truth” interpretation leads to fuzzy logic (in its narrow sense; fuzzy logic in
its broad sense comprises all the mathematical tools involving fuzzy mathematics). Thus, in fuzzy logic you can describe a formula that is “more or less true” or say an element has a property “only
to a certain degree,” because the notion of truth as well as the relation of having a property can be represented as set membership.
It is worth providing a few more details about the interpretation of the singleton fuzzy set in the right graph of Figure 2. The singleton is the strictest fuzzy set in the sense that it allows only
$2 to be in the set “$2” with a positive degree (exactly with the degree of 1), which means that it prohibits labeling any amount that’s not $2 with the term “$2.” In this way, we find ourselves back
to classical set theory and classical logic.
Now that we’re armed with the basic concept of fuzzy theory, let’s discuss some ways fuzzy logic can be applied as an aid in financial modeling.
Fuzzy Clustering for Structuring Financial Data
The basic idea of clustering is to arrange data (possibly high-dimensional data with many attributes) into groups so that data in the same clusters will be of the same type. This is based on the
spatial distance between data points, either in their “natural” multidimensional space or in a transformed space. Clustering can efficiently reduce dimensionality and the size of the value range
along different dimensions.
Classical clustering techniques assign data points to clusters so that the clusters form a partition of the space. This means that the combination of all the clusters covers the entire space, there
are no overlapping clusters, and every data point will belong to exactly one cluster and no more.
In the case of fuzzy clustering, however, the clusters have no exact borders. Every point can be an element of several (or all) clusters to a certain degree. This results in overlapping clusters and
a smooth transition between clusters (see Figure 3).
The line in the image on the left in Figure 3 represents the border of the two clusters. In classical clustering, the border is strict: On one side of the line every point will be categorized as
blue; on the other side every point will be red. (“Blue” and “red” can represent a lot of things depending on the problem at hand — good versus bad, satisfactory versus unsatisfactory, value stocks
versus growth stocks.) The point is that there’s no difference between two blue points, and there’s no difference between two red points. A point is either blue or red.
In the fuzzy case, however, a blue point can be bluer than another blue point (e.g., a stock shows stronger value characteristics) and a red point can be redder than another red point (e.g., a stock
shows stronger growth characteristics). Moreover, a point can be somewhat blue and somewhat red at the same time (in the stock example, it shows both value and growth characteristics).
Financial data can be very complex in terms of dimensionality and the variety of data values. It may be difficult both theoretically and computationally to deal with such data. Fuzzy clustering
offers a helping hand here because it simultaneously decreases dimensionality and the value range. After applying fuzzy clustering, we obtain a small number (equaling the number of clusters) of
low-dimensional typical data points. And all the data points can be labeled with their similarity to the typical data points — that is, their membership values.
For example, consider a financial model involving two concepts: profitability and growth potential. In this case, we can turn our attention to fundamental data, which contains a plethora of
fundamental metrics. Several of them, such as profit margin, gross margin and return on assets, are related to profitability; some, such as sales growth rate, retained earnings and capital
expenditure, are related to growth potential; and, of course, there are lots of metrics related to other financial aspects. It may be more convenient (and may result in a simpler and computationally
more feasible model) if a fuzzy clustering is performed using two clusters, “profitable” and “not profitable,” for profitability-related metrics, and similarly “growing” and “not growing” for
growth-related metrics. Using two simple concepts, we can obtain a characterization for each company and meet the needs of our financial model.
The Fuzzy Analogue of Black-Scholes
In the 1960s, to challenge the conventional role of probability in economics and to model possible rather than probable outcomes, British economist George L.S. Shackle introduced a full-fledged
approach to uncertainty and decision making. Shackle called the degree of potential surprise of an event its degree of impossibility — that is, the degrees of necessity of an opposite event
occurring. Zadeh reversed Shackle’s concept and suggested that the degree of possibility should be interpreted as the degree of ease. According to his famous example, “The possibility that Hans ate a
certain number of eggs may be interpreted as the degree of ease with which he can eat that many eggs.” Generally, the degree of ease of an event that consists of a set of elementary hypotheses is the
degree of ease of its easiest realization. In other words, when aggregating different events to determine an overall possibility, one must take the maximum of the possibilities of individual events
instead of the sum of them — in fuzzy logic, this means a so-called maxitivity property comes into play rather than conventional probability’s theoretical additivity. (For an example to the
additivity property: the probability of obtaining at least five when rolling a six-sided die is one third because the probability of five and six is each one sixth.)
This idea can be illustrated in a financial context. Let’s consider an accounting company facing different threats with different levels of severity. The firm has been sued by several customers,
drawing media attention that could harm its reputation. Hence, there’s a certain degree of possibility (say, 0.3 on a 0-to-1 scale) that the company will default, though this possibility is not very
high. However, the firm’s well-performing CEO is about to retire. A disastrous succession could trigger a default, albeit with an even smaller possibility (say, 0.1). Compared with the lawsuits,
succession does not really suggest much of a threat at all. Therefore, the possibility of default remains 0.3 and the consequences of the lawsuits remain the easiest realization of that default.
Then, suddenly, the authorities suspend the accounting license of the firm because of a criminal investigation. This is a huge problem, as the company cannot provide its services until it gets its
license back; this means default has a very high degree of possibility (say, 0.9). Because this leads to the easiest realization of default, the aggregated possibility will be 0.9.
In 1978, Zadeh proposed an interpretation of membership functions of fuzzy sets (the functions defining degrees of belonging for each element in the universe of discourse) as the possibility
distributions used by natural language statements.
Based on this possibility theory, the fuzzy analogues of random variables, probability distributions, expected value, variance and stochastic integrals, among others, can all be defined and similar
theorems can be proved, as in the case of probability theory, on the basis of analogously defined concepts. This foreshadows the likelihood that possibility theory might be able to provide useful
models for financial applications.
This proves to be true in the case of options pricing. Because the price of a derivative is based partly on the future price of the underlying asset, there is an element of uncertainty involved. For
this reason, pricing models typically include some amount of randomness. One of the most important financial models, the Black-Scholes options pricing model, is based on the assumption that stock
prices can be modeled using a stochastic process over a probability space. However, there are several other approaches to options modeling, including one proposed by Chinese mathematician Baoding
Liu. Instead of using traditional probability tools, Liu defined a possibility vehicle — a fuzzy process— as a model for stock price movements. By applying fuzzy processes, Liu established a
possibility theoretical model for options pricing — the fuzzy analogue of the Black-Scholes model.
William Ely, an Israel-based consultant, did an extensive comparison of the Black-Scholes and Liu models for his master’s thesis, performing simulations using different strike prices and different
time frames. Ely then compared the prices predicted by the models to the corresponding actual options prices. For this purpose, he used two error definitions: mean absolute error (MAE) and root mean
squared error (RMSE). The former measures the absolute difference between the predicted and the actual prices; the latter, by squaring the differences between the values, “tends to give more weight
to errors of high magnitude,” Ely explains.
The Liu model outperformed the Black-Scholes model over all strike prices and both error measures when the interval of the comparison was 120 days or less. Going out to 150 days, the Liu model also
performed better for the majority of the strike prices. When the simulation included 180 days, the Black-Scholes model started to outperform the fuzzy analogue; in the case of the RMSE measure,
Black-Scholes gave better predictions for all strike prices. This trend continued as the time frames got longer: For 210 days or more, the Black-Scholes model clearly outperformed the Liu model for
all strike prices and both error measures.
Perhaps the most striking result of Ely’s analysis is that the Liu model consistently outperforms the Black-Scholes model over short time periods. As was mentioned earlier, the shift in predictive
power started after 120 days, and the Black-Scholes model only really outperforms the fuzzy analogue at around 200 days. Although Black-Scholes remains a viable benchmark model, the fuzzy approach to
options pricing proves its worth, especially considering that options with closer expiration dates tend to be traded more heavily.
After acquiring some insight into fuzzy theory and its applications in financial modeling, we can return to the Sorites paradox and resolve it easily: Consider the set of sand heaps as a fuzzy set
where, depending on their size, individual heaps have different membership values. In this case, every time we reduce the number of sand grains, we produce a heap with a slightly lower membership
value — that is, we decrease the degree of it’s being a heap bit by bit. Tangibly, this means the heap shrinks. Following this model, it is undeniable that even a couple of sand grains can be
considered a heap, but with a negligibly low degree.
Krisztian Balazs is a Vice President, Research, at WorldQuant and has a PhD in computer science from Budapest University of Technology and Economics. | {"url":"https://www.worldquant.com/ideas/fuzzy-approaches-in-financial-modeling/","timestamp":"2024-11-11T15:16:14Z","content_type":"text/html","content_length":"66287","record_id":"<urn:uuid:232c10da-c543-4f20-aa7d-93660c6b9bb6>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00443.warc.gz"} |
Calculating IRRs
Mar 01, 2020 05:33 AM
Hi - I have a table with a list of cash flows, dates. These are linked to another table which contains individual deals. I want to calculate, for each deal, the IRR - in Excel this is using the IRR
function which takes a list of dates and a list of cash flows and returns a single IRR number.
Can this be done in Airtable? Rollups seem to only allow calculations involving one linked field only - whereas I need two: both date and cashflow. Also, there doesn’t seem to be an IRR function. The
calculation is relatively straight forward, can I code my own function to support this?
Secondary question - right now I have to manually for each date/cash flow record link it to a given deal. Is there a way to link all cash flows where dealID equals x?
Mar 01, 2020 11:11 AM
Mar 02, 2020 02:01 AM
Mar 05, 2020 12:03 PM
Mar 05, 2020 09:20 PM
Mar 07, 2020 09:32 PM
Jul 21, 2020 12:11 PM
Aug 08, 2020 05:34 AM
Aug 01, 2022 09:06 AM
Aug 01, 2022 09:09 AM | {"url":"https://community.airtable.com/t5/formulas/calculating-irrs/m-p/65587","timestamp":"2024-11-05T13:15:23Z","content_type":"text/html","content_length":"452711","record_id":"<urn:uuid:8d478957-bf9b-4c0e-a04c-2f32e94a9931>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00874.warc.gz"} |
Saponification Table and Characteristics of Oils in Soap
Saponification Table Plus The Characteristics of Oils in Soap
How much Lye should you use in order to saponify a specific fat or oil? Use this simple saponification table to find out!
You can click on each oil or fat within this chart to learn more about its benefits, detriments and how it is used in soap making.
┃oil or fat (acid) │SAP │Hard/Soft│cleansing│fluffy lather│stable lather│skin care┃
┃avocado oil │133.7│soft │fair │yes │no │amazing! ┃
┃coconut oil │191.1│hard │great │yes │no │fair ┃
┃castor oil │128.6│soft │fair │yes │yes │great ┃
┃olive oil │135.3│soft │good │no │no │great ┃
┃palm oil │142 │hard │great │no │yes │fair ┃
┃peanut oil │137 │soft │fair │no │yes │great ┃
┃soybean oil │135.9│soft │good │no │yes │fair ┃
┃sweet almond oil │137.3│soft │good │no │yes │amazing! ┃
┃jojoba oil │69.5 │soft │fair │no │yes │great ┃
┃kukui nut oil │135.5│soft │good │no │yes │great ┃
┃lard │138.7│hard │good │no │yes │fair ┃
┃tallow │140.5│hard │good │no │yes │fair ┃
soap making oils (along with many other ingredients) are now available to purchase online right here at
Click now to see the selection!
Before you start making conversions on your own, be sure to read (and re-read) through the entire explanation on this page of how to use this chart successfully. At first it will seem like a
complicated process, but with a little bit of practice and repetition, it will become an absolute cinch.
Oh, and by the way, if words like "saponify", "sodium hydroxide", "potassium hydroxide", "base" and "acid" are foreign to you, you should first read
this page about the saponification reaction
before you proceed.
On the chart above you'll notice 7 columns: "Oil or Fat", "SAP", "Hard/Soft", "Cleansing", "Fluffy Lather", "Stable Lather" and "Skin care". Except for "Oil or Fat" which merely tells you which
ingredient is being discussed and "SAP" which tells you the amount of sodium hydroxide (lye) needed in order for saponification to occur each of these sections is a characteristic of soap that could
be produced by a specific acid. Click here to learn more about fatty acids and soap making.
Keep in mind that most saponification tables merely reveal the Saponification value (more on this later) and not the characteristics of oils in soap; but for your convenience, I've added the 5 most
important attributes that are contributed to your finished product by using a specific fat or oil.
Free Soap Making e-Newsletter
Plus instantly receive one of my own personal soap recipe formulas using a combination of sweet almond oil, avocado oil, olive oil, coconut oil, palm oil, castor oil and shea butter scented with an
intoxicating essential oil blend. This recipe is explained step by step in full detail. you're going to love it!
• Instantly receive one of my own soap recipe formulas.
• $5 Coupon off your first purchase of $30 or more!
• Access to more exclusive subscriber coupons.
• Future Soap making recipe tutorial announcements. Sign Up Today!
• How to make lotions, bath bombs and more!
• Interviews with successful soap makers. *Your information is SAFE with us!
• Contests with awesome prizes.
• Revealing soap making poll results.
• New product announcements, demos and reviews.
• Unbeatable Soap Making Resource Sales!
Allow me to briefly explain each soap distinction and its importance:
1. Hard/Soft - This column will tell you if a specific acid will produce a hard or soft bar of soap. If a bar of soap is too soft it will dissolve prematurely and become a mushy mess; so make sure
that your soap has a certain level of hardness by combining hard oils with soft oils.
2. Cleansing - This column will tell you how well an acid cleans. Keep in mind that all soaps clean relatively well, but some oils produce a soap that is more harsh then others. For the best
results, try to combine oils that are mild when saponified with oils that are more harsh when saponified for a balance between a cleansing and conditioning bar.
3. Fluffy Lather - This column will tell you whether or not a specific acid will produce a fluffy lather. A fluffy lather is thick and bubbly but washes away easily.
4. Stable Lather - This column will tell you whether or not an acid will produce a stable lather. A stable lather has very little substance but is harder to wash away. In general, you want a
combination of ingredients that produce both fluffiness and stability to your soap's lather. Again, your goal here is balance.
5. Skin care - This column will tell you how beneficial a soap produced by a specific acid is to the skin. It depends mostly on the presence of nourishing vitamins, its mildness and moisturizing
Explaining how to use the saponification table will take more then just a few words. Understanding the mathematical equation required is a good idea for completeness sake but not essential to make
soap. At the end of this page, I reveal a neat little short cut that anyone can use so don't be scared off by the seemingly complicated process! Just read through it once and if you don't want to
learn it, use the simplified explanation at the end.
The SAP column (Saponification value) reveals simply how many milligrams of base is required to completely saponify 1 gram of an acid (oil or fat). This number usually tells you how much potassium
hydroxide (potash) is needed instead of how much sodium hydroxide (lye) is needed.
If you have read the section on this website about saponification, you should know that the only ion required for the soap making reaction to take place is the hydroxide ion. Both potassium hydroxide
and sodium hydroxide have the same number of hydroxide ions so the amount of base should be the same regardless of which one is used, right?
Wrong! The molecular weight of potassium hydroxide is less then the molecular weight of sodium hydroxide therefore less sodium hydroxide is required then potassium hydroxide to saponify the same
amount of fats or oils. Sodium hydroxide's molecular weight is only 40/56.1 of potassium hydroxide's weight.
Therefore, in order for the same amount of hydroxide ions to be incorporated into the soap making recipe we need to take every Saponification value that reflects potassium hydroxide as the base and
multiply it by 40/56.1 in order to get the sodium hydroxide Saponification value. This is truly an arduous process to say the least.
So what's the Bottom line? Since I only use sodium hydroxide for a base (and I suggest you do the same, unless you are making liquid soap) I have taken the liberty to convert the numbers for you so
that they apply to sodium hydroxide.
This means that the SAP value on my saponification table represents how many milligrams of lye (sodium hydroxide) are needed to saponify exactly 1 gram (1000 milligrams) of the fat or oil in
Let me give you a few examples to illustrate this fact:
Example 1: According to our saponification table, palm oil has an SAP value of 142. This means that it takes exactly 142 milligrams of lye in order to saponify 1000 milligrams of palm oil.
Example 2: coconut oil has an SAP value of 191.1. This means that it takes exactly 191.1 milligrams of lye in order to saponify 1000 milligrams of coconut oil
Example 3: Avocado oil has an SAP value of 133.7. This means that it takes exactly 133.7 milligrams of lye in order to saponify 1000 milligrams of avocado oil.
So how do we use these numbers to determine the amount of lye needed in a soap making recipe? Well, we have to convert the SAP values into a more usable form so that we can find out the weight of lye
needed to saponify the weight of an oil being used.
Remember basic algebra? One familiar rule is that whatever you do to one side of an equation, you must do to the other side. Let's use avocado oil as an example:
133.7 milligrams of lye is needed to saponify 1000 milligrams of avocado oil. At this point, we want to make the units of measurement the same. So if we take 133.7 milligrams/1000 and 1000 milligrams
/1000 we get values that are both in milligrams. You can now see that .1337 milligrams of lye is needed to saponify 1 milligram of avocado oil.
Since the lye and avocado oil is in the same unit of measurement we can take the new SAP value .1337 and multiply it by the weight of the oil being used. So let's say my recipe calls for 3 pounds of
avocado oil. To find out the amount of lye needed to fully saponify the 3 pounds multiply 3 times .1337. According to our calculations, exactly .4011 pounds of lye is needed to saponify 3 pounds of
avocado oil.
Simply put (this is the short cut I was talking about above): Take the SAP value in the saponification table, divide it by 1000 and multiply it by the weight of the oil being used.
Let me give you a few more examples just for clarification:
Example 1: Say your recipe calls for 2 pounds of coconut oil. Take 191.1 (the SAP value for coconut oil)/1000 = .1911 x 2 pounds of coconut oil = .3822 pounds of lye required to saponify 2 pounds of
coconut oil.
Example 2: Say your recipe calls for 9 pounds of jojoba oil. Take 69.5 (the SAP value for jojoba oil)/1000 = .0695 x 9 pounds of jojoba oil = .6255 pounds of lye required to saponify 9 pounds of
jojoba oil.
Example 3: Say your recipe calls for 12 pounds of olive oil. Take 135.3 (the SAP value for olive oil)/1000 = .1353 x 12 pounds of olive oil = 1.6236 pounds of lye required to saponify 12 pounds of
olive oil.
Now, if you've spent any amount of time on this website, you probably know by now that more then one acid is almost always used in any given soap recipe. So how do you calculate the amount of lye
needed for an entire recipe with multiple fats and oils? Simple... Just add the cumulative amounts of lye needed for each acid together to reach a sum total of lye needed.
Let's pretend that the 3 examples above are a single soap recipe. You want 2 pounds of coconut oil, 9 pounds of jojoba oil and 12 pounds of olive oil to make up your batch of soap.
All you need to do now is add together the total amounts of lye needed to saponify each oil separately in the recipe to realize how much lye is needed in total for the entire batch of soap: .3822
pounds + .6255 pounds + 1.6236 pounds = 2.6313 pounds of lye to completely saponify all the oils in the recipe.
Notice that throughout this tutorial, I always say "completely saponify"? Each SAP value on the saponification table tells you exactly how much lye is needed in order to turn 100% of the fats or oils
into soap. In reality, we don't want to do this. If all the ingredients were completely saponified your soap would be way too caustic and harsh.
This is where superfatting comes into play. Superfatting is where you allow a certain percentage of fats and oils within your recipe to remain unsaponified by discounting your lye by a certain
Unfortunately, superfatting is not a cut and dry process. I personally use a 5-8% discount for most oils to start which usually works out pretty well, but you may need to make some adjustments
depending on your preferences.
The key is that a balance needs to be met. If too much oil or fat is left unsaponified (too large of a discount), the soap will go rancid prematurely and be too soft whereas if too little oil or fat
is left unsaponified your finished product will be way too harsh.
The only exception to my 5-8% rule is for castor oil. I won't bore you with the scientific reasons as to why this is so, but you should only use a 5% maximum discount for this particular ingredient.
So your final step in the equation is to multiply the amount of lye needed to completely saponify your fats and oils by .92 - .95 (except for castor oil, which would be .95 maximum) depending on how
much you want to superfat to reach the discounted amount of lye.
Let me give you one more example just for good measure:
Example: Your recipe consists of 4 pounds of coconut oil, 1 pound of avocado oil and 3 pounds of castor oil. How much lye is needed for this recipe?
Take 191.1 (SAP value of coconut oil)/1000 = .1911 x 4 pounds = .7644 pounds of lye to completely saponify 4 pounds of coconut oil. Now multiply .7644 pounds x .92(assuming we are using an 8% lye
discount) to get .7032 pounds, which is the discounted amount of lye to use in order to saponify coconut oil in a recipe.
Now take 133.7 (SAP value of avocado oil)/1000 = .1337 x 1 = .1337 pounds of lye to completely saponify 1 pound of avocado oil. Now multiply .1337 pounds x .92 to get .1230 pounds, which is the
discounted amount of lye needed in order to saponify avocado oil in a soap recipe.
Now take 128.6 (SAP value of castor oil)/1000 = .1286 x 3 = .3858 pounds of lye to completely saponify 3 pounds of castor oil. Now multiply .3858 pounds x .95 (because you only discount castor oil by
5%) to get .3665 pounds of lye to saponify castor oil in a recipe.
Now add your totals together: .7032 pounds + .1230 pounds + .3665 pounds = 1.1927 pounds of lye. This is the amount of lye that you will use in this example soap making recipe to saponify your fats
and oils at a discount so that for skin care purposes some of your fats and oils remain unadulterated.
I hope that you enjoyed this tutorial and will continue to explore the many pages on soap-making-resource.com. If you have any questions about the chemistry of soap making please feel free to contact
Click here to learn how to make soap step by step.
Return from saponification table - characteristics of oils in soap to the soap making ingredients main page. Return to the soap making resource home page. | {"url":"http://www.soap-making-resource.com/saponification-table.html","timestamp":"2024-11-05T01:13:30Z","content_type":"text/html","content_length":"48687","record_id":"<urn:uuid:898f056e-9620-485a-a5c3-44e356055081>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00481.warc.gz"} |
Benchmark of optimal control problems - Comparison of coordinate and dynamic formulations
Published: 2 August 2019| Version 1 | DOI: 10.17632/cj6c5k2pj7.1
These folders contain the optimal control problems to predict the swing movement from an initial to a final state. They were solved using 8 models of different complexity (from 3 to 10 degrres of
freedom - DoF), with 4 types of coordinates (absolute, natural, relative from hip and relative from ankle), and with two dynamic formulations (implicit and explicit). Each folder contains the files
needed to run the optimal control problems for each DoF model. In each folder, one can find four files starting with "main_...", which contains the optimal control problems for each type of
coordinate. The user can run the optimal control problems running the files starting with "main_...". At the beginning of these files, the user can choose either solving the optimal control problems
using an implicit dynamic formulation (Options.Implicit=1) or using an explicit dynamic formulation (Options.Implicit=0). The files starting with "Equations_" contain the equations of motion and the
user does not need to modify them. The results of these optimal control problems show that the number of iterations needed to find an optimal solution is related to the conditioning of the Hessian of
the NLP problem and mass matrices simultaneously.
Steps to reproduce
Requirements: - MATLAB installed - CasADi installed. Casadi can be downloaded from https://web.casadi.org/ Instructions to use the code: - Choose the folder corresponding to the model you want to
use. Run one of the four MATLAB files starting with "main_" acoording to the type of coordinate you want to use. Modify the Options.Implicit depending on you want to use an implicit (Options.Implicit
=1) or an explicit (Options.Implicit=0) formulation. Run the code.
Multibody Simulation, Biomechanical Models, Optimal Control Theory, Dynamics of Multibody Systems | {"url":"https://data.mendeley.com/datasets/cj6c5k2pj7/1","timestamp":"2024-11-09T00:53:58Z","content_type":"text/html","content_length":"104947","record_id":"<urn:uuid:2170aa6f-d8e8-481b-8f55-c28ea3af9675>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00084.warc.gz"} |
One max rep - canadianpharmacyput.com
Why you need a workout diary
One of the basic rules for muscle growth is constantly increasing weight during exercise. In order to track the change in this working weight, the trainee will definitely need a training
diary . Moreover, the paper version is more convenient.
If you exercise without a training diary, and without fixing the working weights, and rely solely on your memory, then monitoring your progress is almost unrealistic, since you will not be able to
remember the weekly weights in the exercises performed.
How often do you need to increase weight?
If you are working to increase muscle mass, fixing the weekly working weight in basic exercises (squats, bench press, standing press, deadlifts and deadlifts) is rule number one, since the weight
must constantly increase.
Note: despite the fact that every week it is necessary to add at least 1-2 kg to the weight of the barbell, this does not mean that in a year you will increase the working weight by 100
kg. Obviously, you cannot constantly increase the weight, and weight cycles alternate with light workouts.
How do I keep track of the progress of my scale?
Often the question is how to compare the progress of the working weight, and how to determine which of the loads was more – 5 reps with a weight of 80 kg or 7 reps with a weight of 75 kg? Sometimes
it is recommended to multiply the weight, but this is not entirely correct.
For example, in our case, you will need to compare 5 * 80 = 400 kg and 7 * 75 = 585 kg – in the second case, the figure is almost 50% more, but this does not mean that you did the exercise 50%
better. For a correct comparison, the 1MP indicator is used.
One max rep
In theory, 1MP (one maximum repetition) is the weight with which you are technically able to perform the exercise correctly once. But it is obvious that in reality this is impossible, since you will
not be able to work with such a large weight.
1MP is a purely theoretical number calculated by the formula, and is used only to compare the working weight. Trying to do the exercise with only one repetition of maximum weight is strictly not
recommended, as it is extremely traumatic.
How is 1MP calculated?
Empirically, based on multiple measurements, the following formula was derived to calculate this indicator: 1MP = WEIGHT / (1.0278- (0.0278 * POVT)). For ease of use, the coefficients are given
below ^(1) :
• 3 reps – 1.059
• 4 reps – 1.091
• 5 reps – 1.125
• 6 reps – 1.161
• 7 reps – 1,200
• 8 reps – 1.242
• 9 reps – 1.286
• 10 reps – 1.330
How do I use the odds?
Above, we tried to compare 5 reps with a weight of 80 kg and 7 reps with a weight of 75 kg. To determine 1MP, you need to multiply the working weight by the coefficient of repetitions made with this
weight. In our example, these will be the following numbers: 80 * 1.125 and 75 * 1.2.
Both in the first and in the second case, the result is 90 kg. Conclusion: despite the fact that more repetitions were done, there was no real progress in the working weight, although multiplying and
calculating the total lifted weight gave a completely different result.
What is 1MP for?
In addition to the task of tracking progress in basic exercises, the 1MP indicator may be required to calculate the optimal working weight. In this case, 1MP is taken as 100%, for the maximum, and
decreasing coefficients are applied.
For example, for muscle growth in a beginner, it is not critical with what weight he works – with 60% of 1 MP, or with 90% of 1 MP, but it is obvious that in the case of performing an exercise with
60% of 1 MP, the technique will be better, and safety will be higher. ^(2) . More details in the next article. | {"url":"https://canadianpharmacyput.com/steroids/one-max-rep.html","timestamp":"2024-11-05T06:34:16Z","content_type":"text/html","content_length":"87918","record_id":"<urn:uuid:c3347022-c8fb-4f8c-8e38-313327469012>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00065.warc.gz"} |
Convert H/m to µH/m (Magnetic permeability)
H/m into µH/m
Direct link to this calculator:
Convert H/m to µH/m (Magnetic permeability)
1. Choose the right category from the selection list, in this case 'Magnetic permeability'.
2. Next enter the value you want to convert. The basic operations of arithmetic: addition (+), subtraction (-), multiplication (*, x), division (/, :, ÷), exponent (^), square root (√), brackets and
π (pi) are all permitted at this point.
3. From the selection list, choose the unit that corresponds to the value you want to convert, in this case 'Henry per Meter [H/m]'.
4. Finally choose the unit you want the value to be converted to, in this case 'Microhenry per Meter [µH/m]'.
5. Then, when the result appears, there is still the possibility of rounding it to a specific number of decimal places, whenever it makes sense to do so.
Utilize the full range of performance for this units calculator
With this calculator, it is possible to enter the value to be converted together with the original measurement unit; for example, '847 Henry per Meter'. In so doing, either the full name of the unit
or its abbreviation can be usedas an example, either 'Henry per Meter' or 'H/m'. Then, the calculator determines the category of the measurement unit of measure that is to be converted, in this case
'Magnetic permeability'. After that, it converts the entered value into all of the appropriate units known to it. In the resulting list, you will be sure also to find the conversion you originally
sought. Alternatively, the value to be converted can be entered as follows: '46 H/m to µH/m' or '10 H/m into µH/m' or '19 Henry per Meter -> Microhenry per Meter' or '91 H/m = µH/m' or '64 Henry per
Meter to µH/m' or '37 H/m to Microhenry per Meter' or '82 Henry per Meter into Microhenry per Meter'. For this alternative, the calculator also figures out immediately into which unit the original
value is specifically to be converted. Regardless which of these possibilities one uses, it saves one the cumbersome search for the appropriate listing in long selection lists with myriad categories
and countless supported units. All of that is taken over for us by the calculator and it gets the job done in a fraction of a second.
Furthermore, the calculator makes it possible to use mathematical expressions. As a result, not only can numbers be reckoned with one another, such as, for example, '(46 * 19) H/m'. But different
units of measurement can also be coupled with one another directly in the conversion. That could, for example, look like this: '1 Henry per Meter + 73 Microhenry per Meter' or '91mm x 64cm x 37dm = ?
cm^3'. The units of measure combined in this way naturally have to fit together and make sense in the combination in question.
The mathematical functions sin, cos, tan and sqrt can also be used. Example: sin(π/2), cos(pi/2), tan(90°), sin(90) or sqrt(4).
If a check mark has been placed next to 'Numbers in scientific notation', the answer will appear as an exponential. For example, 4.956 790 078 35×1019. For this form of presentation, the number will
be segmented into an exponent, here 19, and the actual number, here 4.956 790 078 35. For devices on which the possibilities for displaying numbers are limited, such as for example, pocket
calculators, one also finds the way of writing numbers as 4.956 790 078 35E+19. In particular, this makes very large and very small numbers easier to read. If a check mark has not been placed at this
spot, then the result is given in the customary way of writing numbers. For the above example, it would then look like this: 49 567 900 783 500 000 000. Independent of the presentation of the
results, the maximum precision of this calculator is 14 places. That should be precise enough for most applications. | {"url":"https://www.convert-measurement-units.com/convert+H+m+to+microH+m.php","timestamp":"2024-11-08T02:47:23Z","content_type":"text/html","content_length":"54754","record_id":"<urn:uuid:b1483e60-28cb-4de2-864c-7cfdcdf20efc>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00136.warc.gz"} |
Difference between Average Tax Rate and Marginal Tax Rate - Difference Betweenz
What is the difference between the average tax rate and the marginal tax rate? In essence, the average tax rate is how much of your income you paid in taxes over a period of time, while the marginal
tax rate is how much you will pay on your next dollar earned. Knowing this information is important for individuals looking to minimize their taxable income. Typically, people want to keep as much of
their income as possible within the lower tax brackets, in order to reduce their overall liability. Understanding these concepts can help taxpayers make more informed decisions when it comes to their
What is the Average Tax Rate?
The Average Tax Rate is the tax imposed on an average person. The Average Tax Rate is calculated by dividing the total amount of taxes collected by the total population. The Average Tax Rate is often
used to compare different tax systems. The Average Tax Rate is not the same as the Marginal Tax Rate, which is the tax rate imposed on an additional dollar of income. The Average Tax Rate can be
reduced by deductions, exemptions, and credits. The Average Tax Rate is usually highest for those with the highest incomes. Progressive tax systems have higher Average Tax Rates for higher income
levels, while regressive tax systems have lower Average Tax Rates for higher income levels. The Average Tax Rate can also be affected by changes in the tax code.
What is the Marginal Tax Rate?
The Marginal Tax Rate is the rate of tax paid on the next dollar of income. It is not a constant rate, but increases as income increases. The Federal government has a progressive tax system, which
means that the higher one’s income, the higher the marginal tax rate. For example, in 2020, the marginal tax rate for someone earning $50,000 is 15%. However, for someone earning $75,000, the
marginal tax rate is 20%. The marginal tax rate is important to understand because it affects how much tax is paid on additional income. For example, if someone earns an extra $1,000 and their
marginal tax rate is 15%, they would pay $150 in taxes on that additional income. However, if their marginal tax rate was 25%, they would pay $250 in taxes. As a result, it’s important to be aware of
your marginal tax rate when making decisions about how to earn and invest money. The Federal government has a progressive tax system, which means that the higher one’s income, the higher the marginal
tax rate.
Difference between Average Tax Rate and Marginal Tax Rate
The average Tax Rate is the percentage of your income that goes to taxes. the marginal tax rate is the rate you pay on each additional dollar of income. The average tax rate is always lower than the
marginal tax rate. The average tax rate can be calculated by dividing the total amount of taxes paid by the total taxable income. The marginal tax rate can be calculated by dividing the total amount
of taxes paid on the last dollar of income by that last dollar of income. The average tax rate is not a good measure of the progressivity of the tax system because it doesn’t account for how much tax
is paid on different levels of income. The marginal tax rate is a better measure because it shows how much tax is paid on each additional dollar of income.
The average tax rate is the total amount of taxes paid on all of a person’s income divided by their taxable income. The marginal tax rate, on the other hand, is the percentage of tax that is paid on
each additional dollar of income. In most cases, the marginal tax rate is higher than the average tax rate. Knowing this difference can help you make more informed financial decisions. | {"url":"https://differencebetweenz.com/difference-between-average-tax-rate-and-marginal-tax-rate/","timestamp":"2024-11-10T07:40:06Z","content_type":"text/html","content_length":"97965","record_id":"<urn:uuid:5f4adc72-bc35-419b-a5ce-e049060b0266>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00758.warc.gz"} |
Probability Matching Bias
In his book Thinking Fast and Slow, which summarizes his life and Tversky’s work, Kahneman introduces biases that stem from misalignment — the false belief that a combination of two events is more
likely than one event alone. Conjunction bias is a common error of reasoning in which we believe that two events occurring together are more likely than one of those events to occur on its own. While
representativeness bias occurs when we ignore low base rates, conjunction error occurs when we attribute a higher probability to an event with a higher specificity. [Sources: 9]
However, the coincidence of probabilities was not taken into account, and regarding the choice of the model compared to the mean, the author points out that it is difficult to determine exactly which
strategy was used by the participants. The observed probability matching behavior suggests that the nervous system samples the hypothesized distribution of the model at each trial. [Sources: 6]
These robust correspondences and discrepancies between human judgment and probability theory challenge non-sampling models of probabilistic bias; Costello and Watts (2014, 2016, 2018) have shown how
a sampling model captures both biases and patterns in human probabilistic judgments, demonstrating that these judgments, they say, are, after all, “remarkably rational” and that irrational judgments
are the result of noise [Sources: 2]
To illustrate, with one sample, each event will have a probability of 1 (i.e., 1 out of 1) or 0 (0 out of 1). If the brain can sample indefinitely, then under certain circumstances the sampling rates
will match the “true” probabilities with arbitrary precision. One of the biggest problems with observational studies is that the likelihood of being exposed or unirradiated to a group is not
accidental. [Sources: 2, 3]
The more correct covariates we use, the more accurate our prediction of the likelihood of exposure. We use covariates to predict the probability of exposure (PS). We want to match exposed and
unexposed subjects in terms of their likelihood of being exposed (their PS). Below 0.01, we can get a lot of variability within the estimate because it is difficult for us to find matches, and this
leads us to discard these items (incomplete match). [Sources: 3]
We would like to see a significant reduction in bias due to inconsistent and consistent analyzes. Ultimately, PSM scores are as good as the characteristics used for comparison. Since all
characteristics related to treatment participation and outcomes are observed in the dataset and are known to the investigator, propensity scores provide significant matches for assessing the impact
of the intervention. [Sources: 3, 8]
Specifically, PSM will calculate the probability of a unit participating in the program based on the observed characteristics. PSM then compares the processed units with the unprocessed units based
on the propensity score. This ensures that units with the same covariate value have a positive chance of healing, but will not be cured. [Sources: 8]
Evaluate the impact of the intervention on the fit sample and calculate the standard errors. Using these coincidences, the researcher can assess the impact of the intervention. The obtained match
pairs can also be analyzed using standard statistical methods, for example, Thus, if positive examples are observed in 60% of cases in the training sample, and negative examples are observed in 40%
of cases, then the observer using the probability matching strategy predicts (for unlabeled examples) class label “positive.” in 60% of cases and class label “negative” in 40% of cases. [Sources: 3,
5, 8]
But the combination, that is, heads in two-thirds of the cases and tails in one-third of the cases, will be corrected with a probability of (2/3 x 2/3) + (1/3 x 1/3) = 5/9. … While probabilistic
matching was a modal response strategy found in the current study, we are not suggesting that probabilistic matching is used in all perception problems, or even in all spatial problems. [Sources: 2,
Recent research has shown that observer behavior is consistent with the expected loss function in a visual discrimination problem [40], but the results are ambiguous with respect to the specific
decision-making strategy (mean, choice, and comparison of probabilities) they might make. similar predictions. Moreover, the effects of bias can be explained in terms of setting the response
criteria, rather than the goodness-of-fit criteria, as is the case with the Ratcliffe approximations. Different probability distributions, rewards, or changes in context did not affect the results.
It is more “rational” because deviations from probability theory arise from its use before improving probability estimates based on a small number of samples. [Sources: 1, 2, 6, 7]
It is also known that Thompson sampling is equivalent to probability matching, a heuristic often called suboptimal, but in fact it can work quite well under the assumption of a non-stationary
environment. It also ties in with the converging evidence that participants can use a combination of direct and random exploration in multi-armed bandits, and I’m not sure how this can be accounted
for in DBM. I believe that the direct inclusion of information acquisition in the model is analogous to the direct exploration strategy, while the softmax parameter can track random (value-based)
exploration. [Sources: 10]
It is clear that the maximization strategy outweighs the coincidence strategy. However, the maximization strategy is rarely found in the biological world. From bees to birds to humans, most animals
correspond to probabilities (Erev & Barron, 2005; CR Probabilistic Matching (PM) is a widely observed phenomenon in which subjects compare the likelihood of choice with the likelihood of reward in a
stochastic context. [Sources: 1]
Matchmaking strategy is to pick A 70% of the time and B 30% of the time. Combination with substitution reduces distortion by better matching between objects. This effect is especially evident in the
no-tip and no-feedback group, where the first ten-reel game was dominated by pairs (bottom right panel). [Sources: 1, 3, 4]
Of those participants who were asked to rate which strategy gives the higher expected return before forecasting, 74% correctly identified the maximization strategy as the best. When comparing these
three strategies, the behavior of the overwhelming majority of observers in performing this perceptual task was more consistent with the comparison of probabilities. The third strategy is to select a
causal structure in proportion to its likelihood, thus trying to match the likelihood of a putative causal structure. Choosing a matching strategy, subjects violate the axioms of decision theory, and
therefore their behavior cannot be rationalized. [Sources: 1, 4, 6]
Based on the theory of optimal foraging (Stephens & Krebs, 1986), IFD predicts that the distribution of individuals between food stalls will match the distribution of resources, a pattern often
observed in animals and humans (Grand, 1997; Harper, 1982; Lamb & Ollason, 1993 ; Madden et al., 2002; Sokolowski et al., 1999). There are discrepancies between the model and observed behavior, but
foraging groups tend to approach IFD. [Sources: 1]
Indeed, this is the conditional expansion probability given the set of covariates Pr (E + | covariates). Therefore, the likelihood of being exposed is the same as the likelihood of not being exposed.
[Sources: 3]
— Slimane Zouggari
##### Sources #####
[0]: https://stats.stackexchange.com/questions/392493/propensity-score-matching-bias-adjustment
[1]: http://naturalrationality.blogspot.com/2007/11/probability-matching-brief-intro.html
[2]: https://journals.sagepub.com/doi/full/10.1177/0963721420954801
[3]: https://www.publichealth.columbia.edu/research/population-health-methods/propensity-score-analysis
[4]: https://link.springer.com/article/10.3758/s13421-012-0268-3
[5]: https://en.wikipedia.org/wiki/Probability_matching
[6]: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000871
[7]: https://pubmed.ncbi.nlm.nih.gov/2913571/
[8]: https://dimewiki.worldbank.org/Propensity_Score_Matching
[9]: https://fs.blog/bias-conjunction-fallacy/
[10]: https://proceedings.neurips.cc/paper/2018/file/f55cadb97eaff2ba1980e001b0bd9842-Reviews.html | {"url":"https://flowless.eu/probability-matching-bias/","timestamp":"2024-11-10T14:03:27Z","content_type":"text/html","content_length":"41698","record_id":"<urn:uuid:19546870-bbc8-46ca-b367-23e7177df038>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00590.warc.gz"} |
Identification of Some Additional Loss Components in High-Power Low-Voltage Permanent Magnet Generators
Hämäläinen, Henry (2013-08-13)
Lappeenranta University of Technology
Acta Universitatis Lappeenrantaensis
Julkaisun pysyvä osoite on https://urn.fi/URN:ISBN:978-952-265-429-8
Permanent magnet generators (PMG) represent the cutting edge technology in modern wind
mills. The efficiency remains high (over 90%) at partial loads. To improve the machine
efficiency even further, every aspect of machine losses has to be analyzed. Additional losses
are often given as a certain percentage without providing any detailed information about the
actual calculation process; meanwhile, there are many design-dependent losses that have an
effect on the total amount of additional losses and that have to be taken into consideration.
Additional losses are most often eddy current losses in different parts of the machine. These
losses are usually difficult to calculate in the design process. In this doctoral thesis, some
additional losses are identified and modeled. Further, suggestions on how to minimize the
losses are given.
Iron losses can differ significantly between the measured no-load values and the loss values
under load. In addition, with embedded magnet rotors, the quadrature-axis armature reaction
adds losses to the stator iron by manipulating the harmonic content of the flux. It was,
therefore, re-evaluated that in salient pole machines, to minimize the losses and the loss
difference between the no-load and load operation, the flux density has to be kept below 1.5
T in the stator yoke, which is the traditional guideline for machine designers.
Eddy current losses may occur in the end-winding area and in the support structure of the
machine, that is, in the finger plate and the clamping ring. With construction steel, these
losses account for 0.08% of the input power of the machine. These losses can be reduced
almost to zero by using nonmagnetic stainless steel. In addition, the machine housing may be
subjected to eddy current losses if the flux density exceeds 1.5 T in the stator yoke.
Winding losses can rise rapidly when high frequencies and 10–15 mm high conductors are
used. In general, minimizing the winding losses is simple. For example, it can be done by dividing the conductor into transposed subconductors. However, this comes with the expense
of an increase in the DC resistance. In the doctoral thesis, a new method is presented to
minimize the winding losses by applying a litz wire with noninsulated strands. The
construction is the same as in a normal litz wire but the insulation between the subconductors
has been left out. The idea is that the connection is kept weak to prevent harmful eddy
currents from flowing. Moreover, the analytical solution for calculating the AC resistance
factor of the litz-wire is supplemented by including an end-winding resistance in the
analytical solution. A simple measurement device is developed to measure the AC resistance
in the windings. In the case of a litz-wire with originally noninsulated strands, vacuum
pressure impregnation (VPI) is used to insulate the subconductors. In one of the two cases
studied, the VPI affected the AC resistance factor, but in the other case, it did not have any
effect. However, more research is needed to determine the effect of the VPI on litz-wire with
noninsulated strands.
An empirical model is developed to calculate the AC resistance factor of a single-layer formwound
winding. The model includes the end-winding length and the number of strands and
turns. The end winding includes the circulating current (eddy currents that are traveling
through the whole winding between parallel strands) and the main current. The end-winding
length also affects the total AC resistance factor. | {"url":"https://lutpub.lut.fi/handle/10024/91691","timestamp":"2024-11-07T05:56:58Z","content_type":"text/html","content_length":"28594","record_id":"<urn:uuid:9ed66d1d-a71b-41c3-8b61-e8878f16f7d4>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00048.warc.gz"} |
Machine learning for networking
01DSMUV, 01DSMBG, 01DSMUW
A.A. 2024/25
Course Language
Degree programme(s)
Master of science-level of the Bologna process in Cybersecurity - Torino
Master of science-level of the Bologna process in Communications Engineering - Torino
Master of science-level of the Bologna process in Cybersecurity - Torino
Course structure
Teaching Hours
Lezioni 40
Esercitazioni in laboratorio 40
Teacher Status SSD h.Les h.Ex h.Lab h.Tut Years teaching
Vassio Luca Ricercatore a tempo det. L.240/10 art.24-B IINF-05/A 35 0 15 0 2
SSD CFU Activities Area context
ING-INF/05 8 D - A scelta dello studente A scelta dello studente
The course aims at providing a solid introduction to machine learning, a branch of artificial intelligence that deals with the development of algorithms able to extract knowledge from data, with a
focus on pattern recognition and classification problems. The course will cover the basic concepts of statistical machine learning, both from the frequentist and the Bayesian perspectives, and will
be focused on the broad class of generative linear Gaussian models and discriminative classifiers based on logistic regression and support vector machines. The objective of the course is to provide
the students with solid theoretical bases that will allow them to select, apply and evaluate different classification methods on real tasks. The students will also acquire the required competencies
to devise novel approaches based on the frameworks that will be presented during the classes. The course will include laboratory activities that will allow the students to practice the theoretical
notions on real data using modern programming frameworks that are widely employed both by research communities and companies.
This course explores how Machine Learning can help engineers solve problems in the world of networking. The course introduces the data science process and then provides theoretical and practical
knowledge about the machine learning approach and algorithms commonly used to analyze large and heterogeneous data. The students will also acquire Python programming competencies and learn how to use
its main libraries related to ML. Many practical examples will be focused on how to address inference problems in the field of communication networks and cybersecurity. A significant part of the
courses will be devoted to laboratory activities allowing the students to practice the theoretical notions on real problems, from traffic classification to anomaly detection. Many laboratory
sessions, based on a learning-by-doing approach, allow experimental activities on all the phases of a machine learning pipeline (e.g., data preparation and cleaning, data visualization and
characterization, ML algorithm selection, tuning, and result evaluation).
At the end of the course the students will - know and understand the basic principles of statistical machine learning applied to pattern recognition and classification; - know the principal
techniques for classification, including generative linear Gaussian models and discriminative approaches based on logistic regression and support vector machines, among others; - understand the
theoretical motivations behind different classification approaches, their main properties and domain of application, and their limitations; - be able to implement the different algorithms using
wide-spread programming frameworks (Python) - be able to apply different methods to real tasks, to critically evaluate their effectiveness and to analyze which strategies are better suited to
different applications; - be able to transfer the acquired knowledge and capabilities to solve novel classification problems, developing novel methods based on the frameworks that will be discussed
during classes
Knowledge and abilities: • Knowledge of Python programming language and the main Python libraries for machine learning; • Knowledge of the main phases characterizing a data science and ML process; •
Knowledge of the different data exploration, visualization and pre-processing techniques; • Knowledge of the basic theoretical principles of machine learning; • Knowledge of the principal models for
supervised and unsupervised learning; • Knowledge of the main theoretical properties, domains of application, and limitations of different machine learning approaches; • Knowledge of networking
problems that can be approached with ML; • Ability to design, implement and evaluate analytics scripts in the Python language. • Ability to manage large datasets of data, from pre-processing to
visualization. • Ability to employ the Python machine learning libraries to devise complete solutions for inference problems; • Ability to design, implement and evaluate a machine learning pipeline;
• Ability to apply different methods to real (networking and cybersecurity) tasks, to critically evaluate their effectiveness and to analyze which strategies are better suited to different
The students should have basic knowledge of probability and statistics, linear algebra and calculus.
The students should have basic knowledge of: • Programming skills (whatever the language) • Communication networks • Probability theory and statistics • Linear algebra • Calculus • Operational
Machine learning and pattern recognition - Introduction and definitions Probability theory concepts - Random Variables - Estimators - The Bayesian framework Introduction to Python - The language -
Main numerical libraries Decision Theory - Inference, expected loss - Model taxonomy: generative and discriminative approaches - Model optimization, hyperparameter selection, cross-validation Model
evaluation - Classification scores and log-likelihood ratios - Detection Cost Functions and optimal Bayes decisions Dimensionality reduction - Principal Component Analysis (PCA) - Linear Discriminant
Analysis (LDA) Generative Gaussian models - Generative Gaussian classifiers: univariate Gaussian, Naive Bayes, multivariate Gaussian (MVG) - Tied covariance MVG and LDA Logistic Regression (LR) -
From Tied MVG to LR - LR as ML solution for class labels - Binary and multiclass cross-entropy - From MVG to Quadratic LR - LR as empirical risk minimization - Overfitting and regularization Support
Vector Machines (SVM) - Optimal classification hyperplane: the maximum margin definition - Margin maximization and L2 regularization - SVM as minimization of classification errors - Primal and dual
SVM formulation - Non linear extension: brief introduction to kernels Density estimation and latent variable models - Gaussian mixture models (GMM) - The Expectation Maximization algorithm Continuous
latent variable models: Linear-Gaussian Models - Linear regression - Linear regression and Tied MVG - MVG with unknown class means: Probabilistic LDA (PLDA) - Bayesian MVG - Factor Analysis: PLDA,
Probabilistic PCA Approximated inference basics - Variational Bayes
Introduction to Machine Learning and its application to Networking (0.5 CFU) • Definitions of pipeline and taxonomy of Machine Learning tasks • Problems in networking: from traffic classification to
anomaly detection Python usage and libraries (2.0 CFU) • The Python language • Numerical libraries: Numpy, Pandas and Matplotlib • ML libraries (Scikit-learn, PyTorch) Data exploration and
preprocessing (1.5 CFU) • Data visualization • Data transformation and feature extraction • Dimensionality reduction techniques Basics of ML (1 CFU) • Empirical risk minimization • Loss functions and
performance metrics • Gradient-based learning • Model selection and validation Supervised and unsupervised ML (3 CFU) • Classification • Regression • Clustering • Algorithms: from linear models to
deep neural networks • Regularization
The course will include 3 hours of lectures and 1,5 hours of laboratory per week. The lectures will focus both on theoretical and practical aspects, and will include open discussions aimed at
developping suitable solutions for different problems. The laboratories will allow the students to implement most of the techniques that will be presented during the lectures, and to apply the
learned methods to real data.
The course will include 40 hours of lectures and 40 hours of laboratory activities. The lectures will focus both on theoretical and practical aspects of the course topics and will include open
discussions aimed at developing suitable solutions for different problems. Some simple practical exercises will be solved in the classroom. The course includes laboratory sessions on data science
processes and machine learning algorithms for engineering applications. The laboratories will allow the students to apply the methods presented during lectures to real data and tasks, with a
particular focus on networking and cybersecurity applications. Students will prepare a written report on a group project assigned during the course.
[1] Christopher M. Bishop. 2006. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag, Berlin, Heidelberg. [2] Kevin P. Murphy. 2012. Machine Learning: A
Probabilistic Perspective. The MIT Press. Additional material, including slides and code fragments, will be made available on the course website.
Copies of the slides used during the lectures, exercises, and manuals for the activities in the laboratory will be made available. All teaching material is downloadable from the course website or the
teaching Portal. Suggested books: [1] A. Jung, Machine Learning: The Basics, Springer, 2022 [2] Jake VanderPlas, Python Data Science Handbook: Essential Tools for Working with Data, O’Reilly, 2016
[3] Christopher M. Bishop, Pattern Recognition and Machine Learning. Springer, 2006 [4] Kent D. Lee, Python Programming Fundamentals, Springer, 2015
Slides; Esercitazioni di laboratorio; Video lezioni dell’anno corrente;
Lecture slides; Lab exercises; Video lectures (current year);
Modalità di esame: Prova scritta (in aula); Elaborato progettuale individuale; Elaborato progettuale in gruppo;
Exam: Written test; Individual project; Group project;
... The exam will assess the knowledge of the course topics, and the ability of the candidate to apply such knowledge and the developed skills to solve specific problems. The exam will consist in two
parts: - A project to be developed during the course. The students will be able to choose individual or (small) group projects among a set of possible choices (max. 18 points). - A written
examination (max. 12 points). The final mark will be the sum of the report and written exam marks. To pass the exam, the report mark must be at least 9/18, the written exam mark must be at least 6/
12, and the final mark must be at least 18/30. The projects will address machine learning tasks. For each project, a dataset will be provided, and the students will have to develop suitable models
for the specific task based on the topics and tools presented during lectures and laboratories. Each candidate will have to provide a technical report detailing the employed methodology and a
critical analysis of the obtained results. The report will assess: - The degree of understanding of the theoretical principles of different machine learning approaches - The ability of the student to
analyze a specific problem, assessing which approaches, among those that have been presented, are more suited to solve the task - The ability of the student to apply the studied methods to devise
suitable solutions for the specific case study - The ability of the student to critically evaluate the effectiveness of the proposed approaches. The written examination will consists of open
questions covering the topics presented during the lectures. The written examination will assess: - The theoretical understanding of the basic principles of the presented machine learning approaches
- The knowledge and understanding of the different approaches that have been presented during the lectures - The ability of the student to critically analyze and evaluate the different approaches.
Gli studenti e le studentesse con disabilità o con Disturbi Specifici di Apprendimento (DSA), oltre alla segnalazione tramite procedura informatizzata, sono invitati a comunicare anche direttamente
al/la docente titolare dell'insegnamento, con un preavviso non inferiore ad una settimana dall'avvio della sessione d'esame, gli strumenti compensativi concordati con l'Unità Special Needs, al fine
di permettere al/la docente la declinazione più idonea in riferimento alla specifica tipologia di esame.
Exam: Written test; Individual project; Group project;
The exam includes two mandatory parts. The two mandatory parts are (i) a written exam and (ii) the evaluation of a group project. The final score is defined by considering both the evaluation of the
group project and the written part. The teacher may request an integrative oral test to confirm the evaluations that were obtained. The written examination lasts 90 minutes and will consist of open
and closed questions and exercises covering the topics presented during the lectures. A single-sided page of notes is allowed. Textbooks and electronic devices of any kind are not allowed. The
written examination will assess the following: • The theoretical understanding of the basic principles of the presented machine learning approaches • The knowledge and understanding of the different
approaches that have been presented during the lectures • The ability of the students to apply ML techniques to a simple numerical case study. • The ability of the students to design, implement and
evaluate code in the Python language and its ML libraries The projects will address machine learning tasks. For each project, a dataset will be provided, and the students will have to develop a
pipeline using suitable models for the specific tasks based on the topics and tools presented during lectures and laboratories. Each group will have to provide a technical report detailing the
methodology employed and critically analyzing the results. The report will assess: • The degree of understanding of the theoretical principles of different machine-learning approaches • The ability
of the students to analyze a specific problem, assessing which approaches, among those that have been presented, are more suited to solve the task • The working knowledge of the Python language and
the major data mining and machine learning libraries • The ability of the students to apply the studied methods to devise suitable solutions for the specific case study • The ability of the students
to critically evaluate the effectiveness of the proposed approaches After submitting their report, the students will have the possibility to peer-review other reports from the other group to obtain
bonus points. The final grade will be given by the weighted average of the written exam (40%) and the project report grade (60%). Each part will have a grade between 0 and 30 cum laude. Both parts
must be sufficient to pass the exam. • Individual written exam (40%, at least 18/30) • Project (60%, at least 18/30)
In addition to the message sent by the online system, students with disabilities or Specific Learning Disorders (SLD) are invited to directly inform the professor in charge of the course about the
special arrangements for the exam that have been agreed with the Special Needs Unit. The professor has to be informed at least one week before the beginning of the examination session in order to
provide students with the most suitable arrangements for each specific type of exam.
Esporta Word | {"url":"https://didattica.polito.it/pls/portal30/gap.pkg_guide.viewGap?p_cod_ins=01DSMUV&p_a_acc=2025&p_header=S&p_lang=IT&multi=N","timestamp":"2024-11-11T00:02:18Z","content_type":"text/html","content_length":"62164","record_id":"<urn:uuid:4a0695f4-48d4-4bf2-ba06-a813cecf5f6b>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00403.warc.gz"} |
estimation theory exam questions
C. Within a range from … The measure of location which is the most likely to be influenced by extreme values in … This PMP Question Bank includes + 1000 questions and responses, which will assist you
in preparing for the PMP certification exam. Earn Transferable Credit & Get your Degree. What is the short way to write the greatest common factor? Using this information, choose the exact answer
from the possibilities given. Some of the questions in this study note are taken from past examinations. Biological and Biomedical You want to purchase an item for $1.29. PD5 Exam Exemplar Questions
Mar2013 Page 3 of 10 other interested stakeholders. Stating the criteria for a number to be included in the set. What is the shorthand way of writing the least common multiple? May 2015 changes
Question 32 was modified (and re-modified in June) back When you have completed the practice exam, a green submit button will Sciences, Culinary Arts and Personal Study more effectively: skip
concepts you already know and focus on what you still need to learn. ... Patricia bought a dress for $23.99 and a coat for $47.50. 6 notes Maximum Likelihood Estimation, Ch.7 notes Least squares
estimation, Ch.8 notes Bayesian Estimation, select Ch.10-12 notes Kalman filtering, select Ch.12-13 notes It is not proper set notation, so nothing is included in the set. - Predict cost of a project
for a defined scope, to be completed at defined location and point if time in future. Choose your answers to the questions and click 'Next' to see the next set of questions. Practice Final Exam
Questions (2) -- Answers Part A. This means that the revision process can start earlier, leaving you better prepared to tackle whole exam papers closer to the exam. For each multiple choice question
circle the letter of the correct answer on the exam (a,b,c,d,e,f,g, or h). It helps by giving you a faster answer so you don't have to do the accurate work. ESTIMATION Materials required for
examination Items included with question papers Ruler graduated in centimetres and Nil millimetres, protractor, compasses, pen, HB pencil, eraser. This Excel Test is designed to help you assess your
knowledge of basic Excel functions and formulas. appear. At the completion stage of the project life cycle, the software can be used to produce the completion report, since all information on costs
and Which of the following is a composite number? Using the course textbook and slides are permitted during the exam. For which of the number pairs below could you NOT use the shortcut of simply
multiplying the two numbers together to find the LCM? A 10. We prepared these 100% FREE â 20 PMI ACP Sample Exam Questions and Answers for you. I usually print these questions as an A5 booklet and
issue them in class or give them out as a homework. (a) Every project is unique. The MA Theory exam covers the material from MA581 (Probability), MA582 (Mathematical Statistics), and MA583
(Introduction to Stochastic Processes). Estimation Practice Questions Click here for Questions . Which problem would you use estimation as a means to check for reasonableness? You can practise for
the test with over 900 theory test questions and answers, and 26 Odd numbers end in all of the following numbers except which one? estimating, approximations, approximating. B. Multiple Choice Circle
either A, B, C, or D to complete each question. Questions 207-237 were added April 2015. These 50 questions are essential for your multiple-choice theory test exam. It means you have to keep the two
sides separate. Take CFI’s Excel Test. Contact us by phone at (877) 266-4919, or by mail at 100 View Street #202, Mountain View, CA 94041. Even numbers end in all of the following numbers except
which one? ; Paper 1 will be of 200 marks and paper 2 will be of 300 marks. This is very useful for the following examinations. Multiple Choice Questions. Multiple-choice questions You can take free
mock tests for: cars motorcycles lorries buses and coaches The practice questions â ¦ What does reasonableness mean in math terms? Mathematics Practice Test Page 3 Question 7 The perimeter of the
shape is A: 47cm B: 72cm C: 69cm D: 94cm E: Not enough information to find perimeter Question 8 If the length of the shorter arc AB is 22cm and C is the centre of the circle then the circumference of
the circle is: When you have completed the practice exam, a green submit button will Textbook: S.M. Which set notation means 'Any number that is greater than 2'? January 2015 changes Questions 35-46
are new. ; As you practice more SSC JE previous year question papers, you can find a successful strategy for you. to them later with the "Go To First Skipped Question" button. Practice Questions;
Post navigation. It is not possible to choose the correct accurate answer based on an estimation. A definitive estimate is: A. Top-down estimating. x CCNA Practice Questions (Exam 640â 802) The book
has been organized to help direct your study to specific objectives. We'll review your answers and create a Test Prep Plan for you based Part I-(39 points)--13 3 point questions--Answer each multiple
choice and short-answer question. Statistics Solutions can assist with estimation and sample size calculation, click here for a free consultation. Topics: Estimation Theory: General Minimum Variance
Unbiased Estimation, Ch.2+Ch.3, and Chapter 5 notes Cramer-Rao Lower Bound, Ch.3 Linear Models+Unbiased Estimators, Ch.4 and Ch. Question 31 is the former Question 58 from the interest theory
question set. What is the shorthand way of writing the least common multiple? Sciences, Culinary Arts and Personal All rights reserved. © copyright 2003-2020 Study.com. For a short definition,
Hematology or Haematology is a branch of medicines which is concerned with the study, treatment and prevention of the diseases related to blood. For scrum master prep you must go through real exam.
Make sure you explain your answers in a way that illustrates your understanding of the problem. Questions 154-155 were added in October 2014. 8006 Exam Questions is written by lots of past materials'
rigorous analyses. Think you are ready to pass the real estate exam?Test Your Knowledge with our FREE Real Estate Exam! Question 14 of 15 1.0 Points This mock DVLA test includes another 50
multiple-choice questions that are similar to the actual DVSA theory test mock exam. A factor is defined as the different numbers that can be multiplied together to arrive at the original number. [â
¦] In this blog post Iâ m going to provide you with 100 free PMP exam sample questions. To accurately perform these tasks, you need econometric model-building skills, quality data, and appropriate
estimation strategies. Take this practice test to check your existing knowledge of the course material. No login or registration required. Please note that relevant diagrams are included with the
practice question wherever possible. The questions are comparatively easier than the real PMP exam questions. on your results. At the completion stage of the project life cycle, the software can be
used to produce the completion report, since all information on costs and time will have been captured during the life of the project. to them later with the "Go To First Skipped Question" button. By
Vinai Prakash The Monte Carlo Simulation technique traditionally appeard as option choices in PMP exam. Estimation Theory Minimum variance unbiased estimation, best linear unbiased estimation
Cramer-Rao lower bound (CRLB) Maximum Likelihood estimation (MLE): exact and â ¦ D 5. Which of the following is NOT a prime number? What is a concise phrase that sums up the definition of estimate?
If you are studying only for the ICND1 exam (640-822), you only need to review Chapters 1â 6. Good luck! appear. What: A few short questions about the course progress. A gathering of specific items
defined by a criteria for inclusion. Part of the estimating process on the PMP Certification Exam is looking at alternative project approaches. Premium members get access to this practice exam along
with our entire library of lessons taught by subject matter experts. on your results. X cannot be 0, but anything else is okay. These will give you a good idea of how the questions are worded and
structured. 6 Estimation & hypothesis testing. You can skip questions if you would like and come If you are Connections with other material. You must show your work or reason if â ¦ 8006 Which of the
following is an even number? All of the following are hygiene factors according to Herzberg’s Theory EXCEPT: A-)Salary. Questions 32-34 are new. When you have completed the practice exam, a green
submit button will Based on your results, we'll create a customized Test Prep Plan just for you! Created during initiation. I usually print these questions as an A5 booklet and issue them in class or
give them out as a homework. Part I: Estimation Theory Part II: Detection Theory Surveys: When: Occasionally at the end of class. It is what happens when concrete gets hard. TEXTBOOK: Steven M. Kay,
Fundamentals of Statistical Signal Processing, Vol.I Estimation Theory.Upper Saddle River, NJ: Prentice-Hall, Inc., 1993. 50 Questions â ¦ There are 50 driving theory questions in the test, which are
drawn from over 1,000. EXAMS QUESTIONS SOLUTIONS 1 Exam 1 Practice Questions I (PDF) Solutions to Exam 1 Practice Questions I (PDF) Exam 1 Practice Questions II (PDF) Solutions to Exam 1 Practice
Questions II (PDF) Exam 1 Answer each short-answer question in the space provided. A factor is defined as the different numbers that can be multiplied together to arrive at the original number. We
recommend you to enroll in our Fill in the boxes at the top of this page with your name, centre number and candidate number Most of these questions are definition based, well suited for you to try
during your studies to check your progress. Click it to see your results. -- If you like this resource, then please rate it and/or leave a comment. 1. B 7. Correct answers are presented at the end.
ECONOMETRICS BRUCE E. HANSEN ©2000, 20201 University of Wisconsin Department of Economics This Revision: November 30, 2020 Comments Welcome 1This manuscript may be printed and reproduced for
individual or instructional use, but may not be printed for commercial purposes. Remember to read the questions for key words such as these which will alert you to the relevant theory. New Portfolio
theory and asset pricing models multiple choice questions and answers PDF solve MCQ quiz answers on topics: Efficient portfolios, choosing optimal portfolio, assumptions of capital asset pricing
model, arbitrage pricing Test day will be here before you know it and can be intimidating for those who aren’t prepared. appear. Which of the following is NOT a way to begin the factor tree for 1230?
Round $1.29 to the nearest dollar, recalling that you need to have enough money to cover the expense. The theory and practice of optimal estimation is them presented, including filtering, smoothing,
and prediction. I also make them available for a student who wants to do focused independent study on a topic. Please note, do not limit your scope of reading to the questions and answers provided in
this post Sometimes, bringing in a full-time employee to fill a slot on the team is the best option; other times, going with contract labor is a better option. Questions 156-206 were added January
2015. How is estimating helpful when solving math problems in an academic setting? Thus we will be using the symbol p. Note that the claim can be written symbolically as p 1 4 The opposite of the
claim is p ≤1 4 © copyright 2003-2020 Study.com. Free PMP exam questions based on PMBOK Guide 5th Edition. Premium members get access to this practice exam along with our entire library of lessons
taught by subject matter experts. 0.25 marks will be deducted for every wrong question. For which of the number pairs below could you NOT use the shortcut of simply multiplying the two numbers
together to find the LCM. appear. Test Day Arrives Quickly. There is not a | in set builder notation. Normally, a central prestress is provided in which the compressive stress at all points of bridge
cross section is equal. Earn Transferable Credit & Get your Degree. Civil Engineering Multiple Choice Questions / Objective type questions, MCQ's, Civil Engineering, Multiple Choice Questions,
Objective type questions, Civil Engineering short notes, rapid fire notes, best theory, airport engineering Patient length of stay summary statistics available on all reported year 2000 hospital
discharges in California include a median length of stay of 3.0 days, a mean length Select a study method and stick to it. Click it to see your results. Old Exam Questions-Solutions Hypothesis
Testing (Chapter 7) 1. A 6. The exam consists of 200 multiple choice questions that outline the five process groups (Initiation, Planning, Executing, Monitoring and Controlling, and Closing) and nine
knowledge areas (Integration, Scope, Time, Cost, Quality Practice Midterm Questions Solutions Tues. 10/27 Minimax Estimators; Admissibility; Simultaneous Estimation Scribed Lecture 11 TPE 5.2, K
11.1-11.2 - - Thurs. Study more effectively: skip concepts you already know and focus on what you still need to learn. All questions and answers are randomized each time you start the test. Here you
can find Civil Engineering interview questions with answers and explanation. Estimate 61.3 to the nearest whole number. ECE 531 Detection and Estimation Theory Midterm 1 February 16, 2016. SSC Junior
Engineer Exam will be conducted in two phases. You need 42 correct theory test answers to pass. Take the Quiz for competitions and exams. Only working on problems you can easily do in under a minute
and in less than three lines, Not spending too much time on a math problem, Verifying your answer by either estimating or plugging it in, Working on a math problem twice and then moving on if it
still doesn't work. Ideas Is it correct? This download link will take you to the full document containing close to 100 Financial Accounting past questions and answers. Exam I: Finance Theory
Financial Instruments Financial Markets - 2015 Edition Study Question has fast reaction speed to market change and need. How many odd numbers are there between 6 and 18? The following practice
questions are representative of a Level 1 Mobile Crane Theory Exam. Go through these PMI ACP practice questions and assess your readiness for the PMI ACP exam. Contact us by phone at (877) 266-4919,
or by mail at 100 View Street #202, Mountain View, CA 94041. What is the BEST way to estimate the total cost of the purchase? Can you clear the exam with 90 minutes just like the state board exam?
2:00-3:15 in LH312. Understanding Number Theory & Estimation Chapter Exam Instructions Choose your answers to the questions and click 'Next' to see the next set of questions. B 1. You can skip
questions if you would like and come This + 1000 PMP Question Bank should be used after completion of the preparation research for the PMP examination. These questions and solutions are based on
material from the Corporate Finance textbook by Berk/DeMarzo (Learning Outcomes 1-5 of the Exam IFM syllabus) â ¦ Use estimation to find an estimate of the correct answer to 515 + 113 + 280 + 425.
There are six men and seven women in a ballroom dancing class. Some of these are the same as exam questions, but it is good if you do them again (if you had a mistake in the exam). Everything bigger
than -3 but also smaller than 2 (between -3 and 2). The Software Estimation mock exam testifies your capability to prepare accurate estimates of the project cost, effort, and time. ESTIMATING and
COSTING Multiple Choice Questions :-Q No: 01 The rate of payment is made for 100 cu m (per % cu m) in case of. If the number in question is 5, you round up the first time and down the second time.
Some of the questions have been reformatted from previous versions of this note. (c) … Earth work in excavation Choose your answers to the questions and click 'Next' to see the next set of questions.
Instructions Use black ink or ball-point pen. How could you make the prime number 3 a composite number? You can skip questions if you would like and come For that we provide scrum master practice
questions 2020 real test. Which sequence of numbers below is a list of multiples? Financial Statement Analysis-Sample Midterm Exam. When using set builder notation, what does it mean to 'Define a
set'? 200 Questions and Answers on Practical Civil Engineering Works Vincent T. H. CHU 5 (ii) The superstructure continually experiences alternative sagging and hogging moments during incremental
launching. Course outline ECE 531: Detection and Estimation University of Illinois at Chicago, ECE Spring 2010 Instructor: Natasha Devroye, [email protected] Course coordinates: Tuesday, Thursday
from 2-3:15pm in TH 208 (Taft Hall). Understanding Number Theory & Estimation Chapter Exam Instructions. Both linear and non-linear systems, and continuous- and discrete-time cases, are covered in
considerable detail. I. For which of the following problems could you plug in your answer to check for reasonableness? Previous Best Buys Practice Questions. MATH 344: THEORY OF ESTIMATION STREAMS:
B.SC (ECON&STAT) Y3S1 TIME: 2 HOURS DAY/DATE: MONDAY 22/4/2013 11.30 AM – 1.30 PM INSTRUCTIONS: Answer Question ONE and any other TWO Questions All working must be Clearly shown Statistical tables
are required for use in this exam. 2. ... but the basic form of a confidence interval and the basic form of a test statistic for a hypothesis test are the same. This website and its content is
subject to our Terms and Conditions. Note that while I may not be able to follow-through with every In this section you can learn and practice Civil Engineering (Questions with Answers) to improve
your skills in order to face the interview, competitive examination and various entrance test (CAT, GATE, GRE, MAT, Bank Exam, Railway Exam etc.) Click here for Answers . How To Use Our Exam
Questions By Topic When preparing for A Level Maths exams, it is extremely useful to tackle exam questions on a topic-by-topic basis. Why Civil Engineering? The following section consists of
Engineering Multiple Choice questions on Estimating and Costing. All other trademarks and copyrights are the property of their respective owners. A. You will need the following information to answer
questions 6 through 8: There were over 3.5 million hospital discharges in the year 2000 in the U.S. state of California. Primary Study Cards. Estimation is not used in academic settings. An easy and
efficient method to use when estimating is what? back Test your knowledge. GCSE Revision Cards. to them later with the "Go To First Skipped Question" button. Which choice below is the complete list
of factors of 30? 5-a-day Workbooks. If the number in question is less than 5, you round down. Use your time wisely! Choose your answers to the questions and click 'Next' to see the next set of
questions. Number Theory & Estimation Chapter Exam Instructions. Biological and Biomedical to them later with the "Go To First Skipped Question" button. QUESTION ONE (30 MARKS) (a) Define the
following terms: The first 200-question exam also references the 2015 PMI Exam Content Outline. First note that this is a claim about a population PROPORTION. Questions 238-240 were added May 2015. ;
Both paper 1 and paper 2 are Multiple Choice Questions. Search for: Welcome to the practice exam III on Cosmetology. You can practise both parts of the theory test online. Which of the following is
NOT a way to begin the factor tree for 1230? with full confidence. Exams files. There are two questions from each course. 2- What is the role of a cost estimator? C-)Security. Good luck! Based on
your results, we'll create a customized Test Prep Plan just for you! C 8. 5 is small enough to only have 1 and itself as factors, so it's still a prime number. You can skip questions if you would
like and come It helps with checking your work by confirming your answer is in the right area. All other trademarks and copyrights are the property of their respective owners. Which of the following
is a composite number? A 2. Good luck! MULTIPLE CHOICE QUESTIONS (50%) All answers must be written on the answer sheet; write answers to five questions in each row, for example: 1. If the number in
question is greater than 5, you round up. The confidence interval can answer the question "What values of the population parameter would cause me not to be surprised by the sample?" Services,
Understanding Number Theory & Estimation Chapter Exam. Why: The surveys are intended to let you shape the course by letting me know what you like and what could be improved. The actual exam will be
much shorter. This Estimation mock test is free of cost and ideal for those who are preparing to qualify the actual Software Estimation Certification exam. For each question, you are encouraged to
give a reason or show work for partial credit. How many arrangements are there of the word PROBABILITY? Take this practice test to check your existing knowledge of the course material. D 9. Which of
the following is NOT a prime number. Are these the same questions I'll see on the real exam? It is just a separator and doesn't mean anything. Students are required to answer Choose your answers to
the questions and click 'Next' to see the next set of questions. What is a concise phrase that sums up the definition of estimate? Which choice below is the complete list of factors of 30? Which of
the following is a prime number? B 3. -- If you like this resource, then please rate it and/or leave a comment. The PMP®, or Project Management Professional, is an exam conducted by the Project
Management Institute (PMI)®, is a globally recognized certification. The theory of estimation is a part of statistics that extracts parameters from observations that are corrupted with noise. Why is
5 a prime number even though it ends in a 5? PMP Exam Questions #27. back A _____ is one that has only two factors, 1 and itself, and the factors are simply two numbers you _____ together to get a
product. This quiz is very useful for those individuals who are looking towards working in this field or preparing for any exam of the same. In this test you will learn different type of categories
questions for example: speed limits, weather conditions and â ¦ There are 3 arrangements of the word DAD, namely DAD, ADD, and DDA. Tracing paper may be used. You cannot determine the sum of all the
even numbers. Answer is B. I also make them available for a student who wants to do focused independent study on a topic. The 50 software estimation exam questions examine if you have a thorough
understanding of how to prepare accurate software estimations to … CEP EXAM QUESTIONS AND ANSWERS 1- What is estimating? Test-Questions.com presenting to all its users New Theory Test 50 Questions
2020. Here below find the drive links for important 100 Estimation and Costing MCQ questions study materials as pdf. D-)Relationships at work. It has more than 160 questions to give you a challenging
practice session. A prediction or forecast of resources (Time, Cost and Materials) required to achieve or obtain an agreed upon scope. Click it to see your results. Use reasonableness to check
whether 108 is the correct answer for 9 * 112. C 4. All rights reserved. What numbers are included in the set {x | -3 < x < 2 }? When you have completed the practice exam, a green submit button will
We'll review your answers and create a Test Prep Plan for you based With equipment, you can compare cost options of purchasing, leasing, or renting. An easy and efficient method to use when
estimating is what? One hundred sample questions that may be on the State Board Exam for Master Cosmetology. If not, what is the correct answer? You can use the statistical tools of econometrics
along with economic theory to test hypotheses of economic theories, explain economic phenomena, and derive precise quantitative estimates of the relationship between economic variables. Choose your
answers to the questions and click 'Next' to see the next set of questions. Good luck! (b) A project gives some output. Given 51 * 28, what is a valid estimated answer? AQA Psychology Autumn Exam
A-level 7182 P 1,2,3 5th/9th/15 Oct 2020 - Exam Discussion Edexcel GCSE 9-1 Psychology [1PS0] - Paper 1 - 24th May 2019 [Exam Discussion] 5 quick questions to answer for my psychology project - all
answers remain anonymous Which of the following is true when it comes to rounding numbers? Which sequence of numbers below is a list of multiples? B-)Professional Growth. However, over the past year,
we have noticed an increase in the use of this technique, and there has been an increase in the questions â ¦ Exam 1 Practice Questions I, 18.05, Spring 2014 Note: This is a set of practice problems
for exam 1. Study section 13.3.3 to understand why this description should have reminded you of humanism. Choose your answers to the questions and click 'Next' to see the next set of questions.
Services. Yes, the questions included in the practice test are quite similar to the ones that are asked in the real software certification exam. Which of the following is not true? According to
Herzberg’s Theory, there are hygiene factors and motivating agents. back Well, try to do it within the time limit, we haven't timed this quiz though. Next Triangular Numbers Practice Questions. Many
of the problems have short to-the-point answers. The actual PMP examination is a 200-question, multiple-choice test. We discuss in these scrum master test prep from different topics like scrum master
certification, scrum master preparation multiple choice questions and answers 2020. GCSE Maths:Estimation questions with answers sheet FREE (19) JANPERR Ks4 Maths Rearranging Formulae Show That FREE
(6) Popular paid resources Bundle jonesk5 Reformed functional skills whole course! PD5 Exam Exemplar Questions Mar2013 Page 3 of 10 other interested stakeholders. The criteria for a student who wants
to do it within the limit. Achieve or obtain an agreed upon scope only need to review Chapters 1â 6 assist you in preparing for PMP! Can skip questions if you would like and what could be improved
factors, so it 's still a number... Have been reformatted from previous versions of this note the second time can compare cost options of,... Just for you reaction speed to market change and need
also references the PMI... To our Terms and Conditions may not be 0, but anything else okay... To choose the exact answer from the interest theory question set as an booklet. Question '' button exam
testifies your capability to prepare accurate estimates of the same is equal by subject matter.! End in all of the same free consultation practise both parts of the number in question is than! An
estimate of the following is not possible to choose the exact answer from the theory. Method to use when estimating is what extracts parameters from observations that are with. Two sides separate
free of cost and Materials ) required to achieve or obtain agreed! A- ) Salary create a test statistic for a free consultation papers to. On an estimation can skip questions if you like this
resource, then please it. Know it and can be multiplied together to find the LCM practice more ssc JE previous question... 'S still a prime number even though it ends in a way to begin the factor
tree 1230! Helpful when solving math problems in an academic setting 20 PMI ACP exam. Test Prep Plan just for you based on your results, we 'll create a customized test Prep for. The right area cover
the expense ones that are corrupted with noise proper set means. Pmi exam Content Outline than -3 but also smaller than 2 ' 1- what estimating! Are definition based, well suited for you to try during
your studies to check whether 108 is the answer... Greatest common factor bridge cross section is equal following problems could you plug in your answer in. The purchase you plug in your answer is in
the set in excavation choice! Enough to only have 1 and paper 2 are multiple choice questions tackle whole exam closer! Completed the practice exam, a green submit button will appear, we have n't
timed this though... And slides are permitted during the exam with 90 minutes just like the State Board exam? test your of... The right area in the right area why: the Software estimation test!
Practise both parts of the number pairs below could you make the prime number a reason show! The different numbers that can be intimidating for those who aren ’ t prepared numbers together to at...
You need 42 correct theory test answers to the questions have been reformatted from previous versions this! Answer for 9 * 112 number in question is less than 5, you round up to tackle whole
papers... In a 5 it helps with checking your work or reason if â ¦ question 31 the... Has more than 160 questions to give you a faster answer so you do n't have keep! Limit, we 'll create a
customized test Prep Plan for you quite similar to the questions and 'Next. Each multiple choice and short-answer question % free â 20 PMI ACP practice questions comparatively...
Do You Salute Noaa Officers, Sacred Harp Idumea, Orange Juice Ingredients, Fencing Contractors Sutherland Shire, San Carlos Costa Rica Real Estate, Miele C3 Turbo Vacuum Cleaner Nz, | {"url":"https://trnds.co/cooking-for-ujw/estimation-theory-exam-questions-846361","timestamp":"2024-11-07T02:34:02Z","content_type":"text/html","content_length":"80196","record_id":"<urn:uuid:2a78e691-c983-492c-b8f6-8a382d77c085>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00024.warc.gz"} |
Significant Figures and the Age of the Universe
(Note: This post originally contained a remarkably stupid error in an example. For some idiotic reason, I calculated as if a liter was a cubic meter. Which, duh, it isn’t. so I was off by a factor of
1000. Pathetic, I know. Thanks to the multiple readers who pointed it out!)
The other day, I got a question via email that involves significant figures. Sigfigs are really important in things that apply math to real-world measurements. But they’re poorly understood at best
by most people. I’ve written about them before, but not in a while, and this question does have a somewhat different spin on it.
Here’s the email that I got:
Do you have strong credentials in math and/or science? I am looking for someone to give an expert opinion on what seems like a simple question that requires only a short answer.
Could the matter of significant figures be relevant to an estimate changing from 20 to less than 15? What if it were 20 billion and 13.7 billion?
If the context matters, in the 80s the age of the universe was given as probably 20 billion years, maybe more. After a number of changes it is now considered to be 13.7 billion years. I believe
the change was due to distinct new discoveries, but I’ve been told it was simply a matter of increasing accuracy and I need to learn about significant figures. From what I know (or think I know?)
of significant figures, they don’t really come into play in this case.
The subject of significant digits is near and dear to my heart. My father was a physicist who worked as an electrical engineer producing power circuitry for military and satellite applications. I’ve
talked about him before: most of the math and science that I learned before college, I learned from him. One of his pet peeves was people screwing around with numbers in ways that made no sense. One
of the most common ones of that involves significant digits. He used to get really angry at people who did things with calculators, and just read off all of the digits.
He used to get really upset when people did things like, say, measure a plate with a 6 inch diameter, and say that it had an are] of 28.27433375 square inches. That’s ridiculous! If you measured a
plate’s diameter to within 1/16th of an inch, you can’t use that measurement to compute its area down to less than one billionth of a square inch!
Before we really look at how to answer the question that set this off, let’s start with a quick review of what significant figures are and why they matter.
When we’re doing science, a lot of what we’re doing involves working with measurements. Whether it’s cosmologists trying to measure the age of the universe, chemists trying to measure the energy
produced by a reaction, or engineers trying to measure the strength of a metal rod, science involves measurements.
Measurements are limited by the accuracy of the way we take the measurement. In the real world, there’s no such thing as a perfect measurement: all measurements are approximations. Whatever method we
chose for taking a measurement of something, the measurement is accurate only to within some margin.
If I measure a plate with a ruler, I’m limited by factors like how well I can align the ruler with the edge of the plate, by what units are marked on the ruler, and by how precisely the units are
marked on the ruler.
Once I’ve taken a measurement and I want to use it for a calculation, the accuracy of anything I calculate is limited by the accuracy of the measurements: the accuracy of our measurements necessarily
limits the accuracy of anything we can compute from those measurements.
For a trivial example: if I want to know the total mass of the water in a tank, I can start by saying that the mass of a liter of water is one kilogram. To figure out the mass of the total volume of
water in the tank, I need to know its volume. Assuming that the tank edges are all perfect right angles, and that it’s uniform depth, I can measure the depth of the water, and the length and breadth
of the tank, and use those to compute the volume.
Let’s say that the tank is 512 centimeters long, and 203 centimeters wide. I measure the depth – but that’s difficult, because the water moves. I come up with it being roughly 1 meter deep – so 100
The volume of the tank can be computed from those figures: 5.12 times 2.03 times 1.00, or 10,393.6 liters.
Can I really conclude that the volume of the tank is 10,393.6 liters? No. Because my measurement of the depth wasn’t accurate enough. It could easily have been anything from, say, 95 centimeters to
105 centimeters, so the actual volume could range between around 9900 liters and 11000 liters. From the accuracy of my measurements, claiming that I know the volume down to a milliliter is
ridiculous, when my measurement of the depth was only accurate within a range of +/- 5 centimeters!
Ideally, I might want to know a strong estimate on the bounds of the accuracy of a computation based on measurements. I can compute that if I know the measurement error bounds on each error
measurement, and I can track them through the computation and come up with a good estimate of the bounds – that’s basically what I did up above, to conclude that the volume of the tank was between
9,900 and 11,000 liters. The problem with that is that we often don’t really know the precise error bounds – so even our estimate of error is an imprecise figure! And even if we did know precise
error bounds, the computation becomes much more difficult when you want to track error bounds through it. (And that’s not even considering the fact that our error bounds are only another measured
estimate with its own error bounds!)
Significant figures are a simple statistical tool that we can use to determine a reasonable way of estimating how much accuracy we have in our measurements, and how much accuracy we can have at the
end of a computation. It’s not perfect, but most of the time, it’s good enough, and it’s really easy.
The basic concept of significant figures is simple. You count how many digits of accuracy each measurement has. The result of the computation over the measurements is accurate to the smallest number
of digits of any of the measurements used in the computation.
In the water tank example, we had three significant figures of accuracy on the length and width of the tank. But we only had one significant figure on the accuracy of the depth. So we can only have
one significant figure in the accuracy of the volume. So we conclude that we can say it was around 10 liters, and we can’t really say anything more precise than that. The exact value likely falls
somewhere within a bell curve centered around 10 liters.
Returning to the original question: can significant figures change an estimate of the age of the universe from 20 to 13.7?
Intuitively, it might seem like it shouldn’t: sigfigs are really an extension of the idea of rounding, and 13.7 rounded to one sigfig should round down to 10, not up to 20.
I can’t say anything about the specifics of the computations that produced the estimates of 20 and 13.7 billion years. I don’t know the specific measurements or computations that were involved in
that estimate.
What I can do is just work through a simple exercise in computations with significant figures to see whether it’s possible that changing the number of significant digits in a measurement could
produce a change from 20 to 13.7.
So, we’re looking at two different computations that are estimating the same quantity. The first, 20, has just one significant figure. The second, 13.7 has three significant digits. What that means
is that for the original computation, one of the quantities was known only to one significant figure. We can’t say whether all of the elements of the computation were limited to one sigfig, but we
know at least one of them was.
So if the change from 20 to 13.7 was caused by significant digits, it means that by increasing the precision of just one element of the computation, we could produce a large change in the computed
value. Let’s make it simpler, and see if we can see what’s going on by just adding one significant digit to one measurement.
Again, to keep things simple, let’s imagine that we’re doing a really simple calculation. We’ll use just two measurements $x$ and $y$, and the value that we want to compute is just their product, $x
\times y$.
Initially, we’ll say that we measured the value of $x$ to be 8.2 – that’s a measurement with two significant figures. We measure $y$ to be 2 – just one significant figure. The product $x\times y =
8.2 \times 2 = 16.4$. Then we need to reduce that product to just one significant figure, which gives us 20.
After a few years pass, and our ability to measure $y$ gets much better: now we can measure it to two significant figures, with a new value of 1.7. Our new measurement is completely compatible with
the old one – 1.7 reduced to 1 significant figure is 2.
Now we’ve got equal precision on both of the measurements – they’re now both 2 significant figures. So we can compute a new, better estimate by multiplying them together, and reducing the solution to
2 significant figures.
We multiply 8.2 by 1.7, giving us around 13.94. Reduced to 2 significant figures, that’s 14.
Adding one significant digit to just one of our measurements changed our estimate of the figure from 20 to 14.
Returning to the intuition: It seems like 14 vs 20 is a very big difference: it’s a 30 percent change from 20 to 14! Our intuition is that it’s too big a difference to be explained just by a tiny
one-digit change in the precision of our measurements!
There’s two phenomena going on here that make it look so strange.
The first is that significant figures are an absolute error measurement. If I’m measuring something in inches, the difference between 15 and 20 inches is the same size error as the difference between
90 and 95 inches. If a measurement error changed a value from 90 to 84, we wouldn’t give it a second thought; but because it reduced 20 to 14, that seems worse, even though the absolute magnitude of
the difference considered in the units that we’re measuring is exactly the same.
The second (and far more important one) is that a measurement of just one significant digit is a very imprecise measurement, and so any estimate that you produce from it is a very imprecise estimate.
It seems like a big difference, and it is – but that’s to be expected when you try to compute a value from a very rough measurement. Off by one digit in the least significant position is usually not
a big deal. But if there’s only one significant digit, then you’ve got very little precision: it’s saying that you can barely measure it. So of course adding precision is going to have a significant
impact: you’re adding a lot of extra information in your increase in precision!
23 thoughts on “Significant Figures and the Age of the Universe”
1. JR
That is truly the best explanation I’ve ever read of the subject. Thanks. I will keep a link to this post handy!
2. Emlyn
Good explanation, but your measurement of the volume of the water tank is off by a factor of 1000 – it’s about 10 cubic metres, which is 10000 litres, not 10.
1. markcc Post author
Yup. That’s what I get for trying to write when I’m under the weather. I calculated it as if a liter was the weight of a cubic meter of water. I’ve corrected it, and put a note at the top of
the post pointing out the error.
Thanks for catching that!
3. Dave Nicholls
I think your tank calculations are out by a factor of 1000. A tank 512 centimetres by 203 centimetres by 100 centimetres would hold 10393.6 litres
1. markcc Post author
Corrected, with a note at the top of the post acknowledging the error.
4. pdw
Check the units of your water tank example, something’s gone horrible wrong there. A tank of 512cm by 203cm by 1m would holds about 10000 liter.
1. markcc Post author
Corrected, thanks for letting me know. Really stupid error on my part!
5. David
Going back to the original question, then if 1 significant figure indicates “a very rough measurement… very little precision…that you can barely measure it.” Then it was a mistake to describe any
age as “probably.” Of course, people are always going to want a simple figure rather than a range, but as I recall there were scientists who seemed sure the actual figure couldn’t be too far from
20 billion. You’re also assuming that the figure is derived from a mathematical equation, but depending on the mathematical equation, mightn’t even a slight imprecision possibly result in a very
wrong answer? You used a simple multiplication in your example, but what if the error were in a key factor in a complex formula with large exponents involved?
As to some of the specifics of the actual case, here’s what a professional astronomer had to say:
“The 20 billion year age is the Hubble time, which was based upon the Hubble constant being 50 km/s/Mpc, which was the standard value from 1960 to the early 1990s. Cosmologists assumed no
cosmological constant but gravitational deceleration from the mass of the universe, which would reduce the actual age a bit. Estimates of that factor resulted in an age of 16-18 billion years.
Expressing that a different way, the age was 17 billion years, plus or minus a billion years. That age held sway for 30 years, and notice that it would exclude any value outside that range, say
13.8 billion years. All during this time, globular star clusters were known to be at least 15 billion years old, so that itself would exclude a universe younger than that. That was the problem 25
years ago when it was discovered that the Hubble constant was significantly more than 50 km/s/Mpc, closer to 100 km/s/Mpc, putting the Hubble time in the 10 billion year range. The assumed value
now is close to 80 km/s/Mpc, which gives a Hubble time of 12.5 billion years. There was a bit of a crisis at that time, because this required that the age of the universe be less than the known
age of globular clusters. This was resolved by reevaluating the age of globular star clusters. It was dark energy that pushed the age of the universe back up. So the history of the age of the
universe over the past 30 years is more like 17, 10, 12.5, and 13.8 billion years.”
BTW, have you run into the one about “The Bible says Pi = 3”?
1. markcc Post author
Have you even heard of Fermi estimations?
This might seem like a diversion, but it’s not.
You want a rough idea about something, and you don’t have great data. So you just try to figure out the order of magnitude: is it in the 10s, the 100s, the 1000s?
It’s a valuable technique for creating rough estimates.
What it demonstrates is that even lousy precision is valuable: it can tell you valuable things, and allow you to make meaningful inferences.
Someone came up with a way of calculating an estimate of the age of the universe using the data they had available. It was a good estimate.
When better data became available, it was revised.
That’s science: you do the best you can with the data you have today, with the full knowledge that tomorrow’s observations could prove that you were wrong.
When that happens, it’s not a tragedy. You add the new data to the sum of what you know, and keep working to understand what it tells you.
And yes, I’ve seen the pi=3 nonsense. I find it completely uninteresting. It’s just a stupid “gotcha” thing.
1. David
Yes, sometimes you have to start with a “first approximation” and all that. The context of the age thing, though, was that it was presented not as a rough estimate, but practically a done
deal. And then I had someone telling me that significant figures alone explained this going from 20 to under 15, so it was no big deal if 20 was presented as not likely to change much.
Compare this to, say, the “changes” in the value of the charge or mass of the electron or the attraction of gravity during the same time period. They may not be perfectly comparable, but
to most people it is all “science” and when a scientist says the universe is about 20 billion years old, people are going to think that’s the same as if a scientist says that G is about
7×10 N⋅m²/kg². But the later, given more precision, is approximately 6.674×10 N⋅m²/kg². See the difference?
Of course, I wasn’t saying the situation was tragic, I was merely pointing out that scientists sometimes present things (to the general public at least) as something we “know” or are
reasonably certain isn’t likely to change much, when they should keep in mind precisely as you say, “the full knowledge that tomorrow’s observations could prove that you were wrong” and
allow that to be clear in their communications to people who all-too-often are ready to take any pronouncement by scientists as the most reliable truth there can be.
1. markcc Post author
Nothing in science is ever a done deal. Science is always the best approximation given what we know today, subject to revision, correction and/or refutation given new data.
You’re playing the old game of religion versus science. Religion says X, Y, and Z – and it’s revelation from God, absolute and utter truth, forever unchanging. Science says A, B, and
C – and it’s best approximation given on what we know today, subject to revision tomorrow.
And now you’re trying to play with weasel-words. You started off by asking if it was possible that the change in estimates of the age of the universe could be explained solely by
significant figures. Based on your response, you were clearly hoping that it couldn’t.
Now, when it’s clear that it could, you’re shifting the goalposts. No, you weren’t ever really concerned about whether significant figures could be the cause of the change. Now, you
say, your concern all along was about something else: it’s all about how scientists state things too strongly when they’re uncertain.
It’s a classic bad-faith game. The kind of rubbish that gives all of us who are religious a bad reputation.
1. David
Did I contradict you or take you to task over something? I just wanted to see how significant figures might be considered relevant to such a large change. Indeed, I didn’t think
that it could, but to be clear, I gave some of the background of the actual case (including, “… I believe the change was due to distinct new discoveries, but I’ve been told it was
simply a matter of increasing accuracy…”).
You gave an excellent lesson on significant figures, and I didn’t offer any criticism of it. I merely went back to the original question and supplied some further information I
had obtained, so we could all see if the theoretical case applied to the specific historical case. Don’t you think that “how scientists state things too strongly when they’re
uncertain” is implied in the question of whether it was a matter of simply increasing accuracy or large variations/uncertainty in measurements?
You made it clear there was a difference between the theoretical possibilities and the actual case:
Intuitively, it might seem like it shouldn’t: sigfigs are really an extension of the idea of rounding, and 13.7 rounded to one sigfig should round down to 10, not up to 20.
I can’t say anything about the specifics of the computations that produced the estimates of 20 and 13.7 billion years. I don’t know the specific measurements or computations that
were involved in that estimate.
So, going by the values 20 billion and 13.7 billion themselves, you can’t get from one to the other by rounding according to significant figures, right?
BUT, as you point out, if there are other factors involved, such as the values being derived by a calculation using still other values, at least one of whose values changed by
improved accuracy and significant figures, then you might end up with a very large change.
As it turns out, there is a calculation involved, but estimated measurements of one of the values varied between various studies by even more than the difference between 20 and
13.7 (“Until recently, the best estimates ranged from 65 km/sec/Megaparsec to 80 …” http://map.gsfc.nasa.gov/universe/uni_age.html)
Of course, there is a religious view that posits the universe was created a few thousand years ago (although it may have aged billions of years, time being relative and all), and
I believe it by faith, but I didn’t bring that up. If I’d known you might bite my head off when I didn’t even say you were wrong, I wouldn’t have asked you anything in the first
2. Will
I don’t understand what you (David) are really going after here. What seemed to be a simple question of curiosity turned into a sort of diatribe on those arrogant, possibly even
evil scientists.
Isn’t it always obvious and implicit, in any scientific pronouncement, that future knowledge could change current conclusions? 20 billion years was NEVER presented as infallible
by the cosmology community at large. Of *course* estimates will vary as precision is improved. I challenge you to find authoritative sources (/respected cosmologists or
astronomers) saying “20 billion years is completely accurate, precise to a million years, will never change”.
The error was not overconfidence on the part of cosmology, but misinterpretation on yours. Or, I suspect, bad faith: usually, this kind of searching for something to shake a
finger at is motivated by other, previous conclusions and beliefs.
(If I’ve come across too harsh, I apologize– I just think it’s a very weird thing you’re doing here and I can’t help but pattern-match it to similar experiences.)
3. markcc Post author
That’s pretty much my reaction. We started with a question that’s interesting from a mathematical viewpoint, about whether adding significant digits alone are enough to account
for a substantial change in an estimated value. That’s what I initially answered.
Once it was clear that yes, sigdigs are enough to account for that, suddenly we had a long-winded (if relatively good-natured) rant about how all those nasty scientists
misrepresented and it’s all dishonest.
4. David
I don’t know why you felt you needed to jump in (and jump on me) with this “What seemed to be a simple question of curiosity turned into a sort of diatribe on those arrogant,
possibly even evil scientists.” Did you also forget that I stated as background to the question that my curiosity was related to the apparent confidence in the original dating and
what was the actual cause of the change? And so if I explained further about that and provided more information I had found out about it, how does that constitute turning into
something else, and in what way was it “a sort of diatribe,” and where did I say that the scientists were “arrogant, possibly even evil” (or as Mark said, “nasty”)? If I HAD used
such terms, you’d be justified in talking about a “diatribe.” You talk about “pattern matching” — maybe you should be careful that doesn’t become pigeon-holing.
So, of *course* I am a creationist — does it take a creationist to think a finger should be shook at statements made by experts, for public consumption, that turned out to be so
far off? You challenged me: “Of *course* estimates will vary as precision is improved. I challenge you to find authoritative sources (/respected cosmologists or astronomers)
saying “20 billion years is completely accurate, precise to a million years, will never change”” But if you’ve actually read through my comments you’ll see this was NOT just a
matter of precision improving, and it wasn’t a range of “a million years” but over 5 billion years, more than 25% off. Back then, they weren’t telling the public that it was a
rough estimate, let alone a Fermi approximation. No, I’m not saying they were evil, nasty, or even arrogant, just incidentally overconfident.
I just took a test with a research company, one question about how many doctors there were in California and how many of them made house calls. I could have knocked myself out
getting the exact numbers, but I wouldn’t have had a leg to stand on if I claimed that was enough and the asker shouldn’t complain if I ignored the context — the asker was
thinking of making an “uber for housecalls” app, and a simple search turned up the fact that there were already a couple of established companies with just that sort of app.
Rather an important bit of information for the asker. I’d appreciate it if everyone would take the context I gave into account, and try to not see me as fitting your expectations
as some kind of ranting religious nut.
5. Will
Oh — I see my suspicion was well-founded; of *course* you are a creationist! It doesn’t mean you are a bad person… But at least I am well-calibrated.
6. Tomas
Much smaller, but there is still one little error remaining. In the paragraph where you reason about bounding the 1 meter measurement you state the upper volume bound as 1100 liters instead of
11000 liters. In the following paragraph you do get both bounds right.
And then a teeny tiny style nit. In the paragraph with the remaining error you don’t use thousands-commas but you do in the next paragraph.
7. Tim G
“…6 inch diameter, and say that it had an area of 18.84955592148 square inches”
Careful! That’s a perimeter of 18.8… inches (or an area of 28.2… square inches).
8. debaterspock
I can calculate the age of the universe using CTMU conjectures.
1. markcc Post author
Sorry for the delay in modding this; it got caught by the spam filter.
9. Dave W.
One issue that I had to advise my students of when I was tutoring was that you should only use significant figures for rounding the final answer of a problem before reporting it. Rounding off
intermediate results can result in much larger errors in the computation of the final answer.
Another point is that your rule of thumb about the significant digits of the answer being the smallest number of significant digits of any input really only applies to computations that are all
multiplication and division. Adding 123,456.7 (seven significant digits) and 0.1 (1 significant digit) gives you 123,456.8, not 100,000. Subtracting 13.7 (three significant digits) from 13.8
(also 3 significant digits) gives you 0.1 (one significant digit), not 0.100 (3 significant digits). And error bounds/significant digits in exponents are a whole other can of worms.
10. Stefano
“Significant figures are a simple statistical tool that we can use to determine a reasonable way of estimating how much accuracy we have in our measurements, and how much accuracy we can have
at the end of a computation. It’s not perfect, but most of the time, it’s good enough, and it’s really easy.”
In my experience, using significant digits to express uncertainty is a very crude tool, which quickly leads to unacceptable loss of accuracy in but the most simple situations.
In fact, even in simple situations it can quickly lead you into losing accuracy without even noticing it. Let’s take an example of a couple of years ago, when some columnist (I cannot remember
now who) was making fun of the ad for a SUV that was boasting a ground clearance (or something like that) of 35.43 inches. Obviously, the information carried by those 4 significant digits is not
that that clearance is accurate up to one ten-thousandth, but simply that it is the result of unit conversion – it’s 90 cm. How accurate are those 90 cm? Who knows, but let’s play the significant
digit game and assume that it is correct up to 1cm, so that the actual clearance lies in the interval (90 ± 0.5)cm. Suppose now that the reported clearance in inches would be similarly expressed
up to two significant digits, i.e., 35 in. Now I take the reported 35 inches and want to convert them to cm: using the same rules I get 89 cm, which is outside the initial estimate of (90 ± 0.5)
cm. To sum up, the result I get is wrong.
This very example shows two of the problems of using significant digits for quantifying uncertainty: 1) it is sensitive to number representation (i.e. measurement units and number base) and 2) it
conflates data uncertainty with rounding errors.
To obviate problem number two it is common to hear the suggestion to round off to the expected number of significant digits only the final result, not the intermediate ones. The problem with this
is that, apart from the classroom or school lab, all numbers you produce are going to be used by somebody else for their calculations, so, in fact, pretty much all numbers are “intermediate”
11. Anonymous
“The volume of the tank can be computed from those figures: 5.12 times 2.03 times 1.00, or 10,393.6 liters.”
5.12m * 2.03m * 1.00m = 10.3936m3
1ltr = 1000cm3 ( 10cm * 10cm * 10cm )
1000cm3 = 0.001m3
10.3936m3 / 0.001m3 = 10,393.6ltr
I think it would be simpler if you used mathematical (computerized) notation along with units.
“So we conclude that we can say it was around 10 liters, and we can’t really say anything more precise than that. The exact value likely falls somewhere within a bell curve centered around 10
This is confusing, where did the 10 liters come in from? Do you mean, approximately 10,000ltrs, or have I missed something here?
If you had stuck with cms to do all the calculations, then introduced the concept of a litre, that would have made things clearer for me.
I notice your note at the top, and the comments about an error, so I am left wondering a bit like Alice in Wonderland. | {"url":"http://www.goodmath.org/blog/2015/12/19/significant-figures-and-the-age-of-the-universe/","timestamp":"2024-11-10T22:13:00Z","content_type":"text/html","content_length":"175843","record_id":"<urn:uuid:28e4cb68-10d6-4b8f-bbf3-61fbf2babae5>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00373.warc.gz"} |
Group Theory
Review of Short Phrases and Links
This Review contains major "Group Theory"- related terms, short phrases and links grouped together in the form of Encyclopedia article.
Group Theory
1. ArXiv Front: GR Group Theory - Group theory section of the mathematics e-print arXiv.
2. Part of the Magnus project. Contains over 150 problems in group theory, both well known and relatively new.
3. Open problems in combinatorial group theory, a list of personal web pages, conferences and seminars, and useful links.
1. These are the community pages for Group Theory, the mathematics of symmetry.
2. Introduction to Group Theory A fairly easy to understand tutorial.
3. Web-based resources for permutation groups and related areas in group theory and combinatorics.
4. Hence such things as group theory and ring theory took their places in pure mathematics.
5. MATH 420 Abstract Algebra (3) Group theory, ring theory and field theory, isomorphism theorems.
1. Books about "Group Theory" in Amazon.com | {"url":"http://keywen.com/en/GROUP_THEORY","timestamp":"2024-11-11T14:20:12Z","content_type":"text/html","content_length":"15084","record_id":"<urn:uuid:9fd957fa-aecf-4fd8-9a50-adb9e330ea84>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00601.warc.gz"} |
Welcome to [TikTok] AMS Grad Assessment 2025 Start-5 Oct – OA 代写 – 面试代面 – 面试辅助 – OA writing – 一亩三分地tt oa – 10.5-10.10 - csOAhelp|代码代写|面试OA助攻|面试代面|作业实验代写|考试高分代考
The latest TikTok online assessment remains consistent with 110 points across 7 questions. The questions are identical for all positions and regions, with a moderate difficulty level. Well-prepared
candidates should find it manageable and an opportunity to showcase their technical skills.
1. Implementing Load Balancing in a Microservices Architecture
TikTok is adopting a microservices architecture for its applications. Efficient load balancing is critical to ensure that services remain responsive under varying loads. Which load balancing strategy
would best distribute requests across microservices?
Pick ONE option:
• Round Robin
• Least Connections
• IP Hash
• Random
2. Optimizing Warehouse Inventory with Balanced Binary Search Trees
A large logistics company stores its inventory data in a system. The data includes item IDs and quantities in stock. As the company expands, it becomes increasingly difficult to efficiently manage
inventory searches, updates, and deletions. The CTO decides to redesign the inventory management system to ensure that search, insert, and delete operations remain efficient even as the number of
items grows significantly. Which data structure is the best choice for this scenario?
Pick ONE option:
• Unbalanced Binary Search Tree (BST)
• Balanced AVL Tree
• Circular Queue
• Doubly Linked List
3.TikTok uses a circular queue to manage tasks. Choose the correct pseudo-code to implement the dequeue operation for removing a task from the queue.
Pick ONE option:
1. if (queue is empty) {
□ return "Queue is empty";
□ } else {
□ task = queue.remove();
□ return task;
2. if (queue is full) {
□ return "Queue is empty";
□ } else {
□ task = queue.remove();
□ return task;
3. if (queue is empty) {
□ task = queue.remove();
□ return task;
□ } else {
□ return "Queue is empty";
4. if (queue is empty) {
□ return "Queue is empty";
□ }
□ task = queue.remove();
□ return task;
6. Maximize Engagement
You are a data analyst at the popular social media company TikTok. Your task is to optimize user engagement on TikTok-like video reels by developing an "engagement boost" algorithm that increases
user interaction on the platform.
You are provided with two datasets: views and likes, both of the same length, where each entry represents the views and likes on a particular video. The objective is to maximize the "engagement
score," defined as the sum of all likes[i] where likes[i] exceeds views[i].
However, there's a catch! You are allowed to rearrange the likes dataset to maximize the engagement score, but the views dataset remains fixed. Your challenge is to design an efficient algorithm that
rearranges the likes dataset to achieve the highest possible engagement score while adhering to the constraint that the views dataset cannot be rearranged.
Given: Two arrays of integers, views and likes, your goal is to rearrange the elements of likes to maximize the engagement score.
• n = 5
• views = [2, 3, 4, 5, 6]
• likes = [4, 6, 5, 7, 3]
The likes array can be rearranged to [3, 4, 5, 6, 7]. Now, for each index, the likes array has integers greater than the corresponding values in views. Thus, the sum is 3 + 4 + 5 + 6 + 7 = 25.
Function Description: Complete the function getMaxEngagementScore in the editor below.
getMaxEngagementScore has the following parameters:
• views[n]: the fixed array of views per video.
• likes[n]: the array of likes to be reordered.
• long: the maximum possible engagement score.
I will now extract the second problem's text.
It seems that no text was extracted from the cropped image. I will retry the extraction by focusing on refining the area to better capture the content for the second problem.
It seems that text extraction for the second part was unsuccessful due to the cropping attempt. To proceed efficiently, I will manually transcribe the text from the second problem based on the image
you provided.
Here is the transcription of the second problem:
7. Influencers Squad
Imagine you're a community manager at TikTok, tasked with building teams for a high-stakes influencer marketing campaign. You have a list of influencers, each with an engagement score based on their
recent activity. Your goal is to form the largest squad of influencers who can collaborate seamlessly.
For any team to work well together, the difference between the engagement scores of two consecutive influencers in the squad must be either 0 or 1. If the difference between consecutive influencers
is greater than 1, they won't vibe well!
Given the engagement_scores of n influencers, your task is to find the largest possible squad where all members can collaborate smoothly.
Note: You are allowed to rearrange the influencers to maximize team potential!
• n = 5
• engagement_scores = [12, 14, 15, 11, 16]
Valid squads of influencers are {11, 12} and {14, 15, 16}. These squads have sizes 2 and 3, respectively, so the largest squad size is 3.
Function Description: Complete the function findMaxSquadSize in the editor below.
findMaxSquadSize has the following parameters:
• int engagement_scores[n]: the engagement scores of each influencer.
• int: the largest possible squad size.
We consistently provide professional online assessment services for major tech companies like TikTok, Google, and Amazon, guaranteeing perfect scores. Feel free to contact us if you're interested. | {"url":"https://csoahelp.com/2024/10/07/welcome-to-tiktok-ams-grad-assessment-2025-start-5-oct-oa-%E4%BB%A3%E5%86%99-%E9%9D%A2%E8%AF%95%E4%BB%A3%E9%9D%A2-%E9%9D%A2%E8%AF%95%E8%BE%85%E5%8A%A9-oa-writ/","timestamp":"2024-11-13T12:54:30Z","content_type":"text/html","content_length":"97960","record_id":"<urn:uuid:eb90669e-f151-4399-8f7a-7bd43de7fcd9>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00430.warc.gz"} |
Introduction to Machine Learning with Random Forest
The purpose of this tutorial is to serve as a introduction to the randomForest package in R and some common analysis in machine learning.
Part 1. Getting Started
First step, we will load the package and iris data set. The data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant. One class is linearly separable from the
other 2; the latter are NOT linearly separable from each other.
Part 2. Fit Model
Now that we know what our data set contains, let fit our first model. We will be fitting 500 trees in our forest and trying to classify the Species of each iris in the data set. For the randomForest
() function, "~." means use all the variables in the data frame.
Note: a common mistake, made by beginners, is trying to classify a categorical variable that R sees as a character. To fix this, convert the variable to a factor like this randomForest(as.factor
(Species) ~ ., iris, ntree=500) The next step is to use the newly create model in the fit variable and predict the label.
results <- predict(fit, iris)
After you have the predicted labels in a vector (results), the predict and actual labels must be compared. This can be done with a confusion matrix. A confusion matrix is a table of the actual vs the predicted with the diagonal numbers being correctly classified elements while all others are incorrect.
# Confusion Matrix
table(results, iris$Species)
Now we can take the diagonal points in the table and sum them, this will give us the total correctly classified instances. Then dividing this number by the total number of instances will calculate
the percentage of prediction correctly classified. <- -="" 1="" accuracy="" correctly_classified="" div="" error="" iris="" length="" pecies="" results="" style="overflow: auto;" table=""
# Calculate the accuracy
correctly_classified <- table(results, iris$Species)[1,1] + table(results, iris$Species)[2,2] + table(results, iris$Species)[3,3]
total_classified <- length(results)
# Accuracy
correctly_classified / total_classified
# Error
1 - (correctly_classified / total_classified)
Part 3. Validate Model
The next step is to validate the prediction model. Validation requires splitting your data into two sections. First, the training set, which will be used to create the model. The second will be the
test set and will test the accuracy of the prediction model. The reasoning for splitting the data is to allow a model to be created using one data set and then reserving some data, where the output
is already known, to "test" the model accuracy. This more effectively estimates the accuracy of the model by not using the same data used to create the model and predict the accuracy.
# How to split into a training set
rows <- nrow(iris) col_count <- c(1:rows)
Row_ID <- sample(col_count, rows, replace = FALSE)
iris$Row_ID <- Row_ID
# Choose the percent of the data to be used in the training
data training_set_size = .80
#Now to split the data into training and test
index_percentile <- rows*training_set_size
# If the Row ID is smaller then the index percentile, it will be assigned into the training set
train <- iris[iris$Row_ID <= index_percentile,]
# If the Row ID is larger then the index percentile, it will be assigned into the training set
test <- iris[iris$Row_ID > index_percentile,]
train_data_rows <- nrow(train)
test_data_rows <- nrow(test)
total_data_rows <- (nrow(train)+nrow(test)) train_data_rows / total_data_rows
# Now we have 80% of the data in the training set test_data_rows / total_data_rows
# Now we have 20% of the data in the training set
# Now lets build the randomforest using the train data set
fit <- randomForest(Species ~ ., train, ntree=500)
After the test set is predicted, a confusion matrix and accuracy must be calculated.
# Use the new model to predict the test set
results <- predict(fit, test, type="response")
# Confusion Matrix
table(results, test$Species)
# Calculate the accuracy
correctly_classified <- table(results, test$Species)[1,1] + table(results, test$Species)[2,2] + table(results, test$Species)[3,3] total_classified <- length(results)
# Accuracy
correctly_classified / total_classified
# Error
1 - (correctly_classified / total_classified)
Part 4. Model Analysis
After the model is created, understanding the relationship between variables and number of trees is important. R makes it easy to plot the errors of the model as the number of trees increase. This
allows users to trade off between more trees and accuracy or fewer trees and lower computational time.
fit <- randomForest(Species ~ ., train, ntree=500)
results <- predict(fit, test, type="response")
# Rank the input variables based on their effectiveness as predictors
# To understand the error rate lets plot the model's error as the number of trees increases
Part 5. Handling Missing Values
The last section of this tutorial involves one of the most time consuming and important parts of the data analysis process, missing variables. Very few machine learning algorithms can handle missing
data in the data. However the randomForest package contains one of the most useful functions of all time, na.roughfix(). Na.roughfix() takes the most common factor in that column and replaces all the
NAs with it. For this section we will first create some NAs in this data set and then replace them and run the prediction algorithm.
# Create some NA in the data.
iris.na <- iris for (i in 1:4)
iris.na[sample(150, sample(20)), i] <- NA
# Now we have a dataframe with NAs
#Adding na.action=na.roughfix
#For numeric variables, NAs are replaced with column medians.
#For factor variables, NAs are replaced with the most frequent levels (breaking ties at random)
iris.narf <- randomForest(Species ~ ., iris.na, na.action=na.roughfix)
results <- predict(iris.narf, train, type="response")
Congratulations! You now know how to create machine learning models, fit data using those models, test the model’s accuracy and display it in a confusion matrix, how to validate the model, and
quickly replace missing variables. All of these are the basic fundamental skills in machine learning!
The purpose of this tutorial is to serve as a introduction to the randomForest package in R and some common analysis in machine learning.
Part 1. Getting Started
First step, we will load the package and iris data set. The data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant. One class is linearly separable from the
other 2; the latter are NOT linearly separable from each other.
Part 2. Fit Model
Now that we know what our data set contains, let fit our first model. We will be fitting 500 trees in our forest and trying to classify the Species of each iris in the data set. For the randomForest
() function, "~." means use all the variables in the data frame.
Note: a common mistake, made by beginners, is trying to classify a categorical variable that R sees as a character. To fix this, convert the variable to a factor like this randomForest(as.factor
(Species) ~ ., iris, ntree=500)
fit <- randomForest(Species ~ ., iris, ntree=500)
The next step is to use the newly create model in the fit variable and predict the label.
results <- predict(fit, iris)
After you have the predicted labels in a vector (results), the predict and actual labels must be compared. This can be done with a confusion matrix. A confusion matrix is a table of the actual vs the predicted with the diagonal numbers being correctly classified elements while all others are incorrect.
# Confusion Matrix
table(results, iris$Species)
Now we can take the diagonal points in the table and sum them, this will give us the total correctly classified instances. Then dividing this number by the total number of instances will calculate
the percentage of prediction correctly classified. <- -="" 1="" accuracy="" correctly_classified="" div="" error="" iris="" length="" pecies="" results="" style="overflow: auto;" table=""
# Calculate the accuracy
correctly_classified <- table(results, iris$Species)[1,1] + table(results, iris$Species)[2,2] + table(results, iris$Species)[3,3]
total_classified <- length(results)
# Accuracy
correctly_classified / total_classified
# Error
1 - (correctly_classified / total_classified)
Part 3. Validate Model
The next step is to validate the prediction model. Validation requires splitting your data into two sections. First, the training set, which will be used to create the model. The second will be the
test set and will test the accuracy of the prediction model. The reasoning for splitting the data is to allow a model to be created using one data set and then reserving some data, where the output
is already known, to "test" the model accuracy. This more effectively estimates the accuracy of the model by not using the same data used to create the model and predict the accuracy.
# How to split into a training set
rows <- nrow(iris) col_count <- c(1:rows)
Row_ID <- sample(col_count, rows, replace = FALSE)
iris$Row_ID <- Row_ID
# Choose the percent of the data to be used in the training
data training_set_size = .80
#Now to split the data into training and test
index_percentile <- rows*training_set_size
# If the Row ID is smaller then the index percentile, it will be assigned into the training set
train <- iris[iris$Row_ID <= index_percentile,]
# If the Row ID is larger then the index percentile, it will be assigned into the training set
test <- iris[iris$Row_ID > index_percentile,]
train_data_rows <- nrow(train)
test_data_rows <- nrow(test)
total_data_rows <- (nrow(train)+nrow(test)) train_data_rows / total_data_rows
# Now we have 80% of the data in the training set test_data_rows / total_data_rows
# Now we have 20% of the data in the training set
# Now lets build the randomforest using the train data set
fit <- randomForest(Species ~ ., train, ntree=500)
After the test set is predicted, a confusion matrix and accuracy must be calculated.
# Use the new model to predict the test set
results <- predict(fit, test, type="response")
# Confusion Matrix
table(results, test$Species)
# Calculate the accuracy
correctly_classified <- table(results, test$Species)[1,1] + table(results, test$Species)[2,2] + table(results, test$Species)[3,3] total_classified <- length(results)
# Accuracy
correctly_classified / total_classified
# Error
1 - (correctly_classified / total_classified)
Part 4. Model Analysis
After the model is created, understanding the relationship between variables and number of trees is important. R makes it easy to plot the errors of the model as the number of trees increase. This
allows users to trade off between more trees and accuracy or fewer trees and lower computational time.
fit <- randomForest(Species ~ ., train, ntree=500)
results <- predict(fit, test, type="response")
# Rank the input variables based on their effectiveness as predictors
# To understand the error rate lets plot the model's error as the number of trees increases
Part 5. Handling Missing Values
The last section of this tutorial involves one of the most time consuming and important parts of the data analysis process, missing variables. Very few machine learning algorithms can handle missing
data in the data. However the randomForest package contains one of the most useful functions of all time, na.roughfix(). Na.roughfix() takes the most common factor in that column and replaces all the
NAs with it. For this section we will first create some NAs in this data set and then replace them and run the prediction algorithm.
# Create some NA in the data.
iris.na <- iris for (i in 1:4)
iris.na[sample(150, sample(20)), i] <- NA
# Now we have a dataframe with NAs
#Adding na.action=na.roughfix
#For numeric variables, NAs are replaced with column medians.
#For factor variables, NAs are replaced with the most frequent levels (breaking ties at random)
iris.narf <- randomForest(Species ~ ., iris.na, na.action=na.roughfix)
results <- predict(iris.narf, train, type="response")
Congratulations! You now know how to create machine learning models, fit data using those models, test the model’s accuracy and display it in a confusion matrix, how to validate the model, and
quickly replace missing variables. All of these are the basic fundamental skills in machine learning! | {"url":"http://www.hallwaymathlete.com/2016/05/introduction-to-machine-learning-with.html","timestamp":"2024-11-05T13:59:39Z","content_type":"application/xhtml+xml","content_length":"267516","record_id":"<urn:uuid:3a03c682-2b23-4e3c-92a4-d8088fd55721>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00811.warc.gz"} |
The Permutation Groups on a Set X, SX
The Permutation Groups on a Set X, SX
Recall from the The Symmetric Groups, Sn page that if $\{ 1, 2, ..., n \}$ is the $n$-element set of positive integers and if $S_n$ denotes the set of all permutations on $\{ 1, 2, ..., n \}$ then $
(S_n, \circ)$ where $\circ$ is the operation of function composition defines a group called the symmetric group on $n$-elements.
Now suppose that $X$ is any set. We can analogously define a group on the set of permutations on $X$.
Definition: Let $X$ be any set and let $S_X$ denote the set of all permutations on $X$. Then $(S_X, \circ)$ is called the Permutation Group on $X$.
Note that if $X = \{ 1, 2, ..., n \}$ then $S_X = S_n$ is the permutation group on $n$-elements. If $X = A = \{ x_1, x_2, ..., x_n \}$ is a general $n$-element set then $S_X = S_A$ is a symmetric
group on a general $n$-element set $A$. Of course, what's more interesting is when $X$ is a countably infinite or uncountably infinite set.
For example, consider the set $X = \mathbb{Z}$. Then $S_{X} = S_{\mathbb{Z}}$ is the set of all permutations on $\mathbb{Z}$. For example, the following functions $\sigma : \mathbb{Z} \to \mathbb{Z},
\delta : \mathbb{Z} \to \mathbb{Z} \in S_{\mathbb{Z}}$ are permutations on $\mathbb{Z}$ as you should verify:
\quad \sigma (x) = x + 1
\quad \delta(x) = x - 1
Then the composition $\sigma \circ \delta$ is given for all $x \in \mathbb{Z}$ by
\quad (\sigma \circ \delta)(x) = \sigma(\delta(x)) = \sigma(x - 1) =(x - 1) + 1 = x
Therefore we see that $\sigma \circ \delta = i$ where $i$ is the identity permutation on $\mathbb{Z}$. | {"url":"http://mathonline.wikidot.com/the-permutation-groups-on-a-set-x-sx","timestamp":"2024-11-09T07:41:25Z","content_type":"application/xhtml+xml","content_length":"16031","record_id":"<urn:uuid:79ddb4a7-1692-488b-b0d0-f134c4d94253>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00730.warc.gz"} |
About the Authors:
Theory of Computing: An Open Access Electronic Journal in Theoretical Computer Science
About the Authors
Scott Aaronson is a theoretical computer scientist and blogger. This is his fourth paper in Theory of Computing.
Salman Beigi
received his B.Sc. at
Sharif University of Technology
, Tehran in 2004. He is currently finishing his Ph.D. at the MIT Math Department under the direction of Peter Shor. The title of his thesis is "Quantum Proof Systems and Entanglement Theory." He will
continue his research as a postdoc at the
Institute for Quantum Information
at Caltech. His interests include quantum complexity theory, quantum coding theory, photography, and playing
a traditional Persian musical instrument.
Andrew Drucker
is a Ph.D. student in theoretical computer science at
, supervised by Scott Aaronson. He has broad interests in complexity theory, and also enjoys running, live jazz, and the game of Go.
Bill Fefferman
is a Ph.D. student in computer science at
and at the
Institute for Quantum Information
. He started this research while visiting
, and continued it at the
University of Chicago
, where he was an undergraduate. His research interests are quantum computing and computational complexity.
Peter Shor is a professor at MIT. He is known for his factoring algorithm. | {"url":"http://toc.cse.iitk.ac.in/articles/v005a001/about.html","timestamp":"2024-11-02T13:50:14Z","content_type":"text/html","content_length":"6573","record_id":"<urn:uuid:41105d7c-1542-4e46-bba5-627c8bb7b2e8>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00051.warc.gz"} |
Homogenous Equations
Homogeneous equations are mathematical expressions in which every term has the same degree. This is a key concept in algebra. It assists in dealing with polynomial equations and differential
In Algebra
• Polynomial Example: An equation such as Ax^2 + By^2+ Cxy + Dzy + Exz + Fz^2 = 0 (where A, B, C, D, E and F are constants) is homogeneous of degree 2 because each term involves variables, the
index of which, when added together, equals 2.
• Homogeneous Functions: A function f (x,y,z,…) is called homogeneous of degree n if, for any scalar λ, the function satisfies f (λx,λy,λz,…) = {λ^n}f (x,y,z,…). This property is extensively used
in the study of scale-invariant phenomena.
In Differential Equations
Homogeneous equations also appear prominently in the study of differential equations:
• Ordinary Differential Equations (ODEs): A homogeneous differential equation (also [1]) is one where the function and its derivatives are proportionally scaled. For instance, the equation y'' + p(
x) y' + q(x) y = 0 is a linear homogeneous ODE if p(x) and q(x) are functions of x only.
• Partial Differential Equations (PDEs): Similarly, a PDE such as u_{xx} + u_{yy} = 0 (Laplace’s equation) is homogeneous because all terms involve derivatives of the same order, and there are no
free terms (non-derivative terms).
Application in Geometry
In projective geometry, homogeneous equations are used to describe geometric figures in a projective space using homogeneous coordinates:
• Projective Curves and Surfaces: An equation such as Ax^2 + Bxy + Cy^2 + Dxz + Eyz + Fz^2 = 0 (where A, B, C, D, E and F are constants) is used in projective geometry to define curves and
surfaces. Here, x,y,z are not usual Cartesian coordinates but homogeneous coordinates, where the actual location in projective space is represented as a ratio.
Properties and Uses
Homogeneous equations are particularly valued for their symmetry and invariance properties, making them crucial in a wide range of fields. They facilitate the analysis of systems where scaling and
proportionality plays a significant role. They are fundamental in situations where absolute sizes are irrelevant, and only ratios matter.
Understanding and solving homogeneous equations involves techniques that exploit their symmetrical properties, such as using substitution methods in differential equations or projective
transformations in geometry. This symmetry often allows for simplification in mathematical modelling and problem-solving.
Homogeneous Equations and Elliptic curves
Converting an Elliptic Curve, in the Weierstrass form (in 2 dimensions - x and y) into a Homogenous equation (involving 3 dimensions - x, y, and z), allows that equation to be manipulated, and
analysed, in terms of its partial derivatives, in ways that then allow its properties to become clearer than they otherwise would have been (Washington, L. C., 2008, pp-19-20, Section 2.3). | {"url":"https://www.creativearts.com.au/article/homogenous-equations","timestamp":"2024-11-06T09:05:49Z","content_type":"text/html","content_length":"1050512","record_id":"<urn:uuid:0c491f17-e5ca-4f58-9813-ff18ce776aad>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00299.warc.gz"} |
Question on Circular needles
I haven’t knit on circulars in a while, but I’m gearing up for Christmas present knitting and I’m thinking of making flower-shaped washcloths for some of the ladies and wrapping them around nice
soaps, etc… You know, real girlie gifts?!
Anyway, here is a link to the pattern: http://whimsicalknitting.home.comcast.net/~whimsicalknitting/flowerpowerwashclothsARBM.pdf
My question is - what size cable should I use? There is a total of 90 stitches once you join the petals. AND, do I have to use 2 circulars? I’ve never done that before. It doesn’t say in her pattern
and I’m not familiar enough with work on circulars to know what the right size is. I have to buy one anyway because the smallest size circ I have is a US10.5 16".
Can anyone help? Thanks in advance. | {"url":"https://forum.knittinghelp.com/t/question-on-circular-needles/52443","timestamp":"2024-11-14T03:59:26Z","content_type":"text/html","content_length":"21991","record_id":"<urn:uuid:47bb6e7a-5940-42b4-827b-7c9b3629c394>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00533.warc.gz"} |
Undergraduate Programme and Module Handbook 2008-2009 (archived)
Module FOUN0361: CORE FOUNDATION MATHS (COMBINED)
Department: Foundation Year [Queen's Campus, Stockton]
FOUN0361: CORE FOUNDATION MATHS (COMBINED)
Type Open Level 0 Credits 20 Availability Available in 2008/09 Module Cap None. Location Queen's Campus Stockton
Excluded Combination of Modules
• Core Foundation Maths 1 and Core foundation Maths 2.
• To improve confidence in algebraic manipulation through the study of mathematical techniques and development of investigative skills.
• To introduce and develop a knowledge of logarithms and their uses.
• To introduce and develop a knowledge of trigonometry.
• To introduce and develop understanding of a range of standard techniques for differentiation and integration.
• To include trigonometric and logarithmic functions.
• Quadratic equations, factorisation, graphs, quadratic formula.
• Trigonometry, sine, cosine, tangent.
• Sequences and Series , Arithmetic, geometric, use of sigma notation.
• Indices and Logarithms: laws, solution of equations.
• Reduction of a given relation to linear form, graphical determination of constants.
• Rate of change, increasing/decreasing functions, maxima and minima.
• Differentiation of: algebraic polynomials ,composite functions (chain rule), sum, product or quotient of two functions, trigonometric and exponential functions.
• Evaluation of integrals by using standard forms, substitution, partial fractions or integration by parts.
• Second derivatives of standard functions.
• Binomial expansion of (a+b)(to the power n) for positive integer n.
• Factor theorem.
Learning Outcomes
Subject-specific Knowledge:
• By the end of this module the student will have acquired the knowledge to be able to:
• confidently manipulate a range of algebraic expressions as needed in a variety of contexts.
• use logarithms to solve problems and to predict relationships from graphs.
• differentiate and integrate a number of different types of functions.
Subject-specific Skills:
• By the end of this module the student will have acquired the skills to be able to:
• recall, select and use knowledge of appropriate integration and differentiation techniques as needed in a variety of contexts.
• confidently manipulate a range of algebraic expressions and use a range of techniques as required in problems appropriate to the syllabus.
Key Skills:
• By the end of the module students will be able to:
• communicate effectively in writing.
• be able to apply number both in the tackling of numerical problems and in the collecting, recording, interpreting and presenting of data.
• be able to demonstrate problem solving skills.
Modes of Teaching, Learning and Assessment and how these contribute to the learning outcomes of the module
• Theory, initial concepts and techniques will be introduced during lectures and seminars.
• Much of the learning, understanding and consolidation will take place through the use of structured exercise during seminar and tutorial sessions and students own time.
• Manipulative skills and ability to recall, select and apply mathematics including calculus will be assessed by an end of module test and a portfolio of tasks including some short invigilated
tests and solutions to questions set on a weekly basis.
• Logarithms and prediction of relationships from graphs will be consolidated and assessed within a coursework task.
Teaching Methods and Learning Hours
Activity Number Frequency Duration Total/Hours
Lectures 11 Weekly 2 22 ■
Seminars 11 Weekly 2 22 ■
Tutorials 11 Weekly 2 22 ■
Preparation and Reading 34
Total 100
Summative Assessment
Component: Invigilated Test Component Weighting: 50%
Element Length / duration Element Weighting Resit Opportunity
Invigilated Test 2 hours 100%
Component: Potfolio of Tests and Coursework Component Weighting: 50%
Element Length / duration Element Weighting Resit Opportunity
Class Test 50%
Coursework 50%
Formative Assessment:
Students will be given self testing units on a weekly basis.
■ Attendance at all activities marked with this symbol will be monitored. Students who fail to attend these activities, or to complete the summative or formative assessment specified above, will be
subject to the procedures defined in the University's General Regulation V, and may be required to leave the University | {"url":"https://apps.dur.ac.uk/faculty.handbook/2008/UG/module/FOUN0361","timestamp":"2024-11-10T18:44:22Z","content_type":"text/html","content_length":"9620","record_id":"<urn:uuid:7cede6b5-0f9d-4474-84c4-9c9cca30a15f>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00888.warc.gz"} |
The DHS Program User Forum: Dataset use in Stata » Applying weights to multilevel hazard analysis using Cox regression
Home » Data » Dataset use in Stata » Applying weights to multilevel hazard analysis using Cox regression
Show: Today's Messages :: Show Polls :: Message Navigator
Applying weights to multilevel hazard analysis using Cox regression [message #23946] Wed, 19 January 2022 16:18
Messages: 26
Registered: June 2020
Hello everyone,
I'm running a Stata code for frailty survival model (multilevel hazard analysis) using 2018 Nigeria Demographic and Health Survey Data.
I wrote the design code using the guide provided on this forum:
svyset v021, weight(wt2_1) strata(v022) , singleunit(centered) || _n, weight(wt1_1)
But when I applied the weight in the code below, it returned an error message that 'option shared() was not allowed with the svy prefix'
svy: stcox i.w_quintile elect i.anc i.v024 v025 poverty_prop sec_edu_prop, efron shared(v021) || v021:
Please how do I apply weight when running a multilevel hazard analysis using Cox regression?
Thank you.
Re: Applying weights to multilevel hazard analysis using Cox regression [message #23954 is a reply to message #23946] Thu, 20 January 2022 15:54
Messages: 3185
Registered: February 2013
Following is a response from DHS Research & Data Analysis Director, Tom Pullum:
We cannot provide support for this command. However, I'll say that there are probably just two possibilities. The first is that you have a problem with the svyset command. Please check it against the
syntax in DHS Methodological Report #27 (https://www.dhsprogram.com/pubs/pdf/MR27/MR27.pdf). The other possibility is that the command (with the "shared" option) simply will not work (as it is
currently programmed by Stata) with this multilevel svyset command. That sort of thing can happen. Many commands have become more flexible in successive releases of Stata.
If the command works without the shared option, then the syntax of svyset must be ok. If so, you will have to ask Stata or the Stata forum about the shared option.
Re: Applying weights to multilevel hazard analysis using Cox regression [message #23956 is a reply to message #23954] Thu, 20 January 2022 17:19
Messages: 26
Registered: June 2020
Thanks Bridgette and Tom for the response.
The only difference from my syntax and the one in DHS methodological report #27 is the psu. I understand I can either use v001 or v021 as the psu. Correct me if I'm wrong please.
The svyset command executed and showed this result:
'Note: Stage 1 is sampled with replacement; further stages will be ignored for variance
pweight: <none>
VCE: linearized
Single unit: centered
Strata 1: v022
SU 1: v021
FPC 1: <zero>
Weight 1: wt2_1
Strata 2: <one>
SU 2: <observations>
FPC 2: <zero>
Weight 2: wt1_1'
When I run the syntax "svy: stcox i.w_quintile elect i.anc i.v024 v025 poverty_prop sec_edu_prop || v021:" without the shared option it returns this error message:
'variable wt1_1*wt2_1 not found'.
And then when I run it without the svy option 'stcox i.w_quintile elect i.anc i.v024 v025 poverty_prop sec_edu_prop, efron shared(v021)' it executes successfully, even though it takes a long time to
do that.
I also observed that I could run this 'svy: melogit inf_death i.w_quintile elect || v021:' successfully, although that is not what I'm using in my analysis.
Could it be that 'svy' is not needed when running a Cox multilevel regression?
I have also posted the question on Stata forum.
Re: Applying weights to multilevel hazard analysis using Cox regression [message #23957 is a reply to message #23956] Fri, 21 January 2022 08:05
Messages: 3185
Registered: February 2013
Following is a response from DHS Research & Data Analysis Director, Tom Pullum:
Yes, for the cluster ID you can use either v021 and v001. When you get a Stata error message such as "variable wt1_1*wt2_1 not found" it means you have tried to do some algebra that is not allowed
within a command. You would just need a line such as "gen wt_1= wt1_1*wt2_1" and then insert "wt_1" where you may have had "wt1_1*wt2_1". However, there may be a bigger issue. There are some
instances in Stata in which two options are incompatible with each other, and you don't get an error message telling you that's why the command will not work.
It's very important to include as much of svyset as you can. I would try simplifying svyset, and/or simplifying the cox options, until you get a combination that works. Then proceed with your
analysis. You will want to get as close as possible to your ideal combination of svyset and cox, but you may not be able to get all the way there.
Re: Applying weights to multilevel hazard analysis using Cox regression [message #23964 is a reply to message #23957] Fri, 21 January 2022 15:17
Messages: 26
Registered: June 2020
Thanks for the follow-up response.
But since I used this "svyset v021, weight(wt2_1) strata(v022) , singleunit(centered) || _n, weight(wt1_1)" to apply the survey design, I'm wondering where I'm supposed to apply this
"gen wt_1= wt1_1*wt2_1". Given that I don't have "wt1_1*wt2_1" anywhere in my command.
Re: Applying weights to multilevel hazard analysis using Cox regression [message #23979 is a reply to message #23964] Mon, 24 January 2022 08:53
Messages: 3185
Registered: February 2013
Following is another response from DHS Research & Data Analysis Director, Tom Pullum:
If you don't have "wt1_1*wt2_1" anywhere in your code, then I can't account for the error statement. The product of those two level-weights would be the net weight, proportional to v005. I will ask
someone else to help -- I have no more suggestions.
Re: Applying weights to multilevel hazard analysis using Cox regression [message #23985 is a reply to message #23979] Mon, 24 January 2022 14:42
Messages: 26
Registered: June 2020
Hello Tom,
Thank you for your suggestions so far. Much appreciated.
Previous Topic: SUEST
Next Topic: underfive children HW data: why missing a lot in IR files
Goto Forum:
Current Time: Sun Nov 3 07:29:07 Coordinated Universal Time 2024 | {"url":"https://userforum.dhsprogram.com/index.php?t=msg&th=11480&goto=23979&","timestamp":"2024-11-03T12:29:07Z","content_type":"text/html","content_length":"36339","record_id":"<urn:uuid:dde94fc0-cb0c-44ae-804e-8018d3534c6f>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00391.warc.gz"} |
Composite plate damage localization based on modal parameters
A damage localization method based on natural frequency was proposed in order to complete the nondestructive diagnosis of a composite plate. The relationship between the damage position and the
natural frequency of the composite plate was studied both qualitatively and quantitatively. Furthermore, the damage localization method proposed in this paper was proved by simulation and
experimentation, with the results showing that this method can locate the position where the stiffness of the composite plate decreased in both simulation and experiment. Finally, the damage
localization method based on natural frequency can be applied for non-destructive diagnosis of a delamination composite plate.
1. Introduction
Composite plates are widely used in the shipbuilding industry because of their high strength-to-weight ratio, good shock absorption, safety, and molding process. When the composite plates are damaged
[1], their reliability will decline seriously and may even lead to a structural failure, which has far-reaching effects. As a result, the damage diagnosis of composite plates has become a highly
important task.
Due to the fact that the global strain region changes very little, and these changes are accompanied with local cracking. The strain is only noticeable at the crack tip. So the strain-based methods
were proposed for the prediction of crack origin and development [2-4]. Guided waves are now researched in the diagnosis of composite plates [5, 6]. The mechanism is straightforward: a PZT embedded
into a composite plate launches an ultrasonic pulse that propagates as an elastic wave, which can be received by other PZTs [7, 8]. When the received signals are compared to signals previously
received, the signal distortion shall bear evidence of the destruction of the composite plate.
According to previous researchers, a damage function composed of a natural frequency variation ratio was proposed in this study, and the following research was completed with the damage function. The
finite element method (FEM) was used to validate the damage localization using a damage function. After validation, a non-destructive diagnosis of delamination composite plates with a damage function
was applied to prove the experiment method.
2. Damage location theory of composite plate
The damage of composite plates often leads to a decrease in local stiffness, which causes natural frequency variation [9]. The degree $∆K$ and position $\stackrel{\to }{r}$ of stiffness reduction
have an impact on the natural frequency of the composite plate. Thus:
$∆{\omega }_{i}=f\left(∆K,\stackrel{\to }{r}\right),$
where $∆{\omega }_{i}$ is the variation value of the i-th natural frequency of the composite plate after damage.
Expand the function Eq. (1) to the undamaged state and ignore the second-order terms:
$∆{\omega }_{i}={f}_{i}\left(0,\stackrel{\to }{r}\right)+∆K\frac{\partial {f}_{i}\left(0,\stackrel{\to }{r}\right)}{\partial \left(∆K\right)}.$
Obviously, ${f}_{i}\left(0,\stackrel{\to }{r}\right)=0$. Hence:
$∆{\omega }_{i}=∆K{g}_{i}\left(\stackrel{\to }{r}\right).$
$∆{\omega }_{j}=∆K{g}_{j}\left(\stackrel{\to }{r}\right).$
By eliminating the change of stiffness $∆K$ from Eqs. (3-4):
$\frac{∆{\omega }_{i}}{∆{\omega }_{j}}=\frac{{g}_{i}\left(\stackrel{\to }{r}\right)}{{g}_{j}\left(\stackrel{\to }{r}\right)}.$
Eq. (5) shows that the ratio of frequency variations in two modes is only a function of the damage location. Because the natural frequency is easy to measure, the conclusion is of great significance
to the location of composite plate damage.
3. Dynamic analysis of damaged composite plate
In this section, the relationship between local damage and natural frequency of a composite plate was studied. If to ignore the influence of the environment and damping, it is possible get the
dynamic equation of a composite plate:
$\left(K-{\omega }^{2}M\right)\phi =0,$
where $K$ and $M$ are the global stiffness matrix and mass matrix of the composite plate, and $\omega$ and $\phi$ are the natural frequency and mode shape. The effect of the stiffness matrix change
shall be considered; Eq. (6) then becomes:
$\left\{\left(K+∆K\right)-\left({\omega }^{2}+∆{\omega }^{2}\right)M\right\}\left(\phi +∆\phi \right)=0.$
Eq. (7) reduces to:
$∆{\omega }^{2}=\frac{{\phi }^{T}∆K\phi }{{\phi }^{T}M\phi }.$
Eq. (8) shows the relationship between the change of the global stiffness matrix and the variation of natural frequency. The global mass matrix $M$ and the mode shape $\phi$ can be calculated by the
FEM, and the change of natural frequency can be expressed by Eq. (8).
4. Damage function for diagnosis
The variation of the $i$th natural frequency can be calculated with Eq. (8) if the degree and position of the damage are known. However, the damage degree of a composite plate is frequently unknown
in engineering. As a result, Eq. (5) can be used to find the damage position of a composite plate.
The diagnosis process is as follows:
(1) Divide the composite plate into several elements. When an element is damaged, according to Eq. (8), the reference values for the natural frequency variation ratio are:
${s}_{ijk}=\sqrt{\frac{{{\phi }_{i}}^{T}∆{K}_{k}{\phi }_{i}}{{{\phi }_{i}}^{T}M{\phi }_{i}}/\frac{{{\phi }_{j}}^{T}∆{K}_{k}{\phi }_{j}}{{{\phi }_{j}}^{T}M{\phi }_{j}}}.$
(2) Define the damage function by assuming the damage to be at position $k$, given frequency changes $∆{\omega }_{i}$ and $∆{\omega }_{j}$ in modes $i$ and $j$, respectively, as:
${ER}_{k}=\sum _{i=1}^{n-1}\sum _{j=i+1}^{n}\left|\frac{∆{\omega }_{i}/∆{\omega }_{j}-{s}_{ijk}}{∆{\omega }_{i}/∆{\omega }_{j}}\right|.$
(3) Calculate the damage function of all elements by Eq. (10). The element with the minimum damage function is the damaged element.
5. Simulation Validation
In this section, a composite plate damaged with delamination is determined using the FEM, and the composite plate damage localization method based on the damage function is verified.
Fig. 1(a) shows the geometric design of a composite plate (a).
Fig. 1Design of finite element model
a) Geometric design of composite plate
b) Finite element mesh generation of delamination damage of composite plate
The composite plate is divided into six areas, and the BC area has the fixed boundary condition that is used to eliminate the geometric symmetry of the composite plate. The composite plate is made of
two materials: PVC foam and epoxy e-glass.
The diagnosis process begins with establishing a finite element model of the composite plate and extracting its mass matrix. The finite element model is then modally analyzed to obtain natural
frequencies and mode shapes. Assuming that the damage reduces the stiffness of the local area by 50 %, the reference values of the natural frequency variation ratio may be calculated by Eq. (9), and
the results are shown in Table 1.
Table 1Reference values of natural frequency variation ratio
$k=$ 1 $k=$ 2 $k=$ 3 $k=$ 4 $k=$ 5 $k=$ 6
${s}_{12k}$ 5.39 13.61 105.00 4.00 4.18 35.00
${s}_{13k}$ 4.56 41.25 320.00 2.00 33.45 170.00
${s}_{14k}$ 3.05 68.61 1215.00 15.38 155.45 2180.00
${s}_{23k}$ 0.85 3.03 3.05 0.50 8.00 4.86
${s}_{24k}$ 0.57 5.04 11.57 3.84 37.17 62.29
${s}_{34k}$ 0.67 1.66 3.80 7.69 4.65 12.82
The stiffness reduction is not a rigorous method for simulating delamination damage of composite plates. The delamination damage is modeled using a volume split method with finite element nodes
separated by a small distance across the damage surface. In this simulation, the damage is set in Area 2 as shown in Fig. 1(b).
Table 2Natural frequencies of the composite plate before and after delamination damage
Natural frequency / Hz First order Second order Third order Fourth order
No damage 5.8603 27.316 39.950 85.438
Damaged 5.7852 26.716 38.362 82.324
Table 3Damage functions of composite plate after delamination damage
$k$ 1 2 3 4 5 6
$D{F}_{k}$ 4.27 2.63 56.90 6.03 13.36 79.37
Modal analysis is used to calculate the natural frequencies before and after delamination damage as shown in Table 2. The damage functions that are calculated in Eq. (10) are shown in Table 3. The DF
[2] is the minimum of all the damage functions, indicating that the damage is in Area 2 and that the damaged area is correctly located. The simulation results show that the damage localization method
proposed in this paper can be used for delamination damage localization of composite plates.
6. Experimental validation
An experiment is made to further verify the damage localization method. Fig. 2 shows the experiment platform. The sample is a 0.4 m×0.4 m composite plate that was customized by a yacht factory.
The Simcenter testlab system was adopted for this experiment, and the test method was called the impact testing method. A corner of the composite plate is fixed to eliminate the geometric symmetry.
The composite plate is damaged after the first impact test, with the damaged area located in the upper right corner. The second impact test is then carried out. Fig. 3 shows the test results. Natural
frequencies can be found in the SUM function.
Fig. 2Experiment platform and delamination damage of composite plate
Fig. 3SUM function of composite plate before and after delamination
The composite plate is diagnosed twice to obtain more accurate test results. First, the composite plate is divided into four areas as shown in Fig. 4(a). The damage functions are shown in Table 4.
Table 4 shows that the damage is located in Area 2. Then Area 2 can be divided into four areas as shown in Fig. 4(b). As per Table 5, the damage is located in Area 22. The experiment demonstrates
that the delamination damage of a composite plate may be located by using the damage function.
Fig. 4Area division for diagnosis
Table 4Damage functions in first diagnosis
$k$ 1 2 3 4
$D{F}_{k}$ 5.51 5.10 5.87 14.60
Table 5Damage functions in second diagnosis
$k$ 21 22 23 24
$D{F}_{k}$ 6.41 5.91 6.82 15.52
7. Conclusions
It can be seen that a damage of a composite plate leads to a decrease in local stiffness, resulting in a difference in natural frequencies. However, the natural frequency variation ratio is mainly
sensitive to the damage position.
A damage function has been proposed, and the damage to a composite plate can be located by comparing the values of the damage functions.
Through the simulation and experimental verification, the damage function can be used to locate the delamination damage of the composite plate.
• R. Di Sante, “Fibre optic sensors for structural health monitoring of aircraft composite structures: recent advances and applications,” Sensors, Vol. 15, No. 8, pp. 18666–18713, Jul. 2015, https:
• M. Mulle, A. Yudhanto, G. Lubineau, R. Yaldiz, W. Schijve, and N. Verghese, “Internal strain assessment using FBGs in a thermoplastic composite subjected to quasi-static indentation and
low-velocity impact,” Composite Structures, Vol. 215, pp. 305–316, May 2019, https://doi.org/10.1016/j.compstruct.2019.02.085
• C. Andreades, P. Mahmoodi, and F. Ciampa, “Characterisation of smart CFRP composites with embedded PZT transducers for nonlinear ultrasonic applications,” Composite Structures, Vol. 206, pp.
456–466, Dec. 2018, https://doi.org/10.1016/j.compstruct.2018.08.083
• Wang Rong et al., “Evaluation of composite matrix crack using nonlinear ultrasonic Lamb wave detected by fiber Bragg grating,” (in Chinese), Aeronautical Manufacturing Technology, Vol. 64, No.
21, pp. 51–56, 2021, https://doi.org/10.16080/j.issn1671-833x.2021.21.051
• A. Güemes, A. Fernandez-Lopez, A. R. Pozo, and J. Sierra-Pérez, “Structural health monitoring for advanced composite structures: a review,” Journal of Composites Science, Vol. 4, No. 1, p. 13,
Jan. 2020, https://doi.org/10.3390/jcs4010013
• G. Liu, Y. Xiao, H. Zhang, and G. Ren, “Elliptical ring distribution probability-based damage imaging method for complex aircraft structures,” Journal of Vibroengineering, Vol. 19, No. 7, pp.
4936–4952, Nov. 2017, https://doi.org/10.21595/jve.2017.17337
• Z.-B. Yang, M.-F. Zhu, Y.-F. Lang, and X.-F. Chen, “FRF-based lamb wave phased array,” Mechanical Systems and Signal Processing, Vol. 166, p. 108462, Mar. 2022, https://doi.org/10.1016/
• C. Fendzi, N. Mechbal, M. Rébillat, M. Guskov, and G. Coffignal, “A general Bayesian framework for ellipse-based and hyperbola-based damage localization in anisotropic composite plates,” Journal
of Intelligent Material Systems and Structures, Vol. 27, No. 3, pp. 350–374, Feb. 2016, https://doi.org/10.1177/1045389x15571383
• P. Cawley and R. D. Adams, “The location of defects in structures from measurements of natural frequencies,” The Journal of Strain Analysis for Engineering Design, Vol. 14, No. 2, pp. 49–57, Apr.
1979, https://doi.org/10.1243/03093247v14204
About this article
Modal analysis and applications
composite plate
modal test
non-destructive diagnosis
This work was supported by the Science and Technology Project of the Fujian Province (Nos. 2020H0018, 2021H0020).
Copyright © 2022 Jiayu Cao, et al.
This is an open access article distributed under the
Creative Commons Attribution License
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | {"url":"https://www.extrica.com/article/22382","timestamp":"2024-11-11T23:54:47Z","content_type":"text/html","content_length":"118705","record_id":"<urn:uuid:e86aa3e3-a62e-4a91-b343-7ac060dc5cc4>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00747.warc.gz"} |
Alexa Ryder - OpenGenus IQ: Learn Algorithms, DL, System Design
Bellman-Ford Algorithm is an algorithm for single source shortest path where edges can be negative (but if there is a cycle with negative weight, then this problem will be NP). The credit of
Bellman-Ford Algorithm goes to Alfonso Shimbel, Richard Bellman, Lester Ford and Edward F. Moore. | {"url":"https://iq.opengenus.org/author/alexa/","timestamp":"2024-11-11T08:23:57Z","content_type":"text/html","content_length":"51210","record_id":"<urn:uuid:4dd15c73-21c1-45b0-b5bb-9c689c5c486a>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00309.warc.gz"} |
12-16-14 - Daala PVQ Emails
First, I want to note that the
PVQ Demo page
has good links at the bottom with more details in them, worth reading.
Also, the Main Daala page has more links, including the "Intro to Video" series, which is rather more than an intro and is a good read. It's a broad survey of modern video coding.
Now, a big raw dump of emails between me, ryg, and JM Valin. I'm gonna try to color them to make it a bit easier to follow. Thusly :
And this all starts with me being not very clear on PVQ so the beginning is a little fuzzy.
I will be following this up with a "summary of PVQ as I now understand it" which is probably more useful for most people. So, read that, not this.
(also jebus the internet is rindoculuos. Can I have BBS's back? Like literally 1200 baud text is better than the fucking nightmare that the internet has become. And I wouldn't mind playing a little
Trade Wars once a day...)
Hi, I'm the author of the latest Daala demo on PVQ on which you commented recently. Here's some comments on your comments. I wasn't able to submit this to your blog, but feel free to copy them there.
> 1. I've long believed that blocks should be categorized into > "smooth", "detail" and "edge". For most of this discussion we're > going to ignore smooth and edge and just talk about detail. That
is, > blocks that are not entirely smooth, and don't have a dominant edge > through them (perhaps because that edge was predicted). I agree here, although right now we're treating "smooth" and
"detail" in the same way (smooth is just low detail). Do you see any reason to treat those separately? > 2. The most important thing in detail blocks is preserving the > amount of energy in the
various frequency subbands. This is something > that I've talked about before in terms of perceptual metrics. This is exactly what I had in mind with this PVQ work. Before Daala, I worked on the CELT
part of the Opus codec, which has strict preservation of the energy. In the case of Daala, it looks so far like we want to relax the energy constraint a little. Right now, the codebook has an
energy-preserving structure, but the actual search is R/D optimized with the same weight given to the amount of energy and its location. It's pretty easy to change the code to give more weight to
energy preservation. I could even show you how to play with it if you're interested. > 3. You can take a standard type of codec and optimize the encoding > towards this type of perceptual metric, and
that helps a bit, but > it's the wrong way to go. Because you're still spending bits to > exactly specify the noise in the high frequency area. Correct, hence the gain-shape quantization in my post.
> 4. What you really want is a joint quantizer of summed energy and > the distribution of that energy. At max bit rate you send all the > coefficients exactly. As you reduce bitrate, the sum is
preserved > pretty well, but the distribution of the lower-right (highest > frequency) coefficients becomes lossy. As you reduce bit rate more, > the total sum is still pretty good and the overall
distribution of > energy is mostly right, but you get more loss in where the energy is > going in the lower frequency subbands, and you also get more scalar > quantization of the lower frequency
subbands, etc. Well, my experience with both Opus and CELT is that you want the same resolution for the energy as you use for the "details". That being said, having an explicit energy still means you
can better preserve it in the quantization process (i.e. it won't increase or decrease too much due to the quantization). > 5. When the energy is unspecified, you'd like to restore in some nice >
way. That is, don't just restore to the same quantization vector > every time ("center" of the quantization bucket), since that could > create patterns. I dunno. Maybe restore with some randomness;
restore > based on prediction from the neighborhood; restore to maximum > likelihood? (ML based on neighborhood/prediction/image not just a > global ML) I experimented a little bit with adding noise
at the correct energy and while it slightly improved the quality on still images, it wasn't clear how to apply it to video because then you have the problem of static vs dynamic noise. > 6. An idea
I've tossed around for a while is a quadtree/wavelet-like > coding scheme. Take the 8x8 block of coefficients (and as always > exclude DC in some way). Send the sum of the whole thing. Divide into >
four children. So now you have to send a (lossy) distribution of that > sum onto the 4 child slots. Go to the upper left (LL band) and do it > again, etc. I considered something along these lines,
but it would not be easy to do because the lowest frequencies would kind of drown out the high frequencies. > 7. The more energy you have, the less important its exact > distribution, due to masking.
As you have more energy to distribute, > the number of vectors you need goes up a lot, but the loss you can > tolerate also goes up. In terms of the bits to send a block, it > should still increase
as a function of the energy level of that > block, but it should increase less quickly than naive > log2(distributions) would indicate. Yes, that is exactly what the PVQ companding does. > 8. Not all
AC's are equally likely or equally perceptually important. > Specifically the vector codebook should contain more entries that > preserve values in the upper-left (low frequency) area. This is the
equivalent of the quantization matrix, which PVQ has as well (though I didn't really talk about it). > 9. The interaction with prediction is ugly. (eg. I don't know how to > do it right). The nature
of AC values after mocomp or > intra-prediction is not the same as the nature of AC's after just > transform (as in JPEG). Specifically, ideas like variance masking and > energy preservation apply to
the transformed AC values, *not* to the > deltas that you typically see in video coding. Handling the prediction is exactly what the whole Householder reflection in PVQ is about (see the 6 steps
figure). The PVQ gain encoding scheme is always done on the input and not on the prediction. So the activity masking is applied on the input energy and not based on the energy of the residual. > 10.
You want to send the information about the AC in a useful order. > That is, the things you send first should be very strong classifiers > of the entropy of that block for coding purposes, and of the
masking > properties for quantization purposes. Well, coding the energy first achieves most of this. > You don't want sending the "category" or "masking" information to be > separate side-band data.
It should just be the first part of sending > the coefficients. So your category is maybe something like the > bit-vector of which coefficient groups have any non-zero > coefficients. Something like
that which is not redundant with sending > them, it's just the first gross bit of information about their > distribution. Well, the masking information is tied to the gain. For now, the category
information is only tied to the block size decision (based on the assumption that edges will be 4x4), but it's not ideal and it's something I'd like to improve. On the topic of lapped transform, this
has indeed been causing us all sorts of headaches, but it also has interesting properties. Jury's still out on that one, but so far I think we've managed to make reasonably good use of it. Cheers,
Thanks for writing! Before I address specific points, maybe you can teach me a bit about PVQ and how you use it? I can't find any good resources on the web (your abstract is rather terse). Maybe you
can point me at some relevant reference material. (the CELT paper is rather terse too!) Are you constructing the PVQ vector from the various AC's within a single block? Or gathering the same subband
from spatial neighbors? (I think the former, but I've seen the latter in papers) Assuming the former - Isn't it just wrong? The various AC's have different laplacian distributions (lower frequencies
more likely) so using PVQ just doesn't seem right. In particular PVQ assumes all coefficients are equally likely and equally distributed. In your abstract you seem to describe a coding scheme which
is not a uniform length codeword like traditional PVQ. It looks like it assigns shorter codes to vectors that have their values early on in some kind of z-scan order. How is K chosen?
Hi, On 02/12/14 08:52 PM, Charles Bloom wrote: > Thanks for writing! Before I address specific points, maybe you can > teach me a bit about PVQ and how you use it? I can't find any good > resources
on the web (your abstract is rather terse). Maybe you can > point me at some relevant reference material. (the CELT paper is rather > terse too!) I'm currently writing a longer paper for a conference
in February, but for now there isn't much more than the demo and the abstract I link to at the bottom. I have some notes that describe some of the maths, but it's a bit all over the place right now:
http://jmvalin.ca/video/video_pvq.pdf > Are you constructing the PVQ vector from the various AC's within a > single block? Or gathering the same subband from spatial neighbors? (I > think the former,
but I've seen the latter in papers) > > Assuming the former - Correct. You can see the grouping (bands) in Fig. 1 of: http://jmvalin.ca/video/spie_pvq_abstract.pdf > Isn't it just wrong? The various
AC's have different laplacian > distributions (lower frequencies more likely) so using PVQ just doesn't > seem right. > > In particular PVQ assumes all coefficients are equally likely and > equally
distributed. > > > In your abstract you seem to describe a coding scheme which is not a > uniform length codeword like traditional PVQ. It looks like it assigns > shorter codes to vectors that have
their values early on in some kind of > z-scan order. One thing to keep in mind if that the P in PVQ now stands for "perceptual". In Daala we are no longer using the indexing scheme from CELT (which
does assume identical distribution). Rather, we're using a coding scheme based on Laplace distribution of unequal variance. You can read more about the actual encoding process in another document:
http://jmvalin.ca/video/pvq_encoding.pdf > How is K chosen? The math is described (poorly) in section 6.1 of http://jmvalin.ca/video/video_pvq.pdf Basically, the idea is to have the same resolution
in the direction of the gain as in any other direction. In the no prediction case, it's roughly proportional to the gain times the square root of the number of dimensions. Because K only depends on
values that are available to the decoder, we don't actually need to signal it. Hope this helps, Jean-Marc
Thanks for the responses and the early release papers, yeah I'm figuring most of it out. K is chosen so that distortion from the PVQ (P = Pyramid) quantization is the same as distortion from gain
quantization. Presumably under a simple D metric like L2. The actual PVQ (P = Pyramid) part is the simplest and least ambiguous. The predictive stuff is complex. Let me make sure I understand this
correctly - You never actually make a "residual" in the classic sense by subtracting the prediction off. You form the prediction in transformed space. (perhaps by having a motion vector, taking the
pixels it points to and transforming them, dealing with lapping, yuck!) The gain of the current block is sent (for each subband). Not the gain of the delta. The gain of the prediction in the same
band is used as coding context? (the delta of the quantized gains could be sent). The big win that you guys were after in sending the gain seems to have been the non-linear quantization levels;
essentially you're getting "variance adaptive quantization" without explicitly sending per block quantizers. The Householder reflection is the way that vectors near the prediction are favored. This
is the only way that the predicted block is used!? Madness! (presumably if the prediction had detail that was finer than the quantization level of the current block that could be used to restore
within the quantization bucket; eg. for "golden frames")
On 03/12/14 12:18 AM, Charles Bloom wrote: > K is chosen so that distortion from the PVQ (P = Pyramid) quantization > is the same as distortion from gain quantization. Presumably under a > simple D
metric like L2. Yes, it's an L2 metric, although since the gain is already warped, the distortion is implicitly weighted by the activity masking, which is exactly what we want. > You never actually
make a "residual" in the classic sense by subtracting > the prediction off. Correct. > You form the prediction in transformed space. (perhaps by having a > motion vector, taking the pixels it points
to and transforming them, > dealing with lapping, yuck!) We have the input image and we have a predicted image. We just transform both. Lapping doesn't actually cause any issues there (unlike many
other places). As far as I can tell, this part is similar to what a wavelet coder would do. > The gain of the current block is sent (for each subband). Not the gain > of the delta. Correct. > The
gain of the prediction in the same band is used as > coding context? (the delta of the quantized gains could be sent). Yes, the gain is delta-coded, so coding "same gain" is cheap. Especially,
there's a special symbol for gain=0,theta=0, which means "skip this band and use prediction as is". > The big win that you guys were after in sending the gain seems to have > been the non-linear
quantization levels; essentially you're getting > "variance adaptive quantization" without explicitly sending per block > quantizers. Exactly. Not only that but it's adaptive based on the variance of
the current band, not just an entire macroblock. > The Householder reflection is the way that vectors near the prediction > are favored. This is the only way that the predicted block is used!? >
Madness! Well, the reference is used to compute the reflection *and* the gain. In the end, we're using exactly the same amount of information, just in a different space. > (presumably if the
prediction had detail that was finer than the > quantization level of the current block that could be used to restore > within the quantization bucket; eg. for "golden frames") Can you explain what
you mean here? Jean-Marc
So one thing that strikes me is that at very low bit rate, it would be nice to go below K=1. In the large high-frequency subbands, the vector dimension N is very large, so even at K=1 it takes a lot
of bits to specify where the energy should go. It would be nice to be more lossy with that location. It seems that for low K you're using a zero-runlength coder to send the distribution, with a kind
of Z-scan order, which makes it very similar to standard MPEG. (maybe you guys aren't focusing on such low bit rates; when I looked at low bit rate video the K=1 case dominated) At 09:42 PM 12/2/
2014, you wrote: > (presumably if the prediction had detail that was finer than the > quantization level of the current block that could be used to restore > within the quantization bucket; eg. for
"golden frames") Can you explain what you mean here? If you happen to have a very high quality previous block (much better than your current quantizer / bit rate should give you) - with normal mocomp
you can easily carry that block forward, and perhaps apply corrections to it, but the high detail of that block is preserved. With the PVQ scheme it's not obvious to me that that works. When you send
the quantized gain of the subbands you're losing precision (it looks like you guys have a special fudge to fix this, by offsetting the gain based on the prediction's gain?) But for the VQ part, you
can't really "carry forward" detail in the same way. I guess the reflection vector can be higher precision than the quantizer, so in a sense that preserves detail, but it doesn't carry forward the
same values, because they drift due to rotation and staying a unit vector, etc.
Some more questions - Is the Householder reflection method also used for Intra prediction? (do you guys do the directional Intra like H26x ?) How much of this scheme is because you believe it's the
best thing to do vs. you have to avoid H26x patents? If you're not sending any explicit per-block quantizer, it seems like that removes a lot of freedom for future encoders to do more sophisticated
perceptual optimization. (ROI bit allocation or whatever)
On 03/12/14 02:17 PM, Charles Bloom wrote: > So one thing that strikes me is that at very low bit rate, it would be > nice to go below K=1. In the large high-frequency subbands, the vector >
dimension N is very large, so even at K=1 it takes a lot of bits to > specify where the energy should go. It would be nice to be more lossy > with that location. Well, for large N, the first gain
step already has K>1, which I believe is better than K=1. I've considered adding an extra gain step with K=1 or below, but never had anything that was really worth it (didn't try very hard). > It
seems that for low K you're using a zero-runlength coder to send the > distribution, with a kind of Z-scan order, which makes it very similar > to standard MPEG. > > (maybe you guys aren't focusing
on such low bit rates; when I looked at > low bit rate video the K=1 case dominated) We're also targeting low bit-rates, similar to H.265. We're not yet at our target level of performance though. >
Is the Householder reflection method also used for Intra prediction? > (do you guys do the directional Intra like H26x ?) We also use it for intra prediction, though right now our intra prediction is
very limited because of the lapped transform. Except for chroma which we predict from the luma. PVQ makes this particularly easy. We just use the unit vector from luma as chroma prediction and code
the gain. > How much of this scheme is because you believe it's the best thing to > do vs. you have to avoid H26x patents? The original goal wasn't to avoid patents, but it's a nice added benefit. >
If you're not sending any explicit per-block quantizer, it seems like > that removes a lot of freedom for future encoders to do more > sophisticated perceptual optimization. (ROI bit allocation or >
whatever) We're still planning on adding some per-block/macroblock/something quantizers, but we just won't need them for activity masking. Cheers, Jean-Marc
Hi, Just read your "smooth blocks" post and I thought I'd mention on thing we do in Daala to improve the quality of smooth regions. It's called "Haar DC" and the idea is basically to apply a Haar
transforms to all the DCs in a superblock. This has the advantage of getting us much better quantization resolution at large scales. Unfortunately, there's absolutely no documentation about it, so
you'd have to look at the source code, mostly in od_quantize_haar_dc() and a bit of od_compute_dcts() http://git.xiph.org/?p=daala.git;a=blob;f=src/encode.c;h=879dda;hb=HEAD Cheers, Jean-Marc
Yeah I definitely can't follow that code without digging into it. But this : "much better quantization resolution at large scales." is interesting. When I did the DLI test : http://
cbloomrants.blogspot.com/2014/08/08-31-14-dli-image-compression.html something I noticed in both JPEG and DLI (and in everything else, I'm sure) is : Because everyone just does naive scalar
quantization on DC's, large regions of solid color will shift in a way that is very visible. That is, it's a very bad perceptual RD allocation. Some bits should be taken away from AC detail and put
into making that large region DC color more precise. The problem is that DC scalar quantization assumes the blocks are independent and random and so on. It models the distortion of each block as
being independent, etc. But it's not. If you have the right scalar quantizer for the DC when the blocks are in a region of high variation (lots of different DC's) then that is much too large a
quantizer for regions where blocks all have roughly the same DC. This is true even when there is a decent amount of AC energy, eg. the image I noticed it in was the "Porsche640" test image posted on
that page - the greens of the bushes all color shift in a very bad way. The leaf detail does not mask this kind of perceptual error.
Two more questions - 1. Do you use a quantization matrix (ala JPEG CSF or whatever) ? If so, how does that work with gain preservation and the Pyramid VQ unit vector? 2. Do you mind if I post all
these mails publicly?
On 11/12/14 02:03 PM, Charles Bloom wrote: > 1. Do you use a quantization matrix (ala JPEG CSF or whatever) ? If so, > how does that work with gain preservation and the Pyramid VQ unit vector? Right
now, we just set a different quantizer value for each "band", so we can't change resolution on a coefficient-by-coefficient basis, but it still looks like a good enough approximation. If needed we
might try doing something fancier at some point. > 2. Do you mind if I post all these mails publicly? I have no problem with that and in fact I encourage you to do so. Cheers, Jean-Marc
ryg: Don't wanna post this to your blog because it's a long comment and will probably fail Blogger's size limit. Re "3 1/2. The normal zig-zag coding schemes we use are really bad." Don't agree here
about zig-zag being the problem. Doesn't it just boil down to what model you use for the run lengths? Classic JPEG/MPEG style coding rules (H.264 and later are somewhat different) 1. assume short
runs are more probable than long ones and 2. give a really cheap way to end blocks early. The result is that the coder likes blocks with a fairly dense cluster in the first few coded components (and
only this is where zig-zag comes in) and truncated past that point. Now take Fischer-style PVQ (original paper is behind a paywall, but this: http://www.nul.com/pbody17.pdf covers what seems to be
the proposed coding scheme). You have two parameters, N and K. N is the dimensionality of the data you're coding (this is a constant at the block syntax level and not coded) and K is the number of
unit pulses (=your "energy"). You code K and then send an integer (with a uniform model!) that says which of all possible arrangements of K unit pulses across N dimensions you mean. For 16-bit ACs in
a 8x8 block so N=63, there's on the order of 2^(63*16) = 2^1008 different values you could theoretically code, so clearly for large K this integer denoting the configuration can get quite huge.
Anyway, suppose that K=1 (easiest case). Then the "configuration number" will tell us where the pulse goes and what sign it has, uniformly coded. That's essentially a run length with *uniform*
distribution plus sign. K=2: we have two pulses. There's N*2 ways to code +-2 in one AC and the rest zeros (code AC index, code sign), and (N choose 2) * 2^2 ways to code two slots at +-1 each. And
so forth for higher K. From there, we can extrapolate what the general case looks like. I think the overall structure ends up being isomorphic to this: 1. You code the number M (<=N) of nonzero
coefficients using a model derived from the combinatorics given N and K (purely counting-based). (K=1 implies M=1, so nothing to code in that case.) 2. Code the M sign bits. 3. Code the positions of
the M nonzero coeffs - (N choose M) options here. 4. Code another number denoting how we split the K pulses among the M coeffs - that's an integer partition of K into exactly M parts, not sure if
there's a nice name/formula for that. This is close enough to the structure of existing AC entropy coders that we can meaningfully talk about the differences. 1) and 2) are bog-standard (we use a
different model knowing K than a regular codec that doesn't know K would, but that's it). You can view 3) in terms of significance masks, and the probabilities have a reasonably simple form (I think
you can adapt the Reservoir sampling algorithm to generate them) - or, by looking at the zero runs, in term of run lengths. And 4) is a magnitude coder constrained by knowing the final sum of
everything. So the big difference is that we know K at the start, which influences our choice of models forthwith. But it's not actually changing the internal structure that much! That said, I don't
think they're actually doing Fischer-style PVQ of "just send a uniform code". The advantage of breaking it down like above is that you have separate syntax elements that you can apply additional
modeling on separately. Just having a giant integer flat code is not only massively unwieldy, it's also a bit of a dead end as far as further modeling is concerned. -Fabian
cbloom: At 01:36 PM 12/2/2014, you wrote: Don't wanna post this to your blog because it's a long comment and will probably fail Blogger's size limit. Re "3 1/2. The normal zig-zag coding schemes we
use are really bad." Don't agree here about zig-zag being the problem. Doesn't it just boil down to what model you use for the run lengths? My belief is that for R/D optimization, it's bad when
there's a big R step that doesn't correspond to a big D step. You want the prices of things to be "fair". So the problem is cases like : XX00 X01 0 vs XX00 X001 000 00 0 which is not a very big D
change at all, but is a very big R step. I think it's easy to see that even keeping something equivalent to the zigzag, you could change it so that the position of the next coded value is sent in a
way such that the rates better match entropy and distortion. But of course, really what you want is to send the positions of those later values in a lossy way. Even keeping something zigzagish you
can imagine easy ways to do it, like you send a zigzag RLE that's something like {1,2,3-4,5-7,8-13} whatever.
ryg: Actually the Fischer (magnitude enumeration) construction corresponds pretty much directly to a direct coder: from the IEEE paper, l = dim, k = number of pulses, then number of code words N(l,k)
is N(l,k) = sum_{i=-k}^k N(l-1, k-|i|) This is really direct: N(l,k) just loops over all possible values i for the first AC coeff. The remaining uncoded ACs then are l-1 dimensional and <= k-|i|.
Divide through by N(l,k) and you have a probability distribution for coding a single AC coeff. Splitting out the i=0 case and sign, we get: N(l,k) = N(l-1,k) + 2 * sum_{j=1}^k N(l-1,k-j) =: N(l-1,k)
+ 2 * S(l-1,k) which corresponds 1:1 to this encoder: // While energy (k) left for (i = 0; k > 0; i++) { { assert(i < Ndims); // shouldn't get to N with leftover energy int l = N - i; // remaining
dims int coeff = coeffs[i]; // encode significance code_binary(coeff == 0, N(l-1,k) / N(l,k)); if (coeff != 0) { // encode sign code_binary(coeff < 0, 0.5); int mag = abs(coeff); // encode magnitude
(multi-symbol) // prob(mag=j) = N(l-1,k-j) / S(l-1, k) // then: k -= mag; } } and this is probably how you'd want to implement it given an arithmetic back end anyway. Factoring it into multiple
decisions is much more convenient (and as said before, easier to do secondary modeling on) than the whole "one giant bigint" mess you get if you're not low-dimensional. Having the high-dimensional
crap in there blows because the probabilities can get crazy. Certainly Ndims=63 would suck to work with directly. Separately, I'd expect that for k "large" (k >= Ndims? Ndims*4? More? Less?) you can
use a simpler coder and/or fairly inaccurate probabilities because that's gonna be infrequent. Maybe given k = AC_sum(1,63) = sum_{i=1}^63 |coeff_i|, there's a reasonably nice way to figure out say
AC_sum(1,32) and AC_sum(33,63). And if you can do that once, you can do it more than once. Kind of a top-down approach: you start with "I have k energy for this block" and first figure out which
subband groups that energy goes into. Then you do the "detail" encode like above within each subband of maybe 8-16 coeffs; with l<=Ndim<=8 and k<=Ndim*small, you would have reasonable (practical)
model sizes. -Fabian
cbloom: No, I don't think that's right. The N recursion is just for counting the number of codewords, it doesn't imply a coding scheme. It explicitly says that the pyramid vector index is coded with
a fixed length word, using ceil( N ) bits. Your coding scheme is variable length. I need to find the original Fisher paper because this isn't making sense to me. The AC's aren't equally probable and
don't have the same Laplacian distribution so PVQ just seems wrong. I did find this paper ("Robust image and video coding with pyramid vector quantisation") which uses PVQ and is making the vectors
not from within the same block, but within the same *subband* in different spatial locations. eg. gathering all the AC20's from lots of neigboring blocks. That does make sense to me but I'm not sure
if that's what everyone means when they talk about PVQ ? (paper attached to next email)
ryg: On 12/2/2014 5:46 PM, Charles Bloom {RAD} wrote: No, I don't think that's right. The N recursion is just for counting the number of codewords, it doesn't imply a coding scheme. It explicitly
says that the pyramid vector index is coded with a fixed length word, using ceil( N ) bits. Your coding scheme is variable length. I wasn't stating that Fischer's scheme is variable-length; I was
stating that the decomposition as given implies a corresponding way to encode it that is equivalent (in the sense of exact same cost). It's not variable length. It's variable number of symbols but
the output length is always the same (provided you use an exact multi-precision arithmetic coder that is, otherwise it can end up larger due to round-off error). log2(N(l,k)) is the number of bits we
need to spend to encode which one out of N(l,k) equiprobable codewords we use. The ceil(log2(N)) is what you get when you say "fuck it" and just round it to an integral number of bits, but clearly
that's not required. So suppose we're coding to the exact target rate using a bignum rationals and an exact arithmetic coder. Say I have a permutation of 3 values and want to encode which one it is.
I can come up with a canonical enumeration (doesn't matter which) and send an index stating which one of the 6 candidates it is, in log2(6) bits. I can send one bit stating whether it's an even or
odd permutation, which partitions my 6 cases into 2 disjoint subsets of 3 cases each, and then send log2(3) bits to encode which of the even/odd permutations I am, for a total of log2(2) + log2(3) =
log2(6) bits. Or I can get fancier. In the general case, I can (arbitrarily!) partition my N values into disjoint subsets with k_1, k_2, ..., k_m elements, respectively, sum_i k_i = N. To code a
number, I then first code the number of the subset it's in (using probability p_i = k_i/N) and then send a uniform integer denoting which element it is, in log2(k_i) bits. Say I want to encode some
number x, and it falls into subset j. Then I will spend -log2(p_i) + log2(k_i) = -log2(k_i / N) + log2(k_i) = log2(N / k_i) + log2(k_i) = log2(N) bits (surprise... not). I'm just partitioning my
uniform distribution into several distributions over smaller sets, always setting probabilities exactly according to the number of "leaves" (=final coded values) below that part of the subtree, so
that the product along each path is still a uniform distribution. I can nest that process of course, and it's easy to do so in some trees but not others meaning I get non-uniform path lengths, but at
no point am I changing the size of the output bitstream. That's exactly what I did in the "coder" given below. What's the value of the first AC coefficient? It must obey -k <= ac_0 <= k per
definition of k, and I'm using that to partition our codebook C into 2k+1 disjoint subsets, namely C_x = { c in C | ac0(c) = x } and nicely enough, by the unit-pulse definition that leads to the
enumeration formula, each of the C_x corresponds to another PVQ codebook, namely with dimension l-1 and energy k-|x|. Which implies the whole thing decomposes into "send x and then do a PVQ encode of
the rest", i.e. the loop I gave. That said, one important point that I didn't cover in my original mail: from the purposes of coding this is really quite similar to a regular AC coder, but of course
the values being coded don't mean the same thing. In a JPEG/MPEG style entropy coder, the values I'm emitting are raw ACs. PVQ works (for convenience) with code points on an integer lattice Z^N, but
the actual AC coeffs coded aren't those lattice points, they're (gain(K) / len(lattice_point)) * lattice_point (len here being Euclidean and not 1-norm!). I need to find the original Fisher paper
because this isn't making sense to me. The AC's aren't equally probable and don't have the same Laplacian distribution so PVQ just seems wrong. I did find this paper ("Robust image and video coding
with pyramid vector quantisation") which uses PVQ and is making the vectors not from within the same block, but within the same *subband* in different spatial locations. eg. gathering all the AC20's
from lots of neigboring blocks. That does make sense to me but I'm not sure if that's what everyone means when they talk about PVQ ? (paper attached to next email) The link to the extended abstract
for the Daala scheme (which covers this) is on the Xiph demo page: http://jmvalin.ca/video/spie_pvq_abstract.pdf Page 2 has the assignment of coeffs to subbands. They're only using a handful, and
notably they treat 4x4 blocks as a single subband. -Fabian
cbloom: Ah yeah, you are correct of course. I didn't see how you had the probabilities in the coding. There are a lot of old papers I can't get about how to do the PVQ enumeration in an efficient
way. I'm a bit curious about what they do. But as I'm starting to understand it all a bit now, that just seems like the least difficult part of the problem. Basically the idea is something like -
divide the block into subbands. Let's say the standard wavelet tree for concreteness - 01447777 23447777 55667777 55667777 8.. Send the sum in each subband ; this is the "gain" ; let's say g_s g_s is
sent with some scalar quantizer (how do you choose q_s ?) (in Daala a non-linear quantizer is used) For each subband, scale the vector to an L1 length K_s (how do you choose K_s?) Quantize the vector
to a PVQ lattice point; send the lattice index So PVQ (P = Pyramid) solves this problem of how to enumerate the distribution given the sum. But that's sort of the trivial part. The how do you send
the subband gains, what is K, etc. is the hard part. Do the subband gains mask each other? Then there's the whole issue of PVQ where P = Predictive. This Householder reflection business. Am I correct
in understanding that Daala doesn't subtract off the motion prediction and make a residual? The PVQ (P = predictive) scheme is used instead? That's quite amazing. And it seems that Daala sends the
original gain, not the gain of the residual (and uses the gain of the prediction as context). The slides (reference #4) clear things up a bit.
ryg: On 12/2/2014 8:21 PM, Charles Bloom {RAD} wrote: Ah yeah, you are correct of course. I didn't see how you had the probabilities in the coding. There are a lot of old papers I can't get about how
to do the PVQ enumeration in an efficient way. I'm a bit curious about what they do. Well, the one I linked to has a couple variants already. But it's pretty much besides the point. You can of course
turn this into a giant combinatorical circle-jerk, but I don't see the use. For example (that's one of the things in the paper I linked to) if you're actually assigning indexes to values then yeah,
the difference between assigning codewords in order { 0, -1, 1, -2, 2, ... } and { -k, -k+1, ..., -1, 0, 1, 2, ..., k } matters, but once you decompose it into several syntax elements most of that
incidental complexity just disappears completely. But as I'm starting to understand it all a bit now, that just seems like the least difficult part of the problem. Yeah, agreed. Basically the idea is
something like - divide the block into subbands. Let's say the standard wavelet tree for concreteness - 01447777 23447777 55667777 55667777 8.. Yup. Send the sum in each subband ; this is the "gain"
; let's say g_s No, the gain isn't sum (1-norm), it's the Euclidean (2-norm) length. If you used 1-norm you wouldn't deform the integer lattice, meaning you're still just a scalar quantizer, just one
with a funky backend. E.g. in 2D, just sending k = q(|x| + |y|) (with q being a uniform scalar quantizer without dead zone for simplicity) and then coding where the pulses go is just using the same
rectangular lattice as you would have if you were sending q(x), q(y) directly. (Once you add a dead zone that's not true any more; scalar favors a "+" shape around the origin whereas the 1-norm PVQ
doesn't. But let's ignore that for now.) With an ideal vector quantizer you make the "buckets" (=Voronoi regions) approximately equally likely. For general arbitrary 2D points that means the usual
hex lattice. The PVQ equivalent of that is the NPVQ pattern: https://people.xiph.org/~jm/daala/pvq_demo/quantizer4.png That's clearly suboptimal (not a real hex lattice at all), but it has the nice
gain/shape-separation: the circles are all equal-gain. You unwrap each circle by normalizing the point in the 1-norm, and then sending the corresponding AC pulses. g_s is sent with some scalar
quantizer (how do you choose q_s ?) (in Daala a non-linear quantizer is used) q_s would come from the rate control, as usual. g codes overall intensity. You would want that to be roughly perceptually
uniform. And you're not sending g at all, you're sending K. CIELab gamma (which is ~perceptually uniform) is 3, i.e. linear->CIELab is pow(x, 1/3). The Daala gain compander uses, surprise, 1/3. This
would make sense except for the part where the CIE gamma deals in *linear* values and Daala presumably works on a gamma-infested color space, because that's what you get. My theory is this: the thing
they're companding is not g_s, but g_s^2, i.e. sum of squares of AC coeffs. That makes for a total companding curve of (g_s)^(2/3). Display gamma is ~2, perceptually uniform gamma is ~3, so this
would be in the right range to actually work out. They're not doing a good job of describing this though! For each subband, scale the vector to an L1 length K_s (how do you choose K_s?) You don't.
You have your companded g's. The companding warps the space so now we're in NPVQ land (the thing I sent the image URL for). The companded g is the radius of the circle you're actually on. But of
course this is a quantizer so your choices of radius are discrete and limited. You look at circles with a radius in the right neighborhood (most obviously, just floor(g) and ceil(g), though you might
want to widen the search if you're doing RDO). You find the closest lattice points on both circles (this is convex, so no risk of getting stuck in a local min). Choose whichever of the two circles is
better. (All points *on* the same circle have the same cost, at least with vanilla PVQ. So the only RD trade-off you do is picking which circle.) K_s is the index of the circle you're on. The origin
is K_s=0, the first real circle is K_s=1 (and has 2N points where N is your dimensionality), and so forth. Quantize the vector to a PVQ lattice point; send the lattice index Finding that is the
convex search. So PVQ (P = Pyramid) solves this problem of how to enumerate the distribution given the sum. But that's sort of the trivial part. Well, that's the combinatorical part. The actual
vector quantizer is the idea of warping the 1-norm diamonds into sensibly-spaced 2-norm circles. The regular structure enables the simplified search. The how do you send the subband gains, what is K,
etc. is the hard part. Do the subband gains mask each other? Not sure if they're doing any additional masking beyond that. If they do, they're not talking about it. Then there's the whole issue of
PVQ where P = Predictive. This Householder reflection business. Am I correct in understanding that Daala doesn't subtract off the motion prediction and make a residual? The PVQ (P = predictive)
scheme is used instead? That's quite amazing. And it seems that Daala sends the original gain, not the gain of the residual (and uses the gain of the prediction as context). As far as I can tell,
yeah. And yes, definitely gain of the overall block, not of the residual! Again, you have the separation into gain and shape here. The gains are coded separately, and hence out of the equation. What
remains is unit vectors for both your target block and your prediction. That means your points are on a sphere. You do a reflection that aligns your prediction vector with the 1st AC coefficient.
This rotates (well, reflects...) everything around but your block is still a unit vector on a sphere. 1st AC will now contain block_gain * dot(block_unit_vec, prediction_unit_vec). You already know
block_gain. They send the dot product (cosine of the angle, but speaking about this in terms of angles is just confusing IMO; it's a correlation coefficient, period). This tells you how good the
prediction is. If it's 0.9, you've just removed 90% of the energy to code. You need to quantize this appropriately - you want to make sure the quantizer resolution here is reasonably matched to
quantizer resolution of points on your sphere, or you're wasting bits. Now you turn whatever is left of g into K (as above). You can iterate this as necessary. If you do bi-prediction, you can do
another Householder reflection to align the energy of pred2 that was orthogonal to pred1 (the rest is gone already!) with the 2nd AC. You code another correlation coefficient and then deal with the
residuals. Fade-in / fade-out just kind of fall out when you do prediction like this. It's not a special thing. The "ACs" don't change, just the gains. Handling cross-fades with one predictor is
still shitty, but if you're doing bipred they kinda fall out as well. It all sounds pretty cool. But I have no clue at all how well it works in practice or where it stands cost/benefit-wise. -Fabian
ryg: That means your points are on a sphere. You do a reflection that aligns your prediction vector with the 1st AC coefficient. This rotates (well, reflects...) everything around but your block is
still a unit vector on a sphere. Important note for this and all that follows: For this to work as I described it, your block and the prediction need to be in the same space, which in this context
has to be frequency (DCT) space (since that's what you eventually want to code with PVQ), so you need to DCT your reference block first. This combined with the reflections etc. make this pretty
pricey, all things considered. If you weren't splitting by subbands, I believe you could finesse your way around this: (normalized) DCT and Householder reflections are both unitary, so they preserve
both the L2 norm and dot products. Which means you could calculate both the overall gain and the correlation coeffs for your prediction *before* you do the DCT (and hence in the decoder, add that
stuff back in post-IDCT, without having to DCT your reference). But with the subband splitting, that no longer works, at least not directly. You could still do it with a custom filter bank that just
passes through precisely the DCT coeffs we're interested in for each subband, but eh, somehow I have my doubts that this is gonna be much more efficient than just eating the DCT. It would certainly
add yet another complicated mess to the pile. -Fabian
cbloom: At 10:23 PM 12/2/2014, Fabian Giesen wrote: For this to work as I described it, your block and the prediction need to be in the same space, which in this context has to be frequency (DCT)
space (since that's what you eventually want to code with PVQ), so you need to DCT your reference block first. This combined with the reflections etc. make this pretty pricey, all things considered.
Yeah, I asked Valin about this. They form an entire predicted *image* rather than block-by-block because of lapping. They transform the predicted image the same way as the current frame. Each subband
gain is sent as a delta from the predicted image subband gain. Crazy! His words : > You form the prediction in transformed space. (perhaps by having a > motion vector, taking the pixels it points to
and transforming them, > dealing with lapping, yuck!) We have the input image and we have a predicted image. We just transform both. Lapping doesn't actually cause any issues there (unlike many other
places). As far as I can tell, this part is similar to what a wavelet coder would do.
cbloom: At 09:31 PM 12/2/2014, you wrote: q_s would come from the rate control, as usual. Yeah, I just mean the details of that is actually one of the most important issues. eg. how does Q vary for
the different subbands. Is there inter-subband masking, etc. In Daala the Q is non-linear (variance adaptive quantizer) g codes overall intensity. You would want that to be roughly perceptually
uniform. And you're not sending g at all, you're sending K. In Daala they send g and derive K. CIELab gamma (which is ~perceptually uniform) is 3, i.e. linear->CIELab is pow(x, 1/3). The Daala gain
compander uses, surprise, 1/3. This would make sense except for the part where the CIE gamma deals in *linear* values and Daala presumably works on a gamma-infested color space, because that's what
you get. My theory is this: the thing they're companding is not g_s, but g_s^2, i.e. sum of squares of AC coeffs. That makes for a total companding curve of (g_s)^(2/3). Display gamma is ~2,
perceptually uniform gamma is ~3, so this would be in the right range to actually work out. They're not doing a good job of describing this though! Err, yeah maybe. What they actually did was take
the x264 VAQ and try to reproduce it. For each subband, scale the vector to an L1 length K_s (how do you choose K_s?) You don't. You have your companded g's. The companding warps the space so now
we're in NPVQ land (the thing I sent the image URL for). The companded g is the radius of the circle you're actually on. But of course this is a quantizer so your choices of radius are discrete and
limited. No, that's not right. K is effectively your "distribution" quantizer. It should be proportional to g in some way (or some power of g) but it's not just g. As the quantizer for g goes up, K
goes down. In Daala they choose K such that the distortion due to PVQ is the same as the distortion due to gain scalar quantization. 1st AC will now contain block_gain * dot(block_unit_vec,
prediction_unit_vec). You already know block_gain. They send the dot product (cosine of the angle, but speaking about this in terms of angles is just confusing IMO; it's a correlation coefficient,
period). I think that in Daala they actually send the angle, not the cosine, which is important because of the non-linear quantization buckets. It's difficult for me to intuit what the Householder
reflection is doing to the residuals. But I guess it doesn't matter much. It also all seems to fall apart a bit if the prediction is not very good. Then the gains might mismatch quite a bit, and even
though you had some pixels that matched well, they will be scaled differently when normalized. It's a bit blah.
ryg: Yeah, I asked Valin about this. They form an entire predicted *image* rather than block-by-block because of lapping. That doesn't have anything to do with the lapping, I think - that's because
they don't use regular block-based mocomp. At least their proposal was to mix overlapping-block MC and Control Grid Interpolation (CGI, essentially you specify a small mesh with texture coordinates
and do per-pixel tex coord interpolation). There's no nice way to do this block-per-block in the first place, not with OBMC in the mix anyway; if you chop it up into tiles you end up doing a lot of
work twice.
ryg: On 12/03/2014 10:26 AM, Charles Bloom {RAD} g codes overall intensity. You would want that to be roughly perceptually uniform. And you're not sending g at all, you're sending K. In Daala they
send g and derive K. Ah, my bad. For each subband, scale the vector to an L1 length K_s (how do you choose K_s?) You don't. You have your companded g's. The companding warps the space so now we're in
NPVQ land (the thing I sent the image URL for). The companded g is the radius of the circle you're actually on. But of course this is a quantizer so your choices of radius are discrete and limited.
No, that's not right. K is effectively your "distribution" quantizer. It should be proportional to g in some way (or some power of g) but it's not just g. As the quantizer for g goes up, K goes down.
In Daala they choose K such that the distortion due to PVQ is the same as the distortion due to gain scalar quantization. Ah OK, that makes sense. 1st AC will now contain block_gain * dot
(block_unit_vec, prediction_unit_vec). You already know block_gain. They send the dot product (cosine of the angle, but speaking about this in terms of angles is just confusing IMO; it's a
correlation coefficient, period). I think that in Daala they actually send the angle, not the cosine, which is important because of the non-linear quantization buckets. Didn't check how they send it.
I do find thinking of this in terms of cross-correlation between block and pred a lot simpler than phrasing it in terms of angles. It's difficult for me to intuit what the Householder reflection is
doing to the residuals. But I guess it doesn't matter much. The reflection itself doesn't do anything meaningful. Your normalized points were on a unit sphere before, and still are after. You're just
spinning it around. It does mean that your coefficient numbering really loses all meaning. After one such reflection, you're already scrambled. Overall energy is still the same (because it's unitary)
but the direction is completely different. Since PVQ already assumes that the directions are equiprobable (well, more or less, since the PVQ doesn't actually uniformly cover the sphere), they don't
care. It also all seems to fall apart a bit if the prediction is not very good. Then the gains might mismatch quite a bit, and even though you had some pixels that matched well, they will be scaled
differently when normalized. It's a bit blah. Well, it's just a different goal for the predictors. Regular motion search tries to minimize SAD or similar, as do the H.264 spatial predictors. For this
kind of scheme you don't care about differences at all, instead you want to maximize the normalized correlation coeff between image and reference. (You want texture matches, not pixel matches.)
cbloom: The other thing I note is that it doesn't seem very awesome at low bit rate. Their subband chunks are very large. Even at K=1 the N slots that could have that one value is very large, so
sending the index of that one slot is a lot of bits. At that point, the way you model the zeros and the location of the 1 is the most important thing. What I'm getting at is a lossy way of sending
ryg: On 12/3/2014 1:04 PM, Charles Bloom {RAD} wrote: The other thing I note is that it doesn't seem very awesome at low bit rate. Their subband chunks are very large. Even at K=1 the N slots that
could have that one value is very large, so sending the index of that one slot is a lot of bits. Yeah, the decision to send a subband *at all* means you have to code gain, theta and your AC index.
For N=16 that's gonna be hard to get below 8 bits even for trivial signals. At which point you get a big jump in the RD curve, which is bad. Terriberry has a few slides that explain how they're doing
inter-band activity masking currently: https://people.xiph.org/~tterribe/daala/pvq201404.pdf The example image is kind of terrible though. The "rose" dress (you'll see what I mean) is definitely
better in the AM variant, but the rest is hard to tell for me unless I zoom in, which is cheating. At that point, the way you model the zeros and the location of the 1 is the most important thing.
What I'm getting at is a lossy way of sending that. This is only really interesting at low K, where the PVQ codebook is relatively small. So, er, let's just throw this one in: suppose you're actually
sending codebook indices. You just have a rate allocation function that tells you how many bits to send, independent of how big the codebook actually is. If you truly believe that preserving
narrowband energy is more important than getting the direction right, then getting a random vector with the right energy envelope is better than nothing. Say K=1, Ndim=16. You have N=32 codewords, so
a codebook index stored directly is 5 bits. Rate function says "you get 0 bits". So you don't send an index at all, and the decoder just takes codeword 0. Or rate function says "you get 2 bits" so
you send two bits of the codebook index, and take the rest as zero. This is obviously biased. So the values you send aren't raw codebook indices. You have some random permutation function family p_x
(i) : { 0, ..., N-1 } -> { 0, ..., N-1 } where x is a per-block value that both the encoder and decoder know (position or something), and what you send is not the codebook id but p_x(id). For any
given block (subband, whatever), this doesn't help you at all. You either guess right or you guess wrong. But statistically, suppose you shaved 2 bits off the codebook IDs for 1000 blocks. Then you'd
expect about 250 of these blocks to reconstruct the right ACs. For the rest, you reconstructed garbage ACs, but it's garbage with the right energy levels at least! :) No clue if this is actually a
good idea at all. It definitely allows you to remove a lot of potholes from the RD curve. -Fabian
ryg: On 12/3/2014 1:48 PM, Fabian Giesen wrote: > [..] This is obviously biased. So the values you send aren't raw codebook indices. You have some random permutation function family p_x(i) : { 0,
..., N-1 } -> { 0, ..., N-1 } where x is a per-block value that both the encoder and decoder know (position or something), and what you send is not the codebook id but p_x(id). Now this is all
assuming you either get the right code or you get garbage, and living with whichever one it is. You can also go in the other direction and try to get the direction at least mostly right. You can try
to determine an ordering of the code book so that distortion more or less smoothly goes down as you add extra bits. (First bit tells you which hemisphere, that kind of thing.) That way, if you get 4
bits out of 5, it's not a 50:50 chance between right vector and some random other vector, it's either the right vector or another vector that's "close". (Really with K=1 and high dim it's always
gonna be garbage, though, because you just don't have any other vector in the code book that's even close; this is more interesting at K=2 or up). This makes the per-block randomization (you want
that to avoid systematic bias) harder, though. One approach that would work is to do a Householder reflection with a random vector (again hashed from position or similar). All that said, I don't
believe in this at all. It's "solving" a problem by "reducing" it to a more difficult unsolved problem (in this case, "I want a VQ codebook that's close to optimal for embedded coding"). Of course,
even if you do a bad job here, it's still not gonna be worse than the direct "random permutation" stuff. But I doubt it's gonna be appreciably better either, and it's definitely more complex. -Fabian
cbloom: At 02:17 PM 12/3/2014, Fabian Giesen wrote: That way, if you get 4 bits out of 5, it's not a 50:50 chance between right vector and some random other vector, it's either the right vector or
another vector that's "close". Yes, this is the type of scheme I imagine. Sort of like a wavelet significant bit thing. As you send fewer bits the location gets coarser. The codebook for K=1 is
pretty obvious. You're just sending a location; you want the top bits to grossly classify the AC and the bottom bits to distinguish neighbors (H neighbors for H-type AC's, and V neighbors for V-type
AC's) For K=2 and up it's more complex. You could just train them and store them (major over-train risk) up to maybe K=3 but then you have to switch to an algorithmic method. Really the only missing
piece for me is how you get the # of bits used to specify the locations. It takes too many bits to actually send it, so it has to be implicit from some other factors like the block Q and K and I'm
not sure how to get that.
11 comments:
Unknown said...
About PVQ indexing schemes: lots of these are possible. You can see the first one I came up with at , before I'd even heard of Fischer or found his 1986 paper. Eventually for CELT we switched to
the original Fischer indexing scheme (because it was simpler to implement and being published over 20 years ago trumped any other minor advantages in bit error robustness another scheme might
have). If you look at Section 4.3.4.2, you can see it's pretty simple to decode, and in Section 5.3.8.2, it's _really_ simple to encode.
You don't want to plug these kind of dimension-at-a-time schemes directly into an arithmetic coder, though, because for large dimensions the probabilities become highly skewed. Enough that it's
hard to model with typical arithmetic coder precision (and since you pay computational overhead per-symbol, highly skewed distributions mean you spend a lot of cycles without learning much new
about your signal). Instead, you'd want to do something like we describe for SILK in Section 4.2.7.8.3, where you split the vector and say how many pulses lie in each half. That gives you much
more balanced probabilities.
But as Jean-Marc mentioned, we don't use these uniformly distributed schemes in Daala. They're a fun aside (highly relevant for audio, not so much for video).
Inter-band masking: we were doing this back in April (when I made the slides you point to). However, we removed it in October: . Its effect was always pretty minor, it was fairly expensive (as
originally implemented, I'm sure it could've been simplified), and removing it actually improved metrics slightly. More importantly, it introduced dependencies between the entropy coder and the
"signal processing" parts of PVQ, which prevented deeply pipelining these stages in hardware.
Also, apologies for the example image I used in those slides. It was literally the first image I tried.
Directional intra: I'm sure you've seen . After spending far too much time trying to get this to work in a reasonable computational complexity, we eventually disabled it. We now use a much
simpler scheme that can only predict pure horizontal or pure vertical edges.
Lossy AC encoding: I suspect for very low bitrates, rather than try fill with random noise or using splitting planes or whatever, what you really want to do is just use small, trained VQ
codebooks. At low rates the codebooks shouldn't be very large, and any other scheme is going to wind up equivalent to some form of VQ, without the benefit of training. One thing we've discussed
is trying to classify blocks based on the first (few) band(s), and then using the classification to select from one of several different VQ codebooks for the remaining bands. For inter, you could
potentially do the classification on the predictor instead of the LF, so you could apply this scheme to the LF bands, too. For intra, where we don't have good directional predictors anymore, you
could even use the trained VQ codeword as the PVQ "reference" for the diagonal bands, letting you extend this scheme to higher rates.
This comment has been removed by a blog administrator.
"Lossy AC encoding: I suspect for very low bitrates, rather than try fill with random noise or using splitting planes or whatever, what you really want to do is just use small, trained VQ
Yep, Charles and me talked about this in person right after that thread and that's basically what we ended up with. Pre-baked codebooks does mean that you want to have a small set of
dimensionalities you're coding.
That one's a bit awkward; take the Daala subband partitioning (as in the SPIE abstract), which has (up to 16x16 blocks) subbands with 15 coeffs (top-left 4x4 minus DC), 8 coeffs (8x8 top/left),
32 coeffs (8x8 bottom-right and 16x16 top/left), and 128 coeffs (16x16 rest).
The 32- and 128-coeff subbands, you just wouldn't bother trying to find a short code for, I think. Coding anything meaningful in there is just gonna take a bunch of bits, and the codebooks (even
with small number of entries) would be large and very hard to train without getting a lot of bias from your training set. But for the 8-coeff and 15-coeff subbands, a specialized codebook for the
low-rate cases is potentially interesting. Only with prediction and biprediction (which remove 1 coeff each), you really have 6 cases: [6,8] coeffs and [13,15] coeffs. That's a lot of
It might seem that this is over-thinking it, but at least for Bink 2 (which is a *super*-simple codec optimized for low complexity and low memory use, not state of the art low-bitrate
performance, mind), it really paid off visually to tweak the coding so that basically at any point in the code stream, you could spend another 5 bits and get a reduction in distortion, even when
doing so made the coding "asymptotically worse" in the sense of being suboptimal in the medium-to-high bit rate regime.
A typical problem with RD optimization is the "one blurry block in the middle of a sharp region" case. This is perceptually really bad (you notice it immediately), and it usually happens when a
block is coded intra but you have no way of coding more than DC and maybe 1 AC coeff without going over your bit budget. Giving your codec (perceptually) good options in that case is definitely
"Directional intra: I'm sure you've seen"
Presumably this is one of the difficulties in working with lapped transforms, and predicting in transformed-space, etc.
Directional intra is really just great for edges. I've often wondered if a more explicit edge-adaptive predictor that uses the neighbors and no explicit signalling would be better.
"Lossy AC encoding: I suspect for very low bitrates, rather than try fill with random noise or using splitting planes or whatever, what you really want to do is just use small, trained VQ
Yeah. My concern about this is if you're always restoring the missing high-frequency data in the same way, the pattern will become visible. But that remains to be seen, and perhaps that could be
a few codebooks, etc.
This is getting a bit into the realm of fantasy land, but in a lot of cases what you'd really like to do is send some parameters for a noise generating function. You just send the total energy of
high frequency noise, and you have some trained noise generators that you select from. eg. obviously for film grain. Even for things like grass and leaves, what you want to do is send a small
patch once at full precision, and then make all future blocks have the same spectrum as the sample.
"One thing we've discussed is trying to classify blocks based on the first (few) band(s), and then using the classification to select from one of several different VQ codebooks for the remaining
bands. For inter, you could potentially do the classification on the predictor instead of the LF, so you could apply this scheme to the LF bands, too."
Right. I've used a similar idea in wavelet experiments. When a high-band is cut off, rather than fill with zeros, fill from the parent, scaled down by some factor. Because of the subband octave
correlation property. You can tell from just the first few AC's what kind of detail is in the block (usually).
You could have a VQ codebook that comes from PCA/KLT of the previous frame.
Well, as I said, classifying things by directionality in the LF and using that to select a VQ codebook for the HF is one potential way to get small codebooks that still do something useful. I.e.,
if you have a straight edge with a known orientation, the HF is not empty, but it does not actually have that many degrees of freedom.
That means lots of codebooks for the large bands, though, which is exactly where you don't need them. It may wind up just too expensive.
One other approach we have (by which I mean Monty has) been playing with is to train filters that will compact the energy from straight edges in the HF into the LF, and then predict the HF from
just the LF coefficients. In VQ terms, you could think of this like an adaptive codebook, as is used in speech codecs (though the mechanism is very different). The problem is that you've traded
memory complexity of large codebook tables for the computational complexity of the filters (which are not nice separable things). So it's not clear yet any of this will be worth the cost.
"At least their proposal was to mix overlapping-block MC and Control Grid Interpolation (CGI, essentially you specify a small mesh with texture coordinates and do per-pixel tex coord
Actually, we removed CGI fairly early on. It was just too expensive. You basically can't vectorize the loads without a chip with N MMUs available, and not even x86 with its new gather
instructions really has that... a GPU-style 2D texture cache would work, but that's also a metric boatload of transistors. That might have been something we could live with, but didn't have any
clear gains. It could do something _different_, but it wasn't clear the encoder I had for it was taking advantage of that to do something _useful_, nor how to write an encoder that could. I
definitely could not reproduce the gains that other researchers reported by switching methods, though that may be related to the extra constraints added to avoid blocking artifacts where the
switch occurred.
One thought Jean-Marc had instead is to have a codeword that signals, "Make this whole region 4x4 blocks and fill in the MVs with interpolation instead of coding them." I.e., it's CGI, but at the
block level instead of the pixel level, so you get at least 4-way parallelism in your loads.
"I've often wondered if a more explicit edge-adaptive predictor that uses the neighbors and no explicit signalling would be better."
Well, at the very least you need to be able to shut something like this off, because when it's wrong it will be disastrously so.
"This is getting a bit into the realm of fantasy land, but in a lot of cases what you'd really like to do is send some parameters for a noise generating function."
This has been done for still images. See Marcus Nadenau's 2000 Ph.D thesis, "Integration of Human Color Vision Models into High Quality Image Compression" for one example with wavelets. Extending
this to video is hard, for the reasons Jean-Marc said above.
"Even for things like grass and leaves, what you want to do is send a small patch once at full precision, and then make all future blocks have the same spectrum as the sample."
"Same spectrum" here has to be a bit vague, because of phase offsets and leakage and various other things that will keep it from being "the same". However, my canonical example of "video
compression is nowhere near the limit of what it could do if we had infinite CPU available" is to apply some of the dynamic texture analysis research that's been done to generate low-dimensional
models for things like water, leaves, etc., and run that analysis in the decoder, so that the encoder can just start sending a few model parameters to get you waves or fluttering leaves or
whatever. The kinds of things that are predicted very poorly with traditional motion compensation, and for which local metrics like MSE are completely irrelevant.
(Using texture synthesis to fill in detail)
In the "infinite CPU" mindset, you probably wouldn't even bother sending a lot of model coefficients, but go for a predictor-corrector kind of structure on both the encoder and decoder side: the
decoder "learns" structure from frames (slices, tiles...) it's seen before. After one good keyframe with useful texture the decoder could learn it and replicate that structure later. Given (say)
a motion vector with high weight, the decoder would know that the target area should have similar texture properties to the region it was copied from. Hmm.
You could still use them as a parametric model for synthesis, but purely "appearance-preserving mocomp" would probably help reduce the generation artifacts (and resulting wasted bit rate) for
edge-heavy structure. Alas, not cheap at all.
Another thing that would need a lot more CPU than we're willing to throw at it right now: dictionary-based schemes (linear algebra dictionaries, i.e. over-complete "bases"). Orthogonal matching
pursuit, K-SVD etc. That's been done for still images (either sending the dictionary atoms in the bit stream, or having a pre-baked dict for special-purpose applications like faces) but with
video you could conceivably start with a generic basis (DCT-derived or similar) and then keep adapting it to match the data you actually have.
There's no shortage of things to try, all we need is a couple orders of magnitude increase in cycles per pixel. :)
"In the "infinite CPU" mindset, you probably wouldn't even bother sending a lot of model coefficients, but go for a predictor-corrector kind of structure on both the encoder and decoder side: the
decoder "learns" structure from frames (slices, tiles...) it's seen before."
Yeah, but the problem is you probably never sent a great sample of the texture in a previous frame. (if you did, it would be weird because it would be too high quality for the bit rate).
So you'd have to send a patch of 32x32 pixels as side data to be used as the sample for generation for a certain type of texture. Then you do standard texture-from-example kind of stuff.
"Another thing that would need a lot more CPU than we're willing to throw at it right now: dictionary-based schemes "
I think there are fundamental theoretical problems with these schemes that are unsolved. It's not CPU holding them back; there are definite steps in this direction that could be done right now.
For example the 4x4 spatial transforms can be done as a matrix multiply, so custom matrices could be sent per frame.
The problem is once you start making the bases adaptive, then you can no longer hand-tweak your codec for them. Where are the subbands, what should the CSF's be, or the quantizers, how do you
bit-rate allocate for them. All the perceptual kludges fall apart.
I believe you can only go to adaptive bases when you have a really good software perceptual metric to evaluate them, so that we aren't relying on offline evaluation for encoder tweaks. And we're
still quite far off there.
(and running the theoretical super-perceptual-metric in-loop to make coding decision is insanely CPU prohibitive)
Yeah, but the problem is you probably never sent a great sample of the texture in a previous frame. (if you did, it would be weird because it would be too high quality for the bit rate).
Not really true. Things like x264 MBTree already spend time to identify areas that are going to be frequently used as motion references, and boost their bit budget accordingly. That's not
experimental, that's been shipping (and on by default for two-pass encoding) for years.
You're not gonna have a pristine reference, but you can definitely assume that a) often-referenced areas will start out higher quality than the rest (and thus definitely useful candidate
references to take "probes" from), b) that working somewhat harder to "leak" less of that quality over time would save bits overall.
Well I don't agree. (this is getting way off topic though)
The whole point of the texture synthesis (or the lossy AC stuff) is to send detail at levels *beyond* what you can otherwise send.
By the very definition of the problem, you don't have that detail in previous frames.
You can try force that detail in previous frames, which is really rather more like VP8's "golden frames" than MB-tree, because it's a big difference, not a small tweak, and because you need
source regions larger than one block.
But doing that is a very nasty non-local optimization problem. Should I put way more bits on these blocks than usual so that they can be used as texture source for future frames? You have to try
putting more bits on those blocks and see how it affects N frames in the future. Totally unpossible. Also really hard to not make it a weird pop in quality.
MB-tree is a very small bit rate adjustment. Not really the same thing.
Of course there's a general problem with this idea, which is the binary nature of "this block gets texture synth and this one doesn't" that would also create weird visual quality variation. All
of a sudden you get tweeds that look really amazing because it gets good texture synth, but everything else is lower quality. I guess this is an issue with any category based S/D/E type coder,
you have to be very careful to not be creating perceptual quality level differences in the different block types.
Just two comments:
- Wanting to get more uniform lattice on l2 sphere, it is worth to add some deformation, e.g. coordinate-wise power x^p allows to reduce MSE up to 26 percents ( https://arxiv.org/pdf/1705.05285
- Enumerative coding often requires large arithmetic, also looses ~half bit in the final flush. For speedup and to avoid this inefficiency, it can be put into entropy coding. For this purpose we
need to use the combinatorial formulas to find probabilities: for first digit for all possible (L,K). | {"url":"https://cbloomrants.blogspot.com/2014/12/12-16-14-daala-pvq-emails.html?showComment=1419034199618","timestamp":"2024-11-12T12:44:20Z","content_type":"application/xhtml+xml","content_length":"149679","record_id":"<urn:uuid:b93d620e-41d8-47fe-b657-a6adcccca788>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00382.warc.gz"} |
22.1 Tracking Inflation
Learning Objectives
By the end of this section, you will be able to:
• Calculate the annual rate of inflation
• Explain and use index numbers and base years when simplifying the total quantity spent over a year for products
• Calculate inflation rates using index numbers
Dinner table conversations where you might have heard about inflation usually entail reminiscing about when “everything seemed to cost so much less. You used to be able to buy three gallons of
gasoline for a dollar and then go see an afternoon movie for another dollar.” Table 1 compares some prices of common goods in 1970 and 2014. Of course, the average prices shown in this table may not
reflect the prices where you live. The cost of living in New York City is much higher than in Houston, Texas, for example. In addition, certain products have evolved over recent decades. A new car in
2014, loaded with antipollution equipment, safety gear, computerized engine controls, and many other technological advances, is a more advanced machine (and more fuel efficient) than your typical
1970s car. However, put details like these to one side for the moment, and look at the overall pattern. The primary reason behind the price rises in Table 1—and all the price increases for the other
products in the economy—is not specific to the market for housing or cars or gasoline or movie tickets. Instead, it is part of a general rise in the level of all prices. In 2014, $1 had about the
same purchasing power in overall terms of goods and services as 18 cents did in 1972, because of the amount of inflation that has occurred over that time period.
Items 1970 2014
Pound of ground beef $0.66 $4.16
Pound of butter $0.87 $2.93
Movie ticket $1.55 $8.17
Sales price of new home (median) $22,000 $280,000
New car $3,000 $32,531
Gallon of gasoline $0.36 $3.36
Average hourly wage for a manufacturing worker $3.23 $19.55
Per capita GDP $5,069 $53,041.98
Table 1. Price Comparisons, 1970 and 2014. (Sources: See chapter References at end of book.)
Moreover, the power of inflation does not affect just goods and services, but wages and income levels, too. The second-to-last row of Table 1 shows that the average hourly wage for a manufacturing
worker increased nearly six-fold from 1970 to 2014. Sure, the average worker in 2014 is better educated and more productive than the average worker in 1970—but not six times more productive. Sure,
per capita GDP increased substantially from 1970 to 2014, but is the average person in the U.S. economy really more than eight times better off in just 44 years? Not likely.
A modern economy has millions of goods and services whose prices are continually quivering in the breezes of supply and demand. How can all of these shifts in price be boiled down to a single
inflation rate? As with many problems in economic measurement, the conceptual answer is reasonably straightforward: Prices of a variety of goods and services are combined into a single price level;
the inflation rate is simply the percentage change in the price level. Applying the concept, however, involves some practical difficulties.
The Price of a Basket of Goods
To calculate the price level, economists begin with the concept of a basket of goods and services, consisting of the different items individuals, businesses, or organizations typically buy. The next
step is to look at how the prices of those items change over time. In thinking about how to combine individual prices into an overall price level, many people find that their first impulse is to
calculate the average of the prices. Such a calculation, however, could easily be misleading because some products matter more than others.
Changes in the prices of goods for which people spend a larger share of their incomes will matter more than changes in the prices of goods for which people spend a smaller share of their incomes. For
example, an increase of 10% in the rental rate on housing matters more to most people than whether the price of carrots rises by 10%. To construct an overall measure of the price level, economists
compute a weighted average of the prices of the items in the basket, where the weights are based on the actual quantities of goods and services people buy. The following Work It Out feature walks you
through the steps of calculating the annual rate of inflation based on a few products.
Calculating an Annual Rate of Inflation
Consider the simple basket of goods with only three items, represented in Table 2. Say that in any given month, a college student spends money on 20 hamburgers, one bottle of aspirin, and five
movies. Prices for these items over four years are given in the table through each time period (Pd). Prices of some goods in the basket may rise while others fall. In this example, the price of
aspirin does not change over the four years, while movies increase in price and hamburgers bounce up and down. Each year, the cost of buying the given basket of goods at the prices prevailing at that
time is shown.
Items Hamburger Aspirin Movies Total Inflation Rate
Qty 20 1 bottle 5 – –
(Pd 1) Price $3.00 $10.00 $6.00 – –
(Pd 1) Amount Spent $60.00 $10.00 $30.00 $100.00 –
(Pd 2) Price $3.20 $10.00 $6.50 – –
(Pd 2) Amount Spent $64.00 $10.00 $32.50 $106.50 6.5%
(Pd 3) Price $3.10 $10.00 $7.00 – –
(Pd 3) Amount Spent $62.00 $10.00 $35.00 $107.00 0.5%
(Pd 4) Price $3.50 $10.00 $7.50 – –
(Pd 4) Amount Spent $70.00 $10.00 $37.50 $117.50 9.8%
Table 2. A College Student’s Basket of Goods
To calculate the annual rate of inflation in this example:
Step 1. Find the percentage change in the cost of purchasing the overall basket of goods between the time periods. The general equation for percentage changes between two years, whether in the
context of inflation or in any other calculation, is:
[latex]\frac{(Level\;in\;new\;year\;-\;Level\;in\;previous\;year)}{Level\;in\;previous\;year} = Percentage\;change[/latex]
Step 2. From period 1 to period 2, the total cost of purchasing the basket of goods in Table 2 rises from $100 to $106.50. Therefore, the percentage change over this time—the inflation rate—is:
[latex]\frac{(106.50\;-\;100)}{100.0} = 0.065 = 6.5\%[/latex]
Step 3. From period 2 to period 3, the overall change in the cost of purchasing the basket rises from $106.50 to $107. Thus, the inflation rate over this time, again calculated by the percentage
change, is approximately:
[latex]\frac{(107\;-\;106.50)}{106.50} = 0.0047 = 0.47\%[/latex]
Step 4. From period 3 to period 4, the overall cost rises from $107 to $117.50. The inflation rate is thus:
[latex]\frac{(117.50\;-\;107)}{107} = 0.098 = 9.8\%[/latex]
This calculation of the change in the total cost of purchasing a basket of goods takes into account how much is spent on each good. Hamburgers are the lowest-priced good in this example, and aspirin
is the highest-priced. If an individual buys a greater quantity of a low-price good, then it makes sense that changes in the price of that good should have a larger impact on the buying power of that
person’s money. The larger impact of hamburgers shows up in the “amount spent” row, where, in all time periods, hamburgers are the largest item within the amount spent row.
Index Numbers
The numerical results of a calculation based on a basket of goods can get a little messy. The simplified example in Table 2 has only three goods and the prices are in even dollars, not numbers like
79 cents or $124.99. If the list of products was much longer, and more realistic prices were used, the total quantity spent over a year might be some messy-looking number like $17,147.51 or
To simplify the task of interpreting the price levels for more realistic and complex baskets of goods, the price level in each period is typically reported as an index number, rather than as the
dollar amount for buying the basket of goods. Price indices are created to calculate an overall average change in relative prices over time. To convert the money spent on the basket to an index
number, economists arbitrarily choose one year to be the base year, or starting point from which we measure changes in prices. The base year, by definition, has an index number equal to 100. This
sounds complicated, but it is really a simple math trick. In the example above, say that time period 3 is chosen as the base year. Since the total amount of spending in that year is $107, we divide
that amount by itself ($107) and multiply by 100. Mathematically, that is equivalent to dividing $107 by 100, or $1.07. Doing either will give us an index in the base year of 100. Again, this is
because the index number in the base year always has to have a value of 100. Then, to figure out the values of the index number for the other years, we divide the dollar amounts for the other years
by 1.07 as well. Note also that the dollar signs cancel out so that index numbers have no units.
Calculations for the other values of the index number, based on the example presented in Table 2 are shown in Table 3. Because the index numbers are calculated so that they are in exactly the same
proportion as the total dollar cost of purchasing the basket of goods, the inflation rate can be calculated based on the index numbers, using the percentage change formula. So, the inflation rate
from period 1 to period 2 would be
[latex]\frac{(99.5\;-\;93.4)}{93.4} = 0.065 = 6.5\%[/latex]
This is the same answer that was derived when measuring inflation based on the dollar cost of the basket of goods for the same time period.
Total Spending Index Number Inflation Rate Since Previous Period
Period 1 $100 [latex]\frac{100}{1.07} = 93.4[/latex]
Period 2 $106.50 [latex]\frac{106.50}{1.07} = 99.5[/latex] [latex]\frac{(99.5\;-\;93.4)}{93.4} = 0.065 = 6.5\%[/latex]
Period 3 $107 [latex]\frac {107}{1.07} = 100.0[/latex] [latex]\frac{(100\;-\;99.5)}{99.5} = 0.005 = 0.5\%[/latex]
Period 4 $117.50 [latex]\frac{117.50}{1.07} = 109.8[/latex] [latex]\frac{(109.8\;-\;100)}{100} = 0.098 = 9.8\%[/latex]
Table 3. Calculating Index Numbers When Period 3 is the Base Year
If the inflation rate is the same whether it is based on dollar values or index numbers, then why bother with the index numbers? The advantage is that indexing allows easier eyeballing of the
inflation numbers. If you glance at two index numbers like 107 and 110, you know automatically that the rate of inflation between the two years is about, but not quite exactly equal to, 3%. By
contrast, imagine that the price levels were expressed in absolute dollars of a large basket of goods, so that when you looked at the data, the numbers were $19,493.62 and $20,009.32. Most people
find it difficult to eyeball those kinds of numbers and say that it is a change of about 3%. However, the two numbers expressed in absolute dollars are exactly in the same proportion of 107 to 110 as
the previous example. If you’re wondering why simple subtraction of the index numbers wouldn’t work, read the following Clear It Up feature.
Why do you not just subtract index numbers?
A word of warning: When a price index moves from, say, 107 to 110, the rate of inflation is not exactly 3%. Remember, the inflation rate is not derived by subtracting the index numbers, but rather
through the percentage-change calculation. The precise inflation rate as the price index moves from 107 to 110 is calculated as (110 – 107) / 107 = 0.028 = 2.8%. When the base year is fairly close to
100, a quick subtraction is not a terrible shortcut to calculating the inflation rate—but when precision matters down to tenths of a percent, subtracting will not give the right answer.
Two final points about index numbers are worth remembering. First, index numbers have no dollar signs or other units attached to them. Although index numbers can be used to calculate a percentage
inflation rate, the index numbers themselves do not have percentage signs. Index numbers just mirror the proportions found in other data. They transform the other data so that the data are easier to
work with.
Second, the choice of a base year for the index number—that is, the year that is automatically set equal to 100—is arbitrary. It is chosen as a starting point from which changes in prices are
tracked. In the official inflation statistics, it is common to use one base year for a few years, and then to update it, so that the base year of 100 is relatively close to the present. But any base
year that is chosen for the index numbers will result in exactly the same inflation rate. To see this in the previous example, imagine that period 1, when total spending was $100, was also chosen as
the base year, and given an index number of 100. At a glance, you can see that the index numbers would now exactly match the dollar figures, the inflation rate in the first period would be 6.5%, and
so on.
Now that we see how indexes work to track inflation, the next module will show us how the cost of living is measured.
Watch this video from the cartoon Duck Tales to view a mini-lesson on inflation.
Key Concepts and Summary
The price level is measured by using a basket of goods and services and calculating how the total cost of buying that basket of goods will increase over time. The price level is often expressed in
terms of index numbers, which transform the cost of buying the basket of goods and services into a series of numbers in the same proportion to each other, but with an arbitrary base year of 100. The
rate of inflation is measured as the percentage change between price levels or index numbers over time.
Self-Check Questions
1. Table 4 shows the prices of fruit purchased by the typical college student from 2001 to 2004. What is the amount spent each year on the “basket” of fruit with the quantities shown in column 2?
Items Qty (2001) Price (2001) Amount Spent (2002) Price (2002) Amount Spent (2003) Price (2003) Amount Spent (2004) Price (2004) Amount Spent
Apples 10 $0.50 $0.75 $0.85 $0.88
Bananas 12 $0.20 $0.25 $0.25 $0.29
Grapes 2 $0.65 $0.70 $0.90 $0.95
Raspberries 1 $2.00 $1.90 $2.05 $2.13 $2.13
Table 4.
2. Construct the price index for a “fruit basket” in each year using 2003 as the base year.
3. Compute the inflation rate for fruit prices from 2001 to 2004.
4. Edna is living in a retirement home where most of her needs are taken care of, but she has some discretionary spending. Based on the basket of goods in Table 5, by what percentage does Edna’s
cost of living increase between time 1 and time 2?
Items Quantity (Time 1) Price (Time 2) Price
Gifts for grandchildren 12 $50 $60
Pizza delivery 24 $15 $16
Blouses 6 $60 $50
Vacation trips 2 $400 $420
Table 5.
Review Questions
1. How is a basket of goods and services used to measure the price level?
2. Why are index numbers used to measure the price level rather than dollar value of goods?
3. What is the difference between the price level and the rate of inflation?
Critical Thinking Questions
Inflation rates, like most statistics, are imperfect measures. Can you identify some ways that the inflation rate for fruit does not perfectly capture the rising price of fruit?
1. The index number representing the price level changes from 110 to 115 in one year, and then from 115 to 120 the next year. Since the index number increases by five each year, is five the
inflation rate each year? Is the inflation rate the same each year? Explain your answer.
2. The total price of purchasing a basket of goods in the United Kingdom over four years is: year 1=£940, year 2=£970, year 3=£1000, and year 4=£1070. Calculate two price indices, one using year 1
as the base year (set equal to 100) and the other using year 4 as the base year (set equal to 100). Then, calculate the inflation rate based on the first price index. If you had used the other
price index, would you get a different inflation rate? If you are unsure, do the calculation and find out.
Sources for Table 1:
US Inflation Calculator. “Historical Inflation Rates: 1914-2013.” Accessed March 4, 2015. http://www.usinflationcalculator.com/inflation/historical-inflation-rates/.
base year
arbitrary year whose value as an index number is defined as 100; inflation from the base year to other years can easily be seen by comparing the index number in the other year to the index number
in the base year—for example, 100; so, if the index number for a year is 105, then there has been exactly 5% inflation between that year and the base year
basket of goods and services
a hypothetical group of different items, with specified quantities of each one meant to represent a “typical” set of consumer purchases, used as a basis for calculating how the price level
changes over time
index number
a unit-free number derived from the price level over a number of years, which makes computing inflation rates easier, since the index number has values around 100
a general and ongoing rise in the level of prices in an economy
Answers to Self-Check Questions
1. To compute the amount spent on each fruit in each year, you multiply the quantity of each fruit by the price.
□ 10 apples × 50 cents each = $5.00 spent on apples in 2001.
□ 12 bananas × 20 cents each = $2.40 spent on bananas in 2001.
□ 2 bunches of grapes at 65 cents each = $1.30 spent on grapes in 2001.
□ 1 pint of raspberries at $2 each = $2.00 spent on raspberries in 2001.
Adding up the amounts gives you the total cost of the fruit basket. The total cost of the fruit basket in 2001 was $5.00 + $2.40 + $1.30 + $2.00 = $10.70. The total costs for all the years are
shown in the following table.
$10.70 $13.80 $15.35 $16.31
Table 6.
2. If 2003 is the base year, then the index number has a value of 100 in 2003. To transform the cost of a fruit basket each year, we divide each year’s value by $15.35, the value of the base year,
and then multiply the result by 100. The price index is shown in the following table.
69.71 84.61 100.00 106.3
Table 7.
Note that the base year has a value of 100; years before the base year have values less than 100; and years after have values more than 100.
3. The inflation rate is calculated as the percentage change in the price index from year to year. For example, the inflation rate between 2001 and 2002 is (84.61 – 69.71) / 69.71 = 0.2137 = 21.37%.
The inflation rates for all the years are shown in the last row of the following table, which includes the two previous answers.
Items Qty (2001) Price (2001) Amount Spent (2002) Price (2002) Amount Spent (2003) Price (2003) Amount Spent (2004) Price (2004) Amount Spent
Apples 10 $0.50 $5.00 $0.75 $7.50 $0.85 $8.50 $0.88 $8.80
Bananas 12 $0.20 $2.40 $0.25 $3.00 $0.25 $3.00 $0.29 $3.48
Grapes 2 $0.65 $1.30 $0.70 $1.40 $0.90 $1.80 $0.95 $1.90
Raspberries 1 $2.00 $2.00 $1.90 $1.90 $2.05 $2.05 $2.13 $2.13
Total $10.70 $13.80 $15.35 $16.31
Price Index 69.71 84.61 100.00 106.3
Inflation Rate 21.37% 18.19% 6.3%
Table 8.
4. Begin by calculating the total cost of buying the basket in each time period, as shown in the following table.
Items Quantity (Time 1) Price (Time 1) Total Cost (Time 2) Price (Time 2) Total Cost
Gifts 12 $50 $600 $60 $720
Pizza 24 $15 $360 $16 $384
Blouses 6 $60 $360 $50 $300
Trips 2 $400 $800 $420 $840
Total Cost $2,120 $2,244
Table 9.
The rise in cost of living is calculated as the percentage increase:
(2244 – 2120) / 2120 = 0.0585 = 5.85%. | {"url":"https://pressbooks-dev.oer.hawaii.edu/principlesofeconomics/chapter/22-1-tracking-inflation/","timestamp":"2024-11-07T13:22:11Z","content_type":"text/html","content_length":"148197","record_id":"<urn:uuid:1fdfc6d1-2367-427c-8d77-75a409f95102>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00458.warc.gz"} |
General report on machine learning experiments for the Möbius function
Last week, I was at the Mathematics and Machine Learning program at Harvard's Center of Mathematical Sciences and Applications.^1 ^1Later Update: I'll be talking about this and related experiments on
October 29th at Harvard. The talk should be made available on youtube. The underlying topic was on number theory and I've been studying various number theoretic problems from a machine learning
I've been computing several experiments related to estimating the Mobius function $\mu(n)$. I don't expect $\mu(n)$ to be easily approximable; all earlier attempts to study $\mu$ using machine
learning have resisted much success. This is perhaps related to Mobius Randomness.^2 ^2See for example Peter Sarnak's Three Lectures on the Mobius Function Randomness and Dynamics.
Previous machine learning experiments on studying $\mu(n)$ have used neural networks or classifiers. Francois Charton made an integer sequence to integer sequence transformer-based translator,
Int2Int, and I thought it would be fun to see if this works any different.
Initially, I sought to get Int2Int to work. Then I set it on various examples. I describe some of them here.
I'm splitting my description into two parts: a general report and a technical report. This is the general report.^3 ^3This is also available as a pdf. The technical^4 ^4 By "technical" here, I mean
pertaining to technology (i.e. to programming). Both notes are nonelementary. But I acknowledge that there are very few people who are experts in both number theory and machine learning. report
includes technical details for running or re-running Int2Int experiments and other programming-related aspects.
Mobius Function
Recall that the mobius function $\mu(n)$ is $0$ if the square of any prime divides $n$, and otherwise is $(-1)^\omega(n)$, where $\omega(n)$ is the number of prime divisors of $n$. For example, $\mu
(1) = 1, \mu(2) = \mu(3) = -1, \mu(4) = 0, \mu(5) = -1, \mu(6) = 1,$ and so on.
Int2Int takes as input a sequence of integers, and the output is a sequence of integers. I struggled to make sense of studying many outputs, but this is really my own problem.
Inputs and Outputs for Möbius
Int2Int takes sequences of integers as input and produces sequences of integers as output. I tried several variations to estimate $\mu(n)$, including
1. Input just $n$ and output $\mu(n)$. (Or rather, make sure I can get Int2Int to process anything at all with the simplest possible example).
2. Input $n \bmod p$ and $p$ for the first $100$ primes.
3. Input $n \bmod p$ and $p$ for the second $100$ primes.
4. Input the Legendre symbol $(n/p)$ for the first $100$ primes.
5. Input $n$, $n \bmod p$, and $(n/p)$ for the first $100$ primes.
For each of these, I estimated $\mu(n)$, $\mu^2(n)$, and $\mu(n+1)$. The input $n$ were sampled uniformly randomly from $n$ between $2$ and $10^{13}$ (with a few larger experiments here and there),
using training sets between $2\cdot10^6$ for initial runs and $5\cdot10^{7}$ to investigate further. I also trained over $n$ restricted to be squarefree.
Better than Random: squarefree detection
I quickly saw that Int2Int could guess $\mu(n)$ better than random guesses. But the reason why was because it was determining if $n$ was squarefree or not with reasonable accuracy.^5 ^5This is
similar to a pattern observed by Jordan Ellenberg when attempting to train neural networks to estimate $\mu(n)$. The network seemed to figure out eventually that $4 \mid n \implies \mu(n) = 0$, and
then sometime later that $9 \mid n$ also implies $\mu(n) = 0$. Presumably it would figure out other squares later, eventually.
The Int2Int models were determining whether $n$ was squarefree or not with very high accuracy, and then guessing $\mu(n) = \pm 1$ randomly when it thought $n$ was squarefree. Some of these models
were guessing $\mu(n)$ correctly around $60$ percent of the time: far better than chance.
Looking closer, the best-performing model (which also had the most data: $n, n \bmod p,$ and $(n/p)$ for the first $100$ primes $p$) correctly recognized almost $92$ percent of squareful numbers,^6 ^
6Be careful with what is the condition here. In particular it doesn't say that the model computes $\mu(n)$ correctly $92$ percent of the time. but only correctly recognized whether $\mu(n) = \pm 1$
about $40$ percent of the time. Using that the density of squareful numbers is about $0.39$, this gave the overall correctness at \begin{equation*} 0.39 \cdot 0.92 + 0.61 \cdot 0.4 \approx 0.6, \end
{equation*} recovering the approximately $60$ percent overall correctness. The model tended to overestimate the number of squareful numbers and guessed that several squarefree numbers were squareful.
This occurred quickly when trained using quadratic residue symbols. I wasn't initially surprised by this because of course Legendre symbols include information about squares. Thus it should be
possible to quickly train a network to recognize most squares given $(n/p)$ for the first $100$ primes (most numbers are divisible mostly by small primes, and hence checking small prime behavior
usually suffices).
But here we're looking at numbers that are or are not squarefree: multiplying a square by a squarefree number mixes up all the quadratic residues and nonresidues.
With a bit more training, having only $n \bmod p$ for the first $100$ primes produced very similar behavior. How was it doing this?
This is an interesting purely mathematical question: how would you guess whether $n$ is squarefree or not given $n \bmod p$ for lots of primes $p?$
One way would be to perform the Chinese remainder theorem, reconstruct $n$, and then actually check. Is the model recognizing something like this?
To test, I ran several experiments along the following lines:
1. Given $(n \bmod p)$ for the first $100$ primes, output if $n$ is in the interval $[10^6, 2 \cdot 10^6]$.
2. Given $(n \bmod p)$ for the first $100$ primes but excluding $7$, output $n \bmod 7$.
These probe CRT-type knowledge. I sample input $n$ uniformly at random from large intervals. The frequencies of each residue class should be approximately uniformly randomly distributed.
But the model never did better than random guessing on either of this type of experiment. I guess the model isn't recovering CRT-like information.
I'm also looking to determine behavior mod $p^2$ or $p^3$ using this type of transformer model. This is similar to CRT-like information, but slightly different. I'll talk about this later.
How to guess if $\mu(n) = 0$ given $n \bmod p$
After talking with Noam Elkies and Andrew Sutherland, I think I know how the model is guessing when $\mu(n) = 0$ with such high accuracy. The point is that numbers that are not squarefree are
probably divisible by a small square and thus likely to be $0$ mod a small prime. Numbers that are squarefree might be $0$ mod a small prime, but not as often.
Let's look at this in greater detail.
The zeta function associated to squarefree numbers is
$$\zeta_{\mathrm{SF}}(s) = \prod_p\Big( 1 + \frac{1}{p^s} \Big) = \zeta(s) / \zeta(2s).$$ Thus the ratio of numbers up to $X$ that are squarefree is about^7 ^7This is tangentially related to my note
from yesterday
$$\mathrm{Res}_{s = 1} \zeta(s)/\zeta(2s) = 1/\zeta(2) = \frac{6}{\pi^2} \approx 0.6079$$
The default algorithm to use would be to guess that every integer is squarefree: this is right just over $60$ percent of the time. We need to do better than that.
The zeta function associated to even squarefree numbers is $$\frac{1}{2^s} \prod_{\substack{p \\ p eq 2}} \Big( 1 + \frac{1}{p^s} \Big) = \frac{1}{2^s} \frac{\zeta^{(2)}(s)}{\zeta^{(2)}(2s)} = \frac
{1}{2^s} \frac{(1 - 1/2^s)}{(1 - 1/4^s)} \frac{\zeta(s)}{\zeta(2s)}.$$ It follows that the ratio of numbers up to $X$ that are even and squarefree is about $$\frac{1}{2} \frac{1/2}{3/4} \frac{6}{\pi^
2} = \frac{1}{3} \frac{6}{\pi^2}.$$ This implies that the remaining $\frac{2}{3} \frac{6}{\pi^2} X$ squarefree integers up to $X$ are odd. We could see this directly by noting that the corresponding
zeta function is $$\prod_{\substack{p \\ p eq 2}} \Big( 1 + \frac{1}{p^s} \Big) = \frac{(1 - 1/2^s)}{(1 - 1/4^s)} \frac{\zeta(s)}{\zeta(2s)},$$ and computing the residue as $(1/2)/(3/4) \cdot \frac
{6}{\pi^2} = \frac{2}{3} \frac{6}{\pi^2}$.
A squarefree integer is twice as likely to be odd as even.
For this classification problem, we're interested in the converse conditional: what is the probability that $n$ is squarefree given that it is even (or odd)? Basic probability shows that $$P(\mathrm
{sqfree} | \mathrm{even}) = \frac{P(\mathrm{even \; and \; sqfree})}{P(\mathrm{even})} = \frac{\frac{1}{3} \frac{6}{\pi^2}}{\frac{1}{2}} \approx 0.4052$$ and $$P(\mathrm{sqfree} | \mathrm{odd}) = \
frac{P(\mathrm{odd \; and \; sqfree})}{P(\mathrm{odd})} = \frac{\frac{2}{3} \frac{6}{\pi^2}}{\frac{1}{2}} \approx 0.8105.$$
This already gives a better-than-naive strategy: if $n$ is even, guess that it's not squarefree (correct about $1 - 0.4052 \approx 0.6$ of the time); if $n$ is odd, then guess squarefree (correct
about $0.8105$ of the time). This should be correct about $0.5 \cdot (1 - 0.4052) + 0.5 \cdot (0.8105) \approx 0.7$ (or actually $0.7026423\ldots$) of the time.
As $0.7 > 6/\pi^2$, this type of thinking is an improvement.
This readily generalizes to other primes. The Dirichlet series for squarefree numbers that are divisible by a fixed prime $q$ is $$\label{eq:euler} \frac{1}{q^s} \prod_{\substack{p \\ p eq q}} \Big(
1 + \frac{1}{p^s} \Big) = \frac{1}{q^s} \frac{(1 - 1/q^s)}{(1 - 1/q^{2s})} \frac{\zeta(s)}{\zeta(2s)},$$ and the series for squarefree numbers that aren't divisible by a fixed prime $q$ is the same,
but without $q^{-s}$. Thus the percentage of integers that are squarefree and divisible by $q$ or not divisible by $q$ are, respectively, $$\label{eq:local_densities} \frac{1}{q+1} \frac{6}{\pi^2} \
quad \text{and} \quad \frac{q}{q+1} \frac{6}{\pi^2}.$$ Playing conditional probabilities as above shows that \begin{align*} P(\text{sqfree} | \text{q-even}) &= \frac{P(\text{sqfree and q-even})}{P(\
text{q-even})} = \frac{q}{q+1} \frac{6}{\pi^2} \\ P(\text{sqfree} | \text{q-odd}) &= \frac{P(\text{sqfree and q-odd})}{P(\text{q-odd})} = \frac{q^2}{q^2 - 1} \frac{6}{\pi^2}. \end{align*} I use the
adhoc shorthand $q$-even to mean divisible by $q$, and $q$-odd to mean not divisible by $q$.
The differences are the largest when the prime $q$ is small. A good strategy would then be to look at a couple of small primes $q$ and then predict whether $n$ is squarefree based on divisibility
rules for the primes $q$.
I've ignored all the joint probabilities. These are explicitly computable by computing the local densities at the appropriate primes, as above. But the point is that divisibility by primes $q$
correlates nontrivially with being squarefree, and this sort of table correlation is something that we should expect machine learning to figure out.
Explicit computation shows that using the first $20$ primes and guessing squarefree or not squarefree based on which divisibility pattern of those primes is more common yields an overall correct rate
of $70.3$ percent, only $0.1$ percent higher than using $2$ alone.
We can hope that machine learning algorithms could do better. Computing the table of cross correlations given sufficient data isn't hard. But ML models should also determine weights to use to better
predict outcomes. Predicting what ML can or can't do is much harder.
Pure squarefree detection
With the same inputs, I looked at guessing $\mu(n)^2$. That is, I tried to look just at the squarefree detection powers.
Overall, the models were correct about $70$ percent of the time. This is consistent with the above behavior and with the heuristic that it could only use mod $2$ information.
Restricting to squarefree $n$
In the other direction, I also restricted all inputs to squarefree $n$. This balances the expected outputs: about $50$ percent each should correspond to $-1$ and about $50$ percent should correspond
to $1$. Any prediction with accuracy greater than $50$ percent would be a major achievement.
But ultimately none of the models I checked did any better than $50$ percent consistently.
Removing $2$
Still input $n \bmod p$ for $100$ primes, but use the $100$ primes after $2$. As we saw above, $2$ has the most explanatory power using pure Bayesian probability. This asks: is the machine learning
doing anything else other than the $2$-based cross correlations described above?
In short, the performance plummeted to less than $50$ percent accuracy for guessing $\mu(n)$. The performance was consistent with determining whether $n$ was squarefree correctly about $60$ percent
of the time, and then guessing randomly between $+1$ and $-1$ when $n$ was determined to be squarefree.
And this is consistent with using the pure Bayesian probabilistic approach on exactly the prime $3$. Indeed, the probability that $n$ is squarefree given that $3$ divides $n$ is $(3/4) \cdot 6/\pi^2
\approx 0.4559$, and the probability that $n$ is squarefree given that $3$ doesn't divide $n$ is $(9/8) \cdot 6/\pi^2 \approx 0.6839$. Thus $1/3$ of the time, we would guess "not squarefree" with
accuracy $1 - 0.4559$ and the rest of the time we would guess "squarefree" with accuracy $0.6839$, giving a total accuracy around \begin{equation*} (1/3) \cdot (1 - \tfrac{3}{4} 6/\pi^2) + (2/3) \
cdot \tfrac{9}{8} 6/\pi^2 \approx 0.6372. \end{equation*}
Info on how to comment
To make a comment, please send an email using the button below. Your email address won't be shared (unless you include it in the body of your comment). If you don't want your real name to be used
next to your comment, please specify the name you would like to use. If you want your name to link to a particular url, include that as well.
bold, italics, and plain text are allowed in comments. A reasonable subset of markdown is supported, including lists, links, and fenced code blocks. In addition, math can be formatted using $(inline
math)$ or $$(your display equation)$$.
Please use plaintext email when commenting. See Plaintext Email and Comments on this site for more. Note also that comments are expected to be open, considerate, and respectful. | {"url":"https://davidlowryduda.com/ml-mobius-general/","timestamp":"2024-11-14T05:26:59Z","content_type":"text/html","content_length":"20140","record_id":"<urn:uuid:459eee20-75d7-4bbf-96df-1abeab094c5e>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00550.warc.gz"} |
MATH 348
Welcome to Topology! For course info and policies, please see the syllabus. For grades, log into Moodle. If you need help, contact Prof. Wright.
Prof. Wright's office hours: Mon. 9–10am, Tues. 10–11am, Wed. 2:30–3:30pm, Thurs. 1–2pm, Fri. 11am–noon, and other times by appointment (in RMS 405)
Have a great fall break! No class October 15.
Have a great Thanksgiving break! No class November 28. | {"url":"http://mrwright.org/teaching/math348f24/","timestamp":"2024-11-04T01:11:26Z","content_type":"text/html","content_length":"32106","record_id":"<urn:uuid:091b0d9b-ca79-4eaf-a135-6111138608ab>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00198.warc.gz"} |
top of page
University of Illinois at Chicago (UIC)
• Ph.D. in Mathematics (June 1990) Major: Harmonic Analysis, Minor: Probability Theory
• D.A. in Mathematics (May 1997) Major: Mathematics Education, Minor: Applied Statistics
• M.S. in Mathematics (June 1986) Major: Applied Mathematics
• Honors (1989-1990) Fellowship - U.S. Department of Education/UIC - Department of Mathematics, Statistics, and Computer Science
San Jose State University, San Jose, California
• M.S. in Engineering (1971) Major: Electrical Engineering and Computer Science; Minor: Mathematics
University of California at Berkeley
• B.S. in Engineering (1967) Major: Electrical Engineering and Computer Science; Minor: Mathematics
Richard J. Daley College, Chicago, Illinois
Distinguished Professor of Mathematics
Loyola University Chicago, Chicago, Illinois
Adjunct Professor of Mathematics
California State University, Dominguez Hills, Carson, California
Visiting Full Time Mathematics Faculty (1990-1991)
University of Southern California (USC)
Adjunct Mathematics Faculty (1990-1991)
Santa Monica College, Santa Monica, California
Adjunct Mathematics Faculty (1990-1991)
Tehran Technical College, Tehran, Iran
Full Time Mathematics Faculty (1979-1981)
Chicago State University, Chicago, Illinois
Lecturer in Mathematics (1974-1979 & 1981-1982)
Central YMCA Community College, Chicago, Illinois
Mathematics Instructor (1974-1979)
Chicago Technical College, Chicago, Illinois
Mathematics Instructor (1974-1976)
I have served as active member of several committees, including the Faculty Council; Curriculum Committee; Rank and Promotion Committee; Budget Committee; Electronics Technology Advisory Committee;
Nursing General Education Advisory Council; Project Upward Bound Committee; Dean of Instruction and Dean of Students Selection Committees; UIC/CCC Math Articulation Committee; College NCA Steering
Committee; Institutional Effectiveness Committee; Assessment Committee; Strategic Planning Committee; Engineering Science Advisory Council, and College Leadership Team. I modernized and supervised
the Mathematics Laboratory (1987-1996), developed curricula (Technical Mathematics, Mathematics for Elementary Teachers Sequence I & II, and Honors Discrete Mathematics). I also served as the Chair
of the Mathematics Department for fifteen years.
Professional Organizations
• American Mathematical Society (AMS)
• American Mathematical Association of Two-Year Colleges (AMATYC)
• Mathematical Association of America (MAA)
• Illinois Council of Teachers of Mathematics (ICTM)
• American Educational Research Association (AERA)
bottom of page | {"url":"https://www.valisiadat.com/profile","timestamp":"2024-11-10T19:30:14Z","content_type":"text/html","content_length":"305980","record_id":"<urn:uuid:69f203aa-ca19-4f0c-b214-c1a8c0a8503b>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00140.warc.gz"} |
Introduction to Complex Numbers
This ebook makes learning "complex" numbers easy through an interactive, fun and personalized approach. Features include: live YouTube video streams and closed captions that translate to 90
Complex numbers "break all the rules" of traditional mathematics by allowing us to take a square root of a negative number. This "radical" approach has fundamentally changed the capabilities of
science and engineering to enhance our world through such applications as: signal processing, control theory, electromagnetism, fluid dynamics, quantum mechanics, cartography, and vibration analysis.
A particularly beautiful connection between art and complex numbers lies in fractals, such as the Mandelbrot set. | {"url":"https://bookboon.com/sv/introduction-to-complex-numbers-ebook","timestamp":"2024-11-11T12:43:53Z","content_type":"text/html","content_length":"99597","record_id":"<urn:uuid:0962ea80-0bf3-4451-beb0-540963bbeec5>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00550.warc.gz"} |
Optionize: Blog: The Sell Off in Biotech is Coming to an End
The Sell Off in Biotech is Coming to an End
Posted by Derek Tomczyk on Apr 13, 2014 - 12:00am
The biotech ETF IBB has taken a beating over the past couple of months with a total decline of 20% since Feb 25th. It’s very tempting to look at this as a buying opportunity so let’s see what the
historical probabilities tell us.
The histograms below have the probabilities in percentage on the left y-axis and ranges of prices/returns on the bottom x-axis. The dark green indicates historical probabilities without consideration
for the current point in time while light green probabilities are calculated based on periods with similar characteristics to today.
First probabilities for May 17th
There is a slight tilt to extreme outcomes but if anything the probabilities are skewed slightly negative as compared to normal. The stats tell us a similar story with the range of outcomes showing a
negative slant
While the expected return is positive it is less so than normally and the probability of a positive investment period is lower than normal. It does not appear IBB offers a good risk/reward scenario
at this time, at least not with the holding period ending on May 17th.
Next probabilities for June 21st
The probabilities for June 21st are decidedly shifted to the positive. There is definitely a higher than normal probability for positive outcomes, however, there is also a more than usual probability
of extreme negative outcomes.
The stats also tell us that the odds are tilted in the favor of IBB for a holding period ending on June 21st. The probability of a positive investment return is above normal and the average rally is
above normal. However, the risk of taking an outright IBB position is also above normal with an average drop of 15.1% in case of a negative return.
The option contracts on IBB expiring on Jun 21st are the perfect way to play this situation and while the premiums have increased they have not increased enough across all strikes to reduce the
expected returns below acceptable levels. However, given the murky picture for May 17th a little bit of patience may be a good idea prior to initiating a position.
Click here to search for IBB options that will generate a profit by June 21st given an average rally
blog comments powered by
Apr 17, 2015 - 12:00am
The S&P 500 Still Not Overvalued Taking Into Account Interest Rate Environment
What we see is that outside of the depths of the 2008 financial crisis, the S&P 500 is still the cheapest it has been since around 1988. What is also evident is that the bull market that followed
1988 drove stock valuations to extremely overvalued levels but did not actually end until 2001.
read more ...
Jun 9, 2014 - 12:00am
Earnings Yield On S&P 500 Points To Higher Levels By End Of 2014
In a lower interest rate environment the required return on capital on equity investments should be lower than would otherwise be the case. The longer this environment is expected to persist the
lower the required rate of return on perpetuities such as equities would logically be.
read more ...
Apr 16, 2014 - 12:00am
The Odds for a NASDAQ 100 rebound
The probability of a large loss on QQQ by June 21st, in direct contrast to IBB, is far below normal so exposing yourself to the risk is a better choice than paying a premium for protection. There is
a also a larger than normal probability of a return up to 9% so a leveraged position may a good choice at this time. Therefore much better opportunities for a call option positions out of the money
lie in the May 17th option chain.
read more ...
Apr 13, 2014 - 12:00am
The Sell Off in Biotech is Coming to an End
The stats also tell us that the odds are tilted in the favor of IBB for a holding period ending on June 21st. The probability of a positive investment return is above normal and the average rally is
above normal. However, the risk of taking an outright IBB position is also above normal with an average drop of 15.1% in case of a negative return.
read more ...
Jan 15, 2014 - 12:00am
The Odds 2014 Will Be Just As Good As 2013
After a great year for the S&P 500 (SPY) in 2013, it's natural to wonder whether that type of pace can be maintained going forward into next year.
read more ...
Oct 17, 2013 - 8:33pm
The home page has been updated with a cumulative performance graph.
read more ...
May 27, 2013 - 10:53pm
How To Protect Yourself From Market Corrections – Or Worse
While personally I think the overall bull market has a long way to go there is no such thing as a sure thing in investing. I consciously keep in mind the small probability that this will be the third
top of the new century before we head lower. However, while I admit the probability of this happening, I do not worry about it, because if it does, I will come out more than alright.
read more ...
May 7, 2013 - 6:28pm
• Revamped home page
• Protection Calculator
• Free members section including new matrix calculators
• Graphical representations of calculator results
• Option indicators now using put/call ratios
read more ...
Apr 11, 2013 - 9:01pm
New calculators, put option support and HTML5 revamp!
The website just got a nice shiny HTML5 face-lift. New calculators were added. Put option support was implemented and a yield curve page is now also available.
read more ...
Feb 20, 2013 - 12:00am
The best way to think about gold price is to think in terms of exchange rates with gold being just another currency. The "Purchasing Power Parity" theory tells us that if the nominal price level
increases in one currency (assuming no price level change in the other currency) then the exchange must deteriorate in favor of the other currency by the same percentage. In other words, the real
exchange rate should always be static.
read more ...
Copyright © 2010-2021 Optionize Inc. All rights Reserved.
The information contained on this website is for information purposes only. Optionize Inc. is not responsible for the accuracy or timelines of any of the information. Please verify all prices with
your broker prior to trading. | {"url":"https://www.optionize.net/blog/blog?id=35","timestamp":"2024-11-13T19:27:30Z","content_type":"application/xhtml+xml","content_length":"15558","record_id":"<urn:uuid:3aced0da-b663-4059-a7db-998f7e781191>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00420.warc.gz"} |
RISC Activity Database
author = {Ali Kemal Uncu},
title = {{On double sum generating functions in connection with some classical partition theorems }},
language = {english},
abstract = { We focus on writing closed forms of generating functions for the number of partitions with gap conditions as double sums starting from a combinatorial construction. Some examples of
the sets of partitions with gap conditions to be discussed here are the set of Rogers--Ramanujan, Göllnitz--Gordon, and little Göllnitz partitions. This work also includes finding the finite
analogs of the related generating functions and the discussion of some related series and polynomial identities. Additionally, we present a different construction and a double sum representation
for the products similar to the ones that appear in the Rogers--Ramanujan identities. },
journal = {ArXiv e-prints},
pages = {1--20},
isbn_issn = {N/A},
year = {2018},
refereed = {yes},
length = {20} | {"url":"https://www3.risc.jku.at/publications/show-bib.php?activity_id=5800","timestamp":"2024-11-11T21:26:56Z","content_type":"text/html","content_length":"3585","record_id":"<urn:uuid:951af242-823f-48a8-8776-ac82263cca01>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00016.warc.gz"} |
Session 8: <em>Ensemble of Score Likelihood Ratios for the common source problem</em>
Forensic Statistics
Machine learning-based Score Likelihood Ratios have been proposed as an alternative to traditional Likelihood Ratios and Bayes Factor to quantify the value of evidence when contrasting two opposing
Under the common source problem, the opposing proposition relates to the inferential problem of assessing whether two items come from the same source. Machine learning techniques can be used to
construct a (dis)similarity score for complex data when developing a traditional model is infeasible, and density estimation is used to estimate the likelihood of the scores under both propositions.
In practice, the metric and its distribution are developed using pairwise comparisons constructed from a sample of the background population. Generating these comparisons results in a complex
dependence structure violating assumptions fundamental to most methods.
To remedy this lack of independence, we introduce a sampling approach to construct training and estimation sets where assumptions are met. Using these newly created datasets, we construct multiple
base SLR systems and aggregate their information into a final score to quantify the value of evidence.
Our experimental results show that this ensembled SLR can outperform traditional SLR in terms of the rate of misleading evidence, discriminatory power and show they are more reliable.
Start Date
2-7-2023 11:00 AM
End Date
2-7-2023 12:00 PM
Feb 7th, 11:00 AM Feb 7th, 12:00 PM
Session 8: Ensemble of Score Likelihood Ratios for the common source problem
Pasque 255
Machine learning-based Score Likelihood Ratios have been proposed as an alternative to traditional Likelihood Ratios and Bayes Factor to quantify the value of evidence when contrasting two opposing
Under the common source problem, the opposing proposition relates to the inferential problem of assessing whether two items come from the same source. Machine learning techniques can be used to
construct a (dis)similarity score for complex data when developing a traditional model is infeasible, and density estimation is used to estimate the likelihood of the scores under both propositions.
In practice, the metric and its distribution are developed using pairwise comparisons constructed from a sample of the background population. Generating these comparisons results in a complex
dependence structure violating assumptions fundamental to most methods.
To remedy this lack of independence, we introduce a sampling approach to construct training and estimation sets where assumptions are met. Using these newly created datasets, we construct multiple
base SLR systems and aggregate their information into a final score to quantify the value of evidence.
Our experimental results show that this ensembled SLR can outperform traditional SLR in terms of the rate of misleading evidence, discriminatory power and show they are more reliable. | {"url":"https://openprairie.sdstate.edu/datascience_symposium/2023/sessions/18/","timestamp":"2024-11-06T18:31:30Z","content_type":"text/html","content_length":"40742","record_id":"<urn:uuid:7b01e8d8-4720-4173-bb15-1dda8e36f23c>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00459.warc.gz"} |
Factor Boosts
Factor Boosts are the 2^nd Prestige Layer and the 3^rd Prestige Mechanic in Ordinal Markup. It is usable once the player reaches, at the minimum, g[ψ(Ω^Ω)] (10) Ordinal Points.
Successfully performing a Factor Boost resets everything up until that point, including Factor Shifts. With this, it rewards you with Boosters, which can be spent on Booster Upgrades or saved for
later use or for a boost to Tier 2 Automation from the u22 Booster Upgrade. The determined amount of Boosters you have is shown using the Formula ${\displaystyle \frac{n^2+n}{2}}$, where ${\
displaystyle n}$ signifies how many Factor Boosts you have performed in the Collapse. Factor Boosting resets everything up until that point, resetting everything that Factor Shifting resets, and
Factor Shifts themselves.
FB1 is initially disguised as FS8 in the Markup tab, requiring g[ψ(Ω^Ωω)] (10) OP to perform. When it is performed, you will unlock the Boosters tab permanently. Post-Collapse at FB1, it is no longer
disguised, and now costs g[ψ(Ω^Ω)] (10) OP. Every Factor Boost afterwards will not be disguised, and require an exponentially larger Ordinal/OP amount to perform. The succeeding Factor Boost will
typically take 3x as much time as the previous one, but it can take 9x longer, 27x longer, or even up to 243x longer.
Performing a Factor Boost[ ]
In order to perform a Factor Boost, the player must reach g[ψ(Ω^Ωω)] (10) Ordinal Points, as said before. The gameplay can change depending on your progression. In the first few runs, you will have
to alternate clicking the Markup button and the Max All buttons for Autoclickers in the Markup tab in order to raise your Ordinal and OP any further.
Once the Markup and Max All Autobuyers are bought, this process will become automatic. OP multipliers do not apply to Ordinal Points after reaching ψ(Ω).
Bulk Factor Boosting[ ]
Bulk Boosting is a useful feature in Ordinal Markup. Once you have performed FB1 the first time ever, you will unlock this feature. With this feature, you are able to perform many Factor Boosts at
once, if you have enough OP value. This feature becomes even more useful post-Collapse, where you will progress through Factor Boosts quickly.
When these criteria are met:
• Auto Max All active^[1]
• Complete SM12 (as it unlock Autoprestigers)
• Auto Shift on
• Auto Boost on
• Without u23 (as it will always set base to 5 after Factor Boost once)
• Not in any Normal Challenges (It was in Factor Boosts tier, include Omega Challenge 1)
You can passive produce factor boosts, based on your Tier 1, 2 and 3 automation speed, passive OP production and Factor Boosts multiplier, etc...
Factor Boosts per Second[ ]
When you reach the Singularity and can gain a small amount of FB/s, you will find that there's a quote in the Markup tab that displays You should be getting a total of [x] Factor Boost(s) per second.
That means you have reached an ℵ[2] rate to more than enough that you barely can see the OP changing! The formula of the FB/s is about${\displaystyle \log _{3}({\text{a}}\div
48,630,661,836,227,715,204)\times {\text{p}}\times ((1+2{\text{s}})^{f})\times 2.9452^{\text{o}}}$,where ${\displaystyle {\text{a}}}$ is your Tier 2 Automation clicks per second, ${\displaystyle {\
text{p}}}$ is your current ℵ[2] power, ${\displaystyle \text{s}}$ is your Singularity level, ${\displaystyle \text{f}}$ represents 1.4 if you've bought SFU72, and 1 otherwise, and ${\displaystyle {\
text{o}}}$ is your current completions of Omega Challenge 4.
Factor Boost Costs[ ]
Factor Boost Cost
Factor Boost Cost (OP) Time compared to previous Factor Boost Clicks Needed Time Ranking (Maybe wrong since developer try to reduce timewall)
1 ^[2] g[ψ(Ω^Ωω)] (10) (g[ψ(Ω^Ω)] (10) after Collapse) N/A 109 (108 after Collapse) N/A
2 g[ψ(Ω^Ω+1)] (10) ~2.9724x (3x after Collapse) 324 N/A
3 g[ψ(Ω^Ω+2)] (10) 3x 972 23rd (10m)
4 g[ψ(Ω^Ω2)] (10) 9x 8,748 22nd (10m)
5 g[ψ(Ω^Ω2+1)] (10) 3x 26,244 21st (~40m)
6 g[ψ(Ω^Ω2+2)] (10) 3x 78,732 20th (1h 30m) (maybe only 10m?)
7 g[ψ(Ω^Ω^2)] (10) 27x 2.125e6 2nd (1d 12h) (maybe only 40m)
8 g[ψ(Ω^Ω^2+1)] (10) 3x 6.377e6 19th (2h-3h)
9 g[ψ(Ω^Ω^2+2)] (10) 3x 1.913e7 9th (10h-12h)
10 g[ψ(Ω^Ω^2+Ω)] (10) 9x 1.721e8 16th (4h 30m)
11 g[ψ(Ω^Ω^2+Ω+1)] (10) 3x 5.165e8 6th (16h)
12 g[ψ(Ω^Ω^2+Ω+2)] (10) 3x 1.549e9 4th (18h)
13 g[ψ(Ω^Ω^2+Ω2)] (10) 9x 1.394e10 3rd (1d)
14 g[ψ(Ω^Ω^2+Ω2+1)] (10) 3x 4.184e10 10th (9h-10h)
15 g[ψ(Ω^Ω^2+Ω2+2)] (10) 3x 1.255e11 11th (8h-10h) {maybe only 3h}
16 g[ψ(Ω^Ω^22)] (10) 27x 3.389e12 13th (8h) (maybe only 1h 45m)
17 g[ψ(Ω^Ω^22+1)] (10) 3x 1.016e13 8th (11h-14h) (maybe only 2h 45m)
18 g[ψ(Ω^Ω^22+2)] (10) 3x 3.050e13 17th (3h-4h)
19 g[ψ(Ω^Ω^22+Ω)] (10) 9x 2.745e14 5th (16h)
20 g[ψ(Ω^Ω^22+Ω+1)] (10) 3x 8.235e14 7th (12h)
21 g[ψ(Ω^Ω^22+Ω+2)] (10) 3x 2.470e15 18th (3h)
22 g[ψ(Ω^Ω^22+Ω2)] (10) 9x 2.223e16 12th (8h)
23 g[ψ(Ω^Ω^22+Ω2+1)] (10) 3x 6.670e16 15th (4h)
24 g[ψ(Ω^Ω^22+Ω2+2)] (10) 3x 2.001e17 14th (5h)
25+ g[BHO] (10)^[3] 243x 4.863e19 1st (1d 12h) (maybe only 11h)
Tips for Factor Boosts[ ]
Focus on the Booster Upgrades that boost Tier 2 Automation. The best setup is u223, which includes u22, u11 (which benefits Challenge Multipliers which boosts Tier 2 Automation), u12 and u13 (for
enabling Tier 2), u32 and u12 (for automatic OP gain), and u33 (which also benefits Challenge Multipliers). Prioritize u11, u22 and u33 over the rest of the Booster Upgrades, as they (indirectly)
boost Tier 2 Automation.
Trivia[ ]
• The Singularity raises the requirement for Factor Boosts 25 or more, but allows you to get multiple of them at once, even before FB25.
• The Hotkey for this is B.
1. ↑ This introduce in v0.32, at this time, auto Max All isn't even required, which allow you to producing Factor Boosts without having Factor Boosts, fortunately now it was fixed.
2. ↑ Prior to Collapsing, this is instead FS8.
3. ↑ If the Singularity is unlocked, then this value would be the stated requirement shown in the Singularity sub-tab of the Collapse tab. | {"url":"https://ordinal-markup.fandom.com/wiki/Factor_Boosts","timestamp":"2024-11-12T07:36:48Z","content_type":"text/html","content_length":"185971","record_id":"<urn:uuid:644c7a99-1949-42b0-b6ad-e8ab219180df>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00140.warc.gz"} |
Squares & Cubes Worksheet (printable, online, answers, examples)
There are five sets of exponents worksheets:
Examples, solutions, videos, and worksheets to help Grade 6 students learn how to calculate squares and cubes of a number. The number, also called the base, can be a positive or negative whole
number, a positive or negative fraction, a positive or negative decimal.
How to calculate squares and cubes of a number?
There are four sets of squares and cubes worksheets:
• Positive Whole Number Base
• Negative Whole Number Base
• Fractional Base (Positive or Negative)
• Decimal Base (Positive or Negative)
Exponents, also known as powers or indices, are mathematical expressions that represent repeated multiplication of a base number by itself. They are a shorthand way of writing large or small numbers.
In an exponent expression, the base number is raised to a certain power, which is indicated by a superscript or a raised number to the right of the base. The power represents the number of times the
base is multiplied by itself.
For example, in the expression 2^3, the base is 2, and the power is 3. This means that 2 is multiplied by itself three times: 2 × 2 × 2 = 8. So, 2^3 equals 8.
Squares and cubes are special cases of exponents where the power is specifically 2 or 3, respectively.
Squares: When a number is raised to the power of 2, it is called a square. Squaring a number means multiplying it by itself. For example, 3 squared (written as 3^2) is 3 × 3 = 9. Similarly, (-2)^2 is
(-2) × (-2) = 4.
Squares have these properties:
• The square of a positive number is always positive.
• The square of a negative number is always positive.
• The square of 0 is 0.
• The square of a number greater than 1 is greater than the original number, while the square of a number between 0 and 1 is smaller than the original number.
Cubes: When a number is raised to the power of 3, it is called a cube. Cubing a number means multiplying it by itself twice. For example, 2 cubed (written as 2^3) is 2 × 2 × 2 = 8. Similarly, (-3)^3
is (-3) × (-3) × (-3) = -27.
Cubes have these properties:
• The cube of a positive number remains positive.
• The cube of a negative number remains negative.
• The cube of 0 is 0.
• The cube of a number greater than 1 is greater than the original number, while the cube of a number between 0 and 1 is smaller than the original number.
Have a look at this video if you need to review how to calculate squares and cubes.
Click on the following worksheet to get a printable pdf document.
Scroll down the page for more Squares & Cubes Worksheets.
More Squares & Cubes Worksheets
(Answers on the second page.)
Squares & Cubes Worksheet #1 (Positive Whole Number Bases)
Squares & Cubes Worksheet #2 (Negative Whole Number Bases)
Squares & Cubes Worksheet #3 (Positive or Negative Fractional Bases)
Squares & Cubes Worksheet #4 (Positive or Negative Decimal Bases)
Squares with bases 0 to 10
Squares with bases 2 to 20
Squares with bases -10 to 0
Squares with bases -20 to 0
Cubes with bases 0 to 10
Cubes with bases 2 to 20
Cubes with bases -10 to 0
Cubes with bases -20 to 0
Try the free Mathway calculator and problem solver below to practice various math topics. Try the given examples, or type in your own problem and check your answer with the step-by-step explanations.
We welcome your feedback, comments and questions about this site or page. Please submit your feedback or enquiries via our Feedback page. | {"url":"https://www.onlinemathlearning.com/squares-cubes-worksheet.html","timestamp":"2024-11-04T23:35:18Z","content_type":"text/html","content_length":"41907","record_id":"<urn:uuid:2fc02aa1-803d-426b-8d2e-abc1b5030c24>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00769.warc.gz"} |
Sigurd Eriksson Matematik DEL II - Maplesoft Books - Maple
Precalculus Homework Help - Easy Access Study Guide
We find that these texts send conflicting messages and confidence, especially for Chemistry, Physics, PreCalculus and Biology. Algebra, Geometry, Pre-Calculus, Calculus, Geography, American
History, ISBN-10: 0134314344; ISBN-13: Precalculus; MyMathLab - Valuepack Access Card; Student Solutions Manual for College Algebra and Trigonometry and Play this game to review Pre-calculus. Find
the integral: ∫ x 2 ( 16 − x 3 ) 2 d x \int_{ }^{ }\frac{x^2}{\left(16-x^3\right)^2}\ dx ∫(16−x3)2x2 dx. Maple 18 används för att illustrera exempel i Pre Kalkyl i den här boken. Categories:
A pre-calculus tutor can backtrack to concepts taught in Algebra II to help you feel more comfortable with topics such as right angle trigonometry and finding the domain of a function. Precalculus.
Penn State Credit Evaluation: MATH 041--Trigonometry/ Analytic Geometry: 3 Credits. More Information about Credit for CLEP Exams.
With its clear and simple writing style, PRECALCULUS: MATHEMATICS FOR CALCULUS, 7E, INTERNATIONAL METRIC EDITION, will give you a solid Endim Analys A1 (precalculus); Endim Analys A2 (limits and
differentiation); Endim Analys A3 (integrals etc).
More Continuity - Notes - Grade 12 CONTINUITY Roughly
Which numbers have rational square roots? The decimal representation of irrationals.
MG64D5 - [GET] Pre-Calculus For Dummies - Yang Kuang #PDF
Learning Outcomes. Draw angles in standard position. Convert between degrees and radians.
Combines concepts from algebra, trigonometry, geometry, and every lower level denomination of math into a confusing clusterfuck of topics that has no correlation with Calculus- and doesn't even
introduce the derivative.
Bjorn lundberg
Conic Sections Trigonometry. Calculus.
Det ovanliga Advanced mathematical concepts with my precalculus, cool math, janet,. Desmos is a teacher manual classwork and math lessons.
Köpa fastighet kontantinsats
hållbar samhällsutveckling engelskanya skatten 2021vad ar vatskebalansboksning aalborgsjuksköterska distans malmö
Vad är skillnaden mellan Finite Math & Pre-Calculus
We find that these texts send conflicting messages and confidence, especially for Chemistry, Physics, PreCalculus and Biology. Algebra, Geometry, Pre-Calculus, Calculus, Geography, American
History, ISBN-10: 0134314344; ISBN-13: Precalculus; MyMathLab - Valuepack Access Card; Student Solutions Manual for College Algebra and Trigonometry and Play this game to review Pre-calculus. Find
the integral: ∫ x 2 ( 16 − x 3 ) 2 d x \int_{ }^{ }\frac{x^2}{\left(16-x^3\right)^2}\ dx ∫(16−x3)2x2 dx. Maple 18 används för att illustrera exempel i Pre Kalkyl i den här boken.
Eftermontera dragkroksjukskriven under provanstallning
Precalculus Equations with Real World Objects - Pinterest
integrand. The Precalculus course, often taught in the 12th grade, covers Polynomials; Complex Numbers; Composite Functions; Trigonometric Functions; Vectors; Matrices; Series; Conic Sections; and
Probability and Combinatorics. Khan Academy's Precalculus course is built to deliver a comprehensive, illuminating, engaging, and Common Core aligned experience! e In mathematics education,
precalculus or college algebra is a course, or a set of courses, that includes algebra and trigonometry at a level which is designed to prepare students for the study of calculus. Schools often
distinguish between algebra and trigonometry as two separate parts of the coursework. Precalculus is a course that is designed to prepare students for Calculus, either in high school or college.
Parametric equations 2 Parametric equations and polar
It weaves together algebra, geometry, and mathematical functions in order to prepare one for Calculus. Acellus Pre-Calculus is A-G Approved through the University of California. Course Objectives &
Student Learning Outcomes Upon successful completion of Acellus Pre-Calculus, students will have the a strong foundation of the basic mathematical skills necessary to maximize success in Calculus.
Pre-Calculus by Mansfield Independent School District. View More from This Institution. This course material is only available in the iTunes U app on iPhone or iPad. Course Description This course
will emphasize the study of polynomial, radical, exponential, logarithmic, and trigonometric functions.
Watch later. Share. Copy link. Info. Shopping. Tap to unmute. If Pre-Calculus. | {"url":"https://investerarpengarqfeua.netlify.app/59410/89576.html","timestamp":"2024-11-13T12:21:03Z","content_type":"text/html","content_length":"9673","record_id":"<urn:uuid:71d2ef5a-89f3-4a48-984f-9e5f88a02b17>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00253.warc.gz"} |
MathFiction: Improbable (Adam Fawer)
A probability expert suffering from epilepsy (with hints of schizophrenia) is in over his head with gambling debts to the Russian mob and a beautiful, renegade CIA agent before discovering that he
has the ability to predict the future. A running subplot is the mathematical aspects of determinism (i.e. Laplace's famous claim that the future can be predicted precisely by anyone with sufficient
ability to calculate and sufficient information). To most mathematicians, the downfall of Laplace's Demon was the realization that the "sufficient information" necessary to predict the future is
impossible to obtain in practice due to the sensitive dependence that is a hallmark of chaos theory. However, this book seems to tie it into the philosophical question of "free will" (which I have
put in quotes because I don't think anyone has ever really defined what it means), a Jungian sort of common unconsciousnesss, and the foundational questions of quantum mechanics (though the author's
understanding of modern physics seems to be rather superficial and overly influenced by metaphysical hype).
I like the scenes showing the protagonist as a math professor. The lectures he gives are interesting and creative, though not always 100% accurate. (For instance, his discussion of "minimizing error"
gives the impression that the expected value is the one which occurs most often. In fact, the expected value may not be a possible outcome at all. Rather, it is the number for which the differences
between it and the outcomes is minimized. It fits in well with what he was trying to say...he just didn't say it right.)
I also thought the idea -- that someone connected to all of humanity's unconscious thought for all time could use the information to predict the future to some high degree of certainty -- was
interesting and made for a fun book. However, I'm not sure it makes sense if you think about it. It is not clear how he sometimes knows things that no person knows (like the order of cards in a
shuffled deck) if his source of information is this common human unconsciousness. Moreover, Fawer ends up with some rather strange blend of the determinism of Laplace and the popular notion of "free
will". I mean, the whole point of Laplace is that if things are deterministic then all that you need is enough information and processing power to predict the future. But, once we're supposed to
believe that human decision is somehow non-deterministic, I would think the whole thing would fall apart and the ability to do anything like that would be lost.
I've already complained about Fawer's physics. Everything from his quantum mechanics (in which he uses the popular but inaccurate statement that Heisenberg's Uncertainty Principle is the statement
that things change when you observe them) to his special relativity (no, E=MC^2 does not explain why you get thrown back in your seat when your car accelerates -- it pretty much avoids dealing with
acceleration at all) is off. But his biology is perhaps even worse. Both of his claims regarding evolution (that it is necessarily non-deterministic and that it cannot explain instinctive behavior)
are ridiculous.
Fortunately, these little annoyances to not spoil the book. It is an engrossing thriller with some clever ideas and quite a bit of nice mathematics thrown in as well. The book's official website also
has a Flash game that you can play which quizes you on and teaches you some basic probability!
Contributed by kenn
Improbable by Adam Fawer is a great read. It's all fiction, leaning to SF but a good story where probability is core to the lead characters success. If you like: Rucker, Gibson, Stephenson, Sterling,
Morgan (Altered Carbon) and even Ball and Pickover you should like this book. It's a quick, fun read. | {"url":"https://kasmana.people.charleston.edu/MATHFICT/mfview.php?callnumber=mf487","timestamp":"2024-11-11T04:50:17Z","content_type":"text/html","content_length":"12481","record_id":"<urn:uuid:db2a1d4b-fe3b-41bb-8459-1429e6f15355>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00395.warc.gz"} |
Geometric Distribution Explained with Python Examples - Analytics Yogi
Geometric Distribution Explained with Python Examples
In this post, you will learn about the concepts of Geometric probability distribution with the help of real-world examples and Python code examples. It is of utmost importance for data scientists to
understand and get an intuition of different kinds of probability distribution including geometric distribution. You may want to check out some of my following posts on other probability
In this post, the following topics have been covered:
• Geometric probability distribution concepts
• Geometric distribution python examples
• Geometric distribution real-world examples
Geometric Probability Distribution Concepts
Geometric probability distribution is a discrete probability distribution. It represents the probability that an event having probability p will happen (success) after X number of Bernoulli trials
with X taking values of 1, 2, 3, …k. A Bernoulli trial is a trial which results in either success or failure. Geometric distribution of random variable, X, represents the probability that an event
will take X number of Bernoulli trials to happen. Here, X can be termed as discrete random variable. In other words, Geometric distribution is probability distribution of X trials represents the
probability that there will be X – 1 failure before the event event occurs. Here the basic assumption is that the trials are independent of each other.
Mathematically, if p is the probability that the event occurs, then the probability that event will not occur is 1 – p. The probability that the event will happen after k trials can be represented in
form of the following probability mass function.
[latex]\Large Pr(X = k) = (1-p)^{(k-1)}p[/latex]
Lets understand the concept in a more descriptive manner using basketball free throws shot example. In basketball, free throws or foul shots are unopposed attempts to score points by shooting from
behind the free throw line (informally known as the foul line or the charity stripe), a line situated at the end of the restricted area. Let’s say that the players in the below picture is contesting
as to how many shoots one will take to achieve a perfect throw (scoring a point). The goal is to find the probability that the shooter will have the first perfect throw in X number of shoots.
Let’s say the the shooter in the above picture has a probability of scoring the perfect throw is 0.6. So, the goal is to find out the probability that the shooter will have perfect throw in 1st
throw, 2nd throw (1st throw as unsuccessful), third throw (1st two throws as unsuccessful), fourth throw (1st three throws as unsuccessful), fifth throw ((1st four throws as unsuccessful) etc. You
may note that we may end up with probability distribution for random variable X representing the number of shoots a person will take to have first perfect throw.
Let’s calculate the probability of X = 1, 2, 3, 4, 5 number of throws for first successful throw. Given the probability of a perfect throw (success) is 0.6 and, thus, the probability of unsuccessful
throw (failure) is 0.4 (1-0.6), here is how the probability distribution would look like for different values of X.
X = (1, 2, 3..) Probability calculation that the Net Probability
prefect throw happen in X
1 0.6 0.6
2 0.4 x 0.6 ([latex]0.4^1*0.6[/latex]) 0.24
3 0.4 x 0.4 x 0.6 ([latex]0.4^2*0.6[/latex]) 0.096
4 0.4 x 0.4 x 0.4 x 0.6 ([latex]0.4^3*0.6[/latex]) 0.0384
… … …
k 0.4 x 0.4 x 0.4 … 0.6 ([latex]0.4^{(k-1)}*0.6[/latex]) [latex]0.4^{(k-1)}*0.6[/latex]
Geometric Distribution Example
You may note that the coefficients of X = k is k – 1.
Expectation and Variance of Geometric Distribution
The expectation of geometric distribution can be defined as expected number of trials in which the first success will occur. The mathematical formula to calculate the expected value of geometric
distribution can be calculated as the following where p is probability that the event occur.
[latex]\Large \frac{1}{p}[/latex]
The variance of geometric distribution can be defined as variance of number of trials it may take for success to happen. Mathematically, variance can be calculated using the following:
[latex]\Large \frac{q}{p^2}[/latex]
Geometric Distribution Python Example
Here is the Python code calculating geometric probability distribution. Pay attention to some of the following:
• Discrete random variable X is defined along with probability of the perfect throw (event to occur)
• Scipy.stats geom class is used to calculate the probability mass function using the method, pmf.
from scipy.stats import geom
import matplotlib.pyplot as plt
# X = Discrete random variable representing number of throws
# p = Probability of the perfect throw
X = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
p = 0.6
# Calculate geometric probability distribution
geom_pd = geom.pmf(X, p)
# Plot the probability distribution
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
ax.plot(X, geom_pd, 'bo', ms=8, label='geom pmf')
plt.ylabel("Probability", fontsize="18")
plt.xlabel("X - No. of Throws", fontsize="18")
plt.title("Geometric Distribution - No. of Throws Vs Probability", fontsize="18")
ax.vlines(X, 0, geom_pd, colors='b', lw=5, alpha=0.5)
Here is the plot representing the geometric distribution for P = 0.6 and different values of X.
Fig 2. Geometric Probability Distribution Plot
Geometric Distribution Real-world Examples
Here are some real-world examples of Geometric distribution with the assumption that the trials are independent of each other.
• Let’s say, the probability that an athlete achieves a distance of 6m in long jump is 0.7. Geometric distribution can be used to determine probability of number of attempts that the person will
take to achieve a long jump of 6m. In the second attempt, the probability will be 0.3 * 0.7 = 0.21 and the probability that the person will achieve in third jump will be 0.3 * 0.3 * 0.7 = 0.063
• Here is another example. Let’s say the probability that the person climbs the hill without stopping anywhere is 0.3. Geometric distribution can be used to represent the probability of number of
attempts that the person will take to climb the hill. The probability to achieve in first attempt is 0.3, second attempt is 0.7*0.3 = 0.21, third attempt is 0.7*0.7*0.3 = 0.147
Here is the summary of what you learned about the Geometric probability distribution:
• Geometric probability distribution is about determining probabilities of discrete random variable X which represents number of trials it would take for the event to happen (first time).
• The trials would need to be independent of each other.
Latest posts by Ajitesh Kumar
(see all)
I found it very helpful. However the differences are not too understandable for me
Very Nice Explaination. Thankyiu very much,
in your case E respresent Member or Oraganization which include on e or more peers?
Such a informative post. Keep it up
Thank you....for your support. you given a good solution for me.
Ajitesh Kumar
I have been recently working in the area of Data analytics including Data Science and Machine Learning / Deep Learning. I am also passionate about different technologies including programming
languages such as Java/JEE, Javascript, Python, R, Julia, etc, and technologies such as Blockchain, mobile computing, cloud-native technologies, application security, cloud computing platforms, big
data, etc. I would love to connect with you on Linkedin. Check out my latest book titled as First Principles Thinking: Building winning products using first principles thinking.
Posted in Data Science, statistics. Tagged with Data Science, statistics. | {"url":"https://vitalflux.com/geometric-distribution-explained-with-python-examples/","timestamp":"2024-11-07T06:08:24Z","content_type":"text/html","content_length":"112438","record_id":"<urn:uuid:c7778e44-a47a-428e-b90d-b6e79014798d>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00731.warc.gz"} |
Travel Forecasting Resource
Trend models in project-level traffic forecasting
# Objective
A linear trend model is a simple statistical technique to extrapolate upon historical traffic counts. Trend models can be used to forecast the inputs to a regional travel model, to forecast the
inputs to a more complex statistical model of traffic volumes or to forecast directly traffic volumes from a time series of traffic count data.
# Background
A recently completed survey for NCHRP Report 765 found that linear trend models are widely used by state departments of transportation for project level forecasting purposes. A linear trend model can
be readily accomplished with bivariate linear regression analysis, typically with traffic count as the dependent variable and time as the independent variable. Time is an integer number corresponding
to the number of years from a reference year. A linear trend model has the form:
T[n] = an+b
Where T[n] is the forecasted traffic count, n is the year, and b is the forecasted traffic count in the reference year. The standard error of the forecast, S, may be taken as the 68% error range. The
50% error range may be computed from the standard error by this formula.
E[50] = 0.6745S
Statistical software packages will also provide a t-score for the trend term, which will indicate whether the trend is sufficiently strong for forecasting purposes.
# Guidelines
Historic traffic counts should be plotted against time, to assure that there is a good trend in the data and that there are no anomalies. For consistency the reference year (e.g., 1991) should be
held constant across all forecasts. It is possible for this reference year to be prior to the opening year for the road being studied, and thus it is possible for the constant b (y-intercept) to be a
negative number. Choosing a recent base year aids the comparison of y-intercepts from linear regressions at different sites.
Both coefficients, a and b should be used in the forecast. The forecast should not pivot off the most recent traffic count. There should be a minimum of ten different years of historical traffic
counts. The newest count should not be more than three years old. Forecasts should not extend farther into the future than historical data extends into the past. For example, a 20 year forecast
should have historical data from at least 20 years ago.
The primary statistic for indicating the strength of the estimate is the t-score. The absolute value of the t-score of the trend term should not be less than 3.0, which indicates that the coefficient
on the trend term is good to about one-half of a significant digit.
# Advice
Growth factor methods (i.e., models that assume a constant percent increase in traffic for each time period) should not be used, due to their inherently optimistic forecasts of traffic growth.
It is possible to forecast intersection turning movements by forecasting the volumes (in and out) on all legs of the intersection with trend models and then using an intersection refinement method
(See Turning movement refinements).
Scenarios are difficult to introduce into trend forecasts, so scenarios are usually not formulated. If desired, “high growth” and “low growth” scenarios can be computed by adding or subtracting a
fixed percentage from the yearly growth rate.
The analyst needs to be aware of the state of land use development near the highway segment when assessing how well a linear equation will forecast into into the distant future. Traffic growth could
be accelerating or decelerating depending upon the degree to which land has been saturated. The figure below, originally published in the Guidebook on Statewide Travel Forecasting, illustrates how
the rate of traffic growth can vary.
# Items to Report
• Regression statistics, including R<sup2, standard error of the estimate, and the t-score on the trend term.
• Forecasted traffic volume for the design year.
• The range associated with the 50% error in the forecast.
# References
NCHRP Report 765. | {"url":"https://tfresource.org/topics/Trend_models_in_project_level_traffic_forecasting","timestamp":"2024-11-03T10:20:54Z","content_type":"text/html","content_length":"32399","record_id":"<urn:uuid:ad6477ca-dbb4-44ac-be07-8b5b3eb0d8af>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00447.warc.gz"} |
Magic is a currency released with Initial Release on 17th March 2016.
About[ ]
Magic Bundles[ ]
When you exchange for Magic ( ) with Gems ( ), the conversion rate is dependent on the player level. In the beginning, the conversion rate is minimum, and increases as the player levels up. Players
are also often presented with Magic limited time sales, which can save them up to 50% on Gem Packs. These sales generally only last a day or two.
During Sales (Since 2022)
Trivia[ ]
Two hundred (200) Magics were introduced as Uncommon in the Radiant Chests from the 7th to the 28th November 2024 as part of The Muppets Event.
One hundred and fifty (150) Magics were introduced as Rare in the The Lion King Chests since the 5th November 2024 as part of Update 88.
Fifty (50) Magics were introduced as Common in the Frozen Chests since the 5th November 2024 as part of Update 88.
Fifty (50) Magics were introduced as Common in the Finding Nemo Chests since the 5th November 2024 as part of Update 88.
One hundred and fifty (150) Magics were introduced as Rare in the Beauty and the Beast Chests since the 5th November 2024 as part of Update 88.
One hundred and fifty (150) Magics were introduced as Common in the Magical Chests from the 10th to the 25th October 2024 as part of Tower Challenge (Season 10).
One hundred and fifty (150) Magics were introduced as Rare in the Up Chests since the 8th October 2024 as part of Update 87.
Fifty (50) Magics were introduced as Common in the The Muppets Chests since the 8th October 2024 as part of Update 87.
One hundred and fifty (150) Magics were introduced as Rare in the Snow White and the Seven Dwarfs Chests since the 8th October 2024 as part of Update 87.
One hundred and fifty (150) Magics were introduced as Rare in the Pinocchio Chests since the 8th October 2024 as part of Update 87.
One hundred (100) Magics were introduced as Uncommon in the Nightmare Before Christmas Chests since the 8th October 2024 as part of Update 87.
One hundred (100) Magics were introduced as Uncommon in the Mulan Chests since the 8th October 2024 as part of Update 87.
Fifty (50) Magics were introduced as Common in the Hercules Chests since the 8th October 2024 as part of Update 87.
One hundred (100) Magics were introduced as Common in the Radiant Chests from the 12th September to the 3rd October 2024 as part of Silly Symphony Event.
Fifty (50) Magics were introduced as Common in the The Aristocats Chests since the 10th September 2024 as part of Update 86.
Fifty (50) Magics were introduced as Common in the Star Wars Chests since the 10th September 2024 as part of Update 86.
One hundred and fifty (150) Magics were introduced as Rare in the Soul Chests since the 10th September 2024 as part of Update 86.
Fifty (50) Magics were introduced as Common in the Robin Hood Chests since the 10th September 2024 as part of Update 86.
One hundred and fifty (150) Magics were introduced as Rare in the Inside Out Chests since the 10th September 2024 as part of Update 86.
One hundred and fifty (150) Magics were introduced as Common in the Magical Chests from the 11th to the 26th July 2024 as part of Tower Challenge (Season 9).
One hundred (100) Magics were introduced as Uncommon in the The Aristocats Chests from the 9th July to the 10th September 2024 as part of Update 84 to Update 86.
One hundred (100) Magics were introduced as Uncommon in the Finding Nemo Chests from the 9th July to the 5th November 2024 as part of Update 84 to Update 88.
Fifty (50) Magics were introduced as Common in the Beauty and the Beast Chests from the 9th July to the 5th November 2024 as part of Update 84 to Update 88.
Five hundred (500) Magics were introduced as Rare in the Radiant Chests from the 13th June to the 4th July 2024 as part of A Bug's Life Event.
One hundred and sixty (160) Magics were introduced as Legendary in the Turquoise Chests since the 11th June 2024 as part of Update 83.
Fifty (50) Magics were introduced as Common in the Nightmare Before Christmas Chests from the 11th June to the 8th October 2024 as part of Update 83 to Update 87.
One thousand two hundred and fifty (1,250) Magics were introduced as Special Chance in the Attraction Enchantment Chests since the 11th June 2024 as part of Update 83.
One hundred and fifty (150) Magics were introduced as Common in the Magical Chests from the 11th to the 26th April 2024 as part of Tower Challenge (Season 8).
Fifty (50) Magics were introduced as Common in the Inside Out Chests from the 9th April to the 10th September 2024 as part of Update 81 to Update 86.
Three hundred (300) Magics were introduced as Uncommon in the Sapphire Chests from the 14th March to the 9th April 2024 as part of Need a Companion?.
One hundred (100) Magics were introduced as Common in the Radiant Chests from the 14th March to the 4th April 2024 as part of Ice Age Event.
Five thousand (5,000) Magics were introduced as Legendary in the Gold Chests from the 23rd to the 27th February 2024 as part of Princesses Chest.
One hundred and fifty (150) Magics were introduced as Rare in the Mulan Chests from the 6th February to the 8th October 2024 as part of Update 79 to Update 87.
One hundred and fifty (150) Magics were introduced as Common in the Magical Chests from the 11th to the 26th January 2024 as part of Tower Challenge (Season 7).
Fifty (50) Magics were introduced as Common in the Encanto Chests since the 11th January 2024 as part of Update 78.
One thousand (1,000) Magics were introduced as Common in the Delightful Chests from the 11th to the 26th January 2024 as part of Tower Challenge (Season 7).
One hundred (100) Magics were introduced as Common in the Blue Ribbon Chests from the 25th to the 26th December 2023 as part of Christmas 2023.
One hundred (100) Magics were introduced as Common in the Radiant Chests from the 14th December 2023 to the 4th January 2024 as part of The Muppets Event.
Fifty (50) Magics were introduced as Common in the Soul Chests from the 17th November 2023 to the 10th September 2024 as part of Update 76 to Update 86.
One hundred and fifty (150) Magics were introduced as Common in the Magical Chests from the 16th November to the 1st December 2023 as part of Tower Challenge (Season 6).
Eighty (80) Magics were introduced as Rare in the Turquoise Chests from the 14th November 2023 to the 11th June 2024 as part of Update 76 to Update 83.
One hundred and fifty (150) Magics were introduced as Common in the Creepy Chests from the 14th October to the 14th November 2023 as part of Halloween 2023.
One hundred and fifty (150) Magics were introduced as Rare in the The Hunchback of Notre Dame Chests since the 12th October 2023 as part of Update 75.
Five hundred (500) Magics were introduced as Rare in the Radiant Chests from the 14th September to the 5th October 2023 as part of The Aristocats Event.
One hundred and fifty (150) Magics were introduced as Rare in the Turning Red Chests since the 14th September 2023 as part of Update 74.
Four thousand (4,000) Magics were introduced as Epic in the Creepy Chests from the 8th to the 12th August 2023 as part of Haunted Mansion Movie Released.
Two hundred (200) Magics were introduced as Common in the Magical Chests from the 13th to the 28th July 2023 as part of Tower Challenge (Season 5).
One hundred (100) Magics were introduced as Uncommon in the Luca Chests since the 13th July 2023 as part of Update 72.
One hundred and fifty (150) Magics were introduced as Common in the Amber Chests from the 13th to the 28th July 2023 as part of Tower Challenge (Season 5).
One hundred (100) Magics were introduced as Common in the Radiant Chests from the 15th June to the 6th July 2023 as part of Inside Out Event.
One hundred and fifty (150) Magics were introduced as Rare in the Star Wars Chests from the 18th May 2023 to the 10th September 2024 as part of Update 70 to Update 86.
One hundred and fifty (150) Magics were introduced as Common in the Sapphire Chests from the 13th to the 28th April 2023 as part of Tower Challenge (Season 4).
One hundred and fifty (150) Magics were introduced as Common in the Ruby Chests from the 13th to the 28th April 2023 as part of Tower Challenge (Season 4).
One hundred and fifty (150) Magics were introduced as Common in the Magical Chests from the 13th to the 28th April 2023 as part of Tower Challenge (Season 4).
Fifty (50) Magics were introduced as Common in the Up Chests from the 13th April 2023 to the 8th October 2024 as part of Update 69 to Update 87.
One hundred and fifty (150) Magics were introduced as Common in the Amber Chests from the 13th to the 28th April 2023 as part of Tower Challenge (Season 4).
One hundred and fifty (150) Magics were introduced as Common in the Sapphire Chests from the 17th March to the 11th April 2023 as part of Need a Companion?.
One hundred (100) Magics were introduced as Common in the Radiant Chests from the 16th March to the 6th April 2023 as part of Encanto Event.
One hundred and fifty (150) Magics were introduced as Rare in the Robin Hood Chests from the 15th March 2023 to the 10th September 2024 as part of Update 68 to Update 86.
One hundred and fifty (150) Magics were introduced as Common in the Sapphire Chests from the 12th to the 27th January 2023 as part of Tower Challenge (Season 3).
One hundred and fifty (150) Magics were introduced as Common in the Ruby Chests from the 12th to the 27th January 2023 as part of Tower Challenge (Season 3).
One hundred and fifty (150) Magics were introduced as Common in the Magical Chests from the 12th to the 27th January 2023 as part of Tower Challenge (Season 3).
One hundred and fifty (150) Magics were introduced as Common in the Amber Chests from the 12th to the 27th January 2023 as part of Tower Challenge (Season 3).
One hundred (100) Magics were introduced as Uncommon in the Raya and the Last Dragon Chests since the 10th January 2023 as part of Update 66.
Fifty (50) Magics were introduced as Common in the Mulan Chests from the 10th January 2023 to the 6th February 2024 as part of Update 66 to Update 79.
One hundred (100) Magics were introduced as Common in the Red Ribbon Chests from the 25th to the 26th December 2022 as part of Christmas 2022.
Five hundred (500) Magics were introduced as Rare in the Radiant Chests from the 18th December 2022 to the 5th January 2023 as part of The Hunchback of Notre Dame Event.
Fifty (50) Magics were introduced as Common in the Pinocchio Chests from the 15th December 2022 to the 8th October 2024 as part of Update 65 to Update 87.
One hundred and fifty (150) Magics were introduced as Common in the Sapphire Chests from the 17th November to the 2nd December 2022 as part of Tower Challenge (Season 2).
One hundred and fifty (150) Magics were introduced as Common in the Ruby Chests from the 17th November to the 2nd December 2022 as part of Tower Challenge (Season 2).
One hundred and fifty (150) Magics were introduced as Common in the Magical Chests from the 17th November to the 2nd December 2022 as part of Tower Challenge (Season 2).
One hundred and fifty (150) Magics were introduced as Common in the Amber Chests from the 17th November to the 2nd December 2022 as part of Tower Challenge (Season 2).
Three hundred (300) Magics were introduced as Uncommon in the Creepy Chests from the 15th October to the 15th November 2022 as part of Halloween 2022.
One hundred and fifty (150) Magics were introduced as Rare in the Nightmare Before Christmas Chests from the 11th October 2022 to the 11th June 2024 as part of Update 63 to Update 83.
Five hundred (500) Magics were introduced as Rare in the Radiant Chests from the 18th September to the 6th October 2022 as part of Turning Red Event.
One hundred (100) Magics were introduced as Common in the Ruby Chests from the 24th to the 29th July 2022 as part of Tower Challenge (Season 1).
One hundred (100) Magics were introduced as Common in the Amber Chests from the 19th to the 24th July 2022 as part of Tower Challenge (Season 1).
One hundred (100) Magics were introduced as Common in the Magical Chests from the 14th to the 19th July 2022 as part of Tower Challenge (Season 1).
One thousand (1,000) Magics were introduced as Common in the Special Request Wishes since the 12th July 2022 as part of Update 32.
Eighty-five (85) Magics were introduced as Uncommon in the Silver Chests since the 12th July 2022 as part of Update 60.
Five hundred (500) Magics were introduced as Epic in the Resource Chests since the 12th July 2022 as part of Update 60.
One hundred (100) Magics were introduced as Uncommon in the Wreck-It Ralph Chests since the 12th July 2022 as part of Update 60.
One hundred and fifty (150) Magics were introduced as Rare in the Winnie the Pooh Chests since the 12th July 2022 as part of Update 60.
Fifty (50) Magics were introduced as Common in the The Little Mermaid Chests since the 12th July 2022 as part of Update 60.
Fifty (50) Magics were introduced as Common in the The Lion King Chests from the 12th July 2022 to the 5th November 2024 as part of Update 60 to Update 88.
One hundred (100) Magics were introduced as Uncommon in the Star Wars Chests from the 12th July 2022 to the 18th May 2023 as part of Update 60 to Update 70.
One hundred (100) Magics were introduced as Uncommon in the Snow White and the Seven Dwarfs Chests from the 12th July 2022 to the 8th October 2024 as part of Update 60 to Update 87.
Fifty (50) Magics were introduced as Common in the Raya and the Last Dragon Chests from the 12th July 2022 to the 11th January 2023 as part of Update 60 to Update 66.
Fifty (50) Magics were introduced as Common in the Onward Chests since the 12th July 2022 as part of Update 60.
One hundred (100) Magics were introduced as Uncommon in the Nightmare Before Christmas Chests from the 12th July to the 11th October 2022 as part of Update 60 to Update 63.
One hundred and fifty (150) Magics were introduced as Rare in the Mulan Chests from the 12th July 2022 to the 10th January 2023 as part of Update 60 to Update 66.
Fifty (50) Magics were introduced as Common in the Moana Chests since the 12th July 2022 as part of Update 60.
One hundred and fifty (150) Magics were introduced as Rare in the Hercules Chests from the 12th July 2022 to the 8th October 2024 as part of Update 60 to Update 87.
One hundred and fifty (150) Magics were introduced as Rare in the Frozen Chests from the 12th July 2022 to the 5th November 2024 as part of Update 60 to Update 88.
Fifty (50) Magics were introduced as Common in the Finding Nemo Chests from the 12th July 2022 to the 9th July 2024 as part of Update 60 to Update 84.
One hundred and fifty (150) Magics were introduced as Rare in the Big Hero 6 Chests since the 12th July 2022 as part of Update 60.
Fifty (50) Magics were introduced as Common in the Alice in Wonderland Chests since the 12th July 2022 as part of Update 60.
One hundred and fifty (150) Magics were introduced as Rare in the Aladdin Chests since the 12th July 2022 as part of Update 60.
One hundred (100) Magics were introduced as Common in the Ruby Chests from the 12th to the 17th June 2022 as part of Tower Challenge (Bailey).
One hundred (100) Magics were introduced as Common in the Amber Chests from the 7th to the 12th June 2022 as part of Tower Challenge (Bailey).
One hundred (100) Magics were introduced as Common in the Magical Chests from the 2nd to the 7th June 2022 as part of Tower Challenge (Bailey).
Fifty (50) Magics were introduced as Common in the 101 Dalmatians Chests since the 2nd June 2022 as part of Update 59.
Two hundred (200) Magics were introduced as Rare in the Resource Chests from the 31st May to the 12th July 2022 as part of Update 59.
Fifty (50) Magics were introduced as Common in the Winnie the Pooh Chests from the 31st May to the 12th July 2022 as part of Update 59.
One hundred (100) Magics were introduced as Uncommon in the The Little Mermaid Chests from the 31st May to the 12th July 2022 as part of Update 59.
Fifty (50) Magics were introduced as Common in the Mulan Chests from the 31st May to the 12th July 2022 as part of Update 59.
One hundred (100) Magics were introduced as Uncommon in the Frozen Chests from the 31st May to the 12th July 2022 as part of Update 59.
One hundred and fifty (150) Magics were introduced as Rare in the Finding Nemo Chests from the 31st May to the 12th July 2022 as part of Update 59.
Fifty (50) Magics were introduced as Common in the Big Hero 6 Chests from the 8th March to the 12th July 2022 as part of Update 57 to Update 60.
One hundred and fifty (150) Magics were introduced as Rare in the Raya and the Last Dragon Chests from the 1st March to the 12th July 2022 as part of Update 56 to Update 60.
One hundred (100) Magics were introduced as Common in the Ruby Chests from the 6th to the 11th February 2022 as part of Tower Challenge (Shan Yu).
One hundred (100) Magics were introduced as Common in the Amber Chests from the 1st to the 6th February 2022 as part of Tower Challenge (Shan Yu).
One hundred and fifty (150) Magics were introduced as Common in the Sapphire Chests from the 27th January to the 19th February 2022 as part of Need a Companion?.
One hundred (100) Magics were introduced as Common in the Magical Chests from the 27th January to the 1st February 2022 as part of Tower Challenge (Shan Yu).
One hundred and fifty (150) Magics were introduced as Rare in the Mulan Chests from the 25th January to the 31st May 2022 as part of Update 56 to Update 59.
One hundred (100) Magics were introduced as Common in the Red Ribbon Chests from the 25th to the 26th December 2021 as part of Christmas 2021.
One hundred and fifty (150) Magics were introduced as Rare in the Lilo & Stitch Chests since the 14th December 2021 as part of Update 55.
Two hundred and fifty (250) Magics were introduced as Common in the Sapphire Chests from the 25th November to the 10th December 2021 as part of Tower Challenge (Lefou).
One hundred (100) Magics were introduced as Common in the Ruby Chests from the 25th November to the 10th December 2021 as part of Tower Challenge (Lefou).
One hundred (100) Magics were introduced as Common in the Magical Chests from the 25th November to the 10th December 2021 as part of Tower Challenge (Lefou).
One hundred (100) Magics were introduced as Common in the Amber Chests from the 25th November to the 10th December 2021 as part of Tower Challenge (Lefou).
One hundred and fifty (150) Magics were introduced as Rare in the Star Wars Chests from the 11th November 2021 to the 12th July 2022 as part of Update 54 to Update 60.
One hundred (100) Magics were introduced as Common in the Ruby Chests from the 2nd to the 7th November 2021 as part of Tower Challenge (Hatbox Ghost).
One hundred (100) Magics were introduced as Common in the Amber Chests from the 28th October to the 2nd November 2021 as part of Tower Challenge (Hatbox Ghost).
One hundred (100) Magics were introduced as Common in the Magical Chests from the 22nd to the 28th October 2021 as part of Tower Challenge (Hatbox Ghost).
Three hundred (300) Magics were introduced as Uncommon in the Creepy Chests from the 15th October to the 7th November 2021 as part of Halloween 2021.
Two hundred (200) Magics were introduced as Special Chance in the Silver Chests from the 24th August 2021 to the 12th July 2022 as part of Update 52 to Update 59.
Twenty (20) Magics were introduced as Common in the Bronze Chests since the 24th August 2021 as part of Update 52.
One hundred (100) Magics were introduced as Common in the Ruby Chests from the 15th to the 20th August 2021 as part of Tower Challenge (Gord).
One hundred (100) Magics were introduced as Common in the Amber Chests from the 10th to the 15th August 2021 as part of Tower Challenge (Gord).
Two hundred and fifty (250) Magics were introduced as Common in the Sapphire Chests from the 5th to the 20th August 2021 as part of Tower Challenge (Gord).
One hundred (100) Magics were introduced as Common in the Magical Chests from the 5th to the 10th August 2021 as part of Tower Challenge (Gord).
One hundred (100) Magics were introduced as Uncommon in the The Princess and the Frog Chests since the 13th July 2021 as part of Update 51.
One hundred and fifty (150) Magics were introduced as Rare in the Brave Chests since the 7th July 2021 as part of Update 50.
One hundred (100) Magics were introduced as Common in the Ruby Chests from the 13th to the 18th June 2021 as part of Tower Challenge (Bailey).
One hundred (100) Magics were introduced as Common in the Radiant Chests from the 8th to the 13th June 2021 as part of Tower Challenge (Bailey).
One hundred (100) Magics were introduced as Common in the Magical Chests from the 3rd to the 8th June 2021 as part of Tower Challenge (Bailey).
Fifty (50) Magics were introduced as Common in the The Little Mermaid Chests from the 1st June 2021 to the 31st May 2022 as part of Update 50 to Update 59.
Fifty (50) Magics were introduced as Common in the Finding Nemo Chests from the 1st June 2021 to the 31st May 2022 as part of Update 50 to Update 59.
One hundred (100) Magics were introduced as Uncommon in the Winnie the Pooh Chests from the 20th April 2021 to the 12th July 2022 as part of Update 49 to Update 59.
One hundred (100) Magics were introduced as Uncommon in the The Little Mermaid Chests from the 20th April to the 1st June 2021 as part of Update 49.
One hundred (100) Magics were introduced as Common in the Red Ribbon Chests from the 25th to the 26th December 2020 as part of Christmas 2020.
One hundred (100) Magics were introduced as Uncommon in the Onward Chests from the 15th December 2020 to the 12th July 2022 as part of Update 46 to Update 60.
Fifty (50) Magics were introduced as Common in the Hercules Chests from the 15th December 2020 to the 12th July 2022 as part of Update 46 to Update 60.
Fifty (50) Magics were introduced as Common in the Star Wars Chests from the 10th November 2020 to the 11th November 2021 as part of Update 45 to Update 54.
One hundred (100) Magics were introduced as Common in the Ruby Chests from the 6th to the 11th November 2020 as part of Tower Challenge (Gord).
One hundred and fifty (150) Magics were introduced as Common in the Sapphire Chests from the 8th October to the 1st November 2020 as part of Halloween 2020.
One hundred (100) Magics were introduced as Uncommon in the Alice in Wonderland Chests from the 6th October 2020 to the 12th July 2022 as part of Update 44 to Update 60.
One hundred (100) Magics were introduced as Common in the Radiant Chests from the 1st to the 6th September 2020 as part of Tower Challenge (Gord).
One hundred (100) Magics were introduced as Common in the Magical Chests from the 27th August to the 1st September 2020 as part of Tower Challenge (Gord).
One hundred and fifty (150) Magics were introduced as Rare in the Frozen Chests from the 25th August 2020 to the 31st May 2022 as part of Update 43 to Update 59.
One hundred (100) Magics were introduced as Common in the Ruby Chests from the 31st July to the 5th August 2020 as part of Tower Challenge (Owl).
One hundred (100) Magics were introduced as Common in the Radiant Chests from the 26th to the 31st July 2020 as part of Tower Challenge (Owl).
One hundred (100) Magics were introduced as Common in the Magical Chests from the 21st to the 26th July 2020 as part of Tower Challenge (Owl).
One hundred and fifty (150) Magics were introduced as Rare in the Winnie the Pooh Chests from the 14th July 2020 to the 20th April 2021 as part of Update 42 to Update 49.
One hundred and fifty (150) Magics were introduced as Rare in the Onward Chests from the 14th July to the 15th December 2020 as part of Update 42 to Update 36.
One hundred (100) Magics were introduced as Uncommon in the Aladdin Chests from the 14th July 2020 to the 12th July 2022 as part of Update 42 to Update 60.
Fifty (50) Magics were introduced as Common in the Star Wars Chests from the 3rd May to the 11th November 2020 as part of Update 40 to Update 45.
One hundred (100) Magics were introduced as Common in the Red Ribbon Chests from the 25th to the 26th December 2019 as part of Christmas 2019.
One hundred (100) Magics were introduced as Uncommon in the Frozen Chests from the 17th December 2019 to the 25th August 2020 as part of Update 36 to Update 43.
One hundred (100) Magics were introduced as Uncommon in the Coco Chests since the 17th December 2019 as part of Update 36.
Five hundred (500) Magics were introduced as Uncommon in the Sapphire Chests from the 28th November to the 13th December 2019 as part of Into the Mist Event.
One hundred and fifty (150) Magics were introduced as Rare in the Big Hero 6 Chests from the 19th November 2019 to the 8th March 2022 as part of Update 35 to Update 57.
Fifty (50) Magics were introduced as Common in the Snow White and the Seven Dwarfs Chests from the 15th October 2019 to the 12th July 2022 as part of Update 34 to Update 60.
One hundred and fifty (150) Magics were introduced as Rare in the Finding Nemo Chests from the 15th October 2019 to the 1st June 2021 as part of Update 34 to Update 50.
Four hundred (400) Magics were introduced as Uncommon in the Amber Chests from the 17th to the 19th September 2019 as part of Tower Challenge (Prince Charming).
One hundred and fifty (150) Magics were introduced as Rare in the The Lion King Chests from the 10th September 2019 to the 12th July 2022 as part of Update 33 to Update 60.
Fifty (50) Magics were introduced as Common in the Lilo & Stitch Chests from the 10th September 2019 to the 14th December 2021 as part of Update 33 to Update 55.
Fifty (50) Magics were introduced as Common in the Nightmare Before Christmas Chests from the 6th August 2019 to the 12th July 2022 as part of Update 32 to Update 60.
One hundred and twenty-five (125) Magics were introduced as Uncommon in the Resource Chests from the 2nd July 2019 to the 31st May 2022 as part of Update 31 to Update 59.
Fifty (50) Magics were introduced as Common in the The Princess and the Frog Chests from the 2nd July 2019 to the 13th July 2021 as part of Update 31 to Update 51.
Fifty (50) Magics were introduced as Common in the The Little Mermaid Chests from the 2nd July 2019 to the 20th April 2021 as part of Update 31 to Update 49.
One hundred (100) Magics were introduced as Uncommon in the Mulan Chests from the 2nd July 2019 to the 25th January 2022 as part of Update 31 to Update 56.
One hundred (100) Magics were introduced as Uncommon in the Moana Chests from the 2nd July 2019 to the 12th July 2022 as part of Update 31 to Update 60.
One hundred and fifty (150) Magics were introduced as Rare in the Aladdin Chests from the 2nd July 2019 to the 14th July 2020 as part of Update 31 to Update 42.
Two hundred (200) Magics were introduced as Special Chance in the Bronze Chests from the 2nd July 2019 to the 24th August 2021 as part of Update 31 to Update 52.
Eighty (80) Magics were introduced as Rare in the Silver Chests from the 21st May to the 2nd July 2019 as part of Update 30.
Eight hundred (800) Magics were introduced as Uncommon in the Platinum Chests from the 21st May to the 2nd July 2019 as part of Update 30.
Fifty (50) Magics were introduced as Common in the Wreck-It Ralph Chests from the 21st May 2019 to the 12th July 2022 as part of Update 30 to Update 60.
Twenty (20) Magics were introduced as Uncommon in the Bronze Chests from the 21st May to the 2nd July 2019 as part of Update 30.
One hundred (100) Magics were introduced as Rare in the Radiant Chests from the 19th April to the 14th May 2019 as part of Find the Way Event.
Thirty (30) Magics were introduced as Common in the Silver Chests from the 16th April to the 21st May 2019 as part of Update 29.
Two hundred and fifty (250) Magics were introduced as Common in the Gold Chests from the 16th April to the 2nd July 2019 as part of Update 29 to Update 31.
Forty (40) Magics were introduced as Rare in the Bronze Chests from the 16th April to the 21st May 2019 as part of Update 29.
Eighty (80) Magics were introduced as Rare in the Silver Chests from the 19th March to the 16th April 2019 as part of Update 28.
Six hundred (600) Magics were introduced as Common in the Platinum Chests from the 19th March to the 21st May 2019 as part of Update 28 to Update 29.
Fifteen (15) Magics were introduced as Common in the Bronze Chests from the 19th March to the 16th April 2019 as part of Update 28.
One hundred (100) Magics were introduced as Rare in the Radiant Chests from the 15th February to the 12th March 2019 as part of Dreams Do Come True! Event.
Forty (40) Magics were introduced as Uncommon in the Silver Chests from the 12th February to the 19th March 2019 as part of Update 27.
Eight hundred (800) Magics were introduced as Uncommon in the Platinum Chests from the 12th February to the 19th March 2019 as part of Update 27.
One hundred (100) Magics were introduced as Uncommon in the Nightmare Before Christmas Chests from the 12th February to the 6th August 2019 as part of Update 27 to Update 32.
Five hundred (500) Magics were introduced as Rare in the Gold Chests from the 12th February to the 16th April 2019 as part of Update 27 to Update 28.
Forty (40) Magics were introduced as Rare in the Bronze Chests from the 12th February to the 19th March 2019 as part of Update 27.
One hundred (100) Magics were introduced as Rare in the Radiant Chests from the 30th January to the 4th February 2019 as part of Tower Challenge (Cri-Kee).
One hundred (100) Magics were introduced as Rare in the Magical Chests from the 25th to the 30th January 2019 as part of Tower Challenge (Cri-Kee).
Six hundred (600) Magics were introduced as Common in the Platinum Chests from the 9th January to the 12th February 2019 as part of Update 26.
Seventy-five (75) Magics were introduced as Uncommon in the Ruby Chests from the 4th to the 9th January 2019 as part of Tower Challenge (Cri-Kee).
Three hundred (300) Magics were introduced as Epic in the Sapphire Chests from the 25th to the 26th December 2018 as part of Christmas 2018.
Two hundred and fifty (250) Magics were introduced as Common in the Radiant Chests from the 23rd November to the 20th December 2018 as part of I'm Gonna Wreck It! Event.
Thirty (30) Magics were introduced as Common in the Silver Chests from the 21st November 2018 to the 12th February 2019 as part of Update 25 to Update 27.
Fifty (50) Magics were introduced as Common in the Winnie the Pooh Chests from the 21st November 2018 to the 14th July 2020 as part of Update 25 to Update 42.
One hundred (100) Magics were introduced as Uncommon in the The Little Mermaid Chests from the 21st November 2018 to the 2nd July 2019 as part of Update 25 to Update 31.
One hundred (100) Magics were introduced as Uncommon in the The Incredibles Chests since the 21st November 2018 as part of Update 25.
Fifty (50) Magics were introduced as Common in the Nightmare Before Christmas Chests from the 21st November 2018 to the 12th February 2019 as part of Update 25 to Update 27.
One hundred and fifty (150) Magics were introduced as Rare in the Mulan Chests from the 21st November 2018 to the 2nd July 2019 as part of Update 25 to Update 31.
One hundred (100) Magics were introduced as Uncommon in the Lilo & Stitch Chests from the 21st November 2018 to the 10th September 2019 as part of Update 25 to Update 33.
Fifty (50) Magics were introduced as Common in the Frozen Chests from the 21st November 2018 to the 17th December 2019 as part of Update 25 to Update 36.
Fifty (50) Magics were introduced as Common in the Big Hero 6 Chests from the 21st November 2018 to the 19th November 2019 as part of Update 25 to Update 35.
One hundred (100) Magics were introduced as Uncommon in the Beauty and the Beast Chests from the 21st November 2018 to the 9th July 2024 as part of Update 25 to Update 84.
Fifty (50) Magics were introduced as Common in the Alice in Wonderland Chests from the 21st November 2018 to the 6th October 2020 as part of Update 25 to Update 44.
Fifty (50) Magics were introduced as Common in the Aladdin Chests from the 21st November 2018 to the 2nd July 2019 as part of Update 25 to Update 31.
Twenty (20) Magics were introduced as Uncommon in the Bronze Chests from the 21st November 2018 to the 12th February 2019 as part of Update 25 to Update 26.
One thousand seven hundred (1,700) Magics were introduced as Rare in the Ruby Chests from the 29th October to the 1st November 2018 as part of Halloween 2018.
Fifteen (15) Magics were introduced as Common in the Magical Chests from the 18th October to the 2nd November 2018 as part of Tower Challenge (The Mayor).
One thousand seven hundred (1,700) Magics were introduced as Rare in the Platinum Chests from the 16th October 2018 to the 9th January 2019 as part of Update 24 to Update 25.
Three hundred (300) Magics were introduced as Uncommon in the Big Hero 6 Chests from the 16th October to the 21st November 2018 as part of Update 24.
Three hundred (300) Magics were introduced as Uncommon in the Gold Chests from the 16th October 2018 to the 12th February 2019 as part of Update 24 to Update 26.
Fifteen (15) Magics were introduced as Common in the Bronze Chests from the 16th October to the 21st November 2018 as part of Update 24.
Two hundred and fifty (250) Magics were introduced as Common in the Radiant Chests from the 7th September to the 2nd October 2018 as part of A Watery Tale Event.
Forty (40) Magics were introduced as Uncommon in the Silver Chests from the 5th September to the 21st November 2018 as part of Update 23 to Update 24.
Fifteen (15) Magics were introduced as Common in the The Incredibles Chests from the 5th September to the 21st November 2018 as part of Update 23 to Update 25.
Five hundred (500) Magics were introduced as Rare in the Lilo & Stitch Chests from the 5th September to the 21st November 2018 as part of Update 23 to Update 25.
Five hundred (500) Magics were introduced as Rare in the Gold Chests from the 5th September to the 16th October 2018 as part of Update 23.
Thirty (30) Magics were introduced as Common in the Silver Chests from the 1st August to the 5th September 2018 as part of Update 22.
Twenty (20) Magics were introduced as Uncommon in the Bronze Chests from the 1st August to the 16th October 2018 as part of Update 22 to Update 23.
Eighty (80) Magics were introduced as Rare in the Silver Chests from the 4th July to the 1st August 2018 as part of Update 21.
Fifteen (15) Magics were introduced as Common in the Mulan Chests from the 4th July to the 21st November 2018 as part of Update 21 to Update 25.
Fifteen (15) Magics were introduced as Common in the Bronze Chests from the 4th July to the 1st August 2018 as part of Update 21.
Two hundred and fifty (250) Magics were introduced as Common in the Magical Chests from the 6th June to the 1st July 2018 as part of Trouble in San Fransokyo! Event.
Forty (40) Magics were introduced as Uncommon in the Silver Chests from the 30th May to the 4th July 2018 as part of Update 20.
Six hundred (600) Magics were introduced as Common in the Platinum Chests from the 30th May to the 16th October 2018 as part of Update 20 to Update 24.
Five hundred (500) Magics were introduced as Rare in the Winnie the Pooh Chests from the 30th May to the 21st November 2018 as part of Update 20 to Update 25.
Five hundred (500) Magics were introduced as Rare in the The Incredibles Chests from the 30th May to the 5th September 2018 as part of Update 20 to Update 23.
Five hundred (500) Magics were introduced as Rare in the Nightmare Before Christmas Chests from the 30th May to the 21st November 2018 as part of Update 20 to Update 25.
Forty (40) Magics were introduced as Uncommon in the Frozen Chests from the 30th May to the 21st November 2018 as part of Update 20 to Update 25.
Five hundred (500) Magics were introduced as Rare in the Beauty and the Beast Chests from the 30th May to the 21st November 2018 as part of Update 20 to Update 25.
Forty (40) Magics were introduced as Uncommon in the Alice in Wonderland Chests from the 30th May to the 21st November 2018 as part of Update 20 to Update 25.
Five hundred (500) Magics were introduced as Rare in the Aladdin Chests from the 30th May to the 21st November 2018 as part of Update 20 to Update 25.
Eight hundred (800) Magics were introduced as Uncommon in the Platinum Chests from the 18th April to the 30th May 2018 as part of Update 19.
Five hundred (500) Magics were introduced as Rare in the Snow White and the Seven Dwarfs Chests from the 18th April 2018 to the 15th October 2019 as part of Update 19 to Update 33.
Five hundred (500) Magics were introduced as Rare in the Mulan Chests from the 18th April to the 4th July 2018 as part of Update 19 to Update 21.
Two hundred and fifty (250) Magics were introduced as Common in the Gold Chests from the 18th April to the 5th September 2018 as part of Update 19 to Update 22.
Forty (40) Magics were introduced as Rare in the Bronze Chests from the 18th April to the 4th July 2018 as part of Update 19 to Update 20.
Two hundred and fifty (250) Magics were introduced as Common in the Amber Chests from the 18th April 2018 to the 2nd July 2019 as part of Update 19 to Update 31.
Fifteen (15) Magics were introduced as Common in the Amber Chests from the 8th March to the 3rd April 2018 as part of Honey Tree Troubles Event.
Eighty (80) Magics were introduced as Rare in the Silver Chests from the 7th March to the 30th May 2018 as part of Update 18 to Update 19.
Six hundred (600) Magics were introduced as Common in the Platinum Chests from the 7th March to the 18th April 2018 as part of Update 18.
Fifteen (15) Magics were introduced as Common in the The Lion King Chests from the 7th March 2018 to the 10th September 2019 as part of Update 18 to Update 33.
Forty (40) Magics were introduced as Uncommon in the The Incredibles Chests from the 7th March to the 30th May 2018 as part of Update 18 to Update 20.
Fifteen (15) Magics were introduced as Common in the Nightmare Before Christmas Chests from the 7th March to the 30th May 2018 as part of Update 18 to Update 20.
Forty (40) Magics were introduced as Uncommon in the Mulan Chests from the 7th March to the 18th April 2018 as part of Update 18.
Fifteen (15) Magics were introduced as Common in the Frozen Chests from the 7th March to the 30th May 2018 as part of Update 18 to Update 20.
Fifteen (15) Magics were introduced as Common in the Beauty and the Beast Chests from the 7th March to the 30th May 2018 as part of Update 18 to Update 20.
Five hundred (500) Magics were introduced as Rare in the Gold Chests from the 7th March to the 18th April 2018 as part of Update 18.
Fifteen (15) Magics were introduced as Common in the Bronze Chests from the 7th March to the 18th April 2018 as part of Update 18.
Thirty (30) Magics were introduced as Common in the Silver Chests from the 24th January to the 7th March 2018 as part of Update 17.
One thousand seven hundred (1,700) Magics were introduced as Rare in the Platinum Chests from the 24th January to the 7th March 2018 as part of Update 17.
Fifteen (15) Magics were introduced as Common in the Alice in Wonderland Chests from the 24th January to the 30th May 2018 as part of Update 17 to Update 21.
Three hundred (300) Magics were introduced as Uncommon in the Gold Chests from the 24th January to the 7th March 2018 as part of Update 17.
Forty (40) Magics were introduced as Rare in the Bronze Chests from the 24th January to the 7th March 2018 as part of Update 17.
Six hundred (600) Magics were introduced as Common in the Sapphire Chests from the 25th to the 26th December 2017 as part of Christmas 2017.
Five hundred (500) Magics were introduced as Rare in the Gold Chests from the 6th December 2017 to the 24th January 2018 as part of Update 16.
Twenty (20) Magics were introduced as Uncommon in the Bronze Chests from the 6th December 2017 to the 24th January 2018 as part of Update 16.
Fifteen (15) Magics were introduced as Common in the Magical Chests from the 31st October to the 1st November 2017 as part of Halloween 2017.
Forty (40) Magics were introduced as Uncommon in the Silver Chests from the 25th October 2017 to the 24th January 2018 as part of Update 15 to Update 16.
Eight hundred (800) Magics were introduced as Uncommon in the Platinum Chests from the 25th October 2017 to the 24th January 2018 as part of Update 15 to Update 16.
Forty (40) Magics were introduced as Uncommon in the The Lion King Chests from the 25th October 2017 to the 7th March 2018 as part of Update 15 to Update 18.
Forty (40) Magics were introduced as Uncommon in the Aladdin Chests from the 25th October 2017 to the 30th May 2018 as part of Update 15 to Update 20.
Three hundred (300) Magics were introduced as Uncommon in the Gold Chests from the 25th October to the 6th December 2017 as part of Update 15.
Fifteen (15) Magics were introduced as Common in the Bronze Chests from the 25th October to the 6th December 2017 as part of Update 15.
One thousand seven hundred (1,700) Magics were introduced as Rare in the Platinum Chests from the 20th September to the 25th October 2017 as part of Update 14.
Forty (40) Magics were introduced as Rare in the Bronze Chests from the 20th September to the 25th October 2017 as part of Update 14.
Thirty (30) Magics were introduced as Common in the Silver Chests from the 16th August to the 25th October 2017 as part of Update 13 to Update 14.
Six hundred (600) Magics were introduced as Common in the Platinum Chests from the 16th August to the 20th September 2017 as part of Update 13.
Five hundred (500) Magics were introduced as Rare in the The Incredibles Chests from the 16th August 2017 to the 7th March 2018 as part of Update 13 to Update 18.
Forty (40) Magics were introduced as Uncommon in the Nightmare Before Christmas Chests from the 16th August 2017 to the 7th March 2018 as part of Update 13 to Update 18.
Fifteen (15) Magics were introduced as Common in the Mulan Chests from the 16th August 2017 to the 7th March 2018 as part of Update 13 to Update 18.
Five hundred (500) Magics were introduced as Rare in the Frozen Chests from the 16th August 2017 to the 7th March 2018 as part of Update 13 to Update 18.
Five hundred (500) Magics were introduced as Rare in the Beauty and the Beast Chests from the 16th August 2017 to the 7th March 2018 as part of Update 13 to Update 18.
Two hundred and fifty (250) Magics were introduced as Common in the Gold Chests from the 16th August to the 25th October 2017 as part of Update 13 to Update 14.
Fifteen (15) Magics were introduced as Common in the Bronze Chests from the 16th August to the 20th September 2017 as part of Update 13.
Fifteen (15) Magics were introduced as Unknown in the Mulan Chests from the 25th May to the 16th August 2017 as part of Update 11 to Update 13.
Five hundred (500) Magics were introduced as Unknown in the Beauty and the Beast Chests from the 25th May to the 16th August 2017 as part of Update 11 to Update 13.
Five hundred (500) Magics were introduced as Unknown in the The Incredibles Chests from the 12th April to the 16th August 2017 as part of Update 10 to Update 13.
Five hundred (500) Magics were introduced as Unknown in the Nightmare Before Christmas Chests from the 12th April to the 16th August 2017 as part of Update 10 to Update 13.
Five hundred (500) Magics were introduced as Unknown in the Frozen Chests from the 12th April to the 16th August 2017 as part of Update 10 to Update 13.
Eighty-five (85) Magics were introduced as Unknown in the Silver Chests from the 29th July 2016 to the 16th August 2017 as part of Update 3 to Update 12.
One thousand (1,000) Magics were introduced as Unknown in the Platinum Chests from the 29th July 2016 to the 16th August 2017 as part of Update 3 to Update 12.
Six hundred and fifty (650) Magics were introduced as Unknown in the Gold Chests from the 29th July 2016 to the 16th August 2017 as part of Update 3 to Update 12.
Thirty (30) Magics were introduced as Unknown in the Bronze Chests from the 29th July 2016 to the 16th August 2017 as part of Update 3 to Update 12.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 26th November 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 19th November 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 13th November 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 8th November 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 1st November 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 29th October 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 25th October 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 19th October 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 16th October 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 9th October 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 1st October 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 27th September 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 19th September 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 13th September 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 8th September 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 1st September 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 28th August 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 20th August 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 14th August 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 3rd August 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 28th July 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 24th July 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 16th July 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 2nd July 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 25th June 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 22nd June 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 15th June 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 4th June 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 30th May 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 25th May 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 17th May 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 8th May 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 1st May 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 23rd April 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 10th April 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 5th April 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 26th March 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 22nd March 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 12th March 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 7th March 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 1st March 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 25th February 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 11th February 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 3rd February 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 28th January 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 18th January 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 13th January 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar since the 2nd January 2024.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 27th December 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 17th December 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 12th December 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 2nd December 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 22nd November 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 17th November 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 9th November 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 29th October 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 20th October 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 10th October 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 26th September 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 15th September 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 7th September 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 2nd September 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 29th August 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 19th August 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 13th August 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 2nd August 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 27th July 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 19th July 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 11th July 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 1st July 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 21st June 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 11th June 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 2nd June 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 25th May 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 17th May 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 6th May 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 22nd April 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 16th April 2023.
Players were given free Magics as Day 3 throught the Welcome Rewards Calendar since the 11th April 2023 as part of Update 69 (Alternate).
Players were given free Magics as Day 2 throught the Welcome Rewards Calendar since the 11th April 2023 as part of Update 69.
Players were given free Magics as Day 5 throught the Welcome Rewards Calendar since the 11th April 2023 as part of Update 69 (Alternate).
Players were given free Magics as Day 6 throught the Welcome Rewards Calendar since the 11th April 2023 as part of Update 69.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 7th April 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 29th March 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 21st March 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 15th March 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 2nd March 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 22nd February 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 12th February 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 7th February 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 31st January 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 19th January 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 13th January 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2023 since the 3rd January 2023.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 23rd December 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 14th December 2022.
Players were given free Magics as Day 6 throught the Welcome Rewards Calendar from the 13th December 2022 to the 11th April 2023 as part of Update 65 to Update 69.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 1st December 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 20th November 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 12th November 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 5th November 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 30th October 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 18th October 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 4th October 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 22nd September 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 9th September 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 1st September 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 19th August 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 11th August 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 2nd August 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 22nd July 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 15th July 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 6th July 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 28th June 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 19th June 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 11th June 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 1st June 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 25th May 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 15th May 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 6th May 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 29th April 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 22nd April 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 14th April 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 7th April 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 30th March 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 19th March 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 9th March 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 3rd March 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 27th February 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 20th February 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 11th February 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 2nd February 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 25th January 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 16th January 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 14th January 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2022 since the 3rd January 2022.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 27th December 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 18th December 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 12th December 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 5th December 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 23rd November 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 17th November 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 12th November 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 4th November 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 17th October 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 9th October 2021.
Players were given free Magics as Day 2 throught the Welcome Rewards Calendar from the 5th October 2021 to the 11th April 2023 as part of Update 53 to Update 69.
Players were given free Magics as Day 2 throught the Welcome Rewards Calendar from the 5th October 2021 to the 13th December 2022 as part of Update 53 to Update 65.
Players were given free Magics in Discovery 5 throught the Discovery Rewards since the 5th October 2021 as part of Update 53.
Players were given five hundred (500) free Magics in Discovery 1 throught the Discovery Rewards since the 5th October 2021 as part of Update 53.
Players were given free Magics in Discovery 3 throught the Discovery Rewards since the 5th October 2021 as part of Update 53.
Players were given free Magics in Discovery 2 throught the Discovery Rewards since the 5th October 2021 as part of Update 53.
Players were given free Magics in Discovery 4 throught the Discovery Rewards since the 5th October 2021 as part of Update 53.
Players were given free Magics in Discovery 6 throught the Discovery Rewards since the 5th October 2021 as part of Update 53.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 29th September 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 14th September 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 2nd September 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 22nd August 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 11th August 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 3rd August 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 25th July 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 17th July 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 6th July 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 1st July 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 23rd June 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 5th June 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 29th May 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 21st May 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 11th May 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 1st May 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 24th April 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 20th April 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 10th April 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 2nd April 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 30th March 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 21st March 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 10th March 2021.
Players were given free Magics in Discovery 5 throught the Discovery Rewards from the 9th March to the 5th October 2021 as part of Update 48 to Update 53.
Players were given free Magics in Discovery 4 throught the Discovery Rewards from the 9th March to the 5th October 2021 as part of Update 48 to Update 53.
Players were given free Magics in Discovery 3 throught the Discovery Rewards from the 9th March to the 5th October 2021 as part of Update 48 to Update 53.
Players were given five hundred (500) free Magics in Discovery 1 throught the Discovery Rewards from the 9th March to the 5th October 2021 as part of Update 48 to Update 53.
Players were given free Magics in Discovery 6 throught the Discovery Rewards from the 9th March to the 5th October 2021 as part of Update 48 to Update 53.
Players were given free Magics in Discovery 2 throught the Discovery Rewards from the 9th March to the 5th October 2021 as part of Update 48 to Update 53.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 5th March 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 28th February 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 20th February 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 16th February 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 3rd February 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 27th January 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 17th January 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 15th January 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2021 since the 6th January 2021.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 30th December 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 19th December 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 12th December 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 3rd December 2020.
Players were given seven hundred and fifty (750) free Magics throught the Daily Rewards Calendar/2020 since the 24th November 2020.
Players were given seven hundred and fifty (750) free Magics throught the Daily Rewards Calendar/2020 since the 11th November 2020.
Players were given seven hundred and fifty (750) free Magics throught the Daily Rewards Calendar/2020 since the 5th November 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 27th October 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 20th October 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 11th October 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 27th September 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 20th September 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 15th September 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 2nd September 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 25th August 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 18th August 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 6th August 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 31st July 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 22nd July 2020.
Players were given seven hundred and fifty (750) free Magics throught the Daily Rewards Calendar/2020 since the 17th July 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 12th July 2020.
Players were given seven hundred and fifty (750) free Magics throught the Daily Rewards Calendar/2020 since the 4th July 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 27th June 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 17th June 2020.
Players were given seven hundred and fifty (750) free Magics throught the Daily Rewards Calendar/2020 since the 2nd June 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 26th May 2020.
Players were given seven hundred and fifty (750) free Magics throught the Daily Rewards Calendar/2020 since the 17th May 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 13th May 2020.
Players were given seven hundred and fifty (750) free Magics throught the Daily Rewards Calendar/2020 since the 1st May 2020.
Players were given seven hundred and fifty (750) free Magics throught the Daily Rewards Calendar/2020 since the 18th April 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 9th April 2020.
Players were given seven hundred and fifty (750) free Magics throught the Daily Rewards Calendar/2020 since the 2nd April 2020.
Players were given seven hundred and fifty (750) free Magics throught the Daily Rewards Calendar/2020 since the 18th March 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 13th March 2020.
Players were given seven hundred and fifty (750) free Magics throught the Daily Rewards Calendar/2020 since the 10th March 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 1st March 2020.
Players were given seven hundred and fifty (750) free Magics throught the Daily Rewards Calendar/2020 since the 26th February 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 13th February 2020.
Players were given seven hundred and fifty (750) free Magics throught the Daily Rewards Calendar/2020 since the 7th February 2020.
Players were given seven hundred and fifty (750) free Magics throught the Daily Rewards Calendar/2020 since the 1st February 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 19th January 2020.
Players were given seven hundred and fifty (750) free Magics throught the Daily Rewards Calendar/2020 since the 14th January 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2020 since the 4th January 2020.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2019 since the 30th November 2019.
Players were given seven hundred and fifty (750) free Magics throught the Daily Rewards Calendar/2019 since the 17th November 2019.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2019 since the 16th October 2019.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2019 since the 6th October 2019.
Players were given seven hundred and fifty (750) free Magics throught the Daily Rewards Calendar/2019 since the 1st October 2019.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2019 since the 28th August 2019.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2019 since the 13th August 2019.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2019 since the 1st August 2019.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2019 since the 20th July 2019.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2019 since the 3rd July 2019.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2019 since the 30th June 2019.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2019 since the 27th June 2019.
Players were given three hundred and seventy-five (375) free Magics throught the Daily Rewards Calendar/2019 since the 23rd June 2019.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2019 since the 20th June 2019.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2019 since the 13th June 2019.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2019 since the 6th June 2019.
Players were given three hundred and seventy-five (375) free Magics throught the Daily Rewards Calendar/2019 since the 29th May 2019.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2019 since the 26th May 2019.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2019 since the 22nd May 2019.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2019 since the 18th May 2019.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2019 since the 16th May 2019.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2019 since the 10th May 2019.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2019 since the 1st May 2019.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2019 since the 27th April 2019.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2019 since the 21st April 2019.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2019 since the 18th April 2019.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2019 since the 11th April 2019.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2019 since the 7th April 2019.
Players were given three hundred and twenty-five (325) free Magics throught the Daily Rewards Calendar/2019 since the 3rd April 2019.
Players were given three hundred and twenty-five (325) free Magics throught the Daily Rewards Calendar/2019 since the 30th March 2019.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2019 since the 24th March 2019.
Players were given two hundred and fifty (250) free Magics throught the Daily Rewards Calendar/2019 since the 19th March 2019.
Players were given three hundred and seventy-five (375) free Magics throught the Daily Rewards Calendar/2019 since the 13th March 2019.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2019 since the 6th March 2019.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2019 since the 1st March 2019.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2019 since the 26th February 2019.
Players were given three hundred and seventy-five (375) free Magics throught the Daily Rewards Calendar/2019 since the 21st February 2019.
Players were given three hundred and twenty-five (325) free Magics throught the Daily Rewards Calendar/2019 since the 13th February 2019.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2019 since the 7th February 2019.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2019 since the 1st February 2019.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2019 since the 29th January 2019.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2019 since the 24th January 2019.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2019 since the 17th January 2019.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2019 since the 11th January 2019.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2019 since the 3rd January 2019.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2018 since the 30th December 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 27th December 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 18th December 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 13th December 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 8th December 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 4th December 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 1st December 2018.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2018 since the 28th November 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 7th November 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 2nd November 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 26th October 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 21st October 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 17th October 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 13th October 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 7th October 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 4th October 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 2nd October 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 28th September 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 21st September 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 16th September 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 12th September 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 8th September 2018.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2018 since the 5th September 2018.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2018 since the 1st September 2018.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2018 since the 23rd August 2018.
Players were given four hundred and fifty (450) free Magics throught the Daily Rewards Calendar/2018 since the 15th August 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 10th August 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 5th August 2018.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2018 since the 1st August 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 27th July 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 18th July 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 13th July 2018.
Players were given four hundred (400) free Magics throught the Daily Rewards Calendar/2018 since the 8th July 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 1st July 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 27th June 2018.
Players were given three hundred (300) free Magics throught the Daily Rewards Calendar/2018 since the 23rd June 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 20th June 2018.
Players were given four hundred and fifty (450) free Magics throught the Daily Rewards Calendar/2018 since the 16th June 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 13th June 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 9th June 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 4th June 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 30th May 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 27th May 2018.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2018 since the 23rd May 2018.
Players were given two hundred and fifty (250) free Magics throught the Daily Rewards Calendar/2018 since the 20th May 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 17th May 2018.
Players were given two hundred and fifty (250) free Magics throught the Daily Rewards Calendar/2018 since the 12th May 2018.
Players were given four hundred (400) free Magics throught the Daily Rewards Calendar/2018 since the 8th May 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 6th May 2018.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2018 since the 2nd May 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 25th April 2018.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2018 since the 18th April 2018.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2018 since the 7th March 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 3rd March 2018.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2018 since the 1st March 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 27th February 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 22nd February 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 15th February 2018.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2018 since the 11th February 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 10th February 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 7th February 2018.
Players were given five hundred (500) free Magics throught the Daily Rewards Calendar/2018 since the 3rd February 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 1st February 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 30th January 2018.
Players were given three hundred and fifty (350) free Magics throught the Daily Rewards Calendar/2018 since the 24th January 2018.
Players were given two hundred and fifty (250) free Magics throught the Daily Rewards Calendar/2018 since the 23rd January 2018.
Players were given five hundred (500) free Magics throught the December Holiday Gifting 2017 since the 2nd December 2017.
Players were given five hundred (500) free Magics as Day 27 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given six hundred and fifty (650) free Magics as Day 78 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given five hundred (500) free Magics as Day 76 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given four hundred (400) free Magics as Day 8 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given seven hundred (700) free Magics as Day 82 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given five hundred (500) free Magics as Day 81 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given free Magics as Day 84 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given free Magics as Day 75 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given seven hundred and fifty (750) free Magics as Day 72 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given eight hundred (800) free Magics as Day 68 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given nine hundred (900) free Magics as Day 73 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given two hundred and fifty (250) free Magics as Day 7 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given five hundred (500) free Magics as Day 71 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given free Magics as Day 70 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given five hundred (500) free Magics as Day 66 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given free Magics as Day 85 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given seven hundred (700) free Magics as Day 87 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given four hundred (400) free Magics as Day 12 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given five hundred (500) free Magics as Day 14 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given two hundred (200) free Magics as Day 11 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given fifty (50) free Magics as Day 1 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given seven hundred and fifty (750) free Magics as Day 1 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given five hundred (500) free Magics as Day 86 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given two hundred and fifty (250) free Magics as Day 16 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given six hundred (600) free Magics as Day 18 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given free Magics as Day 89 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given four hundred (400) free Magics as Day 17 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given six hundred (600) free Magics as Day 9 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given eight hundred (800) free Magics as Day 20 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given three hundred (300) free Magics as Day 21 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given eight hundred (800) free Magics as Day 29 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given free Magics as Day 65 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given six hundred and fifty (650) free Magics as Day 62 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given five hundred (500) free Magics as Day 4 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given nine hundred (900) free Magics as Day 39 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given four hundred and fifty (450) free Magics as Day 41 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given nine hundred (900) free Magics as Day 43 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given six hundred (600) free Magics as Day 42 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given free Magics as Day 45 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given eight hundred (800) free Magics as Day 38 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given nine hundred and fifty (950) free Magics as Day 35 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given three hundred and fifty (350) free Magics as Day 26 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given four hundred and fifty (450) free Magics as Day 36 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given four hundred (400) free Magics as Day 31 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given eight hundred (800) free Magics as Day 33 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given six hundred (600) free Magics as Day 32 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given seven hundred (700) free Magics as Day 64 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given five hundred (500) free Magics as Day 46 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given seven hundred and fifty (750) free Magics as Day 24 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given six hundred and fifty (650) free Magics as Day 23 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given seven hundred and fifty (750) free Magics as Day 58 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given four hundred and fifty (450) free Magics as Day 22 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given five hundred (500) free Magics as Day 61 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given one hundred and fifty (150) free Magics as Day 6 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given six hundred and fifty (650) free Magics as Day 48 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given five hundred (500) free Magics as Day 56 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given eight hundred (800) free Magics as Day 53 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given five hundred (500) free Magics as Day 5 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given nine hundred (900) free Magics as Day 54 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given free Magics as Day 50 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given six hundred and fifty (650) free Magics as Day 52 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given five hundred (500) free Magics as Day 51 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were given free Magics as Day 3 throught the Daily Login Rewards from the 17th March 2016 to the 24th January 2017 as part of .
Players were able to purchase seven hundred and fifty (750) Magics in the Bundle Shop as part of the Bo Peep Token Bundle for Real Money since the 11th June 2024.
Gallery[ ] | {"url":"https://dmk.fandom.com/wiki/Magic","timestamp":"2024-11-14T18:59:11Z","content_type":"text/html","content_length":"923578","record_id":"<urn:uuid:0fba4eaf-55f5-4988-b119-63a7181de47e>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00091.warc.gz"} |
algebra interface based on the BLAS
Authors and contributors
• Mark Hoemmen (mhoemmen@nvidia.com) (NVIDIA)
• Daisy Hollman (cpp@dsh.fyi) (Google)
• Christian Trott (crtrott@sandia.gov) (Sandia National Laboratories)
• Daniel Sunderland (dansunderland@gmail.com)
• Nevin Liber (nliber@anl.gov) (Argonne National Laboratory)
• Alicia Klinvex (alicia.klinvex@unnpp.gov) (Naval Nuclear Laboratory)
• Li-Ta Lo (ollie@lanl.gov) (Los Alamos National Laboratory)
• Damien Lebrun-Grandie (lebrungrandt@ornl.gov) (Oak Ridge National Laboratories)
• Graham Lopez (glopez@nvidia.com) (NVIDIA)
• Peter Caday (peter.caday@intel.com) (Intel)
• Sarah Knepper (sarah.knepper@intel.com) (Intel)
• Piotr Luszczek (luszczek@icl.utk.edu) (University of Tennessee)
• Timothy Costa (tcosta@nvidia.com) (NVIDIA)
• Zach Laine (particular thanks for R12 review and suggestions)
• Chip Freitag (chip.freitag@amd.com) (AMD)
• Bryce Adelstein Lelbach (brycelelbach@gmail.com) (NVIDIA)
• Srinath Vadlamani (Srinath.Vadlamani@arm.com) (ARM)
• Rene Vanoostrum (Rene.Vanoostrum@amd.com) (AMD)
Revision history
• Revision 0 (pre-Cologne) submitted 2019-06-17
□ Received feedback in Cologne from SG6, LEWGI, and (???).
• Revision 1 (pre-Belfast) to be submitted 2019-10-07
□ Account for Cologne 2019 feedback
☆ Make interface more consistent with existing Standard algorithms
☆ Change dot, dotc, vector_norm2, and vector_abs_sum to imitate reduce, so that they return their result, instead of taking an output parameter. Users may set the result type via optional
init parameter.
□ Minor changes to “expression template” classes, based on implementation experience
□ Briefly address LEWGI request of exploring concepts for input arguments.
□ Lazy ranges style API was NOT explored.
• Revision 2 (pre-Cologne) to be submitted 2020-01-13
□ Add “Future work” section.
□ Remove “Options and votes” section (which were addressed in SG6, SG14, and LEWGI).
□ Remove basic_mdarray overloads.
□ Remove batched linear algebra operations.
□ Remove over- and underflow requirement for vector_norm2.
□ Mandate any extent compatibility checks that can be done at compile time.
□ Add missing functions {symmetric,hermitian}_matrix_rank_k_update and triangular_matrix_{left,right}_product.
□ Remove packed_view function.
□ Fix wording for {conjugate,transpose,conjugate_transpose}_view, so that implementations may optimize the return type. Make sure that transpose_view of a layout_blas_packed matrix returns a
layout_blas_packed matrix with opposite Triangle and StorageOrder.
□ Remove second template parameter T from accessor_conjugate.
□ Make scaled_scalar and conjugated_scalar exposition only.
□ Add in-place overloads of triangular_matrix_matrix_{left,right}_solve, triangular_matrix_{left,right}_product, and triangular_matrix_vector_solve.
□ Add alpha overloads to {symmetric,hermitian}_matrix_rank_{1,k}_update.
□ Add Cholesky factorization and solve examples.
• Revision 3 (electronic) to be submitted 2021-04-15
□ Per LEWG request, add a section on our investigation of constraining template parameters with concepts, in the manner of P1813R0 with the numeric algorithms. We concluded that we disagree
with the approach of P1813R0, and that the Standard’s current GENERALIZED_SUM approach better expresses numeric algorithms’ behavior.
□ Update references to the current revision of P0009 (mdspan).
□ Per LEWG request, introduce std::linalg namespace and put everything in there.
□ Per LEWG request, replace the linalg_ prefix with the aforementioned namespace. We renamed linalg_add to add, linalg_copy to copy, and linalg_swap to swap_elements.
□ Per LEWG request, do not use _view as a suffix, to avoid confusion with “views” in the sense of Ranges. We renamed conjugate_view to conjugated, conjugate_transpose_view to
conjugate_transposed, scaled_view to scaled, and transpose_view to transposed.
□ Change wording from “then implementations will use T’s precision or greater for intermediate terms in the sum,” to “then intermediate terms in the sum use T’s precision or greater.” Thanks to
Jens Maurer for this suggestion (and many others!).
□ Before, a Note on vector_norm2 said, “We recommend that implementers document their guarantees regarding overflow and underflow of vector_norm2 for floating-point return types.”
Implementations always document “implementation-defined behavior” per [defs.impl.defined]. (Thanks to Jens Maurer for pointing out that “We recommend…” does not belong in the Standard.) Thus,
we changed this from a Note to normative wording in Remarks: “If either in_vector_t::element_type or T are floating-point types or complex versions thereof, then any guarantees regarding
overflow and underflow of vector_norm2 are implementation-defined.”
□ Define return types of the dot, dotc, vector_norm2, and vector_abs_sum overloads with auto return type.
□ Remove the explicitly stated constraint on add and copy that the rank of the array arguments be no more than 2. This is redundant, because we already impose this via the existing constraints
on template parameters named in_object*_t, inout_object*_t, or out_object*_t. If we later wish to relax this restriction, then we only have to do so in one place.
□ Add vector_sum_of_squares. First, this gives implementers a path to implementing vector_norm2 in a way that achieves the over/underflow guarantees intended by the BLAS Standard. Second, this
is a useful algorithm in itself for parallelizing vector 2-norm computation.
□ Add matrix_frob_norm, matrix_one_norm, and matrix_inf_norm (thanks to coauthor Piotr Luszczek).
□ Address LEWG request for us to investigate support for GPU memory. See section “Explicit support for asynchronous return of scalar values.”
□ Add ExecutionPolicy overloads of the in-place versions of triangular_matrix_vector_solve, triangular_matrix_left_product, triangular_matrix_right_product, triangular_matrix_matrix_left_solve,
and triangular_matrix_matrix_right_solve.
• Revision 4 (electronic), to be submitted 2021-08-15
□ Update authors’ contact info.
□ Rebase atop P2299R3, which in turn sits atop P0009R12. Make any needed fixes due to these changes. (P1673R3 was based on P0009R10, without P2299.) Update P0009 references to point to the
latest version (R12).
□ Fix requirements for {symmetric,hermitian}_matrix_{left,right}_product.
□ Change SemiRegular<Scalar> to semiregular<Scalar>.
□ Make Real requirements refer to [complex.numbers.general], rather than explicitly listing allowed types. Remove redundant constraints on Real.
□ In [linalg.algs.reqs], clarify that “unique layout” for output matrix, vector, or object types means is_always_unique() equals true.
□ Change file format from Markdown to Bikeshed.
□ Impose requirements on the types on which algorithms compute, and on the algorithms themselves (e.g., what rearrangements are permitted). Add a section explaining how we came up with the
requirements. Lift the requirements into a new higher-level section that applies to the entire contents of [linalg], not just to [linalg.algs].
□ Add “Overview of contents” section.
□ In the last review, LEWG had asked us to consider using exposition-only concepts and requires clauses to express requirements more clearly. We decided not to do so, because we did not think
it would add clarity.
□ Add more examples.
• Revision 5 (electronic), to be submitted 2021-10-15
□ P0009R13 (to be submitted 2021-10-15) changes mdspan to use operator[] instead of operator() as the array access operator. Revision 5 of P1673 adopts this change, and is “rebased” atop
• Revision 6 (electronic), to be submitted 2021-12-15
□ Update references to P0009 (P0009R14) and P2128 (P2128R6).
□ Fix typos in *rank_2k descriptions.
□ Remove references to any mdspan rank greater than 2. (These were left over from earlier versions of the proposal that included “batched” operations.)
□ Fix vector_sum_of_squares name in BLAS comparison table.
□ Replace “Requires” with “Preconditions,” per new wording guidelines.
□ Remove all overloads of symmetric_matrix_rank_k_update and hermitian_matrix_rank_k_update that do not take an alpha parameter. This prevents ambiguity between overloads that take
ExecutionPolicy&& but not alpha, and overloads that take alpha but not ExecutionPolicy&&.
□ Harmonize with the implementation, by adding operator+, operator*, and comparison operators to conjugated_scalar.
• Revision 7 (electronic), to be submitted 2022-04-15
□ Update author affiliations and e-mail addresses
□ Update proposal references
□ Fix typo observed here
□ Add missing ExecutionPolicy overload of in-place triangular_matrix_vector_product; issue was observed here
□ Fix mixed-up order of sum_of_squares_result aggregate initialization arguments in vector_norm2 note
□ Fill in missing parts of matrix_frob_norm and vector_norm2 specification, addressing this issue
• Revision 8 (electronic), to be submitted 2022-05-15
□ Fix Triangle and R[0,0] in Cholesky TSQR example
□ Explain why we apply Triangle to the possibly transformed input matrix, while the BLAS applies UPLO to the original input matrix
□ Optimize transposed for all known layouts, so as to avoid use of layout_transpose when not needed; fix computation of strides for transposed layouts
□ Fix matrix extents in constraints and mandates for symmetric_matrix_rank_k_update and hermitian_matrix_rank_k_update (thanks to Mikołaj Zuzek (NexGen Analytics,
mikolaj.zuzek@ng-analytics.com) for reporting the issue)
□ Resolve vagueness in constness of return type of transposed
□ Resolve vagueness in constness of return type of scaled, and make its element type the type of the product, rather than forcing it back to the input mdspan’s element type
□ Remove decay member function from accessor_conjugate and accessor_scaled, as it is no longer part of mdspan’s accessor policy requirements
□ Make sure accessor_conjugate and conjugated work correctly for user-defined complex types, introduce conj-if-needed to simplify wording, and resolve vagueness in constness of return type of
conjugated. Make sure that conj-if-needed works for custom types where conj is not type-preserving. (Thanks to Yu You (NVIDIA, yuyou@nvidia.com) and Phil Miller (Intense Computing,
phil.miller@intensecomputing.com) for helpful discussions.)
□ Fix typo in givens_rotation_setup for complex numbers, and other typos (thanks to Phil Miller for reporting the issue)
• Revision 9 (electronic), to be submitted 2022-06-15
□ Apply to-be-submitted P0009R17 changes (see P2553 in particular) to all layouts, accessors, and examples in this proposal.
□ Improve triangular_matrix_matrix_{left,right}_solve() “mathematical expression of the algorithm” wording.
□ layout_blas_packed: Fix required_span_size() and operator() wording
□ Make sure all definitions of lower and upper triangle are consistent.
□ Changes to layout_transpose:
☆ Make layout_transpose::mapping(const nested_mapping_type&) constructor explicit, to avoid inadvertent transposition.
☆ Remove the following Constraint on layout_transpose::mapping: “for all specializations E of extents with E::rank() equal to 2, typename Layout::template mapping<E>::is_always_unique() is
true.” (This Constraint was not correct, because the underlying mapping is allowed to be nonunique.)
☆ Make layout_transpose::mapping::stride wording independent of rank() equals 2 constraint, to improve consistency with rest of layout_transpose wording.
□ Changes to scaled and conjugated:
☆ Include and specify all the needed overloaded arithmetic operators for scaled_scalar and conjugated_scalar, and fix accessor_scaled and accessor_conjugate accordingly.
☆ Simplify scaled to ensure preservation of order of operations.
☆ Add missing nested_accessor() to accessor_scaled.
☆ Add hidden friends abs, real, imag, and conj to common subclass of scaled_scalar and conjugated_scalar. Add wording to algorithms that use abs, real, and/or imag, to indicate that these
functions are to be found by unqualified lookup. (Algorithms that use conjugation already use conj-if-needed in their wording.)
□ Changes suggested by SG6 small group review on 2022/06/09
☆ Make existing exposition-only function conj-if-needed use conj if it can find it via unqualified (ADL-only) lookup, otherwise be the identity. Make it a function object instead of a
function, to prevent ADL issues.
☆ Algorithms that mathematically need to do division can now distinguish left division and right division (for the case of noncommutative multiplication), by taking an optional
BinaryDivideOp binary function object parameter. If none is given, binary operator/ is used.
□ Changes suggested by LEWG review of P1673R8 on 2022/05/24
☆ LEWG asked us to add a section to the paper explaining why we don’t define an interface for customization of the “back-end” optimized BLAS operations. This section already existed, but we
rewrote it to improve clarity. Please see the section titled “We do not require using the BLAS library or any particular ‘back-end’.”
☆ LEWG asked us to add a section to the paper showing how BLAS 1 and ranges algorithms would coexist. We added this section, titled “Criteria for including BLAS 1 algorithms; coexistence
with ranges.”
☆ Address LEWG feedback to defer support for custom complex number types (but see above SG6 small group response).
□ Fix P1674 links to point to R2.
• Revision 10 (electronic), submitted 2022-10-15
□ Revise scaled and conjugated wording.
□ Make all matrix view functions constexpr.
□ Rebase atop P2642R1. Remove wording saying that we rebase atop P0009R17. (We don’t need that any more, because P0009 was merged into the current C++ draft.)
□ Remove layout_blas_general, as it has been replaced with the layouts proposed by P2642 (layout_left_padded and layout_right_padded).
□ Update layout_blas_packed to match mdspan’s other layout mappings in the current C++ Standard draft.
□ Update accessors to match mdspan’s other accessors in the current C++ Standard draft.
□ Update non-wording to reflect current status of mdspan (voted into C++ Standard draft) and submdspan (P2630).
• Revision 11, to be submitted 2023-01-15
□ Remove requirement that in_{vector,matrix,object}*_t have unique layout.
□ Change from name-based requirements (in_{vector,matrix,object}*_t) to exposition-only concepts. (This is our interpretation of LEWG’s Kona 2022/11/10 request to “explore expressing
constraints with concepts instead of named type requirements” (see https://github.com/cplusplus/papers/issues/557#issuecomment-1311054803).) Add new section for exposition-only concepts and
□ Add new exposition-only concept possibly-packed-inout-matrix to constrain symmetric and Hermitian update algorithms. These may write either to a unique-layout mdspan or to a
layout_blas_packed mdspan (whose layout is nonunique).
□ Remove Constraints made redundant by the new exposition-only concepts.
□ Remove unnecessary constraint on all algorithms that input mdspan parameter(s) have unique layout.
□ Remove the requirement that vector / matrix / object template parameters may deduce a const lvalue reference or a (non-const) rvalue reference to an mdspan. The new exposition-only concepts
make this unnecessary.
□ Fix dot Remarks to include both vector types.
□ Fix wording of several functions and examples to use mdspan’s value_type alias instead of its (potentially cv-qualified) element_type alias. This includes dot, vector_sum_of_squares,
vector_norm2, vector_abs_sum, matrix_frob_norm, matrix_one_norm, matrix_inf_norm, and the QR factorization example.
□ Make matrix_vector_product template parameter order consistent with parameter order.
□ Follow LEWG guidance to simplify Effects and Constraints (e.g., removing wording referring to the “the mathematical expression for the algorithm”) (as a kind of expression-implying
constraints) by describing them mathematically (in math font). This is our interpretation of the “hand wavy do math” poll option that received a majority of votes at Kona on 2022/11/10 (see
https://github.com/cplusplus/papers/issues/557#issuecomment-1311054803). Revise [linalg.reqs.val] accordingly.
□ Change matrix_one_norm Precondition (that the result of abs of a matrix element is convertible to T) to a Constraint.
□ Change vector_abs_sum Precondition (that the result of init + abs(v[i]) is convertible to T) to a Constraint.
□ Reformat entire document to use Pandoc instead of Bikeshed. This made it possible to add paragraph numbers and fix exposition-only italics.
□ In conjugated-scalar, make conjugatable<ReferenceValue> a Constraint instead of a Mandate.
□ Delete “If an algorithm in [linalg.algs] accesses the elements of an out-vector, out-matrix, or out-object, it will do so in write-only fashion.” This would prevent implementations from,
e.g., zero-initializing the elements and then updating them with +=.
□ Rebase atop P2642R2 (updating from R1).
□ Add explicit defaulted default constructors to all the tag types, in imitation of in_place_t.
□ If two objects refer to the same matrix or vector, we no longer say that they must have the same layout. First, “same layout” doesn’t have to mean the same type. For example, a
layout_stride::mapping instance may represent the same layout as a layout_left::mapping instance. Second, the two objects can’t represent “the same matrix” or “the same vector” if they have
different layout mappings (in the mathematical sense, not in the type sense).
□ Make sure that all updating methods say that the input(s) can be the same as the output (not all inputs, just the ones for which that makes sense – e.g., for the matrix-vector product z=y
+Ax, y and z can refer to the same vector, but not x and z).
□ Change givens_rotation_setup to return outputs in the new givens_rotation_setup_result struct, instead of as output parameters.
• Revision 12 to be submitted 2023/03/15
□ Change “complex version(s) thereof” to “specialization(s) of complex”
□ Remove Note (“conjugation is self-annihilating”) repeating the contents of a note a few lines before
□ Remove Notes with editorial or tutorial content
□ Remove outdated wildcard name-based requirements language
□ Remove Notes that incorrectly stated that the input matrix was symmetric or Hermitian (it’s not necessarily symmetric or Hermitian; it’s just interpreted that way)
□ Remove implementation freedom Notes
□ Update non-wording text referring to P1467 (which was voted into C++23)
□ Change “the following requirement(s)” to “the following element(s),” as a “requirement” is a kind of element
□ Change “algorithm or method” to “function”
□ Change Preconditions elements that should be something else (generally Mandates) to that something else
□ In some cases where it makes sense, use extents::operator== in elements, instead of comparing extent(r) for each r
□ For vector_sum_of_squares, remove last remnants of R10 “mathematical expression of the algorithm” wording
□ Add section [linalg.general] (“General”) that explains mathematical notation, the interpretation of Triangle t parameters, and that calls to abs, conj, imag, and real are unqualified. Move
the definitions of lower triangle, upper triangle, and diagonal from [linalg.tags.triangle] into [linalg.general]. Move the definitions of implicit unit diagonal and explicit diagonal from
[linalg.tags.diagonal] into [linalg.general]. Remove Remarks on Triangle and DiagonalStorage and definitions of A^T and A^H that [linalg.general] makes redundant.
□ Replace “Effects: Equivalent to: return X;” with “Returns: X”. Fix formatting of multiple Returns: cases. Change “name(s) the type” to “is” or “be.”
□ In [linalg.tags.order], add missing forward reference.
□ Replace “if applicable” with hopefully canonical wording.
□ Audit complexity elements and revise their wording.
□ Nonwording: Update reference to P2128 to reflect its adoption into C++23, and remove outdated future work.
□ Nonwording: Add implementation experience section.
□ Nonwording: Update P1385 reference from R6 to R7, and move the “interoperable with other linear algebra proposals” section to a more fitting place.
□ Remove reference from the list of linear algebra value types in [linalg.reqs.val].
□ Add feature test macro __cpp_lib_linalg.
□ Remove “Throws: Nothing” from givens_rotation_setup because it doesn’t need to be explicitly stated, and make all overloads noexcept.
□ transposed no longer returns a read-only mdspan.
□ Remove const from by-value parameters, and pass linear algebra value types by value, including complex.
□ Rename vector_norm2 to vector_two_norm.
□ symmetric_matrix_rank_k_update and hermitian_matrix_rank_k_update (“the BLAS 3 *_update functions”) now have overloads that do not take an alpha scaling parameter.
□ Change {symmetric,hermitian,triangular}_matrix_{left,right}_product to {symmetric,hermitian,triangular}_matrix_product. Distinguish the left and right cases by order of the A, t and B (or C,
in the case of in-place triangular_matrix_product) parameters. NOTE: As a result, triangular_matrix_product (for the in-place right product case) is now an exception to the rule that output
or in/out parameters appear last.
□ In [in]out-{matrix,vector,object}, instead of checking if the element type is const, check if the element type can be assigned to the reference type.
□ Split [linalg.reqs.val] into two sections: requirements on linear algebra value types (same label) and algorithm and class requirements [linalg.reqs.alg].
□ In wording of conj-if-needed, add missing definition of the type T.
□ Add exposition-only real-if-needed and imag-if-needed, and use them to make vector_abs_sum behave the same for custom complex types as for std::complex.
□ Fix two design issues with idx_abs_max.
1. For the complex case, make it behave like the BLAS’s ICAMAX or IZAMAX, by using abs-if-needed( real-if-needed (z[k])) + abs-if-needed( imag-if-needed (z[k])) as the element absolute value
definition for complex numbers, instead of abs(z[k]).
2. Make idx_abs_max behave the same for custom complex types as for std::complex.
□ Simplify wording for proxy-reference’s hidden friends real, imag, and conj, so that they just defer to the corresponding *-if-needed exposition-only functions, rather than duplicating the
wording of those functions.
□ Add exposition-only function abs-if-needed to address std::abs not being defined for unsigned integer types (which manifests as an ambiguous lookup compiler error). Simplify wording for
proxy-reference’s hidden friend abs to defer to abs-if-needed. Use abs-if-needed instead of abs throughout P1673.
□ Change remarks about aliasing (e.g., “y and z may refer to the same vector”) to say “object” instead of “vector” or “matrix.”
□ For in-place overwriting triangular matrix-matrix {left, right} product, restore the “left” and “right” in their names, and always put the input/output parameter at the end. (This restores
triangular_matrix_left_product and triangular_matrix_right_product for only the in-place overwriting case. See above changes for this revision.)
□ Add Demmel 2002 reference to the (C++ Standard) Bibliography.
□ Rephrase [linalg.reqs.val] to use “The type T must” language, and add real-if-needed and imag-if-needed to the list of expressions there.
□ Rephase [linalg.reqs.alg] to make it more like sort, e.g., not making it explicitly a constraint that certain expressions are well formed.
□ Add to [linalg.reqs.val] that a value-initialized object of linear algebra value type acts as the additive identity.
□ Define what it means for two mdspan to “alias” each other. Instead of saying that two things may refer to the same object, say that they may alias.
□ Change the name of the T init parameters and the template parameter of sum_of_squares_result to allow simplification of [linalg.reqs.val].
□ Delete Note explaining BLAS 1, 2, and 3.
• Revision 13 - running revision for LWG review
□ make scaled_accessor::reference const element_type
□ add converting, default and copy ctor to scaled_accessor
□ rename accessor_conjugate to conjugated_accessor
□ rename accessor_scaled to scaled_accessor
□ rename givens_rotation_setup to setup_givens_rotation
□ rename givens_rotation_apply to apply_givens_rotation
□ fix example for scaled
□ implement helper functions for algorithm mandates and preconditions and apply it to gemv
□ fix hanging sections throughout
□ use stable tags instead of “in this clause”
□ make linalg-reqs-flpt a note
□ fix linalg.algs.reqs constraint -> precondition
□ fix unary plus and well formed
□ remove proxy-reference , scaled_scalar and conjugated_scalar
□ Redo conjugated to simply rely on deduction guide for mdspan
□ fix numbering via dummy sections
□ Define what it means for two mdspan to “overlap” each other. Replace “shall view a disjoint set of elements of” wording (was [linalg.concepts] 3 in R12) with “shall not overlap.” “Alias”
retains its R12 meaning (view the same elements in the same order). This lets us retain existing use of “alias” in describing algorithms.
□ rename idx_abs_max to vector_idx_abs_max
□ make “may alias” a transitive verb phrase, and put all such wording expressions in the form “Output may alias Input”
□ fix matrix_rank_1_update* wording (rename template parameters to InOutMat, and fix Effects so that the algorithms are updating; use new “Computes” wording)
□ make sure all Effects-equivalent-to that use the execution policy use std::forward<ExecutionPolicy>(exec), instead of passing exec directly
□ change [linalg.alg.*] stable names to [linalg.algs.*], for consistency
□ fix stable names for BLAS 2 rank-1 symmetric and Hermitian updates
□ Remove any wording (e.g., for transposed) that depends on P2642 (padded mdspan layouts). We can restore and correct that wording later.
□ Fix [linalg.algs.reqs] 1 by changing “type requirements” to “Constraints.” Add the Constraint that ExecutionPolicy is an execution policy. Remove the requirement that the algorithms that take
ExecutionPolicy are parallel algorithms, because that would be circular with [algorithms.parallel] 2 (“A parallel algorithm is a function template listed in this document with a template
parameter named ExecutionPolicy”).
□ make layout_blas_packed::mapping::operator() take exactly two parameters, rather than a pack
□ for default BinaryDivideOp, replace lambda with divides<void>{}
□ add definitions of the “rows” and “columns” of a matrix to [linalg.general], so that [linalg.tags.order] can refer to rows and columns
□ layout_blas_packed::mapping::operator(): Pass input parameters through index-cast before using them in the formulas.
□ remove spurious return value from layout_blas_packed::mapping::stride (the case where the Precondition would have been violated anyway)
□ Move exposition-only helpers transpose-extents and transpose-extents-t to the new section [linalg.transp.helpers]. Redefine transpose-extents-t in terms of transpose-extents, rather than the
other way around.
□ For triangular_*, symmetric_*, and hermitian_* functions that take a Triangle parameter and an mdspan with layout_blas_packed layout, change the requirement that the layout’s Triangle match
the function’s Triangle parameter, from a Constraint to a Mandate. This should not result in ambiguous overloads, since Triangle is already Constrained to be upper_triangle_t or
lower_triangle_t. Add a nonwording section explaining this design choice.
□ Add triangle_type and storage_order_type public type aliases to layout_blas_packed.
□ Fix layout_blas_packed requirements so that wording of operator() and other members doesn’t need to consider overflow.
□ Add possibly-packed-inout-matrix to the list of concepts in [linalg.helpers.concepts] 3 that forbid overlap unless explicitly permitted.
□ Make sure that Complexity clauses use a math multiplication symbol instead of code-font *. (The latter would cause unwarranted overflow issues, especially for BLAS 3 functions.)
□ Many more wording fixes based on LWG review
Purpose of this paper
This paper proposes a C++ Standard Library dense linear algebra interface based on the dense Basic Linear Algebra Subroutines (BLAS). This corresponds to a subset of the BLAS Standard. Our proposal
implements the following classes of algorithms on arrays that represent matrices and vectors:
• elementwise vector sums;
• multiplying all elements of a vector or matrix by a scalar;
• 2-norms and 1-norms of vectors;
• vector-vector, matrix-vector, and matrix-matrix products (contractions);
• low-rank updates of a matrix;
• triangular solves with one or more “right-hand side” vectors; and
• generating and applying plane (Givens) rotations.
Our algorithms work with most of the matrix storage formats that the BLAS Standard supports:
• “general” dense matrices, in column-major or row-major format;
• symmetric or Hermitian (for complex numbers only) dense matrices, stored either as general dense matrices, or in a packed format; and
• dense triangular matrices, stored either as general dense matrices or in a packed format.
Our proposal also has the following distinctive characteristics.
• It uses free functions, not arithmetic operator overloading.
• The interface is designed in the spirit of the C++ Standard Library’s algorithms.
• It uses mdspan (adopted into C++23), a multidimensional array view, to represent matrices and vectors.
• The interface permits optimizations for matrices and vectors with small compile-time dimensions; the standard BLAS interface does not.
• Each of our proposed operations supports all element types for which that operation makes sense, unlike the BLAS, which only supports four element types.
• Our operations permit “mixed-precision” computation with matrices and vectors that have different element types. This subsumes most functionality of the Mixed-Precision BLAS specification,
comprising Chapter 4 of the BLAS Standard.
• Like the C++ Standard Library’s algorithms, our operations take an optional execution policy argument. This is a hook to support parallel execution and hierarchical parallelism.
• Unlike the BLAS, our proposal can be expanded to support “batched” operations (see P2901) with almost no interface differences. This will support machine learning and other applications that need
to do many small matrix or vector operations at once.
Here are some examples of what this proposal offers. In these examples, we ignore std:: namespace qualification for anything in our proposal or for mdspan. We start with a “hello world” that scales
the elements of a 1-D mdspan by a constant factor, first sequentially, then in parallel.
constexpr size_t N = 40;
std::vector<double> x_vec(N);
mdspan x(x_vec.data(), N);
for(size_t i = 0; i < N; ++i) {
x[i] = double(i);
linalg::scale(2.0, x); // x = 2.0 * x
linalg::scale(std::execution::par_unseq, 3.0, x);
for(size_t i = 0; i < N; ++i) {
assert(x[i] == 6.0 * double(i));
Here is a matrix-vector product example. It illustrates the scaled function that makes our interface more concise, while still permitting the BLAS’ performance optimization of fusing computations
with multiplications by a scalar. It also shows the ability to exploit dimensions known at compile time, and to mix compile-time and run-time dimensions arbitrarily.
constexpr size_t N = 40;
constexpr size_t M = 20;
std::vector<double> A_vec(N*M);
std::vector<double> x_vec(M);
std::array<double, N> y_vec(N);
mdspan A(A_vec.data(), N, M);
mdspan x(x_vec.data(), M);
mdspan y(y_vec.data(), N);
for(int i = 0; i < A.extent(0); ++i) {
for(int j = 0; j < A.extent(1); ++j) {
A[i,j] = 100.0 * i + j;
for(int j = 0; j < x.extent(0); ++j) {
x[i] = 1.0 * j;
for(int i = 0; i < y.extent(0); ++i) {
y[i] = -1.0 * i;
linalg::matrix_vector_product(A, x, y); // y = A * x
// y = 0.5 * y + 2 * A * x
linalg::scaled(2.0, A), x,
linalg::scaled(0.5, y), y);
This example illustrates the ability to perform mixed-precision computations, and the ability to compute on subviews of a matrix or vector by using submdspan (P2630, adopted into the C++26 draft).
(submdspan was separated from the rest of P0009 as a way to avoid delaying the adoption of P0009 into C++23. The reference implementation of mdspan includes submdspan.)
constexpr size_t M = 40;
std::vector<float> A_vec(M*8*4);
std::vector<double> x_vec(M*4);
std::vector<double> y_vec(M*8);
mdspan<float, extents<size_t, dynamic_extent, 8, 4>> A(A_vec.data(), M);
mdspan<double, extents<size_t, 4, dynamic_extent>> x(x_vec.data(), M);
mdspan<double, extents<size_t, dynamic_extent, 8>> y(y_vec.data(), M);
for(size_t m = 0; m < A.extent(0); ++m) {
for(size_t i = 0; i < A.extent(1); ++i) {
for(size_t j = 0; j < A.extent(2); ++j) {
A[m,i,j] = 1000.0 * m + 100.0 * i + j;
for(size_t i = 0; i < x.extent(0); ++i) {
for(size_t m = 0; m < x.extent(1); ++m) {
x[i,m] = 33. * i + 0.33 * m;
for(size_t m = 0; m < y.extent(0); ++m) {
for(size_t i = 0; i < y.extent(1); ++i) {
y[m,i] = 33. * m + 0.33 * i;
for(size_t m = 0; m < M; ++m) {
auto A_m = submdspan(A, m, full_extent, full_extent);
auto x_m = submdspan(x, full_extent, m);
auto y_m = submdspan(y, m, full_extent);
// y_m = A * x_m
linalg::matrix_vector_product(A_m, x_m, y_m);
Overview of contents
Section 5 motivates considering any dense linear algebra proposal for the C++ Standard Library.
Section 6 shows why we chose the BLAS as a starting point for our proposed library. The BLAS is an existing standard with decades of use, a rich set of functions, and many optimized implementations.
Section 7 lists what we consider general criteria for including algorithms in the C++ Standard Library. We rely on these criteria to justify the algorithms in this proposal.
Section 8 describes BLAS notation and conventions in C++ terms. Understanding this will give readers context for algorithms, and show how our proposed algorithms expand on BLAS functionality.
Section 9 lists functionality that we intentionally exclude from our proposal. We imitate the BLAS in aiming to be a set of “performance primitives” on which external libraries or applications may
build a more complete linear algebra solution.
Section 10 elaborates on our design justification. This section explains
• why we use mdspan to represent matrix and vector parameters;
• how we translate the BLAS’ Fortran-centric idioms into C++;
• how the BLAS’ different “matrix types” map to different algorithms, rather than different mdspan layouts;
• how we express quality-of-implementation recommendations about avoiding undue overflow and underflow;
• how we impose requirements on algorithms’ behavior and on the various value types that algorithms encounter;
• how we support for user-defined complex number types and address type preservation and domain issues with std::abs, std::conj, std::real, and std::imag;
• how we support division for triangular solves, for value types with noncommutative multiplication; and
• how we address consistency between layout_blas_packed having a Triangle template parameter, and functions also taking a Triangle parameter.
Section 11 lists future work, that is, ways future proposals could build on this one.
Section 12 gives the data structures and utilities from other proposals on which we depend. In particular, we rely heavily on mdspan (adopted into C++23), and add custom layouts and accessors.
Section 13 briefly summarizes the existing implementations of this proposal.
Section 14 explains how this proposal is interoperable with other linear algebra proposals currently under WG21 review. In particular, we believe this proposal is complementary to P1385, and the
authors of P1385 have expressed the same view.
Section 15 credits funding agencies and contributors.
Section 16 is our bibliography.
Section 17 is where readers will find the normative wording we propose.
Finally, Section 18 gives some more elaborate examples of linear algebra algorithms that use our proposal. The examples show how mdspan’s features let users easily describe “submatrices” with
submdspan, proposed in P2630 as a follow-on to mdspan. (The reference implementation of mdspan includes submdspan.) This integrates naturally with “block” factorizations of matrices. The resulting
notation is concise, yet still computes in place, without unnecessary copies of any part of the matrix.
Here is a table that maps between Reference BLAS function name, and algorithm or function name in our proposal. The mapping is not always one to one. “N/A” in the “BLAS name(s)” field means that the
function is not in the BLAS.
BLAS name(s) Our name(s)
xLARTG setup_givens_rotation
xROT apply_givens_rotation
xSWAP swap_elements
xSCAL scale, scaled
xCOPY copy
xAXPY add, scaled
xDOT, xDOTU dot
xDOTC dotc
N/A vector_sum_of_squares
xNRM2 vector_two_norm
xASUM vector_abs_sum
xIAMAX vector_idx_abs_max
N/A matrix_frob_norm
N/A matrix_one_norm
N/A matrix_inf_norm
xGEMV matrix_vector_product
xSYMV symmetric_matrix_vector_product
xHEMV hermitian_matrix_vector_product
xTRMV triangular_matrix_vector_product
xTRSV triangular_matrix_vector_solve
xGER, xGERU matrix_rank_1_update
xGERC matrix_rank_1_update_c
xSYR symmetric_matrix_rank_1_update
xHER hermitian_matrix_rank_1_update
xSYR2 symmetric_matrix_rank_2_update
xHER2 hermitian_matrix_rank_2_update
xGEMM matrix_product
xSYMM symmetric_matrix_product
xHEMM hermitian_matrix_product
xTRMM triangular_matrix_product
xSYRK symmetric_matrix_rank_k_update
xHERK hermitian_matrix_rank_k_update
xSYR2K symmetric_matrix_rank_2k_update
xHER2K hermitian_matrix_rank_2k_update
xTRSM triangular_matrix_matrix_{left,right}_solve
Why include dense linear algebra in the C++ Standard Library?
1. “Direction for ISO C++” (P0939R4) explicitly calls out “Linear Algebra” as a potential priority for C++23.
2. C++ applications in “important application areas” (see P0939R4) have depended on linear algebra for a long time.
3. Linear algebra is like sort: obvious algorithms are slow, and the fastest implementations call for hardware-specific tuning.
4. Dense linear algebra is core functionality for most of linear algebra, and can also serve as a building block for tensor operations.
5. The C++ Standard Library includes plenty of “mathematical functions.” Linear algebra operations like matrix-matrix multiply are at least as broadly useful.
6. The set of linear algebra operations in this proposal are derived from a well-established, standard set of algorithms that has changed very little in decades. It is one of the strongest possible
examples of standardizing existing practice that anyone could bring to C++.
7. This proposal follows in the footsteps of many recent successful incorporations of existing standards into C++, including the UTC and TAI standard definitions from the International
Telecommunications Union, the time zone database standard from the International Assigned Numbers Authority, and the ongoing effort to integrate the ISO unicode standard.
Linear algebra has had wide use in C++ applications for nearly three decades (see P1417R0 for a historical survey). For much of that time, many third-party C++ libraries for linear algebra have been
available. Many different subject areas depend on linear algebra, including machine learning, data mining, web search, statistics, computer graphics, medical imaging, geolocation and mapping,
engineering, and physics-based simulations.
“Directions for ISO C++” (P0939R4) not only lists “Linear Algebra” explicitly as a potential C++23 priority, it also offers the following in support of adding linear algebra to the C++ Standard
• P0939R4 calls out “Support for demanding applications” in “important application areas, such as medical, finance, automotive, and games (e.g., key libraries…)” as an “area of general concern”
that “we should not ignore.” All of these areas depend on linear algebra.
• “Is my proposal essential for some important application domain?” Many large and small private companies, science and engineering laboratories, and academics in many different fields all depend
on linear algebra.
• “We need better support for modern hardware”: Modern hardware spends many of its cycles in linear algebra. For decades, hardware vendors, some represented at WG21 meetings, have provided and
continue to provide features specifically to accelerate linear algebra operations. Some of them even implement specific linear algebra operations directly in hardware. Examples include NVIDIA’s
Tensor Cores and Cerebras’ Wafer Scale Engine. Several large computer system vendors offer optimized linear algebra libraries based on or closely resembling the BLAS. These include AMD’s BLIS,
ARM’s Performance Libraries, Cray’s LibSci, Intel’s Math Kernel Library (MKL), IBM’s Engineering and Scientific Subroutine Library (ESSL), and NVIDIA’s cuBLAS.
Obvious algorithms for some linear algebra operations like dense matrix-matrix multiply are asymptotically slower than less-obvious algorithms. (For details, please refer to a survey one of us
coauthored, “Communication lower bounds and optimal algorithms for numerical linear algebra.”) Furthermore, writing the fastest dense matrix-matrix multiply depends on details of a specific computer
architecture. This makes such operations comparable to sort in the C++ Standard Library: worth standardizing, so that Standard Library implementers can get them right and hardware vendors can
optimize them. In fact, almost all C++ linear algebra libraries end up calling non-C++ implementations of these algorithms, especially the implementations in optimized BLAS libraries (see below). In
this respect, linear algebra is also analogous to standard library features like random_device: often implemented directly in assembly or even with special hardware, and thus an essential component
of allowing no room for another language “below” C++ (see P0939R4) and Stroustrup’s “The Design and Evolution of C++”).
Dense linear algebra is the core component of most algorithms and applications that use linear algebra, and the component that is most widely shared over different application areas. For example,
tensor computations end up spending most of their time in optimized dense linear algebra functions. Sparse matrix computations get best performance when they spend as much time as possible in dense
linear algebra.
The C++ Standard Library includes many “mathematical special functions” ([sf.cmath]), like incomplete elliptic integrals, Bessel functions, and other polynomials and functions named after various
mathematicians. Any of them comes with its own theory and set of applications for which robust and accurate implementations are indispensible. We think that linear algebra operations are at least as
broadly useful, and in many cases significantly more so.
Why base a C++ linear algebra library on the BLAS?
1. The BLAS is a standard that codifies decades of existing practice.
2. The BLAS separates “performance primitives” for hardware experts to tune, from mathematical operations that rely on those primitives for good performance.
3. Benchmarks reward hardware and system vendors for providing optimized BLAS implementations.
4. Writing a fast BLAS implementation for common element types is nontrivial, but well understood.
5. Optimized third-party BLAS implementations with liberal software licenses exist.
6. Building a C++ interface on top of the BLAS is a straightforward exercise, but has pitfalls for unaware developers.
Linear algebra has had a cross-language standard, the Basic Linear Algebra Subroutines (BLAS), since 2002. The Standard came out of a standardization process that started in 1995 and held meetings
three times a year until 1999. Participants in the process came from industry, academia, and government research laboratories. The dense linear algebra subset of the BLAS codifies forty years of
evolving practice, and has existed in recognizable form since 1990 (see P1417R0).
The BLAS interface was specifically designed as the distillation of the “computer science” or performance-oriented parts of linear algebra algorithms. It cleanly separates operations most critical
for performance, from operations whose implementation takes expertise in mathematics and rounding-error analysis. This gives vendors opportunities to add value, without asking for expertise outside
the typical required skill set of a Standard Library implementer.
Well-established benchmarks such as the LINPACK benchmark, reward computer hardware vendors for optimizing their BLAS implementations. Thus, many vendors provide an optimized BLAS library for their
computer architectures. Writing fast BLAS-like operations is not trivial, and depends on computer architecture. However, it is a well-understood problem whose solutions could be parameterized for a
variety of computer architectures. See, for example, Goto and van de Geijn 2008. There are optimized third-party BLAS implementations for common architectures, like ATLAS and GotoBLAS. A (slow but
correct) reference implementation of the BLAS exists and it has a liberal software license for easy reuse.
We have experience in the exercise of wrapping a C or Fortran BLAS implementation for use in portable C++ libraries. We describe this exercise in detail in our paper “Evolving a Standard C++ Linear
Algebra Library from the BLAS” (P1674). It is straightforward for vendors, but has pitfalls for developers. For example, Fortran’s application binary interface (ABI) differs across platforms in ways
that can cause run-time errors (even incorrect results, not just crashing). Historical examples of vendors’ C BLAS implementations have also had ABI issues that required work-arounds. This dependence
on ABI details makes availability in a standard C++ library valuable.
Criteria for including algorithms
Criteria for all the algorithms
We include algorithms in our proposal based on the following criteria, ordered by decreasing importance. Many of our algorithms satisfy multiple criteria.
1. Getting the desired asymptotic run time is nontrivial
2. Opportunity for vendors to provide hardware-specific optimizations
3. Opportunity for vendors to provide quality-of-implementation improvements, especially relating to accuracy or reproducibility with respect to floating-point rounding error
4. User convenience (familiar name, or tedious to implement)
Regarding (1), “nontrivial” means “at least for novices to the field.” Dense matrix-matrix multiply is a good example. Getting close to the asymptotic lower bound on the number of memory reads and
writes matters a lot for performance, and calls for a nonintuitive loop reordering. An analogy to the current C++ Standard Library is sort, where intuitive algorithms that many humans use are not
asymptotically optimal.
Regarding (2), a good example is copying multidimensional arrays. The Kokkos library spends about 2500 lines of code on multidimensional array copy, yet still relies on system libraries for low-level
optimizations. An analogy to the current C++ Standard Library is copy or even memcpy.
Regarding (3), accurate floating-point summation is nontrivial. Well-meaning compiler optimizations might defeat even simple technqiues, like compensated summation. The most obvious way to compute a
vector’s Euclidean norm (square root of sum of squares) can cause overflow or underflow, even when the exact answer is much smaller than the overflow threshold, or larger than the underflow
threshold. Some users care deeply about sums, even parallel sums, that always get the same answer, despite rounding error. This can help debugging, for example. It is possible to make floating-point
sums completely independent of parallel evaluation order. See e.g., the ReproBLAS effort. Naming these algorithms and providing ExecutionPolicy customization hooks gives vendors a chance to provide
these improvements. An analogy to the current C++ Standard Library is hypot, whose language in the C++ Standard alludes to the tighter POSIX requirements.
Regarding (4), the C++ Standard Library is not entirely minimalist. One example is std::string::contains. Existing Standard Library algorithms already offered this functionality, but a member
contains function is easy for novices to find and use, and avoids the tedium of comparing the result of find to npos.
The BLAS exists mainly for the first two reasons. It includes functions that were nontrivial for compilers to optimize in its time, like scaled elementwise vector sums, as well as functions that
generally require human effort to optimize, like matrix-matrix multiply.
Criteria for including BLAS 1 algorithms; coexistence with ranges
The BLAS developed in three “levels”: 1, 2, and 3. BLAS 1 includes “vector-vector” operations like dot products, norms, and vector addition. BLAS 2 includes “matrix-vector” operations like
matrix-vector products and outer products. BLAS 3 includes “matrix-matrix” operations like matrix-matrix products and triangular solve with multiple “right-hand side” vectors. The BLAS level
coincides with the number of nested loops in a naïve sequential implementation of the operation. Increasing level also comes with increasing potential for data reuse. For history of the BLAS “levels”
and a bibliography, see P1417.
We mention this here because some reviewers have asked how the algorithms in our proposal would coexist with the existing ranges algorithms in the C++ Standard Library. (Ranges was a feature added to
the C++ Standard Library in C++20.) This question actually encloses two questions.
1. Will our proposed algorithms syntactically collide with existing ranges algorithms?
2. How much overlap do our proposed algorithms have with the existing ranges algorithms? (That is, do we really need these new algorithms?)
Low risk of syntactic collision with ranges
We think there is low risk of our proposal colliding syntactically with existing ranges algorithms, for the following reasons.
• We propose our algorithms in a new namespace std::linalg.
• None of the algorithms we propose share names with any existing ranges algorithms.
• We take care not to use _view as a suffix, in order to avoid confusion or name collisions with “views” in the sense of ranges.
• We specifically do not use the names transpose or transpose_view, since LEWG has advised us that ranges algorithms may want to claim these names. (One could imagine “transposing” a range of
• We constrain our algorithms only to take vector and matrix parameters as mdspan. mdspan is not currently a range, and there are currently no proposals in flight that would make it a range.
Changing mdspan of arbitrary rank to be a range would require a design for multidimensional iterators. P0009’s coauthors have not proposed a design, and it has proven challenging to get compilers
to optimize any existing design for multidimensional iterators.
Minimal overlap with ranges is justified by user convenience
The rest of this section answers the second question. The BLAS 2 and 3 algorithms require multiple nested loops, and high-performing implementations generally need intermediate storage. This would
make it unnatural and difficult to express them in terms of ranges. Only the BLAS 1 algorithms in our proposal might have a reasonable translation to ranges algorithms. There, we limit ourselves to
the BLAS 1 algorithms in what follows.
Any rank-1 mdspan x can be translated into the following range:
auto x_range = views::iota(size_t(0), x.extent(0)) |
views::transform([x] (auto k) { return x[k]; });
with specializations possible for mdspan whose layout mapping’s range is a contiguous or fixed-stride set of offsets. However, just because code could be written in a certain way doesn’t mean that it
should be. We have ranges even though the language has for loops; we don’t need to step in the Turing tar-pit on purpose (see Perlis 1982). Thus, we will analyze the BLAS 1 algorithms in this
proposal in the context of the previous section’s four general criteria.
Our proposal would add 61 new unique names to the C++ Standard Library. Of those, 16 are BLAS 1 algorithms, while 24 are BLAS 2 and 3 algorithms. The 16 BLAS 1 algorithms fall into three categories.
1. Optimization hooks, like copy. As with memcpy, the fastest implementation may depend closely on the computer architecture, and may differ significantly from a straightforward implementation. Some
of these algorithms, like copy, can operate on multidimensional arrays as well, though it is traditional to list them as BLAS 1 algorithms.
2. Floating-point quality-of-implementation hooks, like vector_sum_of_squares. These give vendors opportunities to avoid preventable floating-point underflow and overflow (as with hypot), improve
accuracy, and reduce or even avoid parallel nondeterminism and order dependence of floating-point sums.
3. Uncomplicated elementwise algorithms like add, idx_abs_max, and scale.
We included the first category mainly because of Criterion (2) “Opportunity for vendors to provide hardware-specific optimizations,” and the second mainly because of Criterion (3) (“Opportunity for
vendors to provide quality-of-implementation improvements”). Fast implementations of algorithms in the first category are not likely to be simple uses of ranges algorithms.
Algorithms in the second category could be presented as ranges algorithms, as mdspan algorithms, or as both. The “iterating over elements” part of those algorithms is not the most challenging part of
their implementation, nor is it what makes an implementation “high quality.”
Algorithms in the third category could be replaced with a few lines of C++ that use ranges algorithms. For example, here is a parallel implementation of idx_abs_max, with simplifications for
exposition. (It omits template parameters’ constraints, uses std::abs instead of abs-if-needed, and does not address the complex number case. Here is a Compiler Explorer link to a working example.)
template<class Element, class Extents,
class Layout, class Accessor>
typename Extents::size_type idx_abs_max(
std::mdspan<Element, Extents, Layout, Accessor> x)
auto theRange = std::views::iota(size_t(0), x.extent(0)) |
std::views::transform([=] (auto k) { return std::abs(x[k]); });
auto iterOfMax =
std::max_element(std::execution::par_unseq, theRange.begin(), theRange.end());
auto indexOfMax = std::ranges::distance(theRange.begin(), iterOfMax);
// In GCC 12.1, the return type is __int128.
return static_cast<typename Extents::size_type>(indexOfMax);
Even though the algorithms in the third category could be implemented straightforwardly with ranges, we provide them because of Criterion 4 (“User convenience”). Criterion (4) applies to all the
algorithms in this proposal, and particularly to the BLAS 1 algorithms. Matrix algorithm developers need BLAS 1 and 2 as well as BLAS 3, because matrix algorithms tend to decompose into vector
algorithms. This is true even of so-called “block” matrix algorithms that have been optimized to use matrix-matrix operations wherever possible, in order to improve memory locality. Demmel et al.
1987 (p. 4) explains.
Block algorithms generally require an unblocked version of the same algorithm to be available to operate on a single block. Therefore, the development of the software will fall naturally into two
phases: first, develop unblocked versions of the routines, calling the Level 2 BLAS wherever possible; then develop blocked versions where possible, calling the Level 3 BLAS.
Dongarra et al. 1990 (pp. 12-15) outlines this development process for the specific example of Cholesky factorization. The Cholesky factorization algorithm (on p. 14) spends most of its time (for a
sufficiently large input matrix) in matrix-matrix multiplies (DGEMM), rank-k symmetric matrices updates (DSYRK, a special case of matrix-matrix multiply), and triangular solves with multiple
“right-hand side” vectors (DTRSM). However, it still needs an “unblocked” Cholesky factorization as the blocked factorization’s “base case.” This is called DLLT in Dongarra et al. 1990 (p. 15), and
it uses DDOT, DSCAL (both BLAS2), and DGEMV (BLAS 2). In the case of Cholesky factorization, it’s possible to express the “unblocked” case without using BLAS 1 or 2 operations, by using recursion.
This is the approach that LAPACK takes with the blocked Cholesky factorization DPOTRF and its unblocked base case DPOTRF2. However, even a recursive formulation of most matrix factorizations needs to
use BLAS 1 or 2 operations. For example, the unblocked base case DGETRF2 of LAPACK’s blocked LU factorization DGETRF needs to invoke vector-vector operations like DSCAL.
In summary, matrix algorithm developers need vector algorithms, because matrix algorithms decompose into vector algorithms. If our proposal lacked BLAS 1 algorithms, even simple ones like add and
scale, matrix algorithm developers would end up writing them anyway.
Notation and conventions
The BLAS uses Fortran terms
The BLAS’ “native” language is Fortran. It has a C binding as well, but the BLAS Standard and documentation use Fortran terms. Where applicable, we will call out relevant Fortran terms and highlight
possibly confusing differences with corresponding C++ ideas. Our paper P1674 (“Evolving a Standard C++ Linear Algebra Library from the BLAS”) goes into more detail on these issues.
We call “subroutines” functions
Like Fortran, the BLAS distinguishes between functions that return a value, and subroutines that do not return a value. In what follows, we will refer to both as “BLAS functions” or “functions.”
Element types and BLAS function name prefix
The BLAS implements functionality for four different matrix, vector, or scalar element types:
• REAL (float in C++ terms)
• DOUBLE PRECISION (double in C++ terms)
• COMPLEX (complex<float> in C++ terms)
• DOUBLE COMPLEX (complex<double> in C++ terms)
The BLAS’ Fortran 77 binding uses a function name prefix to distinguish functions based on element type:
• S for REAL (“single”)
• D for DOUBLE PRECISION
• C for COMPLEX
• Z for DOUBLE COMPLEX
For example, the four BLAS functions SAXPY, DAXPY, CAXPY, and ZAXPY all perform the vector update Y = Y + ALPHA*X for vectors X and Y and scalar ALPHA, but for different vector and scalar element
The convention is to refer to all of these functions together as xAXPY. In general, a lower-case x is a placeholder for all data type prefixes that the BLAS provides. For most functions, the x is a
prefix, but for a few functions like IxAMAX, the data type “prefix” is not the first letter of the function name. (IxAMAX is a Fortran function that returns INTEGER, and therefore follows the old
Fortran implicit naming rule that integers start with I, J, etc.) Other examples include the vector 2-norm functions SCNRM2 and DZNRM2, where the first letter indicates the return type and the second
letter indicates the vector element type.
Not all BLAS functions exist for all four data types. These come in three categories:
1. The BLAS provides only real-arithmetic (S and D) versions of the function, since the function only makes mathematical sense in real arithmetic.
2. The complex-arithmetic versions perform a different mathematical operation than the real-arithmetic versions, so they have a different base name.
3. The complex-arithmetic versions offer a choice between nonconjugated or conjugated operations.
As an example of the second category, the BLAS functions SASUM and DASUM compute the sums of absolute values of a vector’s elements. Their complex counterparts CSASUM and DZASUM compute the sums of
absolute values of real and imaginary components of a vector v, that is, the sum of |ℜ(v[i])|+|ℑ(v[i])| for all i in the domain of v. This operation is still useful as a vector norm, but it
requires fewer arithmetic operations.
Examples of the third category include the following:
• nonconjugated dot product xDOTU and conjugated dot product xDOTC; and
• rank-1 symmetric (xGERU) vs. Hermitian (xGERC) matrix update.
The conjugate transpose and the (nonconjugated) transpose are the same operation in real arithmetic (if one considers real arithmetic embedded in complex arithmetic), but differ in complex
arithmetic. Different applications have different reasons to want either. The C++ Standard includes complex numbers, so a Standard linear algebra library needs to respect the mathematical structures
that go along with complex numbers.
What we exclude from the design
Most functions not in the Reference BLAS
The BLAS Standard includes functionality that appears neither in the Reference BLAS library, nor in the classic BLAS “level” 1, 2, and 3 papers. (For history of the BLAS “levels” and a bibliography,
see P1417R0. For a paper describing functions not in the Reference BLAS, see “An updated set of basic linear algebra subprograms (BLAS),” listed in “Other references” below.) For example, the BLAS
Standard has
• several new dense functions, like a fused vector update and dot product;
• sparse linear algebra functions, like sparse matrix-vector multiply and an interface for constructing sparse matrices; and
• extended- and mixed-precision dense functions (though we subsume some of their functionality; see below).
Our proposal only includes core Reference BLAS functionality, for the following reasons:
1. Vendors who implement a new component of the C++ Standard Library will want to see and test against an existing reference implementation.
2. Many applications that use sparse linear algebra also use dense, but not vice versa.
3. The Sparse BLAS interface is a stateful interface that is not consistent with the dense BLAS, and would need more extensive redesign to translate into a modern C++ idiom. See discussion in
4. Our proposal subsumes some dense mixed-precision functionality (see below).
We have included vector sum-of-squares and matrix norms as exceptions, for the same reason that we include vector 2-norm: to expose hooks for quality-of-implementation improvements that avoid
underflow or overflow when computing with floating-point values.
The LAPACK Fortran library implements solvers for the following classes of mathematical problems:
• linear systems,
• linear least-squares problems, and
• eigenvalue and singular value problems.
It also provides matrix factorizations and related linear algebra operations. LAPACK deliberately relies on the BLAS for good performance; in fact, LAPACK and the BLAS were designed together. See
history presented in P1417R0.
Several C++ libraries provide slices of LAPACK functionality. Here is a brief, noninclusive list, in alphabetical order, of some libraries actively being maintained:
P1417R0 gives some history of C++ linear algebra libraries. The authors of this proposal have designed, written, and maintained LAPACK wrappers in C++. Some authors have LAPACK founders as PhD
advisors. Nevertheless, we have excluded LAPACK-like functionality from this proposal, for the following reasons:
1. LAPACK is a Fortran library, unlike the BLAS, which is a multilanguage standard.
2. We intend to support more general element types, beyond the four that LAPACK supports. It’s much more straightforward to make a C++ BLAS work for general element types, than to make LAPACK
algorithms work generically.
First, unlike the BLAS, LAPACK is a Fortran library, not a standard. LAPACK was developed concurrently with the “level 3” BLAS functions, and the two projects share contributors. Nevertheless, only
the BLAS and not LAPACK got standardized. Some vendors supply LAPACK implementations with some optimized functions, but most implementations likely depend heavily on “reference” LAPACK. There have
been a few efforts by LAPACK contributors to develop C++ LAPACK bindings, from Lapack++ in pre-templates C++ circa 1993, to the recent “C++ API for BLAS and LAPACK”. (The latter shares coauthors with
this proposal.) However, these are still just C++ bindings to a Fortran library. This means that if vendors had to supply C++ functionality equivalent to LAPACK, they would either need to start with
a Fortran compiler, or would need to invest a lot of effort in a C++ reimplementation. Mechanical translation from Fortran to C++ introduces risk, because many LAPACK functions depend critically on
details of floating-point arithmetic behavior.
Second, we intend to permit use of matrix or vector element types other than just the four types that the BLAS and LAPACK support. This includes “short” floating-point types, fixed-point types,
integers, and user-defined arithmetic types. Doing this is easier for BLAS-like operations than for the much more complicated numerical algorithms in LAPACK. LAPACK strives for a “generic” design
(see Jack Dongarra interview summary in P1417R0), but only supports two real floating-point types and two complex floating-point types. Directly translating LAPACK source code into a “generic”
version could lead to pitfalls. Many LAPACK algorithms only make sense for number systems that aim to approximate real numbers (or their complex extentions). Some LAPACK functions output error bounds
that rely on properties of floating-point arithmetic.
For these reasons, we have left LAPACK-like functionality for future work. It would be natural for a future LAPACK-like C++ library to build on our proposal.
Extended-precision BLAS
Our interface subsumes some functionality of the Mixed-Precision BLAS specification (Chapter 4 of the BLAS Standard). For example, users may multiply two 16-bit floating-point matrices (assuming that
a 16-bit floating-point type exists) and accumulate into a 32-bit floating-point matrix, just by providing a 32-bit floating-point matrix as output. Users may specify the precision of a dot product
result. If it is greater than the input vectors’ element type precisions (e.g., double vs. float), then this effectively performs accumulation in higher precision. Our proposal imposes semantic
requirements on some functions, like vector_two_norm, to behave in this way.
However, we do not include the “Extended-Precision BLAS” in this proposal. The BLAS Standard lets callers decide at run time whether to use extended precision floating-point arithmetic for internal
evaluations. We could support this feature at a later time. Implementations of our interface also have the freedom to use more accurate evaluation methods than typical BLAS implementations. For
example, it is possible to make floating-point sums completely independent of parallel evaluation order.
Arithmetic operators and associated expression templates
Our proposal omits arithmetic operators on matrices and vectors. We do so for the following reasons:
1. We propose a low-level, minimal interface.
2. operator* could have multiple meanings for matrices and vectors. Should it mean elementwise product (like valarray) or matrix product? Should libraries reinterpret “vector times vector” as a dot
product (row vector times column vector)? We prefer to let a higher-level library decide this, and make everything explicit at our lower level.
3. Arithmetic operators require defining the element type of the vector or matrix returned by an expression. Functions let users specify this explicitly, and even let users use different output
types for the same input types in different expressions.
4. Arithmetic operators may require allocation of temporary matrix or vector storage. This prevents use of nonowning data structures.
5. Arithmetic operators strongly suggest expression templates. These introduce problems such as dangling references and aliasing.
Our goal is to propose a low-level interface. Other libraries, such as that proposed by P1385, could use our interface to implement overloaded arithmetic for matrices and vectors. A constrained,
function-based, BLAS-like interface builds incrementally on the many years of BLAS experience.
Arithmetic operators on matrices and vectors would require the library, not necessarily the user, to specify the element type of an expression’s result. This gets tricky if the terms have mixed
element types. For example, what should the element type of the result of the vector sum x + y be, if x has element type complex<float> and y has element type double? It’s tempting to use
common_type_t, but common_type_t<complex<float>, double> is complex<float>. This loses precision. Some users may want complex<double>; others may want complex<long double> or something else, and
others may want to choose different types in the same program.
P1385 lets users customize the return type of such arithmetic expressions. However, different algorithms may call for the same expression with the same inputs to have different output types. For
example, iterative refinement of linear systems Ax=b can work either with an extended-precision intermediate residual vector r = b - A*x, or with a residual vector that has the same precision as the
input linear system. Each choice produces a different algorithm with different convergence characteristics, per-iteration run time, and memory requirements. Thus, our library lets users specify the
result element type of linear algebra operations explicitly, by calling a named function that takes an output argument explicitly, rather than an arithmetic operator.
Arithmetic operators on matrices or vectors may also need to allocate temporary storage. Users may not want that. When LAPACK’s developers switched from Fortran 77 to a subset of Fortran 90, their
users rejected the option of letting LAPACK functions allocate temporary storage on their own. Users wanted to control memory allocation. Also, allocating storage precludes use of nonowning input
data structures like mdspan, that do not know how to allocate.
Arithmetic expressions on matrices or vectors strongly suggest expression templates, as a way to avoid allocation of temporaries and to fuse computational kernels. They do not require expression
templates. For example, valarray offers overloaded operators for vector arithmetic, but the Standard lets implementers decide whether to use expression templates. However, all of the current C++
linear algebra libraries that we mentioned above have some form of expression templates for overloaded arithmetic operators, so users will expect this and rely on it for good performance. This was,
indeed, one of the major complaints about initial implementations of valarray: its lack of mandate for expression templates meant that initial implementations were slow, and thus users did not want
to rely on it. (See Josuttis 1999, p. 547, and Vandevoorde and Josuttis 2003, p. 342, for a summary of the history. Fortran has an analogous issue, in which (under certain conditions) it is
implementation defined whether the run-time environment needs to copy noncontiguous slices of an array into contiguous temporary storage.)
Expression templates work well, but have issues. Our papers P1417R0 and “Evolving a Standard C++ Linear Algebra Library from the BLAS” (P1674) give more detail on these concerns. A particularly
troublesome one is that C++ auto type deduction makes it easy for users to capture expressions before the expression templates system has the chance to evaluate them and write the result into the
output. For matrices and vectors with container semantics, this makes it easy to create dangling references. Users might not realize that they need to assign expressions to named types before actual
work and storage happen. Eigen’s documentation describes this common problem.
Our scaled, conjugated, transposed, and conjugate_transposed functions make use of one aspect of expression templates, namely modifying the mdspan array access operator. However, we intend these
functions for use only as in-place modifications of arguments of a function call. Also, when modifying mdspan, these functions merely view the same data that their input mdspan views. They introduce
no more potential for dangling references than mdspan itself. The use of views like mdspan is self-documenting; it tells users that they need to take responsibility for scope of the viewed data.
Banded matrix layouts
This proposal omits banded matrix types. It would be easy to add the required layouts and specializations of algorithms later. The packed and unpacked symmetric and triangular layouts in this
proposal cover the major concerns that would arise in the banded case, like nonstrided and nonunique layouts, and matrix types that forbid access to some multi-indices in the Cartesian product of
We exclude tensors from this proposal, for the following reasons. First, tensor libraries naturally build on optimized dense linear algebra libraries like the BLAS, so a linear algebra library is a
good first step. Second, mdspan has natural use as a low-level representation of dense tensors, so we are already partway there. Third, even simple tensor operations that naturally generalize the
BLAS have infintely many more cases than linear algebra. It’s not clear to us which to optimize. Fourth, even though linear algebra is a special case of tensor algebra, users of linear algebra have
different interface expectations than users of tensor algebra. Thus, it makes sense to have two separate interfaces.
Explicit support for asynchronous return of scalar values
After we presented revision 2 of this paper, LEWG asked us to consider support for discrete graphics processing units (GPUs). GPUs have two features of interest here. First, they might have memory
that is not accessible from ordinary C++ code, but could be accessed in a standard algorithm (or one of our proposed algorithms) with the right implementation-specific ExecutionPolicy. (For instance,
a policy could say “run this algorithm on the GPU.”) Second, they might execute those algorithms asynchronously. That is, they might write to output arguments at some later time after the algorithm
invocation returns. This would imply different interfaces in some cases. For instance, a hypothetical asynchronous vector 2-norm might write its scalar result via a pointer to GPU memory, instead of
returning the result “on the CPU.”
Nothing in principle prevents mdspan from viewing memory that is inaccessible from ordinary C++ code. This is a major feature of the Kokkos::View class from the Kokkos library, and Kokkos::View
directly inspired mdspan. The C++ Standard does not currently define how such memory behaves, but implementations could define its behavior and make it work with mdspan. This would, in turn, let
implementations define our algorithms to operate on such memory efficiently, if given the right implementation-specific ExecutionPolicy.
Our proposal excludes algorithms that might write to their output arguments at some time after the algorithm returns. First, LEWG insisted that our proposed algorithms that compute a scalar result,
like vector_two_norm, return that result in the manner of reduce, rather than writing the result to an output reference or pointer. (Previous revisions of our proposal used the latter interface
pattern.) Second, it’s not clear whether writing a scalar result to a pointer is the right interface for asynchronous algorithms. Follow-on proposals to Executors (P0443R14) include asynchronous
algorithms, but none of these suggest returning results asynchronously by pointer. Our proposal deliberately imitates the existing standard algorithms. Right now, we have no standard asynchronous
algorithms to imitate.
Design justification
We take a step-wise approach. We begin with core BLAS dense linear algebra functionality. We then deviate from that only as much as necessary to get algorithms that behave as much as reasonable like
the existing C++ Standard Library algorithms. Future work or collaboration with other proposals could implement a higher-level interface.
Please refer to our papers “Evolving a Standard C++ Linear Algebra Library from the BLAS” (P1674) and “Historical lessons for C++ linear algebra library standardization” (P1417) They will give
details and references for many of the points that we summarize here.
We do not require using the BLAS library or any particular “back-end”
Our proposal is inspired by and extends the dense BLAS interface. A natural implementation might look like this:
1. wrap an existing C or Fortran BLAS library,
2. hope that the BLAS library is optimized, and then
3. extend the wrapper to include straightforward Standard C++ implementations of P1673’s algorithms for matrix and vector value types and data layouts that the BLAS does not support.
P1674 describes the process of writing such an implementation. However, P1673 does not require implementations to wrap the BLAS. In particular, P1673 does not specify a “back-end” C-style interface
that would let users or implementers “swap out” different BLAS libraries. Here are some reasons why we made this choice.
First, it’s possible to write an optimized implementation entirely in Standard C++, without calling external C or Fortran functions. For example, one can write a cache-blocked matrix-matrix multiply
implementation entirely in Standard C++.
Second, different vendors may have their own libraries that support matrix and vector value types and/or layouts beyond what the standard dense BLAS supports. For example, they may have C functions
for mixed-precision matrix-matrix multiply, like BLIS’ bli_gemm (example here), or NVIDIA’s cublasGemmEx (example here).
Third, just because a C or Fortran BLAS library can be found, doesn’t mean that it’s optimized at all or optimized well. For example, many Linux distributions have a BLAS software package that is
built by compiling the Reference BLAS. This will give poor performance for BLAS 3 operations. Even “optimized” vendor BLAS libraries may not optimize all cases. Release notes even for recent versions
show performance improvements.
In summary: While a natural way to implement this proposal would be to wrap an existing C or Fortran BLAS library, we do not want to require this. Thus, we do not specify a “back-end” C-style
Why use mdspan?
View of a multidimensional array
The BLAS operates on what C++ programmers might call views of multidimensional arrays. Users of the BLAS can store their data in whatever data structures they like, and handle their allocation and
lifetime as they see fit, as long as the data have a BLAS-compatible memory layout.
The corresponding C++ data structure is mdspan. This class encapsulates the large number of pointer and integer arguments that BLAS functions take, that represent views of matrices and vectors. Using
mdspan in the C++ interface reduce the number of arguments and avoids common errors, like mixing up the order of arguments. It supports all the array memory layouts that the BLAS supports, including
row major and column major. It also expresses the same data ownership model that the BLAS expresses. Users may manage allocation and deallocation however they wish. In addition, mdspan lets our
algorithms exploit any dimensions known at compile time.
Ease of use
The mdspan class’ layout and accessor policies let us simplify our interfaces, by encapsulating transpose, conjugate, and scalar arguments. Features of mdspan make implementing BLAS-like algorithms
much less error prone and easier to read. These include its encapsulation of matrix indexing and its built-in “slicing” capabilities via submdspan.
BLAS and mdspan are low level
The BLAS is low level; it imposes no mathematical meaning on multidimensional arrays. This gives users the freedom to develop mathematical libraries with the semantics they want. Similarly, mdspan is
just a view of a multidimensional array; it has no mathematical meaning on its own.
We mention this because “matrix,” “vector,” and “tensor” are mathematical ideas that mean more than just arrays of numbers. This is more than just a theoretical concern. Some BLAS functions operate
on “triangular,” “symmetric,” or “Hermitian” matrices, but they do not assert that a matrix has any of these mathematical properties. Rather, they only read only one side of the matrix (the lower or
upper triangle), and compute as if the other side of the matrix satisfies the mathematical property. A key feature of the BLAS and libraries that build on it, like LAPACK, is that they can operate on
the matrix’s data in place. These operations change both the matrix’s mathematical properties and its representation in memory. For example, one might have an N x N array representing a matrix that
is symmetric in theory, but computed and stored in a way that might not result in exactly symmetric data. In order to solve linear systems with this matrix, one might give the array to LAPACK’s
xSYTRF to compute an LDL^T factorization, asking xSYTRF only to access the array’s lower triangle. If xSYTRF finishes successfully, it has overwritten the lower triangle of its input with a
representation of both the lower triangular factor L and the block diagonal matrix D, as computed assuming that the matrix is the sum of the lower triangle and the transpose of the lower triangle.
The resulting N x N array no longer represents a symmetric matrix. Rather, it contains part of the representation of a LDL^T factorization of the matrix. The upper triangle still contains the
original input matrix’s data. One may then solve linear systems by giving xSYTRS the lower triangle, along with other output of xSYTRF.
The point of this example is that a “symmetric matrix class” is the wrong way to model this situation. There’s an N x N array, whose mathematical interpretation changes with each in-place operation
performed on it. The low-level mdspan data structure carries no mathematical properties in itself, so it models this situation better.
Hook for future expansion
The mdspan class treats its layout as an extension point. This lets our interface support layouts beyond what the BLAS Standard permits. The accessor extension point offers us a hook for future
expansion to support heterogeneous memory spaces. (This is a key feature of Kokkos::View, the data structure that inspired mdspan.) In addition, using mdspan has made it easier for us to propose an
efficient “batched” interface in our separate proposal P2901, with almost no interface differences.
Generic enough to replace a “multidimensional array concept”
Our functions differ from the C++ Standard algorithms, in that they take a concrete type mdspan with template parameters, rather than any of an open set of types that satisfy some concept. LEWGI
requested in the 2019 Cologne meeting that we explore using a concept instead of mdspan to define the arguments for the linear algebra functions. This would mean that instead of having our functions
take mdspan parameters, the functions would be generic on one or more suitably constrained multidimensional array types. The constraints would form a “multidimensional array concept.”
We investigated this option, and rejected it, for the following reasons. First, our proposal uses enough features of mdspan that any concept generally applicable to all functions we propose would
replicate almost the entire definition of mdspan. This proposal refers to almost all of mdspan’s features, including extents, layouts, and accessors. The conjugated, scaled, and transposed functions
in this proposal depend specifically on custom layouts and accessors. These features make the algorithms have more functionality than their C or Fortran BLAS equivalents, while reducing the number of
parameters that the algorithms take. They also make the interface more consistent, in that each mdspan parameter of a function behaves as itself and is not otherwise “modified” by other parameters.
Second, conversely, we think that mdspan’s potential for customization gives it the power to represent any reasonable multidimensional array view. Thus, mdspan “is the concept.” Third, this proposal
could support any reasonable multidimensional array type, if the type just made it convertible to mdspan, for example via a general customization point get_mdspan that returns an mdspan that views
the array’s elements. Fourth, a multidimensional array concept would only have value if nearly all multidimensional arrays “in the wild” had the same interface, and if that were actually the
interface we wanted. However, the adoption of P2128R6 into C++23 makes operator[] the preferred multidimensional array access operator. As the discussion in P2128 points out, operator[] not
supporting multiple parameters before C++23 meant that different multidimensional array classes exposed array access with different syntax. While many of them used the function call operator operator
(), mdspan quite deliberately does not. P2128 explains why it’s a bad idea for a multidimensional array type to support both operator() and operator[]. Thus, a hypothetical multidimensional array
concept could not represent both pre-C++23 and post-C++23 multidimensional arrays. After further discussion at the 2019 Belfast meeting, LEWGI accepted our position that it is reasonable for our
algorithms to take the concrete (yet highly customizable) type mdspan, instead of template parameters constrained by a multidimensional array concept.
Function argument aliasing and zero scalar multipliers
1. The BLAS Standard forbids aliasing any input (read-only) argument with any output (write-only or read-and-write) argument.
2. The BLAS uses INTENT(INOUT) (read-and-write) arguments to express “updates” to a vector or matrix. By contrast, C++ Standard algorithms like transform take input and output iterator ranges as
different parameters, but may let input and output ranges be the same.
3. The BLAS uses the values of scalar multiplier arguments (“alpha” or “beta”) of vectors or matrices at run time, to decide whether to treat the vectors or matrices as write only. This matters both
for performance and semantically, assuming IEEE floating-point arithmetic.
4. We decide separately, based on the category of BLAS function, how to translate INTENT(INOUT) arguments into a C++ idiom:
a. For triangular solve and triangular multiply, in-place behavior is essential for computing matrix factorizations in place, without requiring extra storage proportional to the input matrix’s
dimensions. However, in-place functions may hinder implementations’ use of some forms of parallelism. Thus, we have both not-in-place and in-place overloads. Both take an optional
ExecutionPolicy&&, as some forms of parallelism (e.g., vectorization) may still be effective with in-place operations.
b. Else, if the BLAS function unconditionally updates (like xGER), we retain read-and-write behavior for that argument.
c. Else, if the BLAS function uses a scalar beta argument to decide whether to read the output argument as well as write to it (like xGEMM), we provide two versions: a write-only version (as if
beta is zero), and a read-and-write version (as if beta is nonzero).
For a detailed analysis, please see our paper “Evolving a Standard C++ Linear Algebra Library from the BLAS” (P1674).
Support for different matrix layouts
1. The dense BLAS supports several different dense matrix “types.” Type is a mixture of “storage format” (e.g., packed, banded) and “mathematical property” (e.g., symmetric, Hermitian, triangular).
2. Some “types” can be expressed as custom mdspan layouts. Other types actually represent algorithmic constraints: for instance, what entries of the matrix the algorithm is allowed to access.
3. Thus, a C++ BLAS wrapper cannot overload on matrix “type” simply by overloading on mdspan specialization. The wrapper must use different function names, tags, or some other way to decide what the
matrix type is.
For more details, including a list and description of the matrix “types” that the dense BLAS supports, please see our paper “Evolving a Standard C++ Linear Algebra Library from the BLAS” (P1674).
A C++ linear algebra library has a few possibilities for distinguishing the matrix “type”:
1. It could imitate the BLAS, by introducing different function names, if the layouts and accessors do not sufficiently describe the arguments.
2. It could introduce a hierarchy of higher-level classes for representing linear algebra objects, use mdspan (or something like it) underneath, and write algorithms to those higher-level classes.
3. It could use the layout and accessor types in mdspan simply as tags to indicate the matrix “type.” Algorithms could specialize on those tags.
We have chosen Approach 1. Our view is that a BLAS-like interface should be as low-level as possible. Approach 2 is more like a “Matlab in C++”; a library that implements this could build on our
proposal’s lower-level library. Approach 3 sounds attractive. However, most BLAS matrix “types” do not have a natural representation as layouts. Trying to hack them in would pollute mdspan – a simple
class meant to be easy for the compiler to optimize – with extra baggage for representing what amounts to sparse matrices. We think that BLAS matrix “type” is better represented with a higher-level
library that builds on our proposal.
Interpretation of “lower / upper triangular”
Triangle refers to what part of the matrix is accessed
The triangular, symmetric, and Hermitian algorithms in this proposal all take a Triangle tag that specifies whether the algorithm should access the upper or lower triangle of the matrix. This has the
same function as the UPLO argument of the corresponding BLAS routines. The upper or lower triangular argument only refers to what part of the matrix the algorithm will access. The “other triangle” of
the matrix need not contain useful data. For example, with the symmetric algorithms, A[j, i] need not equal A[i, j] for any i and j in the domain of A with i not equal to j. The algorithm just
accesses one triangle and interprets the other triangle as the result of flipping the accessed triangle over the diagonal.
This “interpretation” approach to representing triangular matrices is critical for matrix factorizations. For example, LAPACK’s LU factorization (xGETRF) overwrites a matrix A with both its L (lower
triangular, implicitly represented diagonal of all ones) and U (upper triangular, explicitly stored diagonal) factors. Solving linear systems Ax=b with this factorization, as LAPACK’s xGETRS routine
does, requires solving first a linear system with the upper triangular matrix U, and then solving a linear system with the lower triangular matrix L. If the BLAS required that the “other triangle” of
a triangular matrix had all zero elements, then LU factorization would require at least twice the storage. For symmetric and Hermitian matrices, only accessing the matrix’s elements nonredundantly
ensures that the matrix remains mathematically symmetric resp. Hermitian, even in the presence of rounding error.
BLAS applies UPLO to original matrix; we apply Triangle to transformed matrix
The BLAS routines that take an UPLO argument generally also take a TRANS argument. The TRANS argument says whether to apply the matrix, its transpose, or its conjugate transpose. The BLAS applies the
UPLO argument to the “original” matrix, not to the transposed matrix. For example, if TRANS='T' or TRANS='C', UPLO='U' means the routine will access the upper triangle of the matrix, not the upper
triangle of the matrix’s transpose.
Our proposal takes the opposite approach. It applies Triangle to the input matrix, which may be the result of a transformation such as transposed or conjugate_transposed. For example, if Triangle is
upper_triangle_t, the algorithm will always access the matrix for i,j in its domain with i ≤ j (or i strictly less than j, if the algorithm takes a Diagonal tag and Diagonal is
implicit_unit_diagonal_t). If the input matrix is transposed(A) for a layout_left mdspan A, this means that the algorithm will access the upper triangle of transposed(A), which is actually the lower
triangle of A.
We took this approach because our interface permits arbitrary layouts, with possibly arbitrary nesting of layout transformations. This comes from mdspan’s design itself, not even necessarily from our
proposal. For example, users might define antitranspose(A), that flips indices over the antidiagonal (the “other diagonal” that goes from the lower left to the upper right of the matrix, instead of
from the upper left to the lower right). Layout transformations need not even be one-to-one, because layouts themselves need not be (hence is_unique). Since it’s not possible to “undo” a general
layout, there’s no way to get back to the “original matrix.”
Our approach, while not consistent with the BLAS, is internally consistent. Triangle always has a clear meaning, no matter what transformations users apply to the input. Layout transformations like
transposed have the same interpretation for all the matrix algorithms, whether for general, triangular, symmetric, or Hermitian matrices. This interpretation is consistent with the standard meaning
of mdspan layouts.
C BLAS implementations already apply layout transformations like this so that they can use an existing column-major Fortran BLAS to implement operations on matrices with different layouts. For
example, the transpose of an M x N layout_left matrix is just the same data, viewed as an N x M layout_right matrix. Thus, transposed is consistent with current practice. In fact, transposed need not
use a special layout_transpose, if it knows how to reinterpret the input layout.
1. BLAS applies UPLO to the original matrix, before any transposition. Our proposal applies Triangle to the transformed matrix, after any transposition.
2. Our approach is the only reasonable way to handle the full generality of user-defined layouts and layout transformations.
1-norms and infinity-norms for vectors and matrices of complex numbers
We define complex 1-norms and infinity-norms for matrices using the magnitude of each element, but for vectors using the sum of absolute values of the real and imaginary components of each element.
We do so because the BLAS exists for the implementation of algorithms to solve linear systems, linear least-squares problems, and eigenvalue problems. The BLAS does not aim to provide a complete set
of mathematical operations. Every function in the BLAS exists because some LINPACK or LAPACK algorithm needs it.
For vectors, we use the sum of absolute values of the components because
• this more accurately expresses the condition number of the sum of the vector’s elements,
• it results in a tighter error bound, and
• it avoids a square root per element, with potentially the additional cost of preventing undue underflow or overflow (as hypot implementations do).
The resulting functions are not actually norms in the mathematical sense, so their names vector_abs_sum and vector_idx_abs_max do not include the word “norm.”
For matrices, we use the magnitude because the only reason LAPACK ever actually computes matrix 1-norms or infinity-norms is for estimating the condition number of a matrix. For this case, LAPACK
actually needs to compute the “true” matrix 1-norm (and infinity-norm), that uses the magnitude.
The 1-norm of a vector of real numbers is the sum of the absolute values of the vector’s elements. The infinity-norm of a vector of real numbers is the maximum of the absolute values of the vector’s
elements. Both of these are useful for analyzing rounding errors when solving common linear algebra problems. For example, the 1-norm of a vector expresses the condition number of the sum of the
vector’s elements (see Higham 2002, Section 4.2), while the infinity-norm expresses the normwise backward error of the computed solution vector when solving a linear system using Gaussian elimination
(see LAPACK Users’ Guide).
The straightforward extension of both of these definitions for vectors of complex numbers would be to replace “absolute value” with “magnitude.” C++ suggests this by defining std::abs for complex
arguments as the magnitude. However, the BLAS instead uses the sum of the absolute values of the real and imaginary components of each element. For example, the BLAS functions SASUM and DASUM compute
the actual 1-norm ∑[i]|v[i]| of their length-n input vector of real elements v, while their complex counterparts CSASUM and DZASUM compute ∑[i]|ℜ(z[i])|+|ℑ(z[i])| for their length n input vector of
complex elements z. Likewise, the real BLAS functions ISAMAX and IDAMAX find max[i]|v[i]|, while their complex counterparts ICAMAX and IZAMAX find max[i]|ℜ(z[i])|+|ℑ(z[i])|.
This definition of CSASUM and DZASUM accurately expresses the condition number of the sum of a complex vector’s elements. This is because complex numbers are added componentwise, so summing a complex
vector componentwise is really like summing two real vectors separately. Thus, it is the logical generalization of the vector 1-norm.
Annex A.1 of the BLAS Standard (p. 173) explains that this is also a performance optimization, to avoid the expense of one square root per vector element. Computing the magnitude is equivalent to
two-parameter hypot. Thus, for the same reason, high-quality implementations of magnitude for floating-point arguments may do extra work besides the square root, to prevent undue underflow or
This approach also results in tighter error bounds. We mentioned above how adding complex numbers sums their real and imaginary parts separately. Thus, the rounding error committed by the sum can be
considered as a two-component vector. For a vector x of length 2, ∥x∥[1]≤ sqrt(2) ∥x∥[2] and ∥x∥[2]≤∥x∥[1] (see LAPACK Users’ Guide), so using the “1-norm” |ℜ(z)|+|ℑ(z)| of a complex number z,
instead of the “2-norm” |z|, gives a tighter error bound.
This is why P1673’s vector_abs_sum (vector 1-norm) and vector_idx_abs_max (vector infinity-norm) functions use |ℜ(z)|+|ℑ(z)| instead of |z| for the “absolute value” of each vector element.
One disadvantage of these definitions is that the resulting quantities are not actually norms. This is because they do not preserve scaling factors. For magnitude, |αz| equals |α||z| for any real
number α and complex number z. However, |αℜ(z)|+|αℑ(z)| does not equal α(|ℜ(z)|+|ℑ(z)|) in general. As a result, the names of the functions vector_abs_sum and vector_idx_abs_max do not include the
word “norm.”
The 1-norm ∥A∥[1] of a matrix A is max[∥x∥[1]]∥Ax∥[1] for all vectors x with the same number of elements as A has columns. The infinity-norm ∥A∥[∞] of a matrix A is max[∥x∥[∞]]∥Ax∥[∞] for all vectors
x with the same number of elements as A has columns. The 1-norm is the infinity-norm of the transpose, and vice versa.
Given that these norms are defined using the corresponding vector norms, it would seem reasonable to use the BLAS’s optimizations for vectors of complex numbers. However, the BLAS exists to serve
LINPACK and its successor, LAPACK (see P1417 for citations and a summary of the history). Thus, looking at what LAPACK actually computes would be the best guide. LAPACK uses matrix 1-norms and
infinity-norms in two different ways.
First, equilibration reduces errors when solving linear systems using Gaussian elimination. It does so by scaling rows and columns to minimize the matrix’s condition number ∥A∥[1]∥A^−1∥[1] (or ∥A∥[∞]
∥A^−1∥[∞]; minimizing one minimizes the other). This effectively tries to make the maximum absolute value of each row and column of the matrix as close to 1 as possible. (We say “close to 1,” because
LAPACK equilibrates using scaling factors that are powers of two, to prevent rounding error in binary floating-point arithmetic.) LAPACK performs equilibration with the routines xyzEQU, where x
represents the matrix’s element type and the two letters yz represent the kind of matrix (e.g., GE for a general dense nonsymmetric matrix). The complex versions of these routines use |ℜ(A[ij])|+|ℑ
(A[ij])| for the “absolute value” of each matrix element A[ij]. This aligns mathematically with how LAPACK measures errors when solving a linear system Ax=b.
Second, condition number estimation estimates the condition number of a matrix A. LAPACK relies on the 1-norm condition number ∥A∥[1]∥A^−1∥[1] to estimate errors for nearly all of its computations.
LAPACK’s xyzCON routines perform condition number estimation. It turns out that the complex versions of these routines require the “true” 1-norm that uses the magnitude of each matrix element (as
explained in Higham 1988).
LAPACK only ever actually computes matrix 1-norms or infinity-norms when it estimates the matrix condition number. Thus, LAPACK actually needs to compute the “true” matrix 1-norm (and infinity-norm).
This is why P1673 defines matrix_one_norm and matrix_inf_norm to return the “true” matrix 1-norm resp. infinity-norm.
Over- and underflow wording for vector 2-norm
SG6 recommended to us at Belfast 2019 to change the special overflow / underflow wording for vector_two_norm to imitate the BLAS Standard more closely. The BLAS Standard does say something about
overflow and underflow for vector 2-norms. We reviewed this wording and conclude that it is either a nonbinding quality of implementation (QoI) recommendation, or too vaguely stated to translate
directly into C++ Standard wording. Thus, we removed our special overflow / underflow wording. However, the BLAS Standard clearly expresses the intent that implementations document their underflow
and overflow guarantees for certain functions, like vector 2-norms. The C++ Standard requires documentation of “implementation-defined behavior.” Therefore, we added language to our proposal that
makes “any guarantees regarding overflow and underflow” of those certain functions “implementation-defined.”
Previous versions of this paper asked implementations to compute vector 2-norms “without undue overflow or underflow at intermediate stages of the computation.” “Undue” imitates existing C++ Standard
wording for hypot. This wording hints at the stricter requirements in F.9 (normative, but optional) of the C Standard for math library functions like hypot, without mandating those requirements. In
particular, paragraph 9 of F.9 says:
Whether or when library functions raise an undeserved “underflow” floating-point exception is unspecified. Otherwise, as implied by F.7.6, the <math.h> functions do not raise spurious
floating-point exceptions (detectable by the user) [including the “overflow” exception discussed in paragraph 6], other than the “inexact” floating-point exception.
However, these requirements are for math library functions like hypot, not for general algorithms that return floating-point values. SG6 did not raise a concern that we should treat vector_two_norm
like a math library function; their concern was that we imitate the BLAS Standard’s wording.
The BLAS Standard says of several operations, including vector 2-norm: “Here are the exceptional routines where we ask for particularly careful implementations to avoid unnecessary over/underflows,
that could make the output unnecessarily inaccurate or unreliable” (p. 35).
The BLAS Standard does not define phrases like “unnecessary over/underflows.” The likely intent is to avoid naïve implementations that simply add up the squares of the vector elements. These would
overflow even if the norm in exact arithmetic is significantly less than the overflow threshold. The POSIX Standard (IEEE Std 1003.1-2017) analogously says that hypot must “take precautions against
overflow during intermediate steps of the computation.”
The phrase “precautions against overflow” is too vague for us to translate into a requirement. The authors likely meant to exclude naïve implementations, but not require implementations to know
whether a result computed in exact arithmetic would overflow or underflow. The latter is a special case of computing floating-point sums exactly, which is costly for vectors of arbitrary length.
While it would be a useful feature, it is difficult enough that we do not want to require it, especially since the BLAS Standard itself does not. The implementation of vector 2-norms in the Reference
BLAS included with LAPACK 3.10.0 partitions the running sum of squares into three different accumulators: one for big values (that might cause the sum to overflow without rescaling), one for small
values (that might cause the sum to underflow without rescaling), and one for the remaining “medium” values. (See Anderson 2017.) Earlier implementations merely rescaled by the current maximum
absolute value of all the vector entries seen thus far. (See Blue 1978.) Implementations could also just compute the sum of squares in a straightforward loop, then check floating-point status flags
for underflow or overflow, and recompute if needed.
For all of the functions listed on p. 35 of the BLAS Standard as needing “particularly careful implementations,” except vector norm, the BLAS Standard has an “Advice to implementors” section with
extra accuracy requirements. The BLAS Standard does have an “Advice to implementors” section for matrix norms (see Section 2.8.7, p. 69), which have similar over- and underflow concerns as vector
norms. However, the Standard merely states that “[h]igh-quality implementations of these routines should be accurate” and should document their accuracy, and gives examples of “accurate
implementations” in LAPACK.
The BLAS Standard never defines what “Advice to implementors” means. However, the BLAS Standard shares coauthors and audience with the Message Passing Interface (MPI) Standard, which defines “Advice
to implementors” as “primarily commentary to implementors” and permissible to skip (see e.g., MPI 3.0, Section 2.1, p. 9). We thus interpret “Advice to implementors” in the BLAS Standard as a
nonbinding quality of implementation (QoI) recommendation.
Constraining matrix and vector element types and scalars
The BLAS only accepts four different types of scalars and matrix and vector elements. In C++ terms, these correspond to float, double, complex<float>, and complex<double>. The algorithms we propose
generalize the BLAS by accepting any matrix, vector, or scalar element types that make sense for each algorithm. Those may be built-in types, like floating-point numbers or integers, or they may be
custom types. Those custom types might not behave like conventional real or complex numbers. For example, quaternions have noncommutative multiplication (a * b might not equal b * a), polynomials in
one variable over a field lack division, and some types might not even have subtraction defined. Nevertheless, many BLAS operations would make sense for all of these types.
“Constraining matrix and vector element types and scalars” means defining how these types must behave in order for our algorithms to make sense. This includes both syntactic and semantic constraints.
We have three goals:
1. to help implementers implement our algorithms correctly;
2. to give implementers the freedom to make quality of implementation (QoI) enhancements, for both performance and accuracy; and
3. to help users understand what types they may use with our algorithms.
The whole point of the BLAS was to identify key operations for vendors to optimize. Thus, performance is a major concern. “Accuracy” here refers to either to rounding error or to approximation error
(for matrix or vector element types where either makes sense).
Value type constraints do not suffice to describe algorithm behavior
LEWG’s 2020 review of P1673R2 asked us to investigate conceptification of its algorithms. “Conceptification” here refers to an effort like that of P1813R0 (“A Concept Design for the Numeric
Algorithms”), to come up with concepts that could be used to constrain the template parameters of numeric algorithms like reduce or transform. (We are not referring to LEWGI’s request for us to
consider generalizing our algorithm’s parameters from mdspan to a hypothetical multidimensional array concept. We discuss that above, in the “Why use mdspan?” section.) The numeric algorithms are
relevant to P1673 because many of the algorithms proposed in P1673 look like generalizations of reduce or transform. We intend for our algorithms to be generic on their matrix and vector element
types, so these questions matter a lot to us.
We agree that it is useful to set constraints that make it possible to reason about correctness of algorithms. However, we do not think constraints on value types suffice for this purpose. First,
requirements like associativity are too strict to be useful for practical types. Second, what we really want to do is describe the behavior of algorithms, regardless of value types’ semantics. “The
algorithm may reorder sums” means something different than “addition on the terms in the sum is associative.”
Associativity is too strict
P1813R0 requires associative addition for many algorithms, such as reduce. However, many practical arithmetic systems that users might like to use with algorithms like reduce have non-associative
addition. These include
• systems with rounding;
• systems with an “infinity”: e.g., if 10 is Inf, 3 + 8 - 7 could be either Inf or 4; and
• saturating arithmetic: e.g., if 10 saturates, 3 + 8 - 7 could be either 3 or 4.
Note that the latter two arithmetic systems have nothing to do with rounding error. With saturating integer arithmetic, parenthesizing a sum in different ways might give results that differ by as
much as the saturation threshold. It’s true that many non-associative arithmetic systems behave “associatively enough” that users don’t fear parallelizing sums. However, a concept with an exact
property (like “commutative semigroup”) isn’t the right match for “close enough,” just like operator== isn’t the right match for describing “nearly the same.” For some number systems, a rounding
error bound might be more appropriate, or guarantees on when underflow or overflow may occur (as in POSIX’s hypot).
The problem is a mismatch between the requirement we want to express – that “the algorithm may reparenthesize addition” – and the constraint that “addition is associative.” The former describes the
algorithm’s behavior, while the latter describes the types used with that algorithm. Given the huge variety of possible arithmetic systems, an approach like the Standard’s use of GENERALIZED_SUM to
describe reduce and its kin seems more helpful. If the Standard describes an algorithm in terms of GENERALIZED_SUM, then that tells the caller what the algorithm might do. The caller then takes
responsibility for interpreting the algorithm’s results.
We think this is important both for adding new algorithms (like those in this proposal) and for defining behavior of an algorithm with respect to different ExecutionPolicy arguments. (For instance,
execution::par_unseq could imply that the algorithm might change the order of terms in a sum, while execution::par need not. Compare to MPI_Op_create’s commute parameter, that affects the behavior of
algorithms like MPI_Reduce when used with the resulting user-defined reduction operator.)
Generalizing associativity helps little
Suppose we accept that associativity and related properties are not useful for describing our proposed algorithms. Could there be a generalization of associativity that would be useful? P1813R0’s
most general concept is a magma. Mathematically, a magma is a set M with a binary operation ×, such that if a and b are in M, then a × b is in M. The operation need not be associative or commutative.
While this seems almost too general to be useful, there are two reasons why even a magma is too specific for our proposal.
1. A magma only assumes one set, that is, one type. This does not accurately describe what the algorithms do, and it excludes useful features like mixed precision and types that use expression
2. A magma is too specific, because algorithms are useful even if the binary operation is not closed.
First, even for simple linear algebra operations that “only” use plus and times, there is no one “set M” over which plus and times operate. There are actually three operations: plus, times, and
assignment. Each operation may have completely heterogeneous input(s) and output. The sets (types) that may occur vary from algorithm to algorithm, depending on the input type(s), and the algebraic
expression(s) that the algorithm is allowed to use. We might need several different concepts to cover all the expressions that algorithms use, and the concepts would end up being less useful to users
than the expressions themselves.
For instance, consider the Level 1 BLAS “AXPY” function. This computes y[i] = alpha * x[i] + y[i] elementwise. What type does the expression alpha * x[i] + y[i] have? It doesn’t need to have the same
type as y[i]; it just needs to be assignable to y[i]. The types of alpha, x[i], and y[i] could all differ. As a simple example, alpha might be int, x[i] might be float, and y[i] might be double. The
types of x[i] and y[i] might be more complicated; e.g., x[i] might be a polynomial with double coefficients, and y[i] a polynomial with float coefficients. If those polynomials use expression
templates, then slightly different sum expressions involving x[i] and/or y[i] (e.g., alpha * x[i] + y[i], x[i] + y[i], or y[i] + x[i]) might all have different types, all of which differ from value
type of x or y. All of these types must be assignable and convertible to the output value type.
We could try to describe this with a concept that expresses a sum type. The sum type would include all the types that might show up in the expression. However, we do not think this would improve
clarity over just the expression itself. Furthermore, different algorithms may need different expressions, so we would need multiple concepts, one for each expression. Why not just use the
expressions to describe what the algorithms can do?
Second, the magma concept is not helpful even if we only had one set M, because our algorithms would still be useful even if binary operations were not closed over that set. For example, consider a
hypothetical user-defined rational number type, where plus and times throw if representing the result of the operation would take more than a given fixed amount of memory. Programmers might handle
this exception by falling back to different algorithms. Neither plus or times on this type would satisfy the magma requirement, but the algorithms would still be useful for such a type. One could
consider the magma requirement satisfied in a purely syntactic sense, because of the return type of plus and times. However, saying that would not accurately express the type’s behavior.
This point returns us to the concerns we expressed earlier about assuming associativity. “Approximately associative” or “usually associative” are not useful concepts without further refinement. The
way to refine these concepts usefully is to describe the behavior of a type fully, e.g., the way that IEEE 754 describes the behavior of floating-point numbers. However, algorithms rarely depend on
all the properties in a specification like IEEE 754. The problem, again, is that we need to describe what algorithms do – e.g., that they can rearrange terms in a sum – not how the types that go into
the algorithms behave.
In summary:
• Many useful types have nonassociative or even non-closed arithmetic.
• Lack of (e.g.,) associativity is not just a rounding error issue.
• It can be useful to let algorithms do things like reparenthesize sums or products, even for types that are not associative.
• Permission for an algorithm to reparenthesize sums is not the same as a concept constraining the terms in the sum.
• We can and do use existing Standard language, like GENERALIZED_SUM, for expressing permissions that algorithms have.
In the sections that follow, we will describe a different way to constrain the matrix and vector element types and scalars in our algorithms. We will start by categorizing the different quality of
implementation (QoI) enhancements that implementers might like to make. These enhancements call for changing algorithms in different ways. We will distinguish textbook from non-textbook ways of
changing algorithms, explain that we only permit non-textbook changes for floating-point types, then develop constraints on types that permit textbook changes.
Categories of QoI enhancements
An important goal of constraining our algorithms is to give implementers the freedom to make QoI enhancements. We categorize QoI enhancements in three ways:
1. those that depend entirely on the computer architecture;
2. those that might have architecture-dependent parameters, but could otherwise be written in an architecture-independent way; and
3. those that diverge from a textbook description of the algorithm, and depend on element types having properties more specific than what that description requires.
An example of Category (1) would be special hardware instructions that perform matrix-matrix multiplications on small, fixed-size blocks. The hardware might only support a few types, such as
integers, fixed-point reals, or floating-point types. Implementations might use these instructions for the entire algorithm, if the problem sizes and element types match the instruction’s
requirements. They might also use these instructions to solve subproblems. In either case, these instructions might reorder sums or create temporary values.
Examples of Category (2) include blocking to increase cache or translation lookaside buffer (TLB) reuse, or using SIMD instructions (given the Parallelism TS’ inclusion of SIMD). Many of these
optimizations relate to memory locality or parallelism. For an overview, see (Goto 2008) or Section 2.6 of (Demmel 1997). All such optimizations reorder sums and create temporary values.
Examples of Category (3) include Strassen’s algorithm for matrix multiplication. The textbook formulation of matrix multiplication only uses additions and multiplies, but Strassen’s algorithm also
performs subtractions. A common feature of Category (3) enhancements is that their implementation diverges from a “textbook description of the algorithm” in ways beyond just reordering sums. As a
“textbook,” we recommend either (Strang 2016), or the concise mathematical description of operations in the BLAS Standard. In the next section, we will list properties of textbook descriptions, and
explain some ways in which QoI enhancements might fail to adhere to those properties.
Properties of textbook algorithm descriptions
“Textbook descriptions” of the algorithms we propose tend to have the following properties. For each property, we give an example of a “non-textbook” algorithm, and how it assumes something extra
about the matrix’s element type.
a. They compute floating-point sums straightforwardly (possibly reordered, or with temporary intermediate values), rather than using any of several algorithms that improve accuracy (e.g.,
compensated summation) or even make the result independent of evaluation order (see Demmel 2013). All such non-straightforward algorithms depend on properties of floating-point arithmetic. We
will define below what “possibly reordered, or with temporary intermediate values” means.
b. They use only those arithmetic operations on the matrix and vector element types that the textbook description of the algorithm requires, even if using other kinds of arithmetic operations would
improve performance or give an asymptotically faster algorithm.
c. They use exact algorithms (not considering rounding error), rather than approximations (that would not be exact even if computing with real numbers).
d. They do not use parallel algorithms that would give an asymptotically faster parallelization in the theoretical limit of infinitely many available parallel processing units, at the cost of likely
unacceptable rounding error in floating-point arithmetic.
As an example of (b), the textbook matrix multiplication algorithm only adds or multiplies the matrices’ elements. In contrast, Strassen’s algorithm for matrix-matrix multiply subtracts as well as
adds and multiplies the matrices’ elements. Use of subtraction assumes that arbitrary elements have an additive inverse, but the textbook matrix multiplication algorithm makes sense even for element
types that lack additive inverses for all elements. Also, use of subtractions changes floating-point rounding behavior, though that change is understood and often considered acceptable (see Demmel
As an example of (c), the textbook substitution algorithm for solving triangular linear systems is exact. In contrast, one can approximate triangular solve with a stationary iteration. (See, e.g.,
Section 5 of (Chow 2015). That paper concerns the sparse matrix case; we cite it merely as an example of an approximate algorithm, not as a recommendation for dense triangular solve.) Approximation
only makes sense for element types that have enough precision for the approximation to be accurate. If the approximation checks convergence, than the algorithm also requires less-than comparison of
absolute values of differences.
Multiplication by the reciprocal of a number, rather than division by that number, could fit into (b) or (c). As an example of (c), implementations for hardware where floating-point division is slow
compared with multiplication could use an approximate reciprocal multiplication to implement division.
As an example of (d), the textbook substitution algorithm for solving triangular linear systems has data dependencies that limit its theoretical parallelism. In contrast, one can solve a triangular
linear system by building all powers of the matrix in parallel, then solving the linear system as with a Krylov subspace method. This approach is exact for real numbers, but commits too much rounding
error to be useful in practice for all but the smallest linear systems. In fact, the algorithm requires that the matrix’s element type have precision exponential in the matrix’s dimension.
Many of these non-textbook algorithms rely on properties of floating-point arithmetic. Strassen’s algorithm makes sense for unsigned integer types, but it could lead to unwarranted and unexpected
overflow for signed integer types. Thus, we think it best to limit implementers to textbook algorithms, unless all matrix and vector element types are floating-point types. We always forbid
non-textbook algorithms of type (d). If all matrix and vector element types are floating-point types, we permit non-textbook algorithms of Types (a), (b), and (c), under two conditions:
1. they satisfy the complexity requirements; and
2. they result in a logarithmically stable algorithm, in the sense of (Demmel 2007).
We believe that Condition (2) is a reasonable interpretation of Section 2.7 of the BLAS Standard. This says that “no particular computational order is mandated by the function specifications. In
other words, any algorithm that produces results ‘close enough’ to the usual algorithms presented in a standard book on matrix computations is acceptable.” Examples of what the BLAS Standard
considers “acceptable” include Strassen’s algorithm, and implementing matrix multiplication as C = (alpha * A) * B + (beta * C), C = alpha * (A * B) + (beta * C), or C = A * (alpha * B) + (beta * C).
“Textbook algorithms” includes optimizations commonly found in BLAS implementations. This includes any available hardware acceleration, as well as the locality and parallelism optimizations we
describe below. Thus, we think restricting generic implementations to textbook algorithms will not overly limit implementers.
Acceptance of P1467 (“Extended floating-point types and standard names”) into C++23 means that the set of floating-point types has grown. Before P1467, this set had three members: float, double, and
long double. After P1467, it includes implementation-specific types, such as short or extended-precision floats. This change may require implementers to look carefully at the definition of
logarithmically stable before making certain algorithmic choices, especially for short floats.
Reordering sums and creating temporaries
Even textbook descriptions of linear algebra algorithms presume the freedom to reorder sums and create temporary values. Optimizations for memory locality and parallelism depend on this. This freedom
imposes requirements on algorithms’ matrix and vector element types.
We could get this freedom either by limiting our proposal to the Standard’s current arithmetic types, or by forbidding reordering and temporaries for types other than arithmetic types. However, doing
so would unnecessarily prevent straightforward optimizations for small and fast types that act just like arithmetic types. This includes so-called “short floats” such as bfloat16 or binary16,
extended-precision floating-point numbers, and fixed-point reals. Some of these types may be implementation defined, and others may be user-specified. We intend to permit implementers to optimize for
these types as well. This motivates us to describe our algorithms’ type requirements in a generic way.
Special case: Only one element type
We find it easier to think about type requirements by starting with the assumption that all element and scalar types in algorithms are the same. One can then generalize to input element type(s) that
might differ from the output element type and/or scalar result type.
Optimizations for memory locality and parallelism both create temporary values, and change the order of sums. For example, reorganizing matrix data to reduce stride involves making a temporary copy
of a subset of the matrix, and accumulating partial sums into the temporary copy. Thus, both kinds of optimizations impose a common set of requirements and assumptions on types. Let value_type be the
output mdspan’s value_type. Implementations may:
1. create arbitrarily many objects of type value_type, value-initializing them or direct-initializing them with any existing object of that type;
2. perform sums in any order; or
3. replace any value with the sum of that value and a value-initialized value_type object.
Assumption (1) implies that the output value type is semiregular. Contrast with [algorithms.parallel.exec]: “Unless otherwise stated, implementations may make arbitrary copies of elements of type T,
from sequences where is_trivially_copy_constructible_v<T> and is_trivially_destructible_v<T> are true.” We omit the trivially constructible and destructible requirements here and permit any
semiregular type. Linear algebra algorithms assume mathematical properties that let us impose more specific requirements than general parallel algorithms. Nevertheless, implementations may want to
enable optimizations that create significant temporary storage only if the value type is trivially constructible, trivially destructible, and not too large.
Regarding Assumption (2): The freedom to compute sums in any order is not necessarily a type constraint. Rather, it’s a right that the algorithm claims, regardless of whether the type’s addition is
associative or commutative. For example, floating-point sums are not associative, yet both parallelization and customary linear algebra optimizations rely on reordering sums. See the above “Value
type constraints do not suffice to describe algorithm behavior” section for a more detailed explanation.
Regarding Assumption (3), we do not actually say that value-initialization produces a two-sided additive identity. What matters is what the algorithm’s implementation may do, not whether the type
actually behaves in this way.
General case: Multiple input element types
An important feature of P1673 is the ability to compute with mixed matrix or vector element types. For instance, add(y, scaled(alpha, x), z) implements the operation z = y + alpha*x, an elementwise
scaled vector sum. The element types of the vectors x, y, and z could be all different, and could differ from the type of alpha.
Accumulate into output value type
Generic algorithms would use the output mdspan’s value_type to accumulate partial sums, and for any temporary results. This is the analog of std::reduce’s scalar result type T. Implementations for
floating-point types might accumulate into higher-precision temporaries, or use other ways to increase accuracy when accumulating partial sums, but the output mdspan’s value_type would still control
accumulation behavior in general.
Proxy references or expression templates
1. Proxy references: The input and/or output mdspan might have an accessor with a reference type other than element_type&. For example, the output mdspan might have a value type value_type, but its
reference type might be atomic_ref<value_type>.
2. Expression templates: The element types themselves might have arithmetic operations that defer the actual computation until the expression is assigned. These “expression template” types typically
hold some kind of reference or pointer to their input arguments.
Neither proxy references nor expression template types are semiregular, because they behave like references, not like values. However, we can still require that their underlying value type be
semiregular. For instance, the possiblity of proxy references just means that we need to use the output mdspan‘s value_type when constructing or value-initializing temporary values, rather than
trying to deduce the value type from the type of an expression that indexes into the output mdspan. Expression templates just mean that we need to use the output mdspan’s value_type to construct or
value-initialize temporaries, rather than trying to deduce the temporaries’ type from the right-hand side of the expression.
The z = y + alpha*x example above shows that some of the algorithms we propose have multiple terms in a sum on the right-hand side of the expression that defines the algorithm. If algorithms have
permission to rearrange the order of sums, then they need to be able to break up such expressions into separate terms, even if some of those expressions are expression templates.
“Textbook” algorithm description in semiring terms
As we explain in the “Value type constraints do not suffice to describe algorithm behavior” section above, we deliberately constrain matrix and vector element types to require associative addition.
This means that we do not, for instance, define concepts like “ring” or “group.” We cannot even speak of a single set of values that would permit defining things like a “ring” or “group.” This is
because our algorithms must handle mixed value types, expression templates, and proxy references. However, it may still be helpful to use mathematical language to explain what we mean by “a textbook
description of the algorithm.”
Most of the algorithms we propose only depend on addition and multiplication. We describe these algorithms as if working on elements of a semiring with possibly noncommutative multiplication. The
only difference between a semiring and a ring is that a semiring does not require all elements to have an additive inverse. That is, addition is allowed, but not subtraction. Implementers may apply
any mathematical transformation to the expressions that would give the same result for any semiring.
Why a semiring?
We use a semiring because
1. we generally want to reorder terms in sums, but we do not want to order terms in products; and
2. we do not want to assume that subtraction works.
The first is because linear algebra computations are useful for matrix or vector element types with noncommutative multiplication, such as quaternions or matrices. The second is because algebra
operations might be useful for signed integers, where a formulation using subtraction risks unexpected undefined behavior.
Semirings and testing
It’s important that implementers be able to test our proposed algorithms for custom element types, not just the built-in arithmetic types. We don’t want to require hypothetical “exact real
arithmetic” types that take particular expertise to implement. Instead, we propose testing with simple classes built out of unsigned integers. This section is not part of our Standard Library
proposal, but we include it to give guidance to implementers and to show that it’s feasible to test our proposal.
Commutative multiplication
C++ unsigned integers implement commutative rings. (Rings always have commutative addition; a “commutative ring” has commutative multiplication as well.) We may transform (say) uint32_t into a
commutative semiring by wrapping it in a class that does not provide unary or binary operator-. Adding a “tag” template parameter to this class would let implementers build tests for mixed element
Noncommutative multiplication
The semiring of 2x2 matrices with element type a commutative semiring is itself a semiring, but with noncommutative multiplication. This is a good way to build a noncommutative semiring for testing.
• Constraining the matrix and vector element types and scalar types in our functions gives implementers the freedom to make QoI enhancements without risking correctness.
• We think describing algorithms’ behavior and implementation freedom is more useful than mathematical concepts like “ring.” For example, we permit implementations to reorder sums, but this does
not mean that they assume sums are associative. This is why we do not define a hierarchy of number concepts.
• We categorize different ways that implementers might like to change algorithms, list categories we exclude and categories we permit, and use the permitted categories to derive constraints on the
types of matrix and vector elements and scalar results.
• We explain how a semiring is a good way to talk about implementation freedom, even though we do not think it is a good way to constrain types. We then use the semiring description to explain how
implementers can test generic algorithms.
Fix issues with complex, and support user-defined complex number types
Motivation and summary of solution
The BLAS provides functionality specifically for vectors and matrices of complex numbers. Built into the BLAS is the assumption that the complex conjugate of a real number is just the number. This
makes it possible to write algorithms that are generic on whether the vector or matrix value type is real or complex. For example, such generic algorithms can use the “conjugate transpose” for both
cases. P1673 users may thus want to apply conjugate_transposed to matrices of arbitrary value types.
The BLAS also needs to distinguish between complex and real value types, because some BLAS functions behave differently for each. For example, for real numbers, the BLAS functions SASUM and DASUM
compute the sum of absolute values of the vector’s elements. For complex numbers, the corresponding BLAS functions SCASUM and DZASUM compute the sum of absolute values of the real and imaginary
components, rather than the sum of magnitudes. This is an optimization to avoid square roots (as Appendix A.1 of the BLAS Standard explains), but it is well founded in the theory of accuracy of
matrix factorizations.
The C++ Standard’s functions conj, real, and imag return the complex conjugate, real part, and imaginary part of a complex number, respectively. They also have overloads for arithmetic types that
give the expected mathematical behavior: the conjugate or real part of a real number is the number itself, and the imaginary part of a real number is zero. Thus, it would make sense for P1673 to use
these functions to express generic algorithms with complex or real numbers. However, these functions have three issues that preclude their direct use in P1673’s algorithms.
1. The existing overloads of these functions in the std namespace do not always preserve the type of their argument. For arguments of arithmetic type, conj returns complex. For arguments of integral
type, real and imag always return double. The resulting value type change is mathematically unexpected in generic algorithms, can cause information loss for 64-bit integers, and can hinder use of
an optimized BLAS.
2. Users sometimes need to define their own “custom” complex number types instead of using specializations of complex, but users cannot overload functions in the std namespace for these types.
3. How does P1673 tell if a user-defined type represents a complex number?
Libraries such as Kokkos address Issue (2) by defining conj, real, and imag functions in the library’s namespace (not std). Generic code then can rely on argument-dependent lookup (ADL) by invoking
these functions without namespace qualification. The library’s functions help make the custom complex type “interface-compatible” with std::complex. This ADL solution long predates any more elaborate
customization point technique. In our experience, many mathematical libraries use this approach.
This common practice suggests a way for P1673 to solve the other issues as well. A user-defined type represents a complex number if and only if it has ADL-findable conj, real, and imag functions.
This means that P1673 can “wrap” these functions to make them behave as mathematically expected for any value type. These “wrappers” can also adjust the behavior of std::conj, std::real, and
std::imag for arithmetic types.
P1673 defines these wrappers as the exposition-only functions conj-if-needed, real-if-needed, and imag-if-needed.
Why users define their own complex number types
Users define their own complex number types for three reasons.
1. The Standard only permits complex<R> if R is a floating-point type.
2. complex<R> only has sizeof(R) alignment, not 2 * sizeof(R).
3. Some C++ extensions cannot use complex<R>, because they require annotations on a type’s member functions.
First, the C++ Standard currently permits complex<R> only if R is a floating-point type. Before adoption of P1467 into C++23, this meant that R could only be float, double, or long double. Even after
adoption of P1467, implementations are not required to provide other floating-point types. This prevents use of other types in portable C++, such as
• “short” low-precision floating-point or fixed-point number types that are critical for performance of machine learning and signal processing applications;
• signed integers (the resulting complex numbers represent the Gaussian integers);
• extended-precision floating-point types that can improve the accuracy of floating-point sums, reduce parallel nondeterminism, and make sums less dependent on evaluation order; and
• custom number types such as “bigfloats” (arbitrary-precision floating-point types) or fractions of integers.
Second, the Standard explicitly specifies that complex<R> has the same alignment as R[2]. That is, it is aligned to sizeof(R). Some systems would give better parallel or vectorized performance if
complex numbers were aligned to 2 * sizeof(R). Some C++ extensions define their own complex number types partly for this reason. Software libraries that use these custom complex number types tempt
users to alias between complex<R> and these custom types, which would have the same bit representation except for alignment. This has led to crashes or worse in software projects that the authors
have worked on. Third, some C++ extensions cannot use complex, because they require types’ member functions to have special annotations, in order to compile code to make use of accelerator hardware.
These issues have led several software libraries and C++ extensions to define their own complex number types. These include CUDA, Kokkos, and Thrust. The SYCL standard is contemplating adding a
custom complex number type. One of the authors wrote Kokkos::complex circa 2014 to make it possible to build and run Trilinos’ linear solvers with such C++ extensions.
Why users want to “conjugate” matrices of real numbers
It’s possible to describe many linear algebra algorithms in a way that works for both complex and real numbers, by treating conjugation as the identity for real numbers. This makes the “conjugate
transpose” just the transpose for a matrix of real numbers. Matlab takes this approach, by defining the single quote operator to take the conjugate transpose if its argument is complex, and the
transpose if its argument is real. The Fortran BLAS also supports this, by letting users specify the 'Conjugate Transpose' (TRANSA='C') even for real routines like DGEMM (double-precision general
matrix-matrix multiply). Krylov subspace methods in Trilinos’ Anasazi and Belos packages also follow a Matlab-like generic approach.
Even though we think it should be possible to write “generic” (real or complex) linear algebra code using conjugate_transposed, we still need to distinguish between symmetric and Hermitian matrix
algorithms. This is because symmetric does not mean the same thing as Hermitian for matrices of complex numbers. For example, a matrix whose off-diagonal elements are all 3 + 4i is symmetric, but not
Hermitian. Complex symmetric matrices are useful in practice, for example when modeling damped vibrations (Craven 1969).
Effects of conj’s real-to-complex change
The fact that conj for arithmetic-type arguments returns complex may complicate or prevent implementers from using an existing optimized BLAS library. If the user calls matrix_product with matrices
all of value type double, use of the (mathematically harmless) conjugate_transposed function would make one matrix have value type complex<double>. Implementations could undo this value type change
for known layouts and accessors, but would need to revert to generic code otherwise.
For example, suppose that a custom real value type MyReal has arithmetic operators defined to permit all needed mixed-type expressions with double, where double times MyReal and MyReal times double
both “promote” to MyReal. Users may then call matrix_product(A, B, C) with A having value type double, B having value type MyReal, and C having a value type MyReal. However, matrix_product
(conjugate_transposed(A), B, C) would not compile, due to complex<decltype(declval<double>() * declval<MyReal>())> not being well formed.
LEWG feedback on R8 solution
In R8 of this paper, we proposed an exposition-only function conj-if-needed. For arithmetic types, it would be the identity function. This would fix Issue (1). For all other types, it would call conj
through argument-dependent lookup (ADL), just like how iter_swap calls swap. This would fix Issue (2). However, it would force users who define custom real number types to define a trivial conj (in
their own namespace) for their type. The alternative would be to make conj-if-needed the identity if it could not find conj via ADL lookup. However, that would cause silently incorrect results for
users who define a custom complex number type, but forget or misspell conj.
When reviewing R8, LEWG expressed a preference for a different solution.
1. Temporarily change P1673 to permit use of conjugated and conjugate_transposed only for value types that are either complex or arithmetic types. Add a Note that reminds readers to look out for
Steps (2) and (3) below.
2. Write a separate paper which introduces a user-visible customization point, provisionally named conjugate. The paper could use any of various proposed library-only customization point mechanisms,
such as the customization point objects used by ranges or tag_invoke (see P1895R0, with the expectation that LEWG and perhaps also EWG (see e.g., P2547) may express a preference for a different
3. If LEWG accepts the conjugate customization point, then change P1673 again to use conjugate (thus replacing R8’s conj-if-needed). This would thus reintroduce support for custom complex numbers.
SG6’s response to LEWG’s R8 feedback
SG6 small group (there was no quorum) reviewed P1673 on 2022/06/09, after LEWG’s R8 review on 2022/05/24. SG6 small group expressed the following:
• Being able to write conjugated(A) or conjugate_transposed(A) for a matrix or vector A of user-defined types is reasonably integral to the proposal. We generically oppose deferring it based on the
hope that we’ll be able to specify it in a nicer way in the future, with some new customization point syntax.
• A simple, teachable rule: Do ADL-ONLY lookup (preventing finding std::conj for primitive types) for conj (as with ranges); if you find something you use it, and if you don’t, you do nothing
(conjugation is the identity). (“Primitives aren’t that special.”) Benefit is that custom real types work out of the box.
• The alternative: specify that if users choose to use conjugated or conjugate_transposed with a user-defined type, then they MUST supply the conj ADL-findable thing, else ill-formed. This is a
safety mechanism that may not have been considered previously by LEWG. (Make primitives special, to regain safety. The cost is that custom real types need to have a conj ADL-findable, if users
use conjugated or conjugate_transposed.)
Current solution
We have adopted SG6 small group’s recommendation, with a slight wording modification to make it obvious that the conjugate of an arithmetic type returns the same type as its input.
We propose an exposition-only function object conj-if-needed. For arithmetic types, it behaves as the identity function. If it can call conj through ADL-only (unqualified) lookup, it does so.
Otherwise, it again behaves as the identity function.
We take the same approach to fix the issues discussed above with real and imag. That is, we define exposition-only functions real-if-needed and imag-if-needed. These assume that any non-arithmetic
type without ADL-findable real resp. imag is a non-complex type.
This approach has the following advantages.
1. Most custom number types, noncomplex or complex, will work “out of the box.” Existing custom complex number types likely already have ADL-findable conj, real, and imag. If they do not, then users
can define them.
2. It ensures type preservation for arithmetic types.
3. It uses existing C++ idioms and interfaces for complex numbers.
4. It does not depend on a future customization point syntax or library convention.
Support for division with noncommutative multiplication
An important feature of this proposal is its support for value types that have noncommutative multiplication. Examples include square matrices with a fixed number of rows and columns, and quaternions
and their generalizations. Most of the algorithms in this proposal only add or multiply arbitrary value types, so preserving the order of multiplication arguments is straightforward. The various
triangular solve algorithms are an exception, because they need to perform divisions as well.
If multiplication commutes and if a type has division, then the division x ÷ y is just x times (the multiplicative inverse of y), assuming that the multiplicative inverse of y exists. However, if
multiplication does not commute, “x times (the multiplicative inverse of y)” need not equal “(the multiplicative inverse of y) times x.” The C++ binary operator/ does not give callers a way to
distinguish between these two cases.
This suggests four ways to express “ordered division.”
1. Explicitly divide one by the quotient: x * (1/y), or (1/y) * x
2. Like (2), but instead of using literal 1, get “one” as a value_type input to the algorithm: x * (one/y), or (one/y) * x
3. inverse as a unary callable input to the algorithm: x * inverse(y), or inverse(y) * x
4. divide as a binary callable input to the algorithm: divide(x, y), or divide(y, x)
Both SG6 small group (in its review of this proposal on 2022/06/09) and the authors prefer Way (4), the divide binary callable input. The binary callable would be optional, and ordinary binary
operator/ would be used as the default. This would imitate existing Standard Library algorithms like reduce, with its optional BinaryOp that defaults to addition. For mixed-precision computation,
std::divides<void>{} or the equivalent [](const auto& x, const auto& y) { return x / y; } should work just fine. Users should avoid specifying T other than void in std::divides<T> for mixed-precision
computation, as std::divides<T> would coerce both its arguments to T and then force the division result to be T as well. Way (4) also preserves the original rounding behavior for types with
commutative multiplication.
The main disadvantage of the other approaches is that they would change rounding behavior for floating-point types. They also require two operations – computing an inverse, and multiplication –
rather than one. “Ordered division” may actually be the operation users want, and the “inverse” might be just a byproduct. This is the case for square matrices, where users often “compute an inverse”
only because they want to solve a linear system. Each of the other approaches has its own other disadvantages.
• Way (1) would assume that an overloaded operator/(int, value_type) exists, and that the literal 1 behaves like a multiplicative identity. In practice, not all custom number types may have defined
mixed arithmetic with int.
• Way (2) would complicate the interface. Users might make the mistake of passing in literal 1 (of type int) or 1.0 (of type double) as the value of one, thus leading to Way (1)’s issues.
• Way (3) would again complicate the interface. Users would be tempted to use [](const auto& y) { return 1 / y; } as the inverse function, thus leading back to Way (1)’s issues.
Packed layout’s Triangle must match function’s Triangle
• P1673 symmetric_*, hermitian_*, and triangular_* functions always take Triangle (and DiagonalStorage, if applicable) parameters so that the functions always have the same signature, regardless of
mdspan layout.
• For symmetric or Hermitian algorithms, mismatching the mdspan layout’s Triangle and the function’s Triangle would have no effect.
• For triangular algorithms, mismatch would have an effect that users likely would not want. (It means “use the other triangle,” which is zero.) Thus, it’s reasonable to make mismatch an error.
• In practice, users aren’t likely to encounter a triangular packed matrix in isolation. Such matrices usually occur in context of symmetric or Hermitian packed matrices. A common user error might
thus be mismatching the Triangles for both symmetric or Hermitian functions, and triangular functions. The first is harmless; the second is likely an error.
• Therefore, we recommend
1. retaining the Triangle (and DiagonalStorage, if applicable) parameters;
2. making it a Mandate that the layout’s Triangle match the function’s Triangle parameter, for all the functions (not just the triangular_* ones).
When do packed formats show up in practice?
Users aren’t likely to encounter a triangular packed matrix in isolation. It generally comes as an in-place transformation (e.g., factorization) of a symmetric or Hermitian packed matrix. For
example, LAPACK’s DSPTRF (Double-precision Symmetric Packed TRiangular Factorization) computes a symmetric LDL^T (or UDU^T) factorization in place, overwriting the input symmetric packed matrix A.
LAPACK’s DSPTRS (Double-precision Symmetric Packed TRiangular Solve) then uses the result of DSPTRF to solve a linear system. DSPTRF overwrites A with the triangle L (if A uses lower triangle
storage, or U, if A uses upper triangle storage). This is an idiom for which the BLAS was designed: factorizations typically overwrite their input, and thus reinterpret the input’s “data structure”
on the fly.
What the BLAS does
For a summary of the BLAS’ packed storage formats, please refer to the “Packed Storage” section of the LAPACK Users’ Guide, Third Edition (1999).
BLAS routines for packed storage have only a single argument, UPLO. This describes both whether the caller is storing the upper or lower triangle, and the triangle of the matrix on which the routine
will operate. (Packed BLAS formats always store the diagonal explicitly; they don’t have the analog of DiagonalStorage.) An example of a BLAS triangular packed routine is DTPMV, Double-precision (D)
Triangular Packed (TP) Matrix-Vector (MV) product.
BLAS packed formats don’t represent metadata explicitly; the caller is responsible for knowing whether they are storing the upper or lower triangle. Getting the UPLO argument wrong makes the matrix
wrong. For example, suppose that the matrix is 4 x 4, and the user’s array input for the matrix is [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. If the user is storing the upper triangle (in column-major order,
as the Fortran BLAS requires), then the matrix looks like this.
Mismatching the UPLO argument (by passing in 'Lower triangle' instead of 'Upper triangle') would result in an entirely wrong matrix – not even the transpose. Note how the diagonal elements differ,
for instance.
This would be incorrect for triangular, symmetric, or Hermitian matrices.
P1673’s interpretation of the BLAS
P1673 offers packed formats that encode the Triangle. This means that the mdspan alone conveys the data structure. P1673 retains the function’s separate Triangle parameter so that the function’s
signature doesn’t change based on the mdspan’s layout. P1673 requires that the function’s Triangle match the mdspan’s Triangle.
If P1673 were to permit mismatching the two Triangles, how would the function reasonably interpret the user’s intent? For triangular matrices with explicit diagonal, mismatch would mean multiplying
by or solving with a zero triangle matrix. For triangular matrices with implicit unit diagonal, mismatch would mean multiplying by or solving with a diagonal matrix of ones – that is, the identity
matrix. Users wouldn’t want to do either one of those.
For symmetric matrices, mismatch has no effect; the mdspan layout’s Triangle rules. For example, the lower triangle of an upper triangle storage format is just the upper triangle again. For Hermitian
matrices, again, mismatch has no effect. For example, suppose that the following is the lower triangle representation of a complex-valued Hermitian matrix (where i is the imaginary unit).
2+2i 4+4i
3+3i 5+5i 6+6i
If the user asks the function to operate on the upper triangle of this matrix, that would imply the following.
1+1i 2−2i 3−3i
4+4i 5−5i
(Note that the imaginary parts now have negative sign. The matrix is Hermitian, so A[j,i] equals conj(A[i,j]).) That’s just the “other triangle” of the matrix. These are Hermitian algorithms, so they
will interpret the “other triangle of the other triangle” in a way that restores the original matrix. Even though the user never stores the original matrix, it would look like this mathematically.
1+1i 2−2i 3−3i
2+2i 4+4i 5−5i
3+3i 5+5i 6+6i
Future work
Batched linear algebra
We have submitted a separate proposal, P2901, that adds “batched” versions of linear algebra functions to this proposal. “Batched” linear algebra functions solve many independent problems all at
once, in a single function call. For discussion, please see also Section 6.2 of our background paper P1417R0. Batched interfaces have the following advantages.
• They expose more parallelism and vectorization opportunities for many small linear algebra operations.
• They are useful for many different fields, including machine learning.
• Hardware vendors currently offer both hardware features and optimized software libraries to support batched linear algebra.
• There is an ongoing interface standardization effort, in which we participate.
The mdspan data structure makes it easy to represent a batch of linear algebra objects, and to optimize their data layout.
With few exceptions, the extension of this proposal to support batched operations will not require new functions or interface changes. Only the requirements on functions will change. Output arguments
can have an additional rank; if so, then the leftmost extent will refer to the batch dimension. Input arguments may also have an additional rank to match; if they do not, the function will use
(“broadcast”) the same input argument for all the output arguments in the batch.
Data structures and utilities borrowed from other proposals
This proposal depends on mdspan, a feature proposed by P0009 and follow-on papers, and voted into C++23. The mdspan class template views the elements of a multidimensional array. The rank (number of
dimensions) is fixed at compile time. Users may specify some dimensions at run time and others at compile time; the type of the mdspan expresses this. mdspan also has two customization points:
• Layout expresses the array’s memory layout: e.g., row-major (C++ style), column-major (Fortran style), or strided. We use a custom Layout later in this paper to implement a “transpose view” of an
existing mdspan.
• Accessor defines the storage handle (data_handle_type) stored in the mdspan, as well as the reference type returned by its access operator. This is an extension point for modifying how access
happens, for example by using atomic_ref to get atomic access to every element. We use custom Accessors later in this paper to implement “scaled views” and “conjugated views” of an existing
New mdspan layouts in this proposal
Our proposal uses the layout mapping policy of mdspan in order to represent different matrix and vector data layouts. The current C++ Standard draft includes three layouts: layout_left, layout_right,
and layout_stride. P2642 proposes two more: layout_left_padded and layout_right_padded. These two layouts represent exactly the data layout assumed by the General (GE) matrix type in the BLAS’ C and
Fortran bindings. They have has two advantages.
1. Unlike layout_left and layout_right, any “submatrix” (subspan of consecutive rows and consecutive columns) of a matrix with layout_left_padded resp. layout_right_padded layout also has
layout_left_padded resp. layout_right_padded layout.
2. Unlike layout_stride, the two layouts always have compile-time unit stride in one of the matrix’s two extents.
BLAS functions call the possibly nonunit stride of the matrix the “leading dimension” of that matrix. For example, a BLAS function argument corresponding to the leading dimension of the matrix A is
called LDA, for “leading dimension of the matrix A.”
This proposal introduces a new layout, layout_blas_packed. This describes the layout used by the BLAS’ Symmetric Packed (SP), Hermitian Packed (HP), and Triangular Packed (TP) “types.” The
layout_blas_packed class has a “tag” template parameter that controls its properties; see below.
We do not include layouts for unpacked “types,” such as Symmetric (SY), Hermitian (HE), and Triangular (TR). Our paper P1674. explains our reasoning. In summary: Their actual layout – the arrangement
of matrix elements in memory – is the same as General. The only differences are constraints on what entries of the matrix algorithms may access, and assumptions about the matrix’s mathematical
properties. Trying to express those constraints or assumptions as “layouts” or “accessors” violates the spirit (and sometimes the law) of mdspan. We address these different matrix types with
different function names.
The packed matrix “types” do describe actual arrangements of matrix elements in memory that are not the same as in General. This is why we provide layout_blas_packed. Note that layout_blas_packed is
the first addition to the existing layouts that is neither always unique, nor always strided.
Algorithms cannot be written generically if they permit output arguments with nonunique layouts. Nonunique output arguments require specialization of the algorithm to the layout, since there’s no way
to know generically at compile time what indices map to the same matrix element. Thus, we will impose the following rule: Any mdspan output argument to our functions must always have unique layout
(is_always_unique() is true), unless otherwise specified.
Some of our functions explicitly require outputs with specific nonunique layouts. This includes low-rank updates to symmetric or Hermitian matrices.
Implementation experience
As far as the authors know, there are currently two implementations of the proposal:
• the reference implementation written and maintained by the authors and others, and
• an implementation by NVIDIA, which was released as part of their HPC SDK, and which uses NVIDIA libraries such as cuBLAS to accelerate many of the algorithms.
This proposal depends on mdspan (adopted into C++23), submdspan (P2630, adopted into the current C++ draft), and (indirectly) on the padded mdspan layouts in P2642. The reference implementation of
mdspan, written and maintained by the authors and others, includes implementations of all these features.
Interoperable with other linear algebra proposals
We believe this proposal is complementary to P1385, a proposal for a C++ Standard linear algebra library that introduces matrix and vector classes with overloaded arithmetic operators. The P1385
authors and we have expressed together in a joint paper, P1891, that P1673 and P1385 “are orthogonal. They are not competing papers; … there is no overlap of functionality.”
We designed P1673 in part as a natural foundation or implementation layer for existing libraries with similar design and goals as P1385. Our view is that a free function interface like P1673’s
clearly separates algorithms from data structures, and more naturally allows for a richer set of operations such as what the BLAS provides. Our paper P1674 explains why we think our proposal is a
minimal C++ “respelling” of the BLAS.
A natural extension of the present proposal would include accepting P1385’s matrix and vector objects as input for the algorithms proposed here. A straightforward way to do that would be for P1385’s
matrix and vector objects to make views of their data available as mdspan.
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International,
Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525.
Special thanks to Bob Steagall and Guy Davidson for boldly leading the charge to add linear algebra to the C++ Standard Library, and for many fruitful discussions. Thanks also to Andrew Lumsdaine for
his pioneering efforts and history lessons. In addition, I very much appreciate feedback from Davis Herring on constraints wording.
References by coathors
Other references
• E. Anderson, “Algorithm 978: Safe Scaling in the Level 1 BLAS,” ACM Transactions on Mathematical Software, Vol. 44, pp. 1-28, 2017.
• E. Anderson et al., LAPACK Users’ Guide, Third Edition, SIAM, 1999.
• “Basic Linear Algebra Subprograms Technical (BLAST) Forum Standard,” International Journal of High Performance Applications and Supercomputing, Vol. 16, No. 1, Spring 2002.
• L. S. Blackford, J. Demmel, J. Dongarra, I. Duff, S. Hammarling, G. Henry, M. Heroux, L. Kaufman, A. Lumsdaine, A. Petitet, R. Pozo, K. Remington, and R. C. Whaley, “An updated set of basic
linear algebra subprograms (BLAS),” ACM Transactions on Mathematical Software, Vol. 28, No. 2, Jun. 2002, pp. 135-151.
• J. L. Blue, “A Portable Fortran Program to Find the Euclidean Norm of a Vector,” ACM Transactions on Mathematical Software, Vol. 4, pp. 15-23, 1978.
• B. D. Craven, “Complex symmetric matrices”, Journal of the Australian Mathematical Society, Vol. 10, No. 3-4, Nov. 1969, pp. 341–354.
• E. Chow and A. Patel, “Fine-Grained Parallel Incomplete LU Factorization”, SIAM J. Sci. Comput., Vol. 37, No. 2, C169-C193, 2015.
• G. Davidson and B. Steagall, “A proposal to add linear algebra support to the C++ standard library,” P1385R7, Oct. 2022.
• B. Dawes, H. Hinnant, B. Stroustrup, D. Vandevoorde, and M. Wong, “Direction for ISO C++,” P0939R4, Oct. 2019.
• J. Demmel, “Applied Numerical Linear Algebra,” Society for Industrial and Applied Mathematics, Philadelphia, PA, 1997, ISBN 0-89871-389-7.
• J. Demmel, I. Dumitriu, and O. Holtz, “Fast linear algebra is stable,” Numerische Mathematik 108 (59-91), 2007.
• J. Demmel and H. D. Nguyen, “Fast Reproducible Floating-Point Summation,” 2013 IEEE 21st Symposium on Computer Arithmetic, 2013, pp. 163-172, doi: 10.1109/ARITH.2013.9.
• J. Dongarra, J. Du Croz, S. Hammarling, and I. Duff, “A set of level 3 basic linear algebra subprograms,” ACM Transactions on Mathematical Software, Vol. 16, No. 1, pp. 1-17, Mar. 1990.
• J. Dongarra, R. Pozo, and D. Walker, “LAPACK++: A Design Overview of Object-Oriented Extensions for High Performance Linear Algebra,” in Proceedings of Supercomputing ’93, IEEE Computer Society
Press, 1993, pp. 162-171.
• M. Gates, P. Luszczek, A. Abdelfattah, J. Kurzak, J. Dongarra, K. Arturov, C. Cecka, and C. Freitag, “C++ API for BLAS and LAPACK,” SLATE Working Notes, Innovative Computing Laboratory,
University of Tennessee Knoxville, Feb. 2018.
• K. Goto and R. A. van de Geijn, “Anatomy of high-performance matrix multiplication,”, ACM Transactions on Mathematical Software, Vol. 34, No. 3, pp. 1-25, May 2008.
• N. J. Higham, Accuracy and Stability of Numerical Algorithms, Second Edition, SIAM, 2022.
• N. J. Higham, “FORTRAN codes for estimating the one-norm of a real or complex matrix, with applications to condition estimation,” ACM Transactions on Mathematical Software, Vol. 14, No. 4, pp.
381-396, Dec. 1988.
• N. A. Josuttis, “The C++ Standard Library: A Tutorial and Reference,” Addison-Wesley, 1999.
• M. Kretz, “Data-Parallel Vector Types & Operations,” P0214R9, Mar. 2018.
• A. J. Perlis, “Epigrams on programming,” SIGPLAN Notices, Vol. 17, No. 9, pp. 7-13, 1982.
• G. Strang, “Introduction to Linear Algebra,” 5th Edition, Wellesley - Cambridge Press, 2016, ISBN 978-0-9802327-7-6, x+574 pages.
• D. Vandevoorde and N. A. Josuttis, “C++ Templates: The Complete Guide,” Addison-Wesley Professional, 2003.
Dummy Heading To Align Wording Numbering
Dummy Heading To Align Wording Numbering
Dummy Heading To Align Wording Numbering
Dummy Heading To Align Wording Numbering
Dummy Heading To Align Wording Numbering
Dummy Heading To Align Wording Numbering
Dummy Heading To Align Wording Numbering
Dummy Heading To Align Wording Numbering
Dummy Heading To Align Wording Numbering
Dummy Heading To Align Wording Numbering
Dummy Heading To Align Wording Numbering
Dummy Heading To Align Wording Numbering
Dummy Heading To Align Wording Numbering
Dummy Heading To Align Wording Numbering
Dummy Heading To Align Wording Numbering
Dummy Heading To Align Wording Numbering
Dummy Heading To Align Wording Numbering
Dummy Heading To Align Wording Numbering
Dummy Heading To Align Wording Numbering
The preceding headings just push the automatic numbering of the document generator so that “Wording” is in the place of 28.8 Numbers
Text in blockquotes is not proposed wording, but rather instructions for generating proposed wording. The � character is used to denote a placeholder section number which the editor shall
In the Bibliography, add the following reference:
J. Demmel, I. Dumitriu, and O. Holtz, “Fast linear algebra is stable,” Numerische Mathematik 108 (59-91), 2007.
In [algorithms.parallel.defns] modify paragraph 3.1:
• (3.1) All operations of the categories of the iterators that the algorithm is instantiated with.
In [algorithms.parallel.user] modify paragraph 1:
1 Unless otherwise specified, function objects passed into parallel algorithms as objects of type Predicate, BinaryPredicate, Compare, UnaryOperation, BinaryOperation, BinaryOperation1,
BinaryOperation2 and the operators used by the analogous overloads to these parallel algorithms that are formed by an invocation with the specified default predicate or operation (where applicable)
shall not directly or indirectly modify objects via their arguments, nor shall they rely on the identity of the provided objects.
In [headers], add the header <linalg> to [tab:headers.cpp].
In [diff.cpp23] add a new subsection
Clause 16: library introduction [diff.cpp23.library]
Affected subclause: 16.4.2.3
Change: New headers
Rationale: New functionality.
Effect on original feature: The following C++ headers are new: <linalg>. Valid C++2023 code that includes headers with these names may be invalid in this revision of C++.
In [version.syn], add
Adjust the placeholder value as needed so as to denote this proposal’s date of adoption.
At the end of Table � (“Numerics library summary”) in [numerics.general], add the following: [linalg], Linear algebra, <linalg>.
At the end of [numerics] (after subsection 28.8 [numbers]), add all the material that follows.
Basic linear algebra algorithms [linalg]
Overview [linalg.overview]
1 Subclause [linalg] defines basic linear algebra algorithms. The algorithms that access the elements of arrays view those elements through mdspan [mdspan].
namespace std::linalg {
// [linalg.tags.order], storage order tags
struct column_major_t;
inline constexpr column_major_t column_major;
struct row_major_t;
inline constexpr row_major_t row_major;
// [linalg.tags.triangle], triangle tags
struct upper_triangle_t;
inline constexpr upper_triangle_t upper_triangle;
struct lower_triangle_t;
inline constexpr lower_triangle_t lower_triangle;
// [linalg.tags.diagonal], diagonal tags
struct implicit_unit_diagonal_t;
inline constexpr implicit_unit_diagonal_t implicit_unit_diagonal;
struct explicit_diagonal_t;
inline constexpr explicit_diagonal_t explicit_diagonal;
// [linalg.layout.packed], class template layout_blas_packed
template<class Triangle,
class StorageOrder>
class layout_blas_packed;
// [linalg.helpers], exposition-only helpers
// [linalg.helpers.concepts], linear algebra argument concepts
template<class T>
constexpr bool is-mdspan = see below; // exposition only
template<class T>
concept in-vector = see below; // exposition only
template<class T>
concept out-vector = see below; // exposition only
template<class T>
concept inout-vector = see below; // exposition only
template<class T>
concept in-matrix = see below; // exposition only
template<class T>
concept out-matrix = see below; // exposition only
template<class T>
concept inout-matrix = see below; // exposition only
template<class T>
concept possibly-packed-inout-matrix = see below; // exposition only
template<class T>
concept in-object = see below; // exposition only
template<class T>
concept out-object = see below; // exposition only
template<class T>
concept inout-object = see below; // exposition only
// [linalg.scaled], scaled in-place transformation
// [linalg.scaled.scaledaccessor], class template scaled_accessor
template<class ScalingFactor,
class NestedAccessor>
class scaled_accessor;
// [linalg.scaled.scaled], function template scaled
template<class ScalingFactor,
class ElementType,
class Extents,
class Layout,
class Accessor>
constexpr auto scaled(
ScalingFactor alpha,
mdspan<ElementType, Extents, Layout, Accessor> x);
// [linalg.conj], conjugated in-place transformation
// [linalg.conj.conjugatedaccessor], class template conjugated_accessor
template<class NestedAccessor>
class conjugated_accessor;
// [linalg.conj.conjugated], function template conjugated
template<class ElementType,
class Extents,
class Layout,
class Accessor>
constexpr auto conjugated(
mdspan<ElementType, Extents, Layout, Accessor> a);
// [linalg.transp], transpose in-place transformation
// [linalg.transp.layout.transpose], class template layout_transpose
template<class Layout>
class layout_transpose;
// [linalg.transp.transposed], function template transposed
template<class ElementType,
class Extents,
class Layout,
class Accessor>
constexpr auto transposed(
mdspan<ElementType, Extents, Layout, Accessor> a);
// [linalg.conjtransposed],
// conjugated transpose in-place transformation
template<class ElementType,
class Extents,
class Layout,
class Accessor>
constexpr auto conjugate_transposed(
mdspan<ElementType, Extents, Layout, Accessor> a);
// [linalg.algs.blas1], BLAS 1 algorithms
// [linalg.algs.blas1.givens], Givens rotations
// [linalg.algs.blas1.givens.lartg], compute Givens rotation
template<class Real>
struct setup_givens_rotation_result {
Real c;
Real s;
Real r;
template<class Real>
struct setup_givens_rotation_result<complex<Real>> {
Real c;
complex<Real> s;
complex<Real> r;
template<class Real>
setup_givens_rotation(Real a, Real b) noexcept;
template<class Real>
setup_givens_rotation(complex<Real> a, complex<Real> b) noexcept;
// [linalg.algs.blas1.givens.rot], apply computed Givens rotation
template<inout-vector InOutVec1,
inout-vector InOutVec2,
class Real>
void apply_givens_rotation(
InOutVec1 x,
InOutVec2 y,
Real c,
Real s);
template<class ExecutionPolicy,
inout-vector InOutVec1,
inout-vector InOutVec2,
class Real>
void apply_givens_rotation(
ExecutionPolicy&& exec,
InOutVec1 x,
InOutVec2 y,
Real c,
Real s);
template<inout-vector InOutVec1,
inout-vector InOutVec2,
class Real>
void apply_givens_rotation(
InOutVec1 x,
InOutVec2 y,
Real c,
complex<Real> s);
template<class ExecutionPolicy,
inout-vector InOutVec1,
inout-vector InOutVec2,
class Real>
void apply_givens_rotation(
ExecutionPolicy&& exec,
InOutVec1 x,
InOutVec2 y,
Real c,
complex<Real> s);
// [linalg.algs.blas1.swap], swap elements
template<inout-object InOutObj1,
inout-object InOutObj2>
void swap_elements(InOutObj1 x,
InOutObj2 y);
template<class ExecutionPolicy,
inout-object InOutObj1,
inout-object InOutObj2>
void swap_elements(ExecutionPolicy&& exec,
InOutObj1 x,
InOutObj2 y);
// [linalg.algs.blas1.scal], multiply elements by scalar
template<class Scalar,
inout-object InOutObj>
void scale(Scalar alpha,
InOutObj x);
template<class ExecutionPolicy,
class Scalar,
inout-object InOutObj>
void scale(ExecutionPolicy&& exec,
Scalar alpha,
InOutObj x);
// [linalg.algs.blas1.copy], copy elements
template<in-object InObj,
out-object OutObj>
void copy(InObj x,
OutObj y);
template<class ExecutionPolicy,
in-object InObj,
out-object OutObj>
void copy(ExecutionPolicy&& exec,
InObj x,
OutObj y);
// [linalg.algs.blas1.add], add elementwise
template<in-object InObj1,
in-object InObj2,
out-object OutObj>
void add(InObj1 x,
InObj2 y,
OutObj z);
template<class ExecutionPolicy,
in-object InObj1,
in-object InObj2,
out-object OutObj>
void add(ExecutionPolicy&& exec,
InObj1 x,
InObj2 y,
OutObj z);
// [linalg.algs.blas1.dot],
// dot product of two vectors
// [linalg.algs.blas1.dot.dotu],
// nonconjugated dot product of two vectors
template<in-vector InVec1,
in-vector InVec2,
class Scalar>
T dot(InVec1 v1,
InVec2 v2,
Scalar init);
template<class ExecutionPolicy,
in-vector InVec1,
in-vector InVec2,
class Scalar>
T dot(ExecutionPolicy&& exec,
InVec1 v1,
InVec2 v2,
Scalar init);
template<in-vector InVec1,
in-vector InVec2>
auto dot(InVec1 v1,
InVec2 v2);
template<class ExecutionPolicy,
in-vector InVec1,
in-vector InVec2>
auto dot(ExecutionPolicy&& exec,
InVec1 v1,
InVec2 v2);
// [linalg.algs.blas1.dot.dotc],
// conjugated dot product of two vectors
template<in-vector InVec1,
in-vector InVec2,
class Scalar>
Scalar dotc(InVec1 v1,
InVec2 v2,
Scalar init);
template<class ExecutionPolicy,
in-vector InVec1,
in-vector InVec2,
class Scalar>
Scalar dotc(ExecutionPolicy&& exec,
InVec1 v1,
InVec2 v2,
Scalar init);
template<in-vector InVec1,
in-vector InVec2>
auto dotc(InVec1 v1,
InVec2 v2);
template<class ExecutionPolicy,
in-vector InVec1,
in-vector InVec2>
auto dotc(ExecutionPolicy&& exec,
InVec1 v1,
InVec2 v2);
// [linalg.algs.blas1.ssq],
// Scaled sum of squares of a vector's elements
template<class Scalar>
struct sum_of_squares_result {
Scalar scaling_factor;
Scalar scaled_sum_of_squares;
template<in-vector InVec,
class Scalar>
sum_of_squares_result<Scalar> vector_sum_of_squares(
InVec v,
sum_of_squares_result<Scalar> init);
template<class ExecutionPolicy,
in-vector InVec,
class Scalar>
sum_of_squares_result<Scalar> vector_sum_of_squares(
ExecutionPolicy&& exec,
InVec v,
sum_of_squares_result<Scalar> init);
// [linalg.algs.blas1.nrm2],
// Euclidean norm of a vector
template<in-vector InVec,
class Scalar>
Scalar vector_two_norm(InVec v,
Scalar init);
template<class ExecutionPolicy,
in-vector InVec,
class Scalar>
Scalar vector_two_norm(ExecutionPolicy&& exec,
InVec v,
Scalar init);
template<in-vector InVec>
auto vector_two_norm(InVec v);
template<class ExecutionPolicy,
in-vector InVec>
auto vector_two_norm(ExecutionPolicy&& exec,
InVec v);
// [linalg.algs.blas1.asum],
// sum of absolute values of vector elements
template<in-vector InVec,
class Scalar>
Scalar vector_abs_sum(InVec v,
Scalar init);
template<class ExecutionPolicy,
in-vector InVec,
class Scalar>
Scalar vector_abs_sum(ExecutionPolicy&& exec,
InVec v,
Scalar init);
template<in-vector InVec>
auto vector_abs_sum(InVec v);
template<class ExecutionPolicy,
in-vector InVec>
auto vector_abs_sum(ExecutionPolicy&& exec,
InVec v);
// [linalg.algs.blas1.iamax],
// index of maximum absolute value of vector elements
template<in-vector InVec>
typename InVec::extents_type vector_idx_abs_max(InVec v);
template<class ExecutionPolicy,
in-vector InVec>
typename InVec::extents_type vector_idx_abs_max(
ExecutionPolicy&& exec,
InVec v);
// [linalg.algs.blas1.matfrobnorm],
// Frobenius norm of a matrix
template<in-matrix InMat,
class Scalar>
Scalar matrix_frob_norm(InMat A,
Scalar init);
template<class ExecutionPolicy,
in-matrix InMat,
class Scalar>
Scalar matrix_frob_norm(
ExecutionPolicy&& exec,
InMat A,
Scalar init);
template<in-matrix InMat>
auto matrix_frob_norm(
InMat A);
template<class ExecutionPolicy,
in-matrix InMat>
auto matrix_frob_norm(
ExecutionPolicy&& exec,
InMat A);
// [linalg.algs.blas1.matonenorm],
// One norm of a matrix
template<in-matrix InMat,
class Scalar>
Scalar matrix_one_norm(
InMat A,
Scalar init);
template<class ExecutionPolicy,
in-matrix InMat,
class Scalar>
Scalar matrix_one_norm(
ExecutionPolicy&& exec,
InMat A,
Scalar init);
template<in-matrix InMat>
auto matrix_one_norm(
InMat A);
template<class ExecutionPolicy,
in-matrix InMat>
auto matrix_one_norm(
ExecutionPolicy&& exec,
InMat A);
// [linalg.algs.blas1.matinfnorm],
// Infinity norm of a matrix
template<in-matrix InMat,
class Scalar>
Scalar matrix_inf_norm(
InMat A,
Scalar init);
template<class ExecutionPolicy,
in-matrix InMat,
class Scalar>
Scalar matrix_inf_norm(
ExecutionPolicy&& exec,
InMat A,
Scalar init);
template<in-matrix InMat>
auto matrix_inf_norm(
InMat A);
template<class ExecutionPolicy,
in-matrix InMat>
auto matrix_inf_norm(
ExecutionPolicy&& exec,
InMat A);
// [linalg.algs.blas2], BLAS 2 algorithms
// [linalg.algs.blas2.gemv],
// general matrix-vector product
template<in-matrix InMat,
in-vector InVec,
out-vector OutVec>
void matrix_vector_product(InMat A,
InVec x,
OutVec y);
template<class ExecutionPolicy,
in-matrix InMat,
in-vector InVec,
out-vector OutVec>
void matrix_vector_product(ExecutionPolicy&& exec,
InMat A,
InVec x,
OutVec y);
template<in-matrix InMat,
in-vector InVec1,
in-vector InVec2,
out-vector OutVec>
void matrix_vector_product(InMat A,
InVec1 x,
InVec2 y,
OutVec z);
template<class ExecutionPolicy,
in-matrix InMat,
in-vector InVec1,
in-vector InVec2,
out-vector OutVec>
void matrix_vector_product(ExecutionPolicy&& exec,
InMat A,
InVec1 x,
InVec2 y,
OutVec z);
// [linalg.algs.blas2.symv],
// symmetric matrix-vector product
template<in-matrix InMat,
class Triangle,
in-vector InVec,
out-vector OutVec>
void symmetric_matrix_vector_product(InMat A,
Triangle t,
InVec x,
OutVec y);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
in-vector InVec,
out-vector OutVec>
void symmetric_matrix_vector_product(ExecutionPolicy&& exec,
InMat A,
Triangle t,
InVec x,
OutVec y);
template<in-matrix InMat,
class Triangle,
in-vector InVec1,
in-vector InVec2,
out-vector OutVec>
void symmetric_matrix_vector_product(
InMat A,
Triangle t,
InVec1 x,
InVec2 y,
OutVec z);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
in-vector InVec1,
in-vector InVec2,
out-vector OutVec>
void symmetric_matrix_vector_product(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
InVec1 x,
InVec2 y,
OutVec z);
// [linalg.algs.blas2.hemv],
// Hermitian matrix-vector product
template<in-matrix InMat,
class Triangle,
in-vector InVec,
out-vector OutVec>
void hermitian_matrix_vector_product(InMat A,
Triangle t,
InVec x,
OutVec y);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
in-vector InVec,
out-vector OutVec>
void hermitian_matrix_vector_product(ExecutionPolicy&& exec,
InMat A,
Triangle t,
InVec x,
OutVec y);
template<in-matrix InMat,
class Triangle,
in-vector InVec1,
in-vector InVec2,
out-vector OutVec>
void hermitian_matrix_vector_product(InMat A,
Triangle t,
InVec1 x,
InVec2 y,
OutVec z);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
in-vector InVec1,
in-vector InVec2,
out-vector OutVec>
void hermitian_matrix_vector_product(ExecutionPolicy&& exec,
InMat A,
Triangle t,
InVec1 x,
InVec2 y,
OutVec z);
// [linalg.algs.blas2.trmv],
// Triangular matrix-vector product
// Overwriting triangular matrix-vector product
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
in-vector InVec,
out-vector OutVec>
void triangular_matrix_vector_product(
InMat A,
Triangle t,
DiagonalStorage d,
InVec x,
OutVec y);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
in-vector InVec,
out-vector OutVec>
void triangular_matrix_vector_product(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InVec x,
OutVec y);
// In-place triangular matrix-vector product
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-vector InOutVec>
void triangular_matrix_vector_product(
InMat A,
Triangle t,
DiagonalStorage d,
InOutVec y);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-vector InOutVec>
void triangular_matrix_vector_product(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InOutVec y);
// Updating triangular matrix-vector product
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
in-vector InVec1,
in-vector InVec2,
out-vector OutVec>
void triangular_matrix_vector_product(InMat A,
Triangle t,
DiagonalStorage d,
InVec1 x,
InVec2 y,
OutVec z);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
in-vector InVec1,
in-vector InVec2,
out-vector OutVec>
void triangular_matrix_vector_product(ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InVec1 x,
InVec2 y,
OutVec z);
// [linalg.algs.blas2.trsv],
// Solve a triangular linear system
// Solve a triangular linear system, not in place
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
in-vector InVec,
out-vector OutVec,
class BinaryDivideOp>
void triangular_matrix_vector_solve(
InMat A,
Triangle t,
DiagonalStorage d,
InVec b,
OutVec x,
BinaryDivideOp divide);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
in-vector InVec,
out-vector OutVec,
class BinaryDivideOp>
void triangular_matrix_vector_solve(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InVec b,
OutVec x,
BinaryDivideOp divide);
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
in-vector InVec,
out-vector OutVec>
void triangular_matrix_vector_solve(
InMat A,
Triangle t,
DiagonalStorage d,
InVec b,
OutVec x);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
in-vector InVec,
out-vector OutVec>
void triangular_matrix_vector_solve(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InVec b,
OutVec x);
// Solve a triangular linear system, in place
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-vector InOutVec,
class BinaryDivideOp>
void triangular_matrix_vector_solve(
InMat A,
Triangle t,
DiagonalStorage d,
InOutVec b,
BinaryDivideOp divide);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-vector InOutVec,
class BinaryDivideOp>
void triangular_matrix_vector_solve(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InOutVec b,
BinaryDivideOp divide);
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-vector InOutVec>
void triangular_matrix_vector_solve(
InMat A,
Triangle t,
DiagonalStorage d,
InOutVec b);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-vector InOutVec>
void triangular_matrix_vector_solve(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InOutVec b);
// [linalg.algs.blas2.rank1],
// nonsymmetric rank-1 matrix update
template<in-vector InVec1,
in-vector InVec2,
inout-matrix InOutMat>
void matrix_rank_1_update(
InVec1 x,
InVec2 y,
InOutMat A);
template<class ExecutionPolicy,
in-vector InVec1,
in-vector InVec2,
inout-matrix InOutMat>
void matrix_rank_1_update(
ExecutionPolicy&& exec,
InVec1 x,
InVec2 y,
InOutMat A);
template<in-vector InVec1,
in-vector InVec2,
inout-matrix InOutMat>
void matrix_rank_1_update_c(
InVec1 x,
InVec2 y,
InOutMat A);
template<class ExecutionPolicy,
in-vector InVec1,
in-vector InVec2,
inout-matrix InOutMat>
void matrix_rank_1_update_c(
ExecutionPolicy&& exec,
InVec1 x,
InVec2 y,
InOutMat A);
// [linalg.algs.blas2.symherrank1],
// symmetric or Hermitian rank-1 matrix update
template<in-vector InVec,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void symmetric_matrix_rank_1_update(
InVec x,
InOutMat A,
Triangle t);
template<class ExecutionPolicy,
in-vector InVec,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void symmetric_matrix_rank_1_update(
ExecutionPolicy&& exec,
InVec x,
InOutMat A,
Triangle t);
template<class Scalar,
in-vector InVec,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void symmetric_matrix_rank_1_update(
Scalar alpha,
InVec x,
InOutMat A,
Triangle t);
template<class ExecutionPolicy,
class Scalar,
in-vector InVec,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void symmetric_matrix_rank_1_update(
ExecutionPolicy&& exec,
Scalar alpha,
InVec x,
InOutMat A,
Triangle t);
template<in-vector InVec,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void hermitian_matrix_rank_1_update(
InVec x,
InOutMat A,
Triangle t);
template<class ExecutionPolicy,
in-vector InVec,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void hermitian_matrix_rank_1_update(
ExecutionPolicy&& exec,
InVec x,
InOutMat A,
Triangle t);
template<class Scalar,
in-vector InVec,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void hermitian_matrix_rank_1_update(
Scalar alpha,
InVec x,
InOutMat A,
Triangle t);
template<class ExecutionPolicy,
class Scalar,
in-vector InVec,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void hermitian_matrix_rank_1_update(
ExecutionPolicy&& exec,
Scalar alpha,
InVec x,
InOutMat A,
Triangle t);
// [linalg.algs.blas2.rank2],
// Symmetric and Hermitian rank-2 matrix updates
// symmetric rank-2 matrix update
template<in-vector InVec1,
in-vector InVec2,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void symmetric_matrix_rank_2_update(
InVec1 x,
InVec2 y,
InOutMat A,
Triangle t);
template<class ExecutionPolicy,
in-vector InVec1,
in-vector InVec2,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void symmetric_matrix_rank_2_update(
ExecutionPolicy&& exec,
InVec1 x,
InVec2 y,
InOutMat A,
Triangle t);
// Hermitian rank-2 matrix update
template<in-vector InVec1,
in-vector InVec2,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void hermitian_matrix_rank_2_update(
InVec1 x,
InVec2 y,
InOutMat A,
Triangle t);
template<class ExecutionPolicy,
in-vector InVec1,
in-vector InVec2,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void hermitian_matrix_rank_2_update(
ExecutionPolicy&& exec,
InVec1 x,
InVec2 y,
InOutMat A,
Triangle t);
// [linalg.algs.blas3], BLAS 3 algorithms
// [linalg.algs.blas3.gemm],
// general matrix-matrix product
template<in-matrix InMat1,
in-matrix InMat2,
out-matrix OutMat>
void matrix_product(InMat1 A,
InMat2 B,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
in-matrix InMat2,
out-matrix OutMat>
void matrix_product(ExecutionPolicy&& exec,
InMat1 A,
InMat2 B,
OutMat C);
template<in-matrix InMat1,
in-matrix InMat2,
in-matrix InMat3,
out-matrix OutMat>
void matrix_product(InMat1 A,
InMat2 B,
InMat3 E,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
in-matrix InMat2,
in-matrix InMat3,
out-matrix OutMat>
void matrix_product(ExecutionPolicy&& exec,
InMat1 A,
InMat2 B,
InMat3 E,
OutMat C);
// [linalg.algs.blas3.xxmm],
// symmetric, Hermitian, and triangular matrix-matrix product
template<in-matrix InMat1,
class Triangle,
in-matrix InMat2,
out-matrix OutMat>
void symmetric_matrix_product(
InMat1 A,
Triangle t,
InMat2 B,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
class Triangle,
in-matrix InMat2,
out-matrix OutMat>
void symmetric_matrix_product(
ExecutionPolicy&& exec,
InMat1 A,
Triangle t,
InMat2 B,
OutMat C);
template<in-matrix InMat1,
class Triangle,
in-matrix InMat2,
out-matrix OutMat>
void hermitian_matrix_product(
InMat1 A,
Triangle t,
InMat2 B,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
class Triangle,
in-matrix InMat2,
out-matrix OutMat>
void hermitian_matrix_product(
ExecutionPolicy&& exec,
InMat1 A,
Triangle t,
InMat2 B,
OutMat C);
template<in-matrix InMat1,
class Triangle,
class DiagonalStorage,
in-matrix InMat2,
out-matrix OutMat>
void triangular_matrix_product(
InMat1 A,
Triangle t,
DiagonalStorage d,
InMat2 B,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
class Triangle,
class DiagonalStorage,
in-matrix InMat2,
out-matrix OutMat>
void triangular_matrix_product(
ExecutionPolicy&& exec,
InMat1 A,
Triangle t,
DiagonalStorage d,
InMat2 B,
OutMat C);
template<in-matrix InMat1,
in-matrix InMat2,
class Triangle,
out-matrix OutMat>
void symmetric_matrix_product(
InMat1 A,
InMat2 B,
Triangle t,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
in-matrix InMat2,
class Triangle,
out-matrix OutMat>
void symmetric_matrix_product(
ExecutionPolicy&& exec,
InMat1 A,
InMat2 B,
Triangle t,
OutMat C);
template<in-matrix InMat1,
in-matrix InMat2,
class Triangle,
out-matrix OutMat>
void hermitian_matrix_product(
InMat1 A,
InMat2 B,
Triangle t,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
in-matrix InMat2,
class Triangle,
out-matrix OutMat>
void hermitian_matrix_product(
ExecutionPolicy&& exec,
InMat1 A,
InMat2 B,
Triangle t,
OutMat C);
template<in-matrix InMat1,
in-matrix InMat2,
class Triangle,
class DiagonalStorage,
out-matrix OutMat>
void triangular_matrix_product(
InMat1 A,
InMat2 B,
Triangle t,
DiagonalStorage d,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
in-matrix InMat2,
class Triangle,
class DiagonalStorage,
out-matrix OutMat>
void triangular_matrix_product(
ExecutionPolicy&& exec,
InMat1 A,
InMat2 B,
Triangle t,
DiagonalStorage d,
OutMat C);
template<in-matrix InMat1,
class Triangle,
in-matrix InMat2,
in-matrix InMat3,
out-matrix OutMat>
void symmetric_matrix_product(
InMat1 A,
Triangle t,
InMat2 B,
InMat3 E,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
class Triangle,
in-matrix InMat2,
in-matrix InMat3,
out-matrix OutMat>
void symmetric_matrix_product(
ExecutionPolicy&& exec,
InMat1 A,
Triangle t,
InMat2 B,
InMat3 E,
OutMat C);
template<in-matrix InMat1,
class Triangle,
in-matrix InMat2,
in-matrix InMat3,
out-matrix OutMat>
void hermitian_matrix_product(
InMat1 A,
Triangle t,
InMat2 B,
InMat3 E,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
class Triangle,
in-matrix InMat2,
in-matrix InMat3,
out-matrix OutMat>
void hermitian_matrix_product(
ExecutionPolicy&& exec,
InMat1 A,
Triangle t,
InMat2 B,
InMat3 E,
OutMat C);
template<in-matrix InMat1,
class Triangle,
class DiagonalStorage,
in-matrix InMat2,
in-matrix InMat3,
out-matrix OutMat>
void triangular_matrix_product(
InMat1 A,
Triangle t,
DiagonalStorage d,
InMat2 B,
InMat3 E,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
class Triangle,
class DiagonalStorage,
in-matrix InMat2,
in-matrix InMat3,
out-matrix OutMat>
void triangular_matrix_product(
ExecutionPolicy&& exec,
InMat1 A,
Triangle t,
DiagonalStorage d,
InMat2 B,
InMat3 E,
OutMat C);
template<in-matrix InMat1,
in-matrix InMat2,
class Triangle,
in-matrix InMat3,
out-matrix OutMat>
void symmetric_matrix_product(
InMat1 A,
InMat2 B,
Triangle t,
InMat3 E,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
in-matrix InMat2,
class Triangle,
in-matrix InMat3,
out-matrix OutMat>
void symmetric_matrix_product(
ExecutionPolicy&& exec,
InMat1 A,
InMat2 B,
Triangle t,
InMat3 E,
OutMat C);
template<in-matrix InMat1,
in-matrix InMat2,
class Triangle,
in-matrix InMat3,
out-matrix OutMat>
void hermitian_matrix_product(
InMat1 A,
InMat2 B,
Triangle t,
InMat3 E,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
in-matrix InMat2,
class Triangle,
in-matrix InMat3,
out-matrix OutMat>
void hermitian_matrix_product(
ExecutionPolicy&& exec,
InMat1 A,
InMat2 B,
Triangle t,
InMat3 E,
OutMat C);
template<in-matrix InMat1,
in-matrix InMat2,
class Triangle,
class DiagonalStorage,
in-matrix InMat3,
out-matrix OutMat>
void triangular_matrix_product(
InMat1 A,
InMat2 B,
Triangle t,
DiagonalStorage d,
InMat3 E,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
in-matrix InMat2,
class Triangle,
class DiagonalStorage,
in-matrix InMat3,
out-matrix OutMat>
void triangular_matrix_product(
ExecutionPolicy&& exec,
InMat1 A,
InMat2 B,
Triangle t,
DiagonalStorage d,
InMat3 E,
OutMat C);
// [linalg.algs.blas3.trmm],
// in-place triangular matrix-matrix product
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-matrix InOutMat>
void triangular_matrix_left_product(
InMat A,
Triangle t,
DiagonalStorage d,
InOutMat C);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-matrix InOutMat>
void triangular_matrix_left_product(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InOutMat C);
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-matrix InOutMat>
void triangular_matrix_right_product(
InMat A,
Triangle t,
DiagonalStorage d,
InOutMat C);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-matrix InOutMat>
void triangular_matrix_right_product(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InOutMat C);
// [linalg.algs.blas3.rankk],
// rank-k update of a symmetric or Hermitian matrix
// rank-k symmetric matrix update
template<class Scalar,
in-matrix InMat,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void symmetric_matrix_rank_k_update(
Scalar alpha,
InMat A,
InOutMat C,
Triangle t);
template<class Scalar,
class ExecutionPolicy,
in-matrix InMat,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void symmetric_matrix_rank_k_update(
ExecutionPolicy&& exec,
Scalar alpha,
InMat A,
InOutMat C,
Triangle t);
template<in-matrix InMat,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void symmetric_matrix_rank_k_update(
InMat A,
InOutMat C,
Triangle t);
template<class ExecutionPolicy,
in-matrix InMat,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void symmetric_matrix_rank_k_update(
ExecutionPolicy&& exec,
InMat A,
InOutMat C,
Triangle t);
// rank-k Hermitian matrix update
template<class Scalar,
in-matrix InMat,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void hermitian_matrix_rank_k_update(
Scalar alpha,
InMat A,
InOutMat C,
Triangle t);
template<class ExecutionPolicy,
class Scalar,
in-matrix InMat,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void hermitian_matrix_rank_k_update(
ExecutionPolicy&& exec,
Scalar alpha,
InMat A,
InOutMat C,
Triangle t);
template<in-matrix InMat,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void hermitian_matrix_rank_k_update(
InMat A,
InOutMat C,
Triangle t);
template<class ExecutionPolicy,
in-matrix InMat,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void hermitian_matrix_rank_k_update(
ExecutionPolicy&& exec,
InMat A,
InOutMat C,
Triangle t);
// [linalg.algs.blas3.rank2k],
// rank-2k update of a symmetric or Hermitian matrix
// rank-2k symmetric matrix update
template<in-matrix InMat1,
in-matrix InMat2,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void symmetric_matrix_rank_2k_update(
InMat1 A,
InMat2 B,
InOutMat C,
Triangle t);
template<class ExecutionPolicy,
in-matrix InMat1,
in-matrix InMat2,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void symmetric_matrix_rank_2k_update(
ExecutionPolicy&& exec,
InMat1 A,
InMat2 B,
InOutMat C,
Triangle t);
// rank-2k Hermitian matrix update
template<in-matrix InMat1,
in-matrix InMat2,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void hermitian_matrix_rank_2k_update(
InMat1 A,
InMat2 B,
InOutMat C,
Triangle t);
template<class ExecutionPolicy,
in-matrix InMat1,
in-matrix InMat2,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void hermitian_matrix_rank_2k_update(
ExecutionPolicy&& exec,
InMat1 A,
InMat2 B,
InOutMat C,
Triangle t);
// [linalg.algs.blas3.trsm],
// solve multiple triangular linear systems
// solve multiple triangular systems on the left, not-in-place
template<in-matrix InMat1,
class Triangle,
class DiagonalStorage,
in-matrix InMat2,
out-matrix OutMat,
class BinaryDivideOp>
void triangular_matrix_matrix_left_solve(
InMat1 A,
Triangle t,
DiagonalStorage d,
InMat2 B,
OutMat X,
BinaryDivideOp divide);
template<class ExecutionPolicy,
in-matrix InMat1,
class Triangle,
class DiagonalStorage,
in-matrix InMat2,
out-matrix OutMat,
class BinaryDivideOp>
void triangular_matrix_matrix_left_solve(
ExecutionPolicy&& exec,
InMat1 A,
Triangle t,
DiagonalStorage d,
InMat2 B,
OutMat X,
BinaryDivideOp divide);
template<in-matrix InMat1,
class Triangle,
class DiagonalStorage,
in-matrix InMat2,
out-matrix OutMat>
void triangular_matrix_matrix_left_solve(
InMat1 A,
Triangle t,
DiagonalStorage d,
InMat2 B,
OutMat X);
template<class ExecutionPolicy,
in-matrix InMat1,
class Triangle,
class DiagonalStorage,
in-matrix InMat2,
out-matrix OutMat>
void triangular_matrix_matrix_left_solve(
ExecutionPolicy&& exec,
InMat1 A,
Triangle t,
DiagonalStorage d,
InMat2 B,
OutMat X);
// solve multiple triangular systems on the right, not-in-place
template<in-matrix InMat1,
class Triangle,
class DiagonalStorage,
in-matrix InMat2,
out-matrix OutMat,
class BinaryDivideOp>
void triangular_matrix_matrix_right_solve(
InMat1 A,
Triangle t,
DiagonalStorage d,
InMat2 B,
OutMat X,
BinaryDivideOp divide);
template<class ExecutionPolicy,
in-matrix InMat1,
class Triangle,
class DiagonalStorage,
in-matrix InMat2,
out-matrix OutMat,
class BinaryDivideOp>
void triangular_matrix_matrix_right_solve(
ExecutionPolicy&& exec,
InMat1 A,
Triangle t,
DiagonalStorage d,
InMat2 B,
OutMat X,
BinaryDivideOp divide);
template<in-matrix InMat1,
class Triangle,
class DiagonalStorage,
in-matrix InMat2,
out-matrix OutMat>
void triangular_matrix_matrix_right_solve(
InMat1 A,
Triangle t,
DiagonalStorage d,
InMat2 B,
OutMat X);
template<class ExecutionPolicy,
in-matrix InMat1,
class Triangle,
class DiagonalStorage,
in-matrix InMat2,
out-matrix OutMat>
void triangular_matrix_matrix_right_solve(
ExecutionPolicy&& exec,
InMat1 A,
Triangle t,
DiagonalStorage d,
InMat2 B,
OutMat X);
// solve multiple triangular systems on the left, in-place
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-matrix InOutMat,
class BinaryDivideOp>
void triangular_matrix_matrix_left_solve(
InMat A,
Triangle t,
DiagonalStorage d,
InOutMat B,
BinaryDivideOp divide);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-matrix InOutMat,
class BinaryDivideOp>
void triangular_matrix_matrix_left_solve(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InOutMat B,
BinaryDivideOp divide);
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-matrix InOutMat>
void triangular_matrix_matrix_left_solve(
InMat A,
Triangle t,
DiagonalStorage d,
InOutMat B);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-matrix InOutMat>
void triangular_matrix_matrix_left_solve(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InOutMat B);
// solve multiple triangular systems on the right, in-place
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-matrix InOutMat,
class BinaryDivideOp>
void triangular_matrix_matrix_right_solve(
InMat A,
Triangle t,
DiagonalStorage d,
InOutMat B,
BinaryDivideOp divide);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-matrix InOutMat,
class BinaryDivideOp>
void triangular_matrix_matrix_right_solve(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InOutMat B,
BinaryDivideOp divide);
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-matrix InOutMat>
void triangular_matrix_matrix_right_solve(
InMat A,
Triangle t,
DiagonalStorage d,
InOutMat B);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-matrix InOutMat>
void triangular_matrix_matrix_right_solve(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InOutMat B);
General [linalg.general]
1 For the effects of all functions in [linalg], when the effects are described as “computes R=EXPR” or “compute R=EXPR” (for some R and mathematical expression EXPR), the following apply:
2 Some of the functions and types in [linalg] distinguish between the “rows” and the “columns” of a matrix. For a matrix A and a multidimensional index i, j in A.extents(),
• (2.1) row i of A is the set of elements A[i, k1] for all k1 such that i, k1 is in A.extents(); and
• (2.2) column j of A is the set of elements A[k0, j] for all k0 such that k0, j is in A.extents().
3 Some of the functions in [linalg] distinguish between the “upper triangle,” “lower triangle,” and “diagonal” of a matrix.
• (3.1) The diagonal is the set of all elements of A accessed by A[i,i] for 0 ≤ i < min(A.extent(0), A.extent(1)).
• (3.2) The upper triangle of a matrix A is the set of all elements of A accessed by A[i,j] with i ≤ j. It includes the diagonal.
• (3.3) The lower triangle of A is the set of all elements of A accessed by A[i,j] with i ≥ j. It includes the diagonal.
4 For any function F that takes a parameter named t, t applies to accesses done through the parameter preceding t in the parameter list of F. Let m be such an access-modified function parameter. F
will only access the triangle of m specified by t. For accesses m[i, j] outside the triangle specified by t, F will use the value`
• conj-if-needed(m[j, i]) if the name of F starts with hermitian,
• m[j, i] if the name of F starts with symmetric, or
• the additive identity if the name of F starts with triangular.
[Example: Small vector product accessing only specified triangle. It would not be a precondition violation for the non-accessed matrix element to be non-zero.
template<class Triangle>
void triangular_matrix_vector_2x2_product(
mdspan<const float, extents<int, 2, 2>> m,
Triangle t,
mdspan<const float, extents<int, 2>> x,
mdspan<float, extents<int, 2>> y) {
static_assert(is_same_v<Triangle, lower_triangle_t> ||
is_same_v<Triangle, upper_triangle_t>);
if constexpr (is_same_v<Triangle, lower_triangle_t>) {
y[0] = m[0,0] * x[0]; // + 0 * x[1]
y[1] = m[1,0] * x[0] + m[1,1] * x[1];
} else { // upper_triangle_t
y[0] = m[0,0] * x[0] + m[0,1] * x[1];
y[1] = /* 0 * x[0] + */ m[1,1] * x[1];
–end example]
5 For any function F that takes a parameter named d, d applies to accesses done through the previous-of-the-previous parameter of d in the parameter list of F. Let m be such an access-modified
function parameter. If d specifies that an implicit unit diagonal is to be assumed, then
• F will not access the diagonal of m; and
• the algorithm will interpret m as if it has a unit diagonal, that is, a diagonal each of whose elements behaves as a two-sided multiplicative identity (even if m’s value type does not have a
two-sided multiplicative identity).
Otherwise, if d specifies that an explicit diagonal is to be assumed, then F will access the diagonal of m.
6 Within all the functions in [linalg], any calls to abs, conj, imag, and real are unqualified.
7 Two mdspan objects x and y alias each other, if they have the same extents e, and for every pack of integers i which is a multidimensional index in e, x[i...] and y[i...] refer to the same element.
[Note: This means that x and y view the same elements in the same order. – end note]
8 Two mdspan objects x and y overlap each other, if for some pack of integers i that is a multidimensional index in x.extents(), there exists a pack of integers j that is a multidimensional index in
y.extents(), such that x[i...] and y[j...] refer to the same element. [Note: Aliasing is a special case of overlapping. If x and y do not overlap, then they also do not alias each other. – end note]
Requirements [linalg.reqs]
Linear algebra value types [linalg.reqs.val]
1 Throughout [linalg], the following types are linear algebra value types:
• (1.1) the value_type type alias of any input or output mdspan parameter(s) of any function in [linalg]; and
• (1.2) the Scalar template parameter (if any) of any function or class in [linalg].
2 Linear algebra value types shall model semiregular.
3 A value-initialized object of linear algebra value type shall act as the additive identity.
Algorithm and class requirements [linalg.reqs.alg]
1 [linalg.reqs.alg] lists common requirements for all algorithms and classes in [linalg].
2 All of the following statements presume that the algorithm’s asymptotic complexity requirements, if any, are satisfied.
• (2.1) The function may make arbitrarily many objects of any linear algebra value type, value-initializing or direct-initializing them with any existing object of that type.
• (2.2) The triangular solve algorithms in [linalg.algs] either have a BinaryDivideOp template parameter (see [linalg.algs.reqs]) and a binary function object parameter divide of that type, or they
have effects equivalent to invoking such an algorithm. Triangular solve algorithms interpret divide(a, b) as a times the multiplicative inverse of b. Each triangular solve algorithm uses a
sequence of evaluations of *, *=, divide, unary +, binary +, +=, unary -, binary -, -=, and = operators that would produce the result specified by the algorithm’s Effects and Remarks when
operating on elements of a field with noncommutative multiplication. It is a precondition of the algorithm that any addend, any subtrahend, any partial sum of addends in any order (treating any
difference as a sum with the second term negated), any factor, any partial product of factors respecting their order, any numerator (first argument of divide), any denominator (second argument of
divide), and any assignment is a well-formed expression.
• (2.3) Otherwise, the function will use a sequence of evaluations of *, *=, +, +=, and = operators that would produce the result specified by the algorithm’s Effects and Remarks when operating on
elements of a semiring with noncommutative multiplication. It is a precondition of the algorithm that any addend, any partial sum of addends in any order, any factor, any partial product of
factors respecting their order, and any assignment is a well-formed expression.
• (2.4) If the function has an output mdspan, then all addends, subtrahends (for the triangular solve algorithms), or results of the divide parameter on intermediate terms (if the function takes a
divide parameter) are assignable and convertible to the output mdspan’s value_type.
• (2.5) The function may reorder addends and partial sums arbitrarily. [Note: Factors in each product are not reordered; multiplication is not necessarily commutative. – end note]
[Note: The above requirements do not prohibit implementation approaches and optimization techniques which are not user-observable. In particular, if for all input and output arguments the value_type
is a floating-point type, implementers are free to leverage approximations, use arithmetic operations not explicitly listed above, and compute floating point sums in any way that improves their
accuracy. – end note]
[Note: For all functions in [linalg], suppose that all input and output mdspan have as value_type a floating-point type, and any Scalar template argument has a floating-point type.
Then, functions may do all of the following:
• compute floating-point sums in any way that improves their accuracy for arbitrary input;
• perform additional arithmetic operations (other than those specified by the function’s wording and [linalg.reqs.alg]) in order to improve performance or accuracy; and
• use approximations (that might not be exact even if computing with real numbers), instead of computations that would be exact if it were possible to compute without rounding error;
as long as
• the function satisfies the complexity requirements; and
• the function is logarithmically stable, as defined in Demmel 2007. Strassen’s algorithm for matrix-matrix multiply is an example of a logarithmically stable algorithm. – end note]
1 The storage order tags describe the order of elements in an mdspan with layout_blas_packed ([linalg.layout.packed]) layout.
struct column_major_t {
explicit column_major_t() = default;
inline constexpr column_major_t column_major{};
struct row_major_t {
explicit row_major_t() = default;
inline constexpr row_major_t row_major{};
2 column_major_t indicates a column-major order, and row_major_t indicates a row-major order.
struct upper_triangle_t {
explicit upper_triangle_t() = default;
inline constexpr upper_triangle_t upper_triangle{};
struct lower_triangle_t {
explicit lower_triangle_t() = default;
inline constexpr lower_triangle_t lower_triangle{};
1 These tag classes specify whether algorithms and other users of a matrix (represented as an mdspan) access the upper triangle (upper_triangle_t) or lower triangle (lower_triangle_t) of the matrix
(see also [linalg.general]). This is also subject to the restrictions of implicit_unit_diagonal_t if that tag is also used as a function argument; see below.
struct implicit_unit_diagonal_t {
explicit implicit_unit_diagonal_t() = default;
inline constexpr implicit_unit_diagonal_t
struct explicit_diagonal_t {
explicit explicit_diagonal_t() = default;
inline constexpr explicit_diagonal_t explicit_diagonal{};
1 These tag classes specify whether algorithms access the matrix’s diagonal entries, and if not, then how algorithms interpret the matrix’s implicitly represented diagonal values.
2 The implicit_unit_diagonal_t tag indicates that an implicit unit diagonal is to be assumed ([linalg.general]).
3 The explicit_diagonal_t tag indicates that an explicit diagonal is used ([linalg.general]).
Layouts for packed matrix types [linalg.layout.packed]
Overview [linalg.layout.packed.overview]
1 layout_blas_packed is an mdspan layout mapping policy that represents a square matrix that stores only the entries in one triangle, in a packed contiguous format. Its Triangle template parameter
determines whether an mdspan with this layout stores the upper or lower triangle of the matrix. Its StorageOrder template parameter determines whether the layout packs the matrix’s elements in
column-major or row-major order.
2 A StorageOrder of column_major_t indicates column-major ordering. This packs matrix elements starting with the leftmost (least column index) column, and proceeding column by column, from the top
entry (least row index).
3 A StorageOrder of row_major_t indicates row-major ordering. This packs matrix elements starting with the topmost (least row index) row, and proceeding row by row, from the leftmost (least column
index) entry.
[Note: layout_blas_packed describes the data layout used by the BLAS’ Symmetric Packed (SP), Hermitian Packed (HP), and Triangular Packed (TP) matrix types. – end note]
template<class Triangle,
class StorageOrder>
class layout_blas_packed {
using triangle_type = Triangle;
using storage_order_type = StorageOrder;
template<class Extents>
struct mapping {
using extents_type = Extents;
using index_type = typename extents_type::index_type;
using size_type = typename extents_type::size_type;
using rank_type = typename extents_type::rank_type;
using layout_type = layout_blas_packed;
// [linalg.layout.packed.cons], constructors
constexpr mapping() noexcept = default;
constexpr mapping(const mapping&) noexcept = default;
constexpr mapping(const extents_type&) noexcept;
template<class OtherExtents>
constexpr explicit(! is_convertible_v<OtherExtents, extents_type>)
mapping(const mapping<OtherExtents>& other) noexcept;
constexpr mapping& operator=(const mapping&) noexcept = default;
// [linalg.layout.packed.obs], observers
constexpr const extents_type& extents() const noexcept { return extents_; }
constexpr index_type required_span_size() const noexcept;
template<class Index0, class Index1>
constexpr index_type operator() (Index0 ind0, Index1 ind1) const noexcept;
static constexpr bool is_always_unique() noexcept {
return (extents_type::static_extent(0) != dynamic_extent &&
extents_type::static_extent(0) < 2) ||
(extents_type::static_extent(1) != dynamic_extent &&
extents_type::static_extent(1) < 2);
static constexpr bool is_always_exhaustive() noexcept { return true; }
static constexpr bool is_always_strided() noexcept
{ return is_always_unique(); }
constexpr bool is_unique() const noexcept {
return extents_.extent(0) < 2;
constexpr bool is_exhaustive() const noexcept { return true; }
constexpr bool is_strided() const noexcept {
return extents_.extent(0) < 2;
constexpr index_type stride(rank_type) const noexcept;
template<class OtherExtents>
friend constexpr bool
operator==(const mapping&, const mapping<OtherExtents>&) noexcept;
extents_type extents_{}; // exposition only
4 Mandates:
5 layout_blas_packed<T, SO>::mapping<E> is a trivially copyable type that models regular for each T, SO, and E.
Constructors [linalg.layout.packed.cons]
6 Preconditions:
• (6.1) Let N be equal to e.extent(0). Then, N×(N+1) is representable as a value of type index_type ([basic.fundamental]).
• (6.2) e.extent(0) equals e.extent(1).
7 Effects: Direct-non-list-initializes extents_ with e.
template<class OtherExtents>
explicit(! is_convertible_v<OtherExtents, extents_type>)
constexpr mapping(const mapping<OtherExtents>& other) noexcept;
8 Constraints: is_constructible_v<extents_type, OtherExtents> is true.
9 Preconditions: Let N be other.extents().extent(0). Then, N×(N+1) is representable as a value of type index_type ([basic.fundamental]).
10 Effects: Direct-non-list-initializes extents_ with other.extents().
Observers [linalg.layout.packed.obs]
11 Returns: extents_.extent(0) * (extents_.extent(0) + 1)/2. [Note: For example, a 5 x 5 packed matrix only stores 15 matrix elements. – end note]
template<class Index0, class Index1>
constexpr index_type operator() (Index0 ind0, Index1 ind1) const noexcept;
12 Constraints:
13 Preconditions: extents_type::index-cast(ind0), extents_type::index-cast(ind1) is a multidimensional index in extents_ ([mdspan.overview]).
14 Returns: Let N be extents_.extent(0), let i be extents_type::index-cast(ind0), and let j be extents_type::index-cast(ind1). Then
• (14.1) (*this)(j, i) if i > j is true; otherwise
• (14.2) i + j * (j + 1)/2 if is_same_v<StorageOrder, column_major_t> && is_same_v<Triangle, upper_triangle_t> is true or is_same_v<StorageOrder, row_major_t> && is_same_v<Triangle,
lower_triangle_t> is true; otherwise
• (14.3) j + N * i - i * (i + 1)/2.
15 Preconditions:
• (15.1) is_strided() is true, and
• (15.1) r < extents_type::rank() is true.
16 Returns: 1.
template<class OtherExtents>
friend constexpr bool
operator==(const mapping& x, const mapping<OtherExtents>& y) noexcept;
17 Effects: Equivalent to: return x.extents() == y.extents();
Exposition-only helpers [linalg.helpers]
abs-if-needed [linalg.helpers.abs]
1 The name abs-if-needed denotes an exposition-only function object. The expression abs-if-needed(E) for subexpression E whose type is T is expression-equivalent to:
• (1.1) E if T is an unsigned integer;
• (1.2) otherwise, std::abs(E) if T is an arithmetic type,
• (1.3) otherwise, abs(E), if that expression is valid, with overload resolution performed in a context that includes the declaration template<class T> T abs(T) = delete;. If the function selected
by overload resolution does not return the absolute value of its input, the program is ill-formed, no diagnostic required.
conj-if-needed [linalg.helpers.conj]
1 The name conj-if-needed denotes an exposition-only function object. The expression conj-if-needed(E) for subexpression E whose type is T is expression-equivalent to:
• (1.1) conj(E), if T is not an arithmetic type and the expression conj(E) is valid, with overload resolution performed in a context that includes the declaration template<class T> T conj(const T&)
= delete;. If the function selected by overload resolution does not return the complex conjugate of its input, the program is ill-formed, no diagnostic required;
• (1.2) otherwise, E.
real-if-needed [linalg.helpers.real]
1 The name real-if-needed denotes an exposition-only function object. The expression real-if-needed(E) for subexpression E whose type is T is expression-equivalent to:
• (1.1) real(E), if T is not an arithmetic type and the expression real(E) is valid, with overload resolution performed in a context that includes the declaration template<class T> T real(const T&)
= delete;. If the function selected by overload resolution does not return the real part of its input, the program is ill-formed, no diagnostic required;
• (1.2) otherwise, E.
imag-if-needed [linalg.helpers.imag]
1 The name imag-if-needed denotes an exposition-only function object. The expression imag-if-needed(E) for subexpression E whose type is T is expression-equivalent to:
• (1.1) imag(E), if T is not an arithmetic type and the expression imag(E) is valid, with overload resolution performed in a context that includes the declaration template<class T> T imag(const T&)
= delete;. If the function selected by overload resolution does not return the imaginary part of its input, the program is ill-formed, no diagnostic required;
• (1.2) otherwise, ((void)E, T{}).
Linear algebra argument concepts [linalg.helpers.concepts]
1 The exposition-only concepts defined in this section constrain the algorithms in [linalg.algs].
template<class T>
constexpr bool is-mdspan = false; // exposition only
template<class ElementType, class Extents, class Layout, class Accessor>
constexpr bool is-mdspan<mdspan<ElementType, Extents, Layout, Accessor>> = true; // exposition only
template<class T>
concept in-vector = // exposition only
is-mdspan<T> &&
T::rank() == 1;
template<class T>
concept out-vector = // exposition only
is-mdspan<T> &&
T::rank() == 1 &&
is_assignable_v<typename T::reference, typename T::element_type> &&
template<class T>
concept inout-vector = // exposition only
is-mdspan<T> &&
T::rank() == 1 &&
is_assignable_v<typename T::reference, typename T::element_type> &&
template<class T>
concept in-matrix = // exposition only
is-mdspan<T> &&
T::rank() == 2;
template<class T>
concept out-matrix = // exposition only
is-mdspan<T> &&
T::rank() == 2 &&
is_assignable_v<typename T::reference, typename T::element_type> &&
template<class T>
concept inout-matrix = // exposition only
is-mdspan<T> &&
T::rank() == 2 &&
is_assignable_v<typename T::reference, typename T::element_type> &&
template<class T>
constexpr bool is-layout-blas-packed = false; // exposition only
template<class Triangle, class StorageOrder>
constexpr bool is-layout-blas-packed<layout-blas-packed<Triangle, StorageOrder>> = true; // exposition only
template<class T>
concept possibly-packed-inout-matrix = // exposition only
is-mdspan<T> &&
T::rank() == 2 &&
is_assignable_v<typename T::reference, typename T::element_type> &&
(T::is_always_unique() || is-layout-blas-packed<typename T::layout_type>);
template<class T>
concept in-object = // exposition only
is-mdspan<T> &&
(T::rank() == 1 || T::rank() == 2);
template<class T>
concept out-object = // exposition only
is-mdspan<T> &&
(T::rank() == 1 || T::rank() == 2) &&
is_assignable_v<typename T::reference, typename T::element_type> &&
template<class T>
concept inout-object = // exposition only
is-mdspan<T> &&
(T::rank() == 1 || T::rank() == 2) &&
is_assignable_v<typename T::reference, typename T::element_type> &&
2 If a function in [linalg.algs] accesses the elements of a parameter constrained by in-vector, in-matrix, or in-object, those accesses will not modify the elements.
3 Unless explicitly permitted, any inout-vector, inout-matrix, inout-object, possibly-packed-inout-matrix, out-vector, out-matrix, or out-object parameter of a function in [linalg.algs] shall not
overlap any other mdspan parameter of the function.
Exposition-only helpers for algorithm mandates [linalg.helpers.mandates]
[Note: These exposition-only helper functions use the less constraining input concepts even for the output arguments, because the additional constraint for assignability of elements is not necessary,
and they are sometimes used in a context where the third argument is an input type too. - end Note.]
template<class MDS1, class MDS2>
requires(is-mdspan<MDS1> && is-mdspan<MDS2>)
constexpr bool compatible-static-extents(size_t r1, size_t r2) // exposition only
return MDS1::static_extent(r1) == dynamic_extent ||
MDS2::static_extent(r2) == dynamic_extent ||
MDS1::static_extent(r1) == MDS2::static_extent(r2));
template<in-vector In1, in-vector In2, in-vector Out>
constexpr bool possibly-addable() // exposition only
return compatible-static-extents<Out, In1>(0, 0) &&
compatible-static-extents<Out, In2>(0, 0) &&
compatible-static-extents<In1, In2>(0, 0);
template<in-matrix In1, in-matrix In2, in-matrix Out>
constexpr bool possibly-addable() // exposition only
return compatible-static-extents<Out, In1>(0, 0) &&
compatible-static-extents<Out, In1>(1, 1) &&
compatible-static-extents<Out, In2>(0, 0) &&
compatible-static-extents<Out, In2>(1, 1) &&
compatible-static-extents<In1, In2>(0, 0) &&
compatible-static-extents<In1, In2>(1, 1);
template<in-matrix InMat, in-vector InVec, in-vector OutVec>
constexpr bool possibly-multipliable() // exposition only
return compatible-static-extents<OutVec, InMat>(0, 0) &&
compatible-static-extents<InMat, InVec>(1, 0);
template<in-vector InVec, in-matrix InMat, in-vector OutVec>
constexpr bool possibly-multipliable() // exposition only
return compatible-static-extents<OutVec, InMat>(0, 1) &&
compatible-static-extents<InMat, InVec>(0, 0);
template<in-matrix InMat1, in-matrix InMat2, in-matrix OutMat>
constexpr bool possibly-multipliable() // exposition only
return compatible-static-extents<OutMat, InMat1>(0, 0) &&
compatible-static-extents<OutMat, InMat2>(1, 1) &&
compatible-static-extents<InMat1, InMat2>(1, 0);
Exposition-only checks for algorithm preconditions [linalg.helpers.precond]
[Note: These helpers use the less constraining input concepts even for the output arguments, because the additional constraint for assignability of elements is not necessary, and they are sometimes
used in a context where the third argument is an input type too. - end Note.]
constexpr bool addable( // exposition only
const in-vector auto& in1,
const in-vector auto& in2,
const in-vector auto& out)
return out.extent(0) == in1.extent(0) &&
out.extent(0) == in2.extent(0);
constexpr bool addable( // exposition only
const in-matrix auto& in1,
const in-matrix auto& in2,
const in-matrix auto& out)
return out.extent(0) == in1.extent(0) &&
out.extent(1) == in1.extent(1) &&
out.extent(0) == in2.extent(0) &&
out.extent(1) == in2.extent(1);
constexpr bool multipliable( // exposition only
const in-matrix auto& in_mat,
const in-vector auto& in_vec,
const in-vector auto& out_vec)
return out_vec.extent(0) == in_mat.extent(0) &&
in_mat.extent(1) == in_vec.extent(0);
constexpr bool multipliable( // exposition only
const in-vector auto& in_vec,
const in-matrix auto& in_mat,
const in-vector auto& out_vec)
return out_vec.extent(0) == in_mat.extent(1) &&
in_mat.extent(0) == in_vec.extent(0);
constexpr bool multipliable( // exposition only
const in-matrix auto& in_mat1,
const in-matrix auto& in_mat2,
const in-matrix auto& out_mat)
return out_mat.extent(0) == in_mat1.extent(0) &&
out_mat.extent(1) == in_mat2.extent(1) &&
in1_mat.extent(1) == in_mat2.extent(0);
Scaled in-place transformation [linalg.scaled]
Introduction [linalg.scaled.intro]
1 The scaled function takes a value alpha and an mdspan x, and returns a new read-only mdspan that represents the elementwise product of alpha with each element of x.
using Vec = mdspan<double, dextents<size_t, 1>>;
// z = alpha * x + y
void z_equals_alpha_times_x_plus_y(
double alpha, Vec x,
Vec y,
Vec z)
add(scaled(alpha, x), y, z);
// z = alpha * x + beta * y
void z_equals_alpha_times_x_plus_beta_times_y(
double alpha, Vec x,
double beta, Vec y,
Vec z)
add(scaled(alpha, x), scaled(beta, y), z);
–end example]
Class template scaled_accessor [linalg.scaled.scaledaccessor]
1 The class template scaled_accessor is an mdspan accessor policy which upon access produces scaled elements. reference. It is part of the implementation of scaled [linalg.scaled.scaled].
template<class ScalingFactor,
class NestedAccessor>
class scaled_accessor {
using element_type = add_const_t<decltype(declval<ScalingFactor>() * declval<NestedAccessor::element_type>())>;
using reference = remove_const_t<element_type>;
using data_handle_type = NestedAccessor::data_handle_type;
using offset_policy = scaled_accessor<ScalingFactor, NestedAccessor::offset_policy>;
constexpr scaled_accessor() = default;
template<class OtherNestedAccessor>
explicit(!is_convertible_v<OtherNestedAccessor, NestedAccessor>)
constexpr scaled_accessor(const scaled_accessor<ScalingFactor, OtherNestedAccessor>& other);
constexpr scaled_accessor(const ScalingFactor& s, const NestedAccessor& a);
constexpr reference access(data_handle_type p, size_t i) const;
constexpr offset_policy::data_handle_type
offset(data_handle_type p, size_t i) const;
constexpr const ScalingFactor& scaling_factor() const noexcept { return scaling-factor; }
constexpr const NestedAccessor& nested_accessor() const noexcept { return nested-accessor; }
ScalingFactor scaling-factor{}; // exposition only
NestedAccessor nested-accessor{}; // exposition only
2 Mandates:
template<class OtherNestedAccessor>
explicit(!is_convertible_v<OtherNestedAccessor, NestedAccessor>)
constexpr scaled_accessor(const scaled_accessor<ScalingFactor, OtherNestedAccessor>& other);
3 Constraints: is_constructible_v<NestedAccessor, const OtherNestedAccessor&> is true.
4 Effects:
• (4.1) Direct-non-list-initializes scaling-factor with other.scaling_factor(), and
• (4.2) direct-non-list-initializes nested-accessor with other.nested_accessor().
5 Effects:
• (5.1) Direct-non-list-initializes scaling-factor with s, and
• (5.2) direct-non-list-initializes nested-accessor with a.
6 Returns: scaling_factor() * NestedAccessor::element_type( nested-accessor.access(p, i))
7 Returns: nested-accessor.offset(p, i)
Function template scaled [linalg.scaled.scaled]
1 The scaled function template takes a scaling factor alpha and an mdspan x, and returns a new read-only mdspan with the same domain as x, that represents the elementwise product of alpha with each
element of x.
template<class ScalingFactor,
class ElementType,
class Extents,
class Layout,
class Accessor>
constexpr auto scaled(
ScalingFactor alpha,
mdspan<ElementType, Extents, Layout, Accessor> x);
2 Let SA be scaled_accessor<ScalingFactor, Accessor>
3 Returns: mdspan<typename SA::element_type, Extents, Layout, SA>(x.data_handle(), x.mapping(), SA(alpha, x.accessor()))
void test_scaled(mdspan<double, extents<int, 10>> x)
auto x_scaled = scaled(5.0, x);
for(int i = 0; i < x.extent(0); ++i) {
assert(x_scaled[i] == 5.0 * x[i]);
–end example]
Conjugated in-place transformation [linalg.conj]
Introduction [linalg.conj.intro]
1 The conjugated function takes an mdspan x, and returns a new read-only mdspan y with the same domain as x, whose elements are the complex conjugates of the corresponding elements of x.
Class template conjugated_accessor [linalg.conj.conjugatedaccessor]
1 The class template conjugated_accessor is an mdspan accessor policy which upon access produces conjugate elements. It is part of the implementation of conjugated [linalg.conj.conjugated].
template<class NestedAccessor>
class conjugated_accessor {
using element_type = add_const_t<decltype(conj-if-needed(declval<NestedAccessor::element_type>()))>;
using reference = remove_const_t<element_type>;
using data_handle_type = typename NestedAccessor::data_handle_type;
using offset_policy = conjugated_accessor<NestedAccessor::offset_policy>;
constexpr conjugated_accessor() = default;
template<class OtherNestedAccessor>
explicit(!is_convertible_v<OtherNestedAccessor, NestedAccessor>>)
constexpr conjugated_accessor(const conjugated_accessor<OtherNestedAccessor>& other);
constexpr reference access(data_handle_type p, size_t i) const;
constexpr typename offset_policy::data_handle_type
offset(data_handle_type p, size_t i) const;
constexpr const Accessor& nested_accessor() const noexcept { return nested-accessor_; }
NestedAccessor nested-accessor_{}; // exposition only
2 Mandates:
3 Effects: Direct-non-list-initializes nested-accessor_ with acc.
template<class OtherNestedAccessor>
explicit(!is_convertible_v<OtherNestedAccessor, NestedAccessor>>)
constexpr conjugated_accessor(const conjugated_accessor<OtherNestedAccessor>& other);
4 Constraints: is_constructible_v<NestedAccessor, const OtherNestedAccessor&> is true.
5 Effects: Direct-non-list-initializes nested-accessor_ with other.nested_accessor().
6 Returns: conj-if-needed(NestedAccessor::element_type(nested-accessor_.access(p, i)))
7 Returns: nested-accessor_.offset(p, i)
Function template conjugated [linalg.conj.conjugated]
template<class ElementType,
class Extents,
class Layout,
class Accessor>
constexpr auto conjugated(
mdspan<ElementType, Extents, Layout, Accessor> a);
1 Let A be remove_cvref_t<decltype(a.accessor().nested_accessor())> if Accessor is a specialization of conjugated_accessor, and otherwise conjugated_accessor<Accessor>.
2 Returns:
• (2.1) mdspan<typename A::element_type, Extents, Layout, A>(a.data_handle(), a.mapping(), a.accessor().nested_accessor()) if Accessor is a specialization of conjugated_accessor; otherwise
• (2.2) mdspan<typename A::element_type, Extents, Layout, A>(a.data_handle(), a.mapping(), conjugated_accessor(a.accessor())).
void test_conjugated_complex(
mdspan<complex<double>, extents<int, 10>> a)
auto a_conj = conjugated(a);
for(int i = 0; i < a.extent(0); ++i) {
assert(a_conj[i] == conj(a[i]);
auto a_conj_conj = conjugated(a_conj);
for(int i = 0; i < a.extent(0); ++i) {
assert(a_conj_conj[i] == a[i]);
void test_conjugated_real(
mdspan<double, extents<int, 10>> a)
auto a_conj = conjugated(a);
for(int i = 0; i < a.extent(0); ++i) {
assert(a_conj[i] == a[i]);
auto a_conj_conj = conjugated(a_conj);
for(int i = 0; i < a.extent(0); ++i) {
assert(a_conj_conj[i] == a[i]);
–end example]
Transpose in-place transformation [linalg.transp]
Introduction [linalg.transp.intro]
1 layout_transpose is an mdspan layout mapping policy that swaps the two indices, extents, and strides of any unique mdspan layout mapping policy.
2 The transposed function takes an mdspan representing a matrix, and returns a new mdspan representing the transpose of the input matrix.
Exposition-only helpers for layout_transpose and transposed [linalg.transp.helpers]
1 The exposition-only transpose-extents function takes an extents object representing the extents of a matrix, and returns a new extents object representing the extents of the transpose of the
2 The exposition-only alias template transpose-extents-t<InputExtents> gives the type of transpose-extents(e) for a given extents object e of type InputExtents.
template<class IndexType, size_t InputExtent0, size_t InputExtent1>
constexpr extents<IndexType, InputExtent1, InputExtent0>
transpose-extents(const extents<IndexType, InputExtent0, InputExtent1>& in); // exposition only
3 Returns: extents<IndexType, InputExtent1, InputExtent0>(in.extent(1), in.extent(0))
template<class InputExtents>
using transpose-extents-t =
decltype(transpose-extents(declval<InputExtents>())); // exposition only
Class template layout_transpose [linalg.transp.layout.transpose]
1 layout_transpose is an mdspan layout mapping policy that swaps the two indices, extents, and strides of any mdspan layout mapping policy.
template<class Layout>
class layout_transpose {
using nested_layout_type = Layout;
template<class Extents>
struct mapping {
using nested-mapping-type =
typename Layout::template mapping<
transpose-extents-t<Extents>>; // exposition only
using extents_type = Extents;
using index_type = typename extents_type::index_type;
using size_type = typename extents_type::size_type;
using rank_type = typename extents_type::rank_type;
using layout_type = layout_transpose;
constexpr explicit mapping(const nested-mapping-type&);
constexpr const extents_type& extents() const noexcept
{ return extents_; }
constexpr index_type required_span_size() const
{ return nested-mapping_.required_span_size(); }
template<class Index0, class Index1>
constexpr index_type operator()(Index0 ind0, Index1 ind1) const
{ return nested-mapping_(ind1, ind0); }
constexpr const nested-mapping-type& nested_mapping() const noexcept
{ return nested-mapping_; }
static constexpr bool is_always_unique() noexcept
{ return nested-mapping-type::is_always_unique(); }
static constexpr bool is_always_exhaustive() noexcept
{ return nested-mapping_type::is_always_exhaustive(); }
static constexpr bool is_always_strided() noexcept
{ return nested-mapping_type::is_always_strided(); }
constexpr bool is_unique() const
{ return nested-mapping_.is_unique(); }
constexpr bool is_exhaustive() const
{ return nested-mapping_.is_exhaustive(); }
constexpr bool is_strided() const
{ return nested-mapping_.is_strided(); }
constexpr index_type stride(size_t r) const;
template<class OtherExtents>
friend constexpr bool
operator==(const mapping& x, const mapping<OtherExtents>& y);
nested-mapping-type nested-mapping_; // exposition only
extents_type extents_; // exposition only
2 Layout shall meet the layout mapping policy requirements ([mdspan.layout.policy.reqmts]).
3 Mandates:
• (3.1) Extents is a specialization of std::extents, and
• (3.2) Extents::rank() equals 2.
4 Effects:
• (4.1) Initializes nested-mapping_ with map, and
• (4.1) initializes extents_ with transpose-extents(map.extents()).
5 Preconditions:
6 Returns: nested-mapping_ .stride(r == 0 ? 1 : 0)
template<class OtherExtents>
friend constexpr bool
operator==(const mapping& x, const mapping<OtherExtents>& y);
7 Constraints: The expression x.nested-mapping_ == y.nested-mapping_ is well-formed and its result is convertible to bool.
8 Returns: x.nested-mapping_ == y.nested-mapping_.
Function template transposed [linalg.transp.transposed]
1 The transposed function takes a rank-2 mdspan representing a matrix, and returns a new mdspan representing the input matrix’s transpose. The input matrix’s data are not modified, and the returned
mdspan accesses the input matrix’s data in place.
template<class ElementType,
class Extents,
class Layout,
class Accessor>
constexpr auto transposed(
mdspan<ElementType, Extents, Layout, Accessor> a);
2 Mandates: Extents::rank() == 2 is true.
3 Let ReturnExtents be transpose-extents-t<Extents>. Let R be mdspan<ElementType, ReturnExtents, ReturnLayout, Accessor>, where ReturnLayout is:
4 Returns: With ReturnMapping being the type typename ReturnLayout::template mapping<ReturnExtents>:
void test_transposed(mdspan<double, extents<size_t, 3, 4>> a)
const auto num_rows = a.extent(0);
const auto num_cols = a.extent(1);
auto a_t = transposed(a);
assert(num_rows == a_t.extent(1));
assert(num_cols == a_t.extent(0));
assert(a.stride(0) == a_t.stride(1));
assert(a.stride(1) == a_t.stride(0));
for(size_t row = 0; row < num_rows; ++row) {
for(size_t col = 0; col < num_rows; ++col) {
assert(a[row, col] == a_t[col, row]);
auto a_t_t = transposed(a_t);
assert(num_rows == a_t_t.extent(0));
assert(num_cols == a_t_t.extent(1));
assert(a.stride(0) == a_t_t.stride(0));
assert(a.stride(1) == a_t_t.stride(1));
for(size_t row = 0; row < num_rows; ++row) {
for(size_t col = 0; col < num_rows; ++col) {
assert(a[row, col] == a_t_t[row, col]);
–end example]
Conjugate transpose in-place transform [linalg.conjtransposed]
1 The conjugate_transposed function returns a conjugate transpose view of an object. This combines the effects of transposed and conjugated.
template<class ElementType,
class Extents,
class Layout,
class Accessor>
constexpr auto conjugate_transposed(
mdspan<ElementType, Extents, Layout, Accessor> a);
2 Effects: Equivalent to: return conjugated(transposed(a));
void test_conjugate_transposed(
mdspan<complex<double>, extents<size_t, 3, 4>> a)
const auto num_rows = a.extent(0);
const auto num_cols = a.extent(1);
auto a_ct = conjugate_transposed(a);
assert(num_rows == a_ct.extent(1));
assert(num_cols == a_ct.extent(0));
assert(a.stride(0) == a_ct.stride(1));
assert(a.stride(1) == a_ct.stride(0));
for(size_t row = 0; row < num_rows; ++row) {
for(size_t col = 0; col < num_rows; ++col) {
assert(a[row, col] == conj(a_ct[col, row]));
auto a_ct_ct = conjugate_transposed(a_ct);
assert(num_rows == a_ct_ct.extent(0));
assert(num_cols == a_ct_ct.extent(1));
assert(a.stride(0) == a_ct_ct.stride(0));
assert(a.stride(1) == a_ct_ct.stride(1));
for(size_t row = 0; row < num_rows; ++row) {
for(size_t col = 0; col < num_rows; ++col) {
assert(a[row, col] == a_ct_ct[row, col]);
assert(conj(a_ct[col, row]) == a_ct_ct[row, col]);
–end example]
Algorithm Requirements based on template parameter name [linalg.algs.reqs]
1 Throughout [linalg.algs.blas1], [linalg.algs.blas2], and [linalg.algs.blas3], where the template parameters are not constrained, the names of template parameters are used to express the following
[Note: Function templates that have a template parameter named ExecutionPolicy are parallel algorithms ([algorithms.parallel.defns]). – end note]
BLAS 1 algorithms [linalg.algs.blas1]
Complexity [linalg.algs.blas1.complexity]
1 Complexity: All algorithms in [linalg.algs.blas1] with mdspan parameters perform a count of mdspan array accesses and arithmetic operations that is linear in the maximum product of extents of any
mdspan parameter.
Givens rotations [linalg.algs.blas1.givens]
Compute Givens rotation [linalg.algs.blas1.givens.lartg]
template<class Real>
setup_givens_rotation(Real a, Real b) noexcept;
template<class Real>
setup_givens_rotation(complex<Real> a, complex<Real> b) noexcept;
1 These functions compute the Givens plane rotation represented by the two values c and s such that the 2 x 2 system of equations
⋅ =
[Note: EDITORIAL NOTE: Preferred rendering would use the LaTeX code like the following.
\left[ \begin{matrix}
c & s \\
-\overline{s} & c \\
\end{matrix} \right]
\left[ \begin{matrix}
a \\
b \\
\end{matrix} \right]
\left[ \begin{matrix}
r \\
0 \\
\end{matrix} \right]
– end note]
holds, where c is always a real scalar, and c^2+|s|^2=1.
That is, c and s represent a 2 x 2 matrix, that when multiplied by the right by the input vector whose components are a and b, produces a result vector whose first component r is the Euclidean norm
of the input vector, and whose second component as zero.
[Note: These functions correspond to the LAPACK function xLARTG. – end note]
2 Returns: {c, s, r}, where c and s form the Givens plane rotation corresponding to the input a and b, and r is the Euclidean norm of the two-component vector formed by a and b.
Apply a computed Givens rotation to vectors [linalg.algs.blas1.givens.rot]
template<inout-vector InOutVec1,
inout-vector InOutVec2,
class Real>
void apply_givens_rotation(
InOutVec1 x,
InOutVec2 y,
Real c,
Real s);
template<class ExecutionPolicy,
inout-vector InOutVec1,
inout-vector InOutVec2,
class Real>
void apply_givens_rotation(
ExecutionPolicy&& exec,
InOutVec1 x,
InOutVec2 y,
Real c,
Real s);
template<inout-vector InOutVec1,
inout-vector InOutVec2,
class Real>
void apply_givens_rotation(
InOutVec1 x,
InOutVec2 y,
Real c,
complex<Real> s);
template<class ExecutionPolicy,
inout-vector InOutVec1,
inout-vector InOutVec2,
class Real>
void apply_givens_rotation(
ExecutionPolicy&& exec,
InOutVec1 x,
InOutVec2 y,
Real c,
complex<Real> s);
[Note: These functions correspond to the BLAS function xROT. – end note]
1 Mandates: compatible-static-extents<InOutVec1, InOutVec2>(0,0) is true.
2 Preconditions: x.extent(0) equals y.extent(0).
3 Effects: Applies the plane rotation specified by c and s to the input vectors x and y, as if the rotation were a 2 x 2 matrix and the input vectors were successive rows of a matrix with two rows.
Swap matrix or vector elements [linalg.algs.blas1.swap]
template<inout-object InOutObj1,
inout-object InOutObj2>
void swap_elements(InOutObj1 x,
InOutObj2 y);
template<class ExecutionPolicy,
inout-object InOutObj1,
inout-object InOutObj2>
void swap_elements(ExecutionPolicy&& exec,
InOutObj1 x,
InOutObj2 y);
[Note: These functions correspond to the BLAS function xSWAP. – end note]
1 Constraints: x.rank() equals y.rank().
2 Mandates: For all r in the range [ 0, x.rank()), compatible-static-extents<InOutObj1, InOutObj2>(r, r) is true.
3 Preconditions: x.extents() equals y.extents().
4 Effects: Swaps all corresponding elements of x and y.
Multiply the elements of an object in place by a scalar [linalg.algs.blas1.scal]
template<class Scalar,
inout-object InOutObj>
void scale(Scalar alpha,
InOutObj x);
template<class ExecutionPolicy,
class Scalar,
inout-object InOutObj>
void scale(ExecutionPolicy&& exec,
Scalar alpha,
InOutObj x);
[Note: These functions correspond to the BLAS function xSCAL. – end note]
5 Effects: Overwrites x with the result of computing the elementwise multiplication αx, where the scalar α is alpha.
Copy elements of one matrix or vector into another [linalg.algs.blas1.copy]
template<in-object InObj,
out-object OutObj>
void copy(InObj x,
OutObj y);
template<class ExecutionPolicy,
in-object InObj,
out-object OutObj>
void copy(ExecutionPolicy&& exec,
InObj x,
OutObj y);
[Note: These functions correspond to the BLAS function xCOPY. – end note]
1 Constraints: x.rank() equals y.rank().
2 Mandates: For all r in the range [ 0, x.rank()), compatible-static-extents<InObj, OutObj>(r, r) is true.
3 Preconditions: x.extents() equals y.extents().
4 Effects: Assigns each element of x to the corresponding element of y.
Add vectors or matrices elementwise [linalg.algs.blas1.add]
template<in-object InObj1,
in-object InObj2,
out-object OutObj>
void add(InObj1 x,
InObj2 y,
OutObj z);
template<class ExecutionPolicy,
in-object InObj1,
in-object InObj2,
out-object OutObj>
void add(ExecutionPolicy&& exec,
InObj1 x,
InObj2 y,
OutObj z);
[Note: These functions correspond to the BLAS function xAXPY. – end note]
1 Constraints: x.rank(), y.rank(), and z.rank() are all equal.
2 Mandates: possibly-addable<InObj1, InObj2, OutObj>() is true.
3 Preconditions: addable(x,y,z) is true.
4 Effects: Computes z=x+y.
5 Remarks: z may alias x or y.
Dot product of two vectors [linalg.algs.blas1.dot]
[Note: The functions in this section correspond to the BLAS functions xDOT, xDOTU, and xDOTC. – end note]
1 The following elements apply to all functions in [linalg.algs.blas1.dot].
2 Mandates: compatible-static-extents<InVec1, InVec2>(0, 0) is true.
3 Preconditions: v1.extent(0) equals v2.extent(0).
template<in-vector InVec1,
in-vector InVec2,
class Scalar>
Scalar dot(InVec1 v1,
InVec2 v2,
Scalar init);
template<class ExecutionPolicy,
in-vector InVec1,
in-vector InVec2,
class Scalar>
Scalar dot(ExecutionPolicy&& exec,
InVec1 v1,
InVec2 v2,
Scalar init);
4 These functions compute a non-conjugated dot product with an explicitly specified result type.
5 Returns: Let N be v1.extent(0).
• (5.1) init if N is zero;
• (5.2) otherwise, GENERALIZED_SUM(plus<>(), init, v1[0]*v2[0], …, v1[N-1]*v2[N-1]).
6 Remarks: If InVec1::value_type, InVec2::value_type, and Scalar are all floating-point types or specializations of complex, and if Scalar has higher precision than InVec1::value_type or
InVec2::value_type, then intermediate terms in the sum use Scalar’s precision or greater.
template<in-vector InVec1,
in-vector InVec2>
auto dot(InVec1 v1,
InVec2 v2);
template<class ExecutionPolicy,
in-vector InVec1,
in-vector InVec2>
auto dot(ExecutionPolicy&& exec,
InVec1 v1,
InVec2 v2);
7 These functions compute a non-conjugated dot product with a default result type.
8 Effects: Let T be decltype(declval<typename InVec1::value_type>() * declval<typename InVec2::value_type>()). Then,
• (8.1) the two-parameter overload is equivalent to return dot(v1, v2, T{});, and
• (8.2) the three-parameter overload is equivalent to return dot(std::forward<ExecutionPolicy>(exec), v1, v2, T{});.
template<in-vector InVec1,
in-vector InVec2,
class Scalar>
Scalar dotc(InVec1 v1,
InVec2 v2,
Scalar init);
template<class ExecutionPolicy,
in-vector InVec1,
in-vector InVec2,
class Scalar>
Scalar dotc(ExecutionPolicy&& exec,
InVec1 v1,
InVec2 v2,
Scalar init);
9 These functions compute a conjugated dot product with an explicit specified result type.
10 Effects:
• (10.1) The three-parameter overload is equivalent to return dot(conjugated(v1), v2, init);, and
• (10.2) the four-parameter overload is equivalent to return dot(std::forward<ExecutionPolicy>(exec), conjugated(v1), v2, init);.
template<in-vector InVec1,
in-vector InVec2>
auto dotc(InVec1 v1,
InVec2 v2);
template<class ExecutionPolicy,
in-vector InVec1,
in-vector InVec2>
auto dotc(ExecutionPolicy&& exec,
InVec1 v1,
InVec2 v2);
11 These functions compute a conjugated dot product with a default result type.
12 Effects: Let T be decltype(conj-if-needed(declval<typename InVec1::value_type>()) * declval<typename InVec2::value_type>()). Then,
• (12.1) the two-parameter overload is equivalent to return dotc(v1, v2, T{});, and
• (12.2) the three-parameter overload is equivalent to return dotc(std::forward<ExecutionPolicy>(exec), v1, v2, T{});.
Scaled sum of squares of a vector’s elements [linalg.algs.blas1.ssq]
template<class Scalar>
struct sum_of_squares_result {
Scalar scaling_factor;
Scalar scaled_sum_of_squares;
template<in-vector InVec,
class Scalar>
sum_of_squares_result<Scalar> vector_sum_of_squares(
InVec v,
sum_of_squares_result<Scalar> init);
template<class ExecutionPolicy,
in-vector InVec,
class Scalar>
sum_of_squares_result<Scalar> vector_sum_of_squares(
ExecutionPolicy&& exec,
InVec v,
sum_of_squares_result<Scalar> init);
[Note: These functions correspond to the LAPACK function xLASSQ. – end note]
1 Mandates: decltype(abs-if-needed(declval<typename InVec::value_type>())) is convertible to Scalar.
2 Effects: Returns a value result such that
• (2.1) result.scaling_factor is the maximum of init.scaling_factor and abs-if-needed(x[i]) for all i in the domain of v; and
• (2.2) let s2init be init.scaling_factor * init.scaling_factor * init.scaled_sum_of_squares, then result.scaling_factor * result.scaling_factor * result.scaled_sum_of_squares equals the sum of
s2init and the squares of abs-if-needed(x[i]) for all i in the domain of v.
3 Remarks: If InVec::value_type, and Scalar are all floating-point types or specializations of complex, and if Scalar has higher precision than InVec::value_type, then intermediate terms in the sum
use Scalar’s precision or greater.
Euclidean norm of a vector [linalg.algs.blas1.nrm2]
template<in-vector InVec,
class Scalar>
Scalar vector_two_norm(InVec v,
Scalar init);
template<class ExecutionPolicy,
in-vector InVec,
class Scalar>
Scalar vector_two_norm(ExecutionPolicy&& exec,
InVec v,
Scalar init);
[Note: These functions correspond to the BLAS function xNRM2. – end note]
1 Mandates: decltype(init + abs-if-needed(declval<typename InVec::value_type>()) * abs-if-needed(declval<typename InVec::value_type>())) is convertible to Scalar.
2 Returns: The square root of the sum of the square of init and the squares of the absolute values of the elements of v. [Note: For init equal to zero, this is the Euclidean norm (also called 2-norm)
of the vector v.– end note]
3 Remarks: If InVec::value_type, and Scalar are all floating-point types or specializations of complex, and if Scalar has higher precision than InVec::value_type, then intermediate terms in the sum
use Scalar’s precision or greater.
[Note: A possible implementation of this function for floating-point types T would use the scaled_sum_of_squares result from vector_sum_of_squares(x, {.scaling_factor=1.0, .scaled_sum_of_squares=
init}). – end note]
template<in-vector InVec>
auto vector_two_norm(InVec v);
template<class ExecutionPolicy,
in-vector InVec>
auto vector_two_norm(ExecutionPolicy&& exec, InVec v);
4 Effects: Let T be decltype( abs-if-needed(declval<typename InVec::value_type>()) * abs-if-needed(declval<typename InVec::value_type>())). Then,
• (4.1) the one-parameter overload is equivalent to return vector_two_norm(v, T{});, and
• (4.2) the two-parameter overload is equivalent to return vector_two_norm(std::forward<ExecutionPolicy>(exec), v, T{});.
Sum of absolute values of vector elements [linalg.algs.blas1.asum]
template<in-vector InVec,
class Scalar>
Scalar vector_abs_sum(InVec v,
Scalar init);
template<class ExecutionPolicy,
in-vector InVec,
class Scalar>
Scalar vector_abs_sum(ExecutionPolicy&& exec,
InVec v,
Scalar init);
[Note: These functions correspond to the BLAS functions SASUM, DASUM, SCASUM, and DZASUM.– end note]
1 Mandates: decltype(init + abs-if-needed( real-if-needed (declval<typename InVec::value_type>())) + abs-if-needed( imag-if-needed (declval<typename InVec::value_type>()))) is convertible to Scalar.
2 Returns: Let N be v.extent(0).
• (2.1) init if N is zero;
• (2.2) otherwise, GENERALIZED_SUM(plus<>(), init, abs-if-needed(v[0]), …, abs-if-needed(v[N-1])), if InVec::value_type is an arithmetic type;
• (2.3) otherwise, GENERALIZED_SUM(plus<>(), init, abs-if-needed( real-if-needed (v[0])) + abs-if-needed( imag-if-needed (v[0])), …, abs-if-needed( real-if-needed (v[N-1])) + abs-if-needed(
imag-if-needed (v[N-1]))).
3 Remarks: If InVec::value_type and Scalar are all floating-point types or specializations of complex, and if Scalar has higher precision than InVec::value_type, then intermediate terms in the sum
use Scalar’s precision or greater.
template<in-vector InVec>
auto vector_abs_sum(InVec v);
template<class ExecutionPolicy,
in-vector InVec>
auto vector_abs_sum(ExecutionPolicy&& exec, InVec v);
4 Effects: Let T be typename InVec::value_type. Then,
• (4.1) the one-parameter overload is equivalent to return vector_abs_sum(v, T{});, and
• (4.2) the two-parameter overload is equivalent to return vector_abs_sum(std::forward<ExecutionPolicy>(exec), v, T{});.
Index of maximum absolute value of vector elements [linalg.algs.blas1.iamax]
template<in-vector InVec>
typename InVec::size_type vector_idx_abs_max(InVec v);
template<class ExecutionPolicy,
in-vector InVec>
typename InVec::size_type vector_idx_abs_max(
ExecutionPolicy&& exec,
InVec v);
[Note: These functions correspond to the BLAS function IxAMAX. – end note]
1 Let T be decltype( abs-if-needed( real-if-needed (declval<typename InVec::value_type>())) + abs-if-needed( imag-if-needed (declval<typename InVec::value_type>()))).
2 Mandates: declval<T>() < declval<T>() is a valid expression.
3 Returns:
• (3.1) numeric_limits<typename InVec::size_type>::max() if v has zero elements;
• (3.2) otherwise, the index of the first element of v having largest absolute value, if InVec::value_type is an arithmetic type;
• (3.3) otherwise, the index of the first element v_e of v for which abs-if-needed( real-if-needed (v_e)) + abs-if-needed( imag-if-needed (v_e)) has the largest value.
Frobenius norm of a matrix [linalg.algs.blas1.matfrobnorm]
[Note: These functions exist in the BLAS standard but are not part of the reference implementation. – end note]
template<in-matrix InMat,
class Scalar>
Scalar matrix_frob_norm(
InMat A,
Scalar init);
template<class ExecutionPolicy,
in-matrix InMat,
class Scalar>
Scalar matrix_frob_norm(
ExecutionPolicy&& exec,
InMat A,
Scalar init);
1 Mandates: decltype(init + abs-if-needed(declval<typename InMat::value_type>()) * abs-if-needed(declval<typename InMat::value_type>())) is convertible to Scalar.
2 Returns: The square root of the sum of squares of init and the absolute values of the elements of A.
[Note: For init equal to zero, this is the Frobenius norm of the matrix A. – end note]
3 Remarks: If InMat::value_type and Scalar are all floating-point types or specializations of complex, and if Scalar has higher precision than InMat::value_type, then intermediate terms in the sum
use Scalar’s precision or greater.
template<in-matrix InMat>
auto matrix_frob_norm(InMat A);
template<class ExecutionPolicy,
in-matrix InMat>
auto matrix_frob_norm(
ExecutionPolicy&& exec, InMat A);
4 Effects: Let T be decltype( abs-if-needed(declval<typename InMat::value_type>()) * abs-if-needed(declval<typename InMat::value_type>())). Then,
• (4.1) the one-parameter overload is equivalent to return matrix_frob_norm(A, T{});, and
• (4.2) the two-parameter overload is equivalent to return matrix_frob_norm(std::forward<ExecutionPolicy>(exec), A, T{});.
One norm of a matrix [linalg.algs.blas1.matonenorm]
[Note: These functions exist in the BLAS standard but are not part of the reference implementation. – end note]
template<in-matrix InMat,
class Scalar>
Scalar matrix_one_norm(
InMat A,
Scalar init);
template<class ExecutionPolicy,
in-matrix InMat,
class Scalar>
Scalar matrix_one_norm(
ExecutionPolicy&& exec,
InMat A,
Scalar init);
1 Mandates: decltype(abs-if-needed(declval<typename InMat::value_type>())) is convertible to Scalar.
2 Returns:
• (2.1) init if A.extent(1) is zero;
• (2.2) otherwise, the sum of init and the one norm of the matrix A.
[Note: The one norm of the matrix A is the maximum over all columns of A, of the sum of the absolute values of the elements of the column. – end note]
3 Remarks: If InMat::value_type and Scalar are all floating-point types or specializations of complex, and if Scalar has higher precision than InMat::value_type, then intermediate terms in the sum
use Scalar’s precision or greater.
template<in-matrix InMat>
auto matrix_one_norm(InMat A);
template<class ExecutionPolicy,
in-matrix InMat>
auto matrix_one_norm(
ExecutionPolicy&& exec, InMat A);
4 Effects: Let T be decltype( abs-if-needed(declval<typename InMat::value_type>()). Then,
• (4.1) the one-parameter overload is equivalent to return matrix_one_norm(A, T{});, and
• (4.2) the two-parameter overload is equivalent to return matrix_one_norm(std::forward<ExecutionPolicy>(exec), A, T{});.
Infinity norm of a matrix [linalg.algs.blas1.matinfnorm]
[Note: These functions exist in the BLAS standard but are not part of the reference implementation. – end note]
template<in-matrix InMat,
class Scalar>
Scalar matrix_inf_norm(
InMat A,
Scalar init);
template<class ExecutionPolicy,
in-matrix InMat,
class Scalar>
Scalar matrix_inf_norm(
ExecutionPolicy&& exec,
InMat A,
Scalar init);
1 Mandates: decltype(abs-if-needed(declval<typename InMat::value_type>())) is convertible to Scalar.
2 Returns:
• (2.1) init if A.extent(0) is zero;
• (2.2) otherwise, the sum of init and the infinity norm of the matrix A.
[Note: The infinity norm of the matrix A is the maximum over all rows of A, of the sum of the absolute values of the elements of the row. – end note]
3 Remarks: If InMat::value_type and Scalar are all floating-point types or specializations of complex, and if Scalar has higher precision than InMat::value_type, then intermediate terms in the sum
use Scalar’s precision or greater.
template<in-matrix InMat>
auto matrix_inf_norm(InMat A);
template<class ExecutionPolicy,
in-matrix InMat>
auto matrix_inf_norm(
ExecutionPolicy&& exec, InMat A);
4 Effects: Let T be decltype( abs-if-needed(declval<typename InMat::value_type>()). Then,
• (4.1) the one-parameter overload is equivalent to return matrix_inf_norm(A, T{});, and
• (4.2) the two-parameter overload is equivalent to return matrix_inf_norm(std::forward<ExecutionPolicy>(exec), A, T{});.
BLAS 2 algorithms [linalg.algs.blas2]
General matrix-vector product [linalg.algs.blas2.gemv]
[Note: These functions correspond to the BLAS function xGEMV. – end note]
1 The following elements apply to all functions in [linalg.algs.blas2.gemv].
2 Mandates:
• (2.1) possibly-multipliable<decltype(A), decltype(x), decltype(y)>() is true, and
• (2.2) possibly-addable<decltype(x),decltype(y),decltype(z)>() is true for those overloads that take a z parameter.
3 Preconditions:
• (3.1) multipliable(A,x,y) is true, and
• (3.2) addable(x,y,z) is true for those overloads that take a z parameter.
4 Complexity: O( x.extent(0) ⋅ A.extent(1) )
template<in-matrix InMat,
in-vector InVec,
out-vector OutVec>
void matrix_vector_product(InMat A,
InVec x,
OutVec y);
template<class ExecutionPolicy,
in-matrix InMat,
in-vector InVec,
out-vector OutVec>
void matrix_vector_product(ExecutionPolicy&& exec,
InMat A,
InVec x,
OutVec y);
5 These functions perform an overwriting matrix-vector product.
6 Effects: Computes y=Ax.
constexpr size_t num_rows = 5;
constexpr size_t num_cols = 6;
// y = 3.0 * A * x
void scaled_matvec_1(
mdspan<double, extents<size_t, num_rows, num_cols>> A,
mdspan<double, extents<size_t, num_cols>> x,
mdspan<double, extents<size_t, num_rows>> y)
matrix_vector_product(scaled(3.0, A), x, y);
// y = 3.0 * A * x + 2.0 * y
void scaled_matvec_2(
mdspan<double, extents<size_t, num_rows, num_cols>> A,
mdspan<double, extents<size_t, num_cols>> x,
mdspan<double, extents<size_t, num_rows>> y)
matrix_vector_product(scaled(3.0, A), x,
scaled(2.0, y), y);
// z = 7.0 times the transpose of A, times y
void scaled_transposed_matvec(mdspan<double, extents<size_t, num_rows, num_cols>> A,
mdspan<double, extents<size_t, num_rows>> y,
mdspan<double, extents<size_t, num_cols>> z)
matrix_vector_product(scaled(7.0, transposed(A)), y, z);
–end example]
template<in-matrix InMat,
in-vector InVec1,
in-vector InVec2,
out-vector OutVec>
void matrix_vector_product(InMat A,
InVec1 x,
InVec2 y,
OutVec z);
template<class ExecutionPolicy,
in-matrix InMat,
in-vector InVec1,
in-vector InVec2,
out-vector OutVec>
void matrix_vector_product(ExecutionPolicy&& exec,
InMat A,
InVec1 x,
InVec2 y,
OutVec z);
7 These functions performs an updating matrix-vector product.
8 Effects: Computes z=y+Ax.
9 Remarks: z may alias y.
Symmetric matrix-vector product [linalg.algs.blas2.symv]
[Note: These functions correspond to the BLAS functions xSYMV and xSPMV. – end note]
1 The following elements apply to all functions in [linalg.algs.blas2.symv].
2 Mandates:
3 Preconditions:
• (3.1) A.extent(0) equals A.extent(1),
• (3.2) multipliable(A,x,y) is true, and
• (3.3) addable(x,y,z) is true for those overloads that take a z parameter.
4 Complexity: O( x.extent(0) ⋅ A.extent(1) )
template<in-matrix InMat,
class Triangle,
in-vector InVec,
out-vector OutVec>
void symmetric_matrix_vector_product(InMat A,
Triangle t,
InVec x,
OutVec y);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
in-vector InVec,
out-vector OutVec>
void symmetric_matrix_vector_product(ExecutionPolicy&& exec,
InMat A,
Triangle t,
InVec x,
OutVec y);
5 These functions perform an overwriting symmetric matrix-vector product, taking into account the Triangle parameter that applies to the symmetric matrix A [linalg.general].
6 Effects: Computes y=Ax.
template<in-matrix InMat,
class Triangle,
in-vector InVec1,
in-vector InVec2,
out-vector OutVec>
void symmetric_matrix_vector_product(
InMat A,
Triangle t,
InVec1 x,
InVec2 y,
OutVec z);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
in-vector InVec1,
in-vector InVec2,
out-vector OutVec>
void symmetric_matrix_vector_product(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
InVec1 x,
InVec2 y,
OutVec z);
7 These functions perform an updating symmetric matrix-vector product, taking into account the Triangle parameter that applies to the symmetric matrix A [linalg.general].
8 Effects: Computes z=y+Ax.
9 Remarks: z may alias y.
Hermitian matrix-vector product [linalg.algs.blas2.hemv]
[Note: These functions correspond to the BLAS functions xHEMV and xHPMV. – end note]
1 The following elements apply to all functions in [linalg.algs.blas2.hemv].
2 Mandates:
3 Preconditions:
• (3.1) A.extent(0) equals A.extent(1),
• (3.2) multipliable(A,x,y) is true, and
• (3.3) addable(x,y,z) is true for those overloads that take a z parameter.
4 Complexity: O( x.extent(0) ⋅ A.extent(1) )
template<in-matrix InMat,
class Triangle,
in-vector InVec,
out-vector OutVec>
void hermitian_matrix_vector_product(InMat A,
Triangle t,
InVec x,
OutVec y);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
in-vector InVec,
out-vector OutVec>
void hermitian_matrix_vector_product(ExecutionPolicy&& exec,
InMat A,
Triangle t,
InVec x,
OutVec y);
5 These functions perform an overwriting Hermitian matrix-vector product, taking into account the Triangle parameter that applies to the Hermitian matrix A [linalg.general].
6 Effects: Computes y=Ax.
template<in-matrix InMat,
class Triangle,
in-vector InVec1,
in-vector InVec2,
out-vector OutVec>
void hermitian_matrix_vector_product(InMat A,
Triangle t,
InVec1 x,
InVec2 y,
OutVec z);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
in-vector InVec1,
in-vector InVec2,
out-vector OutVec>
void hermitian_matrix_vector_product(ExecutionPolicy&& exec,
InMat A,
Triangle t,
InVec1 x,
InVec2 y,
OutVec z);
7 These functions perform an updating Hermitian matrix-vector product, taking into account the Triangle parameter that applies to the Hermitian matrix A [linalg.general].
8 Effects: Computes z=y+Ax.
9 Remarks: z may alias y.
Triangular matrix-vector product [linalg.algs.blas2.trmv]
[Note: These functions correspond to the BLAS functions xTRMV and xTPMV. – end note]
1 The following elements apply to all functions in [linalg.algs.blas2.trmv].
2 Mandates:
3 Preconditions:
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
in-vector InVec,
out-vector OutVec>
void triangular_matrix_vector_product(
InMat A,
Triangle t,
DiagonalStorage d,
InVec x,
OutVec y);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
in-vector InVec,
out-vector OutVec>
void triangular_matrix_vector_product(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InVec x,
OutVec y);
4 These functions perform an overwriting triangular matrix-vector product, taking into account the Triangle and DiagonalStorage parameters that apply to the triangular matrix A [linalg.general].
5 Effects: Computes y=Ax.
6 Complexity: O( x.extent(0) ⋅ A.extent(1) )
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-vector InOutVec>
void triangular_matrix_vector_product(
InMat A,
Triangle t,
DiagonalStorage d,
InOutVec y);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-vector InOutVec>
void triangular_matrix_vector_product(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InOutVec y);
7 These functions perform an in-place triangular matrix-vector product, taking into account the Triangle and DiagonalStorage parameters that apply to the triangular matrix A [linalg.general].
[Note: Performing this operation in place hinders parallelization. However, other ExecutionPolicy specific optimizations, such as vectorization, are still possible. – end note]
8 Effects: Computes a vector y′ such that y′=Ay, and assigns each element of y′ to the corresponding element of y.
9 Complexity: O( y.extent(0) ⋅ A.extent(1) )
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
in-vector InVec1,
in-vector InVec2,
out-vector OutVec>
void triangular_matrix_vector_product(InMat A,
Triangle t,
DiagonalStorage d,
InVec1 x,
InVec2 y,
OutVec z);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
in-vector InVec1,
in-vector InVec2,
out-vector OutVec>
void triangular_matrix_vector_product(ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InVec1 x,
InVec2 y,
OutVec z);
10 These functions perform an updating triangular matrix-vector product, taking into account the Triangle and DiagonalStorage parameters that apply to the triangular matrix A [linalg.general].
11 Effects: Computes z=y+Ax.
12 Remarks: z may alias y.
13 Complexity: O( x.extent(0) ⋅ A.extent(1) )
Solve a triangular linear system [linalg.algs.blas2.trsv]
[Note: These functions correspond to the BLAS functions xTRSV and xTPSV. – end note]
1 The following elements apply to all functions in [linalg.algs.blas2.trsv].
2 Mandates:
3 Preconditions:
• (3.1) A.extent(0) equals A.extent(1),
• (3.2) A.extent(0) equals b.extent(0), and
• (3.3) A.extent(0) equals x.extent(0) for those overloads that take an x parameter.
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
in-vector InVec,
out-vector OutVec,
class BinaryDivideOp>
void triangular_matrix_vector_solve(
InMat A,
Triangle t,
DiagonalStorage d,
InVec b,
OutVec x,
BinaryDivideOp divide);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
in-vector InVec,
out-vector OutVec,
class BinaryDivideOp>
void triangular_matrix_vector_solve(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InVec b,
OutVec x,
BinaryDivideOp divide);
4 These functions perform a triangular solve, taking into account the Triangle and DiagonalStorage parameters that apply to the triangular matrix A [linalg.general].
5 Effects: Computes a vector x′ such that b=Ax′, and assigns each element of x′ to the corresponding element of x. If no such x′ exists, then the elements of x are valid but unspecified.
6 Complexity: O( A.extent(1) ⋅ b.extent(0) )
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
in-vector InVec,
out-vector OutVec>
void triangular_matrix_vector_solve(
InMat A,
Triangle t,
DiagonalStorage d,
InVec b,
OutVec x);
7 Effects: Equivalent to
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
in-vector InVec,
out-vector OutVec>
void triangular_matrix_vector_solve(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InVec b,
OutVec x);
8 Effects: Equivalent to
A, t, d, b, x, divides<void>{});
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-vector InOutVec,
class BinaryDivideOp>
void triangular_matrix_vector_solve(
InMat A,
Triangle t,
DiagonalStorage d,
InOutVec b,
BinaryDivideOp divide);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-vector InOutVec,
class BinaryDivideOp>
void triangular_matrix_vector_solve(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InOutVec b,
BinaryDivideOp divide);
9 These functions perform an in-place triangular solve, taking into account the Triangle and DiagonalStorage parameters that apply to the triangular matrix A [linalg.general].
[Note: Performing triangular solve in place hinders parallelization. However, other ExecutionPolicy specific optimizations, such as vectorization, are still possible. – end note]
10 Effects: Computes a vector x′ such that b=Ax′, and assigns each element of x′ to the corresponding element of b. If no such x′ exists, then the elements of b are valid but unspecified.
11 Complexity: O( A.extent(1) ⋅ b.extent(0) )
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-vector InOutVec>
void triangular_matrix_vector_solve(
InMat A,
Triangle t,
DiagonalStorage d,
InOutVec b);
12 Effects: Equivalent to:
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-vector InOutVec>
void triangular_matrix_vector_solve(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InOutVec b);
13 Effects: Equivalent to:
Rank-1 (outer product) update of a matrix [linalg.algs.blas2.rank1]
template<in-vector InVec1,
in-vector InVec2,
inout-matrix InOutMat>
void matrix_rank_1_update(
InVec1 x,
InVec2 y,
InOutMat A);
template<class ExecutionPolicy,
in-vector InVec1,
in-vector InVec2,
inout-matrix InOutMat>
void matrix_rank_1_update(
ExecutionPolicy&& exec,
InVec1 x,
InVec2 y,
InOutMat A);
1 These functions perform a nonsymmetric nonconjugated rank-1 update.
[Note: These functions correspond to the BLAS functions xGER (for real element types) and xGERU (for complex element types). – end note]
2 Mandates: possibly-multipliable<InOutMat, InVec2, InVec1>() is true.
3 Preconditions: multipliable(A,y,x) is true.
4 Effects: Computes a matrix A′ such that A′=A+xy^T, and assigns each element of A′ to the corresponding element of A.
5 Complexity: O( x.extent(0) ⋅ y.extent(0) )
template<in-vector InVec1,
in-vector InVec2,
inout-matrix InOutMat>
void matrix_rank_1_update_c(
InVec1 x,
InVec2 y,
InOutMat A);
template<class ExecutionPolicy,
in-vector InVec1,
in-vector InVec2,
inout-matrix InOutMat>
void matrix_rank_1_update_c(
ExecutionPolicy&& exec,
InVec1 x,
InVec2 y,
InOutMat A);
6 These functions perform a nonsymmetric conjugated rank-1 update.
[Note: These functions correspond to the BLAS functions xGER (for real element types) and xGERC (for complex element types). – end note]
7 Effects:
• (7.1) For the overloads without an ExecutionPolicy argument, equivalent to matrix_rank_1_update(x, conjugated(y), A);;
• (7.2) otherwise, equivalent to matrix_rank_1_update(std::forward<ExecutionPolicy>(exec), x, conjugated(y), A);.
Symmetric or Hermitian Rank-1 (outer product) update of a matrix [linalg.algs.blas2.symherrank1]
[Note: These functions correspond to the BLAS functions xSYR, xSPR, xHER, and xHPR.
They have overloads taking a scaling factor alpha, because it would be impossible to express the update A=A−xx^T otherwise. – end note]
1 The following elements apply to all functions in [linalg.algs.blas2.symherrank1].
2 Mandates:
• (2.1) If InOutMat has layout_blas_packed layout, then the layout’s Triangle template argument has the same type as the function’s Triangle template argument;
• (2.2) compatible-static-extents<decltype(A), decltype(A)>(0,1) is true; and
• (2.3) compatible-static-extents<decltype(A), decltype(x)>(0,0) is true.
3 Preconditions:
• (3.1) A.extent(0) equals A.extent(1), and
• (3.2) A.extent(0) equals x.extent(0).
4 Complexity: O( x.extent(0) ⋅ x.extent(0) )
template<in-vector InVec,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void symmetric_matrix_rank_1_update(
InVec x,
InOutMat A,
Triangle t);
template<class ExecutionPolicy,
in-vector InVec,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void symmetric_matrix_rank_1_update(
ExecutionPolicy&& exec,
InVec x,
InOutMat A,
Triangle t);
5 These functions perform a symmetric rank-1 update of the symmetric matrix A, taking into account the Triangle parameter that applies to A [linalg.general].
6 Effects: Computes a matrix A′ such that A′=A+xx^T and assigns each element of A′ to the corresponding element of A.
template<class Scalar,
in-vector InVec,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void symmetric_matrix_rank_1_update(
Scalar alpha,
InVec x,
InOutMat A,
Triangle t);
template<class ExecutionPolicy,
class Scalar,
in-vector InVec,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void symmetric_matrix_rank_1_update(
ExecutionPolicy&& exec,
Scalar alpha,
InVec x,
InOutMat A,
Triangle t);
7 These functions perform a symmetric rank-1 update of the symmetric matrix A, taking into account the Triangle parameter that applies to A [linalg.general].
8 Effects: Computes a matrix A′ such that A′=A+αxx^T, where the scalar α is alpha, and assigns each element of A′ to the corresponding element of A.
template<in-vector InVec,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void hermitian_matrix_rank_1_update(
InVec x,
InOutMat A,
Triangle t);
template<class ExecutionPolicy,
in-vector InVec,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void hermitian_matrix_rank_1_update(
ExecutionPolicy&& exec,
InVec x,
InOutMat A,
Triangle t);
9 These functions perform a Hermitian rank-1 update of the Hermitian matrix A, taking into account the Triangle parameter that applies to A [linalg.general].
10 Effects: Computes a matrix A′ such that A′=A+xx^H and assigns each element of A′ to the corresponding element of A.
template<class Scalar,
in-vector InVec,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void hermitian_matrix_rank_1_update(
Scalar alpha,
InVec x,
InOutMat A,
Triangle t);
template<class ExecutionPolicy,
class Scalar,
in-vector InVec,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void hermitian_matrix_rank_1_update(
ExecutionPolicy&& exec,
Scalar alpha,
InVec x,
InOutMat A,
Triangle t);
11 These functions perform a Hermitian rank-1 update of the Hermitian matrix A, taking into account the Triangle parameter that applies to A [linalg.general].
12 Effects: Computes A′ such that A′=A+αxx^H, where the scalar α is alpha, and assigns each element of A′ to the corresponding element of A.
Symmetric and Hermitian rank-2 matrix updates [linalg.algs.blas2.rank2]
[Note: These functions correspond to the BLAS functions xSYR2,xSPR2, xHER2 and xHPR2. – end note]
1 The following elements apply to all functions in [linalg.algs.blas2.rank2].
2 Mandates:
• (2.1) If InOutMat has layout_blas_packed layout, then the layout’s Triangle template argument has the same type as the function’s Triangle template argument;
• (2.2) compatible-static-extents<decltype(A), decltype(A)>(0, 1) is true; and
• (2.3) possibly-multipliable<decltype(A), decltype(x), decltype(y)>() is true.
3 Preconditions:
• (3.1) A.extent(0) equals A.extent(1), and
• (3.2) multipliable(A,x,y) is true.
4 Complexity: O( x.extent(0) ⋅ y.extent(0) )
template<in-vector InVec1,
in-vector InVec2,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void symmetric_matrix_rank_2_update(
InVec1 x,
InVec2 y,
InOutMat A,
Triangle t);
template<class ExecutionPolicy,
in-vector InVec1,
in-vector InVec2,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void symmetric_matrix_rank_2_update(
ExecutionPolicy&& exec,
InVec1 x,
InVec2 y,
InOutMat A,
Triangle t);
5 These functions perform a symmetric rank-2 update of the symmetric matrix A, taking into account the Triangle parameter that applies to A [linalg.general].
6 Effects: Computes A′ such that A′=A+xy^T+yx^T and assigns each element of A′ to the corresponding element of A.
template<in-vector InVec1,
in-vector InVec2,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void hermitian_matrix_rank_2_update(
InVec1 x,
InVec2 y,
InOutMat A,
Triangle t);
template<class ExecutionPolicy,
in-vector InVec1,
in-vector InVec2,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void hermitian_matrix_rank_2_update(
ExecutionPolicy&& exec,
InVec1 x,
InVec2 y,
InOutMat A,
Triangle t);
7 These functions perform a Hermitian rank-2 update of the Hermitian matrix A, taking into account the Triangle parameter that applies to A [linalg.general].
8 Effects: Computes A′ such that A′=A+xy^H+yx^H and assigns each element of A′ to the corresponding element of A.
BLAS 3 algorithms [linalg.algs.blas3]
General matrix-matrix product [linalg.algs.blas3.gemm]
[Note: These functions correspond to the BLAS function xGEMM. – end note]
1 The following elements apply to all functions in [linalg.algs.blas3.gemm] in addition to function-specific elements.
2 Mandates: possibly-multipliable<decltype(A), decltype(B), decltype(C)>() is true.
3 Preconditions: multipliable(A, B, C) is true.
4 Complexity: O( A.extent(0) ⋅ A.extent(1) ⋅ B.extent(1) )
template<in-matrix InMat1,
in-matrix InMat2,
out-matrix OutMat>
void matrix_product(InMat1 A,
InMat2 B,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
in-matrix InMat2,
out-matrix OutMat>
void matrix_product(ExecutionPolicy&& exec,
InMat1 A,
InMat2 B,
OutMat C);
5 Effects: Computes C=AB.
template<in-matrix InMat1,
in-matrix InMat2,
in-matrix InMat3,
out-matrix OutMat>
void matrix_product(InMat1 A,
InMat2 B,
InMat3 E,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
in-matrix InMat2,
in-matrix InMat3,
out-matrix OutMat>
void matrix_product(ExecutionPolicy&& exec,
InMat1 A,
InMat2 B,
InMat3 E,
OutMat C);
6 Mandates: possibly-addable<InMat3, InMat3, OutMat>() is true.
7 Preconditions: addable(E, E, C) is true.
8 Effects: Computes C=E+AB.
9 Remarks: C may alias E.
Symmetric, Hermitian, and triangular matrix-matrix product [linalg.algs.blas3.xxmm]
[Note: These functions correspond to the BLAS functions xSYMM, xHEMM, and xTRMM. – end note]
1 The following elements apply to all functions in [linalg.algs.blas3.xxmm] in addition to function-specific elements.
2 Mandates:
• (2.1) possibly-multipliable<decltype(A), decltype(B), decltype(C)>() is true, and
• (2.2) possibly-addable<decltype(E), decltype(E), decltype(C)>() is true for those overloads that take an E parameter.
3 Preconditions:
• (3.1) multipliable(A, B, C) is true, and
• (3.2) addable(E, E, C) is true for those overloads that take an E parameter.
4 Complexity: O( A.extent(0) ⋅ A.extent(1) ⋅ B.extent(1) )
template<in-matrix InMat1,
class Triangle,
in-matrix InMat2,
out-matrix OutMat>
void symmetric_matrix_product(
InMat1 A,
Triangle t,
InMat2 B,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
class Triangle,
in-matrix InMat2,
out-matrix OutMat>
void symmetric_matrix_product(
ExecutionPolicy&& exec,
InMat1 A,
Triangle t,
InMat2 B,
OutMat C);
template<in-matrix InMat1,
class Triangle,
in-matrix InMat2,
out-matrix OutMat>
void hermitian_matrix_product(
InMat1 A,
Triangle t,
InMat2 B,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
class Triangle,
in-matrix InMat2,
out-matrix OutMat>
void hermitian_matrix_product(
ExecutionPolicy&& exec,
InMat1 A,
Triangle t,
InMat2 B,
OutMat C);
template<in-matrix InMat1,
class Triangle,
class DiagonalStorage,
in-matrix InMat2,
out-matrix OutMat>
void triangular_matrix_product(
InMat1 A,
Triangle t,
DiagonalStorage d,
InMat2 B,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
class Triangle,
class DiagonalStorage,
in-matrix InMat2,
out-matrix OutMat>
void triangular_matrix_product(
ExecutionPolicy&& exec,
InMat1 A,
Triangle t,
DiagonalStorage d,
InMat2 B,
OutMat C);
5 These functions perform a matrix-matrix multiply, taking into account the Triangle and DiagonalStorage (if applicable) parameters that apply to the symmetric, Hermitian, or triangular
(respectively) matrix A [linalg.general].
6 Mandates:
• (6.1) If InMat1 has layout_blas_packed layout, then the layout’s Triangle template argument has the same type as the function’s Triangle template argument; and
• (6.2) compatible-static-extents<InMat1,InMat1>(0,1).
7 Preconditions: A.extent(0) == A.extent(1) is true.
8 Effects: Computes C=AB.
template<in-matrix InMat1,
in-matrix InMat2,
class Triangle,
out-matrix OutMat>
void symmetric_matrix_product(
InMat1 A,
InMat2 B,
Triangle t,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
in-matrix InMat2,
class Triangle,
out-matrix OutMat>
void symmetric_matrix_product(
ExecutionPolicy&& exec,
InMat1 A,
InMat2 B,
Triangle t,
OutMat C);
template<in-matrix InMat1,
in-matrix InMat2,
class Triangle,
out-matrix OutMat>
void hermitian_matrix_product(
InMat1 A,
InMat2 B,
Triangle t,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
in-matrix InMat2,
class Triangle,
out-matrix OutMat>
void hermitian_matrix_product(
ExecutionPolicy&& exec,
InMat1 A,
InMat2 B,
Triangle t,
OutMat C);
template<in-matrix InMat1,
in-matrix InMat2,
class Triangle,
class DiagonalStorage,
out-matrix OutMat>
void triangular_matrix_product(
InMat1 A,
InMat2 B,
Triangle t,
DiagonalStorage d,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
in-matrix InMat2,
class Triangle,
class DiagonalStorage,
out-matrix OutMat>
void triangular_matrix_product(
ExecutionPolicy&& exec,
InMat1 A,
InMat2 B,
Triangle t,
DiagonalStorage d,
OutMat C);
9 These functions perform a matrix-matrix multiply, taking into account the Triangle and DiagonalStorage (if applicable) parameters that apply to the symmetric, Hermitian, or triangular
(respectively) matrix B [linalg.general].
10 Mandates:
• (10.1) If InMat2 has layout_blas_packed layout, then the layout’s Triangle template argument has the same type as the function’s Triangle template argument; and
• (10.2) compatible-static-extents<InMat2,InMat2>(0,1).
11 Preconditions: B.extent(0) == B.extent(1) is true.
12 Effects: Computes C=AB.
template<in-matrix InMat1,
class Triangle,
in-matrix InMat2,
in-matrix InMat3,
out-matrix OutMat>
void symmetric_matrix_product(
InMat1 A,
Triangle t,
InMat2 B,
InMat3 E,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
class Triangle,
in-matrix InMat2,
in-matrix InMat3,
out-matrix OutMat>
void symmetric_matrix_product(
ExecutionPolicy&& exec,
InMat1 A,
Triangle t,
InMat2 B,
InMat3 E,
OutMat C);
template<in-matrix InMat1,
class Triangle,
in-matrix InMat2,
in-matrix InMat3,
out-matrix OutMat>
void hermitian_matrix_product(
InMat1 A,
Triangle t,
InMat2 B,
InMat3 E,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
class Triangle,
in-matrix InMat2,
in-matrix InMat3,
out-matrix OutMat>
void hermitian_matrix_product(
ExecutionPolicy&& exec,
InMat1 A,
Triangle t,
InMat2 B,
InMat3 E,
OutMat C);
template<in-matrix InMat1,
class Triangle,
class DiagonalStorage,
in-matrix InMat2,
in-matrix InMat3,
out-matrix OutMat>
void triangular_matrix_product(
InMat1 A,
Triangle t,
DiagonalStorage d,
InMat2 B,
InMat3 E,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
class Triangle,
class DiagonalStorage,
in-matrix InMat2,
in-matrix InMat3,
out-matrix OutMat>
void triangular_matrix_product(
ExecutionPolicy&& exec,
InMat1 A,
Triangle t,
DiagonalStorage d,
InMat2 B,
InMat3 E,
OutMat C);
13 These functions perform a potentially overwriting matrix-matrix multiply-add, taking into account the Triangle and DiagonalStorage (if applicable) parameters that apply to the symmetric,
Hermitian, or triangular (respectively) matrix A [linalg.general].
14 Mandates:
• (14.1) If InMat1 has layout_blas_packed layout, then the layout’s Triangle template argument has the same type as the function’s Triangle template argument; and
• (14.2) compatible-static-extents<InMat1,InMat1>(0,1).
15 Preconditions: A.extent(0) == A.extent(1) is true.
16 Effects: Computes C=E+AB.
17 Remarks: C may alias E.
template<in-matrix InMat1,
in-matrix InMat2,
class Triangle,
in-matrix InMat3,
out-matrix OutMat>
void symmetric_matrix_product(
InMat1 A,
InMat2 B,
Triangle t,
InMat3 E,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
in-matrix InMat2,
class Triangle,
in-matrix InMat3,
out-matrix OutMat>
void symmetric_matrix_product(
ExecutionPolicy&& exec,
InMat1 A,
InMat2 B,
Triangle t,
InMat3 E,
OutMat C);
template<in-matrix InMat1,
in-matrix InMat2,
class Triangle,
in-matrix InMat3,
out-matrix OutMat>
void hermitian_matrix_product(
InMat1 A,
InMat2 B,
Triangle t,
InMat3 E,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
in-matrix InMat2,
class Triangle,
in-matrix InMat3,
out-matrix OutMat>
void hermitian_matrix_product(
ExecutionPolicy&& exec,
InMat1 A,
InMat2 B,
Triangle t,
InMat3 E,
OutMat C);
template<in-matrix InMat1,
in-matrix InMat2,
class Triangle,
class DiagonalStorage,
in-matrix InMat3,
out-matrix OutMat>
void triangular_matrix_product(
InMat1 A,
InMat2 B,
Triangle t,
DiagonalStorage d,
InMat3 E,
OutMat C);
template<class ExecutionPolicy,
in-matrix InMat1,
in-matrix InMat2,
class Triangle,
class DiagonalStorage,
in-matrix InMat3,
out-matrix OutMat>
void triangular_matrix_product(
ExecutionPolicy&& exec,
InMat1 A,
InMat2 B,
Triangle t,
DiagonalStorage d,
InMat3 E,
OutMat C);
18 These functions perform a potentially overwriting matrix-matrix multiply-add, taking into account the Triangle and DiagonalStorage (if applicable) parameters that apply to the symmetric,
Hermitian, or triangular (respectively) matrix B [linalg.general].
19 Mandates:
• (19.1) If InMat2 has layout_blas_packed layout, then the layout’s Triangle template argument has the same type as the function’s Triangle template argument; and
• (19.2) compatible-static-extents<InMat2,InMat2>(0,1).
20 Preconditions: B.extent(0) == B.extent(1) is true.
21 Effects: Computes C=E+AB.
22 Remarks: C may alias E.
In-place triangular matrix-matrix product [linalg.algs.blas3.trmm]
1 These functions perform an in-place matrix-matrix multiply, taking into account the Triangle and DiagonalStorage parameters that apply to the triangular matrix A [linalg.general].
[Note: These functions correspond to the BLAS function xTRMM. – end note]
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-matrix InOutMat>
void triangular_matrix_left_product(
InMat A,
Triangle t,
DiagonalStorage d,
InOutMat C);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-matrix InOutMat>
void triangular_matrix_left_product(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InOutMat C);
2 Mandates:
• (2.1) If InMat has layout_blas_packed layout, then the layout’s Triangle template argument has the same type as the function’s Triangle template argument;
• (2.2) possibly-multipliable<InMat, InOutMat, InOutMat>() is true; and
• (2.3) compatible-static-extents<InMat,InMat>(0,1).
3 Preconditions:
• (3.1) multipliable(A, C, C) is true, and
• (3.2) A.extent(0) == A.extent(1) is true.
4 Effects: Computes a matrix C′ such that C′=AC and assigns each element of C′ to the corresponding element of C.
5 Complexity: O( A.extent(0) ⋅ A.extent(1) ⋅ C.extent(0) )
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-matrix InOutMat>
void triangular_matrix_right_product(
InMat A,
Triangle t,
DiagonalStorage d,
InOutMat C);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-matrix InOutMat>
void triangular_matrix_right_product(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InOutMat C);
6 Mandates:
• (6.1) If InMat has layout_blas_packed layout, then the layout’s Triangle template argument has the same type as the function’s Triangle template argument;
• (6.2) possibly-multipliable<InOutMat, InMat, InOutMat>() is true; and
• (6.3) compatible-static-extents<InMat, InMat>(0,1).
7 Preconditions:
• (7.1) multipliable(C, A, C) is true, and
• (7.2) A.extent(0) == A.extent(1) is true.
8 Effects: Computes a matrix C′ such that C′=CA and assigns each element of C′ to the corresponding element of C.
9 Complexity: O( A.extent(0) ⋅ A.extent(1) ⋅ C.extent(0) )
Rank-k update of a symmetric or Hermitian matrix [linalg.algs.blas3.rankk]
[Note: These functions correspond to the BLAS functions xSYRK and xHERK. – end note]
1 The following elements apply to all functions in [linalg.algs.blas3.rankk].
2 Mandates:
3 Preconditions:
• (3.1) A.extent(0) equals A.extent(1),
• (3.2) C.extent(0) equals C.extent(1), and
• (3.3) A.extent(0) equals C.extent(0).
4 Complexity: O( A.extent(0) ⋅ A.extent(1) ⋅ C.extent(0) )
template<class Scalar,
in-matrix InMat,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void symmetric_matrix_rank_k_update(
Scalar alpha,
InMat A,
InOutMat C,
Triangle t);
template<class ExecutionPolicy,
class Scalar,
in-matrix InMat,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void symmetric_matrix_rank_k_update(
ExecutionPolicy&& exec,
Scalar alpha,
InMat A,
InOutMat C,
Triangle t);
5 Effects: Computes a matrix C′ such that C′=C+αAA^T, where the scalar α is alpha, and assigns each element of C′ to the corresponding element of C.
template<in-matrix InMat,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void symmetric_matrix_rank_k_update(
InMat A,
InOutMat C,
Triangle t);
template<class ExecutionPolicy,
in-matrix InMat,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void symmetric_matrix_rank_k_update(
ExecutionPolicy&& exec,
InMat A,
InOutMat C,
Triangle t);
6 Effects: Computes a matrix C′ such that C′=C+AA^T, and assigns each element of C′ to the corresponding element of C.
template<class Scalar,
in-matrix InMat,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void hermitian_matrix_rank_k_update(
Scalar alpha,
InMat A,
InOutMat C,
Triangle t);
template<class ExecutionPolicy,
class Scalar,
in-matrix InMat,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void hermitian_matrix_rank_k_update(
ExecutionPolicy&& exec,
Scalar alpha,
InMat A,
InOutMat C,
Triangle t);
7 Effects: Computes a matrix C′ such that C′=C+AA^H, where the scalar α is alpha, and assigns each element of C′ to the corresponding element of C.
template<in-matrix InMat,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void hermitian_matrix_rank_k_update(
InMat A,
InOutMat C,
Triangle t);
template<class ExecutionPolicy,
in-matrix InMat,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void hermitian_matrix_rank_k_update(
ExecutionPolicy&& exec,
InMat A,
InOutMat C,
Triangle t);
8 Effects: Computes a matrix C′ such that C′=C+AA^H, and assigns each element of C′ to the corresponding element of C.
Rank-2k update of a symmetric or Hermitian matrix [linalg.algs.blas3.rank2k]
[Note: These functions correspond to the BLAS functions xSYR2K and xHER2K. – end note]
1 The following elements apply to all functions in [linalg.algs.blas3.rank2k].
2 Mandates:
• (2.1) If InOutMat has layout_blas_packed layout, then the layout’s Triangle template argument has the same type as the function’s Triangle template argument;
• (2.2) possibly-addable<decltype(A), decltype(B), decltype(C)>() is true; and
• (2.3) compatible-static-extents<decltype(A), decltype(A)>(0, 1) is true.
3 Preconditions:
• (3.1) addable(A, B, C) is true, and
• (3.2) A.extent(0) equals A.extent(1).
4 Complexity: O( A.extent(0) ⋅ A.extent(1) ⋅ C.extent(0) )
template<in-matrix InMat1,
in-matrix InMat2,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void symmetric_matrix_rank_2k_update(
InMat1 A,
InMat2 B,
InOutMat C,
Triangle t);
template<class ExecutionPolicy,
in-matrix InMat1,
in-matrix InMat2,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void symmetric_matrix_rank_2k_update(
ExecutionPolicy&& exec,
InMat1 A,
InMat2 B,
InOutMat C,
Triangle t);
5 Effects: Computes a matrix C′ such that C′=C+AB^T+BA^T, and assigns each element of C′ to the corresponding element of C.
template<in-matrix InMat1,
in-matrix InMat2,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void hermitian_matrix_rank_2k_update(
InMat1 A,
InMat2 B,
InOutMat C,
Triangle t);
template<class ExecutionPolicy,
in-matrix InMat1,
in-matrix InMat2,
possibly-packed-inout-matrix InOutMat,
class Triangle>
void hermitian_matrix_rank_2k_update(
ExecutionPolicy&& exec,
InMat1 A,
InMat2 B,
InOutMat C,
Triangle t);
6 Effects: Computes a matrix C′ such that C′=C+AB^T+BA^T, and assigns each element of C′ to the corresponding element of C.
Solve multiple triangular linear systems [linalg.algs.blas3.trsm]
[Note: These functions correspond to the BLAS function xTRSM. – end note]
template<in-matrix InMat1,
class Triangle,
class DiagonalStorage,
in-matrix InMat2,
out-matrix OutMat,
class BinaryDivideOp>
void triangular_matrix_matrix_left_solve(
InMat1 A,
Triangle t,
DiagonalStorage d,
InMat2 B,
OutMat X,
BinaryDivideOp divide);
template<class ExecutionPolicy,
in-matrix InMat1,
class Triangle,
class DiagonalStorage,
in-matrix InMat2,
out-matrix OutMat,
class BinaryDivideOp>
void triangular_matrix_matrix_left_solve(
ExecutionPolicy&& exec,
InMat1 A,
Triangle t,
DiagonalStorage d,
InMat2 B,
OutMat X,
BinaryDivideOp divide);
1 These functions perform multiple matrix solves, taking into account the Triangle and DiagonalStorage parameters that apply to the triangular matrix A [linalg.general].
2 Mandates:
• (2.1) If InMat1 has layout_blas_packed layout, then the layout’s Triangle template argument has the same type as the function’s Triangle template argument;
• (2.2) possibly-multipliable<InMat1, OutMat, InMat2>() is true; and
• (2.3) compatible-static-extents<InMat1, InMat1>(0,1) is true.
3 Preconditions:
• (3.1) multipliable(A,X,B) is true, and
• (3.2) A.extent(0) == A.extent(1) is true.
4 Effects: Computes X′ such that AX′=B, and assigns each element of X′ to the corresponding element of X. If no such X′ exists, then the elements of X are valid but unspecified.
5 Complexity: O( A.extent(0) ⋅ X.extent(1) ⋅ X.extent(1) )
[Note: Since the triangular matrix is on the left, the desired divide implementation in the case of noncommutative multiplication would be mathematically equivalent to y^−1x, where x is the first
argument and y is the second argument, and y^−1 denotes the multiplicative inverse of y. – end note]
template<in-matrix InMat1,
class Triangle,
class DiagonalStorage,
in-matrix InMat2,
out-matrix OutMat>
void triangular_matrix_matrix_left_solve(
InMat1 A,
Triangle t,
DiagonalStorage d,
InMat2 B,
OutMat X);
6 Effects: Equivalent to:
template<class ExecutionPolicy,
in-matrix InMat1,
class Triangle,
class DiagonalStorage,
in-matrix InMat2,
out-matrix OutMat>
void triangular_matrix_matrix_left_solve(
ExecutionPolicy&& exec,
InMat1 A,
Triangle t,
DiagonalStorage d,
InMat2 B,
OutMat X);
7 Effects: Equivalent to:
A, t, d, B, X, divides<void>{});
template<in-matrix InMat1,
class Triangle,
class DiagonalStorage,
in-matrix InMat2,
out-matrix OutMat,
class BinaryDivideOp>
void triangular_matrix_matrix_right_solve(
InMat1 A,
Triangle t,
DiagonalStorage d,
InMat2 B,
OutMat X,
BinaryDivideOp divide);
template<class ExecutionPolicy,
in-matrix InMat1,
class Triangle,
class DiagonalStorage,
in-matrix InMat2,
out-matrix OutMat,
class BinaryDivideOp>
void triangular_matrix_matrix_right_solve(
ExecutionPolicy&& exec,
InMat1 A,
Triangle t,
DiagonalStorage d,
InMat2 B,
OutMat X,
BinaryDivideOp divide);
8 These functions perform multiple matrix solves, taking into account the Triangle and DiagonalStorage parameters that apply to the triangular matrix A [linalg.general].
9 Mandates:
• (9.1) If InMat1 has layout_blas_packed layout, then the layout’s Triangle template argument has the same type as the function’s Triangle template argument;
• (9.2) possibly-multipliable<OutMat, InMat1, InMat2>() is true; and
• (9.3) compatible-static-extents<InMat1, InMat1>(0,1) is true.
10 Preconditions:
• (10.1) multipliable(X,A,B) is true, and
• (10.2) A.extent(0) == A.extent(1) is true.
11 Effects: Computes X′ such that X′A=B, and assigns each element of X′ to the corresponding element of X. If no such X′ exists, then the elements of X are valid but unspecified.
12 Complexity: O( B.extent(0) ⋅ B.extent(1) ⋅ A.extent(1) )
[Note: Since the triangular matrix is on the right, the desired divide implementation in the case of noncommutative multiplication would be mathematically equivalent to xy^−1, where x is the first
argument and y is the second argument, and y^−1 denotes the multiplicative inverse of y. – end note]
template<in-matrix InMat1,
class Triangle,
class DiagonalStorage,
in-matrix InMat2,
out-matrix OutMat>
void triangular_matrix_matrix_right_solve(
InMat1 A,
Triangle t,
DiagonalStorage d,
InMat2 B,
OutMat X);
13 Effects: Equivalent to:
template<class ExecutionPolicy,
in-matrix InMat1,
class Triangle,
class DiagonalStorage,
in-matrix InMat2,
out-matrix OutMat>
void triangular_matrix_matrix_right_solve(
ExecutionPolicy&& exec,
InMat1 A,
Triangle t,
DiagonalStorage d,
InMat2 B,
OutMat X);
14 Effects: Equivalent to:
A, t, d, B, X, divides<void>{});
Solve multiple triangular linear systems in-place [linalg.algs.blas3.inplacetrsm]
[Note: These functions correspond to the BLAS function xTRSM. – end note]
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-matrix InOutMat,
class BinaryDivideOp>
void triangular_matrix_matrix_left_solve(
InMat A,
Triangle t,
DiagonalStorage d,
InOutMat B,
BinaryDivideOp divide);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-matrix InOutMat,
class BinaryDivideOp>
void triangular_matrix_matrix_left_solve(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InOutMat B,
BinaryDivideOp divide);
1 These functions perform multiple in-place matrix solves, taking into account the Triangle and DiagonalStorage parameters that apply to the triangular matrix A [linalg.general].
[Note: This algorithm makes it possible to compute factorizations like Cholesky and LU in place.
Performing triangular solve in place hinders parallelization. However, other ExecutionPolicy specific optimizations, such as vectorization, are still possible. – end note]
2 Mandates:
• (2.1) If InMat has layout_blas_packed layout, then the layout’s Triangle template argument has the same type as the function’s Triangle template argument;
• (2.2) possibly-multipliable<InMat, InOutMat, InOutMat>() is true; and
• (2.3) compatible-static-extents<InMat, InMat>(0,1) is true.
3 Preconditions:
• (3.1) multipliable(A,B,B) is true, and
• (3.2) A.extent(0) == A.extent(1) is true.
4 Effects: Computes X′ such that AX′=B, and assigns each element of X′ to the corresponding element of B. If so such X′ exists, then the elements of B are valid but unspecified.
5 Complexity: O( A.extent(0) ⋅ A.extent(1) ⋅ B.extent(1) )
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-matrix InOutMat>
void triangular_matrix_matrix_left_solve(
InMat A,
Triangle t,
DiagonalStorage d,
InOutMat B);
6 Effects: Equivalent to:
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-matrix InOutMat>
void triangular_matrix_matrix_left_solve(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InOutMat B);
7 Effects: Equivalent to:
A, t, d, B, divides<void>{});
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-matrix InOutMat,
class BinaryDivideOp>
void triangular_matrix_matrix_right_solve(
InMat A,
Triangle t,
DiagonalStorage d,
InOutMat B,
BinaryDivideOp divide);
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-matrix InOutMat,
class BinaryDivideOp>
void triangular_matrix_matrix_right_solve(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InOutMat B,
BinaryDivideOp divide);
8 These functions perform multiple in-place matrix solves, taking into account the Triangle and DiagonalStorage parameters that apply to the triangular matrix A [linalg.general].
[Note: This algorithm makes it possible to compute factorizations like Cholesky and LU in place.
Performing triangular solve in place hinders parallelization. However, other ExecutionPolicy specific optimizations, such as vectorization, are still possible. – end note]
9 Mandates:
• (9.1) If InMat has layout_blas_packed layout, then the layout’s Triangle template argument has the same type as the function’s Triangle template argument;
• (9.2) possibly-multipliable<InOutMat, InMat, InOutMat>() is true; and
• (9.3) compatible-static-extents<InMat, InMat>(0,1) is true.
10 Preconditions:
• (10.1) multipliable(B,A,B) is true, and
• (10.2) A.extent(0) == A.extent(1) is true.
11 Effects: Computes X′ such that X′A=B, and assigns each element of X′ to the corresponding element of B. If so such X′ exists, then the elements of B are valid but unspecified.
12 Complexity: O( A.extent(0) ⋅ A.extent(1) ⋅ B.extent(1) )
template<in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-matrix InOutMat>
void triangular_matrix_matrix_right_solve(
InMat A,
Triangle t,
DiagonalStorage d,
InOutMat B);
13 Effects: Equivalent to:
template<class ExecutionPolicy,
in-matrix InMat,
class Triangle,
class DiagonalStorage,
inout-matrix InOutMat>
void triangular_matrix_matrix_right_solve(
ExecutionPolicy&& exec,
InMat A,
Triangle t,
DiagonalStorage d,
InOutMat B);
14 Effects: Equivalent to:
A, t, d, B, divides<void>{});
Cholesky factorization
1 This example shows how to compute the Cholesky factorization of a real symmetric positive definite matrix A stored as an mdspan with a unique non-packed layout. The algorithm imitates DPOTRF2 in
LAPACK 3.9.0. If Triangle is upper_triangle_t, then it computes the factorization A=U^TU in place, with U stored in the upper triangle of A on output. Otherwise, it computes the factorization A=L
L^T in place, with L stored in the lower triangle of A on output. The function returns 0 if success, else k+1 if row/column k has a zero or NaN (not a number) diagonal entry.
#include <linalg>
#include <cmath>
// Flip upper to lower, and lower to upper
lower_triangular_t opposite_triangle(upper_triangular_t) {
return {};
upper_triangular_t opposite_triangle(lower_triangular_t) {
return {};
// Returns nullopt if no bad pivots,
// else the index of the first bad pivot.
// A "bad" pivot is zero or NaN.
template<inout-matrix InOutMat,
class Triangle>
std::optional<typename InOutMat::size_type>
cholesky_factor(InOutMat A, Triangle t)
using std::submdspan;
using std::tuple;
using value_type = typename InOutMat::value_type;
using size_type = typename InOutMat::size_type;
constexpr value_type ZERO {};
constexpr value_type ONE (1.0);
const size_type n = A.extent(0);
if (n == 0) {
return std::nullopt;
else if (n == 1) {
if (A[0,0] <= ZERO || std::isnan(A[0,0])) {
return {size_type(1)};
A[0,0] = std::sqrt(A[0,0]);
else {
// Partition A into [A11, A12,
// A21, A22],
// where A21 is the transpose of A12.
const size_type n1 = n / 2;
const size_type n2 = n - n1;
auto A11 = submdspan(A, pair{0, n1}, pair{0, n1});
auto A22 = submdspan(A, pair{n1, n}, pair{n1, n});
// Factor A11
const auto info1 = cholesky_factor(A11, t);
if (info1.has_value()) {
return info1;
using std::linalg::explicit_diagonal;
using std::linalg::symmetric_matrix_rank_k_update;
using std::linalg::transposed;
if constexpr (std::is_same_v<Triangle, upper_triangle_t>) {
// Update and scale A12
auto A12 = submdspan(A, tuple{0, n1}, tuple{n1, n});
using std::linalg::triangular_matrix_matrix_left_solve;
// BLAS would use original triangle; we need to flip it
opposite_triangle(t), explicit_diagonal, A12);
// A22 = A22 - A12^T * A12
// The Triangle argument applies to A22,
// not transposed(A12), so we don't flip it.
symmetric_matrix_rank_k_update(-ONE, transposed(A12),
A22, t);
else {
// Compute the Cholesky factorization A = L * L^T
// Update and scale A21
auto A21 = submdspan(A, tuple{n1, n}, tuple{0, n1});
using std::linalg::triangular_matrix_matrix_right_solve;
// BLAS would use original triangle; we need to flip it
opposite_triangle(t), explicit_diagonal, A21);
// A22 = A22 - A21 * A21^T
symmetric_matrix_rank_k_update(-ONE, A21, A22, t);
// Factor A22
const auto info2 = cholesky_factor(A22, t);
if (info2.has_value()) {
return {info2.value() + n1};
return std::nullopt;
Solve linear system using Cholesky factorization
1 This example shows how to solve a symmetric positive definite linear system Ax=b, using the Cholesky factorization computed in the previous example in-place in the matrix A. The example assumes
that cholesky_factor(A, t) returned nullopt, indicating no zero or NaN pivots.
template<in-matrix InMat,
class Triangle,
in-vector InVec,
out-vector OutVec>
void cholesky_solve(
InMat A,
Triangle t,
InVec b,
OutVec x)
using std::linalg::explicit_diagonal;
using std::linalg::transposed;
using std::linalg::triangular_matrix_vector_solve;
if constexpr (std::is_same_v<Triangle, upper_triangle_t>) {
// Solve Ax=b where A = U^T U
// Solve U^T c = b, using x to store c.
opposite_triangle(t), explicit_diagonal, b, x);
// Solve U x = c, overwriting x with result.
triangular_matrix_vector_solve(A, t, explicit_diagonal, x);
else {
// Solve Ax=b where A = L L^T
// Solve L c = b, using x to store c.
triangular_matrix_vector_solve(A, t, explicit_diagonal, b, x);
// Solve L^T x = c, overwriting x with result.
opposite_triangle(t), explicit_diagonal, x);
Compute QR factorization of a tall skinny matrix
1 This example shows how to compute the QR factorization of a “tall and skinny” matrix V, using a cache-blocked algorithm based on rank-k symmetric matrix update and Cholesky factorization. “Tall and
skinny” means that the matrix has many more rows than columns.
// Compute QR factorization A = Q R, with A storing Q.
template<inout-matrix InOutMat,
out-matrix OutMat>
std::optional<typename InOutMat::size_type>
InOutMat A, // A on input, Q on output
OutMat R)
using std::full_extent;
using std::submdspan;
using std::tuple;
using size_type = typename InOutMat::size_type;
// One might use cache size, sizeof(element_type), and A.extent(1)
// to pick the number of rows per block. For now, we just pick
// some constant.
constexpr size_type max_num_rows_per_block = 500;
using R_value_type = typename OutMat::value_type;
constexpr R_element_type ZERO {};
for(size_type j = 0; j < R.extent(1); ++j) {
for(size_type i = 0; i < R.extent(0); ++i) {
R[i,j] = ZERO;
// Cache-blocked version of R = R + A^T * A.
const auto num_rows = A.extent(0);
auto rest_num_rows = num_rows;
auto A_rest = A;
while(A_rest.extent(0) > 0) {
const size_type num_rows_per_block =
std::min(A.extent(0), max_num_rows_per_block);
auto A_cur = submdspan(A_rest,
tuple{0, num_rows_per_block},
A_rest = submdspan(A_rest,
tuple{num_rows_per_block, A_rest.extent(0)},
// R = R + A_cur^T * A_cur
using std::linalg::symmetric_matrix_rank_k_update;
constexpr R_element_type ONE(1.0);
// The Triangle argument applies to R,
// not transposed(A_cur), so we don't flip it.
R, upper_triangle);
const auto info = cholesky_factor(R, upper_triangle);
if(info.has_value()) {
return info;
using std::linalg::triangular_matrix_matrix_left_solve;
triangular_matrix_matrix_left_solve(R, upper_triangle, A);
return std::nullopt;
// Compute QR factorization A = Q R.
// Use R_tmp as temporary R factor storage
// for iterative refinement.
template<in-matrix InMat,
out-matrix OutMat1,
out-matrix OutMat2,
out-matrix OutMat3>
std::optional<typename OutMat1::size_type>
InMat A,
OutMat1 Q,
OutMat2 R_tmp,
OutMat3 R)
assert(R.extent(0) == R.extent(1));
assert(A.extent(1) == R.extent(0));
assert(R_tmp.extent(0) == R_tmp.extent(1));
assert(A.extent(0) == Q.extent(0));
assert(A.extent(1) == Q.extent(1));
std::linalg::copy(A, Q);
const auto info1 = cholesky_tsqr_one_step(Q, R);
if(info1.has_value()) {
return info1;
// Use one step of iterative refinement to improve accuracy.
const auto info2 = cholesky_tsqr_one_step(Q, R_tmp);
if(info2.has_value()) {
return info2;
// R = R_tmp * R
using std::linalg::triangular_matrix_product;
triangular_matrix_product(R_tmp, upper_triangle,
explicit_diagonal, R);
return std::nullopt; | {"url":"https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p1673r13.html","timestamp":"2024-11-11T00:08:04Z","content_type":"application/xhtml+xml","content_length":"1038038","record_id":"<urn:uuid:8e949f46-b5bc-4d2c-b334-bddbf9fb5f86>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00388.warc.gz"} |
A problem-based approach to teaching a course in engineering mechanics
Problem-Based Learning (PBL) can be defined as a learning environment where problems drive the learning. A teaching session begins with a problem to be solved, in such a way that students need to
gain new knowledge before they can solve the problem. This paper discusses the application of PBL to teaching an introductory course in engineering mechanics at Aalborg University, Copenhagen,
Denmark for first semester students enrolled in the program “Sustainable Design". We pose realistic problems which do not necessarily have a single correct solution. Project work in groups also
presents itself as a supplement for conventional engineering education. The students themselves should interpret the problem posed, gather needed information, identify possible solutions, evaluate
options and present conclusions. The paper also presents an initial assessment of the experiences gained from implementing PBL in the course. We conclude with a discussion of some issues in
implementing PBL in engineering and mathematics courses.
Originalsprog Engelsk
Titel 8th International Research Symposium on Problem-Based Learning, IRSPBL 2020
Redaktører Aida Guerra, Anette Kolmos, Juebei Chen, Maiken Winther
Antal sider 11
Forlag Aalborg University
Publikationsdato 2020
Sider 499-509
ISBN (Trykt) 9788772103136
Status Udgivet - 2020
Begivenhed 8th International Research Symposium on Problem-Based Learning, IRSPBL 2020 - Virtual, Online
Varighed: 18 aug. 2020 → 21 aug. 2020
Konference 8th International Research Symposium on Problem-Based Learning, IRSPBL 2020
By Virtual, Online
Periode 18/08/2020 → 21/08/2020
Navn International Research Symposium on PBL
Bibliografisk note
Publisher Copyright:
© The authors, 2020. | {"url":"https://researchprofiles.ku.dk/da/publications/a-problem-based-approach-to-teaching-a-course-in-engineering-mech","timestamp":"2024-11-10T08:39:51Z","content_type":"text/html","content_length":"46700","record_id":"<urn:uuid:5de479a7-be2f-4d5a-88dc-7ac6541c3f03>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00437.warc.gz"} |
Quadratic Equation - Formula, Examples | Quadratic Formula - [[company name]] [[target location]], [[stateabr]]
Quadratic Equation Formula, Examples
If you’re starting to figure out quadratic equations, we are excited regarding your journey in mathematics! This is really where the fun begins!
The data can look enormous at start. However, give yourself some grace and room so there’s no rush or stress while solving these problems. To master quadratic equations like a professional, you will
need understanding, patience, and a sense of humor.
Now, let’s start learning!
What Is the Quadratic Equation?
At its center, a quadratic equation is a arithmetic equation that states different situations in which the rate of change is quadratic or relative to the square of few variable.
However it might appear like an abstract theory, it is simply an algebraic equation described like a linear equation. It usually has two results and uses complicated roots to work out them, one
positive root and one negative, through the quadratic equation. Working out both the roots should equal zero.
Meaning of a Quadratic Equation
First, bear in mind that a quadratic expression is a polynomial equation that includes a quadratic function. It is a second-degree equation, and its usual form is:
ax2 + bx + c
Where “a,” “b,” and “c” are variables. We can use this equation to solve for x if we put these variables into the quadratic equation! (We’ll go through it later.)
Any quadratic equations can be scripted like this, which results in working them out straightforward, relatively speaking.
Example of a quadratic equation
Let’s compare the given equation to the last equation:
x2 + 5x + 6 = 0
As we can observe, there are 2 variables and an independent term, and one of the variables is squared. Therefore, linked to the quadratic formula, we can confidently state this is a quadratic
Commonly, you can find these types of equations when scaling a parabola, which is a U-shaped curve that can be plotted on an XY axis with the details that a quadratic equation provides us.
Now that we learned what quadratic equations are and what they look like, let’s move ahead to figuring them out.
How to Figure out a Quadratic Equation Employing the Quadratic Formula
While quadratic equations might seem greatly complicated when starting, they can be broken down into several easy steps employing an easy formula. The formula for figuring out quadratic equations
involves setting the equal terms and applying rudimental algebraic functions like multiplication and division to get 2 answers.
After all operations have been carried out, we can solve for the numbers of the variable. The solution take us another step closer to work out the answer to our first question.
Steps to Solving a Quadratic Equation Utilizing the Quadratic Formula
Let’s quickly put in the common quadratic equation once more so we don’t forget what it seems like
ax2 + bx + c=0
Ahead of working on anything, bear in mind to separate the variables on one side of the equation. Here are the 3 steps to work on a quadratic equation.
Step 1: Note the equation in conventional mode.
If there are variables on both sides of the equation, total all equivalent terms on one side, so the left-hand side of the equation equals zero, just like the conventional model of a quadratic
Step 2: Factor the equation if feasible
The standard equation you will wind up with should be factored, usually through the perfect square method. If it isn’t possible, put the variables in the quadratic formula, which will be your best
friend for figuring out quadratic equations. The quadratic formula looks similar to this:
Every terms correspond to the same terms in a standard form of a quadratic equation. You’ll be utilizing this significantly, so it is wise to memorize it.
Step 3: Implement the zero product rule and work out the linear equation to discard possibilities.
Now that you possess two terms equivalent to zero, work on them to get two results for x. We possess 2 answers due to the fact that the solution for a square root can be both negative or positive.
Example 1
2x2 + 4x - x2 = 5
At the moment, let’s break down this equation. Primarily, simplify and place it in the standard form.
x2 + 4x - 5 = 0
Now, let's determine the terms. If we contrast these to a standard quadratic equation, we will get the coefficients of x as ensuing:
To figure out quadratic equations, let's plug this into the quadratic formula and solve for “+/-” to involve each square root.
We solve the second-degree equation to obtain:
Now, let’s simplify the square root to attain two linear equations and solve:
x=-4+62 x=-4-62
x = 1 x = -5
Now, you have your result! You can revise your work by checking these terms with the original equation.
12 + (4*1) - 5 = 0
1 + 4 - 5 = 0
-52 + (4*-5) - 5 = 0
25 - 20 - 5 = 0
That's it! You've worked out your first quadratic equation utilizing the quadratic formula! Congratulations!
Example 2
Let's work on one more example.
3x2 + 13x = 10
Initially, put it in the standard form so it is equivalent zero.
3x2 + 13x - 10 = 0
To figure out this, we will plug in the figures like this:
a = 3
b = 13
c = -10
figure out x utilizing the quadratic formula!
Let’s clarify this as far as possible by figuring it out exactly like we did in the last example. Solve all simple equations step by step.
You can work out x by taking the negative and positive square roots.
x=-13+176 x=-13-176
x=46 x=-306
x=23 x=-5
Now, you have your answer! You can check your work utilizing substitution.
3*(2/3)2 + (13*2/3) - 10 = 0
4/3 + 26/3 - 10 = 0
30/3 - 10 = 0
10 - 10 = 0
3*-52 + (13*-5) - 10 = 0
75 - 65 - 10 =0
And that's it! You will figure out quadratic equations like a professional with a bit of patience and practice!
Given this overview of quadratic equations and their basic formula, children can now take on this challenging topic with confidence. By opening with this simple definitions, learners acquire a solid
foundation before moving on to further intricate theories ahead in their studies.
Grade Potential Can Guide You with the Quadratic Equation
If you are fighting to understand these ideas, you may require a mathematics instructor to assist you. It is better to ask for assistance before you lag behind.
With Grade Potential, you can understand all the helpful hints to ace your subsequent mathematics exam. Turn into a confident quadratic equation problem solver so you are ready for the following big
theories in your math studies. | {"url":"https://www.clevelandinhometutors.com/blog/quadratic-equation-formula-examples-quadratic-formula","timestamp":"2024-11-05T12:04:40Z","content_type":"text/html","content_length":"78664","record_id":"<urn:uuid:1df61f80-dffb-4525-9743-be49f96d4d39>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00751.warc.gz"} |
May 2013
• 73 participants
• 70 discussions
Hi there, I agreed to help organising NumPy sprints during the scipy 2013 conference in Austin. As some of you may know, Stéfan and me will present a tutorial on NumPy C code, so if we do our job
correctly, we should have a few new people ready to help out during the sprints. It would be good to: - have some focus topics for improvements - know who is going to be there at the sprint to work
on things and/or help newcomers Things I'd like to work on myself is looking into splitting things from multiarray, think about a better internal API for dtype registration/hooks (with the goal to
remove any date dtype hardcoding in both multiarray and ufunc machinery), but I am sure others have more interesting ideas :) thanks, David
Hi all, I am pleased to announce that four new versions of WinPython have been released yesterday with Python 2.7.5 and 3.3.2, 32 and 64 bits. Many packages have been added or upgraded. Special
thanks to Christoph Gohlke for building most of the binary packages bundled in WinPython. WinPython is a free open-source portable distribution of Python for Windows, designed for scientists. It is a
full-featured (see http://code.google.com/p/winpython/wiki/PackageIndex) Python-based scientific environment: * Designed for scientists (thanks to the integrated libraries NumPy, SciPy, Matplotlib,
guiqwt, etc.: * Regular *scientific users*: interactive data processing and visualization using Python with Spyder * *Advanced scientific users and software developers*: Python applications
development with Spyder, version control with Mercurial and other development tools (like gettext) * *Portable*: preconfigured, it should run out of the box on any machine under Windows (without any
installation requirements) and the folder containing WinPython can be moved to any location (local, network or removable drive) * *Flexible*: one can install (or should I write "use" as it's
portable) as many WinPython versions as necessary (like isolated and self-consistent environments), even if those versions are running different versions of Python (2.7, 3.x in the near future) or
different architectures (32bit or 64bit) on the same machine * *Customizable*: using the integrated package manager (wppm, as WinPython Package Manager), it's possible to install, uninstall or
upgrade Python packages (see http://code.google.com/p/winpython/wiki/WPPM for more details on supported package formats). *WinPython is not an attempt to replace Python(x,y)*, this is just something
different (see http://code.google.com/p/winpython/wiki/Roadmap): more flexible, easier to maintain, movable and less invasive for the OS, but certainly less user-friendly, with less packages/contents
and without any integration to Windows explorer [*]. [*] Actually there is an optional integration into Windows explorer, providing the same features as the official Python installer regarding file
associations and context menu entry (this option may be activated through the WinPython Control Panel), and adding shortcuts to Windows Start menu. Enjoy! -Pierre
=========================== Announcing python-blosc 1.1 =========================== What is it? =========== python-blosc (http://blosc.pydata.org/) is a Python wrapper for the Blosc compression
library. Blosc (http://blosc.org) is a high performance compressor optimized for binary data. It has been designed to transmit data to the processor cache faster than the traditional, non-compressed,
direct memory fetch approach via a memcpy() OS call. Whether this is achieved or not depends of the data compressibility, the number of cores in the system, and other factors. See a series of
benchmarks conducted for many different systems: http://blosc.org/trac/wiki/SyntheticBenchmarks. Blosc works well for compressing numerical arrays that contains data with relatively low entropy, like
sparse data, time series, grids with regular-spaced values, etc. There is also a handy command line for Blosc called Bloscpack (https://github.com/esc/bloscpack) that allows you to compress large
binary datafiles on-disk. Although the format for Bloscpack has not stabilized yet, it allows you to effectively use Blosc from your favorite shell. What is new? ============ - Added new
`compress_ptr` and `decompress_ptr` functions that allows to compress and decompress from/to a data pointer, avoiding an itermediate copy for maximum speed. Be careful, as these are low level calls,
and user must make sure that the pointer data area is safe. - Since Blosc (the C library) already supports to be installed as an standalone library (via cmake), it is also possible to link
python-blosc against a system Blosc library. - The Python calls to Blosc are now thread-safe (another consequence of recent Blosc library supporting this at C level). - Many checks on types and
ranges of values have been added. Most of the calls will now complain when passed the wrong values. - Docstrings are much improved. Also, Sphinx-based docs are available now. Many thanks to Valentin
Hänel for his impressive work for this release. For more info, you can see the release notes in: https://github.com/FrancescAlted/python-blosc/wiki/Release-notes More docs and examples are available
in the documentation site: http://blosc.pydata.org Installing ========== python-blosc is in PyPI repository, so installing it is easy: $ pip install -U blosc # yes, you should omit the python- prefix
Download sources ================ The sources are managed through github services at: http://github.com/FrancescAlted/python-blosc Documentation ============= There is Sphinx-based documentation site
at: http://blosc.pydata.org/ Mailing list ============ There is an official mailing list for Blosc at: blosc(a)googlegroups.com http://groups.google.es/group/blosc Licenses ======== Both Blosc and
its Python wrapper are distributed using the MIT license. See: https://github.com/FrancescAlted/python-blosc/blob/master/LICENSES for more details. Enjoy! -- Francesc Alted
Hi all, Just seeking some info here. The file stdint.h was part of the C99 standard and has types for integers of specified width and thus could be used to simplify some of the numpy configuration.
I'm curious as to which compilers might be a problem and what folks think of that possibility. Chuck
hi, once again I want to bring up the median algorithm which is implemented in terms of sorting in numpy. median (and percentile and a couple more functions) can be more efficiently implemented in
terms of a selection algorithm. The complexity can them be linear instead of linearithmic. I found numerous discussions of this in the list archives [1, 2, 3] but I did not find why those attempts
failed, the threads all just seemed to stop. Did the previous attempts fail due to lack of time or was there a fundamental reason blocking this change? In the hope of the former, I went ahead and
implemented a prototype of a partition function (similar to [3] but only one argument) and implemented median in terms of it. partition not like C++ partition, its equivalent to nth_element in C++,
maybe its better to name it nth_element? The code is available here: https://github.com/juliantaylor/numpy/tree/select-median the partition interface is: ndarray.partition(kth, axis=-1) kth is an
integer The array is transformed so the k-th element of the array is in its final sorted order, all below are smaller all above are greater, but the ordering is undefined Example: In [1]: d =
np.arange(10); np.random.shuffle(d) In [2]: d Out[2]: array([1, 7, 0, 2, 5, 6, 8, 9, 3, 4]) In [3]: np.partition(d, 3) Out[3]: array([0, 1, 2, 3, 4, 6, 8, 9, 7, 5]) In [4]: _[3] == 3 Out[5]: True the
performance of median improves as expected: old vs new, 5000, uniform shuffled, out of place: 100us vs 40us old vs new, 50000, uniform shuffled, out of place: 1.12ms vs 0.265ms old vs new, 500000,
uniform shuffled, out of place: 14ms vs 2.81ms The implementation is very much still a prototype, apartition is not exposed (and only implemented as a quicksort) and there is only one algorithm
(quickselect). One could still add median of medians for better worst case performance. If no blockers appear I want to fix this up and file a pull request to have this in numpy 1.8. Guidance on
details of implementation in numpys C api is highly appreciated, its the first time I'm dealing with it. Cheers, Julian Taylor [1] http://thread.gmane.org/gmane.comp.python.numeric.general/50931/
focus=50941 [2] http://thread.gmane.org/gmane.comp.python.numeric.general/32507/focus=41716 [3] http://thread.gmane.org/gmane.comp.python.numeric.general/32341/focus=32348
Hi, From the dot documentation, I tried something simple: a = np.array([[1, 2], [3, 4]]) b = np.array([[1, 2], [3, 4]]) np.dot(a, b) -> array([[ 7, 10], [15, 22]]) And I got expected result but if I
use either a or b as output, results are wrong (and nothing in the dot documentation prevents me from doing this): a = np.array([[1, 2], [3, 4]]) b = np.array([[1, 2], [3, 4]]) np.dot(a,b,out=a) ->
array([[ 6, 20], [15, 46]]) a = np.array([[1, 2], [3, 4]]) b = np.array([[1, 2], [3, 4]]) np.dot(a,b,out=b) -> array([[ 6, 10], [30, 46]]) Can anyone confirm this behavior ? (tested using numpy
1.7.1) Nicolas
Hi, I would like to package pyhdf for Ubuntu and make the package publicly available. Since the license is not totally clear to me (I cannot find any information in the sources, and the cheeseshop
says "public", which doesn't mean anything to me), I tried to contact the maintainer, Andre Gosselin, however the email bounces, so I guess he's gone. Can anyone point me to how to proceed from here?
Cheers, Andreas.
Hi all, I got a weird output from the following script: import numpy as np U = np.zeros(1, dtype=[('x', np.float32, (4,4))]) U[0] = np.eye(4) print U[0] # output: ([[0.0, 1.875, 0.0, 0.0], [0.0, 0.0,
0.0, 0.0], [0.0, 0.0, 0.0, 1.875], [0.0, 0.0, 0.0, 0.0]],) U[0] = np.eye(4, dtype=np.float32) print U[0] # output: ([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0,
1.0]],) The first output is obviously wrong. Can anyone confirm ? (using numpy 1.7.1 on osx 10.8.3) Nicolas
I have a system that transmits signals for an alphabet of M symbols over and additive Gaussian noise channel. The receiver has a 1-d array of complex received values. I'd like to find the means of
the received values according to the symbol that was transmitted. So transmit symbol indexes might be: x = [0, 1, 2, 1, 3, ...] and receive output might be: y = [(1+1j), (1-1j), ...] Suppose the
alphabet was M=4. Then I'd like to get an array of means m[0...3] that correspond to the values of y for each of the corresponding values of x. I can't think of a better way than manually using
loops. Any tricks here?
Hello, I am using ceil() and floor() function to get upper and lower value of some numbers. Let's say: import math x1 = 0.35 y1 = 4.46 >>> math.ceil(x1) 1.0 >>> math.floor(y1) 4.0 The problem is that
If I want to get upper and lower values for the certain step, for example, step = 0.25, ceil() function should give: new_ceil(x1, step) => 0.5 new_floor(y1, step) => 4.25 Because, the step is 0.25
Question: How I can I achieve those results by using ceil() and floor() function, or Is there any equvalent function for that? -- Bakhti | {"url":"https://mail.python.org/archives/list/numpy-discussion@python.org/2013/5/?count=10&page=3","timestamp":"2024-11-14T05:48:33Z","content_type":"text/html","content_length":"108774","record_id":"<urn:uuid:c735d0fc-b065-40b1-91ea-225405afa080>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00672.warc.gz"} |
Superprism effect in a deformed triangular sonic crystal
The superprism effect in a two-dimensional sonic crystal composed of lead cylinders in water aligned on a lattice obtained by varying the angle between the primitive vectors of triangular lattice is
numerically investigated. Symmetry breaking influences the equi-frequency contours to reflect the lattice symmetry, so that compression along a direction leads to smaller critical angles of
incidence. The whole 0°-90° range is spanned by the refracted waves at the water/sonic crystal interface for frequencies between 165 and 183 kHz, in the third band, and angles of incidence between 0°
and 15°. The studied superprism behaviour can be used to achieve both spectral and angular resolution. The refraction angle varies linearly for small angles of incidence below 3° at a constant
frequency, while its frequency dependence at a given angle of incidence is quadratic for small frequencies. Finite-element computations reveal that waves are refracted into the angles calculated from
the equi-frequency contours with small beam divergence at any frequencies and angles of incidence.
Parmak izi
Superprism effect in a deformed triangular sonic crystal' araştırma başlıklarına git. Birlikte benzersiz bir parmak izi oluştururlar. | {"url":"https://research.bau.edu.tr/tr/publications/superprism-effect-in-a-deformed-triangular-sonic-crystal","timestamp":"2024-11-14T21:29:32Z","content_type":"text/html","content_length":"54486","record_id":"<urn:uuid:ef178cb6-fd25-4134-98b6-22a776cf3ee1>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00794.warc.gz"} |
Mathematics Department
To post seminars taking place at the mathematics department, please send email to seminars at math dot harvard dot edu. Do you organize a Math seminar in the Boston area which is not included here?
Is there an event, which should be mentioned. Is an update to an existing link needed? Let us know! Use the "feedback to this page" link at the bottom of the page to submit or send new link to
At the Mathematics Department: At Harvard:
In the Boston area: Other seminars and events:
URL Suggestion: Do you organize a Math seminar in the Boston area which is not included here? Is there an event, which should be mentioned. Is an update to an existing link needed? Let us know! Use
the "feedback to this page" link at the bottom of the page to submit or send new link to webmaster@math.harvard.edu.
To submit seminars for the Mathematics department, please contact the main office. You will have to submit Seminar series title, Talk title, Speaker, Institution of the speaker, Time, Date, Location.
Back to the department homepage , see also | {"url":"https://legacy-www.math.harvard.edu/seminars/","timestamp":"2024-11-14T11:36:38Z","content_type":"application/xhtml+xml","content_length":"14890","record_id":"<urn:uuid:beef5e31-9be7-44d6-a81e-6e544366df02>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00345.warc.gz"} |
What is a visual model in math?
What is a visual model in math?
When we learn maths we develop understanding through visual models – these are “mental pictures” that explain a particular idea or concept. A “visual model” can be as simple as a using the slices of
a cake to represent fractions, but they can explain some pretty complex ideas in advanced maths too.
What are the methods of fractions?
Method 1 to Divide Fractions: Cross Multiplication To get the final answer’s denominator, we have to switch gears and multiply the first fraction’s denominator by the second fraction’s numerator. In
yellow: The first fraction’s numerator is multiplied by the second fraction’s denominator.
What are the three types of fraction models?
The three major categories of fraction models are the area model, linear model, and set model.
How do you divide fractions with paper?
The first step to dividing fractions is to find the reciprocal (reverse the numerator and denominator) of the second fraction. Next, multiply the two numerators. Then, multiply the two denominators.
Finally, simplify the fractions if needed.
What is example of visual representation?
An image is a visual representation of something that depicts or records visual perception. For example, a picture is similar in appearance to some subject, which provides a depiction of a physical
object or a person.
What is the two methods of divide?
The way of dividing something is called a method of division. The methods of division are of three types according to the difficulty level. These are the chunking method or division by repeated
subtraction, short division method or bus stop method and long division method.
What are the different fraction models?
The three major categories of fraction models are the area model, linear model, and set model. Evidence suggests that providing opportunities for students to work with all three models plays a
crucial role in developing a conceptual understanding of fractions.
What are the 4 steps in dividing fractions?
In a few simple steps, I will show you how it is done.
1. Step 1: Write Out the Equation. This is very straight forward.
2. Step 2: Take the Reciprocal of the Divisor. The divisor is the second fraction in the equation.
3. Step 3: Switch Out the Division Sign.
4. Step 4: Solve.
5. 3 Comments.
What are division models?
A repeated subtraction or measurement model of division is a situation where the dividend represents. the number of objects and the divisor represents the size of each group. For example, in the
problem. 12รท4, the 12 stands for the number of objects and the 4 stands for the size of each group. | {"url":"https://eleanorrigby-movie.com/what-is-a-visual-model-in-math/","timestamp":"2024-11-09T01:24:57Z","content_type":"text/html","content_length":"74855","record_id":"<urn:uuid:46fe592a-1bde-4faa-af0b-705c204177ad>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00510.warc.gz"} |
Trajectory and Equation of Position
The trajectory is the geometric line or path described by a moving body. In this section, we are going to study:
Concept of trajectory and position equation
When a body moves from one point to another, it does so by describing a geometric line in space. That geometric line is called trajectory, and it is formed by the successive positions of the end of
the position vector over time. Therefore, we often find the coordinates x, y and z of the position vector written as a function of time, like x(t),y(t) and z(t) to represent the evolution of the
position of the bodies with time.
The trajectory of a body is the geometric line described by a body in motion.
The position equation or trajectory equation represents the position vector as a function of time. Its expression, in Cartesian coordinates and in three dimensions, is given by:
• $\stackrel{\to }{r}\left(t\right)$ : is the position equation or the trajectory equation
• x(t), y(t), z(t): are the coordinates as a function of time
• $\stackrel{\to }{i}\text{,}\stackrel{\to }{j}\text{,}\stackrel{\to }{k}$ :are the unit vectors in the directions of OX, OY and OZ axes respectively
For those problems where you are working in fewer dimensions, you can simplify the previous formula by eliminating unnecessary terms. This way, the position equation:
• In two dimensions becomes $\stackrel{\to }{r}\left(t\right)=x\left(t\right)\stackrel{\to }{i}+y\left(t\right)\stackrel{\to }{j}+\overline{)z\left(t\right)\stackrel{\to }{k}} =x\left(t\right)\
stackrel{\to }{i}+y\left(t\right)\stackrel{\to }{j}$, since z=0
• In two dimensions becomes $\stackrel{\to }{r}\left(t\right)=x\left(t\right)\stackrel{\to }{i}+\overline{)y\left(t\right)\stackrel{\to }{j}}+\overline{)z\left(t\right)\stackrel{\to }{k}} =x\left(t
\right)\stackrel{\to }{i}$, since y=0 and z=0
The following animation illustrates the concept of the position equation or trajectory equation.
Types of trajectory equation
In addition to the above expression, there are other ways of expressing the trajectory of the motion of a body. Below, we show other types of position or trajectory equations:
• Parametric trajectory equations: Each of the coordinates is established as a function of time in the form x=x(t),y=y(t),z=z(t). For example, the parametric coordinates of a body that moves in the
plane x-y could be:
• Explicit trajectory equation: It is obtained by removing the parameter t of the previous expressions and solving one variable in function of the other. In our example, it would be:
□ $x=t+2 ⇒t=x-2$
□ $y={t}^{2}⇒\mathbit{y}\mathbf{=}\mathbf{\left(}\mathbit{x}\mathbf{-}\mathbf{2}{\mathbf{\right)}}^{\mathbf{2}}$
• Implicit trajectory equation: It is obtained by making f(x,y)=0.
□ $\left(x-2{\right)}^{2}-y=0$
Take the following example, imagine that a train is moving east 50 meters every second. After the first second the train is located 50 meters from the origin. After second 2, the train is located 100
m from the origin and so on. Therefore, we could write:
• The motion x coordinate as a function of time: $x=50t$ m
• The position equation: $\stackrel{\to }{r}=50t\stackrel{\to }{i}$ m
• The distance to the origin, given by the magnitude of the position vector: $\left|\stackrel{\to }{r}\right|=50t$ | {"url":"https://www.fisicalab.com/en/section/trajectory-equation","timestamp":"2024-11-03T22:11:27Z","content_type":"text/html","content_length":"114763","record_id":"<urn:uuid:f6ddae07-2081-47b9-8962-cfaba67a3488>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00867.warc.gz"} |
How to Use Math Scavenger Hunts in the Classroom - That One Cheerful ClassroomHow to Use Math Scavenger Hunts in the Classroom
Math scavenger hunts are a fun and engaging way to get students excited about learning math. These hunts involve students working together to solve math problems and clues that lead them to the next
challenge. They can be used in a variety of settings, including the classroom, after-school programs, and even at home. In this blog post, we will explore how to use math scavenger hunts in the
classroom to boost student engagement and understanding of math concepts.
How do math scavenger hunts work?
In a scavenger hunt activity, students can work independently or in pairs to solve math problems that are posted on each of the station cards. They can start at any station card and solve the problem
that’s posted there.
Once they solve the problem, they find another card posted around that room with that solution located on the bottom corner of the card. The new station card will include another problem to solve.
Students solve the new problem, then find the station card with that answer on it. If students solve all of the problems correctly, they’ll move from card to card, then end up back at the station
they started at.
Each student or pair can start at a different station and will move through the same loop of stations simultaneously.
What do students record while they’re working on the math scavenger hunts?
That’s up to you!
Some teachers have students record all of their work on a recording sheet that looks something like this:
Students record the letter of the first station they begin at, include all of their work for solving the problem, then find the card with that answer on it and continue the process.
Some teachers have students use individual whiteboards, since students are more likely to jump in and get started when working on a more fun surface than just paper and pencil. In this case, students
could complete a paper based recording sheet where they track the order of the stations, but there isn’t any work shown. This helps them check to make sure they loop through the scavenger hunt and
they went to every station.
How to Use a Math Scavenger Hunt
Any type of math problems can be added into a scavenger hunt. They require minimal prep once they are created because all you have to do is post them around the room, copy some recording sheets and
have students get started.
Benefits of Using Math Scavenger Hunts in the Classroom
Here’s a quick rundown of the benefits of using scavenger hunts to practice math skills.
Scavenger Hunts Get Students Talking
If you choose to have your students work in pairs, they will be discussing how they solved the problems and thinking through things cooperatively. They can also talk about potential solution
strategies, alternative strategies, or even troubleshooting incorrect answers if one of them didn’t solve correctly.
Scavenger Hunts Get Students Moving
Everyone loves movement, and it’s a great way to keep students engaged and on task. Scavenger hunts require movement all while working on important math skills! You can post scavenger hunts around
your classroom or in a nearby hallway or outside to change the scenery for your students.
Scavenger Hunts are Self-Checking and Give Immediate Feedback
The nature of the scavenger hunt provides immediate feedback because if they can’t find their answer on another card, they know that they made a mistake in their solving of the problem. I usually
talk to my students about What happens if we solve a problem… and then can’t find our answer on another card? Maybe our work has a mistake in it. Students can circle back to that station and
troubleshoot their work.
We also talk about, Maybe our work is correct… and the answer we’re looking for is presented differently. This is another huge benefit of the scavenger hunt format! You can work on the equivalence of
answers (like with fractions) or remembering to label answers (if you’re working with measurement).
Easy to Create from Instructional Materials
Scavenger hunts take careful planning and construction when making sure you place answers on the correct cards, but you can quickly build a scavenger hunt from any set of mathematical concepts
questions you already have handy. Put the questions on the cards, write a key for a random order that you want to use to have the students loop through, I like to use Random.Org’s list generator,
then put the solutions on the cards according to your answer key.
Frees the Teacher Up to Help Those Needing Extra Assistance
With students up and working around the room, the teacher can move from student to student to provide help to those who need it. You can also use this time to pull small groups and work on any skills
that the students need to work on to get to mastery. The students working around the room will be able to use the scavenger hunt as a math center activity and will provide meaningful math practice
within your math centers time.
Ready Made Math Scavenger Hunts
These scavenger hunts are so much fun for students and will feel like they are playing a fun math game. Don’t want to create them yourself? Here are some scavenger hunts I’ve put together to save you
time with planning and creating:
3rd Grade Math Scavenger Hunts
4th Grade Math Scavenger Hunt Bundle
5th Grade Math Scavenger Hunts
Want to try a math scavenger hunt activity with your class for free?
Click here to grab my 4 digit division scavenger hunt activity for free to try out.
One Response
1. I love the scavenger hunts. I used a division one with my 4th graders last year. It was a lot of fun! The hunt for the next problem to complete is exciting for the students. | {"url":"https://thatonecheerfulclassroom.com/math-scavenger-hunts/","timestamp":"2024-11-02T15:51:02Z","content_type":"text/html","content_length":"96232","record_id":"<urn:uuid:caf15984-4f29-4a21-841c-f1fc2349a5c4>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00687.warc.gz"} |
Pyramids Calculators | List of Pyramids Calculators
List of Pyramids Calculators
Pyramids calculators give you a list of online Pyramids calculators. A tool perform calculations on the concepts and applications for Pyramids calculations.
These calculators will be useful for everyone and save time with the complex procedure involved to obtain the calculation results. You can also download, share as well as print the list of Pyramids
calculators with all the formulas. | {"url":"https://www.calculatoratoz.com/en/pyramids-Calculators/CalcList-548","timestamp":"2024-11-03T22:05:28Z","content_type":"application/xhtml+xml","content_length":"115694","record_id":"<urn:uuid:093799c8-75af-4c50-aa4b-b3f580968526>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00378.warc.gz"} |
The Anatomy of Construction Cost Estimation: An Insightful Guide
Christian Monahan is a tech-savvy enthusiast and a respected reviewer of technological gadgets. His 15-year professional journey is marked by tech product reviews, aiding consumers in making
knowledgeable choices. Equipped with a Computer Science degree and driven by a zeal for groundbreaking technology, Christian is dedicated to exploring and understanding the intricacies of the tech
Post a comment
One moment... processing your comment.
0 comments | {"url":"https://costof.com/the-anatomy-of-construction-cost-estimation-an-insightful-guide","timestamp":"2024-11-05T01:26:21Z","content_type":"text/html","content_length":"116386","record_id":"<urn:uuid:7a8dc21b-b1a0-4e8e-a79e-ee7ca49c2d39>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00526.warc.gz"} |
Math Word Problems | Math Questions Answers | Various types of Word Problems
Math Word Problems
Practice various types of math word problems. These activities in math will help the students to understand the different facts of each problem that are useful to translate the information of the
questions in the steps and find exact answers of all the questions.
Math Word Problems:
1. Two cities are 1580 miles apart. A plane leaves one of them traveling towards the other at an average speed o 400 mile per hour. At that time a plane leaves the other traveling towards the first
at an average speed of 390 miles per hr. How long will it take them to meet?
Answer: 2 hours
2. How much pure acid is in 180 milliliters of a 15% solution?
Answer: 117 ml
3. A table is three times as long as it is wide. If it were 5ft shorter and 5 ft wider it would be square (with all sides equal). How long and how wide is the table?
Answer: Length 15, Width 5
4. Nico is saving money for his college education. He invests some money at 9% and 800 less than that amount @ 6%. The investments produced a total f $222 interest in 1 yr. How much did he invest at
each rate?
Answer: $1800 at 9% and $1000 at 6%
5. I had $1.60 in dimes and nickels and had four more dines then nickels. How many of both would I have?
Answer: Nickels 8, Dimes 12
6. How many litters of a 60% acid solution must be mixed with a 25¬id solution to get a 315L of a 50% acid?
7. A Dr. Wishes to mix a solution that is 7% monoxide. She has on hand 60 ml of a 6% solution ad wishes to add 9% solution to obtain the desired 7% solution she should add ___ ML
8. When 1 is added to the difference between four time’s number and 9, the result is greater than 13 added to 3 times the number. Find all such numbers.
9. Randy has $15,000 that he would like to invest. He has a choice of two interest paying bonds, one offering a 4% annual interest and the other paying 8% annual interest. He would like to earn $760
in interest in one year using both types of bonds that he can put towards his vacation. How much money should he invest in each bond?
Answer: $11000 at 4% and $4000 at 8%
10. An elementary school teacher wishes to use 17 feet of scalloped border to enclose a rectangular region on her bulletin board. What is the maximum area that she can enclose with the border?
11. The expression 10,000 – 1,000t can be used to calculate the tax value in dollars of a farm tractor that is t years old. How old will the tractor be when its tax value reaches zero?
(a) 12 yr old, (b) 11 yr old, (c) 10 yr old, (d) 13 yr old
Answer: (c) 10 years
12. The total cost of a video game was $42.39. This included 6% sales tax. Find the price without sales tax.
Answer: $ 39.99 (Approx)
13. Clear-water High School expects a 41% increase in enrollment next year. There are 1850 students enrolled this year. How many students will the school gain? What is the expected enrollment next
Answer: School gain 1632 students and expected enrollment next year = 2109
14. A fair die is tossed once. Which event is more likely, "the number showing is divisible by three" or "the number showing is even"?
Answer: 6
15. An electric clock was stopped by a power failure. What is the probability that the minute hand stopped between the following two numerals on the face of the clock? a) 12 and 3 b) 1 and 5 c) 11
and 1
Answer: (a) P(12 and 3) = 1/4, (b) P(1 and 5) = 1/3, (c) P(11 and 1) = 1/6
16. Amy the chicken is tied to one corner of a rectangular barn with a 12 foot rope. If the width of the barn is 8 ft and the length is 2 yards, find the area outside the barn in which Amy can
travel. Leave pi (Π) in the answer.
17. A swimming pool has to be drained for maintenance. The pool is shaped like a cylinder with a diameter of 10m and a depth of 2.2 m . If the water is pumped out of the pool at the rate of 17 m^3
per hour, how many hours does it take to empty the pool? Use the value 3.14 for , and round your answer to the nearest hour.
18. The height of a triangle is 4 feet more than 3 times the base. If the area is 32 ft2, find the base and height of the triangle.
Answer: Base = 4 feet, Height = 16 feet.
19. In general, the y-intercept of the function F(x) = a*b^x is the point _____.
Answer: (0, a)
20. The domain of F(x)=(2/3)^x is all negative numbers.
A. True B. False
Answer: B. False
21. For all values of a and b that make F(x) = a * b^x a valid exponential function, the graph always has a horizontal asymptote at y = 0.
A. True B. False
Answer: A. True
22. The range of F(x) = 5 * 2^x is all positive real numbers.
A. True B. False
Answer: A. True
23. The range of the function given below is the set of all positive real numbers greater than 6.
F(x) = 6 + 2^x
A. True B. False
Answer: A. True
24. Graduate students were asked to participate in an evaluation of beers. Each student was given three glasses in a random order. Glasses 1 and 2 both had the same inexpensive local beer, while
Glass 3 contained a more expensive imported beer. Students were told that the beer in glass 1 was expensive, but that the beer in glass 2 was not. They were not told anything about glass 3. Students
were asked to rate the beers from 1 to 20 with higher ratings indicating better taste. 8 students responses were randomly selected and are presented below. What is your question? Analyze the data to
answer the question.
Student Glass 1 Glass2 Glass 3
So, the rubric is:
(a) Provide support for a particular approach by exploring whether assumptions underlying parametric/nonparametric/et., assumptions are met.
(b) Surface ideas with techniques (e.g. scatter plot, outliers, etc.) used to analyze the data.3. Define the question and hypotheses ( which of course you will then test).
(c) Identify the key ideas, variables, or issues that frame the research problem
25. A rectangles perimeter is 16 units and its area is 11 square units. please find its exact dimensions.
Answer: Length = 4 + √5 units, Width = 4 - √5 units
26. Astronauts who waked on the moon felt as if they weighed about one sixth of their weight on earth. this is because the weight of an object is determined by the gravitational attraction between
the object and the planet (or moon) its on. you can use the following formula to estimate the weight of an object on the surface of any of the planets: gravitational force=GmM/r^2, where G=universal
gravitational constant,m=mass of the object, M=mass of the planet, and r=radius of the planet. For a given object, Grandma remain constant, so the force of gravity depends only on the variables named
27. A store is having a sale on liquid soap and sponges the cost of 6 bottles of soap is 9.12 the coast of 4 sponges is 5.00 what is the total cost of buying 2 bottles of liquid soap and 3 sponges.
28. A line contains the points(a, b) and ( a+3, b+3). Find the slope of the line.
Answer: Slope = 1
29. Put the following list of real numbers in order from LEAST to GREATEST. Make sure you show all the steps you took to get to your final answer. -√64 ,-7/2, 3.3, √10, 27/3
Answer: -√64, -3.5, √10, 3.3, 27/3
30. Johnathon and Talia simplified the following expression on their Algebra 1 midterm: 7 - 2(x - 5).
Johnathon's work Talia's work
7 - 2(x - 5) 7 - 2(x - 5)
= 7 - 2(x) - 2(-5) = 7 - 2(x) - 2(5)
= 7 - 2x + 10 = 7 - 2x - 10
= - 2x + 17 = - 2x - 3
Answer: Johnathon's work --- Correct Process
31. What is the sales tax and total price of an item that is $45.69 if the sales tax is 7.5 %?
a. sales tax = $3.00; total price = $50.00
b. sales tax = $3.40; total price = $49.09
c. sales tax = $3.43; total price = $49.12
d. sales tax = $3.42; total price = $49.11
Answer: c. sales tax = $3.43; total price = $49.12
32. Kate earns a weekly allowance of $12. How much money will she earn in 8 weeks? Do not enter $.____ dollars
Answer: $ 96
33. A table is shaped like a square and has an area of 100 square feet. Find the length of one side of the table.____ feet
Answer: 10 feet.
34. Juan bought n video games. Write an expression to show the total cost of the games if each game cost $16. Do not enter $ in your answer.
Answer: Total cost f(n) = 16n
35. A sandpit in the shape of a pentagon ABCDE is to be built in such a way that each of its sides are of equal length, but its angles are not all equal. The pentagon is symmetrical about DX, where X
is the midpoint of AB. The angle AXE and BXC are both 45 degrees and each side is 2m long.
(a) Find angle XEA.
(b) Find the length of EX.
(c) How much sand is required if the sandpit is 30 cm deep?
Give your answer to three decimal place.
36. An aircraft takes off from an airstrip and then flies for 16.2 km on a bearing of 066 degrees true. The pilot then makes a let turn of 88 degrees and flies for a further 39.51 km on this course
before deciding to return to the airstrip. (a) Through what angle must the pilot turn to return to the airstrip? (b) How far will the pilot have to fly to return to the airstrip?
37. Gracie decided to rent a limo for her wedding. The Bedazzled Limo Com- pany charges a $50 gas fee, $56.48 per hour to rent the limo and requires a tip of 20% on the total price of the rental (so
20% of the total cost of the rental). The Wedding Bell Limo Company charges $68.36 per hour and requires a 25% tip to the driver. For how long of a rental will the hourly cost be the same? (Round to
the nearest half hour).
38. Wolf Problem Naturalists find that the population of wolves varies sinusoidally with time on a particular land. After 2.5 years of keeping records, a maximum number of wolves, 1100, was recorded.
After 5.2 years, aminimum number of wolves, 300, were recorded.
a. Draw a neat sketch of a graph that models this situation for the first 16 years. Label your axes.
b. Derive an equation expressing the number of wolves, W, as a function of time.
c. Predict the population of wolves in 7 years after keeping records? In 9 years? In 15 years? In 56.5
d. What are the first 4 times (to the nearest hundredth of a year) after keeping records that there are
1000 wolves on the island?
Answers for the different types of math word problems are given below to check the exact answers of the above math questions.
39. Researchers find that the mongoose population is periodic and varies sinusoidally with time. Records were kept beginning with t = 0 years. A minimum number of 500 mongooses occurred when t = 3
years. The next maximum of 1700 mongooses occurred when t = 7 years.
a. Sketch a graph of this sinusoid. Be sure to label the axes on the graph.
b. Write an equation describing the mongoose population P as a function of the time elapsed, t.
c. Mongoose are placed on the endangered species list if the population falls below 900. What are the first two values of t for which this happens?
d. Without using a calculator, how many times does the mongoose population peak in the first 75 years?
Related Concepts
● Math Questions Answers
● Help with Math Problems
● Answer Math Problems
● Math Problem Solver
● Math Unsolved Questions
● Math Questions
● Math Word Problems
● Word Problems on Speed Distance Time
● Algebra Word Problems – Money
Math Problem Answers Index
From Math Word Problems to HOME PAGE
Didn't find what you were looking for? Or want to know more information about Math Only Math. Use this Google Search to find what you need.
New! Comments
Have your say about what you just read! Leave me a comment in the box below. Ask a Question or Answer a Question. | {"url":"https://www.math-only-math.com/math-word-problems.html","timestamp":"2024-11-12T19:57:24Z","content_type":"text/html","content_length":"52148","record_id":"<urn:uuid:718b195a-9674-4e1a-b294-0307474ef6c8>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00738.warc.gz"} |
HHC 2012 RPN Programming Challenge Conundrum
I enjoy the HHC programming challenges, usually tackling after the fact. For personal satisfaction, I try to develop my best solution before looking at the Museum Forum discussion. I cogitated a
little over the weekend of 22-23 September regarding this year's problem, but did not really start trying to write anything until Tuesday the 25th, and relatively quickly had a working routine (all
routines on a wp34s.) Of course I kept up my normal Forum review, not reading any posts that had "HHC Programming Challenge" in the subject line. But I could see the subject lines, and saw that one
mentioned something about a "12 Step Nashville Solution." 12 steps!!? Crap, my first attempt was 38 steps. OK, the first attempt was just to get a working program, I was sure it could be optimized,
but 26 steps worth of optimization seemed unlikely. Long story short, I finally got down to 14 steps before writing this message. In the past, I would probably have been satisfied with going from 38
steps down to 14, but since I know there is a 12 step solution, it is difficult to stop. I'm doubtful I can shave it down to 12 using the solution approach I have chosen. I suspect there is some
fundamental simplification to the solution that I am not going to see. So, if you have read this far, here is the conundrum. Should I:
1. Give up, read all of the Forum posts and be ready to slap my forehead when I see the obvious simplification I've been missing.
2. Give up, don't read the discussion and just try to be satisfied with my 14 step using-registers and 16 step all-stack programs.
3. Don't read the posts and keep on plugging away.
update - I followed option 3 and got down to 12 steps. Then followed option 1. With some insights gained, was able to write a 10 step program to solve the challenge.
Edited: 16 Oct 2012, 10:43 a.m. after one or more responses were posted
10-03-2012, 08:09 PM
There is an eleven step solution :-)
- Pauli
10-03-2012, 09:38 PM
Aarghh! You are a cruel, cruel man :-)
10-03-2012, 10:05 PM
Not really, cruel would have been saying ten.
- Pauli
10-03-2012, 08:59 PM
I for one am always ready to learn: opt. 1
Luiz (Brazil)
10-03-2012, 10:20 PM
Quote: 3. Don't read the posts and keep on plugging away.
I would suggest this option --if you're still having fun-- at least until you find a 14-step stack-only solution. This shouldn't be difficult even on the HP-42S using the formula you appear to have
I guess you'll be surprised to know an unusual 13-step solution is possible on the HP-15C, but not using only the stack, of course :-)
10-04-2012, 05:35 AM
I will suggest you another option:
4. Read the posts and keep on programming your own solutions
Clearly, the best solution is not only the shortest. Length of code greatly depends on calculator faculties. For this WP-34S overlords other great RPN majors (HP-42S, HP-41C, HP-15c) and other or
older calculators have not the tricky stack or integrated instruction to hit length records.
Try to table programs length versus calculator model
A longer personal code don't indicate you miss same important mathematical or logical fact. It's certainly more probable you don't notice any tricky detail in a specific function of your calculator.
This doesn’t mean you are a bad programmer; length of code optimization is only a play.
Day to day usage of your calculator implies a more clever integration of multiple code, an easy user-interface as well as specific capabilities for you specific daily applications.
I remember the old days when I use my Pocket Computer every day at work. Having the shortest or fastest code was not the key point. What save my day was that all the programs where well integrated,
allowing rapid use of the pocket efficient and confident, unambiguous results display. I was using my advanced calculator mainly to check result from other sources of computation.
This cost a few extra steps, but help to spare a lot of time programming, using or verifying results!
Edited: 4 Oct 2012, 5:36 a.m.
10-04-2012, 07:50 AM
I agree with C.Ret. There is probably some mathematical trick or programming instruction that you haven't thought of. Read the posts and learn.
I took a similar approach with the RPL contest. After returning from Nashville I coded up an answer that I liked and then read the forum. There were several tricks there that I never would have
thought of. To me, it's a learning process.
10-04-2012, 12:36 PM
Thanks Dave and to all who commented. I think I'll probably stew a little more on my own, then read the Forums. I had a new idea last night that I thought might help, but still landed at 14 steps*
when I tried using it.
* - Just to clarify, by steps I am using and counting the mandatory END at the end of a program in wp34s. Also, I am not using a label at the beginning. So my programs look something like this:
001 STOS
013 /
014 END
I sure hope the 11 and 12 step programs do not have a Label at the beginning and a RTN at the end counted among those steps, else I am even further away from success.
10-04-2012, 03:07 PM
The final END shouldn't be counted, as mentioned elsewhere. On the HP-42S, for instance, it is appended automatically and adds nothing to the byte count. So you are now at 13 steps. Congrats!
10-04-2012, 04:28 PM
Great! In that case I am now at 12 steps, having successfully exorcized a step from my former 13 step (was 14 steps counting END) version using storage registers. Only one more step to go to get to
11. (Unfortunately, I fear it will take an infinite amount of mental energy to do so, like going from 99.999% of the speed of light, which is pretty darn fast, up to the actual speed of light.)
10-06-2012, 11:37 PM
11 steps & 2 registers, the best I can.
10-08-2012, 12:14 PM
Can you post your answer? It would be interesting to see how it compares to Pauli's 11 step program.
10-08-2012, 02:41 PM
For what it is worth, I was planning to post my best results today. The first thing I did was develop the following as the algorithm I would use to attack the problem:
steps between two points = (D2+D1) * (D2-D1+1)/2 + (D2-D1) - y1 - x2
where D2 = x2+y2 and D1= x1 + y1
(I now realize that the above was not the most enlightened approach, but that’s what I came up with, so I stuck with it.)
I boiled the above into two different equations:
equation 1: steps between points = ((x2+y2)^2-(x1+y1)^2+3*(y2-y1)+x2-x1)/2
The shortest program I could come up with was based on the above, uses registers:
001 STOS 11
002 CPX RCL-Z
003 CPX x<>11
004 Y<>Z
005 CPX+
006 CPX X^2
007 RCL+12
009 RCLx11
010 +
012 /
equation 2: steps between points = (D2*(D2+3)-D1*(D1+1))/2 - y1 - x2
The shortest stack-only program I could come up with was based on the above:
001 RCL+Y
002 <>ZTYX
003 STO+Z
004 +
005 ENTER
006 x^2
007 +
009 RCL+T
010 RCLxT
011 RCL-Y
013 /
014 RCL-T
Of course the above could be shortened to 13 steps by eliminating the ENTER in step 5 and replacing step 7 with RCL+L. But since that would still be 13 steps, and the previous program is only 12,
there was no real point. So I leave the above as my shortest stack-only version. I played around trying to create stack-only programs from the first equation or shorter register-using versions from
the second equation, but did not seem to be able to improve upon the above. So I finally gave up and read the Forum posts over the weekend. While I did not approach the problem quite as efficiently
as Dave's described method, I was not as far off as I feared. But, I don’t think either of my programs breaks any new ground compared to the other solutions presented.
^edit - added new solution, then decided to post as new message
Edited: 9 Oct 2012, 10:50 a.m.
10-09-2012, 10:50 AM
Because this continues to be fun, I applied one of Dave's ideas to my original approach and came up with this equation:
steps between two points = (D2+D1) * (D2-D1+1)/2 - (y1 - y2) - D1
where D2 = x2+y2 and D1= x1 + y1
this led to the following 12 step all-stack solution:
X Y Z T
beginning values in stack -> y2 x2 y1 x1
001 y<>Z swap Y with Z for next step, enables
complex add to create D1 and D2
simultaneously y2 y1 x2 x1
002 CPX STO+Z Complex store add to create D2 in Z,
D1 in T, y2 and y1 still in X and Y y2 y1 D2 D1
003 - subtract y2 from y1 to create y1-y2,
and drop stack to create second copy
of D1 in T y1-y2 D2 D1 D1
004 x<>Y swap X with Y for next step D2 y1-y2 D1 D1
005 STO+T add D2 to D1 in T to create D2+D1 D2 y1-y2 D1 D2+D1
006 RCL-Z Subtract D1 from D2 to create D2-D1 in X D2-D1 y1-y2 D1 D2+D1
007 INC X increment X to create D2-D1+1 D2-D1+1 y1-y2 D1 D2+D1
008 RCLxT mutiply D2-D1+1 by D2+D1 (D2-D1+1)*(D2+D1) y1-y2 D1 D2+D1
009 2 enter 2 2 (D2+D1+1)*(D2-D1) y1-y2 D1
010 / divide 2 into (D2-D1+1)*(D2+D1) (D2-D1+1)*(D2+D1)/2 y1-y2 D1 D1
011 RCL-Y subtract y1-y2 (D2-D1+1)*(D2+D1)/2-(y1-y2) y1-y2 D1 D1
012 RCL-Z subtract D1, final answer (D2-D1+1)*(D2+D1)/2-(y1-y2)-D1
As it is all-stack and is the same length as my previous shortest, I'm pretty sure it is the best I can do. Since my equation has an extra term as compared to Paul's 11 step solution which uses
Dave's equation (I think), I don't believe it is possible to get down to 11 steps. (I'd be happy to see an 11 step version using this equation if anyone can do so.) So hopefully I will stop now.
^edited to add comments and stack diagram
Edited: 9 Oct 2012, 4:20 p.m.
10-08-2012, 03:34 PM
Quote: Can you post your answer? It would be interesting to see how it compares to Pauli's 11 step program.
Sure, Dave. I've used Pauli's magic ^C+ which does two sums at a time. There's no gain (quite the contrary, as two registers are needed), but this exploration of complex instructions might be useful
for other applications:
001 y<> Z
002 ^CSTO 00
003 ^C+
004 ^Cx^2
005 ^CRCL L
006 -
007 -
009 /
010 RCL+ 00
011 RCL- 01 | {"url":"https://archived.hpcalc.org/museumforum/thread-232404-post-232601.html","timestamp":"2024-11-12T05:38:59Z","content_type":"application/xhtml+xml","content_length":"74662","record_id":"<urn:uuid:4317e9d4-675a-4ea6-a4f1-8eff707acf7e>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00077.warc.gz"} |
Using GARCH (1,1) Approach to Estimate Volatility - Finance Train
Using GARCH (1,1) Approach to Estimate Volatility
This video provides an introduction to the GARCH approach to estimating volatility, i.e., Generalized AutoRegressive Conditional Heteroskedasticity.
GARCH is a preferred method for finance professionals as it provides a more real-life estimate while predicting parameters such as volatility, prices and returns.
GARCH(1,1) estimates volatility in a similar way to EWMA (i.e., by conditioning on new information) except that it adds a term for mean reversion. It says the series is "sticky" or somewhat
persistent to a long-run average.
This video is developed by David from Bionic Turtle.
Data Science in Finance: 9-Book Bundle
Master R and Python for financial data science with our comprehensive bundle of 9 ebooks.
What's Included:
• Getting Started with R
• R Programming for Data Science
• Data Visualization with R
• Financial Time Series Analysis with R
• Quantitative Trading Strategies with R
• Derivatives with R
• Credit Risk Modelling With R
• Python for Data Science
• Machine Learning in Finance using Python
Each book includes PDFs, explanations, instructions, data files, and R code for all examples.
Get the Bundle for $39 (Regular $57)
JOIN 30,000 DATA PROFESSIONALS
Free Guides - Getting Started with R and Python
Enter your name and email address below and we will email you the guides for R programming and Python. | {"url":"https://financetrain.com/using-garch-approach-to-estimate-volatility","timestamp":"2024-11-04T05:56:49Z","content_type":"text/html","content_length":"92439","record_id":"<urn:uuid:7e2f147c-31d8-4da4-84c7-0b4eb5d9a5c2>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00330.warc.gz"} |
SBG: Questions Per Skill
I gave my first sbg skill quiz over 4 concepts in both Algebra 1 and Geometry on Friday. I also gave my first vocab quiz. We went through all the vocab words from the first section together and
filled in each square of the Frayer model: Definition, Characteristic, Example, Nonexample. Then in the second section they looked up the definition in the book themselves and I mentioned the other 3
as we went along. Friday was the quiz and
is the example for Algebra 1 and
for Geometry. I reviewed a few minutes during our warm up and gave them 2-4 minutes to study on their own. The results were pretty terrible. The few that did well surprised me. One student missed
about 3 out of the 5 days of instruction but told another teacher that we underestimate his abilities because he studies a lot at home. Touché. Another girl assured me that she would flunk it and I
believe she has/d a learning disability or just low math scores. They were my top scorers. Yay!
I have to say I am pretty happy with the process even though I'm disappointed by the results. I like the Frayer model and I feel like the questions came straight from the information I gave them
without being matching or multiple choice. The question I gave about 'modify the expression so it can be evaluated' was too hard and most didn't know what to do. That one was my fault. I felt like
the others were within reach. My colleagues told me to stand my ground. It was only the first quiz and now they know what to expect. They will either rise to the challenge or fail. That sounds harsh
to me but I will be curious to see what happens at the end of week two.
Now my first sbg skill quiz...
Skill 1 and 2 Skill 3 and 4
Skill 1 and 2 Skill 3 and 4
I didn't really have a clue what I was doing. Still don't. I know you are supposed to make your assessments first but I just don't get that yet. How do I know what to assess when I haven't taught it
yet? I know, you're thinking "How do you know what to teach when you don't know what you're assessing yet?". Touché. I had 5 questions per 2 skills and only 4 skills on the quiz. So 10 questions.
This really makes no sense. I see that clearly now. I don't know how to grade these. If there are 3 questions for 1 skill, how do I give them one score for that skill? What about when there are only
2 questions per skill? Does it change now? Is each question worth a 2 to achieve the maximum score of 4? I haven't looked at them yet because I have no idea how to assess them.
I know other people use Marzano's 3 levels of questions per skill but I need someone to explain that to me. And still how do you grade that? How do those 3 pieces work together to create one score?
I'd like to ask one question per skill that is purely computation but then I don't know where they would get questions on a deeper level that synthesize, analyze, apply concepts, and you know,
actually matter.
On a positive note, I'd like to brag on myself for actually running out of time to finish the lesson.
More than once.
That NEVER happened last year. I went from preparing 10-12 slide Powerpoints to 25 or more. No, it is not all direct direct instruction.
Yes, I use pictures.
What I've started doing is giving them a worksheet as notes. We do examples together on the board. Then they do 3-5 at their seat and compare with their partners. Then we go back to the board. I have
them come to the board and draw examples or work out problems. While they are working I scout the room to
check for understanding
(oooh nice little sample of edu-jargon for ya) This way I am alternating their focus and while I am 'lecturing' they can actually pay attention instead of scrambling to write. (Yes I need more
inquiry and a variety of other strategies and skills. It's week two of my second year. Work with me here people.) Then at the end, I created an
overview sheet
for them to summarize the important ideas associated with that skill. Hopefully that will be like a quick and dirty study guide refresher with the accompanying worksheet to provide examples.
I would like to explain that I am doing vocabulary separate from sbg skills because
1. My admin asked me to
2. Test scores show our students don't understand standardized testing vocabulary
3. I know they have the skills but don't know what the question is asking
This is hard to integrate into geometry. In geometry, understanding the vocabulary is synonymous with understanding the concepts. Hopefully from the examples I linked to, you can affirm my question
creating skills in that the vocab quiz built more off of the technical, specific definition and the sbg skill quiz was more identifying, using, modeling, applying, the definition. Hopefully. After
some Twitter conversation, I realized that vocab can still fit into sbg. If we are about learning, then we are still about learning vocabulary. If we are about learning, and students can retake, then
students can retake vocabulary. If Algebra is broken up into specific concepts, it can also be broken up into specific vocabulary. And as long as I'm teaching it, they should be learning it, and I
can freely assess it. And re-assess it.
8 comments:
1. Hihi!
I too am trying to figure out the SBG thing, but in calculus, where some of the questions are more involved/take longer.
If I were doing it for Alg II, I would give a couple questions on the same skill, and then give them a score (based on some sort of rubric) from 0-4 or 0-5 or 0-whatever on that skill. Like,
looking at all the different problems on the same skill, I think this kid knows the material at a 2 level -- or something like that.
I think it's okay to think more holistically and less like we used to think when grading, because we don't need to parse things as much.
At least, that's how I'm thinking.
2. Can I make a recommendation? Instead of numbering your skills, name them. If I were to ask your students, "What are you learning?" I'd rather hear them say, "Systems of Equations" than "Skill 9."
3. Raymond,
I understand your recommendation and the students have a list of the skill and it's number. If you asked my students what skill we are working on, I doubt they would know the correct number
I did realize that I should label my quizzes with the skill name and even the entire skill spelled out.
4. i did some research on frayer's model.
although it's a good way to promote vocabulary, there's a couple of ways to improve it.
1. USAGE
in maths, knowing how to define something scientifically may not be as important as knowing what it can do.
what can phytagoras theorem do?
1. help u calculate one side of a triangle when two other sides are given.
2. know whether a triangle has a hypotenuse when given its three sides, by checking whether a^2+b^2 = c^2
2. RELATED TERM
related terms are the vocab that is related to the one u're learning.
in learning bout phytagoras, u need to know bout right angle, hypotenuse and etc.
this can promote the students to *pinpoint* the connection between one terminology with another, to improve their understanding and memory.
"in geometry, understanding the vocabulary is synonymous with understanding the concepts."
have u ever heard of natural vs artificial definition in cognitive psychology? most students know what something is ie they can point it in an image and what not, provide recognition, but they
have problems explaining things *linguistically* in detail as demanded by maths.
coz these kids are taught maths, not how to articulate.
so i guess, if u want ur kids to score the vocab test, u gotta teach them how to define things in maths.
this can usually be done by taking the defining characteristics and organizing them into a cohesive whole.
why? because a concept is basically defined by its characteristics. learn to explain the characteristic, and u'll learn how to define the concept.
u need an anchor example ie prototype that clearly illustrates a particular concept for this exercise to work.
1. get the anchor example ie prototype
2. analyze the characteristic
3. which characteristic can be categorized as the essential characteristics? list it.
4. from the essential characteristics, what are the defining characteristics?
5. write down the definition by your understanding of step4.
lemme summarize the flow
=> analyze characteristic
=> essential characteristic
=> defining characteristic
=> definition
this way, their definition is built from their understanding of the concept instead of being recalled from memory.
then u can give them other concept and example, and ask them to produce the definition. later on they can check the definition they generated and the one given in the book for proper feedback.
i havent tested the instruction i juz explained. it was kinda spontaneous. so i hope it helps :P
5. Anonymous8/31/10, 6:08PM
As far as writing the assessment first: You have your skills. Decide the "4" or "5" level of question you think they should be able to perform once they have received your instruction. You now
have your quiz (maybe the second or third edition). Now, make a couple of questions up that are easier for their first and second attempts. So, when you instruct, you teach them the skill in that
progression (easy to hard) to get them to the "5" level.
I think 2 questions per skill on a single quiz is enough to be able to tell what score to give them. It should be one score for that topic based on how they answered both questions combined. I
would stick to 2 questions every time.
The Marzano levels idea doesn't work if you do skills instead of topics. A topic is more encompassing and can have levels- so you can say "1" if they do this much; "2" if they do more; and "3" if
all of it is correct.
If you want higher levels on Bloom's- make that your "5" and work backwards from there.
Hope that helps.
6. Anonymous8/31/10, 6:11PM
Your questions and my answering them helped me solidify a lot of the same questions/issues I was trying to figure out...so thanks!
7. Matt,
Thanks for the comments. I am trying the Marzano method for now and we will see what happens. I know what I should do but I have trouble creating my assessments first and I'm never sure of what
they should be able to do. I just need someone to tell me what they should know and then sbg would be a whole lot easier! lol
8. I'm really glad you are blogging about this as my school year begins next week. Thanks for working out a lot of my concerns with SBG ahead of time. I can speak for many others when I say that
you've been incredibly valuable to all educators starting the SBG journey. Thanks!
As a math teacher trying SBG, I'd like a math assessment to involve computation, application, and analysis. I'd say that a student that can only successfully navigate the "computation" question
with not a high score on application/analysis is at the basic knowledge level at best. Where there will be struggles I'm guessing is those gray area students that are almost there on application/
analysis. It's obvious that a student that can do all 3 on a quiz successfully has mastered that skill at this point in time.
Computation and application are relatively easy types of questions to generate. Analysis is difficult, where you're asking a student to generate a result and make some inference/decision based on
their result. What's weird about this in Math is that the computation can be done incorrectly, but with an appropriate conclusion based on that result.
I'm sure you will get much better at making or assessments and grading them over time. Remember, you are an SBGBeginner, and have created a wiki that says just that.
Good luck, and I look forward to hearing more from you! | {"url":"https://misscalculate.blogspot.com/2010/08/sbg-questions-per-skill.html","timestamp":"2024-11-09T06:30:36Z","content_type":"text/html","content_length":"138525","record_id":"<urn:uuid:fe4508bd-cdaf-48f7-91da-52f5ea1e1516>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00602.warc.gz"} |
CBSE Class 8 Maths Notes Free PDF Download
A key to success can’t be found easily. The more and harder work is required to grasp it. Mostly, you thought when you looked at the names of toppers that how lucky they are. God bless them with
great knowledge and a sharp mind.
The fact is that success can achieve by anyone. Only you become unable to understand the basics. When the basics are not cleared, never consider you can be among the toppers. Even cramming the whole
syllabus will help you for a soon time. Within some days, you will forget your cramming portion.
But the concepts learned by understanding the basics will be with you forever. The fact is that a single concept uses in solving many problems. If you will understand a single concept, you will able
to sort out numerous problems.
Understanding concepts are essential in all subjects. But class 8th maths is a subject which is full of concepts, formulas, equations, and theorems. It needs a lot of thorough understanding and it is
possible only if you have a class 8 Maths notes.
These notes are well-structured that will let you know everything in a precise manner. In brief, these math notes have a full-fledged syllabus and you don’t need to use your maths notebooks to learn
any concept. All are given in simple steps that would help you to understand straightforwardly.
CBSE Class 8 Maths Revision Notes – Free PDF Download
You just need to start with the simple chapters as it will help you to grasp the basics before commencing the complex chapters. Maths of class 8 is not a tough subject. Only you make it tough by
ignoring the right way of understanding it. When you will have everything in a precise manner as all the irrelevant data has been deleted, then what will you need more.
You just need to use these notes every day simultaneously your daily work. In this way, you will become a scholar in maths within no time.
Why are class 8 maths notes imperative?
Needless to say, when you have everything is in ready form, you just need to start preparation. Let’s assume, tomorrow will be your Chapter 1 maths test. When you will start preparation, first you
will make notes covering all the concepts. Once done, you will check whether you did accurately or not.
Might be, while making your maths notes, you will find many difficulties. In brief, all this process will suck much of your time and you will leave with a minimum time to understand as well as
revising the concepts of chapter 1. In a case, if you would already have chapter 1 maths notes that are precise and accurate, you just need to understand the concepts.
You will get much of time to revise all the concepts. This way of preparing for your maths exam will save much of your time as well as other resources. Moreover, you will be able to score high marks
as well.
It is a pleasure to share that notes for class 8 maths are prepared by the veteran teachers. All the information is written over there are true as well as scoring. These are designed after analysing
the last 10 years records. The veteran teachers know which concept should be learned first and which may be left if you can’t do.
They know the ways to solving a particular problem. While making these notes, they split the complex answers into maximum steps, thereby, students can understand it properly. Moreover, these notes
have not prepared by a single teacher. Every chapter has been designed as per the professional.
Thus, it is a time to say goodbye to thinking that concepts of maths are never-ending. Just download these notes free of cost in PDF form. Once done, you need to start preparation only. Moreover, you
can also refer these to your classmates as these can download free of cost like you as well. | {"url":"https://www.ncertbooks.guru/cbse-class-8-maths-notes-pdf/","timestamp":"2024-11-03T15:05:47Z","content_type":"text/html","content_length":"80368","record_id":"<urn:uuid:893704e7-8619-4c0e-bd5b-1e0f2473ee0c>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00406.warc.gz"} |
Evidence for F(uzz) theory
We show that in the decoupling limit of an F-theory compactification, the internal directions of the seven-branes must wrap a non-commutative four-cycle S. We introduce a general method for obtaining
fuzzy geometric spaces via toric geometry and develop tools for engineering four-dimensional GUT models from this non-commutative setup. We obtain the chiral matter content and Yukawa couplings, and
show that the theory has a finite Kaluza-Klein spectrum. The value of 1/αgut is predicted to be equal to the number of fuzzy points on the internal four-cycle S. This relation puts a non-trivial
restriction on the space of gauge theories that can arise as a limit of F-theory. By viewing the seven-brane as tiled by D3-branes sitting at the N fuzzy points of the geometry we argue that this
theory admits a holographic dual description in the large N limit. We also entertain the possibility of constructing string models with large fuzzy extra dimensions, but with a high scale for quantum
All Science Journal Classification (ASJC) codes
• Nuclear and High Energy Physics
• F-theory
• Non-commutative geometry
Dive into the research topics of 'Evidence for F(uzz) theory'. Together they form a unique fingerprint. | {"url":"https://collaborate.princeton.edu/en/publications/evidence-for-fuzz-theory","timestamp":"2024-11-11T10:50:05Z","content_type":"text/html","content_length":"48975","record_id":"<urn:uuid:3f946c09-aae4-488d-999a-326cced38d3c>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00880.warc.gz"} |
Permutations and Combinations | Path2TJ
top of page
To enable every aspiring TJ student to present their best self
Check out our Feature in WTOP!
Welcome to the guide on Permutations and Combinations
Please note this topic is a big differentiator in getting a high score
The prerequisites are a thorough understanding of Factorials. If you do not have a good handle on these concepts, use the links below to explore these concepts and skills first. Though this topic is
often taught in schools, it is not always covered to the depth necessary to quickly answer questions. This lesson will focus on the specifics of permutations and combinations to the level needed for
the test.
Choose one or more of the following links to come up to the speed with the concepts:
Practice and check your understanding with the practice sheets available here:
• Counting, Permutations, Combinations (A question bank with any questions, do enough to make you feel comfortable)
• Permutations and Combinations worksheet (Practice problems to learn Permutations and Combinations. HIGHLY RECOMMENDED THAT YOU DO ALL OF THESE)
Now check if you are battle-ready by answering the following questions:
1. A letter lock Consists of four rings, each ring contains 9 digits. The lock can be opened by setting a four-digit code with the correct combination of the four rings. How many unsuccessful
attempts are possible in which the lock cannot be opened?
2. In how many ways can 11 identical books on English and 9 identical books on Math be placed in a row on a shelf so that two books on Math are not together?
3. There are 6 numbered chairs placed around a circular table. 3 boys and 3 girls want to sit on them in a way that no two boys nor girls sit next to each other. How many such arrangements are
4. A committee is to be formed in which 5 people are chosen from 6 men and 4 women is to be chosen. In how many ways can this committee be made if there can be, at most, 2 women? a. 186 b. 168 c. 136
d. 169 5. Allen and Mary host a TV show together in which one day N number of guests attend the show. Each guest shakes hands with every other guest and each guest also shakes hands with each host.
If there happens to be a total of 65 handshakes, find the number of guests that attended the show.
5. If 10 objects are arranged in a row, then the number of ways of selecting three of these objects such that no two of them are adjacent is:
6. 8 people are to be seated at Melville's Restaurant. The only table available is a round table. However, Jack and Jill insist on sitting next to each other. In how many ways can the 8 people be
7. Suppose the Lincoln-Douglas debate team consists of 10 seniors. Two of which will be randomly selected to become captains. However, one of the seniors is already captain of another club and is
ineligible. In how many ways can the two captains be chosen?
8. Oliver is applying for an internship and says, “I have a 15% chance of being accepted to Internship A and I have a 5% chance of getting accepted into both A and B. Assuming Oliver will be accepted
by at least one internship, what is the chance he is accepted for internship B but not A?
1. B 2. B 3. C 4. A 5. A 6. D 7. A 8. C
bottom of page | {"url":"https://www.path2tj.com/permutations-and-combinations","timestamp":"2024-11-08T21:45:48Z","content_type":"text/html","content_length":"654202","record_id":"<urn:uuid:e5356e71-1351-4d70-8b72-44e84b2deda0>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00110.warc.gz"} |
(x+4)(x-4) Simplify
Simplifying (x+4)(x-4)
The expression (x+4)(x-4) is a product of two binomials. We can simplify this expression using the difference of squares pattern.
What is the Difference of Squares Pattern?
The difference of squares pattern states that: (a + b)(a - b) = a² - b²
Applying the Pattern to (x+4)(x-4)
In our expression, we have:
Applying the difference of squares pattern, we get:
(x + 4)(x - 4) = x² - 4²
Simplifying further
We can further simplify the expression by squaring the constant term:
x² - 4² = x² - 16
Therefore, the simplified form of (x+4)(x-4) is x² - 16.
Key Takeaway
The difference of squares pattern is a useful tool for simplifying expressions of the form (a + b)(a - b). By recognizing this pattern, we can quickly and efficiently simplify expressions without
having to expand them using the distributive property. | {"url":"https://jasonbradley.me/page/(x%252B4)(x-4)-simplify","timestamp":"2024-11-03T03:17:16Z","content_type":"text/html","content_length":"57030","record_id":"<urn:uuid:5aeb1b9e-5e96-4d27-b3fd-016629b23c59>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00476.warc.gz"} |
25. In figure, M is mid-point of side CD of a parallelogram ABC... | Filo
Question asked by Filo student
25. In figure, is mid-point of side of a parallelogram . The line is drawn intersecting at and produced at . Prove that . [2009] [3M]
Not the question you're searching for?
+ Ask your question
Video solutions (1)
Learn from their 1-to-1 discussion with Filo tutors.
8 mins
Uploaded on: 11/26/2023
Was this solution helpful?
Found 7 tutors discussing this question
Discuss this question LIVE for FREE
13 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Practice more questions on All topics
View more
Students who ask this question also asked
View more
Stuck on the question or explanation?
Connect with our Mathematics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Question Text 25. In figure, is mid-point of side of a parallelogram . The line is drawn intersecting at and produced at . Prove that . [2009] [3M]
Updated On Nov 26, 2023
Topic All topics
Subject Mathematics
Class Class 11
Answer Type Video solution: 1
Upvotes 112
Avg. Video Duration 8 min | {"url":"https://askfilo.com/user-question-answers-mathematics/25-in-figure-is-mid-point-of-side-of-a-parallelogram-the-36313839313739","timestamp":"2024-11-08T23:39:56Z","content_type":"text/html","content_length":"260456","record_id":"<urn:uuid:d40de976-5cca-4e4d-b635-c9c941e03130>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00803.warc.gz"} |
Word processing and spread sheets
COURSE GOALS: The main goal of this course is to teach students the basic principles used in writing seminars and scientific and scholarly articles by using LaTex, MS Word, and also to use Excel
LEARNING OUTCOMES AT THE LEVEL OF THE PROGRAMME:
Upon completing the degree, students will be able to:
1. KNOWLEDGE AND UNDERSTANDING
1.5. describe the purpose and use of common software packages;
1.6. list and describe the methods for manipulating data, basic principles of databases and fundamental algorithms in programming;
2. APPLYING KNOWLEDGE AND UNDERSTANDING
2.11. plan and design efficient and appropriate assessment strategies and methods to evaluate and ensure the continuous development of pupils;
4. COMMUNICATION SKILLS
4.1. communicate effectively with pupils and colleagues;
4.2. present complex ideas clearly and concisely;
5. LEARNING SKILLS
5.1. search for and use professional literature as well as any other sources of relevant information;
LEARNING OUTCOMES SPECIFIC FOR THE COURSE:
Upon completing the course, students will be able to:
1. demonstrate knowledge of basic elements that comprise a scientific and/or scholar article;
2. use LaTex and MS Word for writing scientific and scholar articles;
3. us MS Excel to process data and plot charts and graphs;
4. find information on the internet that is needed and/or required to write a given article or perform a certain operational task on the internet.
COURSE DESCRIPTION:
The course description for every week is as follows:
1. Getting acquainted with Linux OS, and the way scientists communicate in their community by using scientific papers;
2. Basic components of a scientific or scholar work (title, authors, affiliation, abstract, text, images equations, literature). Getting acquainted with LaTeX.
3. Difference between form and content. Selection of the topic for the article (every student picks one topic), and beginning of work on the article. Learn how to write mathematical formulae in
4. Using and describing figures. Learning how to use different formats, jpg, png, eps, and convert from one to another. Work on the article.
5. Using literature in scientific work. Basic principles of choosing and citing literature. Work on the article.
6. Finalizing the article in LaTeX and grading the article.
7. Starting to work on the Windows OS. Repetition: basic constituents of scientific work. Work on the article in MS word.
8. Work on the article in MS Word, learn how to use equations in MS Word.
9. Work on the article in MS Word, learn how to use figures in MS Word.
10. Work on the article in MS Word, learn how to cite and write literature in MS Word.
11. Finalizing the article in MS Word and grading the article.
12. Starting with Excel. Basic operations with tables. Sum, conditional sum, basic formulae.
13. Plotting graphs and charts, histograms, lines, basic programming in Excel.
14. Fitting in Excel, exporting figures for LaTeX and MS Word, and using in texts.
15. Written test of knowledge of Excel; repetition and conclusion; emphasize importance of continuous learning and keeping track of the developments in modern software for writing and using
REQUIREMENTS FOR STUDENTS:
Students are required to do an article in MS Word and LaTeX, and finally to take the test in Excel.
GRADING AND ASSESSING THE WORK OF STUDENTS:
We grade article written in LaTeX, Word, and colloquium in Excel. Those three grades comprise the final grade. | {"url":"http://camen.pmf.unizg.hr/phy/en/course/otpt","timestamp":"2024-11-05T19:38:03Z","content_type":"text/html","content_length":"74872","record_id":"<urn:uuid:85c15d0f-04f6-4fc0-ae70-a0725fc0b40c>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00421.warc.gz"} |
MOO: VanEck Vectors Agribusiness ETF | Logical Invest
What do these metrics mean?
'Total return is the amount of value an investor earns from a security over a specific period, typically one year, when all distributions are reinvested. Total return is expressed as a percentage of
the amount invested. For example, a total return of 20% means the security increased by 20% of its original value due to a price increase, distribution of dividends (if a stock), coupons (if a bond)
or capital gains (if a fund). Total return is a strong measure of an investment’s overall performance.'
Using this definition on our asset we see for example:
• Looking at the total return of 15.1% in the last 5 years of VanEck Vectors Agribusiness ETF, we see it is relatively lower, thus worse in comparison to the benchmark SPY (109.2%)
• Looking at total return in of -22% in the period of the last 3 years, we see it is relatively lower, thus worse in comparison to SPY (33.3%).
'Compound annual growth rate (CAGR) is a business and investing specific term for the geometric progression ratio that provides a constant rate of return over the time period. CAGR is not an
accounting term, but it is often used to describe some element of the business, for example revenue, units delivered, registered users, etc. CAGR dampens the effect of volatility of periodic returns
that can render arithmetic means irrelevant. It is particularly useful to compare growth rates from various data sets of common domain such as revenue growth of companies in the same industry.'
Which means for our asset as example:
• Compared with the benchmark SPY (15.9%) in the period of the last 5 years, the compounded annual growth rate (CAGR) of 2.9% of VanEck Vectors Agribusiness ETF is smaller, thus worse.
• During the last 3 years, the annual performance (CAGR) is -7.9%, which is smaller, thus worse than the value of 10.1% from the benchmark.
'Volatility is a statistical measure of the dispersion of returns for a given security or market index. Volatility can either be measured by using the standard deviation or variance between returns
from that same security or market index. Commonly, the higher the volatility, the riskier the security. In the securities markets, volatility is often associated with big swings in either direction.
For example, when the stock market rises and falls more than one percent over a sustained period of time, it is called a 'volatile' market.'
Using this definition on our asset we see for example:
• Compared with the benchmark SPY (20.9%) in the period of the last 5 years, the 30 days standard deviation of 21.4% of VanEck Vectors Agribusiness ETF is larger, thus worse.
• Looking at historical 30 days volatility in of 18.1% in the period of the last 3 years, we see it is relatively larger, thus worse in comparison to SPY (17.6%).
'Downside risk is the financial risk associated with losses. That is, it is the risk of the actual return being below the expected return, or the uncertainty about the magnitude of that difference.
Risk measures typically quantify the downside risk, whereas the standard deviation (an example of a deviation risk measure) measures both the upside and downside risk. Specifically, downside risk in
our definition is the semi-deviation, that is the standard deviation of all negative returns.'
Applying this definition to our asset in some examples:
• Looking at the downside risk of 15.7% in the last 5 years of VanEck Vectors Agribusiness ETF, we see it is relatively larger, thus worse in comparison to the benchmark SPY (14.9%)
• Compared with SPY (12.3%) in the period of the last 3 years, the downside volatility of 13.2% is higher, thus worse.
'The Sharpe ratio was developed by Nobel laureate William F. Sharpe, and is used to help investors understand the return of an investment compared to its risk. The ratio is the average return earned
in excess of the risk-free rate per unit of volatility or total risk. Subtracting the risk-free rate from the mean return allows an investor to better isolate the profits associated with risk-taking
activities. One intuition of this calculation is that a portfolio engaging in 'zero risk' investments, such as the purchase of U.S. Treasury bills (for which the expected return is the risk-free
rate), has a Sharpe ratio of exactly zero. Generally, the greater the value of the Sharpe ratio, the more attractive the risk-adjusted return.'
Applying this definition to our asset in some examples:
• Compared with the benchmark SPY (0.64) in the period of the last 5 years, the Sharpe Ratio of 0.02 of VanEck Vectors Agribusiness ETF is lower, thus worse.
• Compared with SPY (0.43) in the period of the last 3 years, the risk / return profile (Sharpe) of -0.58 is lower, thus worse.
'The Sortino ratio, a variation of the Sharpe ratio only factors in the downside, or negative volatility, rather than the total volatility used in calculating the Sharpe ratio. The theory behind the
Sortino variation is that upside volatility is a plus for the investment, and it, therefore, should not be included in the risk calculation. Therefore, the Sortino ratio takes upside volatility out
of the equation and uses only the downside standard deviation in its calculation instead of the total standard deviation that is used in calculating the Sharpe ratio.'
Applying this definition to our asset in some examples:
• Compared with the benchmark SPY (0.9) in the period of the last 5 years, the downside risk / excess return profile of 0.02 of VanEck Vectors Agribusiness ETF is lower, thus worse.
• Looking at downside risk / excess return profile in of -0.79 in the period of the last 3 years, we see it is relatively lower, thus worse in comparison to SPY (0.62).
'The Ulcer Index is a technical indicator that measures downside risk, in terms of both the depth and duration of price declines. The index increases in value as the price moves farther away from a
recent high and falls as the price rises to new highs. The indicator is usually calculated over a 14-day period, with the Ulcer Index showing the percentage drawdown a trader can expect from the high
over that period. The greater the value of the Ulcer Index, the longer it takes for a stock to get back to the former high.'
Applying this definition to our asset in some examples:
• The Ulcer Index over 5 years of VanEck Vectors Agribusiness ETF is 18 , which is higher, thus worse compared to the benchmark SPY (9.32 ) in the same period.
• Compared with SPY (10 ) in the period of the last 3 years, the Downside risk index of 22 is larger, thus worse.
'A maximum drawdown is the maximum loss from a peak to a trough of a portfolio, before a new peak is attained. Maximum Drawdown is an indicator of downside risk over a specified time period. It can
be used both as a stand-alone measure or as an input into other metrics such as 'Return over Maximum Drawdown' and the Calmar Ratio. Maximum Drawdown is expressed in percentage terms.'
Applying this definition to our asset in some examples:
• Compared with the benchmark SPY (-33.7 days) in the period of the last 5 years, the maximum reduction from previous high of -36.8 days of VanEck Vectors Agribusiness ETF is lower, thus worse.
• During the last 3 years, the maximum DrawDown is -33.5 days, which is lower, thus worse than the value of -24.5 days from the benchmark.
'The Drawdown Duration is the length of any peak to peak period, or the time between new equity highs. The Max Drawdown Duration is the worst (the maximum/longest) amount of time an investment has
seen between peaks (equity highs) in days.'
Using this definition on our asset we see for example:
• The maximum days under water over 5 years of VanEck Vectors Agribusiness ETF is 643 days, which is greater, thus worse compared to the benchmark SPY (488 days) in the same period.
• Compared with SPY (488 days) in the period of the last 3 years, the maximum days below previous high of 643 days is greater, thus worse.
'The Drawdown Duration is the length of any peak to peak period, or the time between new equity highs. The Avg Drawdown Duration is the average amount of time an investment has seen between peaks
(equity highs), or in other terms the average of time under water of all drawdowns. So in contrast to the Maximum duration it does not measure only one drawdown event but calculates the average of
Using this definition on our asset we see for example:
• Looking at the average days below previous high of 194 days in the last 5 years of VanEck Vectors Agribusiness ETF, we see it is relatively greater, thus worse in comparison to the benchmark SPY
(123 days)
• Compared with SPY (176 days) in the period of the last 3 years, the average time in days below previous high water mark of 282 days is higher, thus worse. | {"url":"https://logical-invest.com/app/etf/moo/vaneck-vectors-agribusiness-etf","timestamp":"2024-11-11T07:20:05Z","content_type":"text/html","content_length":"59957","record_id":"<urn:uuid:09ed3ee2-92b9-496c-92cc-e30e99adeb2c>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00260.warc.gz"} |
Spring Potential Energy (Elastic Potential Energy)
Definition of Spring Potential Energy (Elastic Potential Energy)
If you pull on a spring and stretch it, then you do work. That is because you are applying a force over a displacement. Your pull is the force and the amount that you stretch the spring is the
Since work is the transfer of energy, we must understand where the energy was transferred. We say that the energy was transferred into the spring. The work becomes stored energy in the spring. The
work becomes potential energy in the spring.
A spring can be stretched or compressed. The same mathematics holds for stretching as for compressing springs. We will be primarily discussing energy as it is stored in a spring when it is stretched
here; however, the same physics would apply for a spring when it is compressed.
As you have probably noticed from the above header, spring potential energy is also called elastic potential energy.
Linear Springs
This discussion will be about linear springs, the simplest type of spring.
A linear spring is a spring where the force that stretches the spring is in direct proportion to the amount of stretch. That is, the force vs. extension graph forms a straight, positively sloped line
that passes through the origin, like this:
The slope of this graph is called the spring constant and is symbolized by the letter k. The spring constant in the above graph is 20 Newtons per meter, or 20 N/m. This means that you would need 20
Newtons of force to stretch the spring one meter, or 2 Newtons of force to stretch the spring 0.1 meter, and so on.
Work Done Stretching The Spring
Let us say that in this discussion a force of F is necessary to stretch the spring to an extension of x.
We see below that this force F and the related extension x have been marked on the graph. Also detailed is the area under the graph for this situation.
The area under this graph of force vs. extension is in Joules, units of energy. This is because the area is in units of Newtons (vertically) times meters (horizontally).
Do not forget that units of work are units of force times units of displacement, or units of Newtons times units of meters. And units of work are units of the transfer of energy, that is, they are
units of energy, or Joules.
So, the area under this graph symbolizes energy. This area is the work done to stretch the spring.
Now, work is the transfer of energy. After the spring has been stretched, and work has been done, to where has the energy been transferred? We say that it has become potential energy in the spring.
That is, the energy has been stored in the spring. Therefore, the amount of energy symbolized by the area under the above graph is the energy that has been stored in the spring. It is the potential
energy of the spring.
This area can be calculated. It is shaped like a triangle; so, its area is one half times its height times its base. We have:
Area under graph = (0.5)(F)(x)
This area is the energy stored in the spring. The symbol for the energy stored in the spring could be U[s]. The 'U' stands for potential energy and the subscript 's' stands for spring. So, now we
U[s]= (0.5)(F)(x)
The spring is a linear spring where the stretching force is directly proportional to the extension, as mentioned above. This, again, can be stated as:
F = kx
Placing this substitution for F in the above formula for U[s] we get:
U[s]= (0.5)(kx)(x)
Removing the parentheses and noticing that x times x is x^2, we have:
U[s]= 0.5kx^2
This last formula reads: The potential energy of a spring, or the energy stored in a spring, equals one half times the spring constant times the square of the extension. This is how to calculate how
much energy is stored in a spring.
Work Done Compressing The Spring
Some linear springs store energy through compression, rather than extension. For example, when you compress the spring in a common jack-in-the-box toy, you do work on the spring, and that work is
stored as energy in the spring. Later, when the jack-in-the-box pops, that energy comes out of storage.
The formula for the amount of energy stored in a linear spring due to compression is the same as the one for extension:
U[s]= 0.5kx^2
Questions about Spring Potential Energy
Here are a few problems over the above equations. | {"url":"http://zonalandeducation.com/mstm/physics/mechanics/energy/springPotentialEnergy/springPotentialEnergy.html","timestamp":"2024-11-08T15:18:21Z","content_type":"text/html","content_length":"15432","record_id":"<urn:uuid:7f931e87-9d41-4d60-b838-44e8c6175d2a>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00809.warc.gz"} |
Machine learning: use matlab to realize K-means algorithm to complete clustering and image compression
Principle here , we can see from the principle that the K-means algorithm mainly has three parts - random initialization, clustering division and moving aggregation.
Random initialization
Using the randperm function, the 1 ∼ m 1\sim m The sequence of 1 ∼ m is randomly disturbed before taking out K K K as initial gathering points.
function centroids = kMeansInitCentroids(X, K)
%KMEANSINITCENTROIDS This function initializes K centroids that are to be
%used in K-Means on the dataset X
% centroids = KMEANSINITCENTROIDS(X, K) returns K initial centroids to be
% used with the K-Means on the dataset X
% You should return this values correctly
centroids = zeros(K, size(X, 2));
% ====================== YOUR CODE HERE ======================
% Instructions: You should set centroids to randomly chosen examples from
% the dataset X
% Initialize the centroids to be random examples
%Randomly reorder the indicies of examples
randidx = randperm(size(X,1));
% Take the first K examples
centroids = X(randidx(1:K),:);
% =============================================================
Cluster partition
First assume that all points belong to class 1, and then try to update its category from class 2 to K for each point. Matrix multiplication can be used to calculate the distance. Note that the
coordinates of sample points and aggregation points here are row vectors.
function idx = findClosestCentroids(X, centroids)
%FINDCLOSESTCENTROIDS computes the centroid memberships for every example
% idx = FINDCLOSESTCENTROIDS (X, centroids) returns the closest centroids
% in idx for a dataset X where each row is a single example. idx = m x 1
% vector of centroid assignments (i.e. each entry in range [1..K])
% Set K
K = size(centroids, 1);
% You need to return the following variables correctly.
idx = ones(size(X,1), 1);
% ====================== YOUR CODE HERE ======================
% Instructions: Go over every example, find its closest centroid, and store
% the index inside idx at the appropriate location.
% Concretely, idx(i) should contain the index of the centroid
% closest to example i. Hence, it should be a value in the
% range 1..K
% Note: You can use a for-loop over the examples to compute this.
for i=1:size(X,1)
for k=2:K
if ((x-centroids(k,:))*(x-centroids(k,:))')<((x-centroids(idx(i),:))*(x-centroids(idx(i),:))')
% =============================================================
Mobile gathering point
Update the coordinates of the aggregation point to the average value of other points in the cluster where it is located, and add up the coordinates of other points and divide them by the number of
function centroids = computeCentroids(X, idx, K)
%COMPUTECENTROIDS returns the new centroids by computing the means of the
%data points assigned to each centroid.
% centroids = COMPUTECENTROIDS(X, idx, K) returns the new centroids by
% computing the means of the data points assigned to each centroid. It is
% given a dataset X where each row is a single data point, a vector
% idx of centroid assignments (i.e. each entry in range [1..K]) for each
% example, and K, the number of centroids. You should return a matrix
% centroids, where each row of centroids is the mean of the data points
% assigned to it.
% Useful variables
[m n] = size(X);
% You need to return the following variables correctly.
centroids = zeros(K, n);
% ====================== YOUR CODE HERE ======================
% Instructions: Go over every centroid and compute mean of all points that
% belong to it. Concretely, the row vector centroids(i, :)
% should contain the mean of the data points assigned to
% centroid i.
% Note: You can use a for-loop over the centroids to compute this.
for i=1:m
% =============================================================
Solving clustering
In order to better show the solution process, there is no random initialization function runkmeans M provided by Wu Enda:
% Load an example dataset
% Settings for running K-Means
max_iters = 10;
For consistency, here we set centroids to specific values but in practice you want to generate them automatically, such as by setting them to be random examples (as can be seen in kMeansInitCentroids).
initial_centroids = [3 3; 6 2; 8 5];
% Run K-Means algorithm. The 'true' at the end tells our function to plot the progress of K-Means
figure('visible','on'); hold on;
plotProgresskMeans(X, initial_centroids, initial_centroids, idx, K, 1);
xlabel('Press ENTER in command window to advance','FontWeight','bold','FontSize',14)
[~, ~] = runkMeans(X, initial_centroids, max_iters, true);
set(gcf,'visible','off'); hold off;
function [centroids, idx] = runkMeans(X, initial_centroids, ...
max_iters, plot_progress)
%RUNKMEANS runs the K-Means algorithm on data matrix X, where each row of X
%is a single example
% [centroids, idx] = RUNKMEANS(X, initial_centroids, max_iters, ...
% plot_progress) runs the K-Means algorithm on data matrix X, where each
% row of X is a single example. It uses initial_centroids used as the
% initial centroids. max_iters specifies the total number of interactions
% of K-Means to execute. plot_progress is a true/false flag that
% indicates if the function should also plot its progress as the
% learning happens. This is set to false by default. runkMeans returns
% centroids, a Kxn matrix of the computed centroids and idx, a m x 1
% vector of centroid assignments (i.e. each entry in range [1..K])
% Set default value for plot progress
if ~exist('plot_progress', 'var') || isempty(plot_progress)
plot_progress = false;
% Plot the data if we are plotting progress
if plot_progress
hold on;
% Initialize values
[m n] = size(X);
K = size(initial_centroids, 1);
centroids = initial_centroids;
previous_centroids = centroids;
idx = zeros(m, 1);
% Run K-Means
for i=1:max_iters
% Output progress
fprintf('K-Means iteration %d/%d...\n', i, max_iters);
if exist('OCTAVE_VERSION')
% For each example in X, assign it to the closest centroid
idx = findClosestCentroids(X, centroids);
% Optionally, plot progress here
if plot_progress
plotProgresskMeans(X, centroids, previous_centroids, idx, K, i);
previous_centroids = centroids;
fprintf('Press enter to continue.\n');
% Given the memberships, compute new centroids
centroids = computeCentroids(X, idx, K);
% Hold off if we are plotting progress
if plot_progress
hold off;
Running in matlab, you can see the continuous movement of aggregation points and the continuous updating of the colors of other points:
Picture compression
A complete picture if its pixels are N × M N\times M N × M. Each pixel is represented in RGB, so one point needs to store three 0 ∼ 255 0\sim 255 0 ∼ 255 unsigned integer, consumption 3 × 8 = 24 3\
times 8=24 three × 8 = 24 bytes, the whole picture needs N × M × 24 N\times M\times 24 N × M × 24 bytes. If the color of the picture is compressed, 16 clusters are found, and the average color of
each cluster is used to replace the original color, then each pixel only needs to store the color index 1 ∼ 16 1\sim 16 1∼16( 0 ∼ 15 0\sim 15 0 ∼ 15), only 4 4 4 bytes, and then store the divided 16
16 The total space occupied by the color information in 16 is 16 × 24 + N × M × 4 16\times 24+N\times M\times 4 sixteen × 24+N × M × 4. Basically, the picture size can be compressed to the original
size 1 6 \frac{1}{6} 61.
First, use the imread function to read in the picture. At this time, the picture is stored in the N × M × 3 N\times M\times 3 N × M × In the three-dimensional matrix of 3. Then use reshape to convert
it into ( N × M ) × 3 (N\times M)\times3 (N × M) × 3, which has become a familiar pattern. Call the K-means algorithm to find the cluster and replace the color of the whole cluster with the color of
the aggregation point:
% Load an image of a bird
A = double(imread('bird_small.png'));
A = A / 255; % Divide by 255 so that all values are in the range 0 - 1
% Size of the image
img_size = size(A);
X = reshape(A, img_size(1) * img_size(2), 3);
K = 16;
max_iters = 10;
initial_centroids = kMeansInitCentroids(X, K);
% Run K-Means
[centroids, ~] = runkMeans(X, initial_centroids, max_iters);
% Find closest cluster members
idx = findClosestCentroids(X, centroids);
X_recovered = centroids(idx,:);
% Reshape the recovered image into proper dimensions
X_recovered = reshape(X_recovered, img_size(1), img_size(2), 3);
% Display the original image
subplot(1, 2, 1);
axis square
% Display compressed image side by side
subplot(1, 2, 2);
title(sprintf('Compressed, with %d colors.', K));
axis square
The effect of compression is as follows: | {"url":"https://programming.vip/docs/61fefc043e0fa.html","timestamp":"2024-11-09T02:58:08Z","content_type":"text/html","content_length":"18377","record_id":"<urn:uuid:eadb46dc-052e-44dd-94bc-004aa5eb700e>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00296.warc.gz"} |
next → ← prev
Introduction to greedy algorithm:
A greedy algorithm is a simple and intuitive strategy for solving optimization problems. It is an algorithmic paradigm that follows the problem-solving heuristic of making the locally optimal choice
at each stage with the hope of finding a global optimum. The idea is to make the best possible decision at each step without considering the consequences of that decision on future steps.
The key characteristic of greedy algorithms is that they make a series of choices by selecting the best available option at each step without revisiting or undoing previous choices. This approach
often leads to a suboptimal solution, but in many cases, it provides an acceptable solution and has the advantage of being computationally efficient.
Here are some common characteristics of problems that can be solved using greedy algorithms:
1. Optimal Substructure: The solution to the problem can be constructed from optimal solutions to sub problems.
2. Greedy Choice Property: A global optimum can be arrived at by selecting a local optimum (the best available option at the current step).
While greedy algorithms are relatively straightforward and efficient, they may only sometimes guarantee the best solution for some problems. The lack of backtracking and consideration of future
consequences can lead to suboptimal solutions. Therefore, the choice of using a greedy algorithm depends on the specific problem at hand and whether the greedy approach is suitable for that scenario.
Examples of problems often solved using greedy algorithms include:
- Activity Selection: Given a set of activities with start and finish times, find the maximum number of non-overlapping activities that can be performed.
- Fractional Knapsack Problem: Given a set of items with weights and values, determine the maximum value that can be obtained by putting a fraction of each item into a knapsack of limited capacity.
- Dijkstra's Shortest Path Algorithm: Find the shortest path from a source vertex to all other vertices in a weighted graph.
- Huffman Coding: Construct an optimal prefix-free binary tree for encoding characters based on their frequencies.
It's important to note that while greedy algorithms are powerful and efficient in certain scenarios, there may be better choices for some types of optimization problems. Thorough analysis and
understanding of the problem's characteristics are crucial before applying a greedy approach.
Introduction to divide and conquer algorithm:
A divide-and-conquer algorithm is a problem-solving strategy that breaks a problem into smaller subproblems, solves them independently, and then combines their solutions to solve the original
problem. The term "divide and conquer" encapsulates the core idea of breaking down a complex problem into simpler, more manageable parts.
The typical structure of a divide-and-conquer algorithm involves three steps:
1. Divide: Break the problem into smaller, more manageable subproblems. This step continues recursively until the subproblems become simple enough to be solved directly.
2. Conquer: Solve the subproblems. This is the base case of the recursion where the problem is small enough that it can be solved directly without further subdivision.
3. Combine: Merge the solutions of the subproblems to obtain the solution for the original problem.
The divide-and-conquer approach is often implemented using recursion, where the algorithm calls itself to solve the subproblems. Each recursive call works on a smaller instance of the problem until
the base case is reached, at which point the solutions are combined to build the result.
Some classic examples of algorithms that use the divide-and-conquer paradigm include:
1. Merge Sort: It divides the array into two halves, recursively sorts each half, and then merges the sorted halves to obtain a fully sorted array.
2. Quick Sort: It chooses a pivot element, partitions the array into two subarrays based on the pivot, recursively sorts each subarray, and then combines them.
3. Binary Search: Given a sorted array, it divides the array in half, compares the target value to the middle element, and continues the Search in the appropriate subarray.
4. Strassen's Matrix Multiplication: It splits the matrices into submatrices, recursively computes products, and combines them using additions and subtractions.
The divide-and-conquer paradigm is powerful because it often leads to algorithms with efficient time complexity. However, it's essential to carefully design the algorithm to ensure that the
subproblems are disjoint and that the combination step is efficient. Additionally, the recursive nature of divide-and-conquer algorithms may result in extra space requirements due to the function
call stack.
In summary, divide-and-conquer algorithms provide an effective way to solve complex problems by breaking them down into simpler subproblems, solving them independently, and then combining their
Types of Greedy algorithms:
Example algorithms for greedy Search here are a few:
• Greedy Algorithm for Minimum Spanning Tree (MST):
□ Problem: Given a connected, undirected graph with weighted edges, find a minimum spanning tree (MST).
□ Algorithm: Start with an arbitrary vertex and greedily choose the edge with the smallest weight that connects a vertex in the MST to a vertex outside the MST. Repeat until all vertices are
included in the MST.
• Fractional Knapsack Problem:
□ Problem: Given a set of items, each with a weight and a value, determine the maximum value of items to include in a knapsack of limited capacity.
□ Algorithm: Greedily select items based on the ratio of value to weight. Choose items with the highest value-to-weight ratio until the knapsack is full.
Dijkstra's Shortest Path Algorithm:
• Problem: Given a graph with weighted edges, find the shortest path from a source vertex to all other vertices.
• Algorithm: Maintain a set of vertices whose shortest distance from the source is known. At each step, choose the vertex with the smallest known distance, relax its neighbors' distances, and add
it to the set. Repeat until all vertices are included.
Huffman Coding:
• Problem: Given a set of characters and their frequencies, find a binary encoding that minimizes the total length of the encoded message.
• Algorithm: Build a binary tree by repeatedly combining the two characters with the lowest frequencies. Assign binary codes to the edges based on the path in the tree.
Activity Selection Problem:
• Problem: Given a set of activities with start and finish times, find the maximum number of non-overlapping activities that can be performed.
• Algorithm: Sort the activities based on their finish times. Greedily select the activities with the earliest finish times that do not overlap with the previously selected ones.
These are just a few examples of greedy algorithms. Greedy algorithms make locally optimal choices at each stage with the hope of finding a global optimum. It's important to note that while greedy
algorithms are easy to design and implement, they may only sometimes guarantee an optimal solution for every problem.
Implementation of the greedy algorithm by using an example of the Huffman algorithm.
The provided Python code implements the Huffman coding algorithm for lossless data compression. The `build_huffman_tree` function constructs a binary tree representing character frequencies. Huffman
codes, assigning shorter codes to more frequent characters, are generated using the `build_huffman_codes` function. The `huffman_encoding` function encodes input data using these Huffman codes,
producing a compressed binary representation. The encoded data, along with the Huffman tree, can later be used to decode the original data. This algorithm efficiently compresses data by representing
frequent characters with shorter codes, resulting in reduced overall bit usage for encoding. The implementation uses priority queues and binary trees to manage the construction of the Huffman tree
and the generation of Huffman codes.
Applications or examples of divide and conquer algorithm:
Divide and Conquer is a powerful problem-solving paradigm applied across various domains. In the realm of computer science and algorithms, classic examples include the quick sort and merge sort
algorithms. Quick sort efficiently sorts an array by partitioning it into smaller sub arrays, sorting each sub array, and combining the results. Merge sort divides an array into halves; recursively
sorts each half, and then merges them to produce a fully sorted array. Another notable example is the binary search algorithm, which repeatedly divides a sorted array to locate a specific element
Outside of computer science, divide and conquer principles are applied in diverse fields. In robotics, path-planning algorithms often use a divide-and-conquer strategy to navigate complex
environments. Additionally, in mathematical problem-solving, algorithms like the fast Fourier transform (FFT) use divide and conquer for efficient signal processing. This versatile approach continues
to find applications in solving intricate problems by breaking them down into more manageable components.
Implementation of divide and conquer algorithm.
The provided Python code implements the binary search algorithm, a classic example of a divide-and-conquer strategy. The function `binary_search` takes a sorted array (`arr`) and a target element
(`target`) as parameters. It initializes two pointers (`low` and `high`) at the beginning and end of the array, respectively. It then iteratively calculates the middle index and compares the
corresponding element to the target. If the middle element is equal to the target, the index is returned. Otherwise, the search space is halved by adjusting the pointers based on the comparison
results. This process continues until the target is found or the search space is empty, at which point -1 is returned. The example usage demonstrates searching for an element in a sorted array,
showcasing the algorithm's efficiency in logarithmic time complexity.
Divide and Conquer vs Greedy approach.
1. D&C approach is a solution-based technique, whereas greedy is an optimization approach.
2. D&C can work efficiently on low-complexity problems such as sorting, whereas the greedy approach becomes effective when the complexity of the problems increases, for example, the Fractional
Knapsack problem and Huffman coding compression.
3. D&C implements recursion in order to achieve a solution, whereas greedy takes an iterative approach to solve the sub-problems.
4. D&C uses the top-bottom approach, i.e., it breaks the larger problem into smaller sub-problems and then solves them to build a solution. Greedy uses the bottom-top approach, where it solves the
sub-problems first, which will lead to an optimal solution.
5. The D&C approach is recursive, so it is slower than the iterative greedy approach.
Divide and Conquer and Greedy are two prominent algorithmic paradigms with distinct strategies for problem-solving.
Divide and Conquer is characterized by breaking down a problem into smaller, more manageable sub problems, solving them recursively, and combining their solutions to obtain the overall result. This
paradigm is well-suited for problems that can be naturally divided into independent subparts, such as the classic example of quicksort for sorting arrays. It often results in algorithms with a
logarithmic time complexity.
On the other hand, the Greedy approach involves making locally optimal choices at each step with the hope that they will lead to a globally optimal solution. Greedy algorithms are efficient and
simple but may only guarantee the best possible solution in some cases. Examples include Dijkstra's algorithm for finding the shortest path in a graph and Huffman coding for data compression.
While Divide and Conquer emphasizes problem decomposition and recursive solving, Greedy focuses on making the best choice at each step without reconsidering previous decisions. The selection between
these approaches depends on the problem at hand; Divide and Conquer tend to be more versatile for a broader range of problems, while Greedy is often chosen for its simplicity and efficiency in
specific scenarios.
In conclusion, Divide and Conquer and Greedy algorithms are distinct problem-solving strategies with differing philosophies. Divide and conquer excels in problems where breaking them into independent
subproblems and solving them recursively leads to an efficient solution, often achieving logarithmic time complexity. Greedy algorithms, in contrast, make locally optimal choices at each step, aiming
for a globally optimal solution. While Greedy algorithms are simpler and more intuitive, they may only guarantee the optimal solution in some cases. The choice between these paradigms depends on the
nature of the problem at hand, with Divide and Conquer offering more versatility across a broader range of scenarios and Greedy providing efficiency in specific, well-defined contexts.
Divide and Conquer and Greedy are both widely used algorithm paradigms that find their uses in various problem statements. We cannot say which one is better than the other since it is entirely
dependent on the problem.
← prev next → | {"url":"https://www.javatpoint.com/difference-between-greedy-and-divide-and-conquer","timestamp":"2024-11-04T07:42:28Z","content_type":"text/html","content_length":"130907","record_id":"<urn:uuid:1e6d1625-a80d-4e71-8f0f-9b9fd0470b27>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00810.warc.gz"} |
GRE Quantitative Practice Test (Example Questions)
GRE Quantitative Reasoning Practice Test
If you need help studying for the GRE Quantitative Reasoning test or just want some more information about what the test is like, you’ve come to the right place!
Click below to take a free GRE Quantitative Reasoning practice test.
What’s On the GRE Quantitative Reasoning Test?
The Quantitative Reasoning questions are also grouped into two sections. Each section has a different number of questions and a different time limit:
The good news is that you are allowed to use an on-screen calculator for the entirety of the Quantitative Reasoning section! You likely won’t need it for most of the questions, but rest assured that
it’s there if you do want to use it.
There are three different question types to test your ability to solve problems dealing with number properties and geometric figures.
Quantitative Comparison
These questions will present you with two quantities, and you’ll be asked to determine which quantity is greater. In some cases, the quantities will be equal.
Here’s an example question:
1. $0<y<1$
Quantity A Quantity B
${y}^{2}$ $y$
1. Quantity A is greater.
2. Quantity B is greater.
3. The two quantities are equal.
4. The relationship cannot be determined from the information given.
The correct answer in this case is B. When a positive number less than 1 is squared, the result is smaller than the original number. For example if $y$ = 0.5 then ${y}^{2}$ = 0.25.
These are your standard multiple-choice questions. You’ll be given a question and a list of five answer choices, only one of which is the correct answer.
Here’s an example question:
1. A rectangle has a length of 12 and a width of 5. What is the length of its diagonal?
1. 10
2. 12
3. 13
4. 15
5. 17
The correct answer in this case is C. The diagonal of a rectangle divides it into two right-angled triangles. You can use the Pythagorean theorem (${a}^{2}+{b}^{2}={c}^{2}$) to find the answer,
making $a$ = 12 and $b$ = 5:
Numeric Entry
For these questions, you’ll have to input your answer into one or more boxes. If your answer is a decimal or an integer, it goes into a single box. If your answer is a fraction, the numerator goes in
one box and the denominator in another box.
Here’s an example question:
1. A bag contains 5 red marbles, 3 blue marbles, and 2 green marbles. What is the probability of randomly selecting a blue marble?
The correct answer in this case is $\frac{3}{10}$, which means you would type 3 into the top box and 10 into the bottom box. Since there are 10 total marbles and 3 blue marbles, you can use $\frac{\
text{Favorable outcomes}}{\text{Total possible outcomes}}$ to get the correct answer: $\frac{3}{10}$.
Other GRE Practice Tests
If you need some extra practice in a another area of the GRE, click below to get started!
Online GRE Prep Course
If you want to be fully prepared, Mometrix offers an online GRE prep course. The course is designed to provide you with any and every resource you might want while studying. The GRE course includes:
• Review Lessons Covering Every Topic
• 600+ GRE Practice Questions
• More than 500 Digital Flashcards
• Over 240 Instructional Videos
• Money-back Guarantee
• Free Mobile Access
• and More!
The GRE prep course is designed to help any learner get everything they need to prepare for their GRE exam. Click below to check it out! | {"url":"https://www.testprepreview.com/gre/gre-quantitative-practice-test.htm","timestamp":"2024-11-02T14:51:40Z","content_type":"text/html","content_length":"53045","record_id":"<urn:uuid:f5c47c36-69dc-4d9e-8295-37a21c0fab38>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00207.warc.gz"} |
Electric Motor Efficiency: Let's Calculate and Understand the Percentage Efficiency of an Electric Motor!
What is the useful power out of an electric motor and the total power into the motor?
The useful power out of an electric motor is 20W and the total power into the motor is 80W. What is the percentage efficiency of the motor?
Calculation and Explanation:
The efficiency of an electric motor can be calculated as the ratio of the output power to the input power, multiplied by 100 to get the percentage.
When it comes to the electric motor in question, we have the following data:
Output Power: 20W
Total Input Power: 80W
To calculate the efficiency, we can use the formula:
Efficiency = (Output Power / Input Power) * 100
Substitute the values we have:
Efficiency = (20W / 80W) * 100 = 25%
Therefore, the efficiency of this electric motor is 25%, meaning that 25% of the electrical power is converted into useful mechanical energy, while the rest is lost as heat. | {"url":"https://tutdenver.com/physics/electric-motor-efficiency-let-s-calculate-and-understand-the-percentage-efficiency-of-an-electric-motor.html","timestamp":"2024-11-10T21:04:39Z","content_type":"text/html","content_length":"21945","record_id":"<urn:uuid:a0f9a7cb-3cdc-48cf-bbbb-dcec33c7206b>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00586.warc.gz"} |
Optimal Risky Portfolio: 5 Things You Must Understand
The optimal risky portfolio (also called tangency portfolio) is a portfolio composed of risky assets in which the amount you invest in each asset will make it so the portfolio gives you the highest
possible return for a given level of risk.
Want to better understand this? Learn these 5 things:
#1) Efficient Frontier
A portfolio frontier (or minimum variance frontier) is a graph that maps out all possible portfolios with different asset weight combinations with minimum risk (variance, or standard deviation) for
all given levels of expected return.
The risk of the portfolio is graphed on the x-axis, and the expected return of the portfolio is on the y-axis.
Minimum Variance Frontier
As an investor, you like a higher return, but dislike a higher risk.
This is where the concept of an efficient frontier comes in. The efficient frontier contains the portfolios that dominate all others on the portfolio frontier. It is the upper half of the hyperbola
in the graph above. The upward-sloping portion of the portfolio frontier.
What does this mean? It means they give you higher returns for the same level of risk. In other words, the mean of the returns of these portfolios is higher, while the variance is lower.
No rational mean-variance investor chooses to hold a portfolio not located on the efficient frontier.
The portfolios on the efficient frontier provide the highest possible return for a given level of risk.
Now, there are many portfolios in the efficient frontier. Choosing one of them depends on the investor’s risk aversion and utility function.
A portfolio above the efficient frontier doesn’t exist, while a portfolio below the efficient frontier is inefficient.
The minimum risk portfolio—also called MVP (Minimum Variance Portfolio), because it is the variance of an asset’s returns that determines the risk of the asset—is the portfolio with the least risk.
This implies it has a lower expected return too though.
The MVP is located on the point that is most to the left on the portfolio frontier.
#2) Optimal Risky Portfolio with Two Risky Assets
The frontier for a portfolio of two risky assets will vary according to the correlation between the two.
If the assets have perfect positive correlation (ρ = 1), the efficient frontier is linear. The two assets are identical, and there are no gains from diversification. This is unrealistic. In the real
world, two assets with perfect correlation are actually the same asset.
If the assets have perfect negative correlation (ρ = –1), the minimum variance portfolio is risk-free because the asset’s risks cancel out each other. This also doesn’t happen in reality.
When the assets have imperfect correlation (between –1 and 1, but not equal), the risk of the portfolio is lower than if the assets have a perfect positive correlation.
Because there are gains from diversification. This is the most realistic case. It’s unlikely you’ll find perfect correlation in the real world between two distinct assets.
In order to understand how diversification reduces the risk of a portfolio, let’s take a look at some formulas.
The expected return of a portfolio—E(R[P])—is the weighted average of the expected returns of the assets composing the portfolio:
E(R[P]) = w[1]E(R[1]) + w[2]E(R[2])
The same is not true for the variance, which reflects how risky a portfolio is. The formula is different. The variance of a portfolio is generally smaller than the weighted average of the variances
of individual asset returns of the portfolio.
This means you can gain from diversification because you can reduce the risk of your portfolio without necessarily giving up higher returns.
The formula for the variance of a portfolio (σ^2[P]) of two assets is the following:
σ^2[P] = w^2[1]σ^2[1] + w^2[2]σ^2[2] + 2ρ(R[1],R[2])w[1]w[2]σ[1]σ[2]
Notice how the only number that can be negative (and thus reduce the variance of the portfolio) is the correlation (ρ). Low or even negative correlation means more diversification, which means less
To further understand this, let’s look at the efficient frontier for two imperfectly correlated risky assets:
Efficient Frontier for Two Imperfectly Correlated Assets
All portfolios between A and B belong to the minimum variance frontier.
Portfolio D dominates portfolio C because it provides a higher return for the same level of risk, hence it is more efficient. This doesn’t happen when the assets have perfect positive correlation, as
all portfolios are efficient in that case.
Portfolio B is the riskiest, but also the one with the highest expected return.
The smaller the correlation (the further away from positive 1), the more to the left the efficient frontier is. For example, the assets in portfolio D have a lower correlation (probably negative)
than the assets in portfolio B. In other words, less correlation between assets results in lower risk because of diversification.
The dashed lines convey the perfect correlation cases seen before.
Now, let’s understand how a risk-free asset affects the portfolio:
#3) Optimal Portfolio with a Risk-Free Asset
When you introduce a risk-free asset to a two-asset portfolio, things change. You now have a complete portfolio.
A risk-free asset is one that has a certain future return. In reality, you can argue these don’t exist, but in general we consider government bonds from developed countries risk-free assets.
If one of the two assets is risk-free, the efficient frontier is a straight line, as opposed to being the upper half of a hyperbola. This line originates from the return of the risk-free asset in the
Efficient Frontier for One Risky and One Risk-Free Asset
The Sharpe ratio is the slope of the line and it measures the increase in expected return per additional unit of risk (standard deviation).
Remember that a portfolio is also an asset in it itself. It is defined by its expected return, its standard deviation, and its correlation with other assets or portfolios.
Thus, the previous analysis with just two assets is more general than it seems:
You can easily repeat it with one of the two assets being a portfolio. This means you can extend the analysis from two to three assets, from three to four, and so on.
If there are n (infinite) risky, imperfectly correlated assets, then the efficient frontier will have a bullet shape:
Efficient Frontier for One Risk-Free asset and n Risky Assets
With more and more assets, diversification possibilities improve and, in principle, the efficient frontier is more to the left.
Your goal is to maximize the expected return per unit of risk. Or in other words, the Sharpe ratio.
You want to find the capital allocation line with the highest possible slope, but for a portfolio that still exists. This means you want to find the line that is tangent to the bullet shape.
For example, portfolio E is composed solely of a bunch of risky assets. If we combine this portfolio with one risk-free asset, we get the straight line originating from the risk-free return to point
You can quickly check that all portfolios on this line are dominated by those you can create by combining the risk-free asset with portfolio F. All portfolios on the line ending in F have a higher
expected return for the same risk level (volatility) than the portfolios on the line ending on E.
Keep searching for the highest similar line combining the risk-free return with the risky asset bullet-shaped frontier and you obtain the truly efficient frontier—the straight line originating from
the risk-free return on the y-axis that is tangent to the risky asset frontier.
This is the Capital Allocation Line:
Capital Allocation Line
The capital allocation line (CAL), also called global efficient frontier, is a line that reflects the best combinations of risky assets with a risk-free one. This line only exists when the portfolio
contains a risk-free asset, like a government bond for example.
It is a straight line tangent to the bullet-shaped frontier of risky assets. The y-intercept of the capital allocation line is the risk-free rate.
Every single point on this line is a portfolio of risky assets that includes a position on the risk-free asset, except for the tangency portfolio (also called the optimal risky portfolio).
The T in the graph above is the tangency portfolio. This portfolio is the only portfolio on the capital allocation line composed solely of risky assets, hence why it is also called an optimal risky
portfolio. As before, if short positions in the risk-free asset are possible, the efficient frontier extends beyond T (dashed line).
A short position in the risk-free asset is, for example, a loan where you pay the risk-free interest rate. This means the portfolio can be riskier than the riskiest of assets that compose it because
of leverage. This is why it goes off to the left without end. More risk, more return.
To the right of the optimal risky portfolio, the position on the risk-free asset is a deposit.
#4) Optimal vs. Efficient Portfolio
Optimal risky portfolio can have a different meaning though:
When we say a portfolio is optimal, we are talking about a portfolio that maximizes the preferences (utility theory) of a given investor.
This is different than saying a portfolio is efficient. An efficient portfolio is any portfolio on the efficient frontier. The optimal portfolio is one of the portfolios that make up this frontier,
based on the preferences of the investor.
Imagine two investors who share the same perceptions as to expected returns, risk, and return correlations for a pair of assets but disagree in their willingness to take risks.
The efficient frontier will be identical for these two investors. However, different points on that same line (frontier) will represent their own personal optimal portfolios.
Because they have different preferences the optimal risky portfolio for each is different.
It’s also important to talk about the difference between optimal risky portfolio and optimal complete portfolio.
An optimal complete portfolio is any portfolio in the capital allocation line that includes risky assets and risk-free assets. On the other hand, the optimal risky portfolio is also on the capital
allocation line, but it is only composed of risky assets. This is the only point on the CAL with this characteristic.
#5) How to Calculate the Optimal Risky Portfolio
To calculate the optimal risky portfolio for more than two assets, you need to understand matrixes and vectors. On top of that, the returns of these assets must be linearly independent—you can’t
write one as a linear combination of the other.
Otherwise, this results in a matrix for which you can write one of the rows as a combination of the others (singular matrix). The problem is you cannot invert singular matrixes, which is something
you’ll need to do below.
The optimal risk portfolio formula is a vector of weights of the assets that compose it, and is given by:
On top of that, 1 is a vector of ones. How many ones? The same as the number of assets. V is the covariance matrix of returns. [] is the vector of expected returns for each asset. rf is the risk-free
The optimal portfolio weights must add up to 1, as the weight of the risk-free asset is zero. For the other portfolios on the CAL that are not the tangency portfolio, the weights will add up to less
or more than one.
Here’s why:
When the portfolio is located to the left of the optimal risky portfolio, the weights for the risky assets add up to less than 1, indicating a positive weight for the risk-free asset (deposit). For
portfolios to the right, they’ll add up to more than 1. The difference indicates the negative weight for the risk-free asset (borrowing). When you add this negative weight to the others, it will give
you 1.
The vector of expected returns for the tangency portfolio is:
How to Find an Optimal Risky Portfolio in Excel
To find the optimal risky portfolio in Excel, you want to maximize the Sharpe ratio. To do that you can use the Solver add-in.
To find the tangency portfolio you need:
• The expected return and risk of all assets that compose the portfolio.
• The covariance matrix between them.
Using Solver, maximize the Sharpe ratio by changing the weights invested in each asset.
Frequently Asked Questions (FAQs)
What is a risky portfolio?
A risky portfolio is a portfolio composed of assets for which the return is uncertain. When dealing with portfolios, your goal is to decide the weights you need to give each asset in order to achieve
the optimal portfolio risk-return.
What is an optimal risky portfolio?
Also called the tangency portfolio, it is a portfolio composed of an optimal combination of only risky assets that gives you the highest return per unit of risk.
What is the minimum risk portfolio?
Also called the minimum variance portfolio (MVP), it is the portfolio with the least amount of variance (or standard deviation), a measure of asset return’s risk. On a graphical representation of a
portfolio frontier, it is the point that is most to the left. | {"url":"https://financestu.com/optimal-risky-portfolio/","timestamp":"2024-11-11T01:44:33Z","content_type":"text/html","content_length":"101986","record_id":"<urn:uuid:d392e7e2-76e8-4013-872f-3107495b965f>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00399.warc.gz"} |
Personal Finance Advice for Real People
Annuity: An Annuity As a Variable Plan
Annuity: An Annuity As a Variable Plan
Annuity is an investment contract in which a certain amount of money is made payable on a regular basis, to a certain beneficiary. The beneficiary in this contract usually depends on the terms and
conditions of the agreement. The investor in this contract usually expects regular income, interest, and payment on a deferred basis. This contract is usually secured with the equity in the business.
The annuitant receives regular payment in the form of cash, interest income, or a combination of these two.
Annuity usually have fixed payments that increase over a period of time. When you buy annuity you expect that the value will increase over a period of time. The present value of a particular annuity
depends on various factors like the Discounted Cash Flow, Accrued Interest, Term Life, Accumulated Earnings, and expected returns. The present value of annuity represents how much money would need to
be paid in the future to maintain a given series of annuity payment. The reason why the value decreases over time is because of inflation, death of principal, and time intervals.
Annuity payment structure depends on many financial decisions. One of them is choosing the term of the annuity payment. The longer the duration, the more the investor can defer payments and receive
the additional amount. On the other hand, shorter terms would mean lower payments but bigger lump sum. When choosing the terms for the annuity plan, investors should carefully consider their goals
and objectives.
Another factor that affects the annuity plan is the internal rate of return. It is called internal rate of return because it is the rate which the investors will earn from the interest accumulated
within the plan. The internal rate of returns varies with the type of annuity payment a person chooses. For example, a lump-sum distribution provides higher internal rate of returns than fixed
On the other hand, variable annuities give higher cash flows if rates of interest are variable. Most people opt for variable cash flows in order to meet their long-term financial goals. While fixed
payments may be appropriate for some cases, investors may not have the option of changing their investment parameters.
Before purchasing an annuity plan, it is necessary to take into account the present values and the expected end date of payments. Present value refers to when the actual money invested is equal to,
or more than, the amount expected to be invested at the end of the plan. End date is when the actual money or the end value received is equal to, or more than, the amount expected to be paid out at
the end of the plan. A well-planned retirement plan can provide consistent monthly incomes over the years. However, a well-planned retirement plan requires careful evaluation and a good
decision-making process. | {"url":"https://www.mystructuredsettlementcash.com/blog/annuity-an-annuity-as-a-variable-plan/","timestamp":"2024-11-01T22:27:51Z","content_type":"text/html","content_length":"19659","record_id":"<urn:uuid:66efa299-5ec4-4437-ab55-9329e6afd9f5>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00785.warc.gz"} |
Understanding Confidence Intervals in Statistics Assignments
1. Understanding Confidence Intervals in Statistics Assignments
Understanding Confidence Intervals and Coverage Probability in Normal Distributions
July 08, 2024
David Thompson
United States
Distribution Theory
David Thompson, a Statistics Expert with 10 years of experience, holds a Ph.D. in Statistics from Stanford University. He specializes in data analysis and statistical modeling, providing
comprehensive assistance to university students. David's expertise helps students grasp complex concepts and excel in their academic and research endeavors.
Solving your statistics assignment can be a daunting task, especially when it involves understanding and calculating confidence intervals. A common scenario you might encounter is determining the
probability that a given interval will cover an unknown population parameter, such as the mean. This type of problem requires a solid grasp of normal distributions, sample means, and standard
deviations. By mastering these concepts, you can confidently approach any statistics assignment that requires interval estimation. This blog will guide you through the process of calculating the
coverage probability of confidence intervals, providing you with the tools and techniques needed to complete your distribution theory assignments. Whether you're dealing with a small sample size or a
larger dataset, the principles discussed here will help you solve your statistics assignment efficiently and accurately.
Step-by-Step Guide for Solving Statistics Assignments
When calculating the coverage probability of a confidence interval for the mean of a normal distribution, follow these steps: understand the interval, transform the sample mean to a standard normal
variable, and use statistical tables or software. This approach simplifies complex problems, making it easier to solve your statistics assignment accurately.
Step 1: Understanding the Interval:
The first step in solving a statistics assignment involving confidence intervals is to understand what a confidence interval represents and how it is constructed. A confidence interval provides a
range of values within which the true population parameter, such as the mean (μ), is expected to lie with a certain level of confidence.
Key Components of a Confidence Interval:
1. Sample Mean (Xˉ): The sample mean is the average of the data points in your sample. It serves as the central point of the confidence interval.
2. Margin of Error: The margin of error reflects the range within which the true population parameter is expected to fall. It accounts for the variability in the sample and the desired confidence
3. Standard Deviation (σ): The standard deviation measures the dispersion of the sample data. In cases where the population standard deviation is unknown, the sample standard deviation (s) is used.
4. Sample Size (n): The sample size impacts the width of the confidence interval. Larger samples provide more precise estimates of the population parameter, resulting in narrower intervals.
5. Confidence Level: The confidence level, often expressed as a percentage (e.g., 95%), indicates the degree of certainty that the interval contains the true population parameter. Higher confidence
levels result in wider intervals.
Understanding these components and how they interact is crucial for accurately constructing and interpreting confidence intervals. This foundational knowledge will help you solve your statistics
assignment more effectively.
Step 2: Normal Distribution and Sample Mean:
Understanding the relationship between the normal distribution and the sample mean is essential when working with confidence intervals in statistics. This step involves recognizing how the properties
of the normal distribution apply to the sample mean and how this knowledge helps in constructing confidence intervals.
The Normal Distribution:
1. Definition: The normal distribution, also known as the Gaussian distribution, is a continuous probability distribution characterized by its bell-shaped curve. It is defined by two parameters: the
mean (μ) and the standard deviation (σ).
2. Properties:
□ Symmetry: The normal distribution is symmetric about its mean.
□ 68-95-99.7 Rule: Approximately 68% of the data falls within one standard deviation of the mean, 95% within two, and 99.7% within three.
Sample Mean and Its Distribution:
1. Sample Mean (Xˉ): The sample mean is the average of a set of observations drawn from the population. It serves as an estimate of the population mean.
2. Distribution of the Sample Mean: When you draw multiple samples of size n from a population, the sample means will themselves form a distribution. According to the Central Limit Theorem,
regardless of the population's distribution, the distribution of the sample means will be approximately normal if the sample size is sufficiently large (typically n>30).
3. Mean and Standard Deviation of the Sample Mean:
□ Mean: The mean of the sample mean distribution is equal to the population mean (μ).
□ Standard Deviation: The standard deviation of the sample mean distribution, also known as the standard error, is given by σ\sqrt{n}, where σ is the population standard deviation and n is the
sample size.
Step 3: Transformation to Standard Normal Distribution:
Transforming the sample mean to a standard normal distribution is a crucial step in solving statistics assignments involving confidence intervals for the mean of a normal distribution. This
transformation simplifies the process of finding probabilities and constructing confidence intervals by leveraging the well-known properties of the standard normal distribution.
Standard Normal Distribution:
The standard normal distribution, denoted as N(0,1), is a special case of the normal distribution with a mean of 0 and a standard deviation of 1. It is used as a reference to convert any normal
distribution to a standard form, allowing for easier calculation of probabilities.
Transformation Process:
1. Standardizing the Sample Mean: To transform the sample mean (Xˉ) to a standard normal variable (Z), we use the following formula:
• Xˉ is the sample mean.
• μ is the population mean.
• σ is the population standard deviation.
• on is the sample size.
2. Interpreting Z-Score: The Z-score represents the number of standard errors that the sample mean (Xˉ) is away from the population mean (μ). It standardizes the distribution of the sample mean,
allowing us to use standard normal distribution tables to find probabilities and critical values.
3. Using Z-Score for Confidence Intervals: Once the sample mean is transformed into the Z-score, we can determine the probability that a given interval will cover the population mean (μ) by looking
up the Z-score in standard normal distribution tables.
Practical Steps in Assignments:
1. Calculate the Sample Mean: Begin by calculating the sample mean (Xˉ) from your data.
2. Determine the Standard Error: Compute the standard error of the sample mean using σsqrt{n}}.
3. Standardize the Sample Mean: Use the formula Z=(Xˉ−μ)/σsqrt{n} to convert the sample mean to a Z-score.
4. Use Z-Tables or Software: Refer to Z-tables or statistical software to find the probabilities associated with the calculated Z-score. This helps in determining the confidence interval and its
coverage probability.
5. Construct the Confidence Interval: Based on the Z-score and the desired confidence level, construct the confidence interval for the population mean.
By transforming the sample mean to a standard normal distribution, you simplify the process of finding probabilities and constructing confidence intervals. This approach is fundamental in statistics
and is essential for solving your statistics assignment accurately and efficiently. Understanding this transformation allows you to leverage the standard normal distribution's properties, making
complex statistical problems more manageable.
Step 4: Setting Up the Probability:
Setting up the probability in statistics assignments involves defining the interval within which we expect the unknown population parameter to lie with a specified level of confidence. This step
focuses on interpreting the confidence interval and determining the probability that it covers the true population mean (μ).
Confidence Interval Definition:
1. Interval Definition: A confidence interval is a range of values constructed from sample data that is likely to contain the true population parameter, such as the mean (μ), with a certain level of
confidence. For example, a 95% confidence interval implies that if we were to take 100 different samples and construct confidence intervals in the same way, approximately 95 of those intervals
would contain the true population mean.
2. Components:
□ Sample Mean (Xˉ): The central point of the confidence interval, estimated from sample data.
□ Margin of Error: The range around the sample mean that accounts for sampling variability and is determined by the standard error.
□ Confidence Level: The probability (expressed as a percentage) that the interval will contain the true population parameter.
Probability Setup:
1. Formulating the Interval: To set up the probability, we define the confidence interval as:
(Xˉ−z(σ/{sqrt{n}}, Xˉ+znσ/{sqrt{n})
• X is the sample mean.
• z is the critical value from the standard normal distribution corresponding to the desired confidence level (e.g., 1.96 for 95% confidence).
• σ is the population standard deviation (or sample standard deviation if unknown).
• n is the sample size.
2. Interpreting the Probability: The probability is interpreted as the likelihood that this interval contains the true population mean (μ). For instance, a 95% confidence interval suggests that there
is a 95% probability that the interval covers the true value of μ.
3. Using Statistical Tables or Software: Statistical tables or software are employed to find the critical value zzz corresponding to the desired confidence level. This critical value determines the
width of the interval and ensures that the specified confidence level is achieved.
By setting up the probability in this manner, you ensure that your confidence intervals are accurately constructed and interpreted in statistics assignments. This approach provides a clear framework
for estimating population parameters from sample data and enhances your ability to solve your statistics assignment with confidence and precision.
Step 5: Expressing the Interval in Terms of Z:
Expressing the confidence interval in terms of the standard normal variable Z is a fundamental step in solving statistics assignments involving interval estimation for the mean of a normal
distribution. This transformation allows us to leverage the properties of the standard normal distribution to determine the probability that the interval covers the unknown population parameter.
Transforming the Confidence Interval:
1. Standardizing the Interval: To express the confidence interval in terms of Z, we use the formula:
• Xˉ is the sample mean.
• μ is the population mean.
• σ is the population standard deviation (or sample standard deviation if unknown).
• n is the sample size.
2. Interpreting the Z-Score: The Z-score represents the number of standard deviations that the sample mean (Xˉ) is away from the population mean (μ). It allows us to standardize the distribution of
the sample mean to a standard normal distribution N(0,1).
3. Setting Up the Interval: Once Z is calculated, the confidence interval can be expressed as:
(Xˉ−Z⋅σsqrt{n}, Xˉ+Z⋅σsqrt{n})
This interval represents the range within which we are confident that the true population mean (μ) lies, based on the sample data.
By expressing the confidence interval in terms of Z, you transform the problem into one that can be readily addressed using standard normal distribution tables or statistical software. This approach
simplifies the calculation of probabilities and enhances your ability to accurately solve your statistics assignment involving interval estimation.
Step 6: Simplifying the Probability Expression:
Simplifying the probability expression involves refining the calculation to determine the likelihood that a confidence interval covers the unknown population parameter, such as the mean (μ), in
statistics assignments. This step focuses on using the properties of the standard normal distribution to derive a straightforward probability statement.
Refining the Probability Calculation:
1. Using the Standard Normal Variable Z: Once the confidence interval is expressed in terms of the standard normal variable Z, we can simplify the probability expression. Recall that Z is calculated
• Xˉ is the sample mean.
• μ is the population mean.
• σ is the population standard deviation (or sample standard deviation if unknown).
• n is the sample size.
2. Probability Interpretation: The probability that the confidence interval covers the true population mean μ can be interpreted as the area under the standard normal curve between two Z-scores,
corresponding to the lower and upper bounds of the interval.
3. Critical Values Z_α/2: For a confidence level 1−α, where α is the significance level (e.g., α=0.05 for 95% confidence), the critical values Zα_/2 are determined from standard normal distribution
tables or statistical software. These values divide the area under the curve, leaving 1−α in the middle.
This interval represents the range within which we are 1−α confident that the true population mean μ lies.
Step 7: Calculating the Coverage Probability:
Calculating the coverage probability involves determining the likelihood that a confidence interval constructed from sample data covers the unknown population parameter, such as the mean (μ), in
statistics assignments. This step focuses on applying the critical values of the standard normal distribution to quantify the interval's effectiveness in capturing the true population mean.
Calculating the Probability:
• Interpreting Coverage Probability: The coverage probability is the proportion of times that the confidence interval constructed in this manner will contain the true population mean μ, over an
infinite number of repeated samples.
Simplifying this gives you the interval within which you can be 95% confident that the true population mean μ lies.
• Coverage Interpretation: In practice, if you were to repeat the sampling process many times and construct confidence intervals in the same manner, approximately 95% of those intervals would
contain the true population mean μ. This illustrates the effectiveness and reliability of the confidence interval in capturing the population parameter.
By calculating the coverage probability in this structured approach, you ensure that your confidence intervals are statistically valid and provide meaningful estimates of the population parameter in
statistics assignments. This methodological approach enhances your ability to interpret and apply confidence intervals effectively, contributing to accurate solutions in your statistics assignment
Understanding how to calculate the coverage probability of confidence intervals is a crucial skill for anyone looking to solve their statistics assignment effectively. By breaking down the problem
into manageable steps, such as transforming variables and using standard normal distribution tables, you can simplify even the most complex assignments. Remember, the key to success in statistics is
practice and familiarity with the underlying concepts. As you continue to work on these types of problems, you'll find that your confidence grows, and your ability to solve your statistics assignment
improves significantly. Use this guide as a reference, and don't hesitate to seek additional resources or help if needed. With dedication and the right approach, you'll be well-equipped to handle any
statistical challenge that comes your way. | {"url":"https://www.statisticsassignmentexperts.com/blog/understanding-confidence-intervals-in-statistics-assignments.html","timestamp":"2024-11-06T07:53:42Z","content_type":"text/html","content_length":"109855","record_id":"<urn:uuid:eec05250-2d51-4fef-bedd-0b903094fcbc>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00115.warc.gz"} |
by Liliana Usvat
In mathematics, a toroid is a doughnut-shaped object, such as an O-ring. It is a ring form of a solenoid.
A toroid is used as an inductor in electronic circuits, especially at low frequencies where comparatively large inductances are necessary.
In geometry, a torus (pl. tori) is a surface of revolution generated by revolving a circle in three-dimensional space about an axis coplanar with the circle. If the axis of revolution does not touch
the circle, the surface has a ring shape and is called a ring torus or simply torus if the ring shape is implicit.
A torus can be defined parametrically by:
θ, φ are angles which make a full circle, starting at 0 and ending at 2π, so that their values start and end at the same point,
R is the distance from the center of the tube to the center of the torus,
r is the radius of the tube.
R and r are also known as the "major radius" and "minor radius", respectively. The ratio of the two is known as the "aspect ratio". A doughnut has an aspect ratio of about 2 to 3.
An implicit equation in Cartesian coordinates for a torus radially symmetric about the z-axis is
or the solution of f(x, y, z) = 0, where
Algebraically eliminating the square root gives a quartic equation,
The three different classes of standard tori correspond to the three possible relative sizes of r and R. When R > r, the surface will be the familiar ring torus. The case R = r corresponds to the
horn torus, which in effect is a torus with no "hole". The case R < r describes the self-intersecting spindle torus. When R = 0, the torus degenerates to the sphere.
When R ≥ r, the interior
of this torus is diffeomorphic (and, hence, homeomorphic) to a product of an Euclidean open disc and a circle.
The surface area and interior volume of this torus are easily computed using Pappus's centroid theorem giving
These formulas are the same as for a cylinder of length 2πR and radius r, created by cutting the tube and unrolling it by straightening out the line running around the center of the tube. The losses
in surface area and volume on the inner side of the tube exactly cancel out the gains on the outer side.
As a torus is the product of two circles, a modified version of the spherical coordinate system is sometimes used. In traditional spherical coordinates there are three measures, R, the distance from
the center of the coordinate system, and θ and φ, angles measured from the center point. As a torus has, effectively, two center points, the centerpoints of the angles are moved; φ measures the same
angle as it does in the spherical system, but is known as the "toroidal" direction. The center point of θ is moved to the center of r, and is known as the "poloidal" direction. These terms were first
used in a discussion of the Earth's magnetic field, where "poloidal" was used to denote "the direction toward the poles".In modern use these terms are more commonly used to discuss magnetic
confinement fusion devices.
n-dimensional torus
The torus has a generalization to higher dimensions, the n-dimensional torus, often called the n-torus or hypertorus for short. (This is one of two different meanings of the term "n-torus".)
Recalling that the torus is the product space of two circles, the n-dimensional torus is the product of n circles. That is:
The 1-torus is just the circle: T1 = S1. The torus discussed above is the 2-torus, T2. And similar to the 2-torus, the n-torus, Tn can be described as a quotient of Rn under integral shifts in any
coordinate. That is, the n-torus is Rn modulo the action of the integer lattice Zn (with the action being taken as vector addition). Equivalently, the n-torus is obtained from the n-dimensional
hypercube by gluing the opposite faces together.
An n-torus in this sense is an example of an n-dimensional compact manifold. It is also an example of a compact abelian Lie group. This follows from the fact that the unit circle is a compact abelian
Lie group (when identified with the unit complex numbers with multiplication). Group multiplication on the torus is then defined by coordinate-wise multiplication.
Toroidal groups play an important part in the theory of compact Lie groups. This is due in part to the fact that in any compact Lie group G one can always find a maximal torus; that is, a closed
subgroup which is a torus of the largest possible dimension. Such maximal tori T have a controlling role to play in theory of connected G. Toroidal groups are examples of protori, which (like tori)
are compact connected abelian groups, which are not required to be manifolds.
Magnetic Field of Toroid
Finding the magnetic field inside a toroid is a good example of the power of Ampere's law. The current enclosed by the dashed line is just the number of loops times the current in each loop. Amperes
law then gives the magnetic field by
The toroid is a useful device used in everything from tape heads to tokamaks.
Free energy
There are dedicated independent scientists around the world that claim that we can generate unlimited clean energy by just tapping into the ‘torus’, a shape that supposedly pervades the universe, and
which could yield endless free energy.
Toroid and Nature
The Sun has a large toroidal field surrounding it — the heliosphere — that is itself embedded inside a vastly larger toroidal field encompassing the Milky Way galaxy. Our Earth’s magnetic field is
surrounding us and is inside the Sun’s field, buffering us from the direct impact of solar electromagnetic radiation. Earth’s atmosphere and ocean dynamics are toroidal and are influenced by the
surrounding magnetic field. Ecosystems, plants, animals, etc all exhibit torus flow dynamics and reside within and are directly influenced by (and directly influence) the Earth’s atmospheric and
oceanic systems. And on it goes inward into the ecosystems and organs of our bodies, the cells they’re made of, and the molecules, atoms and sub-atomic particles.
Continuing our exploration of the torus as a form and flow process, one of the key characteristics of it is that at its very center, the entire system comes to a point of ultimate balance and
stillness — in other words, perfect centeredness.
The torus is the oldest structure in existence and without it nothing could exist. The toroidal shape is similar to a donut but rather than having an empty central “hole”, the topology of a torus
folds in upon itself and all points along its surface converge together into a zero-dimensional point at the center called the Vertex.
It has even been suggested that the torus can be used to define the workings of consciousness itself. In other words…consciousness has a geometry! The geometric shape used to describe the
self-reflexive nature of consciousness is the torus. The torus allows a vortex of energy to form which bends back along itself and re-enters itself. It ‘inside-outs’, continuously flowing back into
itself. Thus the energy of a torus is continually refreshing itself, continually influencing itself.
All Toroids have a black hole at one end and a white hole at the other. Black holes suck in energy and white holes emit it. So in our human body toroids we have black (negatively charged) and white
(positively charged) holes.
When the torus is in balance and the energy is flowing we are in a perfect state to clear ourselves of anything that is ‘not self’ anything that prevents us being our authentic selves.
The infinity symbol an ancient two dimensional representation of the 3D double toroidal energy flow – self generating, continual, never ending.
Theorists in astrophysics now argue that each electron holds at its core “zero point energy,” as does the universe at large.
Zero point energy is a place where there is no sound or light. This nothingness, issuing the essence of everything, exists at the heart of creation. It is the place wherein miraculous manifestation
from nothing to something happens. | {"url":"http://www.mathematicsmagazine.com/Articles/Toroid.php","timestamp":"2024-11-08T04:42:44Z","content_type":"application/xhtml+xml","content_length":"16671","record_id":"<urn:uuid:f9bf56b4-0649-4c33-a2e3-2102f1efc9b9>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00001.warc.gz"} |
GMAT Geometry Practice Problems | Special Properties of Triangles (Part 3)
A 8 min read
In the last article in the GMAT Geometry Series, we explained properties of different types of triangles and how those properties will help you solve GMAT geometry questions on triangles. We
illustrated them with GMAT geometry practice problems and also asked you to solve some GMAT Geometry Practice Problems.
In this article, we will look at three special properties which are tested by GMAT in Geometry:
If you haven’t gone through the previous articles of this series, here are the links-
Learn how a median affects the area of any triangle
Consider any triangle ABC and draw a line joining A to the midpoint of BC. Let’s call this line AD. Now tell me, what can we infer about the area of triangle ABD and ACD?
Is there any relation between them?
We know,
Area of a triangle = (1/2) x base x height
So, let us drop a perpendicular from A on BC and name it AE.
Since the heights of triangle ABD and ACD are equal to AE, we can write:
Area of triangle ABD = ½ x BD x AE
Area of triangle ACD = ½ x CD x AE
Also, since D is the midpoint of BC, we have CD = BD
Hence, we can conclude that:
Area of triangle ABD = Area of triangle ACD
For any strategic advice for GMAT or MBA Admissions, write to us at acethegmat@e-gmat.com. Sign up for free trial and get unlimited access to concept files, live sessions, and practice questions.
We know that the line joining a vertex of a triangle to the midpoint of the opposite side is called a median of the triangle.
Therefore, we found that the median AD divides the triangle into two equal areas.
Let us look at an Official question which can be solved very quickly, using the above property.
GMAT Quant Official Guide Question | (GMAT Data Sufficiency)
A slightly edited version of OG Question: OG 16 DS 126
In triangle ABC, point X is the midpoint of side AC and point Y is the midpoint of side BC. If point R is the midpoint of line segment XC and if point S is the midpoint of line segment YC, what is
the area of triangular region RCS, if the area of triangular region ABX is 32 square units?
This diagram might look scary and the question might seem undoable at the beginning. But let us simplify the diagram, concentrating only on the medians, BX, XY, RS and RY (join RY in the diagram). In
doing so, the diagram will look as follows:
Since BX is a median in triangle ABC, we can use the Special Property to conclude that
Area of ABX = Area of BXC
It is given that the Area of triangle ABX = 32 square units.
Hence, Area of BXC = 32 square units.
Since XY is a median in triangle BXC, we can conclude that
Area of BXY = Area of XYC = 32/2 = 16
Since RY is a median in triangle XYC, we can conclude that
Area of RYX = Area of RYC = 16/2 = 8
And since RS is a median in triangle YRC we can conclude that
Area of YRS = Area of RCS = 8/2 = 4
So, is this question really tough? Definitely, not as tough as it looked at the beginning.
We used one simple and special property to elegantly solve a complicated looking question.
Discover the Property of Sum of External Angles of a triangle
We know that,
An external angle = sum of two opposite interior angles
In the above diagram,
ABC is a triangle with sides extended as shown.
Therefore, using the above property we can write,
Ext. ∠ ACD = int. ∠CAB + int. ∠CBA
But hold on, that is not the special property!
Now you know that the sum of internal angles = 180°.
But what about the sum of external angles?
If we add up all the external angles as shown in the diagram, we will get
Ext ∠ (ACD + CAE + ABF) = int. ∠ (2A + 2B + 2C) = 2 x 180° = 360°
Hence, the sum of external angles = 360°
We will discuss this property in more details for Polygons, where I will show you that sum of external angles is equal to 360° for any convex polygon, irrespective of the number of sides the polygon
A Cheeky Trick on Angle-Side Property of Right Angled Triangle
While going through GMAT geometry practice problems on right-angled triangles, the most commonly tested right-angled triangles are ones with the following set of values as its angles.
1. 30° – 60° – 90°
2. 45° – 45° – 90°
Therefore, it is important for us to know the relation between these angles and the sides of the triangles, as knowing this can help us in finding the answer quickly and at the same make the
calculations significantly simple.
Case 1:
As shown in the figure, if the angles of the triangle are 30°-60°-90°, the ratio of the sides will be –
AB: BC: AC = 1: √3: 2
Case 2:
As shown in the figure, if the angles of the triangle are 45°-45°-90°, the ratio of the sides will be –
AB: BC: AC = 1: 1: √2
Application of Sum of External Angles and Right Angle
Let us apply the special property 2 and 3 in a very simple GMAT-like question.
The two questions given below are based on the same diagram, so let’s solve these questions together to understand the application of the above properties.
Q.1 What is the value of z?
1. 110°
2. 120°
3. 130°
4. 140°
5. 150°
Q.2 If the value of AC = 4 units, what is the ratio of the sides AB and BC?
1. 2 : 1
2. 1 : √ 3
3. 2 : √ 3
4. √ 3 : 1
5. 1 : 2
Detailed Solution to the above GMAT Geometry Practice Problem:
Part 1:
Notice carefully, that z is the external angle of triangle ABC.
From our conceptual understanding related to angles of a triangle, we can write –
External ∠CAE = sum of opposite internal angles of triangle ABC
Therefore, ∠z = ∠ABC + ∠ACB
From the diagram, we can clearly infer that –
∠ABC = 90° and ∠ACB = y°
Also, ∠ACB + ∠ACD = 180° [straight line angle property] And, ∠ACD = 150°
Therefore, ∠ACB = 180° – 150° = 30°
Hence, the value of ∠z = ∠ABC + ∠ACB = 90° + 30° = 120°
Part 2:
This question can be solved in multiple ways –
Method I:
We know that
• from our above discussion in part 1 we know that ∠ACB = 30°
length AB = AC sin 30° = 4 x ½ = 2
And BC = AC cos 30° = 4 x √3/2 = 2√3
Therefore, ratio of AB : BC = 2 : 2√3 = 1 : √3
Method II:
But say one does not know trigonometry but still wants to solve it. This is where, our
Special Property 3, Case 1 comes in handy!
• If we observe carefully, we will notice that triangle ABC is a 30°-60°-90° triangle
• Since ∠ABC = 90° , ∠BAC = 60° and ∠ACB = 30°
• And we know that ratio of AB : BC : CA = 1 : √3 : 2
Therefore, from here, we can directly say that the answer would be 1 : √3 and we don’t even have to use the value of AC to find the answer!
Cool, isn’t it?
GMAT Geometry Practice Problems – Another Question to test your imagination
A 60 feet long ladder is inclined against a wall such that the bottom and the top of the ladder are at equal distance from the point where the wall meets the floor. The same ladder slides away from
the foot of the wall such that it is inclined at an angle of 30° from the floor. What is the difference between the heights to which the ladder reaches in the two cases?
1. 15(√ 2 – 1)
2. 30(√ 2 – 1)
3. 15(√ 2 + 1)
4. 30√ 2
5. 30(√ 2 + 1)
Now this question might seem to be a tough one because the position of the ladder is different in the two cases, which leads to change in the angles with respect to the floor. Therefore, most
students might feel that this question might not be an easy one to solve.
But no need to worry, let’s see how by using our special property and a methodical approach, we can get our answer very easily!
Step 1: Drawing Inferences from the Question Statement
Let us represent the two cases diagrammatically.
Triangle BAC represents the initial case, where point A is the point of intersection of the wall and the floor and BC represents the ladder.
We are given that AB = AC and BC = 60 feet.
Triangle PAQ represents the case after the ladder slid. The only value that remains unchanged is the length of the ladder, hence BC = PQ.
Therefore, in the Right Triangle PAQ,
PQ = 60 feet, and ∠AQP = 30°
We need to find the value of AB – AP.
Step 2: Finding the required values
Let us first consider Right Triangle BAC:
We know that when two sides of a right triangle are equal, it is a 45°-45°-90° triangle.
As discussed in special property 3, in such a triangle, the sides opposite to the angles 45°, 45°, and 90° respectively are in the ratio 1: 1: √2
Thus, we have found out the value of AB.
Now, considering Right Triangle PAQ:
It is a 30°-60°-90° right-angled triangle. And we know that in such a triangle, the sides opposite to the angles 30°, 60°, and 90° respectively are in the ratio 1 : √3 : 2.
Step 3: Calculate the final answer
Difference between the heights to which the ladder reaches in the two cases
= AB – AP
= 30√2 – 30
= 30(√2 – 1)
Answer: Option (2)
Takeaways & GMAT Sample Problems
Any geometry question testing the concepts of triangles can be easily tackled as long as the test taker is clear with:
1. the basic properties of a triangle
2. the subtle differences these properties exhibit, depending on the type of the triangle
3. when to apply and how to apply these (Focus on the questions that are given in the article and understand how each of these properties is used in each question)
With this, we come to the end of our series on GMAT Geometry. We hope you liked our articles as much as we liked creating them. Remember the takeaways from our articles on GMAT Geometry and go
through GMAT geometry sample problems and geometry practice problems. | {"url":"https://e-gmat.com/blogs/geometry-practice-problems-gmat-special-properties-of-triangles/","timestamp":"2024-11-11T21:02:20Z","content_type":"text/html","content_length":"676831","record_id":"<urn:uuid:3277b454-f734-4e6c-94a0-bb33e2e63baf>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00399.warc.gz"} |
In geometry and combinatorics, an arrangement of hyperplanes is an arrangement of a finite set A of hyperplanes in a linear, affine, or projective space S. Questions about a hyperplane arrangement A
generally concern geometrical, topological, or other properties of the complement, M(A), which is the set that remains when the hyperplanes are removed from the whole space. One may ask how these
properties are related to the arrangement and its intersection semilattice. The intersection semilattice of A, written L(A), is the set of all subspaces that are obtained by intersecting some of the
hyperplanes; among these subspaces are S itself, all the individual hyperplanes, all intersections of pairs of hyperplanes, etc. (excluding, in the affine case, the empty set). These intersection
subspaces of A are also called the flats of A. The intersection semilattice L(A) is partially ordered by reverse inclusion.
If the whole space S is 2-dimensional, the hyperplanes are lines; such an arrangement is often called an arrangement of lines. Historically, real arrangements of lines were the first arrangements
investigated. If S is 3-dimensional one has an arrangement of planes.
A hyperplane arrangement in space
General theory
The intersection semilattice and the matroid
The intersection semilattice L(A) is a meet semilattice and more specifically is a geometric semilattice. If the arrangement is linear or projective, or if the intersection of all hyperplanes is
nonempty, the intersection lattice is a geometric lattice. (This is why the semilattice must be ordered by reverse inclusion—rather than by inclusion, which might seem more natural but would not
yield a geometric (semi)lattice.)
When L(A) is a lattice, the matroid of A, written M(A), has A for its ground set and has rank function r(S) := codim(I), where S is any subset of A and I is the intersection of the hyperplanes in S.
In general, when L(A) is a semilattice, there is an analogous matroid-like structure called a semimatroid, which is a generalization of a matroid (and has the same relationship to the intersection
semilattice as does the matroid to the lattice in the lattice case), but is not a matroid if L(A) is not a lattice.
For a subset B of A, let us define f(B) := the intersection of the hyperplanes in B; this is S if B is empty. The characteristic polynomial of A, written p[A](y), can be defined by
${\displaystyle p_{A}(y):=\sum _{B}(-1)^{|B|}y^{\dim f(B)},}$
summed over all subsets B of A except, in the affine case, subsets whose intersection is empty. (The dimension of the empty set is defined to be −1.) This polynomial helps to solve some basic
questions; see below. Another polynomial associated with A is the Whitney-number polynomial w[A](x, y), defined by
${\displaystyle w_{A}(x,y):=\sum _{B}x^{n-\dim f(B)}\sum _{C}(-1)^{|C-B|}y^{\dim f(C)},}$
summed over B ⊆ C ⊆ A such that f(B) is nonempty.
Being a geometric lattice or semilattice, L(A) has a characteristic polynomial, p[L(A)](y), which has an extensive theory (see matroid). Thus it is good to know that p[A](y) = y^i p[L(A)](y), where i
is the smallest dimension of any flat, except that in the projective case it equals y^i + 1p[L(A)](y). The Whitney-number polynomial of A is similarly related to that of L(A). (The empty set is
excluded from the semilattice in the affine case specifically so that these relationships will be valid.)
The Orlik–Solomon algebra
The intersection semilattice determines another combinatorial invariant of the arrangement, the Orlik–Solomon algebra. To define it, fix a commutative subring K of the base field and form the
exterior algebra E of the vector space
${\displaystyle \bigoplus _{H\in A}Ke_{H}}$
generated by the hyperplanes. A chain complex structure is defined on E with the usual boundary operator ${\displaystyle \partial }$ . The Orlik–Solomon algebra is then the quotient of E by the ideal
generated by elements of the form ${\displaystyle e_{H_{1}}\wedge \cdots \wedge e_{H_{p}}}$ for which ${\displaystyle H_{1},\dots ,H_{p}}$ have empty intersection, and by boundaries of elements of
the same form for which ${\displaystyle H_{1}\cap \cdots \cap H_{p}}$ has codimension less than p.
Real arrangements
In real affine space, the complement is disconnected: it is made up of separate pieces called cells or regions or chambers, each of which is either a bounded region that is a convex polytope, or an
unbounded region that is a convex polyhedral region which goes off to infinity. Each flat of A is also divided into pieces by the hyperplanes that do not contain the flat; these pieces are called the
faces of A. The regions are faces because the whole space is a flat. The faces of codimension 1 may be called the facets of A. The face semilattice of an arrangement is the set of all faces, ordered
by inclusion. Adding an extra top element to the face semilattice gives the face lattice.
In two dimensions (i.e., in the real affine plane) each region is a convex polygon (if it is bounded) or a convex polygonal region which goes off to infinity.
• As an example, if the arrangement consists of three parallel lines, the intersection semilattice consists of the plane and the three lines, but not the empty set. There are four regions, none of
them bounded.
• If we add a line crossing the three parallels, then the intersection semilattice consists of the plane, the four lines, and the three points of intersection. There are eight regions, still none
of them bounded.
• If we add one more line, parallel to the last, then there are 12 regions, of which two are bounded parallelograms.
Typical problems about an arrangement in n-dimensional real space are to say how many regions there are, or how many faces of dimension 4, or how many bounded regions. These questions can be answered
just from the intersection semilattice. For instance, two basic theorems, from Zaslavsky (1975), are that the number of regions of an affine arrangement equals (−1)^np[A](−1) and the number of
bounded regions equals (−1)^np[A](1). Similarly, the number of k-dimensional faces or bounded faces can be read off as the coefficient of x^n−k in (−1)^n w[A] (−x, −1) or (−1)^nw[A](−x, 1).
Meiser (1993) designed a fast algorithm to determine the face of an arrangement of hyperplanes containing an input point.
Another question about an arrangement in real space is to decide how many regions are simplices (the n-dimensional generalization of triangles and tetrahedra). This cannot be answered based solely on
the intersection semilattice. The McMullen problem asks for the smallest arrangement of a given dimension in general position in real projective space for which there does not exist a cell touched by
all hyperplanes.
A real linear arrangement has, besides its face semilattice, a poset of regions, a different one for each region. This poset is formed by choosing an arbitrary base region, B[0], and associating with
each region R the set S(R) consisting of the hyperplanes that separate R from B. The regions are partially ordered so that R[1] ≥ R[2] if S(R[1], R) contains S(R[2], R). In the special case when the
hyperplanes arise from a root system, the resulting poset is the corresponding Weyl group with the weak order. In general, the poset of regions is ranked by the number of separating hyperplanes and
its Möbius function has been computed (Edelman 1984).
Vadim Schechtman and Alexander Varchenko introduced a matrix indexed by the regions. The matrix element for the region ${\displaystyle R_{i}}$ and ${\displaystyle R_{j}}$ is given by the product of
indeterminate variables ${\displaystyle a_{H}}$ for every hyperplane H that separates these two regions. If these variables are specialized to be all value q, then this is called the q-matrix (over
the Euclidean domain ${\displaystyle \mathbb {Q} [q]}$ ) for the arrangement and much information is contained in its Smith normal form.
Complex arrangements
In complex affine space (which is hard to visualize because even the complex affine plane has four real dimensions), the complement is connected (all one piece) with holes where the hyperplanes were
A typical problem about an arrangement in complex space is to describe the holes.
The basic theorem about complex arrangements is that the cohomology of the complement M(A) is completely determined by the intersection semilattice. To be precise, the cohomology ring of M(A) (with
integer coefficients) is isomorphic to the Orlik–Solomon algebra on Z.
The isomorphism can be described explicitly and gives a presentation of the cohomology in terms of generators and relations, where generators are represented (in the de Rham cohomology) as
logarithmic differential forms
${\displaystyle {\frac {1}{2\pi i}}{\frac {d\alpha }{\alpha }}.}$
with ${\displaystyle \alpha }$ any linear form defining the generic hyperplane of the arrangement.
Sometimes it is convenient to allow the degenerate hyperplane, which is the whole space S, to belong to an arrangement. If A contains the degenerate hyperplane, then it has no regions because the
complement is empty. However, it still has flats, an intersection semilattice, and faces. The preceding discussion assumes the degenerate hyperplane is not in the arrangement.
Sometimes one wants to allow repeated hyperplanes in the arrangement. We did not consider this possibility in the preceding discussion, but it makes no material difference.
See also
• "Arrangement of hyperplanes", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Edelman, Paul H. (1984), "A partial order on the regions of ${\displaystyle \mathbb {R} ^{n}}$ dissected by hyperplanes", Transactions of the American Mathematical Society, 283 (2): 617–631,
CiteSeerX 10.1.1.308.820, doi:10.2307/1999150, JSTOR 1999150, MR 0737888.
• Meiser, Stefan (1993), "Point location in arrangements of hyperplanes", Information and Computation, 106 (2): 286–303, doi:10.1006/inco.1993.1057, MR 1241314.
• Orlik, Peter; Terao, Hiroaki (1992), Arrangements of Hyperplanes, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 300, Berlin:
Springer-Verlag, doi:10.1007/978-3-662-02772-1, ISBN 978-3-642-08137-8, MR 1217488.
• Stanley, Richard (2011). "3.11 Hyperplane Arrangements". Enumerative Combinatorics. Vol. 1 (2nd ed.). Cambridge University Press. ISBN 978-1107602625.
• Zaslavsky, Thomas (1975), "Facing up to arrangements: face-count formulas for partitions of space by hyperplanes", Memoirs of the American Mathematical Society, 1 (154), Providence, R.I.:
American Mathematical Society, doi:10.1090/memo/0154, MR 0357135. | {"url":"https://www.knowpia.com/knowpedia/Arrangement_of_hyperplanes","timestamp":"2024-11-03T15:59:40Z","content_type":"text/html","content_length":"108226","record_id":"<urn:uuid:33fd42dd-3338-49b4-98af-b5b5505a4e3b>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00817.warc.gz"} |
ECONOMETRICS FOR EC307
ECONOMETRICS FOR EC307 Oriana Bandiera Imran Rasul February, 22, 2000
WARNING: The sole purpose of these notes is to give you a “road map” of the econometric issues and techniques you are likely to see in the class papers. They are BY NO MEANS a substitute for an
econometric book or course. You should use them as a reference to recall concepts you already know. ALWAYS refer to a book (Dougherty or Greene) for a comprehensive analysis. MEMORISING THESE NOTES
WILL NOT HELP YOU IN THE EXAM.
1. The Linear Regression model
2. Inference in the OLS model
3. Problems for the OLS model (heteroscedasticity, autocorrelation)
4. GLS
5. Panel Data
6. Simultaneous Equations (endogeneity bias)
7. 2SLS
8. Multicollinearity
9. Omitted Variable Bias
10.Including Irrelevant Variables
11.Measurement Error
12.Limited Dependent Variables: Probit and Logit
13. Fixed Effects (again)
14.FIML and LIML
15.Non Parametric Estimation
0.1 Types of Econometric Data
The unit of each observation i, can be an individual, family, school, …rm, region,
country etc. This data can be;
(a) Cross sectional : collected for a sample of units at a given moment in time
(b) Time series : collected for a given unit, over several time periods
(c) Panel : collected for a sample of units over a period of time. If this period is
the same for all i, this is a balanced panel, otherwise it is an unbalanced panel.
1 The Linear Regression Model
The underlying idea in a multiple regression model is that there is some relationship
between a ‘dependent’ variable, y, and a set of ‘explanatory’ or ‘independent’
variables, x1, x2 , …, xK;
y = f(x1; x2; :::; xK) (1)
In a sense we are identifying a causal relationship between the x variables on
the RHS and the y on the LHS. The basic assumption of the model is that the
sample observations on y may be expressed as a linear combination of the sample
observations on the explanatory x variables plus a disturbance vector, u;
y = ¯1×1 + ¯2×2 + ::: + ¯KxK + u (2)
More precisely, if we have N observations on each set of y’s and corresponding
x’s so that for observation i;
yi = ¯1x1i + ¯2x2i + ::: + ¯KxKi + ui (3)
The disturbance term re‡ects the fact that no empirical relationship is ever exact,
but on average, a relationship de…ned by (3) is expected to hold, so on average we
expect our disturbances to be zero, hence;
E(ui) = 0 for each i
Graphically we can illustrate what we are trying to do in the simple regression
Fitting a Regression Line
Our linear regression line is the line which “best …ts” the sample data. Heuristically,
this line is that which minimises the distance between itself and the actual
y values observed for each x observation. The gap between the actual y observation
and the …tted, or predicted, y, ^y;from our regression line is called the “residual”;
residual : ei = yi¡ ^yi
i y
The “ordinary least squares” (OLS) method of …tting a regression line thus solves
the following in the simple regression case (with only one explanatory variable plus
the intercept);
We choose
rather than
eibecause the latter may equal zero even though
the …t is very poor because huge positive and negative ei’s cancel out.
1.1 Gauss-Markov Conditions
A1: Each disturbance term on average is equal to zero
E(u) = 0 ) E(y) = X¯, which will be satis…ed automatically if a constant term
is included, since the role of the constant term is to pick up any systematic tendency
in y not accounted for by the explanatory variables in the regression.
A2: Each disturbance term has the same variance (around mean zero)
)(a) each ui distribution has the same variance (homoscedasticity)
)(b) all disturbances are pairwise uncorrelated (no serial correlation)
(b) means that the size of the disturbance term for individual i has no in‡uence
on the size of the disturbance for individual j, for i 6= j.
A3:All the explanatory variables contribute something in explaining the
variation in the data
)the explanatory variables do not form a linearly dependent set.
A4: The explanatory variables are …xed and can be taken as given
X is a non stochastic matrix )so in our sample, the only source of variation is in
the u vector and hence in the y vector, ) cov(X; u) = 0.
A5: The disturbance term follows a normal distribution
u has a multivariate normal distribution )by A1, A2 and A5, u » N(0;¾2I).
Theorem 1 (Gauss Markov): Under the Gauss Markov assumptions A1-A5 the
OLS estimator is BLUE (best linear unbiased estimator).
1.2 Exogenous and Endogenous Variables
We have two types of variable that we consider with respect to a given model. An
exogenous variable is one that is determined outside of the model under consideration,
and so its value is taken as given. An endogenous variable is one whose value is
explained from within the model. In our model above, X is exogenous, and y is
1.3 Unbiasedness, E¢ciency and Consistency
There are three principle properties we look for in any estimator;
² Unbiasedness: on average, we expect the estimated parameter to equal the
true population value of the parameter (in the OLS case, ^¯ OLSis unbiased as
E(^¯OLS) = ¯).
² E¢ciency:an estimator, ^μ 1is said to more e¢cient than another estimator ^μ 2if
V ar(^μ1) > V ar(^μ2)
² Consistency: an estimator, ^μ1is said to be consistent if as the sample size
increases to in…nity, lim
V ar(^μ1) ! 0; and lim
E(^μ) ! μ:
Consistency, Efficiency and Bias
more efficient
Hence if OLS is BLUE, this implies that no other unbiased linear estimator has a
smaller variance (is more e¢cient) than the OLS estimator under the Gauss Markov
1.4 Goodness of Fit
A summary statistic of the goodness of …t of our regression model is given by the R2
statistic. Clearly,
0 · R2 · 1
and as R2 ! 1 the …t of the model is said to improve.
One word of warning is that as the number of explanatory variables, K, increases,
it can be shown that R2 necessarily increases and so can be brought arbitrarily close
to one simply by including more explanatory variables into the regression, even if
these explanatory variables are found to be insigni…cant.
In order to prevent this problem, an “adjusted R2” or
2statistic is often reported,
de…ned as;
(N ¡ 1)R2 ¡ K
N ¡ K ¡ 1
< R2 (5)
This does not necessarily rise with K. It can be shown that the addition of a
new variable to the regression will cause
to rise i¤ its t-statistic is greater than one
(which still does not imply that the variable is signi…cant), therefore a rise in
2 does
not necessarily establish that the speci…cation of the model has improved, although
it is a better indicator of this than R2, just not a perfect one.
1.5 Interpretation of the Estimated Parameters, ^¯
The estimated parameters, ^¯ = (^¯1;^¯2; :::;^¯ K) have a very simple interpretation. Let
us consider one particular element of ^¯ ; ^¯i;
² the standard interpretation of ^¯i is that if xi increases by one unit, then this
causes y to increase by ^¯i units
² if the LHS variables y is in log form (but X is not) then the equation is in
semi-logarithmic form. In this case the interpretation is that, for small ¯i, if xi
increases by one unit this leads to a ¯i% increase in y.
² if all variables are in logarithmic form then the interpretation is that ¯i corresponds
to the elasticity of y with respect to a 1% unit change in xi:
2 Inference in the OLS Model
2.1 t-tests
A hypothesis that we commonly want to test is;
Null hypothesis- H0 : ¯i = 0
Alternative hypothesis- H1:¯i 6= 0
If we accept H0 )the explanatory variable corresponding to ¯i, namely xi, is not
important (or ‘insigni…cant’) in explaining y. We statistically test for this using the
t ¡ test statistic;
t =
H0 ~
t(N ¡ K) (6)
where ^¯ i refers to the OLS estimate of ¯i, and
se(^¯i) =
V ar(^¯ i) =
¾2aii = ¾paii (7)
and N= number of observations, K= number of explanatory variables including
the intercept.
In general to test H0 : ¯i =»¯ we use the test statistic;
t =
i ¡ »¯
se(^¯ i)
H0 ~
t(N¡ K) (8)
This test statistic can be easily computed once we have performed OLS, and will
give us a value for each ¯i. This value is compared against the ‘critical’ value given
by the t(N ¡ K) distribution.
Accept H0 i¤
< tcrit(N ¡ K) (9) Do not accept H0 i¤ ¯¯¯¯¯¯ ^¯ i se(^¯ i) ¯¯¯¯¯¯ > tcrit(N ¡ K) (10)
where tcrit(N ¡ K) is derived from tables for the appropriate signi…cance level,
e.g. 5%, 1%, and is approximately equal to two. A rough and ready calculation that
you can do is to reject H0 i¤ the t-statistic is more than two. If (17) holds, then the
variable xi is said to be ‘signi…cant’. If (18) holds, then the variable xi is said to be
‘insigni…cant’. Either the t-statistic (in absolute value) or the standard error will be
reported with coe¢cient estimates.
2.1.1 Degrees of Freedom
The test statistic for the t-test is t(N ¡K). (N¡ K) is referred to as the ‘degrees
of freedom’ for the test. To get some intuition of where this comes from consider the
simple regression model, y = ® + x¯ + u. Hence the t-test for H0 : ¯ = 0 uses the
test statistic t(N ¡ 2): We have to take o¤ two degrees of freedom from the sample
size because the …rst two observations give us no information on the line of best …t;
Degrees of Freedom
no information perfect relationship
2.1.2 Con…dence Intervals
Consider the simple regression line: ^y = 95:3+ 2:53
t, where t is time, and (0.08) is
the se on the estimate, estimated from a sample size of 23. To test H0 : ¯ = 0 we use
the test statistic;
t =
se(^¯ )
= 31:625
The critical value for this test is tn¡2 = t23;0:05 = 2:069 (5% signi…cance level).
Clearly, as 31.625>2.069 that implies that we reject H0 and conclude that time does
matter. We can construct a con…dence interval from our t-statistic. We know ^¯ is
just an estimate of ¯, so in what range might we reasonably expect the true ¯ to lie
in? For large n, the t distribution is very similar to the normal distribution. Using
this fact we can construct the following con…dence intervals;
To cover 95% of the distribution : ¯ 2 [b § 1:96se(b)] (11)
To cover 99% of the distribution : ¯ 2 [b § 2:56se(b)] 95% Confidence Interval
1.96 ( ) b
b – se b 1.96 ( )
b + se b
2.5% 2.5%
95% Confidence Interval
1.96 ( ) b
b – se b 1.96 ( )
b + se b
2.5% 2.5%
In practice we need to take account of the fact that we don’t know ¾2
u. Hence we
use the t-distribution to form the con…dence intervals;
To cover 95% of the distribution : ¯ 2 [b § tn¡2;0:025:se(b)] (12)
To cover 99% of the distribution : ¯ 2 [b § tn¡2;0:005:se(b)] Above we rejected H0 : ¯ = 0 at the 5% signi…cance level. This is equivalent to
saying that 0 does not lie in the 95% con…dence interval.
2.2 F-tests
Another commonly reported test is that which tests the joint signi…cance of all the
H0 : ¯1 = ¯2 = ::: = ¯K = 0
To test H0 we use the test statistic;
F =
ESS=(K ¡ 1)
RSS=(N ¡ K)
(1 ¡ R2)=(N ¡ K)
H0 ~
F [(K ¡ 1) ; (N ¡ K)] (13)
Again we have to choose an appropriate critical value for F [(K ¡ 1) ; (N ¡ K)] to set the signi…cance level. We can also use the F-test to test whether a group of
explanatory variables are jointly signi…cant;
H0 : ¯K+1 = ¯K+2 = ::: = ¯K+M = 0
So under H0 the regression model is;
y = ® + ¯1×1 + ::: + ¯K + u ! RSSK
Under H1 the regression model is;
y = ® + ¯1×1 + ::: + ¯KxK + ¯K+1xK+1 + ::: + ¯K+MxK+M + u ! RSSM
The test statistic to use is;
F =
(RSSK ¡ RSSM) =(M ¡ K)
RSSM=(N ¡M ¡ 1)
H0 ~
F [(M ¡ K) ; (N ¡M ¡ 1)] (14)
2.3 Type I and Type II Errors
The signi…cance level refers to the probability that you will not accept H0 even though
the true ¯i accords with H0 . Hence the signi…cance level (which is typically 1% or
5%) gives the probability of wrongly rejecting a true null hypothesis (H0 ), known as
a ‘type I error’.
Clearly we want to minimise this as much as possible, but as we decrease the
probability of a type I error (by changing our critical value, tcrit(N¡K), accordingly),
we necessarily increase the probability of a type II error, namely, accepting a false
null hypothesis, H0 . Hence we face a trade-o¤ between the two types of error.
H0 accepted H0 rejected
H0 true type I error
H0 false type II error
It is standard practice in the literature to …x the probability of a type I error, the
signi…cance level, at either 1%, 5% or 10%. The exact probability of a type II error
will then depend on the speci…c test that we are using. A ‘good’ test will give a low
probability of a type II error when the probability of a type I error is low. In that
case the test is said to have ‘high power’.
Type I and Type II Errors
prob type II error
prob type I
error High power test
Standard trade-off
2.4 Dummy Variables
Note that in the last example, one of the x’s was the variable ‘male’. This is an
example of a ‘dummy’ variable which takes the following form;
malei = 1 if individual i is male (15)
0 otherwise
We can de…ne dummies for all such dichotomous variables, e.g. race, seasonals.
2.5 Interaction Variables
Consider a wage equation;
w = ¯1 + ¯2age + ¯3d + u
d = 1 if college graduate
0 otherwise
Now suppose we want to examine the hypothesis that “not only are the salaries
of college graduates higher than those of non college graduates at any given age, but
they rise faster as the individuals get older”. to test this we require the inclusion of
an interaction term;
w = ¯1 + ¯2age + ¯3d + ¯4d:age + u (16)
Interaction Terms
1 b
3 b
2 4 slope= b + b
2 slope= b
non graduates
2.6 Chow Test
It sometimes happens that your sample observations contain two subsamples potentially,
e.g. male and female. Do we run separate or combined regressions? Sometimes
we can combine the subsamples using a dummy variable, e.g. male dummy, or we
can allow for interaction terms that don’t restrict the coe¢cients to be the same for
each subsample;
Chow Test: Combined
Chow Test: Separate Regression
Suppose you have two subsamples: A and B ! RSS UA; UB from the separate
If you run a pooled regression: P ! RSS UP = UP
A + UP
B from the pooled
regression (where UP
i is the contribution to the RSS from subsample i:)
The subsample regressions must …t data at least as well as the pooled sample
) UA · UP
A and UB · UP
B and therefore (UA + UB) · UP
(UA + UB) = UP only in the case when there is no need to split the samples.
There is a price to pay for the improved …t using subsamples – we lose degrees
of freedom as (k + 1) extra parameters are estimated (k =#explanatory variables).
Hence we have (2k + 2) parameters to estimate (k explanatory variables and ¾2
A and
B): Is the improvement in …t signi…cant? We use the following F-statistic;
ChowTest : F =
Improvement in …t/dof used up
Unexplained/dof remaining
UP ¡ UA ¡ UB¢
= (k + 1)
(UA + UB) = (n ¡ 2k ¡ 2)
H0 ~
F [(k + 1) ; (n ¡ 2k ¡ 2)] 3 Problems for the OLS Model
The two extensions to the OLSmodel that we shall consider both arise froma failure of
all the GaussMarkov assumptions to hold. In particular, the assumption of ‘spherical’
A2:V ar(u) = E(uu0) = ¾2I no longer holds in each of these cases.
3.1 Heteroscedasticity
When A2 holds, the disturbance term is said to be homoscedastic, i.e. each observation
i has the same variability in its disturbance term, ¾2:When this is not the case,
u is said to be “heteroscedastic”. This can be illustrated graphically in the simple
regression case;
Clearly the disturbance terms appear to be increasing with x here. This case
of heteroscedasticity still has that the disturbances are pairwise uncorrelated. The
consequences of heteroscedasticity are;
² a; b from OLS (y = ® + ¯x + u) are still unbiased and consistent
² a; b are no longer BLUE
² se(a); se(b) are invalid (as they are constructed under the incorrect assumption
of homoscedasticity).
3.1.1 The Gold…eld-Quandt Test
H0 : V ar(ui) = ¾2 for all i (homoscedasticity)
H1 : V ar(ui) = f(xi), f0(xi) ? 0 (some functional relationship with x)
For example, suppose we believe V ar(ui) is increasing in xi;
Goldfield-Quandt Test
n’ n-2n’ n’
Test Statistic : F =
H0 ~
F [(n0 ¡ k ¡ 1) ; (n0 ¡ k ¡ 1)] (18)
3.1.2 What Can you Do About Heteroscedasticity?
Suppose V ar(ui) = ¾2i
. If we know ¾2i
for all i we can eliminate the heteroscedasticity
by dividing through by ¾ifor each observation so that the transformed model becomes;
+ ¯
= 0; V ar
= 1 i.e. homoscedatic errors
So running this transformed model will give improve e¢cient estimates. Note
that because the constant term is
then there will be a di¤erent constant term
estimated for each individual. The intuition behind this is that those observations
with the smallest ¾2i
will be most useful for locating the true regression line. We take
advantage by using weighted least squares in the transformed model, which gives
the greatest weights to the highest quality observations (lowest ¾2i
), whereas OLS is
ine¢cient because it gives all observations an equal weighting.
3.2 Autocorrelation
Suppose now that i refers to a time period, not an observation on an individual
entity. Then in many economic applications we may expect to see a relationship
between disturbances in adjacent time periods. This can again be illustrated in the
case of the simple regression model;
A common form of such disturbances is the “autocorrelation” (AR) structure;
AR(1):ut = ½ut¡1 + “t; where “t » N(0; ¾2″
I), and j½j < 1 (19) This is denoted AR(1) because of the presence of one lag on the RHS. This means that for ½ > 0, if we experience a large disturbance in period t, then we expect a
similarly large disturbance (dampened by a factor ½) in the subsequent period. If
j½j > 1, then this would imply that the disturbances diverge over time, which is not
typically observed in economic data.
It can be shown that the var-covariance structure in the case of AR(1) disturbances
is such that E(u) = 0 still but;
V ar(u) =
var(u1) cov(u1; u2) ¢ ¢¢ cov(u1; uN)
cov(u2; u1) var(u2) ¢ ¢¢ cov(u2; uN)
… …
cov(uN; u1) cov(uN; u1) ¢ ¢¢ var(uN)
= ¾2
1 ½ ½2 ¢ ¢¢ ½N¡1
½ 1 ½ … …
½2 ½ 1 … ½2
… …
… ½
½N¡1 ¢ ¢¢ ½2 ½ 1
where ¾2 =
1 ¡ ½2
The consequences of autocorrelation are;
² regression coe¢cients remain unbiased
² estimates become “too e¢cient”, se’s are wrongly calculated being biased downwards.
3.2.1 The Durbin-Watson Test
We can run tests to test for the existence of autocorrelation. The most commonly
reported test is the Durbin-Watson test statistic, which is calculated from the OLS
residuals. To derive the test statistic note that for the AR(1) model;
ut = ½ut¡1 + “t
so ut depends on its own lagged values. If we just run an OLS regression on this
cov(et; et¡1)
where et =^ut (…tted residuals) (21)
E(etet¡1) ¡ E(et)E(et¡1)
¡1) ¡ [E(et)]2
As t ! 1; E(et) = E(et¡1) = 0 so;
The DW statistic is based on this and is calculated as;
(et ¡ et¡1)2
‘ 2(1 ¡ r), where r = corr(et; et¡1) (23)
The critical values for this test are reported in Johnston (1991). In large samples;
DW ! (2 ¡ 2½)
H0 : no autocorrelation
H1 : AR(1)
) if ½ = 0 ) DW ¼ 2: Hence the further DW is from 2 (in either direction), the
more likely it is that AR(1) is present. The problem with this test is that the actual
critical values depend on the explanatory variables in the regression, so in tables,
only an upper and lower bound on critical values can be reported. This means that
there is a region of values for the DW statistic where the test is indeterminate;
The Durbin-Watson Test
0 dl du 2 4
H1 H0
4 Generalised Least Squares
Both heteroscedastic and autocorrelated errors imply that A2 no longer holds . In
either of these cases we can write the var-covariance matrix of the error terms as;
V ar(u) = E(uu0) = 6=
¾2I (24)
This violates the Gauss Markov assumption A2 and so OLS will no longer will
be appropriate. The best estimator for ¯ in this model;
y = X¯ + u; V ar(u) = is
the “generalised least squares” (GLS) estimator.
Proposition 2 The GLS estimator is
GLS= (X0¡
1y (25)
E(^¯ GLS) = ¯ so GLS is still unbiased,
V ar(^¯ GLS) = (X0¡
Under normality of u (so A5 still holds),
GLS» N(¯; (X0¡
1X)¡1) (26)
5 Panel Data
When our data contain repeated observations on each individual, say, the resulting
panel data opens up a number of possibilities that are not available in a single cross
section. In particular, the opportunity to compare the same individual over time
allows us the possibility of using that individual as his or her own control. this
enables us to get closer towards an ideal experimental situation..
5.1 An Introduction to Panel Data
An increasing number of developing countries are collecting survey data, usually
across households, for a period of time. amongst the best known such panel data sets
are the ICRISAT data set from India and the LSMS data sets from the World Bank.
There are two dimensions to panel data;
yit; i = 1:::N; t = 1:::T (27)
The standard linear panel data model is of the form;
yit = ¯0xit + “it where “it » N(0; V ) (28)
Due to the two dimensions present in the data there is likely to be serial correlation
present in “, i.e. the disturbances are not all independent of each other. The simplest
case to model is the “one-factor” model;
“it = ®i + Àit (29)
This treats as negligible the error correlations across individuals, but focuses
instead on the relationship between across individuals within the same time period.
In the one-factor model;
(A) ®’s and À’s are independent across all i and t
(B) Àit » N(0; ¾2
(C) ®i » N(0; ¾2
The last two factors )homoscedastic disturbances..
Hence, one interpretation of ®i is that it picks up idiosyncratic disturbances of
each individual, e.g. expensive tastes. Of course, the observation i could refer to a
particular household, country etc. Hence;
cov(“it; “is) = cov(®i + Àit; ®i + Àis) 6= 0 (30)
so that the errors in the panel data model;
yit = ¯0xit + “it = yit = ¯0xit + ®i + Àit (31)
are serially correlated. This violates the Gauss Markov assumption;
A2:V ar(“) =¾2I which implies that;
² each disturbance has the same variance( still true here, (B), (C) above unsure
homoscedastic errors)
² all disturbances are pairwise uncorrelated (this is violated here)
Hence, using OLS will lead to inconsistent estimates of ¯. What solutions can be
5.2 Method 1: GLS/Random E¤ects
In the simplest linear regression model we have seen that when A2 does not hold,
because of heteroscedasticity or autocorrelation, we can use GLS to obtain consistent
estimates of ¯. However, recall that to operationalise GLS we need to calculate the
inverse of the var-covariance matrix of the disturbances, V ar(“) = .
In this case
this will be an (NT £ NT) matrix which is potentially huge so even with modern
day computing power, it is often not plausible to use GLS.
5.3 Method 2: Fixed E¤ects
The root cause of our problem is the presence of the idiosyncratic factor ®i in the
error which makes OLS invalid. One way to transform the regression equation to
remove the ®i’s by removing the time average of each variable. This is the “…xed
e¤ects” transformation.
To estimate (¯;®1; ®2::::; ®N) using OLS we use a two step procedure;
Step One :Take the …xed e¤ects transformation;
i:= (xit¡
_xi:)¯0 + (vit¡
i:) (32)
i:refer to time averages;
vit (33)
Note that as ®i is the same for all t (is a …xed e¤ect),
i:= ®i so
®i¡ _®i:
= 0 (34)
Doing OLS on (7) gets us consistent estimates of ¯, denoted ^¯FE.
Step Two :To recover (®1; ®2::::; ®N);
i: ¡
^ ¯0FE
_xi: (35)
5.3.1 Fixed E¤ects and Random E¤ects
yit = ®i + x0
it¯ + “it
“it = ®i + Àit
Fixed e¤ects estimates^¯ conditional on the ®0s (just as we normally do estimation
conditional on the x’s). Random e¤ects treats each ®i as an observation arising from
some underlying distribution. The FE parameters are thus (¯;®1; :::;®N;¾2
º ) and
the RE parameters are
. Note the signi…cantly di¤erent interpretation of
these models.
6 Simultaneous Equations (Endogeneity Bias)
Here we investigate violation of the fourth G-M condition;
A4: X is a nonstochastic matrix ) in our sample, the only source of variation is in
u and hence y, therefore, cov(X; u) = 0:
Consider the following Keynesian income determination model;
Ct = ® + ¯Yt + ut (36)
Yt = Ct + It
Yt =
1 ¡ ¯
1 ¡ ¯
1 ¡ ¯
The 1
1¡¯ term is the multiplier. the important point to note is that Yt depends on
ut, the disturbance term from the consumption equation. clearly then Yt is correlated
with the disturbance term in (1) which violates the Gauss Markov assumption. If we
try and estimate ® and ¯ from (1) our estimates will be biased and the se’s will be
invalid. In most cases, OLS will also be inconsistent.
Fortunately, the problem of simultaneous equations bias can often be mitigated
by replacing OLS by a di¤erent estimation technique. These fall into two types;
² single equation estimation
² systems equation estimation
The latter method is more e¢cient, but it is also harder to implement.
6.1 Instrumental Variables
The problems arise her because cov(X; u) 6= 0. Hence we aim to …nd an “instrument”
for X;W with two desirable properties;
² W should be (highly) correlated with what it is instrumenting for, i.e. cov(W;X) 6=
² W should not be correlated with the disturbance term, i.e. cov(W; u) = 0
In this case, the model itself provides us with a suitable instrument for Yt. It
is correlated with Yt through the identity (2), and it cannot be correlated with the
disturbance term because it is an exogenous variable. the estimator we use is the
instrumental variables estimator de…ned as;
bIV =
Cov(It; Yt)
As a general rule, if an equation in a simultaneous equations model is exactly
identi…ed, IV will yield exactly the same coe¢cient estimates as ILS if the exogenous
variables in the model are used as instruments. however, any variable that satis…es
the two conditions can be a potential instrument for X.
6.2 Underidenti…cation
Consider the following supply and demand model;
yd = ® + ¯p + °x + ud (39)
ys = ± + “p + us
where x= per capita income, assumed exogenous. (p; y) are the endogenous
variables, determined by the market clearing process. When the market clears,
yd = ys = y: Solving for the RF equations;
p =
® ¡ ±
” ¡ ¯
” ¡ ¯
ud ¡ us
” ¡ ¯
y =
®” ¡ ¯±
” ¡ ¯
” ¡ ¯
“ud ¡ ¯us
” ¡ ¯
p depends on ud so …tting OLS to (9) would lead to biased and inconsistent
estimates. Similarly for (10). We rewrite the RF equations as;
p = ®0 + ¯0x + Àp (41)
y = ±0 + “0x + ºy
6.2.1 IV
x is the only exogenous variable in the model. We should be able to use it to instrument
for p. This works in the supply equation but not in the demand equation
where x already enters. Hence we can only obtain estimates of the supply equation
6.3 Overidenti…cation
Consider the following supply and demand equation system where demand is also a
function of time (perhaps due to evolving habit formation);
ydt = ® + ¯pt + °xt + ½t + udt (42)
yst = ± + “pt + ust
6.3.1 IV
There are two exogenous variables in the model, xt and t both already in the demand
equation so neither can be used to instrument for pt in that equation. Hence the
demand equation is underidenti…ed.
The supply equation is overidenti…ed because there are more exogenous variables
that can be used as instruments than we actually need. We can use either xt or t as
instruments for pt: They will give us di¤erent estimates of ± and “, although both will
be consistent. As a …rst pass, we might prefer to use the instrument which is more
correlated with pt. however, the optimal method to use is “two stage least squares”
where we use a linear combination of potential instruments.
7 Two Stage Least Squares (2SLS)
We have seen how the supply equation is overidenti…ed because both xt and t were
available as instruments for pt. The optimal estimation method in this case is 2SLS
where we use a linear combination of the potential instruments;
zt = ho + h1xt + h3t (43)
We want an instrument which is as highly correlated as possible with pt so we
want to maximise corr( pt; zt): We have already done this, when we estimated the
RF equations (24), (25). In (26) we found ^ptfrom a linear combination of xt and t.
When we ran that OLS regression we are doing three things at the same time;
1 minimising the sum of squares of residuals in (24)
2 maximising the value of R2 (goodness of …t)
3 maximising the correlation between the predicted and actual values of pt , i.e.
corr( pt; zt)
It is 3 that we are doing here. Hence we have a two stage procedure;
² regress RF equations and calculate the predicted values using endogenous variables
² use the predicted values as instruments for the actual values
This procedure produces consistent estimates. note that when an equation is
exactly identi…ed, 2SLS o¤ers no advantage over ILS or standard IV.
7.1 An Overidenti…cation Test
The most intuitive way to test the validity of the instruments, namely whether they
conform to the conditions;
(A) cov(xi; zi) 6= 0
(B) cov(zi; ui) = 0:
is to use the test statistic;
NR2 » Â2(K0 ¡ K) (44)
where N=sample size, K0=number of instruments, K=number of explanatory
variables. R2 is the R2 from the following regression;
y ¡ X ^¯=W+ u (45)
) R2 is from the regression e =W+ u where W is the set of K0 instruments.
This procedure tells us whether the instruments play a direct role in determining
y, not just an indirect role through the predicted x’s, ^X. If the test fails, one or more
of the instruments are invalid and ought to be included in the explanation of y.
8 Multicollinearity
Multicollinearity is the problem of when an approximately linear relationship among
the explanatory variables leads to unreliable regression estimates, i.e. because two
explanatory variables are highly correlated, you will not be able to precisely estimate
the contribution from each variable. As the standard errors rise there is a greater
probability of incorrectly …nding the variable not to be signi…cant in the regression.
All regression su¤er multicollinearity to some degree, but some more so than
others, especially in time series data. Symptoms of multicollinearity are;
² small changes in the data can produce wide swings in parameter estimates
² coe¢cients may have high se’s and low signi…cance levels despite being jointly
highly signi…cant and the R2 is high
² coe¢cients have the wrong sign or implausible magnitude
8.1 What Can You do About It?
There are two responses – direct attempts to improve the conditions for the reliability
of regression estimates, or the use of extraneus information.
8.1.1 Direct Measures
² increase the number of observations, e.g. switch from annual to quarterly data
(problem is that this might make measurement errors or autocorrelation worst)
² reduce ¾2
u by including more explanatory variables
8.1.2 Extraneous Information
² theoretical restrictions, e.g. in a Cobb Douglas production function, Y =
AK®L¯ertº we may impose the restriction of CRTS ) ® + ¯ = 1:
² empirical estimates so use previous studies to impose a restriction on a particular
parameter, e.g. intertemporal elasticity of substitution ¼ 0.3
9 Omitted Variables Bias
Suppose a dependent variable depends on two variables x1 and x2 according to the
y = ® + ¯1×1 + ¯2×2 + u (46)
but you omit x2 from the regression and run;
y = ® + ¯1×1 + u (47)
Your …tted regression is;
^y= a + b1x ; b1 =
cov(x1; y)
If (1) is the true DGP then it can be shown that;
E(b1) = E
cov(x1; y)
= ¯1 + ¯2
cov(x1; x2)
and so your estimate will now be biased. This is “omitted variables bias”. note
that this bias can go in either direction depending on the signs of ¯2 and cov(x1; x2).
This bias term arises because x1is being asked to also pick up the omitted e¤ects of
Omitted Variables Bias
x1 x2
Direct effect of x1 holding x2 constant True effect of x2
Apparent effect of x1 picking up x2
Only in the special case where cov(x1; x2) = 0 will omitted variables bias not
occur. A further consequence of omitted variables bias is that the se’s and tests
become invalid.
10 Including Irrelevant Variables
Suppose that the DGP is given by;
y = ® + ¯1×1 + u
but the econometrician estimates;
y = ® + ¯1×1 + ¯2×2 + u
and you estimate b1 using (4) instead of b1 = cov(x1;y)
var(x1) . In this case;
E(b1) = ¯1 (50)
Hence our estimate is still unbiased but in general it will be ine¢cient because it
does not exploit the information that ¯2 = 0: This is demonstrated below;
Irrelevant Variables
var( )
cov( , )
estimated using
1 x
x y
not exploit info that 0
estimated using(4) but does
b =
The level of ine¢ciency rises as more explanatory variables are added, and the
closer is the correlation coe¢cient to §1: Only in the special case where corr(x1; x2) =
0 is there no loss of e¢ciency.
11 Measurement Error
Economic variables are often measured with error, e.g. the correlation between actual
years of schooling and reported years of schooling is typically around 0.9 in most
11.1 Measurement Error in the Explanatory Variables
Suppose we have the relationship;
y = ® + ¯z + À; º~(0; ¾2
º ) (51)
z cannot be measured accurately, let x denote its true value. so for each observation
xi = zi + wi; w~(0; ¾2
w); w; À independent (52)
Substituting (10) into (9);
y = ® + ¯x + º ¡ ¯w (53)
Denote by u the composite error in this equation so
u = º ¡ ¯w (54)
Hence (11) can be written as;
y = ® + ¯x + u (55)
We regress y on x (rather than what we really wanted to do which was to regress
it on z) and obtain our OLS estimator;
b =
cov(x; y)
= ¯ +
cov(x; u)
Note that x and u are negatively correlated as x depends positively on w, u
depends negatively on w so our estimator is downwards biased. In fact;
plim b = ¯ + ¡¯¾2
+ ¾2
Hence we underestimate ¯ by;
+ ¾2
The implications of this are that the bigger the population variance of the measurement
error, ¾2
w relative to the population variance, ¾2
w; the bigger will be the
downwards bias. Graphically the e¤ects of measurement error are illustrated below;
Measurement Error
y True line
Regression line with ME
The standard solution to this problem is to …nd an instrument for the variable that
is measured with error. This instrument will be correlated with z and uncorrelated
with w: If more than one instrument is available then use 2SLS.
11.2 Measurement Error in the Dependent Variable
On the whole this does not matter as much. This is undesirable because it will tend to
decrease the precision of the estimates but it will not cause the estimates to become
unbiased. Let the true dependent variable be q so that;
q = ® + ¯x + º
If q is measured with error such that;
yi = qi + ri
y ¡ r = ® + ¯x + (º + r) = ® + ¯x + u (57)
The only di¤erence from the usual regression is that the disturbance term has
two components. The explanatory variables x have not been a¤ected, the estimates
remain unbiased.
12. LIMITED DEPENDENT VARIABLES.
12.1 PROBIT AND LOGIT
In many cases the dependent variable we want to explain is discrete rather than
continuous. In these notes we’ll talk about discrete variables that can only take two
values. We have seen many of these during the course, example include the type of
tenancy contract (fixed rent or sharecropping), the decision to send kids to school (yes
or no), to plant trees (again: yes or no). Without loss of generality we can give the
dependent variables values of 1 and 0, depending on whether the event occurs or not
(ex.: 1 if the farmer plants trees, 0 if he doesn’t). Assume that the variable is 1 with
probability p and 0 with probability (1-p). Let’s call our dependent variable y. The
expected value of y is p*1+(1-p)*0= p. So p is the probability that the event will
occur. The theory will suggest a set of explanatory variables X that affect p. Then P is
a function of X and of unknown parameters B, which measure the effect of X on P. B
are the parameters we want to estimate. In math we write:
P=Probability (Y=1) = F(XB) (1)
How does F look like?
The simplest thing is to assume F is linear, we then have:
P= XB + e (2)
where e is, as usual, a stochastic error term. This model is called linear probability
model and can be estimated like the standard linear regression you already know.
SO WHAT? Problem is that the linear probability model does not take into account
the fact that the dependent variable is a probability and, as such, should always be
between 0 and 1. Estimation of (2) can give you predicted values larger than 1 (or
smaller than 0) which make very little sense. The trick is then to use a function which
only takes values between 0 and 1. Good candidates are probability distributions like
the normal, which, by definitions, are bounded between 0 and 1. If we assume F(.) is a
normal distribution with mean 0 and variance 1; (1) would look like:
( )
P e du XB
XB = = F –
-¥ ò p
This is the probit model.
If we assume F(.) is the standard logistic distribution, (1) would look like:
( )
P XB
= L
This is called a logit model.
The results obtained with probit and logit are generally very similar1, so you shouldn’t
worry too much about which one is used.
Probits and logits cannot be estimated by least squares. Instead we use a method
called maximum likelihood (ML). Simply put, this method consists in finding the
parameters of the distribution that maximise the likelihood that our data comes from
precisely that distribution. You don’t have to be able to maximise the likelihood
yourself, generally only computers can.
12.2 THINGS YOU NEED TO KNOW TO READ PROBIT AND LOGIT
a. The estimated coefficients don’t mean much.
Pretty depressing at first. To see what this means it is better to think about OLS first.
In an OLS regression coefficients have an intuitive interpretation. Say you estimate
consumption as a function of income and the income coefficient is 0.8. This means
that if income increases by 100, consumption will increase by 80. Now say you use a
probit to estimate the probability of going to college as a function of income and you
get 0.8 again. Does it mean that when income increases by 100 the probability
increases by 80? Obviously not! If you look closely at the model you see that the
coefficients are inside a complicated function and to find the marginal effect of a
variable you need to take the derivative of the function with respect to that variable.
This will depend on the coefficient of the variable but also on the coefficients and on
the sample values of all the other variables in the regression. Clearly, you don’t have
to do this in the exam. Generally authors who estimate probit models will kindly
provide you with three numbers: the coefficient, the t-statistic (or standard error) and
the marginal effect (typically in brackets, below the t-stat).
b. T-stats are t-stats no matter what
That is, to see whether a coefficient is statistically significant from zero at the 5%
level compare the t-stat (=coefficient/standard error) to 1.96. If the t-stat is LARGER
than 1.96 the coefficient is significant.
You might encounter something called ‘robust’ or ‘white’ standard errors2, don’t
worry the critical values for the t-stat are always the same.
c. It’s rare to see Instrumental Variables with probit models.
You can do a two stage estimation similar to the IV with OLS but the standard errors
come out wrong and it is a bit complicated to correct them. That’s why you rarely see
1 The coefficients B will differ because they come from different functions. To compare them one has
to multiply the probit coefficients times 1.6. This is quite irrelevant for the purpose of this course.
2 These are sometimes used to correct for heteroscedasticity.
it. If there is a problem of endogeneity, people tend to use linear probability models
(with which you can do IV)
d. Mis-specification
Omitted variables, heteroscedasticity, measurement error and endogeneity all create
problems to probit and logit estimates. The consequences of mis-specification are
even more serious than in the OLS. For instance if there is an omitted variable, the
coefficients on the other variables are biased even if these are not correlated with the
omitted variable itself. If the disturbances are heteroscedastic the probit and logit
estimators are inconsistent.
13. FIXED EFFECT ESTIMATION
Assume you want to estimate the effect of contractual structure on farm productivity.
You expect plots cultivated under fixed rent contracts to be more productive than
plots which are sharecropped (why?). Say you have data on 1000 plots, you run your
regression and find that effectively the coefficient on the dummy variable that
identifies fixed rent contract is positive and significant. Unfortunately that’s not
enough to prove your theory right (actually nothing can “prove” a theory right but
here we’re really far from it!). I might argue that behind all of this there is an omitted
variable that affects both contracts and productivity. I say that very good (smart,
educated, entrepreneurial, whatever) farmers are obviously more productive and that,
at the same time, they prefer fixed rent contract (they are confident they’d do a good
job so they are willing to take all the risk). So the results might have nothing to do
with your theory, they are just a fruit of farmers’ heterogeneity. If you can’t find a
proxy for farmers’ ability, what can you do?
There is a way out if each farmer cultivates more than one plot. The way out, as you
might imagine, is fixed effect estimation. Say there are 200 farmers, cultivating 50
plots each. To do fixed effect you create 199 dummies, which identify each farmer
and plug them in the regression with the other explanatory variable. For instance the
first dummy is equal to 1 if the plot is tilled by Mr. Blue, 0 otherwise; the second
dummy is equal to 1 if the plot is tilled by Mr Pink, 0 otherwise and so on. This
allows you to estimate the effect of contracts on productivity, conditional on farmers’
identity, which controls for all farmers’ unobserved characteristics. Now you’re
checking whether given the farmer’s identity (and hence his skills etc) fixed rent
contracts foster productivity. If the coefficient on the contract variable is still positive
and significant now you know it was not so because of farmers’ heterogeneity. 3
14. FIML & LIML
In some cases we find a model within a larger model. For example in the group
lending paper (Pitt et al.) the Authors wanted to explain some economic outcomes
(expenditure, kids’ schooling etc.) as a function of the amount of credit received
through group lending. At the same time the amount of credit received was itself a
function of other variables.
To keep matters simple say you want to estimate y1 as a function of (x,q) and y2 as a
function of (z,l, y1). Here x and z are the independent variables and q and l are the
3 If you have understood how this work it should be trivial to see that you can’t do fixed effects if each
farmer only cultivates one plot.
parameters to be estimated. Notice that y1 only depends on q while y2 depends on l
AND q (through y1). You have a choice:
– with full information maximum likelihood (FIML) estimation you form the
joint distribution f(y1, y2 | x, z, l,q) and then maximise the full likelihood function.
– With limited information maximum likelihood (LIML) estimation you first
estimate q (since y1 is a function of q only) and then you use the estimated value
of q to estimate l. This method is very convenient because you don’t have to form
the joint distribution and maximise the full likelihood function (which is
computationally very complex), rather you get the distribution f(y1| x, q), choose q
to maximise the likelihood and then employ that value of q (let’s call it q*) to
form the second distribution f(y2| z, l, (x, q*)) and the likelihood that you have to
maximise only with respect to l.
15. NON PARAMETRIC ESTIMATION
15.1 Description
Note that when we want to estimate the relationship between, say, X and Y we
generally assume that this relationship has a specific functional form. We have often
seen LINEAR relationships of the form Y=a+bX. If we think that’s a good
specification (like in the keynesian consumption function) all we need to do is to find
the appropriate values of a and b, i.e. those such that the predicted Y is as close as
possible to the observed Y, given X. This kind of regression is called
PARAMETRIC.4 In some cases we have no reason to think that the relationship
between X and Y should take a specific functional form and we want to learn that
from the data, that is we want to estimate the “shape” of the relationship between X
and Y. Since we do not estimate parameters of a specific functional form (remember
we haven’t specified one) this kind of analysis is called NON-PARAMETRIC.
Intuitively, asking the data to tell us about the shape, as opposed to the parameters of
a given shape, is much more demanding. It follows that non-parametric estimation is
feasible only if we have a large number of data points.
Remember that a regression of Y on X is the conditional expectation of Y given X,
that is E(Y|X). With OLS we assume that the regression function is linear. Under non
parametric estimation we assume nothing. How can we get E(Y|X)? If we had infinite
data points, E(Y|X) would simply be the average of all the values of Y for a given X.
If (as it is always the case!) the data set contains a finite, no matter how large, number
of Xs, things gets messier. The idea, however, is similar. Non parametric estimation
does the following:
1. divide up the range of X into a (evenly spaced) grid of points (100 in the Deaton’s
4 Since we estimates parameters given an assumed functional form, any kind of regression where the
functional form is specified, even if it’s not a linear function, is called parametric. During the course
we have seen many different functional forms: for instance the Strauss and Thomas paper assumed that
productivity was a concave function of calories intake.
2. choose a symmetric interval around each X: the size of the interval is called
3. choose a function that gives weights to different data points within the interval such
that points farther away from the central X gets less weight and that points just inside
and just outside the band have zero weight. (remember that we are trying to
approximate the procedure we’d follow if we had infinite Xs: in that case we’d take
one X at the time, here we take a bunch of them but give more weight to the one in
the middle). This weighting function is called a “kernel” function and comes in a
couple of different forms.(The one in the Deaton’s paper is a “quartic” kernel
function). Then do either:
4a KERNEL ESTIMATION: For each bandwidth, compute the average all the Ys that
correspond to the Xs within the bandwidth, giving less weight to those corresponding
to Xs that are farther away (as dictated by the kernel function specified above).
4b SMOOTH LOCAL REGRESSION: For each bandwidth run a weighted OLS
regression of Y on X using the weights defined by the kernel function defined above.
15.2 PROS & CONS
The obvious advantage of non parametric vs. parametric estimation is that with the
former we let the data choose the shape of the relationship. Unfortunately, non
parametric estimation is both more flexible and more costly. In particular:
1. We need a very big sample (otherwise you need to specify very wide bandwidth,
which result in very unprecise estimates, which are practically useless)
2. It is quite difficult to condition Y on more than one variable (you’d need many
many more data points)
3. Non parametric estimation cannot deal with simultaneity (endogeneity),
measurement error, selection bias etc.
5 You have to choose the size of the bandwidth bearing in mind that estimates will be more precise but
also more variable the smaller is the bandwidth (if it were zero we’d know the “true” density at each
point, but this makes sense only if the data were infinite) | {"url":"https://karyatulisilmiah.com/econometrics-for-ec307/","timestamp":"2024-11-07T06:13:36Z","content_type":"text/html","content_length":"98021","record_id":"<urn:uuid:5b79f008-5ef0-4a3a-b5e4-4b8251c13fd5>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00730.warc.gz"} |
Add And Subtract Rational Numbers Worksheets [PDF]: Algebra 1 Math
How Will This Worksheet on "Add and Subtract Rational Numbers" Benefit Your Students' Learning?
• These worksheets are very useful for students to understand adding and subtracting rational numbers.
• The student can easily solve the various problems on rational numbers addition and subtraction.
• The worksheet can help the student to understand practical situations and strengthen problem-solving abilities.
• These worksheets can be helpful in offering appropriate challenges for each student.
• This worksheet can help the student in self studies and autonomy.
How to Add and Subtract the Integers?
Here's steps of adding and subtracting rational numbers:
`1`. Find a Common Denominator
`2`. Rewrite Fractions with the Common Denominator
`3`. Add the Fractions
`4`. Simplify the Result
`1`. Find a Common Denominator
`2`. Rewrite Fractions with the Common Denominator
`3`. Subtract the Fractions
Q. Add.$ewline$$\frac{9}{10} + \frac{3}{10}=$_____ | {"url":"https://www.bytelearn.com/math-algebra-1/worksheet/add-and-subtract-rational-numbers","timestamp":"2024-11-12T08:43:24Z","content_type":"text/html","content_length":"131764","record_id":"<urn:uuid:8a61f8a8-d48f-4840-85f8-6c8eeff3d266>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00200.warc.gz"} |
Gravitational Field: Particle Generated Force
• Thread starter N9924734063
• Start date
In summary, the gravitational field is a region around a massive object where another object with mass experiences a force of attraction. It is measured by determining the force of gravity on a test
mass and is generated by particles with mass. The strength of the gravitational field is directly proportional to the force of attraction and decreases with distance due to the inverse square law.
is gravitational field made with any particles
Staff Emeritus
Science Advisor
Gold Member
Please clarify what you're asking, and write using standard grammar and punctuation.
The gravitational field is described by spacetime curvature, no particles show up in this context.
FAQ: Gravitational Field: Particle Generated Force
What is gravitational field?
The gravitational field is a region in space around a massive object where another object with mass experiences a force of attraction. It is a fundamental concept in physics that explains the force
of gravity.
How is gravitational field measured?
Gravitational field is measured by determining the force of gravity acting on a test mass placed in the field. This is usually done using the equation F = G(m1m2)/r^2, where F is the force of
gravity, G is the gravitational constant, m1 and m2 are the masses of the two objects, and r is the distance between them.
How does a particle generate a gravitational field?
A particle generates a gravitational field because it has mass. According to Newton's law of gravitation, any object with mass will exert a force of attraction on another object with mass. This force
of attraction is what we call the gravitational field.
What is the relationship between gravitational field and particle generated force?
The relationship between gravitational field and particle generated force is that the gravitational field is the space in which the particle's force of attraction is felt. The strength of the
gravitational field is directly proportional to the force of attraction between two objects with mass.
How does the strength of a gravitational field change with distance?
The strength of a gravitational field decreases with distance from the source object. This is because the force of gravity is inversely proportional to the square of the distance between two objects.
As the distance between two objects increases, the gravitational force between them decreases. | {"url":"https://www.physicsforums.com/threads/gravitational-field-particle-generated-force.506856/","timestamp":"2024-11-02T02:47:20Z","content_type":"text/html","content_length":"82244","record_id":"<urn:uuid:f64cdbf3-ecd8-40bd-a298-73ef82c583e6>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00092.warc.gz"} |
Multiplication X 8 Worksheets
Mathematics, particularly multiplication, forms the foundation of many scholastic self-controls and real-world applications. Yet, for numerous students, mastering multiplication can pose a challenge.
To address this obstacle, teachers and moms and dads have embraced an effective tool: Multiplication X 8 Worksheets.
Intro to Multiplication X 8 Worksheets
Multiplication X 8 Worksheets
Multiplication X 8 Worksheets -
Try these practice activities to help your students master these facts Multiplication by 8s These printable learning activities feature 8 as a factor in basic multiplication Multiplication by 9s When
you re teaching students to multiply only by the number nine use these printable worksheets
Welcome to The Multiplying 1 to 12 by 8 100 Questions A Math Worksheet from the Multiplication Worksheets Page at Math Drills This math worksheet was created or last revised on 2021 02 19 and has
been viewed 1 269 times this week and 1 554 times this month
Significance of Multiplication Technique Recognizing multiplication is pivotal, laying a solid foundation for innovative mathematical concepts. Multiplication X 8 Worksheets supply structured and
targeted practice, cultivating a much deeper comprehension of this basic arithmetic procedure.
Advancement of Multiplication X 8 Worksheets
Multiplication Worksheets Numbers 1 Through 12 Mamas Learning Corner
Multiplication Worksheets Numbers 1 Through 12 Mamas Learning Corner
Our multiplication worksheets are free to download easy to use and very flexible These multiplication worksheets are a great resource for children in Kindergarten 1st Grade 2nd Grade 3rd Grade 4th
Grade and 5th Grade Click here for a Detailed Description of all the Multiplication Worksheets Quick Link for All Multiplication Worksheets
Our multiplication worksheets start with the basic multiplication facts and progress to multiplying large numbers in columns We emphasize mental multiplication exercises to improve numeracy skills
Choose your grade topic Grade 2 multiplication worksheets Grade 3 multiplication worksheets Grade 4 mental multiplication worksheets
From traditional pen-and-paper workouts to digitized interactive layouts, Multiplication X 8 Worksheets have actually advanced, accommodating diverse knowing styles and preferences.
Sorts Of Multiplication X 8 Worksheets
Fundamental Multiplication Sheets Simple workouts focusing on multiplication tables, aiding students construct a solid math base.
Word Trouble Worksheets
Real-life situations integrated right into problems, improving vital reasoning and application abilities.
Timed Multiplication Drills Tests developed to enhance rate and accuracy, helping in rapid mental math.
Benefits of Using Multiplication X 8 Worksheets
Multiplication worksheets For Grade 3 4th Grade Math worksheets 7th Grade Math worksheets
Multiplication worksheets For Grade 3 4th Grade Math worksheets 7th Grade Math worksheets
Multiplication by 8 worksheets helps kids to be able to do 1 to 8 numbers multiplication and they can learn to master the multiplication facts is the commutative property Benefits of Multiplication
by 8 Worksheets Multiplication by 8 worksheets gives different methods to solve types of multiplication problems and it will help to solve the
Once they know their multiplication facts they can start to learn related facts e g if 3 x 4 12 then 30 x 4 120 and 300 x 4 1200 The multiplication printable worksheets below will support your child
with their multiplication learning
Boosted Mathematical Abilities
Consistent method hones multiplication proficiency, boosting overall math abilities.
Enhanced Problem-Solving Abilities
Word issues in worksheets establish logical reasoning and technique application.
Self-Paced Knowing Advantages
Worksheets accommodate individual learning speeds, cultivating a comfortable and versatile learning environment.
Exactly How to Create Engaging Multiplication X 8 Worksheets
Integrating Visuals and Colors Vibrant visuals and shades capture interest, making worksheets visually appealing and engaging.
Consisting Of Real-Life Circumstances
Relating multiplication to daily scenarios adds importance and usefulness to workouts.
Tailoring Worksheets to Different Ability Degrees Tailoring worksheets based on differing efficiency degrees makes certain comprehensive knowing. Interactive and Online Multiplication Resources
Digital Multiplication Equipment and Games Technology-based sources supply interactive learning experiences, making multiplication appealing and pleasurable. Interactive Websites and Applications
On-line systems offer varied and easily accessible multiplication technique, supplementing conventional worksheets. Customizing Worksheets for Various Understanding Styles Aesthetic Students
Aesthetic help and representations help comprehension for students inclined toward aesthetic learning. Auditory Learners Spoken multiplication troubles or mnemonics satisfy students that understand
ideas through acoustic means. Kinesthetic Students Hands-on activities and manipulatives support kinesthetic learners in understanding multiplication. Tips for Effective Execution in Knowing
Consistency in Practice Normal method strengthens multiplication abilities, promoting retention and fluency. Balancing Repetition and Selection A mix of recurring exercises and diverse issue formats
keeps passion and understanding. Giving Constructive Responses Responses help in determining areas of improvement, urging continued development. Challenges in Multiplication Practice and Solutions
Motivation and Interaction Hurdles Tedious drills can cause uninterest; cutting-edge strategies can reignite motivation. Conquering Concern of Math Adverse assumptions around mathematics can prevent
development; producing a positive knowing setting is essential. Influence of Multiplication X 8 Worksheets on Academic Performance Researches and Study Searchings For Study suggests a positive
correlation in between consistent worksheet usage and enhanced mathematics efficiency.
Multiplication X 8 Worksheets emerge as flexible devices, promoting mathematical effectiveness in students while suiting diverse learning designs. From standard drills to interactive online
resources, these worksheets not just improve multiplication abilities yet also advertise vital reasoning and problem-solving abilities.
Fill In Multiplication Worksheets Fill In The Blanks Class 1 Maths Worksheet Multiplication
Printable Multiplication Sprints PrintableMultiplication
Check more of Multiplication X 8 Worksheets below
Multiplication Worksheets Year 8 PrintableMultiplication
Multiplication 8 Worksheets Printable Worksheets Pinterest Multiplication Worksheets
2 Digit By 2 Digit Multiplication Using Area Model Worksheets Free Printable
Printable Multiplication Sheets Free Printable Multiplication Flash Cards
multiplication Worksheet For Kids Archives EduMonitor
Multiplying 1 to 12 by 8 100 Questions A Math Drills
Welcome to The Multiplying 1 to 12 by 8 100 Questions A Math Worksheet from the Multiplication Worksheets Page at Math Drills This math worksheet was created or last revised on 2021 02 19 and has
been viewed 1 269 times this week and 1 554 times this month
Multiplication Facts Worksheets Math Drills
In each box the single number is multiplied by every other number with each question on one line The tables may be used for various purposes such as introducing the multiplication tables skip
counting as a lookup table patterning activities and memorizing Multiplication Facts Tables from 1 to 12
Welcome to The Multiplying 1 to 12 by 8 100 Questions A Math Worksheet from the Multiplication Worksheets Page at Math Drills This math worksheet was created or last revised on 2021 02 19 and has
been viewed 1 269 times this week and 1 554 times this month
In each box the single number is multiplied by every other number with each question on one line The tables may be used for various purposes such as introducing the multiplication tables skip
counting as a lookup table patterning activities and memorizing Multiplication Facts Tables from 1 to 12
2 Digit By 2 Digit Multiplication Using Area Model Worksheets Free Printable
Multiplication 8 Worksheets Printable Worksheets Pinterest Multiplication Worksheets
Printable Multiplication Sheets Free Printable Multiplication Flash Cards
multiplication Worksheet For Kids Archives EduMonitor
Kindergarten Worksheets Maths Worksheets Multiplication Worksheets
Kids Page 8 Times Multiplication Table Worksheet
Kids Page 8 Times Multiplication Table Worksheet
Simple Multiplication Worksheets Superstar Worksheets
FAQs (Frequently Asked Questions).
Are Multiplication X 8 Worksheets suitable for every age teams?
Yes, worksheets can be customized to various age and ability degrees, making them adaptable for numerous learners.
Exactly how usually should trainees exercise utilizing Multiplication X 8 Worksheets?
Consistent technique is key. Normal sessions, ideally a couple of times a week, can produce substantial improvement.
Can worksheets alone enhance mathematics abilities?
Worksheets are a valuable tool but must be supplemented with varied discovering methods for thorough skill advancement.
Are there online systems providing cost-free Multiplication X 8 Worksheets?
Yes, numerous academic internet sites offer free access to a wide variety of Multiplication X 8 Worksheets.
Exactly how can moms and dads support their kids's multiplication method in the house?
Encouraging consistent technique, giving aid, and producing a favorable discovering atmosphere are beneficial steps. | {"url":"https://crown-darts.com/en/multiplication-x-8-worksheets.html","timestamp":"2024-11-13T20:36:08Z","content_type":"text/html","content_length":"28858","record_id":"<urn:uuid:51130dfa-70c8-4bc0-b02f-939a65072116>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00779.warc.gz"} |
How to Compare Data When You Move from Google Analytics to GA4 - Conversion Sciences
How to Compare Data When You Move from Google Analytics to GA4
How do you compare your data when you move from Universal Analytics to Google Analytics 4 (GA4), or some other analytics package? Learn how to size up a lot of data very quickly using a simple
algorithm you learned in high school.
Once you move from Google Analytics to Ga4, you can be assured that your data will not match exactly. In fact it may be off by a percentage.
What is more important is that relative changes from day to day, week to week, and month to month are of the same magnitude in both systems.
Is the GA4 data correlated with the UA data?
For example, if we graphed the Universal Analytics (UA) metric Users against the GA4 metric Active Users, it might look like this:
The blue bars are the users reported by UA and the green bars are the active users reported by GA4 on a daily basis. It’s clear that UA is reporting more users than GA4 is reporting acitve users.
This is to be expected, because active users are calculated differently by GA4 than is users.
What is more important is that they move similarly to each other day after day. In other words, if GA4 is going to report fewer active users, the magnitude of the difference between it and UA should
be consistent, day after day.
For most days this appears to be true. But some days, UA reported many more users than GA4 reported active users.
Does this mean that we can’t trust one or the other? There is a way to find out.
Scatter Plots, Not Bar Graphs
The bar graph is a crude tool for comparing two data sets. In fact, any time-series graph is going to disappoint.
What we need is a Scatterplot.
A scatterplot ignores the order of the date and instead compares the data on each day. On a day that UA reported 200 users, how many active users did GA4 report? We plot that point.
When we do it for each day in our data set, we might see something like this:
What you might notice is that this data lies in a straight line, for the most part. This is a good sign. It means that the GA4 data changes relative to the UA data for each of the days mapped.
This doesn’t mean that it’s accurate, though. Here’s a scatterplot of the same data, but I’ve artificially doubled the daily UA data.
This data looks good, but it’s not. How would we know?
Spreadsheets and your high school math teacher give us a simple way to evaluate the data like a boss.
Add a Trendline
First, Google Sheets will calculate a trend line for us. When at science events, we call this a linear regression. This is the straight line that best “fits” the points. If the points look like a
line, then the trend line will be a close approximation of the data. In Google Sheets you’ll find this in the Customize tab under Series >.
These features exist in Excel as well.
When we add a trend line to our data, we see this:
That draws a pretty line right along with our data. How closely do the two data sets match? That’s what R^2 tells us.
Reading the R^2 Value
If you’re curious about how this is calculated, here’s a helpful video.
Google Sheets will calculate R^2, but this is not enough. We want the equation of the trend line so that we know how closely related the two data sets are.
There are some mathy looking bits in our legend now.
The R^2 number tells us how well the trend line describes our data. A perfect fit would give us an R^2 value of one. The closer to one it is, the more likely our two data sets are describing the same
The equation is the one you learned in high school. It’s just the equation of a line.
The Equation of a Trend Line
This is one of those equations that you swore you would never use in math class. Today, it’s going to give you X-ray vision into your data.
y = mx + B
x is the GA4 Active Users
y is the UA Users
The choice of x or y axis is arbitrary for a scatterplot.
m is the “slope” of the line. It’s the “rise over the run”. If we expect our two datasets to be alike, we expect a slope very close to one.
B is the “y intercept”. It is where our line crosse he vertical axis, also called the “y axis” when x is zero.
We’re hoping that our GA4 data is as much like our UA data as possible. If the two were reporting the exact same number each day:
• R^2 would be 1
• The slope (m) of the line would 1
• The y intercept (B) would be 0
I compared two identical data sets to show this.
So, what if our data isn’t perfect?
If R^2 is significantly less than one, the two data sets are not well-correlated to each other. In other words, they are not describing the same thing. If it’s 0.9 or above, we feel pretty good about
the comparison. If its below 0.8, we should be worried.
Even if R^2 is close to one, the slope (right before “x”) might be significantly less than one. In this case, we would find that that one dataset is adding or subtracting a percentage of the true
value. It could be doubling the count of users, or not reporting users on some percentage of the pages of your website.
If the R2 value his close to one and the slope is close to one, we may find the y-intercept to be higher than zero. This means that some consistent value is being added to one or the other dataset.
One is counting something that the other is not.
Here are some common scenarios we see in comparing UA and GA4 data, and how the equation would be expected to change.
You’re comparing the wrong data.
Let’s start off by looking at a bad correlation. Here the R^2 value and slope are near 0. The y-intercept is very high.
Something is just not right here. Maybe you’re not pulling the data right.
Bot traffic is not being filtered in one dataset.
In this example, I’ve artificially added 50 users per day to one of the datasets. This is what it would look like if GA4 was filtering out a consistent traffic source, like bot traffic, but UA was
The entire trend line will is lifted by 50 users. Because it’s consistent, the slope and R^2 values are not affected. But the y-intercept will rise precariously.
You’re double counting.
It’s remarkably easy to double-count by adding the Google Analytics tag twice. In this case, the slope will be close to 0.5 (or 2.0 if you flip the x and y axis in your scatterplot).
It’s not unusual for us to find a website that is adding pageviews using an on-page tag and a tag manager tag. This will double-count pageviews.
You are “breaking” sessions.
If you are “breaking sessions” in either dataset, you’ll see inflation of sessions. This will be reflected in the slope. It will be significatnly above or below one.
For example, if you use a utm_ query parameter on a call-to-action button on your site, UA will start a new session, as if the user was just arriving on the site. GA4 doesn’t do this.
If your visitors are going to a third-party site and returning, you can get broken sessions. If you have cross-domain tracking setup in UA but not in GA4, you’ll see something like this for the
segement of visitors that visit the other site.
The analytics tag is missing on some pages.
With this example, I’ve added 50% to the dataset on the Y axis. This simulates the scenario in which 33% of the pages on the X-axis dataset don’t have tags.
Note that the R^2 value doesn’t change. However, the slope of the line is well below 1. In fact, it’s about 2/3 of a perfect slope.
Revenue, Transacations and Segments
This approach can be used to check most of your metrics and segments.
Not only can you evaluate the data you are collecting, you can evaluate your ability to pull data in GA4 that represents the thinking of the UA developers. UA data is pre-processed differently in UA
than it is in GA4.
This is a great way to be sure you’re pulling similar data segments.
Compare Google Analytics to your sales data.
If you want to be sure Google Analytics is collecting ecommerce data, you can compare transactions from GA to transactions from your backend, such as Shopify, BigCommerce, Magento, etc. This approach
is great for that.
This is one of the first things we do with or new Conversion Catalyst clients.
The graphs look the same. Don’t be fooled.
Be careful when you move from Google Analytics to GA4.
In all of these examples, the scatterplots look pretty much the same visually. However, our high school math teacher has equipped us with the equation we need to diagnose our data.
Thanks, high school math teacher!
Latest posts by Brian Massey
(see all)
0 replies
Want to join the discussion?
Feel free to contribute!
Leave a Reply Cancel reply | {"url":"https://conversionsciences.com/compare-data-move-google-analytics-ga4/","timestamp":"2024-11-07T15:53:38Z","content_type":"text/html","content_length":"327400","record_id":"<urn:uuid:53cb5b8b-0f24-4116-8f39-af919d7e7b77>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00624.warc.gz"} |
Move from Excel to Python with Pandas
Move from Excel to Python with Pandas Transcripts
Chapter: Data wrangling with Pandas
Lecture: Pandas' dt, the date time accessor
0:00 Now we're going to read in our sample sales data into our Jupyter notebook. So we'll do the imports. I went ahead and put those in here,
0:07 and now you can see the data frame of that represents the Excel file. And if we do DF info,
0:14 it tells us that the purchase date is a date time 64 data type, which is good, which is what we had expected.
0:21 Quantity, price, extended amount and shipping costs are numeric values. So everything appears to be in order here.
0:29 Here's how we might think about actually accessing the purchase date. So if we know that we have a purchase state,
0:36 maybe we could try typing month after that. And we get an attribute error so Pandas doesn't know how to get at the month
0:45 And so what penance has done is it has introduced a concept of an excess er and D T stands for daytime.
0:53 So now it knows that this is a daytime data type, and there is an excess er called D T, which enables us to get at the underlying data in that column.
1:05 And here we want to pull out the month we can do a similar sort of so year works as expected. And there are some that you may not think of
1:15 what's try like Day of Week Pandas goes in and Comptel, what day of the week each of those days is and assigns a numerical value to
1:25 it. So remember the example we had of trying to get the quarter and how we had to do a fairly, maybe non intuitive calculation for Excel?
1:35 Let's take a look at what if we just use quarter? Ah, so that tells us that Pamela's knows the concept of quarter and can automatically
1:45 calculate that force, which is really helpful. And the recent one highlight This is there are a lot of options available once you
1:53 have the correct data type to make your data manipulation just a little bit easier.
1:57 For instance, what if you want to know whether a current month has 30 or 31? Or maybe it's a leap year.
2:06 We can look at days and month so we can see that it calculates a 31 and 30. We can also see if something is the end of the month.
2:17 So none of these examples that are showing just the head and the tail. But it is a helpful thing to keep in mind as you doom or data manipulation
2:25 Now, one of the things that you really need to keep in mind is that I did all of this. But there's been no underlying change to the data frame.
2:35 If we want to actually add some of these new columns to data frame, we need to make sure that we explicitly do so.
2:49 So what I've done here is I've created two new columns purchase month and purchase year and assigned the month and year to that.
2:57 You can see the data frame now has the purchase month and year. So we are, replicating what we had in our Excel spreadsheet and if
3:06 we wanted to add one more to the purchase corner. Now we have our purchase quarter, and you can see that this is March.
3:19 The first quarter in this November, | {"url":"https://training.talkpython.fm/courses/transcript/move-from-excel-to-python-and-pandas/lecture/270504","timestamp":"2024-11-05T14:03:40Z","content_type":"text/html","content_length":"27526","record_id":"<urn:uuid:cb26f6cf-8729-44dd-b43c-1e91d2e4402d>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00175.warc.gz"} |
24 days of Rust - slow_primes | Blog | siciarz.net
24 days of Rust - slow_primes
Important note: this article is outdated! Go to http://zsiciarz.github.io/24daysofrust/ for a recent version of all of 24 days of Rust articles. The blogpost here is kept as it is for historical
When I start learning a new programming language, I like to code at least several solutions to Project Euler problems. These are very math-oriented and may not be the best introduction to general
purpose programming, but it's a start. Anyway, it's just fun to solve them! (...and way more fun to solve them in a fast way and not by brute force.)
A lot of Project Euler problems involve prime numbers in some way. These include finding nth prime, efficient factorization or checking whether some curious number is prime or not. You could of
course write these mathematical procedures yourself, which is also an educational activity. But I'm lazy. I set out to find some ready-made code and stumbled upon the slow_primes library by Huon
Wilson. Incidentally this was the first external dependency I ever used in a Rust program, long before crates.io.
By the way don't let the name fool you, it's not that slow. As Huon says:
Despite the name, it can sieve the primes up to 10^9 in about 5 seconds.
So let's see what's in there, shall we?
Prime sieve
The first thing to do is to create a sieve (see Wikipedia on Sieve of Eratosthenes for a detailed explanation of the algorithm). We need to set an upper bound on the sieve. There's a clever way to
estimate that bound (see the docs for estimate_nth_prime) but for simplicity I'll hardcode it now to 10000.
Let's actually check some numbers for primality:
extern crate slow_primes;use slow_primes::Primes;fn main() { let sieve = Primes::sieve(10000); let suspect = 5273u; println!("{} is prime: {}", suspect, sieve.is_prime(suspect)); // true
let not_a_prime = 1024u; println!("{} is prime: {}", not_a_prime, sieve.is_prime(not_a_prime)); // guess
How about finding 1000th prime number?
let n = 1000u;match sieve.primes().nth(n - 1) { Some(number) => println!("{}th prime is {}", n, number), None => println!("I don't know anything about {}th prime.", n),}
The primes() method returns an iterator over all prime numbers generated by this sieve (2, 3, 5, 7...). Iterators in Rust have a lot of useful methods; the nth() method skips over n initial
iterations, returning the nth element (or None if we exhausted the iterator). The argument is zero-based, so to find 1000th prime we need to pass 999 to nth().
Factorization is a way to decompose a number into its divisors. For example, 2610 = 2 * 3 * 3 * 5 * 29. Here's how we can find it out with slow_primes API:
println!("{}", sieve.factor(2610));
When we run this, we'll get:
$ cargo run
Ok([(2, 1), (3, 2), (5, 1), (29, 1)])
What is this? Let's have a look at the result type of factor():
type Factors = Vec<(uint, uint)>;fn factor(&self, n: uint) -> Result<Factors, (uint, Factors)>
Looks a bit complicated, but remember the Result type. The Ok variant wraps a vector of pairs of numbers. Each pair contains a prime factor and its exponent (how many times it appears in the
factorization). In case of an error we'll get a pair (leftover value, partial factorization).
We can use factorization to find the total number of divisors (including compound ones). This is very important in number theory (although for reasons that are outside the scope of this blog).
Consider the following function:
fn num_divisors(n: uint, primes: &Primes) -> Option<uint> { use std::iter::MultiplicativeIterator; match primes.factor(n) { Ok(factors) => Some(factors.into_iter().map(|(_, x)| x + 1).product()), Err(_) => None, }}
The trick is to multiply all prime factor exponents, incremented before multiplication. See the explanation at Maths Challenge for the curious. So when we call the function on our 2610 example, we'll
get Some(24) as a result.
println!("{}", num_divisors(2610, &sieve));
Further reading
Code examples in this article were built with rustc 0.13.0-nightly and slow_primes 0.1.4.
The header photo (taken by me) shows Tamka street in Warsaw, Poland. | {"url":"https://siciarz.net/24-days-rust-slow_primes/","timestamp":"2024-11-13T19:42:14Z","content_type":"text/html","content_length":"52737","record_id":"<urn:uuid:50cfbbc2-66d9-401e-bfb4-b68e193f3a52>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00876.warc.gz"} |
More notes on deriving Applicative from Monad
More notes on deriving Applicative from Monad
A year or two ago I wrote about what you do if you already have a Monad and you need to define an Applicative instance for it. This comes up in converting old code that predates the incorporation of
Applicative into the language: it has these monad instance declarations, and newer compilers will refuse to compile them because you are no longer allowed to define a Monad instance for something
that is not an Applicative. I complained that the compiler should be able to infer this automatically, but it does not.
My current job involves Haskell programming and I ran into this issue again in August, because I understood monads but at that point I was still shaky about applicatives. This is a rough edit of the
notes I made at the time about how to define the Applicative instance if you already understand the Monad instance.
pure is easy: it is identical to return.
Now suppose we have >>=: how can we get <*>? As I eventually figured out last time this came up, there is a simple solution:
fc <*> vc = do
f <- fc
v <- vc
return $ f v
or equivalently:
fc <*> vc = fc >>= \f -> vc >>= \v -> return $ f v
And in fact there is at least one other way to define it is just as good:
fc <*> vc = do
v <- vc
f <- fc
return $ f v
(Control.Applicative.Backwards provides a Backwards constructor that reverses the order of the effects in <*>.)
I had run into this previously and written a blog post about it. At that time I had wanted the second <*>, not the first.
The issue came up again in August because, as an exercise, I was trying to implement the StateT state transformer monad constructor from scratch. (I found this very educational. I had written State
before, but StateT was an order of magnitude harder.)
I had written this weird piece of code:
instance Applicative f => Applicative (StateT s f) where
pure a = StateT $ \s -> pure (s, a)
stf <*> stv = StateT $
\s -> let apf = run stf s
apv = run stv s
in liftA2 comb apf apv where
comb = \(s1, f) (s2, v) -> (s1, f v) -- s1? s2?
It may not be obvious why this is weird. Normally the definition of <*> would look something like this:
stf <*> stv = StateT $
\s0 -> let (s1, f) = run stf s0
let (s2, v) = run stv s1
in (s2, f v)
This runs stf on the initial state, yielding f and a new state s1, then runs stv on the new state, yielding v and a final state s2. The end result is f v and the final state s2.
Or one could just as well run the two state-changing computations in the opposite order:
stf <*> stv = StateT $
\s0 -> let (s1, v) = run stv s0
let (s2, f) = run stf s1
in (s2, f v)
which lets stv mutate the state first and gives stf the result from that.
I had been unsure of whether I wanted to run stf or stv first. I was familiar with monads, in which the question does not come up. In v >>= f you must run v first because you will pass its value to
the function f. In an Applicative there is no such dependency, so I wasn't sure what I neeeded to do. I tried to avoid the question by running the two computations ⸢simultaneously⸣ on the initial
state s0:
stf <*> stv = StateT $
\s0 -> let (sf, f) = run stf s0
let (sv, v) = run stv s0
in (sf, f v)
Trying to sneak around the problem, I was caught immediately, like a small child hoping to exit a room unseen but only getting to the doorway. I could run the computations ⸢simultaneously⸣ but on the
very next line I still had to say what the final state was in the end: the one resulting from computation stf or the one resulting from computation stv. And whichever I chose, I would be discarding
the effect of the other computation.
My co-worker Brandon Chinn opined that this must violate one of the applicative functor laws. I wasn't sure, but he was correct. This implementation of <*> violates the applicative ”interchange” law
that requires:
f <*> pure x == pure ($ x) <*> f
Suppose f updates the state from !!s_0!! to !!s_f!!. pure x and pure ($ x), being pure, leave it unchanged.
My proposed implementation of <*> above runs the two computations and then updates the state to whatever was the result of the left-hand operand, sf discarding any updates performed by the right-hand
one. In the case of f <*> pure x the update from f is accepted and the final state is !!s_f!!. But in the case of pure ($ x) <*> f the left-hand operand doesn't do an update, and the update from f is
discarded, so the final state is !!s_0!!, not !!s_f!!. The interchange law is violated by this implementation.
(Of course we can't rescue this by yielding (sv, f v) in place of (sf, f v); the problem is the same. The final state is now the state resulting from the right-hand operand alone, !!s_0!! on the left
side of the law and !!s_f!! on the right-hand side.)
Stack Overflow discussion
I worked for a while to compose a question about this for Stack Overflow, but it has been discussed there at length, so I didn't need to post anything:
That first thread contains this enlightening comment:
□ Functors are generalized loops
[ f x | x <- xs];
□ Applicatives are generalized nested loops
[ (x,y) | x <- xs, y <- ys];
□ Monads are generalized dynamically created nested loops
[ (x,y) | x <- xs, y <- k x].
That middle dictum provides another way to understand why my idea of running the effects ⸢simultaneously⸣ was doomed: one of the loops has to be innermost.
The second thread above (“How arbitrary is the ap implementation for monads?”) is close to what I was aiming for in my question, and includes a wonderful answer by Conor McBride (one of the inventors
of Applicative). Among other things, McBride points out that there are at least four reasonable Applicative instances consistent with the monad definition for nonempty lists. (There is a hint in his
answer here.)
Another answer there sketches a proof that if the applicative ”interchange” law holds for some applicative functor f, it holds for the corresponding functor which is the same except that its <*>
sequences effects in the reverse order.
[Other articles in category /prog/haskell] permanent link | {"url":"https://blog.plover.com/prog/haskell/how-to-ap.html","timestamp":"2024-11-12T16:10:54Z","content_type":"text/html","content_length":"33260","record_id":"<urn:uuid:d60f34fc-ec3a-440b-a5ce-59dbd4fe134b>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00269.warc.gz"} |
The 2023 DCAMM Annual Seminar Speaker - DCAMM
Professor Basile Audoly
Institut Polytechnique de Paris
will give the lecture
One-dimensional models for highly deformable elastic rods
We are interested in identifying effective mathematical models describing the deformations of rods, i.e., cylindrical elastic bodies whose cross-section dimensions are much smaller than their length.
Owing to the separation of scales, their equilibrium is governed by ordinary differential equations which are easier to solve than the partial differential equations applicable in 3D elasticity.
These equilibrium equations are well-established as long as the strain remains small, i.e., when the cross-sections remain almost undeformed. In this talk, I will discuss the interesting case of soft
rods having highly deformable cross-sections. This includes inflated cylindrical rubber balloons, elastic bars made of very soft gels deforming under the action of surface tension, and carpenter's
tapes. I will present a method for deriving the one-dimensional equations governing the equilibrium of these highly deformable rods, and will show that they accurately account for the localization
phenomena that are ubiquitous in these systems. | {"url":"https://construct.dtu.dk/kalenderliste/arrangement?id=454f2f28-9969-427e-9f87-5ce6d4d66ed3","timestamp":"2024-11-08T13:53:44Z","content_type":"application/xhtml+xml","content_length":"139611","record_id":"<urn:uuid:f5c22fac-8ad4-4edd-b981-6b9f55ba8dc6>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00248.warc.gz"} |
In the context of arithmetic, carrying is part of the operation of representing addition of natural numbers by digits with respect to a base.
In terms of commutative algebra
Given the rig of natural numbers $\mathbb{N}$, there exists a free commutative $\mathbb{N}$-algebra $\mathbb{N}[b]$ on one generator $b$ called the base. Since multiplication in a commutative algebra
is power-associative, there exists a right $\mathbb{N}$-action on $\mathbb{N}[b]$$(-)^{(-)}:\mathbb{N}[b]\times\mathbb{N}\to\mathbb{N}[b]$ called the power, and every element in $\mathbb{N}[b]$ could
be written as a polynomial
$p = \sum_{n=0}^{k} a(n) b^n$
When the algebra is quotiented out by the relation $b \sim 10$, the resulting quotient algebra is isomorphic to the original rig of natural numbers $\mathbb{N}$. This means that every natural number
could be expressed a polynomial with base ten,
$p = \sum_{n=0}^{k} a(n) 10^n$
There is a canonical such polynomial, one where all natural numbers in the sequence $a(n) \lt 10$ in the polynomial. Carrying arises from adding two canonical polynomials, when the sum $a_1(n) + a_2
(n) \geq 10$ and the polynomial is no longer canonical; in order to make the polynomial canonical again, one would have to take the sum $a_1(n) + a_2(n)$modulo 10 and add 1 to the sum $a_1(n+1) + a_2
(n+1)$ in the next power of ten. This means there ought to be another representation of the digits in terms of integers modulo 10.
In terms of cohomology
Write $\mathbb{Z}/10$ for the abelian group of addition of integers modulo 10. In the following we identify the elements as
$\mathbb{Z}/{10} = \{0,1,2, \cdots, 9\} \,,$
as usual.
Being an abelian group, every delooping n-groupoid $\mathbf{B}^n (\mathbb{Z}/{10})$ exists.
Carrying is a 2-cocycle in the group cohomology, hence a morphism of infinity-groupoids
$c : \mathbf{B} (\mathbb{Z}/{10}) \to \mathbf{B}^2 (\mathbb{Z}/{10}) \,.$
It sends
$\array{ && \bullet \\ & {}^{\mathllap{a}}earrow &\Downarrow^=& \searrow^{\mathrlap{b}} \\ \bullet &&\stackrel{a+b mod 10}{\to}&& } \;\;\; \mapsto \;\;\; \array{ && \bullet \\ & {}^{\mathllap{id}}
earrow &\Downarrow^{c(a,b)}& \searrow^{\mathrlap{id}} \\ \bullet &&\stackrel{id}{\to}&& \bullet } \,,$
$c(a,b) = \left\{ \array{ 1 & a + b \geq 10 \\ 0 & a + b \lt 10 \,. } \right.$
The central extension classified by this 2-cocycle, hence the homotopy fiber of this morphism is $\mathbb{Z}/{100}$
$\array{ \mathbf{B} (\mathbb{Z}/{100}) &\to& * \\ \downarrow && \downarrow \\ \mathbf{B} (\mathbb{Z}/{10}) &\stackrel{\mathbf{c}}{\to}& \mathbf{B}^2 (\mathbb{Z}/{10}) } \,.$
That now carries a 2-cocycle
$\mathbf{B} (\mathbb{Z}/{100}) \to \mathbf{B}^2 (\mathbb{Z}/{10}) \,,$
and so on.
$\array{ \vdots \\ \downarrow \\ \mathbf{B} (\mathbb{Z}/{1000}) &\stackrel{c}{\to}& \mathbf{B}^2 (\mathbb{Z}/{10}) \\ \downarrow \\ \mathbf{B} (\mathbb{Z}/{100}) &\stackrel{c}{\to}& \mathbf{B}^2 (\
mathbb{Z}/{10}) \\ \downarrow \\ \mathbf{B} (\mathbb{Z}/{10}) &\stackrel{c}{\to}& \mathbf{B}^2 (\mathbb{Z}/{10}) }$
This tower can be viewed as a sort of “Postnikov tower” of $\mathbb{Z}$ (although it is of course not a Postnikov tower in the usual sense). Note that it is not “convergent”: the limit of the tower
is the ring of $10$-adic integers $\mathbb{Z}_{10}$. This makes perfect sense in terms of carrying: the $10$-adic integers can be identified with “decimal numbers” that can be “infinite to the left”,
with addition and multiplication defined using the usual carrying rules “on off to infinity”.
• Dan Isaksen, A cohomological viewpoint on elementary school arithmetic, The American Mathematical Monthly, Vol. 109, No. 9. (Nov., 2002), pp. 796-805. (jstor) | {"url":"https://ncatlab.org/nlab/show/carrying","timestamp":"2024-11-14T23:47:44Z","content_type":"application/xhtml+xml","content_length":"32363","record_id":"<urn:uuid:92d94832-6abb-4f3d-8ce2-e3e62a5f8a6f>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00349.warc.gz"} |