4.1) Quiz 2: Answers – Angles, Lines & Polygons – Edexcel GCSE Maths Foundation
1) Each angle is 90°
2) Each angle is 90°
3) a = 110°
b = 70°
c = 110°
4) x = 60°
5) x = 40°
6) x = 25°
1) What is the size of each of the angles in the square below?
2) What is the size of each of the angles in the rectangle below?
3) What is the size of angle a, b and c in the parallelogram below?
4) The shape below is a parallelogram. What is the size of each of the angles in the parallelogram below?
5) What is the size of each of the angles?
6) The shape below is a parallelogram. What is the size of all of the angles in the parallelogram?
7) The shape below is a kite. What is the size of the two unknown angles in the kite?
8) The shape below is a kite. What is the size of the two unknown angles in the kite?
9) The shape below is a trapezium. What is the size of each of the angles in the trapezium?
10) The shape below is a rhombus. What is the size of each of the angles inside the rhombus?
TR24-155 | 11th October 2024 16:17
Optimal Coding for Randomized Kolmogorov Complexity and Its Applications
The coding theorem for Kolmogorov complexity states that any string sampled from a computable distribution has a description length close to its information content. A coding theorem for
resource-bounded Kolmogorov complexity is the key to obtaining fundamental results in average-case complexity, yet whether any samplable distribution admits a coding theorem for randomized
time-bounded Kolmogorov complexity ($\mathrm{rK}^\mathrm{poly}$) is open and a common bottleneck in the recent literature of meta-complexity. Previous works bypassed this issue by considering
probabilistic Kolmogorov complexity ($\mathrm{pK}^\mathrm{poly}$), in which public random bits are assumed to be available.
In this paper, we present an efficient coding theorem for randomized Kolmogorov complexity under the non-existence of one-way functions, thereby removing the common bottleneck. This enables us to
prove $\mathrm{rK}^\mathrm{poly}$ counterparts of virtually all the average-case results that were proved only for $\mathrm{pK}^\mathrm{poly}$, and enables the resolution of the following concrete
open problems.
1. The existence of a one-way function is characterized by the failure of average-case symmetry of information for randomized time-bounded Kolmogorov complexity, as well as a conditional coding
theorem for randomized time-bounded Kolmogorov complexity. This resolves the open problem of Hirahara, Ilango, Lu, Nanashima, and Oliveira (STOC'23).
2. Hirahara, Kabanets, Lu, and Oliveira (CCC'24) showed that randomized time-bounded Kolmogorov complexity admits search-to-decision reductions in the errorless average-case setting over any
samplable distribution, and left open whether a similar result holds in the error-prone setting. We resolve this question affirmatively, and as a consequence, characterize the existence of a one-way
function by the average-case hardness of computing $\mathrm{rK}^\mathrm{poly}$ with respect to an arbitrary samplable distribution, which is an $\mathrm{rK}^\mathrm{poly}$ analogue of the $\mathrm{pK}^\mathrm{poly}$ characterization of Liu and Pass (CRYPTO'23).
The key technical lemma is that any distribution whose next bits are efficiently predictable admits an efficient encoding and decoding scheme, which could be of independent interest to data compression.
An Interview with Vladimir Arnol′d
by S. H. Lui
Utilius scandalum nasci permittitur quam veritas relinquatur.
(One should speak the truth even at risk of provoking a scandal.)
– Decretalium V of Pope Gregory IX, 1227–1241
Vladimir Arnol′d is currently professor of mathematics at both the Steklov Mathematical Institute, Moscow, and Ceremade, Université de Paris-Dauphine. Professor Arnol′d obtained his Ph.D. from the Moscow State University in 1961. He has made fundamental contributions in dynamical systems, singularity theory, stability theory, topology, algebraic geometry, magneto-hydrodynamics, partial differential equations, and other areas. Professor Arnol′d has won numerous honors and awards, including the Lenin Prize, the Crafoord Prize, and the Harvey Prize. This interview took place on November 11, 1995. The following articles may be of interest to the reader: 1) Conversation with Vladimir Igorevich Arnol′d, by S. Zdravkovska, Mathematical Intelligencer 9:4 (1987). 2) A mathematical trivium, by V. I. Arnol′d, Russian Math. Surveys 46:1 (1991). 3) Will Russian mathematics survive?, by V. I. Arnol′d, Notices of the AMS 40:2 (1993). 4) Why Mathematics?, by V. I. Arnol′d, Quantum, 1994. 5) Will mathematics survive? Report on the Zurich Congress, by V. I. Arnol′d, Mathematical Intelligencer 17:3 (1995). Lui: Please tell us a little bit about your early education. Were you already interested in mathematics as a child? Arnol′d: The Russian mathematical tradition goes back to the old merchant problems. Very
young children start thinking about such problems even before they have any knowledge of numbers. Children five to six years old like them very much and are able to solve them, but they may be too
difficult for university graduates, who are spoiled by formal mathematical training. A typical example is: You take a spoon of wine from a barrel of wine, and you put it into your cup of tea. Then
you return a spoon of the (nonuniform!) mixture of tea from your cup to the barrel. Now you have some foreign substance (wine) in the cup and some foreign substance (tea) in the barrel. Which is
larger: the quantity of wine in the cup or the quantity of tea in the barrel at the end of your manipulations? Slightly older children, knowing the first few numbers, like the following problem. Jane
and John wish to buy a children's book. However, Jane needs seven more cents to buy the book, while John needs one more cent. They decide to buy only one book together but discover that they
S. H. Lui is an assistant professor at the Hong Kong University of Science and Technology. His e-mail address is
[email protected]
. This article previously appeared in the February 1996 issue of the Hong Kong Mathematics Society Newsletter.
Editor's Note: As this article went to press, V. I. Arnol′d submitted an update on the interview, based on subsequent correspondence and events. It was received too late to be included.
do not have enough money. What is the price of the book? (One should know that books in Russia are very cheap!) Many Russian families have the tradition of giving hundreds of such problems to their
children, and mine was no exception. The first real mathematical experience I had was when our schoolteacher I. V. Morozkin gave us the following problem: Two old women started at sunrise and each
walked at a constant velocity. One went from A to B and the other from B to A. They met at noon and, continuing with no stop, arrived respectively at B at 4 p.m. and at A at 9 p.m. At what time was
the sunrise on this day? I spent a whole day thinking on this oldie, and the solution (based on what is now called scaling arguments, dimensional analysis, or toric variety theory, depending on your
taste) came as a revelation. The feeling of discovery that I had then (1949) was exactly the same as in all the subsequent much more serious problems – be it the discovery of the relation between
algebraic geometry of real plane curves and four-dimensional topology (1970) or between singularities of caustics and of wave fronts and simple Lie algebra and Coxeter groups (1972). It is the greed
to experience such a wonderful feeling more and more times that was, and still is, my main motivation in mathematics. Lui: What was it like studying at Moscow State University? Can you tell us
something about the professors (Petrovskii, Kolmogorov, Pontriagin, Rokhlin, …)? Arnol′d: The atmosphere of the Mechmat (Moscow State University Mechanics and Mathematics Faculty) in the fifties when
I was a student is described in detail in the book Golden Years of Moscow Mathematics, edited by S. Zdravkovska and P. L. Duren and published jointly by the AMS and LMS in 1993. It contains
reminiscences of many people. In particular, my article was on A. N. Kolmogorov, who was my supervisor. The constellation of great mathematicians in the same department when I was studying at the
Mechmat was really exceptional, and I have never seen anything like it at any other place. Kolmogorov, Gelfand, Petrovskii, Pontriagin, P. Novikov, Markov, Gelfond, Lusternik, Khinchin, and P. S.
Alexandrov were teaching students like Manin, Sinai, S. Novikov, V. M. Alexeev, Anosov, A. A. Kirillov, and me. All these mathematicians were so different! It was almost impossible to understand
Kolmogorov's lectures, but they were full of ideas and were really rewarding! I recall his explanation of his theory of the size of the minimal cube into which you can embed every graph having N vertices (balls of fixed size), each connected with at most K others by wires of fixed thickness. He explained that when N is very large (while K is fixed), the diameter of the cube grows like √N by the following argument: the grey matter (the body of the neurons) is on the surface of the human brain, while the white matter (the connections) fills the interior part. Since the brain is embedded into the head as economically as possible, a sufficiently complicated brain of N neurons can only be embedded in a cube of size √N, like that of a human (while a trivial brain, like that of a worm, needs only the size N^(1/3)). Kolmogorov's work on what
is now called KAM theory of Hamiltonian systems was a byproduct of compulsory exercises that he gave to all second-year undergraduate students. One of the problems was the study of some nontrivial
completely integrable systems (like the motion of a heavy particle along the surface of a horizontal torus of revolution). No computers were available then! He observed that the motion in all such
classical examples was quasiperiodic and tried to find examples of more complicated motion (“mixing”, or in today's language, “chaos”) in the case of nonintegrable perturbed systems. His attempts were unsuccessful. The problem which motivated his study is still open – no one has been able to find an invariant torus carrying mixing flows in generically perturbed systems. However, the by-products of this investigation are far more important than the initial technical problem on mixing. They include the discovery of the persistent nonresonant tori, the “accelerated convergence” method and the
related implicit function theorems in function spaces, the proof of stability of motion in many Hamiltonian systems (e.g., gyroscopes and planetary orbits), and the proof of the existence of magnetic
surfaces in the Tokamak geometry, which is used in the study of plasma containment for controlled thermonuclear fusion. That consequences of an investigation are more important than the original
question is a general phenomenon. The initial goal of Columbus was to find a new way to India. The discovery of the New World was just a by-product. Pontriagin was already very weak when I was a
student at Mechmat, but he was perhaps the best of the lecturers. He had just turned from topology to control theory, and his personality had also changed a lot. He later explained his reasons for
switching to applied mathematics and his antisemitic ideas in his autobiography published in the Russian Mathematical Surveys. When he submitted this paper to the Editorial Board, the KGB
representative suggested that the article should not be published as it was because of its extreme openness. I would prefer to see the original text published – what you now find is rather softened.
Some people claim that
his antisemitism might be simply a manifestation of his fear that some part of his blood might be Jewish and that this might be discovered. However, Pontriagin was not always like this! During the
war his best student, V. A. Rokhlin, was wounded and imprisoned by the Germans. Later, Rokhlin was liberated by the Americans, returned to Russia, and continued to serve in the Russian army, which
was still fighting. One day, while he was transporting a captured German officer to his superior, he met a drunk KGB officer, who wanted to shoot the German officer immediately. Rokhlin objected.
Fortunately, Rokhlin was saved by his superior, who immediately sent him to a different regiment. However, in the end Rokhlin was, as were all the Russians who were saved from the German camps by the
Allies, sent to the gulag (Russian concentration camp) in the north of Russia. Some months later, someone who was liberated from this camp came to Moscow and told Pontriagin that Rokhlin was still
alive but dying from starvation in the camp. Pontriagin, with the help of Kolmogorov, Alexandrov, and others, wrote a letter to Beria, the KGB chief, claiming that Rokhlin should be immediately
released because he was the most talented mathematician of his generation. Beria signed the order to liberate Rokhlin, who was subsequently given a machine gun and continued his military service as a
guard at the same camp where he had been held prisoner. Pontriagin and others wrote a second letter to Beria, and Rokhlin finally was able to return to Moscow. Rokhlin had no right to propiska in
Moscow since returning from the gulag. [Propiska is Russian, meaning the right to live in a specified area – one is not free to live elsewhere. Propiska is applied to everybody!] Pontriagin was
completely blind and had a right to hire a personal secretary at the Moscow Steklov Institute. He was brave enough to give this position to Rokhlin, who later became one of the leading Soviet
mathematicians in topology and dynamical systems. Rokhlin had a lot of influence on the younger generation of mathematicians (like S. Novikov, Sinai, Anosov, and me) and later created a very
important mathematical school at St. Petersburg. Some of his illustrious students include Vershik, Gromov, Eliashberg, Viro, Shustin, Turaev, and Kharlamov. I met him in the sixties when he held a
seminar in Moscow. He came to Moscow from one hundred miles away, where his propiska allowed him to live. Rokhlin was of Jewish origin and survived the German prisoner camp by pretending to be a
Muslim. Indeed, he was born in Baku, Azerbaijan. It was really dangerous for Pontriagin to help him and to approach Beria. Pontriagin preserved his high opinion of Rokhlin even after he
became an active antisemite. My personal relation with Pontriagin was rather good. He invited me to his house and to his seminar and showed genuine interest in my work, especially on singularity
theory. This was partially due to our common interests in differential topology and control and game theory. The main reason, however, was that he wanted to say something against me at an
international meeting. Pontriagin was then the Russian representative in the International Mathematical Union (IMU) and had done a lot to prevent any vote for dissident Russians. (I was blacklisted
because I, along with 99 other mathematicians, had signed a letter protesting the imprisonment of a perfectly healthy Soviet mathematician in a psychiatric hospital. This was the standard method of
eliminating dissidents.) The IMU had always been very political, and he succeeded. In his reminiscences Pontriagin revealed that quite a few of the IMU officers shared his cannibalistic views. I hope
we shall know their names. Curiously enough, I am now in his former position, representing Russia in the IMU. Petrovskii, who was then the rector of the university, usually met Rokhlin in the
elevator just before the seminar. I think it was dangerous for him to be seen in the company of Rokhlin. Petrovskii was no longer active in mathematics. However, he was extremely important for the
Moscow mathematical community, always trying to support genuine mathematicians in difficult fights with the Communist Party. His mathematical taste was rather classical, based on the Italian school
of algebraic geometry more than the set-theoretic conceptions. Sir Michael Atiyah once told me that he was always delighted by the way Petrovskii dealt with algebraic geometry in his works on PDEs.
One of these, the paper on the lacunas of hyperbolic PDEs, was later rewritten by Atiyah, Bott, and Gårding in modern terminology in two long papers in Acta Mathematica. It is a far-reaching generalization of the well-known fact of the impossibility of acoustic communication in the even-dimensional spaces (for instance, in the “plane” world), while in our three-dimensional world we communicate easily. It is interesting that in this paper, Petrovskii proved that the cohomology classes of the complement of an algebraic variety are representable by rational differential forms – a
result which is usually attributed to Grothendieck. The works of Petrovskii (1933 and 1938) on real algebraic geometry (related to the 16th Hilbert problem on the shape of real plane algebraic
curves) started an important branch of modern mathematics – the topology of real algebraic varieties. Results of this theory (for example, a bound on the Betti numbers in terms of the degrees of the equations) are very useful in many branches of mathematics, including complexity theory. For instance, they were used by Khovanskii in his fewnomial theory, by Smale in his study of the “real P-NP” problem, and so on. In the West these results are usually attributed to Thom and to Milnor (1965), while the papers by Petrovskii and his student Oleinik, published in the forties, contained better estimates (and were, by the way, quoted by Thom and by Milnor). This is, however, a very standard situation – it is too easy to omit quoting Russian fundamental papers in the
modern world of the job hunters. Petrovskii had never been a party member. This was unknown to most Communists. He was highly influential, partially because of his personal relation to his former
students, who had attained very high positions in the Soviet hierarchical system. Petrovskii was made a member of the Presidium of the Supreme Soviet, which was the “collective president” of the Soviet Union. He died at the door of the Party Central Committee building in Moscow of a heart attack after a long fight at a meeting for the support of fundamental science. His last words were “I won.” After his death the party and the KGB worked for twenty years to destroy the mathematical center at Mechmat created by him. They had stopped the appointment of talented people to the faculty,
and they have by now almost succeeded in killing the center. Lui: Can you tell us your philosophy of teaching undergraduates and of supervising graduate students and how many you have had in Russia
and France? Arnol′d: The number of Ph.D. theses defended under my supervision is something like forty. I cannot give the exact number for several reasons. In the “stagnation” period, I was not
allowed to supervise foreign graduate students at Moscow University because I was not a party member. They still were studying with me, but the official supervisor was some friendly party member who
also got paid for it. Some graduate students had other supervisors but wrote their theses on topics discussed in my seminars and were practically my students. Three examples are S. M. Gusein-Zade,
Yu. Iliashenko, and A. I. Neistadt. At present, I'm working with two undergraduates and three graduates in Moscow and with four graduates in Paris. Two or three more are supposed to start in January. I learn a lot from my students, especially undergraduates. I never assign a thesis topic to my students. This is like assigning them a spouse. I merely show them what is known and unknown.
My Moscow seminar, working even when I am abroad, consists of about thirty mathematicians, mostly my former graduate students, but there are always others. The seminar has existed for about thirty
years, and among the participants in different years were Ya. Sinai, V. Alexeev, S. Novikov, M. Kontsevich, A. Goncharov, D. B. Fuchs, G. Tjurina, A. Tjurin… Life in Moscow is so difficult that most
students have to earn their living independently of their scientific work. Some, for instance, start their own businesses. The rate of crime is so high, however, that in starting a business, one
risks being killed. One of my graduate students in Moscow, who has just finished his thesis but has not defended it, disappeared a few weeks ago. We have doubts about whether he is alive or not. Lui:
Do you have any mathematical heroes? Arnol′d: I would mention Barrow, Newton (who was, however, a very unpleasant person – see my book Huygens and Barrow, Newton and Hooke published by Birkhäuser, 1990), Riemann, Poincaré, Minkowski, Weyl, Kolmogorov, Whitney, Thom, Smale, and Milnor. One-half of the mathematics I know comes from the book of F. Klein Lectures on the Development of Mathematics
in the 19th Century. I have also learned a lot from many mathematicians like Gelfand, Rokhlin, S. Novikov, P. Deligne, Fuchs, and from my own students like Khovanskii, Nekhoroshev, Varchenko,
Zakaljukin, Vassiliev, Givental, Goryunov, O. Scherbak, Chekanov, and Kazarian. I am deeply indebted to Thom, whose singularity seminar at the Institut des Hautes Études Scientifiques, which I
frequented throughout the year 1965, profoundly changed my mathematical universe. I was always delighted by the way in which Thom discussed mathematics, using sentences obviously having no strict
logical meaning at all. While I was never able to completely free myself from the straitjacket of logic, I was forever poisoned by the dream of the irresponsible mathematical speculation with no
exact meaning. “One can always find imbeciles to prove theorems” was, according to Thom's students, his principle. Milnor's talks at Leningrad in 1961 on the differential structures on the sphere
made such a profound impression on my supervisor, Kolmogorov, that he suggested that I put this in my graduate curriculum. This forced me to study differential topology from Novikov, Fuchs, and
Rokhlin. This came in handy because, a year later, I was on the jury for Novikovโs thesis defense on the differential structures on the products of spheres. Smale was one of the first foreign
mathematicians I met when he came to Moscow in 1961. His influence on Russian works in dynamical systems and on me was enormous. Lui: Do you notice any differences in the way people from different cultures do mathematics? Arnol′d: I was unaware
of these differences for many years, but they do exist. A few years ago, I was participating in an International Science Foundation (ISF) meeting in Washington, DC. This organization distributes
grants to Russian scientists. One American participant suggested support for some Russian mathematician because “he is working in a good American style.” I was puzzled and asked for an explanation. “Well,” the American answered, “it means that he is traveling a lot to present all his latest results at all our conferences and is personally known to all experts in the field.” My opinion is that
ISF should better support those who are working in the good Russian style, which is to sit at home working hard to prove fundamental theorems which will remain the cornerstones of mathematics
forever! Russian salaries are (and were) so small, that if someone is doing mathematics, it means that for him it is the goal and not a means to earn money. It is still possible to attain a high
reputation in the Western mathematical community by simply rewriting (or modernizing) classical Russian achievements and ideas unknown to the West. The Russian attitude toward knowledge, science, and
mathematics always conforms to the old traditions of the Russian intelligentsiya. This word does not exist in other languages, since no other country has a similar caste of scholars, medical doctors,
artists, teachers, etc., who find more reward from their contributions to society than from personal or monetary gains. My friend Vershik recently tried to obtain an American visa in Paris. “What is your salary in St. Petersburg?” asked the staff at the American consulate. After hearing his honest reply, the staff asked, “Do you wish to persuade us that you intend to return to St. Petersburg at such a salary?” Vershik answered, “Of course. Money is not all!” The staff was so shocked that Vershik was given the visa immediately. I was applying for a visa a week earlier, and they put me on a waiting list for three weeks. Their reasoning was that my papers must be checked in Washington since I am a “donkey”. I asked for an explanation. “Well,” they replied, “we have such names for every crime: dog, cat, tiger, camel, and so on.” They showed me the list, and “donkey” is a pseudonym for a Russian scientist. One other characteristic of the Russian mathematical tradition is the tendency
to regard all of mathematics as one living organism. In the West it is quite possible to be an expert in mathematics modulo 5, knowing nothing about mathematics modulo 7. One's breadth is regarded as negative in the West to the same extent as one's narrowness is regarded as unacceptable in Russia. The
French mathematical school was brilliant for several centuries, up to the penetrating works of Leray, H. Cartan, Serre, Thom, and Cerf. The Bourbakists claimed that all the great mathematicians were,
using the words of Dirichlet, replacing blind calculations by clear ideas. The Bourbaki manifesto containing these words was translated into Russian as “all clear ideas were replaced by blind calculations.” The editor of the translation was Kolmogorov. His French was excellent. I was shocked to find such a mistake in the translation and discussed it with Kolmogorov. His answer was: I had not realized that something was wrong in the translation since the translator described the Bourbaki style much better than the Bourbakists did. Unfortunately, Poincaré left no school in France. A
typical example of the French narrow-mindedness is the recent discussion at the Academy of Sciences. Gromov was a foreign associate for many years, but he recently chose the French nationality and
hence could no longer remain a foreign associate. The problem was to transfer him to be an ordinary fellow of the Academy. The French mathematicians, however, were opposed to this, saying that “those places are for the really French people!” In my opinion, all the “really French” candidates were incomparably below the level of Gromov, who is one of the world's leading mathematicians. In the end,
Gromov is still not a fellow. To teach in France is very difficult because of the formalized Bourbaki training the students have. For example, at a written examination in dynamical systems for
fourth-year students at Paris-Dauphine, one problem was to find the limit of the solution of a system of Hamiltonian equations on the phase plane starting with some given initial point when time goes
to infinity. The idea was to choose the initial point on a separatrix of a saddle, with the limit being the saddle point. Preparing the examination problem, I made an arithmetical error, and the
phase curve (the energy-level curve containing the initial point) was a closed oval instead of the separatrix. The students discovered this and concluded that there exists a finite time T at which
the solution returns to the initial point. Using the unicity theorem, they were able to deduce that for any integer n the value of the solution at time nT is still the initial point. Then came the
conclusion: since the limit at infinite time coincides with the limit for any subsequence of times going to infinity, the limit is equal to the initial point! This
solution was invented independently by several good students sitting at different places in the examination hall. In all this reasoning, there are no logical mistakes. It is a correct deduction which
one may also generate by a computer. It is apparent that the authors understood nothing. It is awful to think what kind of pressure the Bourbakists put on (evidently nonsilly) students to reduce them
to formal machines! This kind of formalized education is completely useless for any practical problem and even dangerous, leading to Chernobyl-type events. Unfortunately, this plague of formal
deduction is propagating in many countries, and the future of the mathematics infected by it is rather bleak. The United States has a different danger. No Russian professor is able to solve correctly
the problem they give in the Graduate Record Examination, the official entrance examination for graduate studies: find the closest pair to (angle, degree) among the pairs: (time, hour), (area, square
inch), and (milk, quart). Every American immediately solves it correctly. The official explanation for the correct response (area, square inch) is: one degree is the minimal measure of angle, one
square inch is the minimal measure of area, while an hour contains minutes and a quart contains two pints. I always wondered how it is possible for so many Americans to overcome such difficulties and
become great mathematicians. One physicist in New York who solved the problem successfully told me that he had the correct model of the degree of stupidity of the authors of such problems. H. Whitney
told me that the problem (intended for fourteen-year-old American school children) of whether 120% of the number 80 is a number greater than, smaller than, or equal to 80 was correctly solved (in a
nationwide test) by 30% of the students. People making the test thought that 30% of the school children understood percentages. Whitney explained to me, however, that the number of those who really
understood was negligible with respect to the whole sample. Since there were three possible answers, the statistical prediction for a correct random choice was 33%, with a 5% uncertainty. Recently,
even the National Academy of Sciences decided that scientific education in America should be enhanced. What they propose is to eliminate from the curriculum unnecessary scientific facts too difficult
for American children and replace them by really fundamental, basic knowledge, such as all objects have properties and all organisms have nature! (See Nature 372:5606 December 8, 1994.) Undoubtedly,
they will go far with this! Two years ago, I read in USA Today that American parents have formed a list of really necessary knowledge for children in each age category. At ten they have to know that
water has two phases, and at fifteen that the moon has phases and rotates around the earth. In Russia we still teach children in primary school that water has three phases, but the new Americanized
culture will undoubtedly win in the near future. There are, however, some remarkable advantages in the free American system, where a high school student may take, say, a course on the history of jazz
instead of algebra. A few months before his death, Whitney, who was still very active at the Institute for Advanced Study in Princeton, told me the story of his mathematical studies. He was an
undergraduate in violin at Yale, and after the second year he was sent to one of the best centers in Europe for music. Unfortunately, I have forgotten which city it was, but in any case it was not
far from the Alps, since he already was a mountain climber. There, a student had to pass an exam in a subject different from his own studies. Whitney asked his fellow students which subject was the
most fashionable then, and they told him quantum mechanics. After his first class in quantum mechanics, Whitney approached the famous lecturer (Pauli? Schrödinger? Sommerfeld?) with the following words: “Dear Professor, it seems to me that something is wrong with your lectures. I'm the best student from Yale, and still I am unable to understand a word in your lecture.” The lecturer, after being informed that Whitney was studying music, answered quite politely, “This is because you need some background, such as calculus and linear algebra.” “Well,” Whitney replied, “I hope these are not so brand new as your subject and someone has already written textbooks on these subjects.” The lecturer agreed and mentioned the titles of some textbooks. (If someone knows about this story, I would like to know the name of the city, lecturer, and titles.) “In three weeks,” Whitney continued, “I was understanding his lectures, and at the end of the semester I switched from music to
mathematics.” Kolmogorov also started as a nonmathematician – he was studying history. His first paper, written when he was seventeen, was reported at a seminar given by Bakhrushin at Moscow University. Kolmogorov came to some conclusion based on an analysis of medieval tax records in Novgorod. After his talk, Kolmogorov asked Bakhrushin whether he agreed with the conclusions. “Young man,” the professor said, “in history, we need at least five proofs for any conclusion.” Next day, Kolmogorov switched to mathematics. The paper was rediscovered in his archive after his death and is
now published and approved by the historians. Lui: Any comments on the relation between pure and applied mathematics?
Arnol′d: According to Louis Pasteur, there exist no applied sciences – what do exist are the APPLICATIONS of sciences. The common opinion of both pure mathematicians and theoretical physicists on the applied mathematics community is that it consists of weak thinkers unable to produce something scientifically important and of those who are more interested in money than in mathematics. I do not think that this characteristic is fully deserved by the applied mathematics community. See my article “Apology of applied mathematics” in Russian Mathematical Surveys, 1996. It summarizes my talk at
the opening of the Hamburg International Congress of Industrial and Applied Mathematics, July 1995. I think that the difference between pure and applied mathematics is social rather than scientific.
A pure mathematician is paid for making mathematical discoveries. An applied mathematician is paid for the solution of given problems. When Columbus set sail, he was like an applied mathematician,
paid for the search of the solution of a concrete problem: find a way to India. His discovery of the New World was similar to the work of a pure mathematician. I do not think that the discoveries of
Galileo (who was immediately exploiting them in a businesslike American style) are less important than, say, those of the pure philosopher Pascal. The real danger is not the applied mafia itself, but
the divorce between pure mathematics and the sciences created by the (I would say criminal) formalization of mathematics and of mathematical education. The axiomatical-deductive Hilbert-Bourbaki style of exposition of mathematics, dominant in the first half of this century, is now fortunately giving place to the unifying trends of the Poincaré-style geometrical mathematics, combining deep theoretical insight with real-world applications. By the way, I read in a recent American book that geometry is the art of making no mistakes in long calculations. I think that this is an
underestimation of geometry. Our brain has two halves: one is responsible for the multiplication of polynomials and languages, and the other half is responsible for orientation of figures in space
and all the things important in real life. Mathematics is geometry when you have to use both halves. See, for instance, “The geometry of formulae” by A. G. Khovanskii in the Soviet Sci. Rev. Sect. C:
Math. Phys. Rev. V4 (1984).
How to Find the Smallest and Largest Elements in a Singly Linked List
Last Updated on July 27, 2023 by Mayank Dham
Finding the minimum and maximum values in a linked list is a common task in computer programming and data analysis. Whether you are working in C or Java, linked lists offer an efficient way to store
and manage collections of data elements. In this article, we will explore how to find the minimum and maximum values in a linked list using C and Java.
The first node in the list is called the head node, and the last node is called the tail node. The tail node's next field points to a null value, indicating the end of the list. Let's discuss our problem of finding the largest and the smallest element in a linked list in C programming.
How To Find the Minimum Value and the Largest Element in a Linked List in C Programming
Let's try to understand how to find the maximum value and the smallest value in a linked list with the help of an example.
Suppose the linked list is: 5 -> 2 -> 7 -> 9
In this list, the smallest value is 2 and the maximum value is 9. So, the final output will be:
• Maximum Value: 9
• Smallest Value: 2
Input: 5 -> 2 -> 7 -> 9
Output: Maximum value: 9, Smallest value: 2.
Explanation: As the maximum value is 9 and the smallest value is 2, we are printing 9 and 2.
This question is not a tricky one. We just have to make use of simple list traversal in the given Linked list to solve this problem. Let us have a glance at the approach.
Approach for How to Find the Maximum Value in the Linked List
The approach is going to be simple.
• We will create two variables, min and max.
• min will be initialized with INT_MAX and max will be initialized with INT_MIN.
• We are using INT_MAX because no integer is greater than it, and INT_MIN because no integer is less than it. With the help of these, finding the minimum and maximum becomes easy.
• Now, we will traverse through the given list, and for every node, we will have two checks.
  1) If the data of the current node is less than min, we will store the current node's data in min.
  2) Else, if the data of the current node is greater than max, we will store the current node's data in max.
• After reaching the end of the list, min and max will contain the smallest and the largest element, respectively.
Algorithm to Find the Maximum Value in the Linked List
• Create a variable max and initialize it with INT_MIN.
• Traverse through the list and, for every node, compare its data with max.
• If the current node's data is greater than max, then store the value of the current node's data in max.
• In the end, max will contain the maximum value of the list.
Algorithm to Find the Smallest Value in the Linked List
• Create a variable min and initialize it with INT_MAX.
• Traverse through the list and, for every node, compare its data with min.
• If the current node's data is less than min, then store the value of the current node's data in min.
• In the end, min will contain the minimum value of the list (a compact one-pass sketch that finds both values together is shown below).
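As a compact illustration of the same idea, here is a self-contained Python sketch that tracks both extremes in a single traversal. This is an illustrative variant, not the article's reference code (the full C, C++, Java, and Python implementations appear in the code section below):

class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def min_and_max(head):
    # Walk the list once, updating both running extremes.
    smallest, largest = float('inf'), float('-inf')
    while head is not None:
        if head.data < smallest:
            smallest = head.data
        if head.data > largest:
            largest = head.data
        head = head.next
    return smallest, largest

# Build 5 -> 2 -> 7 -> 9 and query it.
head = Node(5, Node(2, Node(7, Node(9))))
print(min_and_max(head))  # prints (2, 9)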
A dry run will give a better understanding of how to find the largest and the smallest element in a linked list in C, C++, Java, and Python.
Dry Run
[Diagram: how to find the maximum value in the linked list]
[Diagram: how to find the smallest value in the linked list]
Code Implementation on How to Find Maximum Value and Minimum Value in Linked List
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

struct Node {
    int data;
    struct Node* next;
};

// Returns the largest element in the list.
int largestElement(struct Node* head)
{
    int max = INT_MIN;
    while (head != NULL) {
        if (max < head->data)
            max = head->data;
        head = head->next;
    }
    return max;
}

// Returns the smallest element in the list.
int smallestElement(struct Node* head)
{
    int min = INT_MAX;
    while (head != NULL) {
        if (min > head->data)
            min = head->data;
        head = head->next;
    }
    return min;
}

// Inserts a new node at the front of the list.
void push(struct Node** head, int data)
{
    struct Node* newNode = (struct Node*)malloc(sizeof(struct Node));
    newNode->data = data;
    newNode->next = (*head);
    (*head) = newNode;
}

// Display linked list.
void printList(struct Node* head)
{
    while (head != NULL) {
        printf("%d -> ", head->data);
        head = head->next;
    }
    printf("NULL\n");
}

int main()
{
    struct Node* head = NULL;
    push(&head, 9);
    push(&head, 7);
    push(&head, 2);
    push(&head, 5);
    printf("Linked list is:\n");
    printList(head);
    printf("Maximum element in linked list: %d\n", largestElement(head));
    printf("Minimum element in linked list: %d\n", smallestElement(head));
    return 0;
}
#include <iostream>
#include <climits>
using namespace std;

struct Node {
    int data;
    struct Node* next;
};

// Returns the largest element in the list.
int largestElement(struct Node* head)
{
    int max = INT_MIN;
    while (head != NULL) {
        if (max < head->data)
            max = head->data;
        head = head->next;
    }
    return max;
}

// Returns the smallest element in the list.
int smallestElement(struct Node* head)
{
    int min = INT_MAX;
    while (head != NULL) {
        if (min > head->data)
            min = head->data;
        head = head->next;
    }
    return min;
}

// Inserts a new node at the front of the list.
void push(struct Node** head, int data)
{
    struct Node* newNode = new Node();
    newNode->data = data;
    newNode->next = (*head);
    (*head) = newNode;
}

// Display linked list.
void printList(struct Node* head)
{
    while (head != NULL) {
        cout << head->data << " -> ";
        head = head->next;
    }
    cout << "NULL" << endl;
}

int main()
{
    struct Node* head = NULL;
    push(&head, 9);
    push(&head, 7);
    push(&head, 2);
    push(&head, 5);
    cout << "Linked list is:" << endl;
    printList(head);
    cout << "Maximum element in linked list: " << largestElement(head) << endl;
    cout << "Minimum element in linked list: " << smallestElement(head) << endl;
    return 0;
}
public class PrepBytes
{
    static class Node
    {
        int data;
        Node next;
    }

    static Node head = null;

    // Returns the largest element in the list.
    static int largestElement(Node head)
    {
        int max = Integer.MIN_VALUE;
        while (head != null)
        {
            if (max < head.data)
                max = head.data;
            head = head.next;
        }
        return max;
    }

    // Returns the smallest element in the list.
    static int smallestElement(Node head)
    {
        int min = Integer.MAX_VALUE;
        while (head != null)
        {
            if (min > head.data)
                min = head.data;
            head = head.next;
        }
        return min;
    }

    // Inserts a new node at the front of the list.
    static void push(int data)
    {
        Node newNode = new Node();
        newNode.data = data;
        newNode.next = head;
        head = newNode;
    }

    // Display linked list.
    static void printList(Node head)
    {
        while (head != null) {
            System.out.print(head.data + " -> ");
            head = head.next;
        }
        System.out.println("NULL");
    }

    public static void main(String[] args)
    {
        push(9);
        push(7);
        push(2);
        push(5);
        System.out.println("Linked list is: ");
        printList(head);
        System.out.println("Maximum element in linked list: " + largestElement(head));
        System.out.println("Minimum element in linked list: " + smallestElement(head));
    }
}
class Node:
    def __init__(self):
        self.data = None
        self.next = None

head = None

# Returns the largest element in the list.
def largestElement(head):
    max_val = float('-inf')
    while head is not None:
        if head.data > max_val:
            max_val = head.data
        head = head.next
    return max_val

# Returns the smallest element in the list.
def smallestElement(head):
    min_val = float('inf')
    while head is not None:
        if head.data < min_val:
            min_val = head.data
        head = head.next
    return min_val

# Inserts a new node at the front of the list.
def push(data):
    global head
    newNode = Node()
    newNode.data = data
    newNode.next = head
    head = newNode

# Display linked list.
def printList(head):
    while head is not None:
        print(head.data, end=" -> ")
        head = head.next
    print("NULL")

# Driver code: construct the singly linked list 5 -> 2 -> 7 -> 9
push(9)
push(7)
push(2)
push(5)
print("Linked list is: ")
printList(head)
print("Maximum element in linked list: ", end="")
print(largestElement(head))
print("Minimum element in linked list: ", end="")
print(smallestElement(head))
Linked list is:
5 -> 2 -> 7 -> 9 -> NULL
Maximum element in linked list: 9
Minimum element in linked list: 2
Time Complexity: O(n), where n is the number of nodes, both for finding the minimum value and for finding the largest element in a linked list in C, C++, Java, and Python, as a full traversal of the list is needed.
In conclusion, finding the smallest and largest element in a linked list can be accomplished by iterating through the list and comparing each node's value with the current smallest or largest value. These operations have a time complexity of O(n); since every node must be examined at least once, a single full traversal is optimal for a plain linked list.
Frequently Asked Questions Related to Finding the Largest Element in a Linked List in C
Here are some FAQs related to finding the minimum value and the largest element in a linked list in the C programming language.
Q1. What happens if the linked list is empty when we try to find the smallest or largest element?
Ans. If the linked list is empty, we cannot find the smallest or largest element, as there are no nodes to compare. In this case, we could return a default value or raise an error, depending on the
requirements of the application.
Q2. Is it possible to find the smallest or largest element in a linked list using recursion instead of iteration?
Ans. Yes, it is possible to find the smallest or largest element in a linked list using recursion. However, recursion may not be the most efficient approach, as it can lead to stack overflow errors
for very large linked lists. The iterative approach is generally preferred.
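To make the recursive approach concrete, here is a minimal Python sketch for the maximum (the symmetric version with < finds the minimum). The function name find_max is hypothetical, and the sketch assumes a non-empty list of Node objects like the ones above:

def find_max(head):
    # Base case: a single node is its own maximum.
    if head.next is None:
        return head.data
    # Recursive case: compare this node with the maximum of the rest.
    rest_max = find_max(head.next)
    return head.data if head.data > rest_max else rest_max

Each recursive call adds a stack frame, which is why the iterative version is preferred for very long lists.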
Q3. Can we find the smallest and largest element in a linked list in constant time?
Ans. No, it is not possible to find the smallest and largest element in a linked list in constant time, as we need to examine each element in the list to find the smallest or largest value. The time
complexity of this operation is O(n), where n is the number of nodes in the linked list.
Q4. If the linked list contains duplicate values, which node's value should be considered as the smallest or largest element?
Ans. If the linked list contains duplicate values, we can choose to consider any one of the nodes with the smallest or largest value as the smallest or largest element. However, this decision should
be based on the requirements of the application and should be documented clearly in the code or documentation.
Algebra Examples
Step 1
Since the directrix is vertical, use the equation of a parabola that opens left or right.
Step 2
Step 2.1
The vertex is halfway between the directrix and focus. Find the coordinate of the vertex using the formula. The coordinate will be the same as the coordinate of the focus.
Step 2.2
Step 2.2.1
Cancel the common factor of and .
Step 2.2.1.1
Step 2.2.1.2
Step 2.2.1.3
Step 2.2.1.4
Cancel the common factors.
Step 2.2.1.4.1
Step 2.2.1.4.2
Cancel the common factor.
Step 2.2.1.4.3
Step 2.2.1.4.4
Step 2.2.2
Step 3
Find the distance from the focus to the vertex.
Step 3.1
The distance from the focus to the vertex and from the vertex to the directrix is . Subtract the coordinate of the vertex from the coordinate of the focus to find .
Step 3.2
Step 4
Substitute in the known values for the variables into the equation.
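As an illustration of the procedure above, here is a worked sketch with an assumed focus and directrix; the focus \((3, 2)\) and directrix \(x = -1\) are hypothetical values, not from the original problem:
\[
\begin{aligned}
&\text{Focus } (3, 2), \quad \text{directrix } x = -1 \quad\Rightarrow\quad \text{the parabola opens right.}\\
&\text{Vertex: } \left(\tfrac{3 + (-1)}{2},\ 2\right) = (1, 2), \qquad p = 3 - 1 = 2.\\
&\text{Equation: } (y - k)^2 = 4p(x - h) \ \Longrightarrow\ (y - 2)^2 = 8(x - 1).
\end{aligned}
\]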
NCERT Solutions for Class 2 Maths Chapter 11 Data Handling - FREE PDF
NCERT Solutions for Class 2 Chapter 11 Maths - FREE PDF Download
This chapter introduces young learners to the basics of data handling, focusing on how to collect, organise, and interpret data in simple ways. In the NCERT Solutions for Class 2 Maths Chapter 11, Vedantu provides
provides clear explanations and practical exercises to help students understand these concepts effortlessly.
These solutions are aligned with the CBSE Class 2 Maths syllabus, ensuring that students are well-prepared for their exams. For additional support, you can also explore NCERT solutions for Class 2 to
build a strong foundation in mathematical concepts.
Glance on Class 2 Maths Chapter 11 - Data Handling
• NCERT Solutions for Class 2 Maths Chapter 11 explains how to collect and organise information in a simple manner.
• It teaches how to gather data through counting and observing everyday items.
• This chapter explains how to arrange data in lists and tables to make it easier to understand.
• These solutions also introduce tally marks as a way to record and count items quickly.
• This chapter shows how to draw and read basic bar graphs to visually represent data.
FAQs on NCERT Solutions for Class 2 Maths Chapter 11: Data handling
1. What is Class 2 Maths Chapter 11 Data Handling about?
Class 2 Maths Chapter 11 focuses on data handling, teaching students how to collect, organise, and interpret data in simple ways.
2. What are tally marks in Class 2 Maths Data handling?
Tally marks are a way to count and record data quickly, using vertical lines and groups of five; for example, a count of seven is recorded as a group of five (four lines crossed by a fifth) followed by two more lines.
3. How can Class 2 students use data handling?
Data handling helps Class 2 students learn to gather information, create simple tables, and understand basic graphs.
4. What types of graphs are introduced in Class 2 Maths Chapter 11?
In this chapter, students learn to draw and read basic bar graphs to represent data visually.
5. How does Class 2 Maths Chapter 11 benefit students?
It helps students develop practical skills in data collection and interpretation, boosting their problem-solving abilities and confidence.
6. What skills do students learn from Class 2 Maths Chapter 11?
Students learn to collect data, use tally marks, organise information, and read simple graphs.
7. Can students use the NCERT solutions for Class 2 by Vedantu to prepare for exams?
Yes, using the NCERT solutions for Class 2 by Vedantu helps students review and practise key concepts, making them better prepared for exams, and it is completely free.
8. What types of data handling activities are included in the chapter Data Handling in class 2?
The chapter includes activities such as creating tally marks, organising data in tables, and drawing bar graphs.
9. How do NCERT solutions for Class 2 Maths Chapter 11 help with problem-solving?
The solutions provide step-by-step explanations and practice problems that enhance problem-solving skills related to data handling.
10. Are there any real-life applications of data handling taught in this chapter?
Yes, the chapter encourages students to use data handling skills in real-life situations, such as counting objects or comparing quantities.
11. What should students focus on while studying Class 2 Maths Chapter 11?
Students should focus on understanding how to collect and organise data, interpret basic graphs, and practise tally marks for accurate data recording.
Latest posts of: yurets_z
Hi Sergii,
It turns out that the model of Trolololo is nothing else than a power law. He wrote the formula log10(Price) = A*ln(days) - C, but this can be rewritten as Price = 10^(A*ln(days) - C), or Price = days^(A*ln(10)) / 10^C, or Price = B*days^n, where B = 1/10^C and n = A*ln(10).
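(A quick numeric check of that algebra in Python; the coefficients A and C below are arbitrary placeholders, not Trolololo's fitted parameters:)

import math

A, C = 1.8, 10.0                # arbitrary placeholder coefficients
days = 1500.0

# Original form: log10(price) = A*ln(days) - C
price_log_form = 10 ** (A * math.log(days) - C)

# Rewritten power law: price = B * days**n
B, n = 1 / 10 ** C, A * math.log(10)
price_power_form = B * days ** n

print(price_log_form, price_power_form)   # the two values agree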
It is a big deal because a power law first of all looks like a straight line in a log log graph and BTC looks exactly like that. .....
A really nice observation, but BTC cyclic nature looks much better in a Log-linear scale, since time is often a linear thing (IMHO).
Like in the example here
k_corona(G, k, core_number=None)
Return the k-corona of G.
The k-corona is the subgraph of nodes in the k-core which have exactly k neighbours in the k-core.
G : NetworkX graph
A graph or directed graph
k : int
The order of the corona.
core_number : dictionary, optional
Precomputed core numbers for the graph G.
G : NetworkX graph
The k-corona subgraph
The k-corona is not defined for graphs with self loops or parallel edges.
Not implemented for graphs with parallel edges or self loops.
For directed graphs the node degree is defined to be the in-degree + out-degree.
Graph, node, and edge attributes are copied to the subgraph.
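Examples

A usage sketch (the example graph is arbitrary; passing core_number is optional and is shown here only to illustrate the precomputed-core path):

>>> import networkx as nx
>>> G = nx.karate_club_graph()
>>> core = nx.core_number(G)   # optional precomputation
>>> corona = nx.k_corona(G, k=2, core_number=core)
>>> nodes = sorted(corona.nodes())  # nodes with exactly 2 neighbours in the 2-core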
References

[R230] k-core (bootstrap) percolation on complex networks: Critical phenomena and nonlocal effects, A. V. Goltsev, S. N. Dorogovtsev, and J. F. F. Mendes, Phys. Rev. E 73, 056101 (2006)
Cyclic Quadrilaterals | Brilliant Math & Science Wiki
A cyclic quadrilateral is a quadrilateral that can be inscribed in a circle, meaning that there exists a circle that passes through all four vertices of the quadrilateral.
Cyclic quadrilaterals are useful in various types of geometry problems, particularly those in which angle chasing is required. It is not unusual, for instance, to intentionally add points (and lines)
to diagrams in order to exploit the properties of cyclic quadrilaterals.
The angles of cyclic quadrilaterals satisfy several important relations, as they are all inscribed angles of the circumcircle. More specifically, by the inscribed angle theorem,
\[\begin{array}{llll}
\angle ADB = \frac{\overset{\frown}{ACB}}{2}, & \angle DBC = \frac{\overset{\frown}{CAD}}{2}, & \angle BCA = \frac{\overset{\frown}{ADB}}{2}, & \angle CAD = \frac{\overset{\frown}{DBC}}{2},\\
\angle ABC = \frac{\overset{\frown}{AC}}{2}, & \angle ABD = \frac{\overset{\frown}{AD}}{2}, & \angle DCA = \frac{\overset{\frown}{DA}}{2}, & \angle DCB = \frac{\overset{\frown}{DB}}{2},\\
\angle BAD = \frac{\overset{\frown}{BD}}{2}, & \angle BAC = \frac{\overset{\frown}{BC}}{2}, & \angle CDB = \frac{\overset{\frown}{CB}}{2}, & \angle CDA = \frac{\overset{\frown}{CA}}{2}
\end{array}\]
which leads to the following two results:
Opposite Angles
The opposite angles of a cyclic quadrilateral add to \(180^{\circ}\), or \(\pi\) radians.
If \(ABCD\) is a cyclic quadrilateral, find the value of \(\cos { A } +\cos { B } +\cos { C } +\cos { D }.\)
Diagonal Angles
In a cyclic quadrilateral \(ACBD\), we have
\[\angle ABC = \angle ADC\]
and similar relations \((\)e.g. \(\angle BCD = \angle BAD).\)
These can both be directly verified from the above angle equalities.
Also recall that \(\overset{\frown}{AB} = \angle AOB\), where \(O\) is the center of the circle, by the inscribed angle theorem. This can also lead to useful information, if the center of the
circumcircle is relevant.
Options: \(30^\circ\), \(40^\circ\), \(50^\circ\), \(60^\circ\)
In the cyclic quadrilateral \(WXYZ\) on the circle centered at \(O,\) \(\angle ZYW = 10^\circ\) and \(\angle YOW=100^\circ.\)
What is the measure of \(\angle YWZ?\)
The sides and diagonals of a cyclic quadrilateral are closely related:
In cyclic quadrilateral \(ACBD\),
\[AB \cdot CD = AC \cdot BD + BC \cdot AD.\]
In other words, the product of the lengths of the diagonals is equal to the sum of the products of opposite sides.
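For instance, applied to a rectangle with side lengths \(a\) and \(b\), whose diagonals both have length \(d\), the theorem gives
\[d \cdot d = a \cdot a + b \cdot b \quad\Longrightarrow\quad d^2 = a^2 + b^2,\]
recovering the Pythagorean theorem.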
Consider all sets of 4 points \(A, B, C, D \) which satisfy the following conditions:
• \(AB\) is an integer.
• \(BC = AB + 1\).
• \(CD = BC + 1\).
• \(DA = CD + 1\).
• \(AC = DA + 1\).
• \(AC\) divides \(AB \times CD + BC \times DA\).
Over all such sets, what is \( \max \lceil BD \rceil ? \)
In fact, it is true of any quadrilateral that
\[AB \cdot CD \leq AC \cdot BD + BC \cdot AD,\]
meaning that the cyclic quadrilateral is the equality case of this inequality.
In fact, more can be said about the diagonals: if \(a,b,c,d\) are the lengths of the sides of the quadrilateral (in clockwise order), the diagonals \(p\) and \(q\) satisfy
\[\begin{aligned} p&=\sqrt{\frac {(ab+cd)(ac+bd)}{ad+bc}},\\ q&=\sqrt{\frac {(ac+bd)(ad+bc)}{ab+cd}}, \end{aligned}\]
which also demonstrates Ptolemy's theorem.
The cyclic quadrilateral is the equality case of another inequality: given four side lengths, the cyclic quadrilateral maximizes the resulting area.
(Brahmagupta's Formula) Let a cyclic quadrilateral have side lengths \(a,b,c,d\), and let \(s=\frac{a+b+c+d}{2}\) be called the semiperimeter. Then the area of the quadrilateral is equal to \[\sqrt{(s-a)(s-b)(s-c)(s-d)}.\]
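For instance (a quick check of the formula, my own addition): a unit square has \(s = \frac{1+1+1+1}{2} = 2\) and area \(\sqrt{(2-1)(2-1)(2-1)(2-1)} = 1\), as expected.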
Find the area of a cyclic quadrilateral with sides 2, 2, 3, 1.
Round your answer to the nearest hundredth.
It is worth noting that in the degenerate case where one side length is zero, the above formula reduces to Heron's formula for triangles. Both of these are special cases of Bretschneider's formula.
Here are a few well-known problems which use the basic properties of cyclic quadrilaterals. They are mainly of Olympiad flavor and are solvable by elementary methods.
Problem 1. Let \(E\) and \(F\) be two points on sides \(BC\) and \(CD\) of square \(ABCD\), such that \(\angle EAF=45^{\circ}\). Let \(M\) and \(N\) be the intersection of diagonal \(BD\) with \(AE\) and \(AF,\) respectively. Let \(P\) be the intersection of \(MF\) and \(NE\). Prove that \(AP\) is perpendicular to \(EF\).
Problem 2. \(\triangle ABC\) is inscribed in the circle centered at \(O\) such that the angles \(\angle B\) and \(\angle C\) are acute. If \(H\) is its orthocenter, then prove that \(\angle BAH= \angle CAO\).
Problem 3. Let \(D, E,\) and \(F\) be the feet of the altitudes of \(\triangle ABC.\) Prove that the altitudes of \(\triangle ABC\) are the angle bisectors of \(\triangle DEF.\) | {"url":"https://brilliant.org/wiki/cyclic-quadrilaterials/","timestamp":"2024-11-10T02:25:22Z","content_type":"text/html","content_length":"60863","record_id":"<urn:uuid:c97baeeb-9d9f-48b2-bb8a-de548fd61a1a>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00017.warc.gz"} |
2.9) Quiz 1: Questions - Functions - AQA GCSE Maths Foundation
1) We have the function machine below.
a) What is the output when the input is 3?
b) What is the output when the input is 10?
c) What is the output when the input is -2?
2) We have the function machine below.
a) What is the output when the input is 15?
b) What is the output when the input is 9?
c) What is the output when the input is -7?
3) We have the function machine below.
a) What is the output when the input is 4?
b) What is the output when the input is 10?
c) What is the input when the output is 15?
d) What is the input when the output is 27?
4) We have the function machine below.
a) What is the output when the input is 20?
b) What is the output when the input is 45?
c) What is the input when the output is 23?
d) What is the input when the output is 10? | {"url":"https://www.elevise.co.uk/g-a-m-f-29-q1.html","timestamp":"2024-11-03T19:06:56Z","content_type":"text/html","content_length":"91249","record_id":"<urn:uuid:c448825c-5e02-4eae-b051-dcba309760d0>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00041.warc.gz"} |
7.2 Linear maps and matrices | Linear Algebra 2024 Notes
7.2 Linear maps and matrices
For most of this section we will focus on linear maps in Euclidean space, that is maps from \(\mathbb{R}^n\) to \(\mathbb{R}^m\) for some \(m,n \in \mathbb{N}\). It turns out that such maps correspond to matrices in \(M_{m,n}(\mathbb{R})\).
Note: although we focus on real matrices, we could again replace every \(\mathbb{R}\) by \(\mathbb{C}\) and proceed in the same way.
We have already seen that any matrix can be thought of as a linear map, so we need only consider how an arbitrary linear map can be represented as a matrix. Consider a linear map \(T:\mathbb{R}^n \to \mathbb{R}^m\). In order for this map to be linear, each of the components must be a linear equation with zero constant term. The proof of this is left as an exercise.
Now if we think about how this map acts on \(x=(x_1, \dots, x_n)\in \mathbb{R}^n\) we must have that \[T\begin{pmatrix}x_1\\ \vdots \\ x_n \end{pmatrix}=\begin{pmatrix}a_{11}x_1+ a_{12}x_2+\cdots + a_{1n}x_n \\ \vdots \\ a_{m1}x_1+a_{m2}x_2 +\cdots +a_{mn}x_n \end{pmatrix}\] for some coefficients \(a_{ij}\in \mathbb{R}\). But this is exactly the same as the action of the matrix \(A=(a_{ij})\in M_{m,n}(\mathbb{R})\) on \(x\).
In cases where the map is presented as linear equations in each component it is straightforward to read off the matrix as above. We have also seen some examples where the form of the linear map is
less obvious. In general, we can use the fact that the \(i\)th column of our matrix will be given by \(T(e_i)\) for all of our standard basis vectors.
Definition 7.29: (Matrix of a linear map)
Let \(T:\mathbb{R}^n \to \mathbb{R}^m\) be a linear map. The matrix corresponding to this linear map in the standard basis is the matrix \(M_T\in M_{m,n}(\mathbb{R})\) whose \(i\)th column is \(T(e_i)\).
Note that the matrix corresponding to a linear map is not unique, but instead depends on our choice of basis for the domain and co-domain. For now we will assume that we use the standard bases for
both, but later in the course we will explore how to deal with different choices of basis, and some reasons why we may prefer to use alternative bases.
Example 7.30:
Consider \(T:\mathbb{R}^3 \to \mathbb{R}^3\) given by \(T(x)=(x_1, -x_2, 2x_3)\). This is a linear map (the proof of this is left as an exercise). Then we have \(T(e_1)=\begin{pmatrix} 1\\0\\0\end{pmatrix}\), \(T(e_2)=\begin{pmatrix} 0\\-1\\0\end{pmatrix}\) and \(T(e_3)=\begin{pmatrix} 0\\0\\2\end{pmatrix}\), so the corresponding matrix is \[A=\begin{pmatrix}1&0&0\\0&-1&0\\0&0&2\end{pmatrix}.\]
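As an illustration (my own sketch, not from the notes), Definition 7.29 can be turned into a small numpy routine that builds the matrix of a linear map from its action on the standard basis:
import numpy as np

def matrix_of(T, n, m):
    """Return the m x n matrix whose i-th column is T(e_i)."""
    A = np.zeros((m, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        A[:, i] = T(e)
    return A

T = lambda x: np.array([x[0], -x[1], 2 * x[2]])  # the map from Example 7.30
print(matrix_of(T, 3, 3))                        # diag(1, -1, 2)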
We previously introduced some operations for linear maps: addition, multiplication by a scalar, and composition. We want to study now how these translate to matrices.
Theorem 7.31:
Let \(S,T:\mathbb{R}^n\to\mathbb{R}^m\) be linear maps with corresponding matrices \(M_T=(t_{ij}), M_S=(s_{ij})\), and let \(\lambda\in\mathbb{R}\). Then the matrices corresponding to the maps \(\
lambda T\) and \(S+T\) are given by \[M_{\lambda T}=\lambda M_T=(\lambda t_{ij}) \quad \text{and}\quad M_{S+T}=(s_{ij}+t_{ij})=M_S+M_T.\]
Let \(R=S+T\). The matrix associated with \(R\) is by Definition 7.29 given by \(M_R=(r_{ij})\) with \(i\)th column \(R(e_i)\), but since \(R(e_i)=(S+T)(e_i)=S(e_i)+T(e_i)\) we have that the \(i\)th
column of \(M_R\) is the \(i\)th column of \(M_S\) plus the \(i\)th column of \(M_T\), and so \(M_R=M_S+M_T\).
Similarly we find that \(M_{\lambda T}\) has \(i\)th column \(\lambda T(e_i)\), and so \(M_{\lambda T}=\lambda M_T\).
So the above theorem tells us that when adding linear maps/multiplying them by a scalar we just add the corresponding matrix elements/multiply them by a scalar. Note that this extends to expressions
of the form \[M_{\lambda S+\mu T}=\lambda M_S+\mu M_T .\] and these expressions actually define the addition of matrices.
The composition of maps leads to multiplication of matrices.
Theorem 7.32:
Let \(T:\mathbb{R}^n\to\mathbb{R}^m\) and \(S:\mathbb{R}^m\to \mathbb{R}^l\) be linear maps with corresponding matrices \(M_T=(t_{ij})\) and \(M_S=(s_{ij})\), where \(M_T\) is \(m\times n\) and \(M_S
\) is \(l\times m\). Then the matrix \(M_{S\circ T}=(r_{ik})\) corresponding to the composition \(R=S\circ T\) of \(T\) and \(S\) has elements \[r_{ik}=\sum_{j=1}^m s_{ij}t_{jk}\] and is an \(l\times
n\) matrix.
The proof of this is left as an exercise. However, note that it follows from our definitions, and in fact it is for this reason that we defined matrix multiplication in the way that we did.
We can think about the formula for matrix multiplication as \(r_{ik}\) being the dot product of the \(i\)th row vector of \(M_S\) and the \(k\)th column vector of \(M_T\). This formula defines a
product of matrices by \[M_SM_T:=M_{S\circ T} .\]
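A quick numerical sanity check (mine, not from the notes) that the matrix product really does act as the composition:
# Hypothetical check (illustration only): (M_S M_T)x equals M_S(M_T x).
import numpy as np
rng = np.random.default_rng(0)
M_S, M_T = rng.random((2, 3)), rng.random((3, 4))
x = rng.random(4)
assert np.allclose(M_S @ (M_T @ x), (M_S @ M_T) @ x)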
So we have now used the notions of addition and composition of linear maps to define addition and products of matrices. The results about maps then immediately imply corresponding results for
Theorem 7.33:
Let \(A,B\) be \(m\times n\) matrices and \(C\) an \(l\times m\) matrix, then \[C(A+B)=CA+CB .\] Let \(A,B\) be \(l\times m\) matrices and \(C\) an \(m\times n\) matrix, then \[(A+B)C=AC+BC .\] Let \
(C\) be an \(m\times n\) matrix, \(B\) be an \(l\times m\) matrix and \(A\) a \(k\times l\) matrix, then \[A(BC)=(AB)C .\]
We saw in Theorem 3.11 that matrices define linear maps, and in Theorem 7.12 the above properties were shown for linear maps.
The first two properties mean that matrix multiplication is distributive over addition, and the last one is called associativity. In particular associativity would be quite cumbersome to prove
directly for matrix multiplication, whereas the proof for linear maps is very simple. This shows that often an abstract approach simplifies proofs a lot. The price one pays for this is that it takes
sometimes longer to learn and understand the material in a more abstract language.
Having identified our matrices with linear maps, we can now consider concepts like the image, kernel, rank and nullity of a matrix. Once again we can make use of Gaussian elimination, this time to
find the rank and nullity of a matrix and hence of the corresponding linear map.
Theorem 7.34:
Let \(A\in M_{m,n}(\mathbb{R})\) and assume that the row echelon form of \(A\) has \(k\) leading \(1\)'s, then \(\operatorname{rank}A=k\) and \(\operatorname{nullity}A=n-k\).
So in order to find the rank of a matrix we use elementary row operations to bring it to row echelon form and then we just count the number of leading \(1\)'s. The proof will be left as an exercise.
Example 7.35:
Consider \(A=\begin{pmatrix}1&-1 &3\\2&0&4\\0&3&1\\-1&-1&1 \end{pmatrix}\in M_{4,3}(\mathbb{R})\). We want to find the rank and nullity of this matrix. Using row operations, we find that a row
echelon form of the matrix is \(\begin{pmatrix}1&-1&3\\0&1&-1\\0&0&1\\0&0&0 \end{pmatrix}\)
Hence we have that \(\operatorname{rank}A=3\) and \(\operatorname{nullity}A=0\).
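The rank computation in Example 7.35 can also be checked numerically (a hedged aside, my own addition):
import numpy as np
A = np.array([[1, -1, 3], [2, 0, 4], [0, 3, 1], [-1, -1, 1]])
print(np.linalg.matrix_rank(A))  # 3, so nullity = 3 - 3 = 0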
We also previously saw that we could use the determinant to tell us about whether a system of \(n\) equations in \(n\) unknowns had a unique solution, but we can now go a step further using the image
of a matrix to determine the outcome when the determinant is zero.
Theorem 7.36:
The system of linear equations \(Ax=b\), with \(A\in M_{n}(\mathbb{R})\), has a unique solution if and only if \(\det A\neq 0.\) If \(\det A=0\) and \(b\notin \operatorname{Im}A\) no solution exists,
and if \(\det A=0\) and \(b\in \operatorname{Im}A\) then infinitely many solutions exist.
We know that \(A\) is invertible if and only if \(\det A\neq 0\), and then we find \[x=A^{-1}b.\] So \(\det A\neq 0\) means the system has a unique solution. If \(\det A=0\), then \(\operatorname
{nullity}A>0\) and \(\operatorname{rank}A <n\), so a solution exists only if \(b\in \operatorname{Im}A\), and if a solution \(x_0\) exists, then all vectors in \(\{x_0\}+\ker A\) are solutions, too,
hence there are infinitely many solutions.
In the next chapter, we will continue to explore linear maps and different ways we can represent them. | {"url":"https://bookdown.org/rachaelmcarey/lanotes/linear-maps-and-matrices.html","timestamp":"2024-11-06T09:09:25Z","content_type":"text/html","content_length":"32704","record_id":"<urn:uuid:b4b338c9-6243-462c-a348-28934d7406aa>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00802.warc.gz"} |
Creating Univariate Conditional Mean Models
This topic shows how to represent various autoregressive integrated moving average (ARIMA) models, which are univariate conditional mean models, as an arima model object, and it shows how to
interpret the property values of a specified object.
Default ARIMA Model
The default ARIMA(p,D,q) model in Econometrics Toolbox™ is the nonseasonal model of the form
${\Delta }^{D}{y}_{t}=c+{\varphi }_{1}{\Delta }^{D}{y}_{t-1}+\dots +{\varphi }_{p}{\Delta }^{D}{y}_{t-p}+{\theta }_{1}{\epsilon }_{t-1}+\dots +{\theta }_{q}{\epsilon }_{t-q}+{\epsilon }_{t}.$
You can write this equation in condensed form using lag operator notation:
$\varphi \left(L\right){\left(1-L\right)}^{D}{y}_{t}=c+\theta \left(L\right){\epsilon }_{t}$
In either equation, the default innovation distribution is Gaussian with mean zero and constant variance.
At the command line, you can specify a model of this form using the shorthand syntax arima(p,D,q). For the input arguments p, D, and q, enter the number of nonseasonal AR terms (p), the order of
nonseasonal integration (D), and the number of nonseasonal MA terms (q), respectively.
When you use this shorthand syntax, arima creates an arima model with these default property values.
Property Name Property Data Type
AR Cell vector of NaNs
Beta Empty vector [] of regression coefficients corresponding to exogenous covariates
Constant NaN
D Degree of nonseasonal integration, D
Distribution "Gaussian"
MA Cell vector of NaNs
P Number of AR terms plus degree of integration, p + D
Q Number of MA terms, q
SAR Cell vector of NaNs
SMA Cell vector of NaNs
Variance NaN
To assign nondefault values to any properties, you can modify the created model object using dot notation.
Notice that the inputs D and q are the values arima assigns to properties D and Q. However, the input argument p is not necessarily the value arima assigns to the model property P. P stores the
number of presample observations needed to initialize the AR component of the model. For nonseasonal models, the required number of presample observations is p + D.
To illustrate, consider specifying the ARIMA(2,1,1) model
$\left(1-{\varphi }_{1}L-{\varphi }_{2}{L}^{2}\right){\left(1-L\right)}^{1}{y}_{t}=c+\left(1+{\theta }_{1}L\right){\epsilon }_{t},$
where the innovation process is Gaussian with (unknown) constant variance.
Mdl = arima(2,1,1)
Mdl =
arima with properties:
Description: "ARIMA(2,1,1) Model (Gaussian Distribution)"
Distribution: Name = "Gaussian"
P: 3
D: 1
Q: 1
Constant: NaN
AR: {NaN NaN} at lags [1 2]
SAR: {}
MA: {NaN} at lag [1]
SMA: {}
Seasonality: 0
Beta: [1×0]
Variance: NaN
Notice that the model property P does not have value 2 (the AR degree). With the integration, a total of p + D (here, 2 + 1 = 3) presample observations are needed to initialize the AR component of
the model.
The created model, Mdl, has NaNs for all parameters. A NaN value signals that a parameter needs to be estimated or otherwise specified by the user. All parameters must be specified to forecast or
simulate the model.
To estimate parameters, input the model object (along with data) to estimate. This returns a new fitted arima model object. The fitted model object has parameter estimates for each input NaN value.
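As a brief, hedged sketch of that workflow (my own illustration; it assumes a response vector y is already in the workspace):
Mdl = arima(2,1,1);                      % all coefficients NaN, to be estimated
EstMdl = estimate(Mdl,y);                % fit the ARIMA(2,1,1) model to the data y
[yF,yMSE] = forecast(EstMdl,10,'Y0',y);  % 10-step-ahead forecasts and forecast MSEs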
Calling arima without any input arguments returns an ARIMA(0,0,0) model specification with default property values:
DefaultMdl = arima
DefaultMdl =
arima with properties:
Description: "ARIMA(0,0,0) Model (Gaussian Distribution)"
Distribution: Name = "Gaussian"
P: 0
D: 0
Q: 0
Constant: NaN
AR: {}
SAR: {}
MA: {}
SMA: {}
Seasonality: 0
Beta: [1×0]
Variance: NaN
Specify Nonseasonal Models Using Name-Value Arguments
The best way to specify models to arima is using name-value arguments. You do not need, nor are you able, to specify a value for every model object property. arima assigns default values to any
properties you do not (or cannot) specify.
In condensed, lag operator notation, nonseasonal ARIMA(p,D,q) models are of the form
$\varphi \left(L\right){\left(1-L\right)}^{D}{y}_{t}=c+\theta \left(L\right){\epsilon }_{t}.$ (1)
You can extend this model to an ARIMAX(p,D,q) model with the linear inclusion of exogenous variables. This model has the form
$\varphi \left(L\right){y}_{t}={c}^{\ast }+{x}_{t}^{\prime }\beta +{\theta }^{\ast }\left(L\right){\epsilon }_{t},$ (2)
where \(c^{\ast} = c/(1-L)^D\) and \(\theta^{\ast}(L) = \theta(L)/(1-L)^D\).
If you specify a nonzero D, then Econometrics Toolbox differences the response series y[t] before the predictors enter the model. You should preprocess the exogenous covariates x[t] by testing for stationarity and differencing if any are unit root nonstationary. If any nonstationary exogenous covariate enters the model, then the false negative rate for significance tests of \(\beta\) can increase.
For the distribution of the innovations, \(\epsilon_t\), there are two choices:
• Independent and identically distributed (iid) Gaussian or Student's t with a constant variance, ${\sigma }_{\epsilon }^{2}$.
• Dependent Gaussian or Student's t with a conditional variance process, ${\sigma }_{t}^{2}$. Specify the conditional variance model using a garch, egarch, or gjr model.
The arima default for the innovations is an iid Gaussian process with constant (scalar) variance.
In order to estimate, forecast, or simulate a model, you must specify the parametric form of the model (e.g., which lags correspond to nonzero coefficients, the innovation distribution) and any known
parameter values. You can set any unknown parameters equal to NaN, and then input the model to estimate (along with data) to get estimated parameter values.
arima (and estimate) returns a model corresponding to the model specification. You can modify models to change or update the specification. Input models (with no NaN values) to forecast or simulate
for forecasting and simulation, respectively. Here are some example specifications using name-value arguments.
Model Specification
• Model: ${y}_{t}=c+{\varphi }_{1}{y}_{t-1}+{\epsilon }_{t}$ with ${\epsilon }_{t}={\sigma }_{\epsilon }{z}_{t}$, z[t] Gaussian.
  Specification: arima('AR',NaN) or arima(1,0,0)
• Model: ${y}_{t}={\epsilon }_{t}+{\theta }_{1}{\epsilon }_{t-1}+{\theta }_{2}{\epsilon }_{t-2}$ with ${\epsilon }_{t}={\sigma }_{\epsilon }{z}_{t}$, z[t] Student's t with unknown degrees of freedom.
  Specification: arima('Constant',0,'MA',{NaN,NaN},...
• Model: $\left(1-0.8L\right)\left(1-L\right){y}_{t}=0.2+\left(1+0.6L\right){\epsilon }_{t}$ with ${\epsilon }_{t}=0.1{z}_{t}$, z[t] Student's t with eight degrees of freedom.
  Specification: ...'Variance',0.1^2,'Distribution',struct...
• Model: $\left(1+0.5L\right){\left(1-L\right)}^{1}{y}_{t}={x}_{t}^{\prime }\left[\begin{array}{c}-5\\ 2\end{array}\right]+{\epsilon }_{t}$ with ${\epsilon }_{t}~N\left(0,1\right)$.
  Specification: arima('Constant',0,'AR',-0.5,'D',1,'Beta',[-5 2])
You can specify the following name-value arguments to create nonseasonal arima models.
Name-Value Arguments for Nonseasonal ARIMA Models
Name Corresponding Model Term(s) in Equation When to Specify
AR (Nonseasonal AR coefficients, ${\varphi }_{1},\dots ,{\varphi }_{p}$):
To set equality constraints for the AR coefficients. For example, to specify the AR coefficients in the model
${y}_{t}=0.8{y}_{t-1}-0.2{y}_{t-2}+{\epsilon }_{t},$
specify 'AR',{0.8,-0.2}.
You only need to specify the nonzero elements of AR. If the nonzero coefficients are at nonconsecutive lags, specify the corresponding lags using ARLags.
Any coefficients you specify must correspond to a stable AR operator polynomial.
ARLags (Lags corresponding to nonzero, nonseasonal AR coefficients):
ARLags is not a model property.
Use this argument as a shortcut for specifying AR when the nonzero AR coefficients correspond to nonconsecutive lags. For example, to specify nonzero AR coefficients at lags 1 and 12, e.g., ${y}_{t}={\varphi }_{1}{y}_{t-1}+{\varphi }_{12}{y}_{t-12}+{\epsilon }_{t},$ specify 'ARLags',[1,12].
Use AR and ARLags together to specify known nonzero AR coefficients at nonconsecutive lags. For example, if in the given AR(12) model ${\varphi }_{1}=0.6$ and ${\varphi }_{12}=-0.3,$ specify 'AR',{0.6,-0.3},'ARLags',[1,12].
Beta (Values of the coefficients of the exogenous covariates):
Use this argument to specify the values of the coefficients of the exogenous variables. For example, use 'Beta',[0.5 7 -2] to specify $\beta ={\left[\begin{array}{ccc}0.5& 7& -2\end{array}\right]}^{\prime }.$
By default, Beta is an empty vector.
Constant (Constant term, c):
To set equality constraints for c. For example, for a model with no constant term, specify 'Constant',0. By default, Constant has value NaN.
D (Degree of nonseasonal differencing, D):
To specify a degree of nonseasonal differencing greater than zero. For example, to specify one degree of differencing, specify 'D',1. By default, D has value 0 (meaning no nonseasonal integration).
Distribution (Distribution of the innovation process):
Use this argument to specify a Student's t innovation distribution. By default, the innovation distribution is Gaussian. For example, to specify a t distribution with unknown degrees of freedom, specify 'Distribution','t'.
To specify a t innovation distribution with known degrees of freedom, assign Distribution a data structure with fields Name and DoF. For example,
for a t distribution with nine degrees of freedom, specify 'Distribution',struct('Name','t','DoF',9).
MA (Nonseasonal MA coefficients, ${\theta }_{1},\dots ,{\theta }_{q}$):
To set equality constraints for the MA coefficients. For example, to specify the MA coefficients in the model
${y}_{t}={\epsilon }_{t}+0.5{\epsilon }_{t-1}+0.2{\epsilon }_{t-2},$
specify 'MA',{0.5,0.2}.
You only need to specify the nonzero elements of MA. If the nonzero coefficients are at nonconsecutive lags, specify the corresponding lags using MALags.
Any coefficients you specify must correspond to an invertible MA polynomial.
MALags (Lags corresponding to nonzero, nonseasonal MA coefficients):
MALags is not a model property.
Use this argument as a shortcut for specifying MA when the nonzero MA coefficients correspond to nonconsecutive lags. For example, to specify nonzero MA coefficients at lags 1 and 4, e.g., ${y}_{t}={\epsilon }_{t}+{\theta }_{1}{\epsilon }_{t-1}+{\theta }_{4}{\epsilon }_{t-4},$ specify 'MALags',[1,4].
Use MA and MALags together to specify known nonzero MA coefficients at nonconsecutive lags. For example, if in the given MA(4) model ${\theta }_{1}=0.5$ and ${\theta }_{4}=0.2,$ specify 'MA',{0.5,0.2},'MALags',[1,4].
Variance:
• Scalar variance of the innovation process, ${\sigma }_{\epsilon }^{2}$: to set equality constraints for ${\sigma }_{\epsilon }^{2}$. For example, for a model with known variance 0.1, specify 'Variance',0.1. By default, Variance has value NaN.
• Conditional variance process, ${\sigma }_{t}^{2}$: to specify a conditional variance model. Set 'Variance' equal to a conditional variance model object, e.g., a garch model object.
You cannot assign values to the properties P and Q. For nonseasonal models,
• arima sets P equal to p + D
• arima sets Q equal to q
Specify Multiplicative Models Using Name-Value Arguments
For a time series with periodicity s, define the degree p[s] seasonal AR operator polynomial, $\Phi \left(L\right)=\left(1-{\Phi }_{1}{L}^{{p}_{1}}-\dots -{\Phi }_{{p}_{s}}{L}^{{p}_{s}}\right)$, and
the degree q[s] seasonal MA operator polynomial, $\Theta \left(L\right)=\left(1+{\Theta }_{1}{L}^{{q}_{1}}+\dots +{\Theta }_{{q}_{s}}{L}^{{q}_{s}}\right)$. Similarly, define the degree p nonseasonal
AR operator polynomial, $\varphi \left(L\right)=\left(1-{\varphi }_{1}L-\dots -{\varphi }_{p}{L}^{p}\right)$, and the degree q nonseasonal MA operator polynomial,
$\theta \left(L\right)=\left(1+{\theta }_{1}L+\dots +{\theta }_{q}{L}^{q}\right).$ (3)
A multiplicative ARIMA model with degree D nonseasonal integration and degree s seasonality is given by
$\varphi \left(L\right)\Phi \left(L\right){\left(1-L\right)}^{D}\left(1-{L}^{s}\right){y}_{t}=c+\theta \left(L\right)\Theta \left(L\right){\epsilon }_{t}.$ (4)
The innovation series can be an independent or dependent Gaussian or Student's t process. The arima default for the innovation distribution is an iid Gaussian process with constant (scalar) variance.
In addition to the arguments for specifying nonseasonal models (described in Name-Value Arguments for Nonseasonal ARIMA Models), you can specify these name-value arguments to create a multiplicative
arima model. You can extend an ARIMAX model similarly to include seasonal effects.
Name-Value Arguments for Seasonal ARIMA Models
Argument Corresponding Model Term(s) in Equation 4 When to Specify
SAR (Seasonal AR coefficients, ${\Phi }_{1},\dots ,{\Phi }_{{p}_{s}}$):
To set equality constraints for the seasonal AR coefficients. When specifying AR coefficients, use the sign opposite to what appears in Equation 4 (that is, use the sign of the coefficient as it would appear on the right side of the equation).
Use SARLags to specify the lags of the nonzero seasonal AR coefficients. Specify the lags associated with the seasonal polynomials in the periodicity of the observed data (e.g., 4, 8,... for quarterly data, or 12, 24,... for monthly data), and not as multiples of the seasonality (e.g., 1, 2,...).
For example, to specify the model
$\left(1-0.8L\right)\left(1-0.2{L}^{12}\right){y}_{t}={\epsilon }_{t},$
specify 'AR',0.8,'SAR',0.2,'SARLags',12.
Any coefficient values you enter must correspond to a stable seasonal AR polynomial.
SARLags (Lags corresponding to nonzero seasonal AR coefficients, in the periodicity of the observed data):
SARLags is not a model property.
Use this argument when specifying SAR to indicate the lags of the nonzero seasonal AR coefficients. For example, to specify the model
$\left(1-\varphi L\right)\left(1-{\Phi }_{12}{L}^{12}\right){y}_{t}={\epsilon }_{t},$
specify 'ARLags',1,'SARLags',12.
SMA (Seasonal MA coefficients, ${\Theta }_{1},\dots ,{\Theta }_{{q}_{s}}$):
To set equality constraints for the seasonal MA coefficients.
Use SMALags to specify the lags of the nonzero seasonal MA coefficients. Specify the lags associated with the seasonal polynomials in the periodicity of the observed data (e.g., 4, 8,... for quarterly data, or 12, 24,... for monthly data), and not as multiples of the seasonality (e.g., 1, 2,...).
For example, to specify the model
${y}_{t}=\left(1+0.6L\right)\left(1+0.2{L}^{12}\right){\epsilon }_{t},$
specify 'MA',0.6,'SMA',0.2,'SMALags',12.
Any coefficient values you enter must correspond to an invertible seasonal MA polynomial.
SMALags (Lags corresponding to the nonzero seasonal MA coefficients, in the periodicity of the observed data):
SMALags is not a model property.
Use this argument when specifying SMA to indicate the lags of the nonzero seasonal MA coefficients. For example, to specify the model
${y}_{t}=\left(1+{\theta }_{1}L\right)\left(1+{\Theta }_{4}{L}^{4}\right){\epsilon }_{t},$
specify 'MALags',1,'SMALags',4.
Seasonality (Seasonal periodicity, s):
To specify the degree of seasonal integration s in the seasonal differencing polynomial \(\Delta_s = 1 - L^s\). For example, to specify the periodicity for seasonal integration of monthly data, specify 'Seasonality',12.
If you specify nonzero Seasonality, then the degree of the whole seasonal differencing polynomial is one. By default, Seasonality has value 0 (meaning no periodicity and no seasonal integration).
You cannot assign values to the properties P and Q. For multiplicative ARIMA models,
• arima sets P equal to p + D + p[s] + s
• arima sets Q equal to q + q[s]
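As a hedged aside (my example, not from this page): a classic multiplicative specification is the "airline" model, ARIMA(0,1,1)x(0,1,1) with period 12, which could be written with the name-value arguments above as
Mdl = arima('Constant',0,'D',1,'Seasonality',12,'MALags',1,'SMALags',12);
where the two MA coefficients are left as NaN to be estimated.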
Specify Conditional Mean Model Using Econometric Modeler App
You can specify the lag structure and innovation distribution of seasonal and nonseasonal conditional mean models using the Econometric Modeler app. The app treats all coefficients as unknown and estimable, including the degrees of freedom parameter for a t innovation distribution.
At the command line, open the Econometric Modeler app by entering econometricModeler.
Alternatively, open the app from the apps gallery (see Econometric Modeler).
In the app, you can see all supported models by selecting a time series variable for the response in the Time Series pane. Then, on the Econometric Modeler tab, in the Models section, click the arrow
to display the models gallery.
The ARMA/ARIMA Models section contains supported conditional mean models.
For conditional mean model estimation, SARIMA and SARIMAX are the most flexible models. You can create any conditional mean model that excludes exogenous predictors by clicking SARIMA, or you can
create any conditional mean model that includes at least one exogenous predictor by clicking SARIMAX.
After you select a model, the app displays the Type Model Parameters dialog box, where Type is the model type. This figure shows the SARIMAX Model Parameters dialog box.
Adjustable parameters in the dialog box depend on Type. In general, adjustable parameters include:
• A model constant and linear regression coefficients corresponding to predictor variables
• Time series component parameters, which include seasonal and nonseasonal lags and degrees of integration
• The innovation distribution
As you adjust parameter values, the equation in the Model Equation section changes to match your specifications. Adjustable parameters correspond to input and name-value arguments described in the
previous sections and in the arima reference page.
For more details on specifying models using the app, see Fitting Models to Data and Specifying Univariate Lag Operator Polynomials Interactively.
What Are Conditional Mean Models?
Unconditional vs. Conditional Mean
For a univariate random variable y[t], the unconditional mean is simply the expected value, $E\left({y}_{t}\right).$ In contrast, the conditional mean of y[t] is the expected value of y[t] given a
conditioning set of variables, \(\Omega_t\). A conditional mean model specifies a functional form for $E\left({y}_{t}|{\Omega }_{t}\right)$.
Static vs. Dynamic Conditional Mean Models
For a static conditional mean model, the conditioning set of variables is measured contemporaneously with the dependent variable y[t]. An example of a static conditional mean model is the ordinary linear regression model. Given ${x}_{t},$ a row vector of exogenous covariates measured at time t, and \(\beta\), a column vector of coefficients, the conditional mean of y[t] is expressed as the linear combination \(E(y_t|x_t) = x_t\beta\)
(that is, the conditioning set is ${\Omega }_{t}={x}_{t}$).
In time series econometrics, there is often interest in the dynamic behavior of a variable over time. A dynamic conditional mean model specifies the expected value of y[t] as a function of historical information. Let H[t-1] denote the history of the process available at time t. A dynamic conditional mean model specifies the evolution of the conditional mean, $E\left({y}_{t}|{H}_{t-1}\right).$
Examples of historical information are:
• Past observations, y[1], y[2],...,y[t-1]
• Vectors of past exogenous variables, ${x}_{1},{x}_{2},\dots ,{x}_{t-1}$
• Past innovations, ${\epsilon }_{1},{\epsilon }_{2},\dots ,{\epsilon }_{t-1}$
Conditional Mean Models for Stationary Processes
By definition, a covariance stationary stochastic process has an unconditional mean that is constant with respect to time. That is, if y[t] is a stationary stochastic process, then $E\left({y}_{t}\
right)=\mu$ for all times t.
The constant mean assumption of stationarity does not preclude the possibility of a dynamic conditional expectation process. The serial autocorrelation between lagged observations exhibited by many
time series suggests the expected value of y[t] depends on historical information. By Wold's decomposition [2], you can write the conditional mean of any stationary process y[t] as
$E\left({y}_{t}|{H}_{t-1}\right)=\mu +\sum _{i=1}^{\infty }{\psi }_{i}{\epsilon }_{t-i},$ (5)
where $\left\{{\epsilon }_{t-i}\right\}$ are past observations of an uncorrelated innovation process with mean zero, and the coefficients ${\psi }_{i}$ are absolutely summable. $E\left({y}_{t}\right)
=\mu$ is the constant unconditional mean of the stationary process.
Any model of the general linear form given by Equation 5 is a valid specification for the dynamic behavior of a stationary stochastic process. Special cases of stationary stochastic processes are the
autoregressive (AR) model, moving average (MA) model, and the autoregressive moving average (ARMA) model.
[1] Box, George E. P., Gwilym M. Jenkins, and Gregory C. Reinsel. Time Series Analysis: Forecasting and Control. 3rd ed. Englewood Cliffs, NJ: Prentice Hall, 1994.
See Also
Related Topics | {"url":"https://it.mathworks.com/help/econ/how-to-specify-arima-models.html","timestamp":"2024-11-05T12:36:58Z","content_type":"text/html","content_length":"131489","record_id":"<urn:uuid:695685ab-e0c6-4d70-8dd2-eebd0e517494>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00654.warc.gz"} |
NCERT Solutions for Class 4 - The Advansity Portal For Everyone(Affinity Till Infinity)
NCERT Solutions for Class 4
NCERT Solutions for Class 4 Maths, EVS and English are available at BYJU'S. The NCERT Solutions have been designed by our experienced teachers and subject matter experts. Extensive research was done
to come up with authentic and appropriate NCERT solutions that will further act as a valuable resource for students. The Class 4 NCERT solutions cover all the exercises from NCERT Books For Class 4
for Maths, EVS and English extensively.
The solutions have been designed keeping in mind the latest syllabus and CBSE board guidelines. It has specifically been designed from the ground up to help students understand different concepts in
a simple and easy manner. Check the complete solutions for NCERT Class 4 English, EVS and Maths below.
NCERT Solutions For Class 4 Maths & EVS
We are offering students to download NCERT Solutions Class 4 Maths PDF for free from BYJU'S. These PDFs include all chapters of NCERT Solutions from your CBSE Class 4 Maths textbook. BYJU'S expert teachers cover all the 14 chapters with simple NCERT solved questions. These solutions are always updated to the latest (2020-2021) CBSE syllabus.
Now NCERT Solutions for Class 4 EVS are in PDF format; you can download them for free from BYJU'S website chapter wise. Our experts cover all the FAQs and the CBSE recommended questions for the exam.
NCERT Solutions for Class 4 English are provided in PDF format, which can be downloaded for free from BYJU'S website. Our experts cover the CBSE recommended questions from all the chapters for the exam.
Students will have access to NCERT books, question papers as well as PDFs that will help them in learning concepts better as well as prepare meticulously for the exams.
Features of BYJU'S Class 4 NCERT Solutions
• All the exercises are covered so that students can clear any doubt instantly
• Solutions are prepared by subject experts and are given in a very easy to understand way to help students understand better
• Numerical questions are solved in a step-by-step process to help students easily comprehend them
• Solutions are available in PDF form, where students can download and access offline
• Diagrams are also provided for better visualization
Benefits of NCERT Class 4 Solutions
These NCERT Solutions for Class 4 will help students find the right approach to solving NCERT papers. With the solutions provided, students can also gain higher confidence to solve different
questions that will be asked in the exams.
The NCERT Solutions for different classes are vital to getting on with the practice of the examination. Here students will not only get access to effective exam tools but also assessment tools that
can further improve any student's proficiency in Class 4 English, Maths and EVS. In essence, this can be the best study platform as students can find a lot of study material for easy learning and
Interested students can also download BYJU'S - The Learning App and further get a completely customized learning experience. Students can learn from lessons that have been produced by some of the top
teachers in the country. BYJU'S is dedicated to making learning easy and fun. | {"url":"https://theadvansity.com/ncert-solutions-for-class-4/","timestamp":"2024-11-06T12:15:44Z","content_type":"text/html","content_length":"225961","record_id":"<urn:uuid:0470bf6f-87cc-4162-8351-488f7e0a5687>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00184.warc.gz"}
Image Histogram
I am taking the Photogrammetry-1 course by Prof. Cyrill Stachniss which is available here:
I have recently worked on Image Histogram as an assignment. Today, I will be talking about the same.
A histogram is basically a graph with intensity values on the x-axis and the number of pixels having each intensity value on the y-axis.
It can also be defined as an array indexed by the intensity values of an image, holding at each index the number of pixels having that intensity value.
The following function returns a histogram given an input image.
import numpy as np

def histogram(image):
    """The function takes as input an image [np.array] and returns its histogram [np.array]"""
    histogram = np.zeros((256, 1))
    for i in range(256):
        histogram[i] = len(image[image == i])  # number of pixels with intensity value i
    return histogram
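A quick hypothetical usage check (my own addition, not from the post):
# every pixel of a random 8-bit image should be counted exactly once
img = np.random.randint(0, 256, size=(64, 64))
h = histogram(img)
assert h.sum() == img.size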
My work on the course is available in this repository:
Thatโs it for now. See you later. | {"url":"https://thanifbutt.medium.com/histogram-b2ae261eb13e?source=post_page-----1cfb189ea7ef--------------------------------","timestamp":"2024-11-03T03:22:34Z","content_type":"text/html","content_length":"89500","record_id":"<urn:uuid:870f92e4-cf23-4f8d-bd8d-aea09d068afd>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00166.warc.gz"} |
Perform a Goldman-Cox test
Last update: Jul 6, 2024, Contributors: Minh Bui, Rob Lanfear
What is a Goldman-Cox test?
Nick Goldman explains the Goldman-Cox (GC) test in this paper
The basic idea is that we are asking whether the full model (i.e. the tree, branch lengths, model parameters, everything we estimate from the data) is an adequate description of the data. We do this by calculating the cost of the model, which is just: the maximum likelihood of the data under the model, minus the unconstrained likelihood (see below). We'll call this delta. You can read Nick's paper for a full description of this, but he puts it rather nicely on page 184:
[delta] can be considered the "cost" of using [our model assumptions] to make
inferences about phylogeny. A low cost indicates that [our model] is adequate;
a high cost indicates that [our model] is performing badly and should be rejected.
Be warned! For most datasets, you will reject the full model. This is simply because most modern datasets are large, and most of our models of evolution are still really simple. So we should expect to reject them. This doesn't mean they don't produce useful inferences, of course!
Calculating the cost of the model is easy, but to interpret it we need to know if the cost is surprisingly large or small. That is, we need some idea of the null distribution of costs. One way to do
this, and the method used by the Goldman-Cox test, is to use a parametric bootstrap. A parametric bootstrap is a really useful way to ask questions in phylogenetics. The absolute classic paper on
this is from Goldman, Anderson, and Rodrigo in 2000. You should read this first. There are many flavours of parametric bootstrap in phylogenetics, but they all follow the same pattern:
1. Do an analysis on your focal dataset (probably your empirical dataset), and measure the thing you're interested in (here it's delta)
2. Call the model you estimated from your empirical dataset the null model, then simulate a lot of new datasets using that null model
3. Measure the thing you are interested in on each of the simulated datasets (here it's delta)
4. Ask if your observed value (from step 1) is surprising given the list of simulated values (from step 3)
In other words, for the Goldman-Cox test we can figure out if our observed cost is high by simulating lots of datasets under the null model, and then re-calculating the cost on those. That null distribution tells us what kind of cost values we should expect when the null model is true. And so it then allows us to ask whether our observed value looks plausible. If you're a biologist and you like working with an alpha value of 5%, you might consider that if your observed cost is in the highest 5% of the simulated costs, you should reject your model as inadequate.
The Goldman-Cox test doesn't (and can't) tell you which aspects of your model might be causing the most trouble. But it's a really good place to start when considering how well you are able to model your data.
Input files
For this recipe I'll use data from the Bovidae family with five taxa (Yak, Cow, Goat, Sheep and Antelope) and 5,000 sites. This is a (very) small subset of the amazing Wu et al 2018 dataset. I keep the file to 5K sites because that helps keep the file sizes manageable and analyses fast for a demonstration.
Note: for this version of the Goldman-Cox test, you can only use alignments with no gaps or ambiguities. So I have removed any sites with gaps or ambiguities from the alignment.
Command lines
1. Analyse the original data, and simulate 999 alignments
All of the work in IQ-TREE can be done in a single commandline, thanks to the magic of AliSim.
Here's the commandline, and below I deconstruct the options:
iqtree -s bovidae_4K.phy --alisim simulated_MSA --num-alignments 999
• -s bovidae_4K.phy: tells IQ-TREE to do a standard analysis on the bovidae_4K.phy file, where it chooses the model, estimates the tree and model parameters
• --alisim simulated_MSA tells AliSim to then simulate alignments that mimic this alignment (i.e. use the tree and model parameters estimated from the original data)
• --num-alignments 999 tells AliSim that we want 999 mimicked alignments (999 is a good number for a parametric bootstrap)
2. Calculate delta for the observed data
The bovidae_4K.phy.iqtree file gives us the information we need to calculate delta:
Log-likelihood of the tree: -6545.5196 (s.e. 74.4412)
Unconstrained log-likelihood (without tree): -6448.4561
So delta here is: -6448.4561 - -6545.5196 = 97.0635
Let's write a little bash function to calculate this value - it will help us in the next step where we have to do the same for the 999 simulated datasets. The first couple of lines of this function just get the two likelihood values we want. Then we take the difference to get delta. Of course, you can do this in whatever language you like. But I like bash, so here's my attempt:
get_delta () {
# a function to get the difference bewteen lnL and unconstrained lnL from a .iqtree file
# assumes that the only passed argument is the name of a .iqtree file
lnL_model=$(grep "Log-likelihood of the tree: " $1 | awk '{print $5}')
lnL_unconstrained=$(grep "Unconstrained log-likelihood" $1 | awk '{print $5}')
delta=$(echo $lnL_unconstrained - $lnL_model | bc)
echo $delta
}
Now if you copy-paste that function into your bash terminal, then run
get_delta bovidae_4K.phy.iqtree
You should get the output 97.0635 or something quite close (it can vary depending on the random number seed)
3. Calculate our 999 values of delta from the simulated data
Now we need to get the 999 delta values from our simulated alignments. This will give us a null distribution for delta when the model estimated from the original dataset is true. In other words, this
will tell us what kind of values of delta we should expect to see when our model really does have a single tree with the branch lengths we estimated, all the substitution model parameters we
estimated, etc.
To get our delta values from our 999 simulated alignments, we'll first run IQ-TREE on each alignment in turn. We can do that in bash with a simple for loop. You can do this in whatever language you like, and in some situations you would want to parallelise this to make it faster. But for this tutorial I'll keep it as simple as possible (the below might take a few minutes to run):
for alignment in simulated_MSA_*.phy; do
iqtree -s $alignment
done
The first line in that loop just uses the wildcard * to match all of the simulated alignment files in turn. Then the second line runs IQ-TREE on each alignment.
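As an aside (my suggestion, not part of the recipe; it assumes GNU parallel is installed), the same work could be parallelised with:
parallel iqtree -s {} ::: simulated_MSA_*.phy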
Now we've done the analysis, we need to get all of our delta values from those output files. We can do this using the get_delta() function we wrote above, in a for loop just like the one we used to
run IQ-TREE. The for loop below just uses >> to put all the delta values into a file called simulated_delta.txt:
for iqtree_file in simulated_MSA_*.phy.iqtree; do
get_delta $iqtree_file >> simulated_delta.txt
done
4. Figure out the position of our observed delta in a ranked list of our simulated deltas
If you look through your list of deltas in the simulated_delta.txt file, you'll see they all seem to be below the observed value. So, if we were to order the list of the 999 simulated deltas and our observed delta from largest to smallest, our observed delta would be in position 1 out of 1000 in the list. So we know our p-value here would be at most 1/1000, i.e. p<=0.001. In other words, we can reject the hypothesis that the full model (tree, branch lengths, substitution model etc.) is an adequate description of the data...
Not all analyses will be quite this obvious, so here's a little R script that you could use to calculate the p-value:
# reads the simulated deltas into a data frame
simulated_deltas = read.delim("simulated_delta.txt", header=F)
# the p-value is just the position of the observed value in the ranked list,
# divided by the list length
# first we tell R our observed value of delta from above
observed = 97.0635
# the position is just the length of the list if you'd added the observed value (1000 in our case)
# minus how many of the simulated values are smaller than the observed value
position = (nrow(simulated_deltas) + 1) - sum(observed>simulated_deltas$V1)
# the p-value is just the position divided by the length of the list if you'd added the observed value
p_value = position / (nrow(simulated_deltas) + 1)
# then we can make a plot to help us visualise it (load ggplot2 first)
library(ggplot2)
ggplot(simulated_deltas, aes(x=V1)) +
geom_histogram() +
geom_vline(xintercept = observed, colour="red", size=1) +
theme_minimal() +
xlab("delta value") +
ggtitle("Null distribution of delta values", subtitle = "Observed value is shown as a red line")
In this case, you'd get the answer 0.001. Since we're at the very extreme of the distribution here, we can go one better than saying that the p-value equals 0.001, and say that it is at most 0.001, i.e. p<=0.001.
And our histogram helps make this clear.
IQ-TREE version
Last tested with IQ-TREE 2.2.0.3 | {"url":"http://iqtree.org/doc/recipes/goldman-cox-test","timestamp":"2024-11-08T13:33:30Z","content_type":"text/html","content_length":"17365","record_id":"<urn:uuid:bdcc5532-5b1a-4d47-8e8a-f12676bc8e25>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00197.warc.gz"} |
Metric Spaces Review
We will now review some of the definitions of a metric and a metric space and review some examples of metric spaces that we saw recently.
• On the Metric Spaces page we first defined a special type of function on a set $M$ known as a Metric which is a function $d : M \times M \to [0, \infty)$ which takes each pair $(x, y)$ and maps
it to some nonnegative real number called the Distance between $x$ and $y$. We say such a function $d$ is a metric of $M$ if it satisfies the following three properties:
• The first property is that for all $x, y \in M$ we must have that $d$ is symmetric:
\quad d(x, y) = d(y, x)
• The second property that $d$ must have is that the distance from $x$ to $y$ equals $0$ if and only if $x$ and $y$ are the same point, that is:
\quad d(x, y) = 0 \quad \Leftrightarrow x = y
• The third property is known as the triangle inequality. It says that if $z$ is any intermediary point, then the distance from $x$ to $y$ must be less than or equal to the distance from $x$ to $z$
plus the distance from $z$ to $y$. In other words, the distance of any non-direct "path" from $x$ to $y$ is always greater than or equal to the distance of the direct path from $x$ to $y$. So,
for all $x, y, z \in M$ we must have that:
\quad d(x, y) \leq d(x, z) + d(z, y)
• If $d$ is a metric as summarized above, then the set $M$ with a metric $d$ defined on $M$ is called a Metric Space and is denoted as the pair $(M, d)$, or sometimes simply as $M$ for brevity, and
if $S \subseteq M$ then $(S, d)$ is said to be a Metric Subspace of $(M, d)$ (with the metric $d$ restricted to elements in $S$).
• On the Some Metrics Defined on Euclidean Space page we looked at an important metric defined for all $\mathbf{x} = (x_1, x_2, ..., x_n), \mathbf{y} = (y_1, y_2, ..., y_n) \in \mathbb{R}^n$ by:
\quad d(\mathbf{x}, \mathbf{y}) = \sum_{k=1}^{n} \mid x_k - y_k \mid
• On The Chebyshev Metric page we looked at another important metric defined for all $\mathbf{x}, \mathbf{y} \in \mathbb{R}^n$ known as the Chebyshev Metric given by:
\quad d(\mathbf{x}, \mathbf{y}) = \max_{1 \leq k \leq n} \{ \mid x_k - y_k \mid \}
• On The Discrete Metric page we looked at a more abstract metric known as the Discrete Metric defined for all $x, y \in M$ ($M$ an arbitrary set) by:
\quad d(x, y) = \left\{\begin{matrix} 0 & \mathrm{if} \: x = y\\ 1 & \mathrm{if} \: x \neq y \end{matrix}\right.
• On The Standard Bounded Metric page we looked at another abstract metric known as the Standard Bounded Metric defined for all $x, y \in M$ and with respect to any other metric $d$ by:
\quad \bar{d}(x, y) = \mathrm{min} \{ 1, d(x, y) \}
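• As a quick illustration (my own sketch, not from the original pages), here are the metrics above in Python:
def taxicab(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

def chebyshev(x, y):
    return max(abs(a - b) for a, b in zip(x, y))

def discrete(x, y):
    return 0 if x == y else 1

def standard_bounded(d):
    return lambda x, y: min(1, d(x, y))

x, y = (1, 2, 3), (4, 0, 3)
print(taxicab(x, y), chebyshev(x, y), discrete(x, y), standard_bounded(taxicab)(x, y))
# prints: 5 3 1 1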
• After looking at all of those examples we then looked at a generalization of the triangle inequality property of a metric on The Polygonal Inequality for Metric Spaces page known as the Polygonal
Property which says that if $(M, d)$ is a metric space and $x_1, x_2, ..., x_m \in M$ then:
\quad d(x_1, x_m) \leq \sum_{k=1}^{m-1} d(x_k, x_{k+1}) | {"url":"http://mathonline.wikidot.com/metric-spaces-review","timestamp":"2024-11-13T19:49:07Z","content_type":"application/xhtml+xml","content_length":"18911","record_id":"<urn:uuid:6ffa3b7c-6060-4df7-80a7-87f90dbf0a06>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00792.warc.gz"} |
Conditions on Relative Mass and Speed for a Ball to be Brought to Rest by a Collision
Suppose two balls A and B of masses m and km are moving in the same direction with speeds
We can find the coefficient of restitution
Conservation of momentum gives
k*(2) - (1) gives
Because e <=1, we must have
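The displayed equations on this page did not survive extraction. As a loudly hedged reconstruction (everything that follows is my assumption: ball A has mass \(m\) and speed \(u\), ball B has mass \(km\) and speed \(v\) with \(u > v\), and the condition sought is for A to be brought to rest), the argument might run:
\[ mu + kmv = mv_A + kmv_B \quad (1) \qquad\qquad v_B - v_A = e(u - v) \quad (2) \]
Dividing (1) by \(m\), combining it with \(k\times(2)\) and setting \(v_A = 0\) gives \(ke(u - v) = u + kv\), so \(e = \frac{u + kv}{k(u - v)}\). Because \(e \leq 1\), we would need \(u + kv \leq k(u - v)\), i.e. \(k \geq \frac{u}{u - 2v}\), which is only possible when \(u > 2v\).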
We have | {"url":"https://astarmathsandphysics.com/a-level-maths-notes/m2/3608-conditions-on-relative-mass-and-speed-for-a-ball-to-be-brought-to-rest-by-a-collision.html?tmpl=component&print=1","timestamp":"2024-11-07T20:18:35Z","content_type":"text/html","content_length":"10598","record_id":"<urn:uuid:456e69c5-e86a-4381-b09b-3083f4dce5f5>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00153.warc.gz"} |
multiplying radicals with different roots worksheet
We have \(\sqrt 3 \left( {4\sqrt {10} + 4} \right) = 4\sqrt {30} + 4\sqrt 3 \). Below you can download some free math worksheets and practice. Multiplying square roots with exponents; multiplying exponents with same base. Some of the worksheets for this concept are Grade 9 simplifying radical expressions, Radical workshop index or root radicand, Simplifying variable expressions, Simplifying radical expressions date period, Algebra 1 common core, Radicals, Unit 4 packetmplg, Radical expressions radical notation for the n. The Product Rule states that the product of two or more numbers raised to a power is equal to the product of each number raised to the same power. Notice this expression is multiplying three radicals with the same (fourth) root. Mathematically, a radical is represented as \(\sqrt[n]{x}\). This expression tells us that a number x is multiplied by itself n number of times. That is, numbers outside the radical multiply together, and numbers inside the radical multiply together. This worksheet has model problems worked out, step by step, as well as 25 scaffolded questions that start out relatively easy and end with some real challenges. See more ideas about Radical expressions, 8th grade math, Middle school math. Simplify each radical, if possible, before multiplying. In order to receive the next Question Set of problems, you must come up and have the set checked by your teacher. Following is a definition of radicals. Example: \(2^3 \cdot 2^4 = 2^{3+4} = 2^7 = 2\cdot 2\cdot 2\cdot 2\cdot 2\cdot 2\cdot 2 = 128.\) Multiplying exponents with different bases. The pdf worksheets cover topics such as identifying the radicand and index in an expression, converting the radical form to exponential form and the other way around, reducing radicals to its simplest form, rationalizing the denominators, and simplifying the radical expressions. You can do the exercises online or download the worksheet as pdf. Multiplying Square Root Expressions - Partner Race. \(\sqrt[m]{a} = b\) if \(b^m = a\); the small letter m inside the radical is called the index. Exponents worksheets with writing factors, finding square roots, cube roots, simplifying exponent expressions and different operations on exponents. How to multiply and simplify radicals with different indices. Let's try one more example. Example of the Definition: Consider the expression \(\left( {2\sqrt 3 } \right)\left( {4\sqrt 5 } \right)\).
Worksheet by Kuta Software LLC - Infinite Algebra 2: Adding, Subtracting, Multiplying Radicals. Simplify. Multiplying Radical Expressions: 18 multiplying radical expressions problems with variables including monomial x monomial, monomial x binomial and binomial x binomial. To simplify two radicals with different roots, we first rewrite the roots as rational exponents. Radicals - Higher Roots Objective: Simplify radicals with an index greater than two. Product Property of Square Roots. Simplifying radicals worksheet: simplifying radical expressions with variables, simplifying radical expressions. Multiplying Radicals - Techniques & Examples. In addition, we will put into practice the properties of both the roots and the powers. So let's look at it. This set of exponents worksheets provides practice multiplying simple exponential terms against numbers. Rewrite as the product of radicals. COMPARE: Helpful Hint. Multiplying Exponents. Multiply the factors in the second radicand. These worksheets provide a gentle introduction into working with exponents in otherwise typical multiplication problems, and help reinforce the order of operation rules necessary to solve more complex problems later. May 4, 2016 - Simplifying, multiplying and dividing radical expressions. Multiplying Radicals. Students will practice multiplying square roots (ie radicals). There is a mixture of numbers with like radicands, different radicands, and problems containing coefficients. Multiplying Exponents Different Bases - Displaying top 8 worksheets found for this concept. According to the definition above, the expression is equal to \(8\sqrt{15}\). This is a whole lesson on Radicals building on the first lesson by looking at basic multiplying and dividing Radicals. Working with your group members, solve each set of problems for Questions Sets numbered 1-7. So we see that multiplying radicals is not too bad. Examples, solutions, videos, worksheets, and activities to help Algebra students. It will be helpful to remember how to reduce a radical when continuing with these problems.
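As a short worked example of the rational-exponent technique mentioned above (my own, not from any of the quoted worksheets):
\[\sqrt{2}\cdot\sqrt[3]{2} = 2^{1/2}\cdot 2^{1/3} = 2^{\frac{1}{2}+\frac{1}{3}} = 2^{5/6} = \sqrt[6]{2^5} = \sqrt[6]{32}.\]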
Worksheets Dividing Radicals Worksheets Algebra 1 Algebra 2 Square Roots Radical Expressions Introduction Topics: Simplifying radical expressions Simplifying radical expressions with variables Adding
radical expressions Multiplying radical expressions Removing radicals from the denominator Math Topics 24 How to Multiply Radicals and How to Multiply Square Roots Worksheet (with Answer Key) Are you
looking to get some more practice with multiplying radicals, multiplying square roots, simplifying radicals, and simplifying square roots? Distribute Ex 1: Multiply. Oswald Like radicals can be added
(or subtracted) in the same way as like terms. Square-root expressions with the same radicand are examples of like radicals. You can combine like radicals by adding or subtracting the numbers
multiplied by the radical and keeping the radical the same. Rewrite as the product of radicals. While square roots are the most common type of radical we work with, we can take higher roots of
numbers as well: cube roots, fourth roots, ๏ฌfth roots, etc. Gurmukhi Displaying top 8 worksheets found for simplifying radicals with variables. This resource works well as independent practice,
homework, extra credit or even as an assignment to leave for the substitute (includes answer Download math worksheet on finding square roots, cube roots and applying different operations on them to
practice and score better in โฆ 70 Gochi Hand These worksheets provide a gentle introduction into working with exponents in otherwise typical multiplication problems, and help reinforce the order of
operation rules necessary to solve more complex problems later. Lobster Two We apply the distributive property and then combine the coef๏ฌcients: 215 315 (2 3)15 515 1 3 12 2 12 15 213 513 717
Simplify. Objective. Comic Neue Square root, cube root, forth root are all radicals. Chewy Bangers Next, we write the problem using root symbols and then simplify. Some of the worksheets for this
concept are Square roots work, Adding subtracting multiplying radicals, Square roots work, Section exponents square roots and the order of, Strand number whole numbers order of operations with, The
order of operations, Sect, 1 multiplying square roots. Combining like radicals is similar to combining like terms. These multiplication worksheets include timed math fact drills, fill-in
multiplication tables, multiple-digit multiplication, multiplication with decimals and much more! By doing this, the bases now have the same roots and their terms can be multiplied together. But then
we will use our property of multiplying radicals to handle the radical parts. We want to simplify the expression, \(\sqrt 3 \left( {4\sqrt {10} + 4} \right)\), Again, we want to use the typical
rules of multiplying expressions, but we will additionally use our property of radicals, remembering to multiply component parts. Mountains of Christmas Subjects: Algebra, Algebra 2. Like Radicals I.
It goes into a lot of detail on how to simplify radicals and the different types of questions and how to work backwards when needed. ยฉU E2J0K1H26 CKPugt pa J OSIozf 2tLw ua Mrie A uLUL uCk.
Definition: \(\left( {a\sqrt b } \right) \cdot \left( {c\sqrt d } \right) = ac\sqrt {bd} \). Identify and pull out powers of 4, using the fact that . g 4 qAVltl3 5r qiwgvhIt WsP ar 9eos ie Jr dv0e cd
S.R c xMCaUd8ei nwLizt AhG PIZnkf 5iwn2i pt 6e0 yALl7gcewbWrSa d d1R.W Worksheet by Kuta Software LLC Kuta Software - Infinite Algebra 1 Name_____ Multiplying Radical โฆ Again, we want to use the
typical rules of multiplying expressions, but we will additionally use our property of radicals, remembering to multiply component parts. 32 VT323 Multiplying Radical Expressions. Before the terms
can be multiplied together, we change the exponents so they have a common denominator. 20 Write the product in simplest form. Factor 24 using a perfect-square factor. Pernament Marker Multiplying
Radical Expressions . It also contains 3 questions where students are asked to simplify using the properties of exponents. 1) โ5 3 โ 3 3 2) 2 8 โ 8 3) โ4 6 โ 6 4) โ3 5 + 2 5 You might not require
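Here is a short worked example of the different-indices technique using rational exponents (an illustration added here, not one of the worksheet problems):
\[ \sqrt{2} \cdot \sqrt[3]{2} = 2^{1/2} \cdot 2^{1/3} = 2^{3/6} \cdot 2^{2/6} = 2^{5/6} = \sqrt[6]{2^5} = \sqrt[6]{32}. \]
Once the exponents share the common denominator 6, the add-the-exponents rule applies, and the result converts back to a single sixth root.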
When we multiply powers with the same base, we add the exponents: \(a^n \cdot a^m = a^{n+m}\). For example, \(2^3 \cdot 2^4 = 2^{3+4} = 2^7 = 2\cdot2\cdot2\cdot2\cdot2\cdot2\cdot2 = 128\). When all the radicals in a problem have the same (fourth) root, you can use the product rule to multiply the radicands; if possible, simplify each radical before multiplying. Rationalizing is done to remove the radical from the denominator of a fraction: squaring a radical eliminates the radical, so we multiply the numerator and denominator by a suitable radical. A radical can be defined as a symbol that indicates the root of a number; when multiplying, numbers outside the radical multiply together, and numbers inside the radical multiply together. This material covers same-operation multiplications and divisions of roots with different indices, with different exam-style examples on square roots, cube roots, fourth roots, fifth roots, etc., and is aimed at grades 7-9 (middle school and high school math). Working with your group members, solve each set of problems; in order to receive the next Question Set of problems, you must come up and have your answers checked. | {"url":"http://www.jakesonline.org/tlca4i/multiplying-radicals-with-different-roots-worksheet-c25182","timestamp":"2024-11-10T03:04:53Z","content_type":"text/html","content_length":"26302","record_id":"<urn:uuid:2cf04a14-d139-4a71-8612-cd204274c221>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00714.warc.gz"}
HVAC Trade Math: Unit Systems & Measurement
HVAC Trade Math: Chapter 3
Introduction to Units
In this module, we will introduce the two unit systems used in science and mathematics. Skip to quiz!
Introduction to Units
Say we measure the length of a pipe and write the length as 10. We don't have the unit with the number. Without it, we don't know the pipe's length. With units, we can know if the pipe is 10 inches long or 10 meters long. Here, inches and meters are units.
Scientists started using units in the late 17th century. Since then, two unit systems have been created to define measurements.
The two systems of measurements are:
1. Imperial Unit System
2. Metric Unit System
In the rest of the module, we will discuss the different units in each system and how to convert between the two unit systems.
The two unit systems let us measure things in many different ways. Each unit system has its own units for length, area, volume and weight. For example, we can measure length in inches, centimeters or meters, as we will see later on in the unit.
Before looking at different units for length, area, volume, weight and mass, let us recall these measurements.
Length is the distance between two points. In the picture shown, the length between one side of the square to another is 8 inches.
Area is the amount of space a shape takes up on a flat surface. 2D shapes have two dimensions: length and width. This is why the area is calculated using squared units.
Volume is the space a three-dimensional shape takes up. 3D shapes have three dimensions: length, width and thickness. Volume is calculated in cubic units.
Mass is the amount of substance something has. For example, when we fill an empty bottle with water, we make the bottle heavier. Here, water is the substance. The more substance something has, the
heavier it is.
In this unit, we learned about the importance of units. We also introduced the two unit systems: the imperial and metric unit systems.
Units Systems
Previously we introduced the two systems of measurement: imperial and metric. In this unit, we will look at the different units in each system for various measurements. We will also look at how to convert
between the unit systems. Skip to quiz!
Imperial System
The imperial system of measurement is used mainly in the United States. Let's look at the imperial units of length, area, volume and mass.
The imperial units for measuring length are as follows:
• Inch (in)
• Foot (ft)
• Yard (yd)
• Mile (mi)
Let us take a look at how these units are related to one another.
The imperial units for the area are as follows:
• Square Inch (in²)
• Square Foot (ft²)
• Acre (ac)
• Square Mile (mi²)
Similar to length, units for area are also related. Here, we are shown that 1 in² is equal to the face of a die. We are also shown that 1 ft² is equal to a garden square. We can also say that 1 ft² is equal to 144 in².
Here, we are shown that 1 acre of land is equivalent to a little more than half of a soccer field. We are also shown that 1 mi² is about the size of Central Park. It can also be written that 1 mi² is equal to 640 acres.
The imperial units for volume are as follows:
• Fluid Ounce (fl oz.)
• Cup
• Pint
• Quart (qt)
• Gallon (gal)
The following are the imperial units of weight.
• Ounce (oz)
• Pound (lb)
• Ton
Metric System
Almost all countries use the metric system. Like the imperial system, the metric system also has units for length, area, volume, and mass.
The metric units for measuring length are as follows:
• Millimeter (mm)
• Centimeter (cm)
• Meter (m)
• Kilometer (km)
Let us take a look at how these units are related to one another.
The metric units for the area are as follows:
• Square Millimeter (mm²)
• Square Centimeter (cm²)
• Hectare (ha)
• Square Meter (m²)
• Square Kilometre (km²)
The metric units for volume are as follows:
• Cubic Millimeter (mm³)
• Cubic Centimeter (cm³)
• Millilitre (ml)
• Litre (L)
• Cubic Meter (m³)
The following are the metric units of mass:
• Gram (g)
• Kilogram (kg)
• Tonne (t)
Unit Conversion
When converting between unit systems, the easiest way is by using Google.
Here, we are shown what to write in the search bar when converting from imperial to metric units. We write the imperial unit we want to convert first. Then, we write the metric unit we want to convert to. Here, we are converting 100 in to mm.
When we hit search, two boxes will appear. Here, we are shown that the box on the left is the imperial unit we are converting. The box on the right is the converted metric unit. From the Google conversion, we can say that 100 in is equal to 2540 mm.
When converting from the metric to the imperial system in Google, the same process is used as on the previous slide. In this picture, we are shown the conversion of 10 m² to ft².
To review, we learned about the different units in each system. We also learned how to convert between unit systems using Google.
Units of Temperature
In this module, we will look at the different units for temperature and how to convert between the units. Skip to quiz!
Temperature is a measure of how hot or cold an object is. It is measured in three different units.
1. Celsius
2. Fahrenheit
3. Kelvin
Celsius is the temperature measurement used in the metric unit system. We use "°C" for Celsius. When water freezes into ice, the temperature is 0 °C. When water boils and turns into vapor, the temperature is 100 °C.
Fahrenheit is the temperature measurement used in the imperial unit system. We use "°F" for Fahrenheit. When water turns into ice, the temperature is 32 °F.
Kelvin is the most commonly used temperature scale in science. We use "K" for kelvin. When water turns into ice, the temperature is 273 K. The lowest temperature in kelvin is 0, and it's called the absolute zero temperature.
Temperature Conversion
During many calculations, we might need to convert between the three temperature units. The following examples will show us how we can convert between the units.
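For reference, the standard conversion formulas between these units (the slides show worked examples of them) are:
• Celsius to Fahrenheit: °F = (°C × 9/5) + 32
• Fahrenheit to Celsius: °C = (°F − 32) × 5/9
• Celsius to kelvin: K = °C + 273.15 (the quiz below uses the rounded value 273)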
We can avoid doing all the calculations by using Google to convert. It is the easiest way to convert between any units.
Here, we are shown what to write in the search bar when converting from Celsius to kelvin. We write the Celsius temperature we want to convert first. Then, we write the unit we want to convert to. In this case, it is kelvin. Here, we are converting 10 °C to kelvin.
When we hit search, two boxes will appear. Here, we are shown that the box on the left is the Celsius temperature we are converting. The box on the right is the converted kelvin temperature. From the Google conversion, we can say that 10 °C is equal to 283.15 K.
In this unit, we defined the three units of temperature: Celsius, Fahrenheit and kelvin. We also looked at how to convert between the three units.
In this unit, we will talk about making measurements with different pieces of equipment. Skip to quiz!
A ruler is the main tool used to measure the length of objects. As seen in the video, we are measuring the length of a pen using a ruler. We mostly use a ruler to measure the length of small objects such as a pen.
Measuring tape, or a tape measure, is another tool used to measure an object's length. Measuring with a tape measure is similar to measuring with a ruler. Tape measures are used to measure the length of long things that cannot be measured with a ruler.
Angles are measured using protractors. A protractor has small lines; each line marks 1° of angle. As we can see in the picture, we can only measure up to 180° with a protractor.
Inside and Outside Diameter
Many circular objects such as rings and pipes have two circles. The smaller circle's diameter is called the inner diameter. The larger circle's diameter is called the outer diameter. Calipers are used to measure diameters. Let's look at how it is done.
In this unit, we talked about measuring tools such as a ruler, tape measure, protractor and caliper. We also talked about how to use these tools to measure length, angle, and diameter.
Question #1: What are the different types of unit system?
1. Imperial and Metric
2. Imperial and Global
3. Emperial, metric
Scroll down for the answer...
Answer: Imperial and Metric
The two unit systems are imperial and metric.
Question #2: What unit is area calculated in?
1. Squared unit
2. Cubic unit
3. No unit
Scroll down for the answer...
Answer: Squared unit
Two-dimensional objects are flat; therefore, they have no thickness, only width and length, so area is measured in squared units.
Question #3: What unit is volume calculated in?
1. Squared unit
2. Cubic unit
3. No unit
Scroll down for the answer...
Answer: Cubic unit
Volume is three-dimensional. In 3D, a shape has length, width and thickness.
Question #4: How many feet are equivalent to 12 inches?
Scroll down for the answer...
One foot is equivalent to 12 inches.
Question #5: What is the acre equivalent of 1 mi²?
Scroll down for the answer...
1 mi² is equivalent to 640 acres.
Question #6: How many cups are in a pint?
Scroll down for the answer...
There are 2 cups in a pint.
Question #7: How many ounces are in 1 pound?
Scroll down for the answer...
There are 16 ounces in 1 pound.
Question #8: How many mm are in 1 m?
Scroll down for the answer...
There are 1000 mm in 1 m.
Question #9: How many mm² are in 1 cm²?
Scroll down for the answer...
There are 100 mm² in 1 cm².
Question #10: How many cm³ are in 1 liter?
Scroll down for the answer...
There are 1000 cm³ in 1 liter.
Question #11: How many kg are in 1000 g?
Scroll down for the answer...
There are 1000 g in 1 kg.
Question #12: Using Google, convert 50 in to mm.
Scroll down for the answer...
Answer: 1,270 mm. Write the conversion into Google.
Question #13: Using Google, convert 20 m² to ft².
1. 215 ft²
2. 215.0 ft²
3. 215.27 ft²
Scroll down for the answer...
Answer: 215.27 ft². Write the conversion into Google.
Question #14: Which temperature unit is measured in the imperial system?
Scroll down for the answer...
Fahrenheit is the unit of measurement for temperature in the imperial system.
Question #15: What is 70 degrees Celsius converted to kelvin?
Scroll down for the answer...
Answer: 343 K. Add 273 to the Celsius temperature to convert it into kelvin.
Question #16: What is −20 °C converted to Fahrenheit?
Scroll down for the answer...
Answer: −4 °F. Multiply the Celsius temperature by 9/5 and add 32: (−20 × 9/5) + 32 = −36 + 32 = −4 °F.
Question #17: What is 50 °C converted to kelvin?
Scroll down for the answer...
Answer: 323 K. Add 273 to the Celsius temperature: 50 + 273 = 323 K.
Question #18: Using Google, convert 15 °C to kelvin.
Scroll down for the answer...
Answer: 288.15 K. Write the conversion into Google.
Question #19: What are rays made out of?
Scroll down for the answer...
Rays are made out of arrows.
Question #20: True or false? The outer diameter is smaller than the inner diameter.
Scroll down for the answer...
Answer: False. The outer diameter is larger than the inner diameter. | {"url":"https://www.skillcatapp.com/post/hvac-trade-math-unit-systems-measurement","timestamp":"2024-11-11T22:51:28Z","content_type":"text/html","content_length":"1050372","record_id":"<urn:uuid:637760f9-14ae-4e4c-bfc5-40d6c1eb9b52>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00013.warc.gz"}
Proofs and Logic
Hi everyone,
This is a followup to Thursday's lecture, and should provide a little help with some of the homework problems (I'm looking at you, Problem 7).
Example. Consider the intervals of real numbers $A = [2, 5)$ and $B = (4,\infty)$. Find their intersection $A\cap B$ and their union $A\cup B$.
One key idea is that these are intervals of the real numbers, so they include not just the whole numbers but all numbers between the endpoints. The set $A$ includes all numbers that are greater than or equal to 2 and less than 5. This means that $A$ includes 2, 3 and 4, but also decimals such as 3.5 or 4.9998. The set $B$ includes all numbers greater than 4, such as 4.1 or six billion.
The intersection will be the places where these two overlap: it will include numbers greater than 4 but less than 5 (NOTE: it does not include the numbers 4 and 5 themselves, but it does include, for example, 4.3). In interval notation, we write:
$A\cap B = (4,5)$
The union will include all numbers greater than or equal to 2, written:
$A\cup B = [2,\infty)$
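Here is one more (made up) example of the same idea: for $C = (1,3)$ and $D = [2,6)$, the overlap is $C \cap D = [2,3)$, while combining everything gives $C \cup D = (1,6)$.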
WeBWorK Tip: To enter the infinity symbol, just use the word "infinity" like this:
[2, infinity)
WeBWorK Tip: Sometimes in WeBWorK, your answer will consist of two different intervals; you want to include them both in the answer. To do this, connect them with a union symbol (just use the capital U on your keyboard). Here is a (made up) example:
$[1,7] \cup (15,17]$
Not sure if these will help, but they may give you a little more to go on โ feel free to leave a comment here or send me an email if you have questions.
Best of luck!
Prof. Reitz | {"url":"https://openlab.citytech.cuny.edu/2015-fall-mat-2071-reitz/?tag=union","timestamp":"2024-11-08T21:18:38Z","content_type":"text/html","content_length":"115081","record_id":"<urn:uuid:fb4f035d-9914-49f9-a58e-cc6ff15389d8>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00665.warc.gz"} |
whos hotter!!!!!!! - Eric Szmanda
Re: whos hotter!!!!!!!
Eric All The Way!!!!!!
Re: whos hotter!!!!!!!
Eric Szmanda, he's so sexy, you know you can't resist ;D
yeah babbyyyy!!
Re: whos hotter!!!!!!!
eric! he's waaay hotter.. i like his style not like jonathan.. hehehe.. copycat jonathan..
Re: whos hotter!!!!!!!
Re: whos hotter!!!!!!!
Re: whos hotter!!!!!!!
eric szmanda is so much hotter than any guy on the planet!
Re: whos hotter!!!!!!!
Ofcourse Eric Szmanda!
he's the cutest/hotest human being ever!!
Re: whos hotter!!!!!!!
don't mean to be picky but.. you do know it's spelt 'Szmanda' right? Oh and of course I agree, Eric wins. however, surely eveyone realises that it WILL be kinda bias on this particular message board?
It being about Eric Szmanda and not Jonathan Togo, right?
god why do i always sound so sarcastic??
Re: whos hotter!!!!!!!
Re: whos hotter!!!!!!!
Eric wins of course!
Re: whos hotter!!!!!!!
Wow! I never knew this would last so long. I haven't been on for like 6 months. I tried this on Jonathon's board, but I didn't go on there yet. Wow, that's just funny.;)
Edward and Eric are my anti-drugs
Re: whos hotter!!!!!!!
I've never heard of jonathan togo but eric szmanda is so darn hot its hard 2 think he lives in america + we live in england.
eric szmanda is THE SEXIEST MAN ALIVE!!!
does anyone dare 2 question me!
Re: whos hotter!!!!!!!
I think Eric because I agree with the geeky lab techy. But also he has the most kickass hairstyles around. And it's cool that he works with the music on the show - that is what I heard once anyway. And... he has the same BIRTHDAY AS ME!!!! (except the year) :D
I am still blind like a drunk with his head up his ass.
Re: whos hotter!!!!!!!
Eric Szmanda of course =)
Re: whos hotter!!!!!!!
Eric, definitely. But then again, I always fall in love with the nerdy guys. :D
Re: whos hotter!!!!!!!
Jonathan Togo is way way hotter, of course! i like nice jewish boys, with a good sense of humor.
somebody in here said that the two guys looked alike, says who???!!
nerdy, intellectual hunks are the rage indeed these days. :)
Re: whos hotter!!!!!!!
I don't even care that much for Jonathan Togo.
Re: whos hotter!!!!!!!
Hands down Eric. By far! The only reason CSI Miami has Jonathan Togo is because he favors Eric. But he just can't compete. Eric is far more believable, and humorous. He is hot, hot, hot!:}
Re: whos hotter!!!!!!!
I think they may look similar, but are two totally different personalities. Eric has a great sense of humor and it shows with Greg. Jonathan may have a great sense of humor, but his character is more
serious and kind of deceitful in my eyes. Greg is portrayed as young and growing up before our eyes.
As for who's hotter? I definitely would have to say ERIC TAKES THE CAKE. (And the icing too.)
I did put the two next to each other and it is kind of freaky how similar they are, but they are not the same in the same season. There are photos that make them look extremely alike.
Again ERIC is MUCH MUCH hotter.
Re: whos hotter!!!!!!!
I agree... and what I could do with the icing.
Truth is like the sun. You can shut it out for a time, but it ain't goin' away. Elvis Presley
Re: whos hotter!!!!!!!
I just love Eric's facial expressions sometimes, the way he still looks so naive. And that's what makes him so sexy IMHO
It doesn't matter where you go in life or what you obtain; it's who you have beside you that matters
Re: whos hotter!!!!!!!
I like them both, but maybe I prefer Eric. He is more sexy..
Re: whos hotter!!!!!!!
I used to have a crush on Eric, between seasons 1 and 11ish. I'm not that attracted to him anymore. Jonathan is not too bad looking though. | {"url":"https://filmboards.com/board/p/16284280/permalink/","timestamp":"2024-11-11T11:13:27Z","content_type":"text/html","content_length":"60380","record_id":"<urn:uuid:553b4829-bce5-435a-8256-7d5a02529ac8>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00819.warc.gz"}
time T
time T /ti:m T/ n.
1. An unspecified but usually well-understood time, often used in conjunction with a later time T+1. "We'll meet on campus at time T or at Louie's at time T+1" means, in the context of going out for
dinner: "We can meet on campus and go to Louie's, or we can meet at Louie's itself a bit later." (Louie's was a Chinese restaurant in Palo Alto that was a favorite with hackers.) Had the number 30
been used instead of the number 1, it would have implied that the travel time from campus to Louie's is 30 minutes; whatever time T is (and that hasn't been decided on yet), you can meet half an hour
later at Louie's than you could on campus and end up eating at the same time. See also since time T equals minus infinity. | {"url":"http://hackersdictionary.com/html/entry/time-T.html","timestamp":"2024-11-06T00:57:41Z","content_type":"text/html","content_length":"2402","record_id":"<urn:uuid:a6ff0b47-bc04-4602-bae6-4fb2516db435>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00458.warc.gz"}
5.2: The Double Slit with Matter
Exercise \(\PageIndex{1}\): Neutron beams
A beam of very cold neutrons with kinetic energy \(5.0 \times 10^{-6}\, eV\) is directed toward a double slit foil with slit separation 1 μm (1000 nm). What is the angular separation between adjacent interference maxima?
In addition to Bragg diffraction, the wave-like nature of matter can be demonstrated in the same experimental manner as the wave-like nature of light was first demonstrated: by passing the matter wave through a pair of adjacent slits. You should remember the result for the location of interference maxima in a double slit experiment, but nonetheless I'll remind you:
\[d \sin \theta = n\lambda\]
• \(d\) is the distance between adjacent slits,
• \(\theta\) is the angle at which constructive interference occurs,
• and \(\lambda\) is the wavelength of the disturbance.
The kinetic energy of the neutrons is so small we can use classical physics to determine the momentum. Remembering the classical relationship between kinetic energy and momentum
\[ KE = \dfrac{1}{2} mv^2 = \dfrac{(mv)^2}{2m} = \dfrac{p^2}{2m} =\dfrac{(pc)^2}{2mc^2}\]
leads to
\[ \begin{array}{l} pc &= \sqrt{2(KE)mc^2} \\ pc &= \sqrt{2 (5.0 \times 10^{-6}\text{ eV}) (939.6 \times 10^6\text{ eV})} \\ pc &= 96.9 \text{ eV} \end{array}\]
and, by DeBroglie's relation, a wavelength of
\[\begin{array}{l} \lambda = \dfrac{hc}{pc} \\ \lambda = \dfrac{1240 \text{ eV nm}}{96.9 \text{ eV}} \\ \lambda = 12.8 \text{ nm} \end{array}\]
Inserting this result into the double slit relation results in
\[ \begin{array}{l} d\sin\theta = n\lambda \\ (1000 \text{ nm})\sin\theta = (1)(12.8 \text{ nm}) \\ \theta = 0.73^{\circ} \end{array}\]
Thus, adjacent maxima are separated by 0.73 degrees.
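As a quick numerical cross-check, here is a short Python sketch (an added illustration, not part of the original text); it uses the same rounded constants as the solution, hc = 1240 eV·nm and the neutron rest energy mc² = 939.6 MeV:

```python
import math

hc = 1240.0    # eV*nm, Planck's constant times the speed of light
mc2 = 939.6e6  # eV, neutron rest energy
KE = 5.0e-6    # eV, neutron kinetic energy
d = 1000.0     # nm, slit separation (1 micrometer)

pc = math.sqrt(2 * KE * mc2)              # classical momentum times c, in eV
lam = hc / pc                             # DeBroglie wavelength, in nm
theta = math.degrees(math.asin(lam / d))  # first (n = 1) interference maximum

print(f"pc     = {pc:.1f} eV")       # ~96.9 eV
print(f"lambda = {lam:.1f} nm")      # ~12.8 nm
print(f"theta  = {theta:.2f} deg")   # ~0.73 degrees
```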
Thermal Wavelength
How "cold" is a beam of very cold neutrons with kinetic energy \(5.0 \times 10^{-6}\, eV\)?
You may have been confused when I referred to the neutron beam in the previous example as being "very cold". However, physicists routinely talk about temperature, mass and energy using the same language. An ideal (non-interacting) gas of particles at an equilibrium temperature will have a range of kinetic energies. You may recall from your study of the ideal gas[1] that:
\[ KE_{mean} = \dfrac{3}{2}kT\]
• \(KE_{mean}\) is the mean kinetic energy of a particle in the sample,
• \(k\) is Boltzmann's constant,
• and \(T\) is the temperature of the sample, in Kelvin.
Technically, we shouldn't talk about the temperature of a mono-energetic beam, since by definition a temperature implies a range of energies. However, let's be sloppy and assume the energy of the beam corresponds to the mean kinetic energy of a (hypothetical) sample. Then:
\[ \begin{array}{l} KE_{mean} = \dfrac{3}{2}kT \\ T = \dfrac{2KE_{mean}}{3k} \\ T = \dfrac{2(5 \times 10^{-6} \text{ eV})}{3(8.617\times 10^{-5}\text{ eV/K})} \\ T = 0.039 \text{ K}\end{array}\]
So the neutron beam really is pretty cold!
Note that if we wanted to find the DeBroglie wavelength corresponding to this mean kinetic energy, we would find (assuming non-relativistic speeds)
\[\begin{array}{l} KE = \dfrac{(pc)^2}{2mc^2} \\ pc = \sqrt{2(KE)mc^2} \\ pc = \sqrt{2\bigg(\dfrac{3}{2}kT\bigg)mc^2} \\ pc = \sqrt{3mc^2kT} \end{array}\]
and thus
\[ \lambda = \dfrac{hc}{pc}\]
\[\lambda = \dfrac{hc}{\sqrt{3mc^2kT}}\]
This is the DeBroglie wavelength corresponding to the mean kinetic energy of a gas at temperature \(T\). However, a more useful value would be the mean wavelength of all of the particles in the gas. The mean wavelength is not equal to the wavelength of the mean energy. Calculating this mean wavelength, termed the thermal DeBroglie wavelength, is a bit beyond our skills at this point, but it is the same as the result above with a different numerical factor in the denominator:
\[ \lambda_{thermal} = \dfrac{hc}{\sqrt{2\pi m c^2kT}}\]
For an ideal gas sample at a known temperature, we can quickly determine the average wavelength of the particles comprising the sample.
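As an illustration (again an added sketch, not from the original text), evaluating this formula for the 0.039 K neutron "temperature" found above gives a wavelength of the same order as the beam wavelength computed earlier:

```python
import math

hc = 1240.0    # eV*nm
mc2 = 939.6e6  # eV, neutron rest energy
k = 8.617e-5   # eV/K, Boltzmann's constant
T = 0.039      # K, temperature found in the previous example

lam_thermal = hc / math.sqrt(2 * math.pi * mc2 * k * T)
print(f"thermal DeBroglie wavelength = {lam_thermal:.1f} nm")  # ~8.8 nm
```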
One important use for this relationship is to determine when the gas sample is no longer ideal. If the mean wavelength becomes comparable to the separation between the particles in the gas, this
means that the waves begin to overlap and the particles begin to interact. When these waves begin to overlap, it becomes impossible, even in principle, to think of each of the particles as a separate
entity. When this occurs, some really cool stuff starts to happen (like superfluidity, superconductivity, Bose-Einstein condensation, etc.).
A Plausibility Argument for the Heisenberg Uncertainty Principle
Imagine a wave passing through a small slit in an opaque barrier. As the wave passes through the slit, it will form the diffraction pattern shown below.
Remember that the location of the first minima of the pattern is given by
\[a\sin\theta = \lambda\]
From the geometry of the situation,
\[ \tan\theta = \dfrac{y}{D}\]
If the detecting screen is far from the opening, \(\sin\theta \approx \tan\theta\), so
\[a\tan\theta = \lambda\]
and therefore
\[y = \dfrac{\lambda D}{a}\]
Now, consider the "wave" to be a "particle". The time to traverse the distance from slit to screen is given by
\[t = \dfrac{D}{v_x}\]
while during this time interval the particle also travels a distance in the y-direction given by:
\[ y= v_yt\]
Combining these relations yields
\[ y = v_y \bigg(\dfrac{D}{v_x}\bigg)\]
Combining this โparticleโ expression with the โwaveโ expression above gives:
\[ \dfrac{\lambda D}{a} = v_y \bigg(\dfrac{D}{v_x}\bigg)\]
\[\lambda = \dfrac{v_ya}{v_x}\]
Substituting the DeBroglie relation results in,
\[\dfrac{h}{mv_x} = \dfrac{v_ya}{v_x}\]
\[h = mv_ya\]
Notice that the term \(mv_y\) is the uncertainty in the y-momentum (\(\sigma_{p_y}\)) of the particle, since the particle is just as likely to move in the +y or the −y-direction with this momentum. Also, \(a\) is twice the uncertainty in the y-position (\(\sigma_y\)) of the particle, since the particle has a range of possible positions of +a/2 to −a/2.
Therefore, our expression can be written as
\[(2\sigma_y)(\sigma_{p_y}) = h \]
\[(\sigma_y)(\sigma_{p_y}) = \dfrac{h}{2}\]
Thus, the uncertainty in the y-position of the particle is inversely proportional to the uncertainty in the y-momentum. Neither of these quantities can be determined precisely, because the act of
restricting one of these parameters automatically has a compensating effect on the other parameter, i.e., making the hole smaller spreads out the pattern, and the only way to make the pattern smaller
is to increase the size of the hole!
A more careful analysis (for circular openings rather than slits) shows that the minimum uncertainty in the product of position and momentum can be reduced by a factor of \(2\pi\), resulting in:
\[(\sigma_y)(\sigma_{p_y}) \geq \dfrac{1}{2\pi}\dfrac{h}{2}\]
\[(\sigma_y)(\sigma_{p_y}) \geq \dfrac{\hbar}{2}\]
where the symbol \(\hbar\) is defined to be Planck's constant divided by \(2\pi\).
Contributors and Attributions
Paul D'Alessandris (Monroe Community College)
[1] The factor 3/2 is true only for particles. If the gas is comprised of diatomic molecules the factor is 5/2 at low temperatures and 7/2 at high temperatures. What counts as a low or high
temperature depends on the molecule. More complex molecules have even more complex factors. | {"url":"https://phys.libretexts.org/Bookshelves/Modern_Physics/Book%3A_Spiral_Modern_Physics_(D'Alessandris)/5%3A_Matter_Waves/5.2%3A_The_Double_Slit_with_Matter","timestamp":"2024-11-09T09:36:45Z","content_type":"text/html","content_length":"135220","record_id":"<urn:uuid:7ccbeff2-2449-47a2-b79b-b8a8c2401c0c>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00546.warc.gz"} |
Brainteaser: Beat the bell - Double Helix
Difficulty: Tricky
Right before the bell rings, Jeri's teacher throws up a problem on the board:
"I'm thinking of a number. When I add its digits together and multiply by seven, I get the number I first thought of.
"The number is under 40."
Can you help Jeri find the number before class is dismissed?
Need a hint?
To get the number, you have to multiply by seven. So the number is in your seven times table!
Brainteaser answer
Jeri's answer is 21. Because the sum of its digits, 2 + 1, is 3, and 3 times 7 equals 21.
You have to calculate the number by multiplying by 7. That means the number must be in your seven times tables, like 7, 14, 21, 35 and so on.
We also know itโs less than 40, so there are only 4 possibilities to check. Letโs start with the smallest.
The digit sum of 7 is 7. But multiply it by 7 and you get 49, not 7.
The digit sum of 14 is 1 + 4 = 5. But multiply 5 by 7 and you get 35, not 14.
The digit sum of 21 is 2 + 1 = 3. Multiply 3 by 7 and you get 21, the number you first thought of!
And just for completeness: the digit sum of 35 is 3 + 5 = 8. But multiply 8 by 7 and you get 56, not 35.
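If you like, you can also let a computer do the checking; here is a minimal brute-force sketch (an addition for illustration, not part of the original puzzle):

```python
# Find every number under 40 that equals 7 times its digit sum
for n in range(40):
    digit_sum = sum(int(d) for d in str(n))
    if 7 * digit_sum == n:
        print(n)  # prints 0 and 21
```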
Thereโs also a sneaky second answer: 0
Its digit sum is zero, and multiplying by seven gets you zero again!
3 responses
The problem says digits, and as 0 is a single digit, 0 is not a sneaky second answer.
28 would also have been a possibility…
Good catch! We can eliminate it as well:
2 + 8 = 10. But multiply 10 by 7 and you get 70, not 28. | {"url":"https://blog.doublehelix.csiro.au/brainteaser-beat-the-bell/","timestamp":"2024-11-14T08:45:48Z","content_type":"text/html","content_length":"97485","record_id":"<urn:uuid:b9a1b936-073a-4486-be3b-afb7afeb99e8>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00650.warc.gz"}
Expected utility for nonstochastic risk
Victor Ivanenko and Illia Pasichnichenko
MPRA Paper from University Library of Munich, Germany
Abstract: The world of random phenomena exceeds the domain of the classical probability theory. In the general case the description of randomness requires a specific set of probability distributions (which is called statistical regularity) rather than a single distribution. Such statistical regularity arises as a limit of relative frequencies. This approach to randomness allows us to generalize the expected utility theory in order to cover the decision problems under nonstochastic random events. Applying the von Neumann-Morgenstern utility theorem, we derive the maxmin expected utility representation for statistical regularities. The derivation is based on the axiom of the preference for stochastic risk, i.e. the decision maker wishes to reduce the set of probability distributions to a single one.
Keywords: expected utility; risk; mass phenomena; statistical regularity; nonstochastic randomness; multiple prior (search for similar items in EconPapers)
JEL-codes: C10 D81 (search for similar items in EconPapers)
Date: 2016-04-01
New Economics Papers: this item is included in nep-mic, nep-rmg and nep-upt
References: View references in EconPapers View complete reference list from CitEc
Citations: Track citations by RSS feed
Downloads: (external link)
https://mpra.ub.uni-muenchen.de/70433/1/MPRA_paper_70433.pdf original version (application/pdf)
https://mpra.ub.uni-muenchen.de/75947/2/MPRA_paper_75947.pdf revised version (application/pdf)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Export reference: BibTeX RIS (EndNote, ProCite, RefMan) HTML/Text
Persistent link: https://EconPapers.repec.org/RePEc:pra:mprapa:70433
Access Statistics for this paper
More papers in MPRA Paper from University Library of Munich, Germany Ludwigstraße 33, D-80539 Munich, Germany. Contact information at EDIRC.
Bibliographic data for series maintained by Joachim Winter. | {"url":"https://econpapers.repec.org/paper/pramprapa/70433.htm","timestamp":"2024-11-07T10:15:56Z","content_type":"text/html","content_length":"13751","record_id":"<urn:uuid:06155311-3dcd-4d03-b3fb-087d2eadda23>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00218.warc.gz"}
Lower bounds for VLSI
Increased use of Very Large Scale Integration (VLSI) for the fabrication of digital circuits has led to increased interest in complexity results on the inherent VLSI difficulty of various problems. Lower bounds have been obtained for problems such as integer multiplication [1,2], matrix multiplication [7], sorting [8], and discrete Fourier transform [9], all within VLSI models similar to one originally developed by Thompson [8,9]. The lower bound results all pertain to a space-time trade-off measure that arises naturally within this model. In particular, for all the problems listed above, the results show that if A is the area used by a VLSI circuit to compute one of the n-input, n-output functions listed above, and T is the time required for the computation, then the bound \(AT^2 = \Omega(n^2)\) holds.
Publication series
Name Proceedings of the Annual ACM Symposium on Theory of Computing
ISSN (Print) 0737-8017
Other 13th Annual ACM Symposium on Theory of Computing, STOC 1981
Country/Territory United States
City Milwaukee
Period 6/11/81 - 6/13/81 | {"url":"https://collaborate.princeton.edu/en/publications/lower-bounds-for-vlsi","timestamp":"2024-11-10T10:15:34Z","content_type":"text/html","content_length":"49630","record_id":"<urn:uuid:e026667d-ae43-4fdb-840f-c4dea55c56f3>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00873.warc.gz"}
How to Use the HLOOKUP Function in Excel: Step-by-Step
There are a total of four functions in Excel to look up a value. The LOOKUP, VLOOKUP, HLOOKUP, and XLOOKUP functions.
Each of these has a different purpose and is useful in its own way. The Excel HLOOKUP function looks up values in a table horizontally.
How? The following will guide and teach you that ๐
So continue reading and download our free sample workbook here to tag along with the guide.
What is HLOOKUP in Excel?
HLOOKUP is a lookup function of Excel. H in HLOOKUP stands for horizontal โ and the HLOOKUP function performs a horizontal lookup.
It looks for a given value in the first row of a table and returns the corresponding value from any row of that table (as specified).
For example, you can extract any data for the Companies (that are arranged horizontally) in this table by using the HLOOKUP function.
How about the number of employees for Company B?
How to use HLOOKUP in Excel
Excited to see how the HLOOKUP function works in Excel? Here's an example.
The image below represents some details for four companies.
We have the sales, the expenses and the number of employees for each company.
Did you note that the Companies are organized horizontally?
Now we want to find the Sales for any selected company instantly. Can we do that using the HLOOKUP function?
For example, for Company A?
Let's see here.
1. Write the HLOOKUP function as follows:
= HLOOKUP (
2. Write the lookup_value as the first argument of the HLOOKUP function as below.
We want to find the sales for Company A. And as we have Company A written in a cell, we will simply create a reference to it.
= HLOOKUP (B6,
You can also input the text string "Company A" as the lookup_value yourself.
3. As the table_array argument, refer to the table where HLOOKUP will look for the lookup_value.
The table in our case ranges from Cell B1 to E4.
= HLOOKUP (B6, B1:E4
4. As the row_index_num argument, specify the row number from where the matching value must be returned.
We want the sales for Company A to be returned. Sales sit in the second row in table B1:E4. So, we are setting the third argument to 2.
= HLOOKUP (B6, B1:E4, 2
5. As the range_lookup argument, write:
TRUE: If you want Excel to run an approximate match. In this case, the HLOOKUP will look up the exact value (Company A) or the next largest value less than this value.
FALSE: If you want Excel to run an exact match. In this case, the HLOOKUP will look up the exact value (Company A). And if the exact value is not found in the given table, it will return the #N/A error.
The range_lookup argument is an optional one. If omitted, Excel by default sets it to TRUE - approximate match mode.
In this case, we are setting range_lookup to FALSE as we want HLOOKUP to perform an exact match.
= HLOOKUP (B6, B1:E4, 2, FALSE)
6. Press Enter, and there you go.
The HLOOKUP function fetches out the sales for Company A from the table above.
Simple enough!
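To see how the third argument changes what comes back, here is one more formula against the same table (this one is our own extra illustration; it assumes, as the screenshots suggest, that the four companies run A through D, with Expenses in row 3 and the number of employees in row 4):

= HLOOKUP ("Company C", B1:E4, 3, FALSE)

This would return the Expenses figure for Company C; changing the row_index_num from 3 to 4 would return its number of employees instead.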
HLOOKUP formula example
Haven't had enough of the HLOOKUP function yet? Same here.
So let's look into another (more interesting) example of the HLOOKUP function.
Here we have a list of participants for a marathon.
We can only choose a single participant from these. And the required height to choose one of them is 6 feet (or the nearest to it).
Now we don't know if we have any participant who is exactly 6 feet tall. So we cannot run the HLOOKUP function with an exact lookup value of 6.
But we can go with an approximate match, no? So letโs do it.
Under the approximate match mode, the HLOOKUP function searches for the exact lookup value first ๐ฏ
If the data doesnโt contain the exact lookup value, it then looks for the value that is the next largest but less than the lookup value itself.
1. Begin writing the HLOOKUP function as follows:
= HLOOKUP ( B5
We have created a reference to Cell B5 as it contains our lookup value i.e. 6.
2. As the second argument, specify the table where the value must be looked up.
For our example, it ranges from Cell B1 to E2.
= HLOOKUP ( B5, B1:E2
3. Next, specify the row from where the corresponding value must be returned.
We need the name of the participant (that sits in row 2 of table B1:E2). And so, we are selecting 2 as the row_index_num argument.
= HLOOKUP (B5, B1:E2, 2
4. Set the range_lookup to TRUE.
= HLOOKUP (B5, B1:E2, 2, TRUE)
We have set the range_lookup to TRUE, which is the approximate match mode. This is because we are not yet sure if the list above has any participant who is exactly 6 feet tall.
So we want to pick out any participant who is 6 feet tall. And if there is no such participant, then the one who is nearest to 6 feet tall.
Pro Tip!
To run an approximate match under the HLOOKUP function, the values in the first row of your table must be sorted in ascending order.
The first row is where the HLOOKUP function looks for the lookup_value. Unless it is arranged in ascending order, the HLOOKUP function might not return the correct results.
Note that the heights in our dataset above are arranged in ascending order. Starting from 5.5 feet (the leftmost value) up to 6.2 feet (the rightmost value).
5. Hit "Enter" to get the results as follows:
And the HLOOKUP function has nominated Mr. Y as the marathon participant.
But did you note? The height of Mr. Y is not 6 feet but only 5.9 feet.
However, as none of the participants from the list is exactly 6 feet tall, the approximate match mode of HLOOKUP activates.
It looks for the value that is the next largest but still less than the lookup value of 6. That value is 5.9 feet, and so the result is Mr. Y.
That's it - Now what?
That was all about the Microsoft Excel HLOOKUP function. Through the guide above, we have looked into each argument of the HLOOKUP function.
We have also seen multiple examples of how the HLOOKUP function might be used in Excel. The lookup functions of Excel are a blessing, and the HLOOKUP function is no different.
If you enjoyed learning about the HLOOKUP function, you'd love to know about other functions of Excel too.
Like the VLOOKUP (the twin of the HLOOKUP function), SUMIF, and IF functions of Excel.
Here is the link to my 30-minute free email course that will take you through these (and many more) functions of Excel. Sign up now! | {"url":"https://spreadsheeto.com/hlookup/","timestamp":"2024-11-08T17:19:24Z","content_type":"text/html","content_length":"296320","record_id":"<urn:uuid:cc5b0243-245f-49e9-97c6-dde48b8e2a58>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/WARC/CC-MAIN-20241108164844-20241108194844-00622.warc.gz"}
Local p-Rank and Semi-Stable Reduction of Curves
Yu Yang
In the present paper, we investigate the local p-ranks of coverings of stable curves.
Let G be a finite p-group, f : Y ⟶ X a morphism of stable curves over a complete discrete valuation ring with algebraically closed residue field of characteristic p > 0, and x a singular point of the special fiber X_s of X. Suppose that the generic fiber X_η of X is smooth, and the morphism of generic fibers f_η is a Galois étale covering with Galois group G. Write Y′ for the normalization of X in the function field of Y, and ψ : Y′ ⟶ X for the resulting normalization morphism. Let y′ ∈ ψ⁻¹(x) be a point of the inverse image of x. Suppose that the inertia group I_{y′} ⊆ G of y′ is an abelian p-group. Then we give an explicit formula for the p-rank of a connected component of f⁻¹(x). Furthermore, we prove that the p-rank is bounded by #I_{y′} − 1 under certain assumptions, where #I_{y′} denotes the order of I_{y′}. These results generalize the results of M. Saïdi concerning local p-ranks of coverings of curves to the case where I_{y′} is an arbitrary abelian p-group.
Keywords: p-rank, semi-stable reduction, semi-stable covering, semi-graph with p-rank.
Mathematics Subject Classification: Primary 14E20; Secondary 14H30.
1 Introduction and ideas
2 Semi-graphs with p-rank
 2.1 Definitions
 2.2 p-ranks and étale-chains of abelian coverings
 2.3 Bounds of p-ranks of abelian coverings
3 p-ranks of vertical fibers of abelian stable coverings
 3.1 p-ranks and stable coverings
 3.2 Semi-graphs with p-rank associated to vertical fibers
 3.3 p-ranks of vertical fibers
4 Appendix
1 Introduction and ideas
Let R be a complete discrete valuation ring with algebraically closed residue field k of characteristic p > 0, K the quotient field of R, and K̄ an algebraic closure of K. We use the notation S to denote the spectrum of R. Write η, η̄ and s for the generic point, the geometric generic point, and the closed point corresponding to the natural morphisms Spec K ⟶ S, Spec K̄ ⟶ S, and Spec k ⟶ S, respectively. Let X be a stable curve of genus g_X over S. Write X_η, X_η̄, and X_s for the generic fiber, the geometric generic fiber, and the special fiber, respectively. Moreover, we suppose that X_η is smooth over η.
Let Y_η be a geometrically connected curve over η, and f_η : Y_η ⟶ X_η a finite Galois étale covering over η with Galois group G. By replacing S by a finite extension of S, we may assume that Y_η admits a stable model over S. Then f_η extends uniquely to a G-stable covering (cf. Definition 3.3) f : Y ⟶ X over S (cf. [L2, Theorem 0.2] or Remark 3.3.1 of the present paper). We are interested in understanding the structure of the special fiber Y_s of Y. If the order #G of G is prime to p, then by the specialization theorem for log étale fundamental groups, f_s is an admissible covering (cf. [Y1]); thus, Y_s may be obtained by gluing together tame coverings of the irreducible components of X_s. On the other hand, if p | #G, then f_s is not a finite morphism in general. For example, if char(K) = 0 and char(k) = p > 0, then there exists a Zariski dense subset Z of the set of closed points of X, which may in fact be taken to be X when k is an algebraic closure of F_p, such that for any x ∈ Z, after possibly replacing K by a finite extension of K, there exist a finite group H and an H-stable covering f_W : W ⟶ X such that the fiber (f_W)⁻¹(x) is not finite (cf. [T], [Y2]).
If f⁻¹(x) is not finite, we shall call x a vertical point associated to f and call f⁻¹(x) the vertical fiber associated to x (cf. Definition 3.4). In order to investigate the properties of Y_s, we focus on a geometric invariant σ(Y_s) which is called the p-rank of Y_s (cf. Definition 3.1 and Remark 3.1.1). By the definition of the p-rank of a stable curve, to calculate σ(Y_s), it suffices to calculate the rank of H¹(Γ_{Y_s}, Z) (where Γ_{Y_s} denotes the dual graph of Y_s), the p-ranks of the irreducible components of Y_s which are finite over X_s, and the p-ranks of the vertical fibers of f. In the present paper, we study the p-rank of a vertical fiber and consider the following problem:
Problem 1.1. Let G be a finite p-group, x a vertical point associated to the G-stable covering f : Y ⟶ X, and f⁻¹(x) the vertical fiber associated to x.
(a) Does there exist a minimal bound on the p-rank σ(f⁻¹(x)) (note that σ(f⁻¹(x)) is always bounded by the genus of Y_s)?
(b) Does there exist an explicit formula for the p-rank σ(f⁻¹(x))?
We will answer Problem 1.1 under certain assumptions (cf. Theorem 1.5 and Theorem 1.10). First, let us review some well-known results concerning Problem 1.1.
If x is a non-singular point, M. Raynaud proved the following result (cf. [R, Théorème 1]):
Theorem 1.2. If x is a non-singular point of X_s, and G is an arbitrary p-group, then the p-rank σ(f⁻¹(x)) is equal to 0.
By Theorem 1.2, in order to resolve Problem 1.1, it is sufficient to consider the case where x is a singular point of X_s. In order to explain the results obtained in the present paper, let us introduce some notations. Write X_1 and X_2 for the irreducible components of X_s which contain x, and ψ : Y′ ⟶ X for the normalization of X in the function field of Y. Let y′ ∈ ψ⁻¹(x) be a point in the inverse image of x. Write I_{y′} ⊆ G for the inertia group of y′. In order to calculate the p-rank of f⁻¹(x), since Y/I_{y′} ⟶ X is finite étale over x, by replacing X by the stable model of the quotient Y/I_{y′} (note that Y/I_{y′} is a semi-stable curve over S (cf. [R, Appendice, Corollaire])), we may assume that G is equal to I_{y′}.
Thus, from the point of view of resolving Problem 1.1, we may assume without loss of generality that G = I_{y′}. In the remainder of this section, we shall assume that G = I_{y′} is of order p^r for some positive integer r. Then f⁻¹(x) is connected. With regard to Problem 1.1 (a), M. Saïdi proved the following result (cf. [S, Theorem 1]), by applying Theorem 1.2:
Theorem 1.3. If G is a cyclic p-group, then we have σ(f⁻¹(x)) ≤ #G − 1, where #G denotes the order of G.
Furthermore, there is an open problem posed by Saïdi as follows (cf. [S, Question]):
Problem 1.4. If G is an arbitrary p-group, does there exist a minimal bound on the p-rank σ(f⁻¹(x)) that depends only on the order #G?
Let us introduce some notations. Suppose that G is an abelian p-group. Let
Φ : {1} = G_r ⊆ G_{r−1} ⊆ ⋯ ⊆ G_0 = G
be a maximal filtration of G (i.e., G_i/G_{i+1} ≅ Z/pZ for i = 0, . . . , r−1). It follows from [R, Appendice, Corollaire] that, for i = 0, . . . , r, Y_i := Y/G_i is a semi-stable curve over S.
Write X^sst for Y/G and g for the resulting morphism g : X^sst ⟶ X induced by f. Then we obtain a sequence of Z/pZ-semi-stable coverings (cf. Definition 3.3)
Φ_f : Y = Y_r ⟶^{d_r} Y_{r−1} ⟶^{d_{r−1}} ⋯ ⟶^{d_1} Y_0 = X^sst ⟶^{g} X.
In the following, we use the subscript "red" to denote the reduced induced closed subscheme associated to a scheme. For each i = 1, . . . , r, write ψ_i : Y_i ⟶ Y_0 for the composite morphism d_1 ∘ ⋯ ∘ d_i. For simplicity, we suppose that C := g⁻¹(x)_red = ∪_{j=1}^{n} P_j, where, for each j = 1, . . . , n, P_j is isomorphic to P¹ and meets the other irreducible components of the special fiber X^sst_s of X^sst at precisely two points (i.e., a chain of P¹). Thus, the p-rank σ(f⁻¹(x)) is equal to σ(ψ_r⁻¹(C)). For each i = 1, . . . , r, we define a set of subcurves of C associated to Φ_f, which plays a key role in the present paper, as follows:
E_i^{Φ_f} := ψ_i(the étale locus of d_i|_{ψ_i⁻¹(C)_red} : ψ_i⁻¹(C)_red ⟶ ψ_{i−1}⁻¹(C)_red) ⊆ C.
We shall call E_i^{Φ_f} the i-th étale-chain associated to Φ_f and call the disjoint union E^{Φ_f} := ⊔_i E_i^{Φ_f} the étale-chain associated to Φ_f. For each connected component E of E_i^{Φ_f}, we use the notation l(E) to denote the cardinality of the set of the irreducible components of E and call l(E) the length of E.
We generalize Saïdi's result as follows (see also Theorem 3.15):
Theorem 1.5. If G is an arbitrary abelian p-group, and E_i^{Φ_f} is connected for each i = 1, . . . , r, then we have σ(f⁻¹(x)) ≤ #G − 1.
Remark 1.5.1. If #G is equal to p, then we may construct a Z/pZ-stable covering f : Y ⟶ X such that there exists a singular vertical point x such that the p-rank σ(f⁻¹(x)) is equal to p − 1 (cf. [Y4, Section 4]). Thus, at least in the case where #G = p, #G − 1 is the minimal bound for σ(f⁻¹(x)).
Next, let us consider Problem 1.1 (b). Let {V_i}_{i=0}^{n+1} be a set of irreducible components of the special fiber Y_s of Y such that the following conditions are satisfied: (i) ψ_r(V_i) = P_i if i = 1, . . . , n; (ii) ψ_r(V_0) = X_1 and ψ_r(V_{n+1}) = X_2; (iii) the union ∪_{i=0}^{n+1} V_i is a connected semi-stable subcurve of the special fiber Y_s of Y. Write I_{P_i} ⊆ G for the inertia subgroup of V_i. Note that since G is an abelian p-group, I_{P_i} does not depend on the choices of V_i.
If G is a cyclic p-group, Saïdi obtained an explicit formula for the p-rank σ(f⁻¹(x)) as follows (cf. [S, Proposition 1]):
Theorem 1.6. If G is a cyclic p-group, and I_{P_0} is equal to G, then we have
σ(f⁻¹(x)) = #(G/I_min) − #(G/I_{P_{n+1}}),
where I_min denotes the group ∩_{i=0}^{n+1} I_{P_i}.
For a G-covering of semi-graphs with p-rank, we develop a general method to compute the p-rank (cf. Theorem 2.8). As an application, we generalize Saïdi's formula to the case where G is an arbitrary abelian p-group as follows (cf. Theorem 3.9 and Remark 3.9.1):
Theorem 1.7. If G is an arbitrary abelian p-group, then we have
σ(f⁻¹(x)) = Σ_{i=1}^{n+1} #(G/(I_{P_{i−1}} + I_{P_i})) + 1.
Finally, I would mention that by using the theory of semi-graphs with p-rank, we can generalize Theorem 1.8 to the case where G is an arbitrary p-group. Furthermore, we can obtain a global p-rank formula for the special fiber Y_s (cf. [Y5]).
The present paper contains two parts. In Section 2, we develop the theory of semi-graphs with p-rank and calculate the p-ranks of G-coverings. In Section 3, we construct a semi-graph with p-rank from a vertical fiber of a G-stable covering in a natural way and apply the results of Section 2 to prove Theorem 1.5 and Theorem 1.8.
2 Semi-graphs with p-rank
In this section, we develop the theory of semi-graphs with p-rank. We always assume that G is an abelian p-group with order p^r.
2.1 Definitions
We begin with some general remarks concerning semi-graphs (cf. [M]). A semi-graph G consists of the following data: (i) a set V^G whose elements we refer to as vertices; (ii) a set E^G whose elements we refer to as edges. Any element e ∈ E^G is a set of cardinality 2 satisfying the following property: for any e ≠ e′ ∈ E^G, we have e ∩ e′ = ∅; (iii) a set of maps {ζ_e^G}_{e∈E^G} such that ζ_e : e ⟶ V^G ∪ {V^G} is a map from the set e to the set V^G ∪ {V^G}. For an edge e ∈ E^G, we shall refer to an element b ∈ e as a branch of the edge e. An edge e ∈ E^G is called closed (resp. open) if ζ_e⁻¹({V^G}) = ∅ (resp. ζ_e⁻¹({V^G}) ≠ ∅). A semi-graph will be called finite if both its set of vertices and its set of edges are finite.
In the present paper, we only consider finite semi-graphs. Since a semi-graph can be regarded as a topological space, we shall call G a connected semi-graph if G is connected as a topological space.
Let G be a semi-graph. Write v(G) for the set of vertices of G, e(G) for the set of closed edges of G, and e′(G) for the set of open edges of G. For any element v ∈ v(G), write b(v) for the set of branches ∪_{e∈e(G)∪e′(G)} ζ_e⁻¹(v). For any element e ∈ e(G) ∪ e′(G), write v(e) for the set which consists of the elements of v(G) which are abutted by e. A morphism between semi-graphs G ⟶ H is a collection of maps v(G) ⟶ v(H); e(G) ∪ e′(G) ⟶ e(H) ∪ e′(H); and, for each e_G ∈ e(G) ∪ e′(G) mapping to e_H ∈ e(H) ∪ e′(H), a bijection e_G ⟶ e_H; all of which are compatible with the {ζ_e^G}_{e∈e(G)∪e′(G)} and {ζ_e^H}_{e∈e(H)∪e′(H)}.
A sub-semi-graph G′ of G is a semi-graph satisfying the following properties: (i) v(G′) (resp. e(G′) ∪ e′(G′)) is a subset of v(G) (resp. e(G) ∪ e′(G)); (ii) if e ∈ e(G′), then we have ζ_e^{G′}(e) = ζ_e^G(e); (iii) if e = {b_1, b_2} is an element of e′(G′) such that ζ_e^G(b_1) ∈ v(G′) and ζ_e^G(b_2) ∉ v(G′), then we have ζ_e^{G′}(b_1) = ζ_e^G(b_1) and ζ_e^{G′}(b_2) = {v(G′)}.
Definition 2.1. Let G′ be a sub-semi-graph of a semi-graph G. We define a semi-graph G∖G′ as follows: (i) the set of vertices v(G∖G′) is v(G)∖v(G′); (ii) the set of closed edges e(G∖G′) is e(G)∖e(G′); (iii) the set of open edges e′(G∖G′) is {e ∈ e(G) | v(e) ∩ v(G∖G′) ≠ ∅ in G}; (iv) for any e = {b_i}_{i∈{1,2}} ∈ e(G∖G′) ∪ e′(G∖G′), we have ζ_e^{G∖G′}(b_i) = ζ_e^G(b_i) (resp. ζ_e^{G∖G′}(b_i) = {v(G∖G′)}) if ζ_e^G(b_i) ∉ v(G′) (resp. ζ_e^G(b_i) ∈ v(G′)).
Definition 2.2. (a) Let n be a positive natural number and P_n a semi-graph such that the following conditions hold: (i) v(P_n) = {p_1, . . . , p_n}, e(P_n) = {e_{1,2}, . . . , e_{n−1,n}} and e′(P_n) = {e_{0,1}, e_{n,n+1}}; (ii) v(e_{i,i+1}) = {p_i, p_{i+1}}; (iii) v(e_{0,1}) = {p_1} and v(e_{n,n+1}) = {p_n}. We define G to be a triple (G, σ_G, β_G) which consists of a semi-graph G, a map σ_G : v(G) ⟶ Z and a morphism of semi-graphs β_G : G ⟶ P_n. We shall call G an n-semi-graph with p-rank. We shall refer to G as the underlying semi-graph of G, σ_G as the p-rank map of G, and β_G as the base morphism of G, respectively. We define P_n := (P_n, σ_{P_n}, β_{P_n}) as follows: σ_{P_n}(p_i) is equal to 0 for each i = 1, . . . , n, and β_{P_n} = id_{P_n} is the identity morphism of the semi-graph P_n. We shall call P_n the base n-semi-graph with p-rank.
(b) We define the p-rank σ(G) of G as follows:
σ(G) := Σ_{v∈v(G)} σ_G(v) + Σ_{G_i∈π_0(G)} rank_Z H¹(G_i, Z),
where π_0(−) denotes the set of connected components of (−).
(c) G is called connected if the underlying semi-graph G is a connected semi-graph.
From now on, we only consider connected n-semi-graphs with p-rank. Let G¹ := (G¹, σ_{G¹}, β_{G¹}) and G² := (G², σ_{G²}, β_{G²}) be two n-semi-graphs with p-rank. A morphism between G¹ and G² is defined by a morphism of the underlying semi-graphs β : G¹ ⟶ G² such that β_{G²} ∘ β = β_{G¹}. We use the notation b : G¹ ⟶ G² to denote the morphism of semi-graphs with p-rank determined by β : G¹ ⟶ G², and call β the underlying morphism of b. Note that for any n-semi-graph with p-rank G := (G, σ_G, β_G), there is a natural morphism b_G : G ⟶ P_n determined by the morphism of underlying semi-graphs β_G : G ⟶ P_n.
Write b_l^i (resp. b_r^i) for ζ_{e_{i−1,i}}⁻¹(p_i) (resp. ζ_{e_{i,i+1}}⁻¹(p_i)). For any element v_i ∈ β_G⁻¹(p_i), write b_l(v_i) (resp. b_r(v_i)) for the set
{b ∈ b(v_i) | β_G(b) = b_l^i} (resp. {b ∈ b(v_i) | β_G(b) = b_r^i}).
Definition 2.3. Let b : G¹ := (G¹, σ_{G¹}, β_{G¹}) ⟶ G² := (G², σ_{G²}, β_{G²}) be a morphism of n-semi-graphs with p-rank, β the underlying morphism of b, e ∈ e(G¹) ∪ e′(G¹) an edge, v_1 a vertex of G¹ contained in β_{G¹}⁻¹(p_i), and v_2 := β(v_1) ∈ β_{G²}⁻¹(p_i) the image of v_1.
(a) We shall call b p-étale (resp. p-purely inseparable) at e if #β⁻¹(β(e)) = p (resp. #β⁻¹(β(e)) = 1). We shall call b p-generically étale at v_1 ∈ β_{G¹}⁻¹(p_i) if one of the following étale types holds:
(Type-I) #β⁻¹(v_2) = p and σ_{G¹}(v_1) = σ_{G²}(v_2);
(Type-II) #β⁻¹(v_2) = 1, #b_l(v_1) = p·#b_l(v_2), #b_r(v_1) = p·#b_r(v_2), and σ_{G¹}(v_1) − 1 = p(σ_{G²}(v_2) − 1);
(Type-III) #β⁻¹(v_2) = 1, #b_l(v_1) = #b_l(v_2), #b_r(v_1) = p·#b_r(v_2), and σ_{G¹}(v_1) − 1 = p(σ_{G²}(v_2) − 1) + (#b_l(v_1))(p − 1);
(Type-IV) #β⁻¹(v_2) = 1, #b_l(v_1) = p·#b_l(v_2), #b_r(v_1) = #b_r(v_2), and σ_{G¹}(v_1) − 1 = p(σ_{G²}(v_2) − 1) + (#b_r(v_1))(p − 1);
(Type-V) #β⁻¹(v_2) = 1, #b_l(v_1) = #b_l(v_2), #b_r(v_1) = #b_r(v_2), and σ_{G¹}(v_1) − 1 = p(σ_{G²}(v_2) − 1) + (#b_l(v_1) + #b_r(v_1))(p − 1).
(b) We shall call b purely inseparable at v_1 ∈ β_{G¹}⁻¹(p_i) if #β⁻¹(v_2) = 1, #b_l(v_1) = #b_l(v_2), #b_r(v_1) = #b_r(v_2), and σ_{G¹}(v_1) = σ_{G²}(v_2) hold.
(c) We shall call b a p-covering if the following conditions hold: (i) there exists a Z/pZ-action (which may be trivial) on G¹ (resp. a trivial Z/pZ-action on G²), the underlying morphism β of b is compatible with the Z/pZ-actions, and the natural morphism G¹/(Z/pZ) ⟶ G² induced by b is an isomorphism; (ii) for any v ∈ v(G¹), b is either p-generically étale or purely inseparable at v; (iii) let e ∈ e(G¹) and v(e) = {v, v′}; if b is p-generically étale at v and v′, then b is p-étale at e; (iv) for any v ∈ v(G¹), σ_{G¹}(v) = σ_{G¹}(σ(v)) holds for each σ ∈ Z/pZ.
Note that by the definition of p-covering, the identity morphism of a semi-graph with p-rank is a p-covering.
(d) We shall call b a covering if b is a composite of p-coverings.
(e) We shall call
Φ : {1} = G_r ⊆ G_{r−1} ⊆ ⋯ ⊆ G_1 ⊆ G_0 = G
a maximal filtration of G if G_j/G_{j+1} ≅ Z/pZ for each j = 0, . . . , r−1. Suppose that G¹ (resp. G²) admits a (resp. trivial) G-action (which may be trivial). Then for any maximal filtration Φ of G, there is a sequence of semi-graphs induced by Φ:
G¹ = G_r ⟶^{β_r} G_{r−1} ⟶^{β_{r−1}} ⋯ ⟶^{β_1} G_0,
where G_j denotes the quotient of G¹ by the subgroup G_j. We shall call b a G-covering if for any maximal filtration Φ of G, there exists a set of p-coverings {b_j : G_j ⟶ G_{j−1}, j = 1, . . . , r} such that the following conditions hold: (i) the underlying morphism β of b is compatible with the G-actions, and the natural morphism G¹/G ⟶ G² induced by β is an isomorphism; (ii) the underlying semi-graph of G_j is equal to G_j for each j = 0, . . . , r; (iii) the underlying morphism G_j ⟶ G_{j−1} of b_j is equal to β_j for each j = 1, . . . , r; (iv) the composite morphism b_1 ∘ ⋯ ∘ b_r is equal to b. Then we obtain a sequence of p-coverings:
Φ_{G¹} : G¹ = G_r ⟶^{b_r} G_{r−1} ⟶^{b_{r−1}} ⋯ ⟶^{b_1} G_0 = G².
We shall call Φ_{G¹} a sequence of p-coverings induced by Φ.
(f) Let G be an n-semi-graph with p-rank. We shall call G a covering (resp. G-covering) over P_n if b_G is a covering (resp. G-covering).
(g) Let b : G¹ ⟶ G² be a G-covering, v ∈ v(G¹) a vertex, and e ∈ e(G¹) ∪ e′(G¹) an edge. For any subgroup H ⊆ G, by Definition 2.3 (e), there exist a maximal filtration Φ^H and the sequence of p-coverings
Φ^H_{G¹} : G¹ = G_r ⟶^{b_r^H} G_{r−1} ⟶^{b_{r−1}^H} ⋯ ⟶^{b_1^H} G_0 = G²
induced by Φ^H such that there exists i such that the underlying semi-graph of G_i is isomorphic to G¹/H. We write G¹/H for G_i. Thus, the natural morphism b_1^H ∘ ⋯ ∘ b_i^H : G¹/H ⟶ G² is a covering. Then we define five subgroups of G as follows:
D_v := {σ ∈ G | σ(v) = v},
I_v := the maximal element of {H ⊆ G | G¹ ⟶ G¹/H is purely inseparable at v},
I_v^l(b) := {σ ∈ D_v | σ(b) = b for a branch b ∈ b_l(v)}/I_v,
I_v^r(b) := {σ ∈ D_v | σ(b) = b for a branch b ∈ b_r(v)}/I_v,
I_e := {σ ∈ G | σ(e) = e}.
We shall call D_v (resp. I_v, I_v^l(b), I_v^r(b), I_e) the decomposition group of v (resp. the inertia group of v, the inertia group of a left branch b, the inertia group of a right branch b, the inertia group of e). Moreover, since G is an abelian p-group, the group I_v^l(b) (resp. I_v^r(b)) does not depend on the choice of b ∈ b_l(v) (resp. b ∈ b_r(v)); we then denote this group briefly by I_v^l (resp. I_v^r). Define
D_v^e := D_v/(I_v^l/(I_v^l ∩ I_v^r) ⊕ I_v^r/(I_v^l ∩ I_v^r) ⊕ (I_v^l ∩ I_v^r) ⊕ I_v).
Then we have the following exact sequence:
0 ⟶ I_v^l/(I_v^l ∩ I_v^r) ⊕ I_v^r/(I_v^l ∩ I_v^r) ⊕ (I_v^l ∩ I_v^r) ⊕ I_v ⟶ D_v ⟶ D_v^e ⟶ 0.
Remark 2.3.1. Let G be a G-covering over P_n and v_i ∈ β_G⁻¹(p_i) a vertex of the underlying semi-graph of G. Then we have the following Deuring-Shafarevich type formula (cf. Proposition 3.2 for the Deuring-Shafarevich formula for curves):
σ_G(v_i) − 1 = −#(D_{v_i}/I_{v_i}) + #((D_{v_i}/I_{v_i})/I_{v_i}^l)(#I_{v_i}^l − 1) + #((D_{v_i}/I_{v_i})/I_{v_i}^r)(#I_{v_i}^r − 1).
Let G be a G-covering over P_n. By the definition of G-coverings, for any maximal filtration Φ of G, we have a sequence of p-coverings of n-semi-graphs with p-rank
Φ_G : G = G_r ⟶^{b_r} G_{r−1} ⟶^{b_{r−1}} ⋯ ⟶^{b_1} G_0 = P_n
induced by Φ. For each j = 1, . . . , r, we write V_j^ét for the set
{v ∈ v(G_j) | b_j is étale at v},
and E_j^ét for the set
{e ∈ e(G_j) ∪ e′(G_j) | b_j is étale at e}.
Since (V_j^ét, E_j^ét) admits a natural structure of semi-graph induced by G_j, we may regard (V_j^ét, E_j^ét) as a sub-semi-graph of G_j. Thus, the image β_{G_j}((V_j^ét, E_j^ét)) can be regarded as a sub-semi-graph of P_n.
Definition 2.4. We shall call E_j^{Φ_G} := β_{G_j}((V_j^ét, E_j^ét)) (resp. the disjoint union E^{Φ_G} := ⊔_j E_j^{Φ_G}) the j-th étale-chain (resp. the étale-chain) associated to Φ_G.
2.2 p-ranks and étale-chains of abelian coverings
Let G := (G, σ_G, β_G) be a G-covering over P_n. We introduce two operators for G.
Operator I: First, let us define a G-covering G^⋆[p_i] over P_n. For any p_i ∈ v(P_n), let v_i be an element of β_G⁻¹(p_i).
If #β_G⁻¹(p_i) = 1 (i.e., D_{v_i} = G), then we define G^⋆[p_i] to be G; if #β_G⁻¹(p_i) ≠ 1, we define a new semi-graph G^⋆[p_i] as follows.
Define v(G^⋆[p_i]) (resp. e(G^⋆[p_i]) ∪ e′(G^⋆[p_i])) to be the disjoint union (v(G)∖β_G⁻¹(p_i)) ⊔ {v^⋆} (resp. e(G) ∪ e′(G)).
The collection of maps {ζ_e^{G^⋆[p_i]}}_e is as follows: (i) for any branch b ∉ ∪_{v∈β_G⁻¹(p_i)} b(v), ζ_e^{G^⋆[p_i]}(b) = ζ_e^G(b) if b ∈ e and ζ_e^{G^⋆[p_i]}(b) = ∅ if b ∉ e; (ii) for any v ∈ β_G⁻¹(p_i) and any branch b ∈ b(v), ζ_e^{G^⋆[p_i]}(b) = v^⋆ if b ∈ e and ζ_e^{G^⋆[p_i]}(b) = ∅ if b ∉ e.
We define a map σ_{G^⋆[p_i]} : v(G^⋆[p_i]) ⟶ Z as follows: (i) if v^⋆ ≠ v ∈ v(G^⋆[p_i]), then we have σ_{G^⋆[p_i]}(v) := σ_G(v); (ii) if v = v^⋆, then we have
σ_{G^⋆[p_i]}(v^⋆) := −#(G/I_{v_i}) + Σ_b (#I_{v_i}^l(b) − 1) + Σ_b (#I_{v_i}^r(b) − 1) + 1
= −#(G/I_{v_i}) + #((G/I_{v_i})/I_{v_i}^l)(#I_{v_i}^l − 1) + #((G/I_{v_i})/I_{v_i}^r)(#I_{v_i}^r − 1) + 1.
We define a morphism of semi-graphs β_{G^⋆[p_i]} : G^⋆[p_i] ⟶ P_n as follows: (i) for any v ∈ v(G^⋆[p_i]), β_{G^⋆[p_i]}(v) = p_i if v = v^⋆ and β_{G^⋆[p_i]}(v) = β_G(v) if v ∉ β_G⁻¹(p_i); (ii) if e ∈ e(G^⋆[p_i]) ∪ e′(G^⋆[p_i]), then we have β_{G^⋆[p_i]}(e) = β_G(e).
Thus, the triple G^⋆[p_i] := (G^⋆[p_i], σ_{G^⋆[p_i]}, β_{G^⋆[p_i]}) is an n-semi-graph with p-rank.
Moreover, G^⋆[p_i] admits a natural G-action as follows: (i) the action of G on v(G^⋆[p_i])∖{v^⋆} (resp. e(G^⋆[p_i]) ∪ e′(G^⋆[p_i])) is the action of G on v(G)∖β_G⁻¹(p_i) (resp. e(G) ∪ e′(G)); (ii) for any σ ∈ G, we have σ(v^⋆) = v^⋆.
Let us explain that with the G-action defined above, G^⋆[p_i] is a G-covering over P_n. Let
Φ : {1} = G_r ⊆ G_{r−1} ⊆ ⋯ ⊆ G_1 ⊆ G_0 = G
be an arbitrary maximal filtration of G. Write
Φ_G : G = G_r ⟶^{b_r} G_{r−1} ⟶^{b_{r−1}} ⋯ ⟶^{b_1} G_0 = P_n
for the sequence of p-coverings of n-semi-graphs with p-rank induced by Φ. Note that for each j = 0, . . . , r, G_j is a G/G_j-covering over P_n. By the construction of G_j^⋆[p_i], we have that
Φ_{G^⋆[p_i]} : G^⋆[p_i] = G_r^⋆[p_i] ⟶^{b_r^⋆[p_i]} G_{r−1}^⋆[p_i] ⟶ ⋯ ⟶ P_n
is a sequence of p-coverings of n-semi-graphs with p-rank. Thus, G^⋆[p_i] can be regarded as a G-covering over P_n.
Note that by the construction of G^⋆[p_i], we see that E_j^{Φ_G} = E_j^{Φ_{G^⋆[p_i]}} for each j = 1, . . . , r.
Operator II: Let us define a G-covering G^•[p_i] over P_n. For any p_i ∈ v(P_n), let v_i be an element of β_G⁻¹(p_i), and I_{v_i} the inertia group of v_i. Since G is an abelian group, we may write {v_i^u}_{u∈G/D_{v_i}} for β_G⁻¹(p_i), and {v_i^u}_{u∈G/D_{v_i}} admits a natural action of G on the index set G/D_{v_i}. We define a new semi-graph G^•[p_i] as follows. If #β_G⁻¹(p_i) = #(G/I_{v_i}), we define G^•[p_i] to be G. If #β_G⁻¹(p_i) ≠ #(G/I_{v_i}), we have β_G⁻¹(b_l^i) = {b_l^{i,u,s,t}}_{u∈G/D_{v_i}, s∈I_{v_i}^r/(I_{v_i}^l ∩ I_{v_i}^r), t∈D_{v_i}^e}. Then β_G⁻¹(b_l^i) = {b_l^{i,u,s,t}}_{u∈G/D_{v_i}, s∈I_{v_i}^r/(I_{v_i}^l ∩ I_{v_i}^r), t∈D_{v_i}^e} admits a natural action of G as follows:
for σ ∈ G, σ(b_l^{i,u,s,t}) = b_l^{i,σ̄∘u,s,t} if σ ∉ D_{v_i}, where σ̄ denotes the image of σ under the quotient G ⟶ G/D_{v_i}; σ(b_l^{i,u,s,t}) = b_l^{i,u,σ̄∘s,t} if σ ∈ I_{v_i}^r ∖ (I_{v_i}^l ∩ I_{v_i}^r); σ(b_l^{i,u,s,t}) = b_l^{i,u,s,σ̄∘t} if σ ∉ I_{v_i}^l + I_{v_i}^r + I_{v_i}, where σ̄ denotes the image of σ under the quotient D_{v_i} ⟶ D_{v_i}^e; and σ(b_l^{i,u,s,t}) = b_l^{i,u,s,t} if σ ∈ I_{v_i} + I_{v_i}^l. Similarly, β_G⁻¹(b_r^i) := {b_r^{i,u,s,t}}_{u∈G/D_{v_i}, s∈I_{v_i}^l/(I_{v_i}^l ∩ I_{v_i}^r), t∈D_{v_i}^e} also admits a natural action of G.
Define v(G^•[p_i]) (resp. e(G^•[p_i]) ∪ e′(G^•[p_i])) to be the disjoint union (v(G)∖β_G⁻¹(p_i)) ⊔ {v_{u,t}^•}_{u∈G/D_{v_i}, t∈D_{v_i}^e} (resp. e(G) ∪ e′(G)). {v_{u,t}^•}_{u∈G/D_{v_i}, t∈D_{v_i}^e} admits a natural G-action | {"url":"https://123deta.com/document/zx540n24-semi-graphs-with-p-rank.html","timestamp":"2024-11-14T15:08:31Z","content_type":"text/html","content_length":"215331","record_id":"<urn:uuid:96460428-5f65-417f-98a0-7605512c5985>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00882.warc.gz"}
On some Steffensen-type iterative methods for a class of nonlinear equations - Tiberiu Popoviciu Institute of Numerical Analysis
Consider the nonlinear equations \(H(x):=F(x)+G(x)=0\), with \(F\) differentiable and \(G\) continuous, where \(F,G,H:X \rightarrow X\) are nonlinear operators and \(X\) is a Banach space.
The Newton method for solving the nonlinear equation \(H(x)=0\) cannot be applied, since \(G\) is only continuous and not assumed differentiable, and we propose an iterative method for solving this equation by combining the Newton method with the Steffensen
method: \[x_{k+1} = x_k - \big(F^\prime(x_k)+[x_k,\varphi(x_k);G]\big)^{-1}(F(x_k)+G(x_k)),\] where \(\varphi(x)=x-\lambda (F(x)+G(x))\), \(\lambda >0\) fixed.
The method is obtained by combining the Newton method for the differentiable part with the Steffensen method for the nondifferentiable part.
We show that the R-convergence order of this method is 2, the same as of the Newton method.
We provide some numerical examples and compare different methods for a nonlinear system in \(\mathbb{R}^2\).
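To make the iteration concrete, here is a minimal scalar (one-dimensional) sketch, assuming the usual first-order divided difference [x, y; G] = (G(x) − G(y))/(x − y); the test functions are hypothetical, not taken from the paper:

```python
# Scalar sketch of the combined Newton-Steffensen iteration
# x_{k+1} = x_k - (F'(x_k) + [x_k, phi(x_k); G])^{-1} (F(x_k) + G(x_k)),
# with phi(x) = x - lam*(F(x) + G(x)) and [x, y; G] = (G(x) - G(y))/(x - y).
def solve(F, dF, G, x0, lam=0.1, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        h = F(x) + G(x)
        if abs(h) < tol:
            break
        y = x - lam * h                  # Steffensen auxiliary point
        dd = (G(x) - G(y)) / (x - y)     # divided difference for the nonsmooth part
        x = x - h / (dF(x) + dd)         # Newton step on F, Steffensen step on G
    return x

# Hypothetical test: F(x) = x^2 (smooth), G(x) = |x|/2 (nonsmooth); root at 0.
root = solve(lambda x: x**2, lambda x: 2*x, lambda x: abs(x) / 2, x0=1.0)
print(root, root**2 + abs(root) / 2)     # residual should be ~0
```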
E. Cătinaş
(Tiberiu Popoviciu Institute of Numerical Analysis)
nonlinear equation; Banach space; Newton method; Steffensen method; combined method; nondifferentiable mapping; nonsmooth mapping; r-convergence order.
E. Cătinaş, On some Steffensen-type iterative methods for a class of nonlinear equations, Rev. Anal. Numér. Théor. Approx., 24 (1995) nos. 1-2, pp. 37-43.
[1] I.K. Argyros, On the secant method and the Ptak error estimates, Rev. Anal. Numer. Theor. Approx., 24 (1995) nos. 1โ2, pp. 3โ14.
[2] M. Balazs, A bilateral approximating method for finding the real roots of real equations, Rev. Anal. Numer. Theor. Approx., 21 (1992) no. 2, pp. 111โ117.
[3] E. Catinas, On some iterative methods for solving nonlinear equations, Rev. Anal. Numer. Theor. Approx., 23 (1994) no. 1, pp. 47โ53.
[4] G. Goldner, M. Balazs, Asupra metodei coardei si a unei modificari a ei pentru rezolvarea ecuațiilor operationale neliniare, Stud. Cerc. Mat., 20 (1968), pp. 981-990. [English title: On the method of chord and on its modification for solving the nonlinear operator equations]
[5] G. Goldner, M. Balazs, Observații asupra diferențelor divizate și asupra metodei coardei, Revista de Analiza Numerica si Teoria Aproximatiei, 3 (1974) no. 1, pp. 19-30 (in Romanian). [English title: Remarks on divided differences and method of chords]
[6] L.V. Kantorovici, G.P. Akilov, Functional Analysis, Editura Stiintifica si Enciclopedica, Bucuresti, 1986 (in Romanian).
[7] I. Pavaloiu, On the monotonicity of the sequences of approximations obtained by Steffensen's method, Mathematica, 35(58) (1993) no. 1, pp. 71-76.
[8] T. Yamamoto, A note on a posteriori error bound of Zabrejko and Nguen for Zincenkoโs iteration, Numer. Funct. Anal. Optimiz., 9 (1987) nos. 9&10, pp. 987โ994.
[9] T. Yamamoto, Ball convergence theorems and error estimates for certain iterative methods for nonlinear equations, Japan Journal of Applied Mathematics, 7 (1990) no. 1, pp. 131โ143.
[10] X. Chen, T. Yamamoto, Convergence domains of certain iterative methods for solving nonlinear equations, Numer. Funct. Anal. Optimiz., 10 (1989) 1&2, pp. 37โ48. | {"url":"https://ictp.acad.ro/steffensen-type-iterative-methods-class-nonlinear-equations/","timestamp":"2024-11-02T17:49:32Z","content_type":"text/html","content_length":"126708","record_id":"<urn:uuid:e8ea61ed-47f5-494e-a899-89583c7e0798>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00553.warc.gz"} |
Volume of Hemisphere | MECHHEART
Half of a sphere is called a hemisphere. "Hemi" is a Greek word meaning "half". "Sphaera" is a Latin word meaning "globe" or a spherical/round object. "Hemisphaerium" is the Latin word, and in English it is "hemisphere". For most calculations you will need the volume of the hemisphere, and this article will help you find the equations for it.
Hemisphere Definition
The hemisphere is half of a sphere. In a different way, we can say that when a plane cuts a sphere at the center, two hemispheres are formed.
Volume of Hemisphere
To calculate the volume of a hemisphere, we need to know its radius, which represents the distance from the center of the sphere to any point on its surface.
The easiest way to find the volume of the hemisphere is to treat the hemisphere as part of the full sphere. First, find the volume of the sphere:
V(sphere) = (4/3) × π × r³
A hemisphere is half of the sphere, so divide the volume of the sphere by two. The final equation looks like this:
V(hemisphere) = (2/3) × π × r³
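As a quick sanity check, here is the same formula as a short Python sketch (the radius value is just an example):

```python
import math

def hemisphere_volume(radius):
    """Volume of a hemisphere: V = (2/3) * pi * r^3."""
    return (2.0 / 3.0) * math.pi * radius ** 3

print(hemisphere_volume(3))  # ~56.55 for r = 3
```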
Applications of Hemispheres
Most of the time you will see hemisphere shapes combined with other geometries. However, knowing about hemispheres will help you create different designs. Following are some examples of hemispheres.
โข Architectural Domes
โข Cooking Utensils
โข Geodesic Domes
โข Hemispherical Bowls
We really appreciate you taking the time to read this article about the hemisphere.
So keep in touch with MechHeart and feel free to add some comments here and share your knowledge with us. | {"url":"https://mechheart.com/volume-of-hemisphere/","timestamp":"2024-11-14T13:36:29Z","content_type":"text/html","content_length":"178959","record_id":"<urn:uuid:90ea1754-c34d-4013-8440-dbef23272db6>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00815.warc.gz"}
โข boosting
Measure which accounts for the gain of the Gini index given by a feature in a tree and the weight of that tree.
โข cforest
Permutation principle of the 'mean decrease in accuracy' principle in randomForest. If auc=TRUE (only for binary classification), area under the curve is used as measure. The algorithm used for
the survival learner is 'extremely slow and experimental; use at your own risk'. See party::varimp() for details and further parameters.
โข gbm
Estimation of relative influence for each feature. See gbm::relative.influence() for details and further parameters.
โข h2o
Relative feature importances as returned by h2o::h2o.varimp().
โข randomForest
For type = 2 (the default) the 'MeanDecreaseGini' is measured, which is based on the Gini impurity index used for the calculation of the nodes. Alternatively, you can set type to 1, then the
measure is the mean decrease in accuracy calculated on OOB data. Note, that in this case the learner's parameter importance needs to be set to be able to compute feature importance values. See
randomForest::importance() for details.
โข RRF
This is identical to randomForest.
โข ranger
Supports both measures mentioned above for the randomForest learner. Note, that you need to specifically set the learners parameter importance, to be able to compute feature importance measures.
See ranger::importance() and ranger::ranger() for details.
โข rpart
Sum of decrease in impurity for each of the surrogate variables at each node
โข xgboost
The value implies the relative contribution of the corresponding feature to the model calculated by taking each feature's contribution for each tree in the model. The exact computation of the
importance in xgboost is undocumented. | {"url":"https://www.rdocumentation.org/packages/mlr/versions/2.19.1/topics/getFeatureImportance","timestamp":"2024-11-12T20:09:26Z","content_type":"text/html","content_length":"71270","record_id":"<urn:uuid:32428d32-dadb-4cbe-b2b9-108f2b3517d6>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00231.warc.gz"} |
Theory MessageGA
section‹Theory of Agents and Messages for Security Protocols against the General Attacker›
theory MessageGA imports Main begin
(*Needed occasionally with spy_analz_tac, e.g. in analz_insert_Key_newK*)
lemma [simp] : "A ∪ (B ∪ A) = B ∪ A"
by blast
type_synonym key = nat

consts
  all_symmetric :: bool        ―‹true if all keys are symmetric›
  invKey :: "key=>key"         ―‹inverse of a symmetric key›
specification (invKey)
invKey [simp]: "invKey (invKey K) = K"
invKey_symmetric: "all_symmetric ⟶ invKey = id"
by (rule exI [of _ id], auto)
text‹The inverse of a symmetric key is itself; that of a public key
is the private key and vice versa›
definition symKeys :: "key set" where
"symKeys == {K. invKey K = K}"
datatype ―‹We only allow for any number of friendly agents›
agent = Friend nat | {"url":"https://www.isa-afp.org/browser_info/current/AFP/Inductive_Confidentiality/MessageGA.html","timestamp":"2024-11-12T02:20:37Z","content_type":"application/xhtml+xml","content_length":"519113","record_id":"<urn:uuid:48333764-e270-42d6-9e99-15174567fd22>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00628.warc.gz"} |
Work in progress
2018081200 - Prototype register set for a next generation programming language, and substrate of the extended (2^N * 64)-bit floating point computations below. In theory the register set can be initialized in ANY given horizontal and vertical order.
2016113001 - Complex and inverse complex transformations - AMD / Intel Numeric Processor Extension versus prototype replacements.
2016113002 - Exponential and logarithmic transformations - AMD / Intel Numeric Processor Extension versus prototype replacements. | {"url":"http://long4core.com/progress/","timestamp":"2024-11-13T02:38:38Z","content_type":"text/html","content_length":"12045","record_id":"<urn:uuid:7562fa2a-bb13-4cab-99a3-f593c19e0c62>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00260.warc.gz"} |
Wide Area Networks
Project: The purpose of the work is the theoretical and experimental study, using simulation in Matlab, of the performance of a digital telecommunication system. We consider the following model of a digital telecommunication system (block diagram: Digital Input Signal -> Transmitter: Channel Encoding, Modulation -> Channel with additive Noise -> Receiver: Demodulation, Decision, Channel Decoding -> Digital Output Signal).
A sequence of bits appears at the input of the system. To improve the performance of the system, based on the criterion of average bit error probability, you will implement a simple channel coding, namely repetition coding. According to it, the same bit of information is repeated several times. Specifically, each bit arriving at the encoder input is repeated N times (where N ∈ {1, 3, 5, ...}). The output of the encoder is input to the digital modulator, where the input bits are mapped to the input symbols as follows:
Bits 00: symbol s1 = (1 + j)/√2
Bits 01: symbol s2 = (−1 + j)/√2
Bits 11: symbol s3 = (−1 − j)/√2
Bits 10: symbol s4 = (1 − j)/√2
The symbol that is emitted is x = √Es · si, where Es is the energy of the transmitted symbols. Gaussian noise, modeled as a complex random variable which follows the normal distribution with mean value 0 and variance σ² (for both the real and the imaginary part, e.g. σ² = 10⁻⁶), is added to the transmitted signal. The received signal at the receiver is
r = x + w.
The receiver makes the decision about which symbol has been sent based on the minimum distance criterion. In this system, the symbol error probability can be calculated as
Ps = 2·Q(√(Es/(2σ²)))·(1 − (1/2)·Q(√(Es/(2σ²)))),
where Q(.) is the complementary error function defined as
Q(x) = (1/√(2π)) ∫ from x to ∞ of exp(−u²/2) du.
Demodulation is then performed, where the received symbols are mapped to bits using the previous table. At the end, the decoding process takes place, according to which, in each group of N repetitions of a bit, the number of ones and zeros is counted and a decision is made in favor of the digit that appears more times.
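The following is a compact Monte Carlo sketch of the chain just described, written in Python rather than the assignment's Matlab; the 2σ² = 1 normalization and the sample size are my own illustrative choices:

```python
import numpy as np

# Monte Carlo sketch: Gray-mapped QPSK with N-fold repetition coding
# and majority-vote decoding, minimum-distance detection at the receiver.
rng = np.random.default_rng(0)
GRAY = {(0, 0): (1 + 1j), (0, 1): (-1 + 1j), (1, 1): (-1 - 1j), (1, 0): (1 - 1j)}
SYMS = {b: s / np.sqrt(2) for b, s in GRAY.items()}

def ber(snr_db, N=3, n_bits=200_000):
    Es, sigma2 = 10 ** (snr_db / 10), 0.5         # fix 2*sigma^2 = 1, vary Es
    bits = rng.integers(0, 2, n_bits)
    coded = np.repeat(bits, N)                    # repetition encoder
    if coded.size % 2:                            # pad to whole symbols
        coded = np.append(coded, 0)
    pairs = coded.reshape(-1, 2)
    x = np.sqrt(Es) * np.array([SYMS[tuple(p)] for p in pairs])
    w = np.sqrt(sigma2) * (rng.standard_normal(x.size) + 1j * rng.standard_normal(x.size))
    r = x + w
    ref = np.sqrt(Es) * np.array(list(SYMS.values()))
    idx = np.argmin(np.abs(r[:, None] - ref[None, :]), axis=1)   # minimum distance
    demod = np.array(list(SYMS.keys()))[idx].reshape(-1)[:N * n_bits]
    votes = demod.reshape(-1, N).sum(axis=1)                     # majority decoder
    decoded = (votes > N / 2).astype(int)
    return np.mean(decoded != bits)

print(ber(6, N=3))
```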
To calculate the average (experimental) bit error probability (BEP), the relationship
PB = Ps/2
can be used. It should be noted that Es = 2·Eb, where Eb is the average bit energy. For the corresponding calculation of the average bit error probability using encoding, you will use the previous relation in combination with the fact that a Binomial random variable is created. Specifically, we have a sequence of N independent Bernoulli trials with a "success" probability in each of them equal to PB. The cumulative probability of ⌊N/2⌋ + 1, ..., N errors needs to be calculated. Finally, note that
(SNR) dB = 10·log10(Es/(2σ²)).
Questions
A grayscale image is sent to the telecommunication system under study. To convert any jpeg image to a grayscale image, as well as to convert the image to a bitmap, you can use the material related to image processing with the help of Matlab given in eclass.
1. The column table of the image is sent to said digital telecommunication system. Calculate the average bit error probability for different values of (SNR)dB in [0-10] dB. The results should be given in a semi-logarithmic graph (logarithmic with respect to the y-axis). The graphical representations that should be made concern:
a. The theoretical BEP, without repetition coding
b. The experimental BEP, without repetition coding
c. The theoretical BEP, with repetition coding, where N=3
d. The experimental BEP, with repetition coding, where N=3
e. The theoretical BEP, with repetition coding, where N=5
f. The experimental BEP, with repetition coding, where N=5
g. The theoretical BEP, with repetition coding, where N=7
h. The experimental BEP, with repetition coding, where N=7
2. Suppose the symbol mapping had been done in the following way:
Bits 00: symbol s1 = (1 + j)/√2
Bits 01: symbol s2 = (−1 + j)/√2
Bits 10: symbol s3 = (−1 − j)/√2
Bits 11: symbol s4 = (1 − j)/√2
Give the graphs for the BEP:
a. The theoretical BEP as a function of (SNR)dB in [0-10] dB, without repetition coding and using the first mapping
b. The experimental BEP as a function of (SNR)dB in [0-10] dB, without repetition coding and using the first mapping
c. The theoretical BEP as a function of (SNR)dB in [0-10] dB, without repetition coding and using the second mapping
d. The experimental BEP as a function of (SNR)dB in [0-10] dB, without repetition coding and using the second mapping | {"url":"https://tutorbin.com/questions-and-answers/wide-area-networks-project-the-purpose-of-the-work-is-the-theoretical-and-experimental-study-using-simulation-in-matlab","timestamp":"2024-11-05T12:22:46Z","content_type":"text/html","content_length":"71846","record_id":"<urn:uuid:b04a946a-8f39-40c5-801f-b0a091796c45>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00223.warc.gz"}
Multivariable and Vector Analysis
by W W L Chen
Publisher: Macquarie University 2008
Number of pages: 203
This set of notes is suitable for an introduction to some of the basic ideas in multivariable and vector analysis: functions of several variables, differentiation, implicit and inverse function
theorems, higher order derivatives, double and triple integrals, change of variables, paths, vector fields, integrals over paths, parametrized surfaces, integrals over surfaces, integration theorems.
Download or read it online for free here:
Download link
(multiple PDF files)
Similar books
Introduction to Vectors
Christopher C. Tisdell
Bookboon
Vectors provide a fascinating tool to describe motion and forces in physics and engineering. This book takes learning to a new level by combining written notes with online video. Each lesson is linked with a YouTube video from Dr Chris Tisdell.
Vector Analysis and Quaternions
Alexander Macfarlane
John Wiley & Sons
Contents: Addition of Coplanar Vectors; Products of Coplanar Vectors; Coaxial Quaternions; Addition of Vectors in Space; Product of Two Vectors; Product of Three Vectors; Composition of Quantities; Spherical Trigonometry; Composition of Rotations.
Vector Analysis Notes
Matthew Hutton
matthewhutton.com
Contents: Line Integrals; Gradient Vector Fields; Surface Integrals; Divergence of Vector Fields; Gauss Divergence Theorem; Integration by Parts; Green's Theorem; Stokes Theorem; Spherical Coordinates; Complex Differentiation; Complex power series...
Vector Calculus: Course
Peter Saveliev
This is a two-semester course in n-dimensional calculus with a review of the necessary linear algebra. It covers the derivative, the integral, and a variety of applications. An emphasis is made on the coordinate free, vector analysis.
Unlocking the Power of the Most Popular Excel Formulas » THEAMITOS
Unlocking the Power of the Most Popular Excel Formulas
Microsoft Excel is an indispensable tool in the world of business, finance, data analysis, and beyond. With its powerful capabilities, Excel enables users to perform complex calculations, analyze large datasets, and automate repetitive tasks. However, to truly harness the power of Excel, it's essential to master the most popular Excel formulas. These formulas are the backbone of efficient data management and can significantly enhance productivity in any career that involves working with data.
In this article, we'll explore the most popular Excel formulas, why they're essential, and how mastering them can make you an Excel power user. Whether you're a beginner or an advanced user, understanding these formulas will take your Excel skills to the next level.
1. SUM Function
The SUM function is one of the most basic yet powerful Excel formulas. It allows you to add up a range of numbers, saving you time and ensuring accuracy. This formula is widely used in financial
modeling, budgeting, and any task that involves adding numbers across rows or columns.
This formula will add all the values from cell B2 to B10.
2. IF Function
The IF function is a logical formula that returns one value if a condition is true and another value if it is false. This function is incredibly versatile and is often used for decision-making
processes within Excel.
=IF(A1>10, "Yes", "No")
This formula will return "Yes" if the value in cell A1 is greater than 10; otherwise, it will return "No."
3. XLOOKUP Function
XLOOKUP is the modern, more powerful replacement for VLOOKUP. It allows you to search a range or an array, and unlike VLOOKUP, it can search both vertically and horizontally. XLOOKUP is more
flexible, does not require the lookup value to be in the first column, and can return results from any column.
=XLOOKUP(A2, B2:B10, C2:C10)
This formula looks for the value in cell A2 within the range B2:B10 and returns the corresponding value from the range C2:C10. If the value is not found, you can also specify a default value to return.
4. INDEX-MATCH Combination
While XLOOKUP has largely replaced the need for the INDEX-MATCH combination, it's still worth knowing as it provides flexibility in certain scenarios. The INDEX-MATCH combination allows for complex lookups and can be used in place of VLOOKUP or HLOOKUP when working with large datasets.
=INDEX(B2:B10, MATCH(A1, C2:C10, 0))
This formula searches for the value in cell A1 within the range C2:C10 and returns the corresponding value from the range B2:B10.
5. COUNTIF Function
The COUNTIF function is used to count the number of cells that meet a specific criterion within a range. This is particularly useful in scenarios where you need to count occurrences of a specific
value or text in a dataset.
=COUNTIF(A2:A10, "Completed")
This formula will count the number of cells in the range A2:A10 that contain the word "Completed."
6. TEXT Function
The TEXT function is used to convert numbers to text, or format numbers as text in a specific way. This is particularly useful for displaying dates, times, or numeric values in a custom format.
=TEXT(A1, "MM/DD/YYYY")
This formula will convert the date in cell A1 to the "MM/DD/YYYY" format.
7. CONCATENATE Function
The CONCATENATE function (or its modern equivalent, the CONCAT function) is used to combine text from different cells into one cell. This is useful for creating custom labels, combining names, or
merging data from multiple columns.
=CONCATENATE(A1, " ", B1)
This formula will combine the text in cells A1 and B1, with a space in between.
8. SUMIF and SUMIFS Functions
The SUMIF function adds all numbers in a range that meet a single criterion, while the SUMIFS function allows you to sum values based on multiple criteria. These functions are essential for
conditional summing in Excel.
Example (SUMIF):
=SUMIF(A2:A10, ">100", B2:B10)
This formula adds all values in the range B2:B10 where the corresponding value in A2:A10 is greater than 100.
Example (SUMIFS):
=SUMIFS(B2:B10, A2:A10, ">100", C2:C10, "East")
This formula adds all values in B2:B10 where the corresponding values in A2:A10 are greater than 100 and the values in C2:C10 are "East."
9. PMT Function
The PMT function is used to calculate the payment for a loan based on constant payments and a constant interest rate. This formula is essential in financial modeling and is widely used in mortgage
=PMT(0.05/12, 360, 300000)
This formula calculates the monthly payment for a loan of $300,000 at an annual interest rate of 5% over 30 years.
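Under the hood this is the standard annuity formula, payment = P·r / (1 − (1 + r)^(−n)); a quick Python check of the example above (note that Excel's PMT reports the result as a negative cash flow):

```python
def pmt(rate, nper, pv):
    """Standard annuity payment: rate per period, number of periods, present value."""
    return pv * rate / (1 - (1 + rate) ** -nper)

# Mirrors =PMT(0.05/12, 360, 300000); Excel returns this as a negative number.
print(round(pmt(0.05 / 12, 360, 300_000), 2))  # ~1610.46
```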
10. CHOOSE Function
The CHOOSE function returns a value from a list of values based on a specified position. This is useful for scenarios where you want to select a value based on a certain condition or index.
=CHOOSE(2, "Red", "Blue", "Green")
This formula will return "Blue" because it is the second value in the list.
11. AVERAGEIF and AVERAGEIFS Functions
The AVERAGEIF function calculates the average of cells that meet a specific criterion, while the AVERAGEIFS function allows for multiple criteria. These functions are particularly useful for finding
average values based on conditions.
Example (AVERAGEIF):
=AVERAGEIF(A2:A10, ">10")
This formula calculates the average of all values in the range A2:A10 that are greater than 10.
Example (AVERAGEIFS):
=AVERAGEIFS(B2:B10, A2:A10, ">10", C2:C10, "<20")
This formula calculates the average of values in B2:B10 where the corresponding values in A2:A10 are greater than 10 and the values in C2:C10 are less than 20.
12. LEFT, RIGHT, and MID Functions
These functions extract specific portions of text from a string. LEFT extracts a certain number of characters from the beginning, RIGHT extracts from the end, and MID extracts a specific number of
characters from a starting point.
Example (LEFT):
=LEFT(A1, 5)
This formula extracts the first five characters from the text in cell A1.
Example (RIGHT):
=RIGHT(A1, 3)
This formula extracts the last three characters from the text in cell A1.
Example (MID):
=MID(A1, 3, 5)
This formula extracts five characters from the text in cell A1, starting at the third character.
Conclusion: The Power of Excel Formulas
Mastering these most popular Excel formulas will transform the way you work with data, making your processes more efficient and your analyses more accurate. Whether you're calculating totals, making decisions, or analyzing data trends, these formulas will empower you to unlock the full potential of Excel.
By incorporating these formulas into your daily workflow, you can streamline tasks, reduce errors, and focus on deriving insights from your data. Excel is more than just a spreadsheet program; it's a powerful tool for data-driven decision-making, and these formulas are the keys to unlocking that power.
| {"url":"https://theamitos.com/top-101-most-popular-excel-formulas-new-free-pdf-2024/","timestamp":"2024-11-05T21:39:11Z","content_type":"text/html","content_length":"214692","record_id":"<urn:uuid:d6c1b9cc-9a86-4a49-942c-e9fd08289cd2>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00543.warc.gz"}
Stochastic differential equations – The Dan MacKinlay stable of variably-well-consider'd enterprises
Stochastic differential equations
September 19, 2019 – June 22, 2021
dynamical systems
Lรฉvy processes
signal processing
stochastic processes
time series
By analogy with differential equations, which use vanilla calculus to define deterministic dynamics, we can define stochastic differential equations, which use stochastic calculus to define random
SDEs are time-indexed, causal stochastic processes which notionally integrate an ordinary differential equation over some driving noise. Stochastic partial differential equations are to SDEs as PDEs
are to ODEs.
Useful in state filters, optimal control, financial mathematics, etc.
Usually, we talk about differential equations, but the broader and I think more common class of SDEs is naturally defined through integral equations rather than differential equations, in the sense that the driving noise process is an integrator. When you differentiate the noise process, it leads, AFAICT, to Malliavin calculus, or something? I am not sure about that theory.
Useful tools: infinitesimal generators, martingales, Dale Roberts' cheat sheet, Itô-Taylor expansions…
Warning: beware the terminology problem that some references take SDEs to be synonymous with Itô processes, whose driving noise is Brownian. Some writers, when they really want to be clear that they are not assuming Brownian motion, but some SDEs driven by Lévy noise, use the term sparse stochastic processes.
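For concreteness, a minimal Euler-Maruyama sketch (my own illustration, not taken from the cited references) for the Itô SDE dX_t = μX_t dt + σX_t dW_t, i.e. geometric Brownian motion:

```python
import numpy as np

# Euler-Maruyama for dX_t = mu*X_t dt + sigma*X_t dW_t (geometric Brownian motion).
rng = np.random.default_rng(42)
mu, sigma, x0 = 0.1, 0.3, 1.0
T, n_steps = 1.0, 1000
dt = T / n_steps

x = np.empty(n_steps + 1)
x[0] = x0
dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)  # Brownian increments
for k in range(n_steps):
    x[k + 1] = x[k] + mu * x[k] * dt + sigma * x[k] * dW[k]

# Exact solution for comparison: X_T = x0 * exp((mu - sigma^2/2) T + sigma W_T)
exact = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dW.sum())
print(x[-1], exact)
```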
1 Pathwise solutions
Random ODEs is the highly ambiguous phrasing of Bongers and Mooij (2018) when referring to a certain class of smooth SDEs; better might be "noise-driven ODEs". In practice, this is useful for the kind of smooth systems I encounter often.
AFAICT we can consider these in the context of Wong-Zakai approximation in the classical SDE context, with respect to Stratonovich integrals. However, the cleanest introduction I have is in the context of rough paths so let's look at it in that context instead. For a classical setting, see (Teye 2010; Wedig 1984).
Square Root Transformation - (Market Research Tools) - Vocab, Definition, Explanations | Fiveable
Square Root Transformation
from class:
Market Research Tools
Square root transformation is a mathematical technique used to stabilize variance and make data more normally distributed by taking the square root of each data point. This transformation can help
address issues like skewness, especially when dealing with count data or data that contain outliers, improving the robustness of statistical analyses.
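A tiny Python illustration with synthetic right-skewed count data (the data and numbers are invented for this example):

```python
import numpy as np

# Square root transformation on synthetic right-skewed count data.
rng = np.random.default_rng(7)
counts = rng.poisson(lam=3, size=10_000)   # non-negative counts, right-skewed
transformed = np.sqrt(counts)              # compresses large values more than small ones

for name, data in [("raw", counts), ("sqrt", transformed)]:
    mean, var = data.mean(), data.var()
    skew = ((data - mean) ** 3).mean() / data.std() ** 3
    print(f"{name}: mean={mean:.2f} var={var:.2f} skew={skew:.2f}")
# The skewness of the transformed data is noticeably closer to 0.
```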
5 Must Know Facts For Your Next Test
1. Square root transformation is particularly useful for count data, where values are non-negative integers, like the number of occurrences of an event.
2. By applying this transformation, it can reduce right skewness in the data, making it closer to a normal distribution.
3. This technique is often considered when assessing models like ANOVA or regression, where normality of residuals is an assumption.
4. While square root transformation can help with variance stabilization, it may not be appropriate for all datasets, particularly those with negative values.
5. It is essential to interpret results after transformation carefully, as the transformed values may not have direct real-world meaning without back-transformation.
Review Questions
• How does square root transformation specifically help in addressing skewness and variance in data?
Square root transformation helps reduce right skewness by compressing larger values more than smaller ones, leading to a more symmetric distribution. This adjustment stabilizes variance across the dataset, making it easier to meet statistical assumptions required for analyses like ANOVA and regression. By ensuring that the spread of data points is more consistent, it enhances the reliability of results derived from statistical models.
• In what situations would you consider using square root transformation over other types of transformations when handling outliers?
Square root transformation should be considered when dealing with count data that may have a right-skewed distribution due to the presence of outliers. Unlike logarithmic transformations that can't handle zero or negative values, square root can effectively adjust counts while remaining applicable to datasets where values are non-negative. When exploring alternatives for outlier management, understanding the distribution and nature of your data will guide whether this transformation or others would be most beneficial.
• Evaluate how square root transformation impacts the interpretation of statistical results compared to raw data analysis.
When using square root transformation, interpreting statistical results requires careful consideration because the transformed data no longer reflects original counts directly. For example, regression coefficients derived from transformed data indicate changes in square roots of the outcome variable rather than the original units. Therefore, analysts must remember to back-transform results for meaningful conclusions and communicate these changes clearly to stakeholders. This necessity highlights how transformations can enhance analytical rigor while complicating interpretation.
"Square Root Transformation" also found in:
ยฉ 2024 Fiveable Inc. All rights reserved.
APยฎ and SATยฎ are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website. | {"url":"https://library.fiveable.me/key-terms/market-research-tools-and-techniques-for-data-collection-and-analysis/square-root-transformation","timestamp":"2024-11-12T02:33:53Z","content_type":"text/html","content_length":"181160","record_id":"<urn:uuid:39cdf59e-421f-4756-a848-92a18173d7e6>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00546.warc.gz"} |
Protocol Update: Introducing Interest Rate into Everlasting Options, Power Perps, and Gamma Swaps.
A couple of weeks ago, we made an update to Deri Protocol's funding rate algorithm for perpetual futures to better align with the rates seen on major centralized exchanges (CEXs). This update created an inconsistency across the different derivative types available on Deri: while perpetual futures are based on a non-zero risk-free interest rate, the other three derivative types provided by Deri Protocol assume a zero interest rate. Hence we are bringing forward another update to the protocol so the funding fee mechanisms are consistent across all our derivative products.
What Was Changed
We have added an interest rate component as a baseline to the funding rate calculations of Everlasting Options, Power Perps, and Gamma Swaps. The BaselineDailyFundingRate has been set to 0.03%,
consistent with that of the perpetual futures, as well as the rates adopted by leading futures exchanges such as Binance and BitMEX.
The New Funding Fee Algorithms
Everlasting Options
The update for the options algorithm is a bit complicated as the original pricing formula calculating the theoretical prices of everlasting options was based on zero risk-free interest rate. To make
this upgrade, we re-derived the pricing formula for the everlasting option, incorporating a non-zero risk-free interest rate.
Please refer to this paper if you are interested in the details of the pricing.
Power Perps
The original paper already provided the formula with the interest rate so we simply changed that parameterโs value to 0.03% (daily).
Gamma Swap
Gamma Swap is a composite derivative consisting of power perps and perpetual futures. We simply updated the two parts in the same way that power perps and perpetual futures were handled, respectively.
Impact on Traders
Everlasting Options
The more in-the-money the option is, the more it is affected. That is, puts with higher strikes and calls with lower strikes are more affected. For out-of-the-money options (e.g. BTCUSD-100000-C), the difference is very small and negligible. The most affected are the deep in-the-money options. For example, a long position in BTCUSD-30000-C incurs an additional funding fee similar to that of perpetual futures. Conversely, a long position in BTCUSD-100000-P pays less in funding, with the difference approaching the funding rate of perpetual futures. This actually results in a negative funding rate for BTCUSD-100000-P. That is, a short position in BTCUSD-100000-P actually receives funding fees, which makes sense.
The PnL of a long position in BTCUSD-30000-C is almost linear (Delta ≈ 1), making it closely resemble a long futures position. On the other hand, a long position in BTCUSD-100000-P (Delta ≈ -1) behaves almost like a short futures position. It would actually be inconsistent if this were not the case, as an arbitrage opportunity would arise if the funding fee for BTCUSD-30000-C did not align closely with that of long BTCUSD futures or if the funding for BTCUSD-100000-P did not align with short BTCUSD futures.
Power Perps
Long positions pay more funding fees and, accordingly, short positions receive more.
Gamma Swap
For a gamma swap position still near its entry price, the impact is negligible, as the effects on the power part and on the futures part cancel each other out.
About Deri Protocol
Deri, your option, your future!
Deri is the DeFi way to trade derivatives: to hedge, to speculate, to arbitrage, all on chain. With Deri Protocol, trades are executed under the AMM paradigm and positions are tokenized as NFTs, highly composable with other DeFi projects. Having provided an on-chain mechanism to exchange risk exposures precisely and capital-efficiently, Deri Protocol has minted one of the most important blocks of
the DeFi infrastructure. | {"url":"https://deri-protocol.medium.com/protocol-update-introducing-interest-rate-into-everlasting-options-power-perps-and-gamma-swaps-198d5479698c?source=user_profile_page---------2-------------e1759d373404---------------","timestamp":"2024-11-05T03:59:45Z","content_type":"text/html","content_length":"104948","record_id":"<urn:uuid:f27b9f8f-8c0e-4956-8e48-b76bf971bb6b>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00841.warc.gz"} |
RandomState.randn(d0, d1, ..., dn)
Return a sample (or samples) from the "standard normal" distribution.
If positive, int_like or int-convertible arguments are provided, randn generates an array of shape (d0, d1, ..., dn), filled with random floats sampled from a univariate "normal" (Gaussian) distribution of mean 0 and variance 1 (if any of the d_i are floats, they are first converted to integers by truncation).
This is a convenience function. If you want an interface that takes a tuple as the first argument, use numpy.random.standard_normal instead.
d0, d1, โฆ, dn : int, optional
The dimensions of the returned array, should be all positive. If no argument is given a single Python float is returned.
Z : ndarray or float
A (d0, d1, ..., dn)-shaped array of floating-point samples from the standard normal distribution, or a single such float if no parameters were supplied.
See also
standard_normal: Similar, but takes a tuple as its argument.
For random samples from N(mu, sigma^2), use:
sigma * np.random.randn(...) + mu
>>> np.random.randn()
2.1923875335537315 #random
Two-by-four array of samples from N(3, 6.25):
>>> 2.5 * np.random.randn(2, 4) + 3
array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], #random
[ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) #random | {"url":"https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.random.RandomState.randn.html","timestamp":"2024-11-04T02:22:56Z","content_type":"text/html","content_length":"10592","record_id":"<urn:uuid:bb618209-0ac7-4d2c-acbe-8746ca3807f2>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00532.warc.gz"} |
P vs NP
In my recent work, I have developed a new perspective on the Boolean satisfiability problem (SAT), focusing particularly on formulas in Conjunctive Normal Form (CNF). This exploration introduces a
classification of SAT subproblems based on the nature of the literals within the clauses, and a novel concept called saturation as a measure of contradiction. Below, I will provide an overview of
this approach and highlight its key implications for SAT complexity and algorithmic design.
1. Classification of Clauses
I categorize the clauses of a CNF formula into three distinct types:
P: Clauses containing only positive literals.
N: Clauses containing only negative literals.
M: Clauses that include at least one positive and one negative literal.
Each clause in a CNF formula fits into exactly one of these categories, leading to seven potential subproblem types in SAT:
P (only positive clauses)
N (only negative clauses)
M (mixed clauses)
M ∧ N
M ∧ P
P ∧ N
M ∧ P ∧ N (the most general case)
2. Triviality of Subproblems
One of the significant findings is that the subproblems P, N, M (this becomes clear once the formula is put in algebraic/logical form), M ∧ N, M ∧ P, and P ∧ N are trivially satisfiable; the P ∧ N case is satisfiable only if there is no direct contradiction, such as x_i and ¬x_i, due to the saturation concept. For example:
In subproblem P, assigning all variables the value 1 satisfies every clause.
Similarly, for N, assigning 0 satisfies every clause.
M is also trivial, since any constant assignment (all 0s or all 1s) satisfies its mixed clauses.
For M ∧ N, assigning all variables the value 0 satisfies every clause.
For M ∧ P, assigning all variables the value 1 satisfies every clause.
This triviality offers insights into simpler SAT instances (a sketch of the classification follows below), but the combination of these subproblems (especially M ∧ P ∧ N) becomes much more challenging.
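Here is a minimal Python sketch of this classification and the trivial constant assignments (encoding literals as signed integers is my own convention, not part of the write-up):

def clause_type(clause):
    # clause is an iterable of non-zero ints: literal i is positive, -i is negated
    has_pos = any(lit > 0 for lit in clause)
    has_neg = any(lit < 0 for lit in clause)
    if has_pos and has_neg:
        return 'M'
    return 'P' if has_pos else 'N'

def trivial_assignment(cnf):
    # returns a satisfying constant assignment (0 or 1 for every variable)
    # for the trivially satisfiable subproblems, or None otherwise
    types = {clause_type(c) for c in cnf}
    if 'N' not in types:
        return 1   # all-ones satisfies P, M, and M-and-P formulas
    if 'P' not in types:
        return 0   # all-zeros satisfies N, M, and M-and-N formulas
    return None    # P-and-N or M-and-P-and-N: no constant assignment guaranteed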
3. The Concept of Saturation
In this work, I introduce the idea of saturation, which is essentially a condition of over-determination in SAT. A formula is said to be "saturated" when it includes all possible variations (complementary forms) of a clause. This concept leads to a useful theorem:
If a formula contains all clauses from the complement set of any of its clauses, the formula is unsatisfiable.
This result stems from the fact that if every possible assignment is blocked by at least one clause, there is no way to satisfy the entire formula. Saturation thus provides a direct way to detect
unsatisfiability in some cases.
4. A Global Saturation Conjecture
One of the open questions in this research is whether there exists a critical saturation threshold beyond which a formula becomes unsatisfiable. I propose the following conjecture:
There exists a critical saturation degree s* such that, if a formula has a saturation degree s ≥ s*, it becomes unsatisfiable.
Understanding and quantifying this threshold could lead to new tools for analyzing the complexity of SAT instances and potentially developing more efficient algorithms for detecting unsatisfiability.
Right now I'm exploring another approach using generating functions to count the variable assignments that satisfy the formula.
I'd be very grateful if you could verify this or help me finish the study :)
section modulus of rectangular section calculation for Calculations
29 Mar 2024
Popularity: ⭐⭐⭐
Section Modulus of Rectangular Section
This calculator provides the calculation of section modulus of a rectangular section.
Calculation Example: The section modulus of a rectangular section is a measure of its resistance to bending. It is given by the formula S = (b * h^2) / 6, where b is the width of the section and h is
its height.
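For instance, a direct Python version of the formula (the function name is chosen here for illustration):

def section_modulus_rect(b, h):
    # S = (b * h**2) / 6 for a rectangular section of width b and height h
    return b * h ** 2 / 6

print(section_modulus_rect(100.0, 200.0))  # ~666666.67 (mm^3 when b, h are in mm)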
Related Questions
Q: What is the importance of section modulus in structural design?
A: Section modulus is an important property in structural design as it helps to determine the bending strength of a structural member. Engineers use section modulus to select the appropriate size and
shape of structural members to ensure that they can safely resist the applied loads.
Q: How does section modulus affect the design of a beam?
A: Section modulus affects the design of a beam by providing information about its bending strength. Engineers can use this information to optimize the size and shape of the beam, ensuring that it
can safely support the applied loads.
Calculation Expression
Section Modulus: The section modulus of a rectangular section is given by S = (b * h^2) / 6.
Calculated values
Considering these as variable values: b=100.0, h=200.0, the calculated value(s) are given in table below
| Section Modulus (S) | 666666.67 |
App in action
The video below shows the app in action. | {"url":"https://blog.truegeometry.com/calculators/section_modulus_of_rectangular_section_calculation_for_Calculations.html","timestamp":"2024-11-08T08:14:25Z","content_type":"text/html","content_length":"25392","record_id":"<urn:uuid:5b2deaec-a72d-4423-9673-3d76b72ccdad>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00712.warc.gz"} |
Performing Fits and Analyzing Outputs
As shown in the previous chapter, a simple fit can be performed with the minimize() function. For more sophisticated modeling, the Minimizer class can be used to gain a bit more control, especially
when using complicated constraints or comparing results from related fits.
The minimize() function is a wrapper around Minimizer for running an optimization problem. It takes an objective function (the function that calculates the array to be minimized), a Parameters
object, and several optional arguments. See Writing a Fitting Function for details on writing the objective function.
minimize(fcn, params, method='leastsq', args=None, kws=None, iter_cb=None, scale_covar=True, nan_policy='raise', reduce_fcn=None, calc_covar=True, max_nfev=None, **fit_kws)
Perform the minimization of the objective function.
The minimize function takes an objective function to be minimized, a dictionary (Parameters) containing the model parameters, and several optional arguments including the fitting method.
- fcn (callable) – Objective function to be minimized. When method is 'leastsq' or 'least_squares', the objective function should return an array of residuals (difference between model and data) to be minimized in a least-squares sense. With the scalar methods the objective function can either return the residuals array or a single scalar value. The function must have the signature:
fcn(params, *args, **kws)
- params (Parameters) – Contains the Parameters for the model.
- method (str, optional) – Name of the fitting method to use. Valid values are:
- 'leastsq': Levenberg-Marquardt (default)
- 'least_squares': Least-Squares minimization, using Trust Region Reflective method
- 'differential_evolution': differential evolution
- 'brute': brute force method
- 'basinhopping': basinhopping
- 'ampgo': Adaptive Memory Programming for Global Optimization
- 'nelder': Nelder-Mead
- 'lbfgsb': L-BFGS-B
- 'powell': Powell
- 'cg': Conjugate-Gradient
- 'newton': Newton-CG
- 'cobyla': Cobyla
- 'bfgs': BFGS
- 'tnc': Truncated Newton
- 'trust-ncg': Newton-CG trust-region
- 'trust-exact': nearly exact trust-region
- 'trust-krylov': Newton GLTR trust-region
- 'trust-constr': trust-region for constrained optimization
- 'dogleg': Dog-leg trust-region
- 'slsqp': Sequential Linear Squares Programming
- 'emcee': Maximum likelihood via Monte-Carlo Markov Chain
- 'shgo': Simplicial Homology Global Optimization
- 'dual_annealing': Dual Annealing optimization
In most cases, these methods wrap and use the method of the same name from scipy.optimize, or use scipy.optimize.minimize with the same method argument. Thus 'leastsq' will use scipy.optimize.leastsq, while 'powell' will use scipy.optimize.minimize(..., method='powell').
For more details on the fitting methods please refer to the SciPy docs.
- args (tuple, optional) – Positional arguments to pass to fcn.
- kws (dict, optional) – Keyword arguments to pass to fcn.
- iter_cb (callable, optional) – Function to be called at each fit iteration. This function should have the signature:
iter_cb(params, iter, resid, *args, **kws),
where params will have the current parameter values, iter the iteration number, resid the current residual array, and *args and **kws as passed to the objective function.
- scale_covar (bool, optional) – Whether to automatically scale the covariance matrix (default is True).
- nan_policy ({'raise', 'propagate', 'omit'}, optional) – Specifies action if fcn (or a Jacobian) returns NaN values. One of:
- 'raise': a ValueError is raised
- 'propagate': the values returned from userfcn are un-altered
- 'omit': non-finite values are filtered
- reduce_fcn (str or callable, optional) – Function to convert a residual array to a scalar value for the scalar minimizers. See Notes in Minimizer.
- calc_covar (bool, optional) – Whether to calculate the covariance matrix (default is True) for solvers other than 'leastsq' and 'least_squares'. Requires the numdifftools package to be installed.
- max_nfev (int or None, optional) – Maximum number of function evaluations (default is None). The default value depends on the fitting method.
- **fit_kws (dict, optional) – Options to pass to the minimizer being used.
Object containing the optimized parameters and several goodness-of-fit statistics.
Return type: MinimizerResult
The objective function should return the value to be minimized. For the Levenberg-Marquardt algorithm from leastsq(), this returned value must be an array, with a length greater than or equal to
the number of fitting variables in the model. For the other methods, the return value can either be a scalar or an array. If an array is returned, the sum-of-squares of the array will be sent to
the underlying fitting method, effectively doing a least-squares optimization of the return values.
A common use for args and kws would be to pass in other data needed to calculate the residual, including such things as the data array, dependent variable, uncertainties in the data, and other
data structures for the model calculation.
On output, params will be unchanged. The best-fit values and, where appropriate, estimated uncertainties and correlations, will all be contained in the returned MinimizerResult. See
MinimizerResult – the optimization result for further details.
This function is simply a wrapper around Minimizer and is equivalent to:
fitter = Minimizer(fcn, params, fcn_args=args, fcn_kws=kws,
iter_cb=iter_cb, scale_covar=scale_covar,
nan_policy=nan_policy, reduce_fcn=reduce_fcn,
calc_covar=calc_covar, **fit_kws)
fitter.minimize(method=method)
Writing a Fitting Function
An important component of a fit is writing a function to be minimized โ the objective function. Since this function will be called by other routines, there are fairly stringent requirements for its
call signature and return value. In principle, your function can be any Python callable, but it must look like this:
func(params, *args, **kws):
Calculate objective residual to be minimized from parameters.
- params (Parameters) – Parameters.
- args – Positional arguments. Must match args argument to minimize().
- kws – Keyword arguments. Must match kws argument to minimize().
Residual array (generally data-model) to be minimized in the least-squares sense.
Return type:
numpy.ndarray. The length of this array cannot change between calls.
A common use for the positional and keyword arguments would be to pass in other data needed to calculate the residual, including things as the data array, dependent variable, uncertainties in the
data, and other data structures for the model calculation.
The objective function should return the value to be minimized. For the Levenberg-Marquardt algorithm from leastsq(), this returned value must be an array, with a length greater than or equal to the
number of fitting variables in the model. For the other methods, the return value can either be a scalar or an array. If an array is returned, the sum of squares of the array will be sent to the
underlying fitting method, effectively doing a least-squares optimization of the return values.
Since the function will be passed in a dictionary of Parameters, it is advisable to unpack these to get numerical values at the top of the function. A simple way to do this is with
Parameters.valuesdict(), as shown below:
from numpy import exp, sign, sin, pi
def residual(pars, x, data=None, eps=None):
# unpack parameters: extract .value attribute for each parameter
parvals = pars.valuesdict()
period = parvals['period']
shift = parvals['shift']
decay = parvals['decay']
if abs(shift) > pi/2:
shift = shift - sign(shift)*pi
if abs(period) < 1.e-10:
period = sign(period)*1.e-10
model = parvals['amp'] * sin(shift + x/period) * exp(-x*x*decay*decay)
if data is None:
return model
if eps is None:
return model - data
return (model-data) / eps
In this example, x is a positional (required) argument, while the data array is actually optional (so that the function returns the model calculation if the data is neglected). Also note that the
model calculation will divide x by the value of the period Parameter. It might be wise to ensure this parameter cannot be 0. It would be possible to use bounds on the Parameter to do this:
params['period'] = Parameter(name='period', value=2, min=1.e-10)
but putting this directly in the function with:
if abs(period) < 1.e-10:
period = sign(period)*1.e-10
is also a reasonable approach. Similarly, one could place bounds on the decay parameter to take values only between -pi/2 and pi/2.
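For example, such bounds could also be set when the Parameters are created, as in this sketch (the numeric values here are only illustrative):

from numpy import pi
from lmfit import create_params

params = create_params(amp=10,
                       shift=0.0,
                       period=dict(value=5.0, min=1.e-10),
                       decay=dict(value=0.02, min=-pi/2, max=pi/2))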
Types of Data to Use for Fitting
Minimization methods assume that data is numerical. For all the fitting methods supported by lmfit, data and fitting parameters are also assumed to be continuous variables. As the routines make heavy
use of numpy and scipy, the most natural data to use in fitting is then numpy nd-arrays. In fact, many of the underlying fitting algorithms - including the default leastsq() method - require the
values in the residual array used for the minimization to be a 1-dimensional numpy array with data type (dtype) of 'float64': a 64-bit representation of a floating point number (sometimes called a "double precision float").
Python is generally forgiving about data types, and in the scientific Python community there is a concept of an object being "array like", which essentially means that it can usually be coerced or interpreted as a numpy array, often with that object having an __array__() method specially designed for that conversion. Important examples of objects that can be considered "array like" include Lists and Tuples that contain only numbers, pandas Series, and HDF5 Datasets. Many objects from data-processing libraries like dask, xarray, zarr, and more are also "array like".
Lmfit tries to be accommodating in the data that can be used in the fitting process. When using Minimizer, the data you pass in as extra arrays for the calculation of the residual array will not be altered, and can be used in your objective function in whatever form you send. Usually, "array like" data will work, but some care may be needed. In the example above, if x was not a numpy array but a list of numbers, this would give an error message like:
TypeError: unsupported operand type(s) for /: 'list' and 'float'
TypeError: can't multiply sequence by non-int of type 'float'
because a list of numbers is only sometimes "array like".
Sending in a "more array-like" object like a pandas Series will avoid many (though maybe not all!) such exceptions, but the resulting calculation returned from the function would then also be a pandas Series. Lmfit minimize() will always coerce the return value from the objective function into a 1-D numpy array with dtype of 'float64'. This will usually "just work", but there may be exceptions.
When in doubt, or if running into trouble, converting data to float64 numpy arrays before they are used in a fit is recommended. If using complex data or functions, a dtype of 'complex128' will also always work, and will be converted to 'float64' with ndarray.view("float64"). Numpy arrays of other dtype (say, 'int16' or 'float32') should be used with caution. In particular, 'float32' data should be avoided: multiplying a 'float32' array and a Python float will result in a 'float32' array, for example. As fitting variables may have small changes made to them, the results may be at or below 'float32' precision, which will cause the fit to give up. For integer data, results are usually promoted to 'float64', but many numpy ufuncs (say, numpy.exp()) will promote only to 'float32', so care is still needed.
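A defensive conversion along these lines (a minimal sketch) sidesteps most dtype surprises:

import numpy as np

x_raw = [1, 2, 3, 4]                    # a list is only sometimes 'array like'
x = np.asarray(x_raw, dtype='float64')  # now an ndarray with dtype float64
print(x.dtype)                          # float64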
See also Data Types for data and independent data with Model for discussion of data passed in for curve-fitting.
Choosing Different Fitting Methods
By default, the Levenberg-Marquardt algorithm is used for fitting. While often criticized, including for the fact that it finds only a local minimum, this approach has some distinct advantages. These include being fast, and well-behaved for most curve-fitting needs, and making it easy to estimate uncertainties for, and correlations between, pairs of fit variables, as discussed in MinimizerResult – the optimization result.
Alternative algorithms can also be used by providing the method keyword to the minimize() function or Minimizer.minimize() class as listed in the Table of Supported Fitting Methods. If you have the
numdifftools package installed, lmfit will try to estimate the covariance matrix and determine parameter uncertainties and correlations if calc_covar is True (default).
Table of Supported Fitting Methods:
Fitting Method method arg to minimize() or Minimizer.minimize()
Levenberg-Marquardt leastsq or least_squares
Nelder-Mead nelder
L-BFGS-B lbfgsb
Powell powell
Conjugate Gradient cg
Newton-CG newton
COBYLA cobyla
BFGS bfgs
Truncated Newton tnc
Newton CG trust-region trust-ncg
Exact trust-region trust-exact
Newton GLTR trust-region trust-krylov
Constrained trust-region trust-constr
Dogleg dogleg
Sequential Linear Squares Programming slsqp
Differential Evolution differential_evolution
Brute force method brute
Basinhopping basinhopping
Adaptive Memory Programming for Global Optimization ampgo
Simplicial Homology Global Optimization shgo
Dual Annealing dual_annealing
Maximum likelihood via Monte-Carlo Markov Chain emcee
The objective function for the Levenberg-Marquardt method must return an array, with more elements than variables. All other methods can return either a scalar value or an array. The Monte-Carlo
Markov Chain or emcee method has two different operating methods when the objective function returns a scalar value. See the documentation for emcee.
Much of this documentation assumes that the Levenberg-Marquardt (leastsq) method is used. Many of the fit statistics and estimates for uncertainties in parameters discussed in MinimizerResult – the optimization result are computed automatically only for this (and the least_squares) method. Lmfit versions newer than 0.9.11 provide the capability to use numdifftools to estimate the covariance matrix and calculate parameter uncertainties and correlations for other methods as well.
An optimization with minimize() or Minimizer.minimize() will return a MinimizerResult object. This is an otherwise plain container object (that is, with no methods of its own) that simply holds the
results of the minimization. These results will include several pieces of informational data such as status and error messages, fit statistics, and the updated parameters themselves.
Importantly, the parameters passed in to Minimizer.minimize() will not be changed. To find the best-fit values, uncertainties and so on for each parameter, one must use the MinimizerResult.params attribute. For example, to print the fitted values, bounds and other parameter attributes in a well-formatted text table you can execute:
results.params.pretty_print()
with results being a MinimizerResult object. Note that the method pretty_print() accepts several arguments for customizing the output (e.g., column width, numeric format, etcetera).
class MinimizerResult(**kws)
The results of a minimization.
Minimization results include data such as status and error messages, fit statistics, and the updated (i.e., best-fit) parameters themselves in the params attribute.
The list of (possible) MinimizerResult attributes is given below:
Goodness-of-Fit Statistics
Attribute Name Description / Formula
nfev number of function evaluations
nvarys number of variables in fit \(N_{\rm varys}\)
ndata number of data points: \(N\)
nfree degrees of freedom in fit: \(N - N_{\rm varys}\)
aborted boolean of whether the fit has been aborted.
success boolean for a minimal test of whether the fit finished successfully
errorbars boolean of whether error bars and uncertainties were estimated
ier integer flag describing message from leastsq.
message simple message from leastsq
method name of fitting methods
residual residual array, returned by the objective function: \(\{\rm Resid_i\}\)
chisqr chi-square: \(\chi^2 = \sum_i^N [{\rm Resid}_i]^2\)
redchi reduced chi-square: \(\chi^2_{\nu}= {\chi^2} / {(N - N_{\rm varys})}\)
aic Akaike Information Criterion statistic (see below)
bic Bayesian Information Criterion statistic (see below)
params best-fit parameters after fit, with uncertainties if available
var_names ordered list of variable parameter names used for init_vals and covar
covar covariance matrix (with rows/columns using var_names)
init_vals list of initial values for variable parameters
init_values dictionary of initial values for variable Parameters.
uvars dictionary of uncertainties ufloats for all Parameters.
call_kws dict of keyword arguments sent to underlying solver
Note that the calculation of chi-square and reduced chi-square assume that the returned residual function is scaled properly to the uncertainties in the data. For these statistics to be meaningful,
the person writing the function to be minimized must scale them properly.
After a fit using the leastsq() or least_squares() method has completed successfully, standard errors for the fitted variables and correlations between pairs of fitted variables are automatically
calculated from the covariance matrix. For other methods, the calc_covar parameter (default is True) in the Minimizer class determines whether or not to use the numdifftools package to estimate the
covariance matrix. The standard error (estimated \(1\sigma\) error-bar) goes into the stderr attribute of the Parameter. The correlations with all other variables will be put into the correl
attribute of the Parameter โ a dictionary with keys for all other Parameters and values of the corresponding correlation.
In some cases, it may not be possible to estimate the errors and correlations. For example, if a variable actually has no practical effect on the fit, it will likely cause the covariance matrix to be
singular, making standard errors impossible to estimate. Placing bounds on varied Parameters makes it more likely that errors cannot be estimated, as being near the maximum or minimum value makes the
covariance matrix singular. In these cases, the errorbars attribute of the fit result (Minimizer object) will be False.
Akaike and Bayesian Information Criteria
The MinimizerResult includes the traditional chi-square and reduced chi-square statistics:
\begin{eqnarray*} \chi^2 &=& \sum_i^N r_i^2 \\ \chi^2_\nu &=& \chi^2 / (N-N_{\rm varys}) \end{eqnarray*}
where \(r\) is the residual array returned by the objective function (likely to be (data-model)/uncertainty for data modeling usages), \(N\) is the number of data points (ndata), and \(N_{\rm varys}\) is the number of variable parameters.
Also included are the Akaike Information Criterion, and Bayesian Information Criterion statistics, held in the aic and bic attributes, respectively. These give slightly different measures of the
relative quality for a fit, trying to balance quality of fit with the number of variable parameters used in the fit. These are calculated as:
\begin{eqnarray*} {\rm aic} &=& N \ln(\chi^2/N) + 2 N_{\rm varys} \\ {\rm bic} &=& N \ln(\chi^2/N) + \ln(N) N_{\rm varys} \\ \end{eqnarray*}
When comparing fits with different numbers of varying parameters, one typically selects the model with lowest reduced chi-square, Akaike information criterion, and/or Bayesian information criterion.
Generally, the Bayesian information criterion is considered the most conservative of these statistics.
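In code, this comparison can be as simple as the sketch below (assuming a sequence of MinimizerResult objects from fits to the same data):

def preferred_fit(results):
    # lower is better for both criteria; bic penalizes extra variables more strongly
    return min(results, key=lambda res: res.bic)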
Uncertainties in Variable Parameters, and their Correlations
As mentioned above, when a fit is complete the uncertainties for fitted Parameters as well as the correlations between pairs of Parameters are usually calculated. This happens automatically either
when using the default leastsq() method, the least_squares() method, or for most other fitting methods if the highly-recommended numdifftools package is available. The estimated standard error (the \
(1\sigma\) uncertainty) for each variable Parameter will be contained in the stderr, while the correl attribute for each Parameter will contain a dictionary of the correlation with each other
variable Parameter. These updated parameters with uncertainty and correlation information will be placed in MinimizerResult.params, so that you may access the best fit value, standard error and
correlation. For a successful fit for which uncertainties and correlations can be calculated, the MinimizerResult will also have a uvars attribute: a dictionary with keys for each Parameter (including constraints) and values of ufloats from the uncertainties package, built using the best-fit values, the standard errors, and the correlations between Parameters.
These estimates of the uncertainties are done by inverting the Hessian matrix, which represents the second derivative of fit quality with respect to each variable parameter. There are situations for which the uncertainties cannot be estimated, which generally indicates that this matrix cannot be inverted because the fit is not actually sensitive to one of the variables. This can happen if a Parameter is stuck at an upper or lower bound, if the variable is simply not used by the fit, or if the value for the variable is such that it has no real influence on the fit.
In principle, the scale of the uncertainties in the Parameters is closely tied to the goodness-of-fit statistics chi-square and reduced chi-square (chisqr and redchi). The standard errors or \(1 \
sigma\) uncertainties are those that increase chi-square by 1. Since a "good fit" should have redchi of around 1, this requires that the data uncertainties (and to some extent the sampling of the N data points) are correct. Unfortunately, it is often not the case that one has high-quality estimates of the data uncertainties (getting the data is hard enough!). Because of this common situation,
the uncertainties reported and held in stderr are not those that increase chi-square by 1, but those that increase chi-square by reduced chi-square. This is equivalent to rescaling the uncertainty in
the data such that reduced chi-square would be 1. To be clear, this rescaling is done by default because if reduced chi-square is far from 1, this rescaling often makes the reported uncertainties
sensible, and if reduced chi-square is near 1 it does little harm. If you have good scaling of the data uncertainty and believe the scale of the residual array is correct, this automatic rescaling
can be turned off using scale_covar=False.
Note that the simple (and fast!) approach to estimating uncertainties and correlations by inverting the second derivative matrix assumes that the components of the residual array (if, indeed, an array is used) are distributed around 0 with a normal (Gaussian) distribution, and that a map of probability distributions for pairs would be elliptical – the size of the ellipse gives the uncertainty itself and the eccentricity of the ellipse gives the correlation. This simple approach to assessing uncertainties ignores outliers, highly asymmetric uncertainties, or complex
correlations between Parameters. In fact, it is not too hard to come up with problems where such effects are important. Our experience is that the automated results are usually the right scale and
quite reasonable as initial estimates, but a more thorough exploration of the Parameter space using the tools described in Minimizer.emcee() - calculating the posterior probability distribution of
parameters and An advanced example for evaluating confidence intervals can give a more complete understanding of the distributions and relations between Parameters.
Getting and Printing Fit Reports
fit_report(inpars, modelpars=None, show_correl=True, min_correl=0.1, sort_pars=False, correl_mode='list')
Generate a report of the fitting results.
The report contains the best-fit values for the parameters and their uncertainties and correlations.
- inpars (Parameters) – Input Parameters from fit or MinimizerResult returned from a fit.
- modelpars (Parameters, optional) – Known Model Parameters.
- show_correl (bool, optional) – Whether to show list of sorted correlations (default is True).
- min_correl (float, optional) – Smallest correlation in absolute value to show (default is 0.1).
- sort_pars (bool or callable, optional) – Whether to show parameter names sorted in alphanumerical order. If False (default), then the parameters will be listed in the order they were added to the Parameters dictionary. If callable, then this (one argument) function is used to extract a comparison key from each list element.
- correl_mode ({'list', 'table'} str, optional) – Mode for how to show correlations. Can be either 'list' (default) to show a sorted (if sort_pars is True) list of correlation values, or 'table' to show a complete, formatted table of correlations.
Multi-line text of fit report.
Return type: str
An example using this to write out a fit report would be:
# <examples/doc_fitting_withreport.py>
from numpy import exp, linspace, pi, random, sign, sin
from lmfit import create_params, fit_report, minimize
p_true = create_params(amp=14.0, period=5.46, shift=0.123, decay=0.032)
def residual(pars, x, data=None):
"""Model a decaying sine wave and subtract data."""
vals = pars.valuesdict()
amp = vals['amp']
per = vals['period']
shift = vals['shift']
decay = vals['decay']
if abs(shift) > pi/2:
shift = shift - sign(shift)*pi
model = amp * sin(shift + x/per) * exp(-x*x*decay*decay)
if data is None:
return model
return model - data
x = linspace(0.0, 250., 1001)
noise = random.normal(scale=0.7215, size=x.size)
data = residual(p_true, x) + noise
fit_params = create_params(amp=13, period=2, shift=0, decay=0.02)
out = minimize(residual, fit_params, args=(x,), kws={'data': data})
print(fit_report(out))
# <end examples/doc_fitting_withreport.py>
which would give as output:
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 83
# data points = 1001
# variables = 4
chi-square = 498.811759
reduced chi-square = 0.50031270
Akaike info crit = -689.222517
Bayesian info crit = -669.587497
[[Variables]]
amp: 13.9121959 +/- 0.14120321 (1.01%) (init = 13)
period: 5.48507038 +/- 0.02666520 (0.49%) (init = 2)
shift: 0.16203673 +/- 0.01405662 (8.67%) (init = 0)
decay: 0.03264539 +/- 3.8015e-04 (1.16%) (init = 0.02)
[[Correlations]] (unreported correlations are < 0.100)
C(period, shift) = +0.7974
C(amp, decay) = +0.5816
C(amp, shift) = -0.2966
C(amp, period) = -0.2432
C(shift, decay) = -0.1819
C(period, decay) = -0.1496
To be clear, you can get at all of these values from the fit result out and out.params. For example, a crude printout of the best fit variables and standard errors could be done as
print('Parameter Value Stderr')
for name, param in out.params.items():
print(f'{name:7s} {param.value:11.5f} {param.stderr:11.5f}')
Parameter Value Stderr
amp 13.91220 0.14120
period 5.48507 0.02667
shift 0.16204 0.01406
decay 0.03265 0.00038
Using an Iteration Callback Function
An iteration callback function is a function to be called at each iteration, just after the objective function is called. The iteration callback allows user-supplied code to be run at each iteration,
and can be used to abort a fit.
iter_cb(params, iter, resid, *args, **kws):
User-supplied function to be run at each iteration.
Iteration abort flag.
Return type:
None for normal behavior, any value like True to abort the fit.
Normally, the iteration callback would have no return value or return None. To abort a fit, have this function return a value that is True (including any non-zero integer). The fit will also abort if
any exception is raised in the iteration callback. When a fit is aborted this way, the parameters will have the values from the last iteration. The fit statistics are not likely to be meaningful, and
uncertainties will not be computed.
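For example, a callback that aborts a long-running fit after a fixed number of iterations could look like this sketch (the limit of 500 is arbitrary):

def stop_after_500(params, iter, resid, *args, **kws):
    # returning True (or any non-zero value) aborts the fit; None continues it
    if iter > 500:
        return True

It would then be passed as iter_cb=stop_after_500 to minimize() or to the Minimizer class described below.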
For full control of the fitting process, you will want to create a Minimizer object.
class Minimizer(userfcn, params, fcn_args=None, fcn_kws=None, iter_cb=None, scale_covar=True, nan_policy='raise', reduce_fcn=None, calc_covar=True, max_nfev=None, **kws)
A general minimizer for curve fitting and optimization.
- userfcn (callable) – Objective function that returns the residual (difference between model and data) to be minimized in a least-squares sense. This function must have the signature:
userfcn(params, *fcn_args, **fcn_kws)
- params (Parameters) – Contains the Parameters for the model.
- fcn_args (tuple, optional) – Positional arguments to pass to userfcn.
- fcn_kws (dict, optional) – Keyword arguments to pass to userfcn.
- iter_cb (callable, optional) – Function to be called at each fit iteration. This function should have the signature:
iter_cb(params, iter, resid, *fcn_args, **fcn_kws)
where params will have the current parameter values, iter the iteration number, resid the current residual array, and *fcn_args and **fcn_kws are passed to the objective function.
- scale_covar (bool, optional) – Whether to automatically scale the covariance matrix (default is True).
- nan_policy ({'raise', 'propagate', 'omit'}, optional) – Specifies action if userfcn (or a Jacobian) returns NaN values. One of:
- 'raise': a ValueError is raised (default)
- 'propagate': the values returned from userfcn are un-altered
- 'omit': non-finite values are filtered
- reduce_fcn (str or callable, optional) – Function to convert a residual array to a scalar value for the scalar minimizers. Optional values are (where r is the residual array):
- None: sum-of-squares of residual (default)
- 'negentropy': neg entropy, using normal distribution: rho*log(rho).sum(), where rho = exp(-r*r/2)/(sqrt(2*pi))
- 'neglogcauchy': neg log likelihood, using Cauchy distribution: -log(1/(pi*(1+r*r))).sum()
- callable: must take one argument (r) and return a float.
- calc_covar (bool, optional) – Whether to calculate the covariance matrix (default is True) for solvers other than 'leastsq' and 'least_squares'. Requires the numdifftools package to be installed.
- max_nfev (int or None, optional) – Maximum number of function evaluations (default is None). The default value depends on the fitting method.
- **kws (dict, optional) – Options to pass to the minimizer being used.
The objective function should return the value to be minimized. For the Levenberg-Marquardt algorithm from leastsq() or least_squares(), this returned value must be an array, with a length
greater than or equal to the number of fitting variables in the model. For the other methods, the return value can either be a scalar or an array. If an array is returned, the sum-of-squares of
the array will be sent to the underlying fitting method, effectively doing a least-squares optimization of the return values. If the objective function returns non-finite values then a ValueError
will be raised because the underlying solvers cannot deal with them.
A common use for the fcn_args and fcn_kws would be to pass in other data needed to calculate the residual, including such things as the data array, dependent variable, uncertainties in the data,
and other data structures for the model calculation.
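For example, a single Minimizer instance can be reused to compare fitting methods (a sketch reusing the residual function, fit_params, x, and data from the earlier example):

from lmfit import Minimizer

mini = Minimizer(residual, fit_params, fcn_args=(x,), fcn_kws={'data': data})
out_leastsq = mini.minimize(method='leastsq')  # default Levenberg-Marquardt
out_nelder = mini.minimize(method='nelder')    # Nelder-Mead on the same problem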
The Minimizer object has a few public methods:
Minimizer.minimize(method='leastsq', params=None, **kws)
Perform the minimization.
- method (str, optional) – Name of the fitting method to use. Valid values are:
- 'leastsq': Levenberg-Marquardt (default)
- 'least_squares': Least-Squares minimization, using Trust Region Reflective method
- 'differential_evolution': differential evolution
- 'brute': brute force method
- 'basinhopping': basinhopping
- 'ampgo': Adaptive Memory Programming for Global Optimization
- 'nelder': Nelder-Mead
- 'lbfgsb': L-BFGS-B
- 'powell': Powell
- 'cg': Conjugate-Gradient
- 'newton': Newton-CG
- 'cobyla': Cobyla
- 'bfgs': BFGS
- 'tnc': Truncated Newton
- 'trust-ncg': Newton-CG trust-region
- 'trust-exact': nearly exact trust-region
- 'trust-krylov': Newton GLTR trust-region
- 'trust-constr': trust-region for constrained optimization
- 'dogleg': Dog-leg trust-region
- 'slsqp': Sequential Linear Squares Programming
- 'emcee': Maximum likelihood via Monte-Carlo Markov Chain
- 'shgo': Simplicial Homology Global Optimization
- 'dual_annealing': Dual Annealing optimization
In most cases, these methods wrap and use the method with the same name from scipy.optimize, or use scipy.optimize.minimize with the same method argument. Thus 'leastsq' will use scipy.optimize.leastsq, while 'powell' will use scipy.optimize.minimize(..., method='powell').
For more details on the fitting methods please refer to the SciPy documentation.
- params (Parameters, optional) – Parameters of the model to use as starting values.
- **kws (optional) – Additional arguments are passed to the underlying minimization method.
Object containing the optimized parameters and several goodness-of-fit statistics.
Return type: MinimizerResult
Minimizer.leastsq(params=None, max_nfev=None, **kws)
Use Levenberg-Marquardt minimization to perform a fit.
It assumes that the input Parameters have been initialized, and a function to minimize has been properly set up. When possible, this calculates the estimated uncertainties and variable
correlations from the covariance matrix.
This method calls scipy.optimize.leastsq and, by default, numerical derivatives are used.
- params (Parameters, optional) – Parameters to use as starting point.
- max_nfev (int or None, optional) – Maximum number of function evaluations. Defaults to 2000*(nvars+1), where nvars is the number of variable parameters.
- **kws (dict, optional) – Minimizer options to pass to scipy.optimize.leastsq.
Object containing the optimized parameters and several goodness-of-fit statistics.
Return type: MinimizerResult
Minimizer.least_squares(params=None, max_nfev=None, **kws)
Least-squares minimization using scipy.optimize.least_squares.
This method wraps scipy.optimize.least_squares, which has built-in support for bounds and robust loss functions. By default it uses the Trust Region Reflective algorithm with a linear loss
function (i.e., the standard least-squares problem).
- params (Parameters, optional) – Parameters to use as starting point.
- max_nfev (int or None, optional) – Maximum number of function evaluations. Defaults to 2000*(nvars+1), where nvars is the number of variable parameters.
- **kws (dict, optional) – Minimizer options to pass to scipy.optimize.least_squares.
Object containing the optimized parameters and several goodness-of-fit statistics.
Return type: MinimizerResult
Minimizer.scalar_minimize(method='Nelder-Mead', params=None, max_nfev=None, **kws)
Scalar minimization using scipy.optimize.minimize.
Perform fit with any of the scalar minimization algorithms supported by scipy.optimize.minimize. Default argument values are:
scalar_minimize() arg Default Value Description
method โNelder-Meadโ fitting method
tol 1.e-7 fitting and parameter tolerance
hess None Hessian of objective function
- method (str, optional) – Name of the fitting method to use. One of:
- 'Nelder-Mead' (default)
- 'L-BFGS-B'
- 'Powell'
- 'CG'
- 'Newton-CG'
- 'COBYLA'
- 'BFGS'
- 'TNC'
- 'trust-ncg'
- 'trust-exact'
- 'trust-krylov'
- 'trust-constr'
- 'dogleg'
- 'SLSQP'
- 'differential_evolution'
- params (Parameters, optional) – Parameters to use as starting point.
- max_nfev (int or None, optional) – Maximum number of function evaluations. Defaults to 2000*(nvars+1), where nvars is the number of variable parameters.
- **kws (dict, optional) – Minimizer options to pass to scipy.optimize.minimize.
Object containing the optimized parameters and several goodness-of-fit statistics.
Return type: MinimizerResult
If the objective function returns a NumPy array instead of the expected scalar, the sum-of-squares of the array will be used.
Note that bounds and constraints can be set on Parameters for any of these methods, so are not supported separately for those designed to use bounds. However, if you use the
differential_evolution method you must specify finite (min, max) for each varying Parameter.
Minimizer.prepare_fit(params=None)
Prepare parameters for fitting.
Prepares and initializes model and Parameters for subsequent fitting. This routine prepares the conversion of Parameters into fit variables, organizes parameter bounds, and parses, "compiles" and checks constraint expressions. The method also creates and returns a new instance of a MinimizerResult object that contains the copy of the Parameters that will actually be varied in the fit.
params (Parameters, optional) โ Contains the Parameters for the model; if None, then the Parameters used to initialize the Minimizer object are used.
Return type: MinimizerResult
This method is called directly by the fitting methods, and it is generally not necessary to call this function explicitly.
Minimizer.brute(params=None, Ns=20, keep=50, workers=1, max_nfev=None)
Use the brute method to find the global minimum of a function.
The following parameters are passed to scipy.optimize.brute and cannot be changed:
brute() arg Value Description
full_output 1 Return the evaluation grid and the objective functionโs values on it.
finish None No โpolishingโ function is to be used after the grid search.
disp False Do not print convergence messages (when finish is not None).
It assumes that the input Parameters have been initialized, and a function to minimize has been properly set up.
Object containing the parameters from the brute force method. The return values (x0, fval, grid, Jout) from scipy.optimize.brute are stored as brute_<parname> attributes. The MinimizerResult also contains the candidates attribute and the show_candidates() method. The candidates attribute contains the parameters and chisqr from the brute force method as a namedtuple, ('Candidate', ['params', 'score']), sorted on the (lowest) chisqr value. To access the values for a particular candidate one can use result.candidates[#].params or result.candidates[#].score, where a lower # represents a better candidate. The show_candidates() method uses the pretty_print() method to show a specific candidate-# or all candidates when no number is specified.
Return type: MinimizerResult
The brute() method evaluates the function at each point of a multidimensional grid of points. The grid points are generated from the parameter ranges using Ns and (optional) brute_step. The
implementation in scipy.optimize.brute requires finite bounds and the range is specified as a two-tuple (min, max) or slice-object (min, max, brute_step). A slice-object is used directly, whereas
a two-tuple is converted to a slice object that interpolates Ns points from min to max, inclusive.
In addition, the brute() method in lmfit handles three other scenarios, given below with their respective slice-objects:
lower bound (min) and brute_step are specified:
range = (min, min + Ns * brute_step, brute_step).
upper bound (max) and brute_step are specified:
range = (max - Ns * brute_step, max, brute_step).
numerical value (value) and brute_step are specified:
range = (value - (Ns//2) * brute_step, value + (Ns//2) * brute_step, brute_step).
For more information, check the examples in examples/lmfit_brute_example.ipynb.
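A small self-contained sketch of setting up such ranges (the toy objective and values below are mine, not from the lmfit examples):

from lmfit import Minimizer, Parameters

params = Parameters()
params.add('x', value=1.0, min=-4.0, max=4.0, brute_step=0.25)  # slice(-4, 4, 0.25)
params.add('y', value=1.0, min=-2.0, max=2.0)                   # Ns points from -2 to 2

def chi2(p):
    # toy scalar objective with its minimum at (2, 1)
    return (p['x'] - 2.0)**2 + (p['y'] - 1.0)**2

fitter = Minimizer(chi2, params)
result = fitter.brute(Ns=20, keep=10)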
Minimizer.basinhopping(params=None, max_nfev=None, **kws)
Use the basinhopping algorithm to find the global minimum.
This method calls scipy.optimize.basinhopping using the default arguments. The default minimizer is BFGS, but since lmfit supports parameter bounds for all minimizers, the user can choose any of
the solvers present in scipy.optimize.minimize.
- params (Parameters, optional) – Contains the Parameters for the model. If None, then the Parameters used to initialize the Minimizer object are used.
- max_nfev (int or None, optional) – Maximum number of function evaluations (default is None). Defaults to 200000*(nvarys+1).
- **kws (dict, optional) – Minimizer options to pass to scipy.optimize.basinhopping.
Object containing the optimization results from the basinhopping algorithm.
Return type: MinimizerResult
Minimizer.ampgo(params=None, max_nfev=None, **kws)
Find the global minimum of a multivariate function using AMPGO.
AMPGO stands for "Adaptive Memory Programming for Global Optimization" and is an efficient algorithm to find the global minimum.
- params (Parameters, optional) – Contains the Parameters for the model. If None, then the Parameters used to initialize the Minimizer object are used.
- max_nfev (int, optional) – Maximum number of total function evaluations. If None (default), the optimization will stop after totaliter number of iterations (see below).
- **kws (dict, optional) –
Minimizer options to pass to the ampgo algorithm; the options are listed below:
local: str, optional
Name of the local minimization method. Valid options
- `'L-BFGS-B'` (default)
- `'Nelder-Mead'`
- `'Powell'`
- `'TNC'`
- `'SLSQP'`
local_opts: dict, optional
Options to pass to the local minimizer (default is None).
maxfunevals: int, optional
Maximum number of function evaluations. If None
(default), the optimization will stop after
`totaliter` number of iterations (deprecated: use
`max_nfev` instead).
totaliter: int, optional
Maximum number of global iterations (default is 20).
maxiter: int, optional
Maximum number of `Tabu Tunneling` iterations during
each global iteration (default is 5).
glbtol: float, optional
Tolerance whether or not to accept a solution after a
tunneling phase (default is 1e-5).
eps1: float, optional
Constant used to define an aspiration value for the
objective function during the Tunneling phase (default
is 0.02).
eps2: float, optional
Perturbation factor used to move away from the latest
local minimum at the start of a Tunneling phase
(default is 0.1).
tabulistsize: int, optional
Size of the (circular) tabu search list (default is 5).
tabustrategy: {'farthest', 'oldest'}, optional
Strategy to use when the size of the tabu list exceeds
`tabulistsize`. It can be `'oldest'` to drop the oldest
point from the tabu list or `'farthest'` (default) to
drop the element farthest from the last local minimum found.
disp: bool, optional
Set to True to print convergence messages (default is
False).
Object containing the parameters from the ampgo method, with fit parameters, statistics and such. The return values (x0, fval, eval, msg, tunnel) are stored as ampgo_<parname> attributes.
Return type: MinimizerResult
The Python implementation was written by Andrea Gavana in 2014 (http://infinity77.net/global_optimization/index.html).
The details of the AMPGO algorithm are described in the paper โAdaptive Memory Programming for Constrained Global Optimizationโ located here:
Minimizer.shgo(params=None, max_nfev=None, **kws)
Use the SHGO algorithm to find the global minimum.
SHGO stands for "simplicial homology global optimization" and calls scipy.optimize.shgo using its default arguments.
- params (Parameters, optional) – Contains the Parameters for the model. If None, then the Parameters used to initialize the Minimizer object are used.
- max_nfev (int or None, optional) – Maximum number of function evaluations. Defaults to 200000*(nvars+1), where nvars is the number of variable parameters.
- **kws (dict, optional) – Minimizer options to pass to the SHGO algorithm.
Object containing the parameters from the SHGO method. The return values specific to scipy.optimize.shgo (x, xl, fun, funl, nfev, nit, nlfev, nlhev, and nljev) are stored as shgo_<parname> attributes.
Return type: MinimizerResult
Minimizer.dual_annealing(params=None, max_nfev=None, **kws)
Use the dual_annealing algorithm to find the global minimum.
This method calls scipy.optimize.dual_annealing using its default arguments.
- params (Parameters, optional) – Contains the Parameters for the model. If None, then the Parameters used to initialize the Minimizer object are used.
- max_nfev (int or None, optional) – Maximum number of function evaluations. Defaults to 200000*(nvars+1), where nvars is the number of variables.
- **kws (dict, optional) – Minimizer options to pass to the dual_annealing algorithm.
Object containing the parameters from the dual_annealing method. The return values specific to scipy.optimize.dual_annealing (x, fun, nfev, nhev, njev, and nit) are stored as da_<parname> attributes.
Return type: MinimizerResult
Minimizer.emcee(params=None, steps=1000, nwalkers=100, burn=0, thin=1, ntemps=1, pos=None, reuse_sampler=False, workers=1, float_behavior='posterior', is_weighted=True, seed=None, progress=True, run_mcmc_kwargs={})
Bayesian sampling of the posterior distribution.
The method uses the emcee Markov Chain Monte Carlo package and assumes that the prior is Uniform. You need to have emcee version 3 or newer installed to use this method.
- params (Parameters, optional) – Parameters to use as starting point. If this is not specified then the Parameters used to initialize the Minimizer object are used.
- steps (int, optional) – How many samples you would like to draw from the posterior distribution for each of the walkers?
- nwalkers (int, optional) – Should be set so \(nwalkers >> nvarys\), where nvarys is the number of parameters being varied during the fit. "Walkers are the members of the ensemble. They are almost like separate Metropolis-Hastings chains but, of course, the proposal distribution for a given walker depends on the positions of all the other walkers in the ensemble." – from the emcee webpage.
- burn (int, optional) – Discard this many samples from the start of the sampling regime.
- thin (int, optional) – Only accept 1 in every thin samples.
- ntemps (int, deprecated) – ntemps has no effect.
- pos (numpy.ndarray, optional) – Specify the initial positions for the sampler, an ndarray of shape (nwalkers, nvarys). You can also initialise using a previous chain of the same nwalkers and nvarys. Note that nvarys may be one larger than you expect it to be if your userfcn returns an array and is_weighted=False.
- reuse_sampler (bool, optional) – Set to True if you have already run emcee with the Minimizer instance and want to continue to draw from its sampler (and so retain the chain history). If False, a new sampler is created. The keywords nwalkers, pos, and params will be ignored when this is set, as they will be set by the existing sampler. Important: the Parameters used to create the sampler must not change in-between calls to emcee. Alteration of Parameters would include changed min, max, vary and expr attributes. This may happen, for example, if you use an altered Parameters object and call the minimize method in-between calls to emcee.
- workers (Pool-like or int, optional) – For parallelization of sampling. It can be any Pool-like object with a map method that follows the same calling sequence as the built-in map function. If int is given as the argument, then a multiprocessing-based pool is spawned internally with the corresponding number of parallel processes. 'mpi4py'-based parallelization and 'joblib'-based parallelization pools can also be used here. Note: because of multiprocessing overhead it may only be worth parallelising if the objective function is expensive to calculate, or if there are a large number of objective evaluations per step (nwalkers * nvarys).
- float_behavior (str, optional) – Meaning of float (scalar) output of objective function. Use 'posterior' if it returns a log-posterior probability or 'chi2' if it returns \(\chi^2\). See Notes for further details.
- is_weighted (bool, optional) – Has your objective function been weighted by measurement uncertainties? If is_weighted=True then your objective function is assumed to return residuals that have been divided by the true measurement uncertainty (data - model) / sigma. If is_weighted=False then the objective function is assumed to return unweighted residuals, data - model. In this case emcee will employ a positive measurement uncertainty during the sampling. This measurement uncertainty will be present in the output params and output chain with the name __lnsigma. A side effect of this is that you cannot use this parameter name yourself. Important: this parameter only has any effect if your objective function returns an array. If your objective function returns a float, then this parameter is ignored. See Notes for more details.
- seed (int or numpy.random.RandomState, optional) – If seed is an int, a new numpy.random.RandomState instance is used, seeded with seed. If seed is already a numpy.random.RandomState instance, then that numpy.random.RandomState instance is used. Specify seed for repeatable minimizations.
- progress (bool, optional) – Print a progress bar to the console while running.
- run_mcmc_kwargs (dict, optional) – Additional (optional) keyword arguments that are passed to emcee.EnsembleSampler.run_mcmc.
Returns:
MinimizerResult object containing updated params, statistics, etc. The updated params represent the median of the samples, while the uncertainties are half the difference of the 15.87 and
84.13 percentiles. The MinimizerResult contains a few additional attributes: chain contains the samples and has shape ((steps - burn) // thin, nwalkers, nvarys). flatchain is a
pandas.DataFrame of the flattened chain, which can be accessed with result.flatchain[parname]. lnprob contains the log probability for each sample in chain. The sample with the highest
probability corresponds to the maximum likelihood estimate. acor is an array containing the auto-correlation time for each parameter if the auto-correlation time can be computed from the
chain. Finally, acceptance_fraction is an array of the fraction of steps accepted for each walker.
Return type:
MinimizerResult
This method samples the posterior distribution of the parameters using Markov Chain Monte Carlo. It calculates the log-posterior probability of the model parameters, F, given the data, D, \(\ln p
(F_{true} | D)\). This "posterior probability" is given by:
\[\ln p(F_{true} | D) \propto \ln p(D | F_{true}) + \ln p(F_{true})\]
where \(\ln p(D | F_{true})\) is the "log-likelihood" and \(\ln p(F_{true})\) is the "log-prior". The default log-prior encodes the prior information known about the model: the log-prior
probability is -numpy.inf (impossible) if any of the parameters is outside its limits, and is zero if all the parameters are inside their bounds (uniform prior). The log-likelihood function is [1]:
\[\ln p(D|F_{true}) = -\frac{1}{2}\sum_n \left[\frac{(g_n(F_{true}) - D_n)^2}{s_n^2}+\ln (2\pi s_n^2)\right]\]
The first term represents the residual (\(g\) being the generative model, \(D_n\) the data and \(s_n\) the measurement uncertainty). This gives \(\chi^2\) when summed over all data points. The
objective function may also return the log-posterior probability, \(\ln p(F_{true} | D)\). Since the default log-prior term is zero, the objective function can also just return the
log-likelihood, unless you wish to create a non-uniform prior.
If the objective function returns a float value, this is assumed by default to be the log-posterior probability (float_behavior default is 'posterior'). If your objective function returns
\(\chi^2\), then you should use float_behavior='chi2' instead.
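For instance, a float-returning objective suited to the default float_behavior='posterior' could look like the following sketch (residual is assumed to return the unweighted residual array, and the measurement uncertainty sigma is assumed known and constant):
def log_posterior(params, sigma=0.1):
    # Gaussian log-likelihood; with the default flat prior this is
    # also the log-posterior up to an additive constant.
    r = residual(params)  # unweighted residuals, data - model
    return -0.5 * np.sum((r / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2))
Such a function would then be passed as lmfit.minimize(log_posterior, p, method='emcee', float_behavior='posterior').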
By default objective functions may return an ndarray of (possibly weighted) residuals. In this case, use is_weighted to select whether these are correctly weighted by measurement uncertainty.
Note that this ignores the second term above, so that to calculate a correct log-posterior probability value your objective function should return a float value. With is_weighted=False the data
uncertainty, s_n, will be treated as a nuisance parameter to be marginalized out. This uses strictly positive uncertainty (homoscedasticity) for each data point, \(s_n = \exp(\rm{\_\_lnsigma})\).
__lnsigma will be present in MinimizerResult.params as well as in Minimizer.chain, and nvarys will be increased by one.
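For example, to draw additional samples from an existing sampler rather than starting a new one, a minimal sketch looks like this (residual and mi.params are assumed to be defined as in the example below):
mini = lmfit.Minimizer(residual, mi.params, nan_policy='omit')
res1 = mini.emcee(steps=500, is_weighted=False, progress=False)
# Draw 500 more steps from the same chain; nwalkers, pos, and params
# are taken from the existing sampler, and those keywords are ignored.
res2 = mini.emcee(steps=500, reuse_sampler=True, progress=False)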
Minimizer.emcee() - calculating the posterior probability distribution of parameters
Minimizer.emcee() can be used to obtain the posterior probability distribution of parameters, given a set of experimental data. Note that this method does not actually perform a fit at all. Instead,
it explores parameter space to determine the probability distributions for the parameters, but without an explicit goal of attempting to refine the solution. It should not be used for fitting, but it
is a useful method to more thoroughly explore the parameter space around the solution after a fit has been done and thereby get an improved understanding of the probability distribution for the
parameters. It may be able to refine your estimate of the most likely values for a set of parameters, but it will not iteratively find a good solution to the minimization problem. To use this method
effectively, you should first use another minimization method and then use this method to explore the parameter space around those best-fit values.
To illustrate this, we'll use an example problem of fitting data to a function of a double exponential decay, with a modest amount of Gaussian noise added to the data. Note that this is the same
problem used in An advanced example for evaluating confidence intervals, which has a similar goal to the one here.
import matplotlib.pyplot as plt
import numpy as np
import lmfit
x = np.linspace(1, 10, 250)
y = 3.0 * np.exp(-x / 2) - 5.0 * np.exp(-(x - 0.1) / 10.) + 0.1 * np.random.randn(x.size)
Create a Parameter set for the initial guesses:
p = lmfit.Parameters()
p.add_many(('a1', 4.), ('a2', 4.), ('t1', 3.), ('t2', 3., True))
def residual(p):
    v = p.valuesdict()
    return v['a1'] * np.exp(-x / v['t1']) + v['a2'] * np.exp(-(x - 0.1) / v['t2']) - y
Solving with minimize() gives the Maximum Likelihood solution. Note that we use the robust Nelder-Mead method here. The default Levenberg-Marquardt method seems to have difficulty with exponential
decays, though it can refine the solution if starting near the solution:
mi = lmfit.minimize(residual, p, method='nelder', nan_policy='omit')
lmfit.printfuncs.report_fit(mi.params, min_correl=0.5)
a1: 2.98623689 +/- 0.15010519 (5.03%) (init = 4)
a2: -4.33525597 +/- 0.11765824 (2.71%) (init = 4)
t1: 1.30993186 +/- 0.13449656 (10.27%) (init = 3)
t2: 11.8240752 +/- 0.47172610 (3.99%) (init = 3)
[[Correlations]] (unreported correlations are < 0.500)
C(a2, t2) = +0.9876
C(a2, t1) = -0.9278
C(t1, t2) = -0.8852
C(a1, t1) = -0.6093
and plotting the fit using the Maximum Likelihood solution gives the graph below:
plt.plot(x, y, 'o')
plt.plot(x, residual(mi.params) + y, label='best fit')
Note that the fit here (which requires the numdifftools package to be installed) does estimate and report uncertainties in the parameters and correlations for the parameters, and reports the correlation of
parameters a2 and t2 to be very high. As we'll see, these estimates are pretty good, but when faced with such high correlation, it can be helpful to get the full probability distribution for the
parameters. MCMC methods are very good for this.
Furthermore, we wish to deal with the data uncertainty. This is called marginalisation of a nuisance parameter. emcee requires a function that returns the log-posterior probability. The log-posterior
probability is a sum of the log-prior probability and log-likelihood functions. The log-prior probability is assumed to be zero if all the parameters are within their bounds and -np.inf if any of the
parameters are outside their bounds.
If the objective function returns an array of unweighted residuals (i.e., data-model) as is the case here, you can use is_weighted=False as an argument for emcee. In that case, emcee will
automatically add/use the __lnsigma parameter to estimate the true uncertainty in the data. To place boundaries on this parameter one can do:
mi.params.add('__lnsigma', value=np.log(0.1), min=np.log(0.001), max=np.log(2))
Now we have to set up the minimizer and do the sampling (again, just to be clear, this is not doing a fit):
res = lmfit.minimize(residual, method='emcee', nan_policy='omit', burn=300, steps=1000, thin=20,
params=mi.params, is_weighted=False, progress=False)
As mentioned in the Notes for Minimizer.emcee(), the is_weighted argument will be ignored if your objective function returns a float instead of an array. For the documentation we set progress=False;
the default is to print a progress bar to the Terminal if the tqdm package is installed.
The success of the method (i.e., whether or not the sampling went well) can be assessed by checking the integrated autocorrelation time and/or the acceptance fraction of the walkers. For this
specific example the autocorrelation time could not be estimated because the "chain is too short". Instead, we plot the acceptance fraction per walker and its mean value suggests that the sampling
worked as intended (as a rule of thumb the value should be between 0.2 and 0.5).
plt.plot(res.acceptance_fraction, 'o')
plt.ylabel('acceptance fraction')
With the results from emcee, we can visualize the posterior distributions for the parameters using the corner package:
import corner
emcee_plot = corner.corner(res.flatchain, labels=res.var_names,
                           truths=list(res.params.valuesdict().values()))
The values reported in the MinimizerResult are the medians of the probability distributions and a 1 \(\sigma\) quantile, estimated as half the difference between the 15.87 and 84.13 percentiles.
Printing these values:
print('median of posterior probability distribution')
lmfit.report_fit(res.params)
median of posterior probability distribution
a1: 2.98945718 +/- 0.14033921 (4.69%) (init = 2.986237)
a2: -4.34687243 +/- 0.12131092 (2.79%) (init = -4.335256)
t1: 1.32883916 +/- 0.13766047 (10.36%) (init = 1.309932)
t2: 11.7836194 +/- 0.47719763 (4.05%) (init = 11.82408)
__lnsigma: -2.32559226 +/- 0.04542650 (1.95%) (init = -2.302585)
[[Correlations]] (unreported correlations are < 0.100)
C(a2, t2) = +0.9811
C(a2, t1) = -0.9377
C(t1, t2) = -0.8943
C(a1, t1) = -0.5076
C(a1, a2) = +0.2140
C(a1, t2) = +0.1777
You can see that this recovered the right uncertainty level on the data. Note that these values agree pretty well with the results, uncertainties and correlations found by the fit and using
numdifftools to estimate the covariance matrix. That is, even though the parameters a2, t1, and t2 are all highly correlated and do not display perfectly Gaussian probability distributions, the
probability distributions found by explicitly sampling the parameter space are not so far from elliptical as to make the simple (and much faster) estimates from inverting the covariance matrix
completely invalid.
As mentioned above, the result from emcee reports the median values, which are not necessarily the same as the Maximum Likelihood Estimate. To obtain the values for the Maximum Likelihood Estimation
(MLE) we find the location in the chain with the highest probability:
highest_prob = np.argmax(res.lnprob)
hp_loc = np.unravel_index(highest_prob, res.lnprob.shape)
mle_soln = res.chain[hp_loc]
for i, par in enumerate(p):
    p[par].value = mle_soln[i]
print('\nMaximum Likelihood Estimation from emcee ')
print('Parameter MLE Value Median Value Uncertainty')
fmt = ' {:5s} {:11.5f} {:11.5f} {:11.5f}'.format
for name, param in p.items():
    print(fmt(name, param.value, res.params[name].value,
              res.params[name].stderr))
Maximum Likelihood Estimation from emcee
Parameter MLE Value Median Value Uncertainty
a1 2.93839 2.98946 0.14034
a2 -4.35274 -4.34687 0.12131
t1 1.34310 1.32884 0.13766
t2 11.78782 11.78362 0.47720
Here the differences between the MLE and median values are seen to be small, and well within the estimated 1-\(\sigma\) uncertainties.
Finally, we can use the samples from emcee to work out the 1- and 2-\(\sigma\) error estimates.
print('\nError estimates from emcee:')
print('Parameter -2sigma -1sigma median +1sigma +2sigma')
for name in p.keys():
    quantiles = np.percentile(res.flatchain[name],
                              [2.275, 15.865, 50, 84.135, 97.725])
    median = quantiles[2]
    err_m2 = quantiles[0] - median
    err_m1 = quantiles[1] - median
    err_p1 = quantiles[3] - median
    err_p2 = quantiles[4] - median
    fmt = ' {:5s} {:8.4f} {:8.4f} {:8.4f} {:8.4f} {:8.4f}'.format
    print(fmt(name, err_m2, err_m1, median, err_p1, err_p2))
Error estimates from emcee:
Parameter -2sigma -1sigma median +1sigma +2sigma
a1 -0.2656 -0.1362 2.9895 0.1445 0.3141
a2 -0.3209 -0.1309 -4.3469 0.1118 0.1985
t1 -0.2377 -0.1305 1.3288 0.1448 0.3278
t2 -1.0677 -0.4807 11.7836 0.4739 0.8990
And we see that the initial estimates for the 1-\(\sigma\) standard error using numdifftools were not too bad. We'll return to this example problem in An advanced example for evaluating confidence
intervals and use a different method to calculate the 1- and 2-\(\sigma\) error bars. | {"url":"https://lmfit.github.io/lmfit-py/fitting.html","timestamp":"2024-11-12T05:38:28Z","content_type":"text/html","content_length":"222111","record_id":"<urn:uuid:e0120a47-fe09-4f21-bd35-f60bcf15ce3e>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00539.warc.gz"} |
Implicit computation of minimum-cost feedback-vertex sets for partial scan and other applications
The contribution of this paper is an implicit method for computing the minimum cost feedback vertex set for a graph. For an arbitrary graph, we efficiently derive a Boolean function whose satisfying
assignments directly correspond to feedback vertex sets of the graph. Importantly, cycles in the graph are never explicitly enumerated, but rather, are captured implicitly in this Boolean function.
This function is then used to determine the minimum cost feedback vertex set. Even though computing the minimum cost satisfying assignment for a Boolean function remains an NP-hard problem, we can
exploit the advances made in the area of Boolean function representation in logic synthesis to tackle this problem efficiently in practice for even reasonably large sized graphs. The algorithm has
obvious application in flip-flop selection for partial scan. Our algorithm was the first to obtain the MFVS solutions for many benchmark circuits.
All Science Journal Classification (ASJC) codes
• Hardware and Architecture
• Control and Systems Engineering
| {"url":"https://collaborate.princeton.edu/en/publications/implicit-computation-of-minimum-cost-feedback-vertex-sets-for-par","timestamp":"2024-11-09T03:37:42Z","content_type":"text/html","content_length":"48951","record_id":"<urn:uuid:e9cc0566-6605-420b-9fbc-c4208ec78f76>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00188.warc.gz"}
3D with HTML5 Canvas: Part 1
Tuesday, August 4, 2009
Ah ha! First time on this blog where I've started a "series" and got past post 1. (Check out post one: called "Part 0"). Well anyway, it turns out that my theory that simple 3D is easier that it
seems was correct. Based on my vague inclination of how the maths should work, I constructed a lil' 3D starfield and, more importantly, I pretty much completely understand how it works. Pretty much
completely. Accordingly, this post will cover setting up our "field of view" and projecting our 3D stars on to our 2D screen with parallel projection.
Modelling 3D in data
This tutorial won't be talking about polygons or even lines: we're sticking with plain-old points until we figure it all out. To model a 3D point we just need to keep track of its X position, its Y
position, and its Z position. To get the results we are after we'll think of the X/Y axis like we know from maths: a cross in the center of the screen, with positive Y going up, negative Y going
down, positive X going right and negative X going left. This is different from how the screen works - where the point X=0, Y=0 is in the top left hand corner, not in the center. So we'll need to do a
little bit of a conversion at the end to translate from "maths axis" to "screen coordinates" - but that's easy.
So to make our starfield, what we'll do is create a bunch of points and set their X value to a random number between the left side of the screen and the right. We'll set the Y value to a random
number between the top of the screen and the bottom. Finally, we'll set the Z value to a random number between 0 and some number that represents the horizon. That's easy to model, but how do we draw
the stars on the screen so it looks like points in the distance move closer to the horizon - like you see in perspective drawings?
Perspective is easy!
Roool easy: we just divide the X and Y values by the Z value. Think about that - it makes sense! Pretend that X = -100 (so it's on the left side of the screen). And pretend that the Z value is 1
(right up the front of the screen). Now, -100 / 1 = -100. So we just draw the point at -100 pixels. NOW, imagine the point moves further into the distance, so Z = 2. With the new distance, -100 / 2 =
-50. So the point is drawn much closer to the center of the screen. if Z = 3, then -100 / 3 = -33.3. The larger that Z becomes, the closer the point moves towards the middle of the screen: thus
simulating perspective! That's all there is to it!
Seriously, that's it. You could take this knowledge and create a starfield now easily:
1. Randomly assign X, Y, Z
2. Move Z a bit closer (if it's too close, send it to the back)
3. Draw all the points at X/Z, Y/Z
4. GOTO 2
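Put together, that loop might look something like this rough sketch (the names stars, speed, maxDistance, width, height and context are my assumptions, not the actual source):
for (var i = 0; i < stars.length; i++) {
  var star = stars[i];
  star.z -= speed;                        // step 2: move a bit closer
  if (star.z <= 0) star.z = maxDistance;  // too close? send it to the back
  var sx = (star.x / star.z) + width / 2;     // step 3: perspective divide,
  var sy = (height / 2) - (star.y / star.z);  // then maths axis -> screen
  context.fillRect(sx, sy, 1, 1);
}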
In the code, we do random assigning in the init function, and we move Z closer in the update section:
star.z -= this.starSpeed;
if( star.z <= 0 )
    star.z = this.maxDistance;
Finally, we figure out the place to draw the stars with this snippet:
// Project to 2D space
star.projectedX =
( star.x * this.hViewDistance ) / star.z;
star.projectedY =
( star.y * this.vViewDistance ) / star.z;
We are doing the projection calculations inside the update function, instead of the drawing function - and store the projected X and Y values in the star object itself: we don't need to do this, we
could just calculate it in the drawing function, but I thought it might make it a bit clearer. It probably didn't.
If you look at the code, you'll notice there is also a Size and Colour variable being stored with each point. This is to create the effect that closer stars are bigger, and brighter than stars in the
distance. It's basically just an inverse function of the Z index (so a small Z equals a big star!)
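Something along these lines would do it (a rough sketch; the size and colour expressions here are my assumptions):
var t = 1 - (star.z / this.maxDistance);  // 1 = near, 0 = far
star.size = 1 + 3 * t;                    // closer stars are bigger
star.colour = 'rgba(255, 255, 255, ' + (0.2 + 0.8 * t) + ')';  // ...and brighter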
Anywayz, the only remaining piece of the puzzle is... what are the variables hViewDistance and vViewDistance? They are just some numbers that affect the overall result of how the points as a group
are skewed. You could just test a bunch of numbers in there to see what looks good, buuut, this series is about understanding why it works... So, I give you - "the field of view"...
Field Of Views: The hardest bit
Now don't let those brilliantly illustrated graphs on the left put you off. We just need them to show why we're using a couple of magic numbers in our equations. The top graph is supposed to
illustrate the concept of a "Field Of View", like when you look at stuff, you know? Check out this article on FOV: it says (kind of) that humans see 100 degrees on the X axis, and 80 degrees on the Y
axis (but also says [citation needed] at the end, but good enough). Really we don't need to know anything about FOV, we just want a good number for Y and theta that sort of looks like the real world.
Y (or V on the top graph) is our "viewing distance" and theta is our FOV.
We'll figure out all the bits using the SOHCAHTOA thing I remember learning. We know the theta, and we know the "OPPOSITE" side - which is half of the screen width (or height for the vertical
one). And we now just want the "ADJACENT" side to figure out the viewing distance. As TAN = OPPOSITE / ADJACENT, then also ADJACENT = OPPOSITE / TAN.
So... theta is FOV / 2... but we gotta convert it to radians, cause that's what computers like. So we end up with something like:
// Convert degrees to radians
var hfov = 100 * Math.PI / 180;
var vfov = 80 * Math.PI / 180;
// Figure out the horizontal and vertical distances
var hViewDistance =
( screenWidth / 2 ) / Math.tan( hfov / 2 );
var vViewDistance =
( screenHeight / 2 ) / Math.tan( vfov / 2 );
Our first mathtastic challenge complete! We have the magic viewing distance numbers. Now we can change the horizontal and vertical fields of view to see how it changes the overall effect.
Translating to screen coordinates
The last thing we need to do is translate from traditional X/Y axis to normal screen coordinates. We just need to perform these simple calculations:
// Transform to screen coordinates
star.projectedX += this.screenWidth / 2;
star.projectedY = ( this.screenHeight / 2 ) - star.projectedY;
Test that out on paper if you don't believe me.
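(Quick check: on a 400 by 300 canvas, a point at projectedX = -100, projectedY = 50 lands at screen coordinates (400/2 - 100, 300/2 - 50) = (100, 100): left of centre and above it, just as the maths axis says it should.)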
The new dimension
So that was that. Just divide X by Z and Y by Z and we are cooking! Grab the code and have a play with the Fields of View, the number of stars, etc... try incrementing and decrementing X and Y in the
update function too and see what happens.
Next time we'll have a look at moving from points to lines and see if we can't get some cube madness going!
4 Comments
1. Nice! But it looks like some of the stars in the background get drawn over top of stars in the foreground?
2. Very true Non, very true - the reason is, we are not taking the z order into account when we draw - we're just drawing randomly. Probably should have assigned the Z value incrementally, rather
than randomly... but also, we'll look at z order soon when we get to draw polygons.
3. Awesome tutorial. Guess you never got t part two?
4. I love this post. It explains the basics very well, and thats just what I need when I have just started looking into graphics :) | {"url":"https://www.mrspeaker.net/2009/08/04/3d-html5-part-1/","timestamp":"2024-11-09T22:27:57Z","content_type":"text/html","content_length":"24240","record_id":"<urn:uuid:0075f270-c2f3-4c9a-92cc-c3353e5feb7e>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00280.warc.gz"} |
Deriving Pair-wise Correlations Using the Copula Method for Credit Portfolio Management Simulation - SAS Risk Data and Analytics
The copula method has been much vilified as the "Formula that killed Wall Street," and this criticism is extremely well deserved. The copula approach, however, is still widely used and an
understanding of how it works is important for a key reason: once one understands how it works, one understands why one should not use it for credit portfolio management. This blog talks about one
key issue in the practical use of the copula method: how to derive a pair-wise correlation matrix for all counterparties on the assumption that one knows the proper intra-industry and inter-industry
correlations. We thank Kamakura Managing Director for Research Professor Robert A. Jarrow for his very helpful comments.
The copula method grossly underestimated the risk in the collateralized debt obligation market in the credit crisis for these key reasons:
• The fundamental assumptions of the Merton model of risky debt, to which the copula method is closely related, are too simple to be accurate
• The copula method holds default probabilities constant over the modeling period (or at best allows them to drift in a non-random way). This produces an estimate of credit losses which is too big
in the best parts of the business cycle and too small in the worst of times. This happens because default probabilities are not constant, they are random, and they move up and down over the
business cycle.
For articles in the popular press on the inaccuracy of the copula method, we recommend the following:
Mark Whitehouse, "Slices of Risk: How a Formula Ignited Market That Burned Some Big Investors," Wall Street Journal, September 12, 2005, page 1.
Felix Salmon, "Recipe for Disaster: The Formula that Killed Wall Street," Wired Magazine, February 23, 2009.
For background proving the inaccuracy of the Merton framework compared to a more modern reduced form/logistic regression approach, these articles have been in circulation since as early as 2002 and
summarize the facts nicely:
S. Bharath and T. Shumway, "Forecasting Default with the Merton Distance to Default Model," Review of Financial Studies, May 2008.
J. Y. Campbell, J. Hilscher, and J. Szilagyi, "In Search of Distress Risk," Journal of Finance, December 2008.
R. Jarrow, M. Mesler, and D. R. van Deventer, Kamakura Default Probabilities
Technical Report, Kamakura Risk Information Services, Version 4.1, Kamakura Corporation memorandum, January 25, 2006.
D. R. van Deventer, L. Li and X. Wang, "Another Look at Advanced Credit Model Performance Testing to Meet Basel Requirements: How Things Have Changed," The Basel Handbook: A Guide for Financial
Practitioners, second edition, Michael K. Ong, editor, Risk Publications, 2006
For a comparison of the copula method with other approaches, these recent blog entries and articles are relevant:
Jarrow, Robert A. and Donald R. van Deventer, "Synthetic CDO Equity: Short or Long Correlation," Journal of Fixed Income, Spring, 2008.
Jarrow, Robert A. and Donald R. van Deventer, "Learning Curve: Synthetic CDO Equity: Short or Long Correlation," Derivatives Week, March 24, 2008, pp. 8-9.
Jarrow, Robert A., Li Li, Mark Mesler, and Donald R. van Deventer, "CDO Valuation: Fact and Fiction," The Definitive Guide to CDOs, Gunter Meissner, Editor, RISK Publications, 2008.
van Deventer, Donald R. "The Copula Approach to CDO Valuation: A Post Mortem," Kamakura blog, www.kamakuraco.com, April 9, 2009. Redistributed on www.riskcenter.com, April 13, 2009.
van Deventer, Donald R. "Modeling Default for Credit Portfolio Management and CDO Valuation: A Menu of Alternatives," Kamakura blog, www.kamakuraco.com, April 19, 2009. Redistributed on
www.riskcenter.com, April 21, 2009.
van Deventer, Donald R. "Credit Portfolio Models: The Reduced Form Approach," Kamakura blog, www.kamakuraco.com, June 5, 2009. Redistributed on www.riskcenter.com on June 9, 2009.
We now list the highly simplified assumptions of the copula method and explain their implications mathematically. We then summarize why one should reject these assumptions and move to a more realistic approach.
Common Assumptions of the Copula Method
There are as many variations on the copula method as there are users of the method. The volume mentioned above, edited by Gunter Meissner, provides a good sampling of approaches. In this section, we
summarize common assumptions that are frequently employed. Among the vendors that use these assumptions are Standard & Poorโs, in its CDO evaluator, Moodyโs Investors Service in its products
Portfolio Manager and Risk Frontier, and Kamakuraโs Kamakura Risk Manager (โKRMโ). KRM includes far superior techniques, and this blog entry is part of Kamakuraโs on-going effort to make our clients
and potential clients aware of the model risk in the copula approach. Tens of billions of dollars have been lost using this approach in the 2007-2010 credit crisis, and the reasons are deeply rooted
in these common assumptions:
• Credit modeling is done for a single period of arbitrary length, not on a multi-period approach with a dynamic balance sheet
• The Merton approach is used as the framework. This implies that default probabilities are set at time zero and random values of the "value of company assets" at the end of the modeling period
are simulated to determine if the default probability is zero (assets worth more than liabilities) or 100% (assets worth less than liabilities) at that time. No other outcomes are possible in a
single period model.
• The return on the value of company assets is driven by one and only one common "macro" factor. This macro factor is not specifically identified, so no hedging with respect to movements in the
factor is possible.
• The other contribution to random movements in the return on company assets is an idiosyncratic risk factor which is assumed not to be correlated with the idiosyncratic risk of any other counterparty
• All counterparties in a given industry sector are assumed to have the same pair-wise correlation in the returns on the values of company assets (a common assumption, although the Kamakura Risk
Manager implementation allows for every pair-wise correlation to be different). This single correlation parameter is called "intra-industry correlation" in the returns on the assets of each pair
of companies.
• If there are N industries, there are N macro factors. In common application, the correlation between the returns on each pair of these macro factors is identical, a single correlation
parameter. In the Kamakura Risk Manager implementation, these correlations can be different for each pair of macro factors and we use that more general implementation in this example.
• If the counterparty is not a company, it is assumed that its default can be modeled as if it were a company in the Merton framework, even if the "counterparty" is the tranche of a mortgage backed
or asset backed security or a tranche of another CDO (which would be the case in a โCDO squaredโ). This assumption is a gross error and we will ignore the implications of this error because they
are so obvious and well documented.
For an example of a paper which makes these assumptions, see
Oldrich Vasicek, "Limiting Loan Loss Distribution," KMV Corporation working paper, August 9, 1991.
In the rest of this paper, we will use the same notation as Vasicek for consistency.
An Example of Common Copula Assumptions
We assume that our portfolio has five industry sectors. Within each industry sector, we assume the pair wise correlation between the returns on the values of company assets for companies j and k is
the same for all values of j and k. Although it is very common for this "intra-industry" correlation coefficient to be assumed the same for all sectors, in this example we use five different
correlation values:
Without loss of generality, we assume there are 20 counterparties in our portfolio, spread over the five industry sectors as in the table above.
As is typical, we assume there is one macro factor driving the returns on the value of company assets in each industry sector, so there are 5 macro factors at work in this example. Although it is
common to assume a single correlation figure for "inter-industry" correlation, in this example we allow the correlation between each pair of macro factors to be different. The correlations used in
this example are as follows:
We now need to generate a 20 x 20 matrix which gives the pair-wise correlations in the returns on company assets so we can simulate period-end asset values in a way that consistently reflects the
assumptions above. In the next section, we provide the mathematical foundations for this matrix. After that, we use these foundations to produce the matrix.
Mathematical Foundations for the Pair-wise Correlation Matrix
We use Vasicek's notation in this section. For any company j, the change in the value of company assets is written in stochastic process notation like this:
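In symbols, writing \(V_j\) for the value of company j's assets and \(\sigma_j\) for its volatility:
\[\frac{dV_j}{V_j} = r\,dt + \sigma_j\,dz_j\]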
This "logarithmic Wiener process" has two terms. The first term is a drift term, which says that over time asset values will drift up at the (falsely) assumed constant rate of interest r. The second
term induces random shocks from a Wiener process with a mean of zero, an instantaneous standard deviation of 1, and a constant correlation within industry sector m for any pairs of companies j and
k. We write this formally using the expected values E as follows:
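\[E[dz_j] = 0, \qquad E[(dz_j)^2] = dt, \qquad E[dz_j\,dz_k] = \rho_m\,dt\]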
Because of these assumptions, the random shock term z[j] for company j is a linear combination of impacts from the macro factor driving industry sector m and an idiosyncratic risk factor unique to
company j. Moreover, the weightings are a function of the correlation coefficient that is assumed to apply to all pairs of companies in industry sector m:
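\[dz_j = \sqrt{\rho_m}\,dx_m + \sqrt{1-\rho_m}\,d\varepsilon_j\]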
Here again, x[m] is the random (unspecified) macro factor that is the only driver of correlated movements in asset values for all companies in industry sector m. The epsilon is the idiosyncratic
risk factor for company j. These two variables have mean zero, instantaneous standard deviation of 1, and they are uncorrelated with each other. The idiosyncratic risk factor is also uncorrelated
with that of any other company. This is written formally as follows:
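\[E[dx_m] = E[d\varepsilon_j] = 0, \qquad E[(dx_m)^2] = E[(d\varepsilon_j)^2] = dt, \qquad E[dx_m\,d\varepsilon_j] = 0, \qquad E[d\varepsilon_j\,d\varepsilon_k] = 0 \;\;(j \neq k)\]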
Then the product for two firms in the same industry sector m is
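\[dz_j\,dz_k = \left(\sqrt{\rho_m}\,dx_m + \sqrt{1-\rho_m}\,d\varepsilon_j\right)\left(\sqrt{\rho_m}\,dx_m + \sqrt{1-\rho_m}\,d\varepsilon_k\right)\]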
When we write out this product and take its expected value using the assumptions above, we confirm that indeed
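\[E[dz_j\,dz_k] = \rho_m\,E[(dx_m)^2] = \rho_m\,dt\]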
We now ask, "What if companies j and k are not both in industry sector m, but company j is in sector m and company k is in sector n?" Consistent with the approach taken above, we continue to assume that
the macro factors are correlated with each other but not with any idiosyncratic risk factor. We also require that no idiosyncratic risk factor in industry sector m is correlated with any
idiosyncratic risk factor in industry sector n. This insight is consistent with the conclusions of Jarrow, Lando and Yu provided that (a) only 1 macro factor drives each industry and (b) no macro
factors have been omitted or misspecified. Our additional assumptions are written formally as follows, adding the industry sector to the company subscript to make the fact that industries are
different more clear:
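\[E[dx_m\,dx_n] = q_{mn}\,dt, \qquad E[dx_m\,d\varepsilon_{k,n}] = 0, \qquad E[d\varepsilon_{j,m}\,d\varepsilon_{k,n}] = 0\]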
The parameter q[mn] is the correlation coefficient on the returns of macro factors driving industry sectors m and n. We then use the results above to write out the changes in the shock terms for
firm j in industry sector m and firm k in industry sector n:
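\[dz_{j,m} = \sqrt{\rho_m}\,dx_m + \sqrt{1-\rho_m}\,d\varepsilon_{j,m}, \qquad dz_{k,n} = \sqrt{\rho_n}\,dx_n + \sqrt{1-\rho_n}\,d\varepsilon_{k,n}\]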
We then write out the product of these two changes in shock terms:
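\[dz_{j,m}\,dz_{k,n} = \sqrt{\rho_m\rho_n}\,dx_m\,dx_n + \sqrt{\rho_m(1-\rho_n)}\,dx_m\,d\varepsilon_{k,n} + \sqrt{(1-\rho_m)\rho_n}\,d\varepsilon_{j,m}\,dx_n + \sqrt{(1-\rho_m)(1-\rho_n)}\,d\varepsilon_{j,m}\,d\varepsilon_{k,n}\]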
When we take the expected value of this expression using the assumptions above, we get the following expression for the correlation of asset returns on firms in industry sectors m and n:
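\[E[dz_{j,m}\,dz_{k,n}] = \sqrt{\rho_m\,\rho_n}\;q_{mn}\,dt\]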
We now use this expression to populate the pair-wise correlation in asset returns for the 20 firms in 5 industries in our worked example.
Continuing the Example
Using the expression for correlations between asset returns on firms in different industries, plus the inputs given above, the 20 x 20 correlation matrix for asset returns in our example can be
calculated like this:
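In code, the construction looks like the following sketch (the intra-industry rhos, the q matrix of inter-industry correlations, and the sector assignment of the 20 counterparties are placeholder assumptions here, since the original tables and matrix were shown as images):
import numpy as np

rho = np.array([0.30, 0.25, 0.20, 0.35, 0.15])   # intra-industry correlations, one per sector
q = np.full((5, 5), 0.50)                        # inter-industry macro factor correlations
np.fill_diagonal(q, 1.0)
sector = np.repeat(np.arange(5), 4)              # 20 counterparties, 4 per sector

n = len(sector)
corr = np.empty((n, n))
for j in range(n):
    for k in range(n):
        m1, m2 = sector[j], sector[k]
        if j == k:
            corr[j, k] = 1.0
        elif m1 == m2:
            corr[j, k] = rho[m1]                                 # intra-industry pair
        else:
            corr[j, k] = np.sqrt(rho[m1] * rho[m2]) * q[m1, m2]  # inter-industry pair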
Once we have the matrix in hand, the usual methods for simulating correlated normally distributed variables can be used. In the matrix above, we have highlighted the correlations that are
"intra-industry." As this matrix gets larger, its inversion becomes more difficult from a computer science perspective. Kamakura Corporation announced in a recent press release that matrices that
contain up to 999,999 random variables (counterparties in this case) can be processed, but this capability is not generally available in more basic software packages, particularly in common
spreadsheet software. Even with this capability, however, the copula approach is so flawed that market participants lost tens of billions of dollars from its use in the 2007-2010 credit crisis.
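Continuing the sketch above, one standard method is a Cholesky factorization of the correlation matrix (assumed positive definite):
L = np.linalg.cholesky(corr)
shocks = L @ np.random.standard_normal(n)   # one correlated draw of the 20 asset-return shocks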
What is wrong with the copula method? Almost all of its principal assumptions are false, and the impact of their "falseness" is a very large degree of inaccuracy. The copula method should only be
used as a benchmark to measure the difference between pre-credit crisis "common practice" and a modern 21st century reduced form approach that recognizes that multiple macro factors drive risk,
recognizes that defaults and cash flows appear at multiple points in time, recognizes that interest rates are random, and recognizes that balance sheets are dynamic. For details on that approach,
please see the blog entries above and contact us about KRIS default probabilities and Kamakura Risk Manager at info@kamakuraco.com.
Donald R. van Deventer
Kamakura Corporation
Honolulu, January 24, 2011 | {"url":"https://www.kamakuraco.com/deriving-pair-wise-correlations-using-the-copula-method-for-credit-portfolio-management-simulation/","timestamp":"2024-11-02T11:04:53Z","content_type":"text/html","content_length":"157456","record_id":"<urn:uuid:af91ca75-7ac8-4273-8341-d2f4e06d3575>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00450.warc.gz"} |
Given the equation `3x=x+6`
What would be the first step in solving?
subtract 3x from both sides of the equation
subtract x from both sides of the equation
subtract 6 from both sides of the equation
Divide by 3 on both sides of the equation
Explain your thinking.
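For reference: subtracting x from both sides gives 3x - x = 6, so 2x = 6 and x = 3. That makes "subtract x from both sides of the equation" the correct first step, since it collects the variable terms on one side.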
| {"url":"https://documen.tv/question/given-the-equation-3-6-what-would-be-the-first-step-in-solving-subtract-3-from-both-sides-o-17801632-12/","timestamp":"2024-11-13T15:49:40Z","content_type":"text/html","content_length":"78149","record_id":"<urn:uuid:b122d82a-57ea-4a46-8bd4-7f5e766f3e17>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00855.warc.gz"}
seminars - Introduction to XVA and BSDEs, Funding Value Adjustment
Since the financial crisis of 2010 value adjustments have become one of the central topics in derivatives pricing. Starting from immediate concerns in the industry about counterparty credit risk and
funding spreads in the wake of the collapse of Lehman Brothers, the study of XVA (the common acronym for the multitude of value adjustments) has also become a topic of major academic interest.
In this lecture series we present an approach for the valuation of XVA based on backward stochastic differential equations (BSDEs). BSDEs are a very natural modeling tool from a hedging perspective;
and they offer a nice framework for nonlinear valuation. We intend to achieve in the lecture series a threefold goal: to understand the economic realities of value adjustments; to introduce a
convenient modeling framework for nonlinear valuations; and to use this as motivation for the study of BSDEs.
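As a point of reference, the generic Brownian BSDE takes the standard form
\[-dY_t = f(t, Y_t, Z_t)\,dt - Z_t\,dW_t, \qquad Y_T = \xi,\]
where the terminal condition \(\xi\) encodes the claim's payoff, the driver \(f\) absorbs the (possibly nonlinear) funding and valuation adjustments, and the solution pair \((Y, Z)\) delivers the value process and the hedging strategy.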
After an introduction to the financial background and a review of derivatives pricing from a BSDE perspective, we will first focus on funding issues and the theory of Brownian BSDEs. Building
upon this, we will investigate credit risk, collateralization and the inclusion of default of a counterparty, leading to the study of BSDEs with random jump terminal conditions and the predictable
projection of BSDEs as a convenient tool. Finally, we will discuss more advanced topics such as the robustness of XVA and risk-indifference pricing.
Suggested background: We assume a working knowledge in stochastic (Ito-) calculus and (forward) stochastic differential equations (SDEs). Some knowledge about derivatives pricing and hedging is
desirable, though we will review the main ideas in the beginning. | {"url":"http://www.math.snu.ac.kr/board/index.php?mid=seminars&page=49&l=ko&document_srl=799489","timestamp":"2024-11-09T19:39:18Z","content_type":"text/html","content_length":"45665","record_id":"<urn:uuid:15750290-1cbb-414b-8717-8135e8a1932a>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00797.warc.gz"} |
Trait SameDimension
pub trait SameDimension<D1, D2>: SameNumberOfRows<D1, D2> + SameNumberOfColumns<D1, D2>
Constrains D1 and D2 to be equivalent, where they both designate dimensions of algebraic entities (e.g. square matrices).
Required Associated Types
type Representative: Dim;
This is either equal to D1 or D2, always choosing the one (if any) which is a type-level constant.
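As a hypothetical usage sketch (paths shown for nalgebra, from which this trait is re-exported; not taken from this crate's docs), ShapeConstraint is the type that implements SameDimension, so a generic function can demand equivalent dimensions at compile time:
use nalgebra::constraint::{SameDimension, ShapeConstraint};
use nalgebra::Dim;

// Compiles only when D1 and D2 designate equivalent dimensions: e.g. Const<3>
// is compatible with Dyn or Const<3>, but not with Const<4>.
fn require_same_dimension<D1: Dim, D2: Dim>()
where
    ShapeConstraint: SameDimension<D1, D2>,
{
}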
Dyn Compatibility
This trait is not dyn compatible.
In older versions of Rust, dyn compatibility was called "object safety", so this trait is not object safe. | {"url":"https://rustdoc.nyxspace.com/nyx_space/linalg/constraint/trait.SameDimension.html","timestamp":"2024-11-14T07:14:53Z","content_type":"text/html","content_length":"10515","record_id":"<urn:uuid:dbdaf784-fd63-4f1e-99d6-2d2805c392d5>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00899.warc.gz"} |
Raymond James Bank | Mortgage rate disclosures
Mortgage Rates Disclosures
Interest-Only Products
The benefit of certain mortgage options may vary depending on market conditions, your financial situation and other circumstances. When the principal and interest payment period commences, monthly
payments will be higher. The principal balance will not be reduced during the period that interest-only payments are made. Interest payments are calculated based on the outstanding principal balance.
A client will pay more interest over the life of the loan if they choose to make interest only payments exclusively than they would under a traditional loan with the same interest rate featuring
principal and interest payments. When your interest-only period ends, your monthly mortgage payment will be recalculated to include full principal repayment over the remaining years left on the loan.
Your payment may rise significantly based on the shorter remaining term and if you have an upward rate adjustment on an adjustable rate mortgage. During the interest-only period, without making
principal payments towards your outstanding loan balance, home price appreciation is the only way your equity will grow. The equity in your home is the difference between its market value and the
amount owed on loans secured by the property. There is also a risk that, by not paying down the balance of your loan, you may be in a situation where you owe more on your property than you could sell
it for if your home value declines.
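For illustration, once the interest-only period ends the recalculated payment follows the standard amortization formula over the remaining term: \[M = B \cdot \frac{r}{1 - (1+r)^{-n}},\] where \(B\) is the outstanding balance, \(r\) the then-current monthly interest rate, and \(n\) the number of months remaining.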
5/1 Adjustable Rate Conforming Mortgage
A $180,000 5/1 adjustable rate mortgage with an initial rate of 3.000% and an annual percentage rate of 3.180% would have 60 estimated monthly principal and interest payments of $758.89. The maximum
amount that the interest rate can rise under this program is 5 percentage points, to 8.000%. The monthly payment could rise from $758.89 to $1,222.88 in the eighth year. If an escrow account is
required or requested, the actual monthly payments will also include amounts for taxes and homeowners insurance. Down payment used in this example is 20%.
5/1 Adjustable Rate Jumbo Mortgage
A $500,000 5/1 adjustable rate jumbo mortgage with an initial rate of 3.000% and an annual percentage rate of 3.116% would have 60 estimated monthly principal and interest payments of $2,108.03. The
maximum amount that the interest rate can rise under this program is 5 percentage points, to 8.000%. The monthly payment could rise from $2,108.03 to $3,396.88 in the eighth year. If an escrow
account is required or requested, the actual monthly payments will also include amounts for taxes and homeowners insurance. Down payment used in this example is 20%.
5/1 Adjustable Rate Interest-Only Conforming Mortgage
A $180,000 5/1 adjustable rate mortgage with interest only payments and an initial rate of 4.375% and an annual percentage rate of 5.079% would have 60 estimated interest only payments of $656.25.
The maximum amount that the interest rate can rise under this program is 5 percentage points, to 9.375%. The monthly payment could rise from $656.25 to $1,544.00 in the eighth year. If an escrow
account is required or requested, the actual monthly payments will also include amounts for taxes and homeowners insurance. Down payment used in this example is 20%.
5/1 Adjustable Rate Interest-Only Jumbo Mortgage
A $500,000 5/1 adjustable rate mortgage interest only with an initial rate of 4.000% and an annual percentage rate of 4.9852% would have 60 estimated monthly interest only payments of $1,666.67. The
maximum amount that the interest rate can rise under this program is 5 percentage points, to 9.000%. The monthly payment could rise from $1,666.67 to $4,159.00 in the eighth year. If an escrow
account is required or requested, the actual monthly payments will also include amounts for taxes and homeowners insurance. Down payment used in this example is 20%.
7/1 Adjustable Rate Conforming Mortgage
A $180,000 7/1 adjustable rate mortgage with an initial rate of 3.125% and an annual percentage rate of 3.223% would have 84 estimated monthly principal and interest payments of $771.08. The maximum
amount that the interest rate can rise under this program is 5 percentage points, to 8.125%. The monthly payment could rise from $771.08 to $1,202.86 in the tenth year. If an escrow account is
required or requested, the actual monthly payments will also include amounts for taxes and homeowners insurance. Down payment used in this example is 20%.
7/1 Adjustable Rate Jumbo Mortgage
A $500,000 7/1 adjustable rate jumbo mortgage with an initial rate of 3.125% and an annual percentage rate of 3.160% would have 84 estimated monthly principal and interest payments of $2,141.88. The
maximum amount that the interest rate can rise under this program is 5 percentage points, to 8.125%. The monthly payment could rise from $2,141.88 to $3,341.27 in the tenth year. If an escrow account
is required or requested, the actual monthly payments will also include amounts for taxes and homeowners insurance. Down payment used in this example is 20%.
7/1 Adjustable Rate Interest-Only Conforming Mortgage
A $280,000 7/1 adjustable rate mortgage with interest only payments and an initial rate of 4.125% and an annual percentage rate of 5.183% would have 84 estimated interest only payments of $962.50.
The maximum amount that the interest rate can rise under this program is 5 percentage points, to 9.125%. The monthly payment could rise from $962.50 to $2,407.00 in the tenth year. If an escrow
account is required or requested, the actual monthly payments will also include amounts for taxes and homeowners insurance. Down payment used in this example is 20%.
7/1 Adjustable Rate Interest-Only Jumbo Mortgage
A $500,000 7/1 adjustable rate interest only mortgage with an initial rate of 3.500% and an annual percentage rate of 4.525% would have 84 estimated monthly interest only payments of $1,458.33. The
maximum amount that the interest rate can rise under this program is 5 percentage points, to 8.500%. The monthly payment could rise from $1,458.33 to $4,089.00 in the tenth year. If an escrow account
is required or requested, the actual monthly payments will also include amounts for taxes and homeowners insurance. Down payment used in this example is 20%.
7/1 Adjustable Rate Construction-Perm Mortgage
A $180,000 7/1 adjustable rate mortgage with an initial rate of 3.750% and an annual percentage rate of 4.530% and a 24 month construction term would have 24 estimated interest only payments of
$281.25 and 60 estimated monthly principal and interest payments of $866.07. The maximum amount that the interest rate can rise under this program is 5 percentage points, to 8.750%. The monthly
estimated payment could rise from $866.07 to $1,296 in the eleventh year. If an escrow account is required or requested, the actual monthly payments will also include amounts for taxes and homeowners
insurance. Down payment used in this example is 20%.
7/1 Adjustable Rate Jumbo Construction-Perm Mortgage
A $500,000 7/1 adjustable rate jumbo mortgage with an initial rate of 3.375% and an annual percentage rate of 4.204% and a construction term of 24 months would have 24 estimated interest only
payments of $703.13 and 60 estimated monthly principal and interest payments of $2,302.30. The maximum amount that the interest rate can rise under this program is 5 percentage points, to 8.375%. The
monthly estimated payment could rise from $2,302.30 to $3,467 in the eleventh year. If an escrow account is required or requested, the actual monthly payments will also include amounts for taxes and
homeowners insurance. Down payment used in this example is 20%.
7/1 Adjustable Rate Interest-Only Construction-Perm Mortgage
A $180,000 7/1 adjustable rate mortgage with interest only payments and an initial rate of 4.000% and an annual percentage rate of 4.671% and a construction term of 24 months would have 24 estimated
interest only payments of $300.00 and 60 estimated interest only payments of $600.00. The maximum amount that the interest rate can rise under this program is 5 percentage points, to 9.000%. The
monthly estimated payment could rise from $600.00 to $1,616.00 in the eleventh year. If an escrow account is required or requested, the actual monthly payments will also include amounts for taxes and
homeowners insurance. Down payment used in this example is 20%.
10/1 Adjustable Rate Conforming Mortgage
A $200,000 10/1 adjustable rate mortgage with an initial rate of 3.750% and an annual percentage rate of 3.5301% would have 120 estimated monthly principal and interest payments of $926.24. The
maximum amount that the interest rate can rise under this program is 5 percentage points, to 8.750%. The monthly payment could rise from $926.24 to $1,365.48 in the thirteenth year. If an escrow
account is required or requested, the actual monthly payments will also include amounts for taxes and homeowners insurance. Down payment used in this example is 20%.
10/1 Adjustable Rate Jumbo Mortgage
A $500,000 10/1 adjustable jumbo rate mortgage with an initial rate of 3.375% and an annual percentage rate of 3.958% would have 120 estimated monthly principal and interest payments of $2,210.49.
The maximum amount that the interest rate can rise under this program is 5 percentage points, to 8.375%. The monthly payment could rise from $2,210.49 to $3,277.00 in the thirteenth year. If an
escrow account is required or requested, the actual monthly payments will also include amounts for taxes and homeowners insurance. Down payment used in this example is 20%.
10/1 Adjustable Rate Interest-Only Conforming Mortgage
A $255,000 10/1 adjustable rate mortgage with interest only payments and an initial rate of 4.125% and an annual percentage rate of 4.602% would have 120 estimated interest only payments of $876.56.
The maximum amount that the interest rate can rise under this program is 5 percentage points, to 9.125%. The monthly payment could rise from $876.56 to $2290.00 in the thirteenth year. If an escrow
account is required or requested, the actual monthly payments will also include amounts for taxes and homeowners insurance. Down payment used in this example is 20%.
10/1 Adjustable Rate Interest-Only Jumbo Mortgage
A $500,000 10/1 adjustable rate interest only mortgage with an initial rate of 3.625% and an annual percentage rate of 4.154% would have 120 estimated monthly interest only payments of $1,510.42. The
maximum amount that the interest rate can rise under this program is 5 percentage points, to 8.625%. The monthly payment could rise from $1,510.42 to $4,330.00 in the thirteenth year. If an escrow
account is required or requested, the actual monthly payments will also include amounts for taxes and homeowners insurance. Down payment used in this example is 20%.
10/1 Adjustable Rate Construction-Perm Mortgage
A $200,000 10/1 adjustable rate mortgage with an initial rate of 4.00% and an annual percentage rate of 4.597% and a 24 month construction term would have 24 estimated interest only payments of
$333.33 and 96 estimated monthly principal and interest payments of $990.43. The maximum amount that the interest rate can rise under this program is 5 percentage points, to 9.00%. The monthly
estimated payment could rise from $990.43 to $1,455 in the thirteenth year. If an escrow account is required or requested, the actual monthly payments will also include amounts for taxes and
homeowners insurance. Down payment used in this example is 20%.
10/1 Adjustable Rate Jumbo Construction-Perm Mortgage
A $500,000 10/1 adjustable rate mortgage with an initial rate of 3.625% and an annual percentage rate of 4.150% and a 24 month construction term would have 24 estimated interest only payments of
$755.21 and 96 estimated monthly principal and interest payments of $2,370.99. The maximum amount that the interest rate can rise under this program is 5 percentage points, to 8.625%. The monthly
estimated payment could rise from $2,370.99 to $3,391 in the fourteenth year. If an escrow account is required or requested, the actual monthly payments will also include amounts for taxes and
homeowners insurance. Down payment used in this example is 20%.
10/1 Adjustable Rate Interest-Only Construction-Perm Mortgage
A $180,000 10/1 adjustable rate mortgage with interest only payments and an initial rate of 4.250% and an annual percentage rate of 4.700% and a construction term of 24 months would have 24 estimated
interest only payments of $318.75 and 96 estimated interest only payments of $637.50. The maximum amount that the interest rate can rise under this program is 5 percentage points, to 9.250%. The
estimated monthly payment could rise from $637.50 to $1,799.00 in the fourteenth year. If an escrow account is required or requested, the actual monthly payments will also include amounts for taxes
and homeowners insurance. Down payment used in this example is 20%.
15/1 Adjustable Rate Conforming Mortgage
A $200,000 15/1 adjustable rate conforming mortgage with an initial rate of 3.75% and an annual percentage rate of 3.779% would have 180 estimated monthly principal and interest payments of $926.24.
The maximum amount that the interest rate can rise under this program is 5 percentage points, to 8.75%. The monthly payment could rise from $926.24 to $1,273 in the sixteenth year. If an escrow
account is required or requested, the actual monthly payments will also include amounts for taxes and homeowners insurance. Down payment used in this example is 20%.
15/1 Adjustable Rate Jumbo Mortgage
A $560,000 15/1 adjustable jumbo rate mortgage with an initial rate of 3.625% and an annual percentage rate of 3.591% would have 180 estimated monthly principal and interest payments of $2,553.89.
The maximum amount that the interest rate can rise under this program is 5 percentage points, to 8.6250%. The monthly payment could rise from $2,553.89 to $3,514.00 in the sixteenth year. If an
escrow account is required or requested, the actual monthly payments will also include amounts for taxes and homeowners insurance. Down payment used in this example is 20%.
15-Year Fixed Rate Conforming Mortgage
A $180,000 15-year fixed rate conforming mortgage with a rate of 3.500% and an annual percentage rate of 3.680% would have 180 estimated monthly principal and interest payments of $1,286.79. If an
escrow account is required or requested, the actual monthly payments will also include amounts for taxes and homeowners insurance. Down payment used in this example is 20%.
15-Year Fixed Rate Jumbo Mortgage
A $500,000 15-year fixed rate jumbo mortgage with a rate of 3.500% and an annual percentage rate of 3.565% would have 180 estimated monthly principal and interest payments of $3,574.42. If an escrow
account is required or requested, the actual monthly payments will also include amounts for taxes and homeowners insurance. Down payment used in this example is 20%.
30-Year Fixed Rate Conforming Mortgage
A $180,000 30-year fixed rate conforming mortgage with a rate of 4.500% and an annual percentage rate of 4.606% would have 360 estimated monthly principal and interest payments of $912.04. If an
escrow account is required or requested, the actual monthly payments will also include amounts for taxes and homeowners insurance. Down payment used in this example is 20%.
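As a quick cross-check of the quoted figure, a short calculation using the standard amortization formula (a sketch, not the bank's own calculator):
loan, annual_rate, months = 180_000, 0.045, 360
r = annual_rate / 12
payment = loan * r / (1 - (1 + r) ** -months)
print(round(payment, 2))  # 912.04, matching the disclosed principal and interest payment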
30-Year Fixed Rate Jumbo Mortgage
A $500,000 30-year fixed rate jumbo mortgage with a rate of 4.500% and an annual percentage rate of 4.538% would have 360 estimated monthly principal and interest payments of $2,533.43. If an escrow
account is required or requested, the actual monthly payments will also include amounts for taxes and homeowners insurance. Down payment used in this example is 20%.
FHA 30-Year Fixed Rate Mortgage
A $163,200 fixed rate 30-year term loan with a rate of 3.500% and an annual percentage rate of 4.051% would have 103 estimated monthly principal and interest payments of $819.79 followed by 257
estimated monthly principal and interest payments of $745.64. Payment will decrease slightly as mortgage insurance premium is adjusted during the first 103 months. If an escrow account is required or
requested, the actual monthly payments will also include amounts for taxes and homeowners insurance. Down payment used in this example is 3.5%.
VA 30-Year Fixed Rate Mortgage
A $250,705 fixed rate 30-year term loan with a rate of 5.25% and an annual percentage rate of 5.424% would have 360 estimated monthly principal and interest payments of $1,384.41. If an escrow
account is required or requested, the actual payments will also include amounts for taxes and homeowner's insurance.
C# latin squares or shuffling a 2D array
Another problem in "Problems for Computer Solution" by
Fred Gruenberger and George Jaffray
concerns Latin Squares. Latin Squares are essentially two-dimensional arrays that are shuffled in both dimensions.
I believe an example is now in order:
If you start with a set of numbers:

1, 2, 3

Then shuffle them:

3, 1, 2

You then have an array shuffled in a single dimension, but if you have a two dimensional array where your set of numbers:

1, 2, 3

is repeated for each row:

1, 2, 3
1, 2, 3
1, 2, 3

A possible shuffling of this two dimensional array could look like this:

3, 1, 2
2, 3, 1
1, 2, 3
The numbers are now shuffled so that each row and column is different from every other row and column and - here is the most important part - no member of the set is repeated in a row or column.
One possible way to programmatically generate a Latin Square is to use this trick - Use the first row not only as data but as the rule on how to place the values for all the rest of the rows.
I believe another example is now in order:
If you start with our shuffled set of numbers:

3, 1, 2
Now look at the data and say:
In the first row the 2 is in the 3rd column.
In the second row the 2 is in the 1st column.
In the third row the 2 is in the 2nd column.
3rd column, 1st column, 2nd column - that's our first row! 3, 1, 2!
Now rotate the first row by taking the first value and placing it at the end - 1,2,3 - and do the same thing to the original data. So, let's look at the data and say:
In the first row the 3 is in the 1st column.
In the second row the 3 is in the 2nd column.
In the third row the 3 is in the 3rd column.
Rotate one last time - 2,3,1 - and do the same thing to the original data. So, let's look at the data and say one last time:
In the first row the 1 is in the 2nd column.
In the second row the 1 is in the 3rd column.
In the third row the 1 is in the 1st column.
So using the rules above we generated:

3, 1, 2
2, 3, 1
1, 2, 3
Here is a 9x9 array shuffled using the above trick:
And, here is the code that can generate Latin Squares using the above trick:
using System;
using System.Collections;

class LatinSquare
{
    static void Main()
    {
        var LatinSquare = new int[9, 9];
        var Pattern = new int[] { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
        Pattern = Shuffle(Pattern);

        //Use the first row as a pattern for the rest
        //of the Latin Square
        for (int i = 0; i < 9; i++)
        {
            LatinSquare[i, 0] = Pattern[i];
        }

        for (int x = 0; x < 9; x++)
        {
            for (int y = 1; y < 9; y++)
            {
                LatinSquare[Pattern[y] - 1, y] =
                    LatinSquare[LatinSquare[x, 0] - 1, 0];
            }
            Pattern = RotatePattern(Pattern);
        }

        PrintLatinSquare(LatinSquare);
    }

    private static T[] Shuffle<T>(T[] OriginalArray)
    {
        //Sort the elements by random keys; the SortedList ordering does the shuffle
        var matrix = new SortedList();
        var r = new Random();
        for (int x = 0; x <= OriginalArray.GetUpperBound(0); x++)
        {
            int i = r.Next();
            while (matrix.ContainsKey(i)) //retry on a (rare) duplicate key
            {
                i = r.Next();
            }
            matrix.Add(i, OriginalArray[x]);
        }
        var OutputArray = new T[OriginalArray.Length];
        var counter = 0;
        foreach (DictionaryEntry entry in matrix)
        {
            OutputArray[counter++] = (T)entry.Value;
        }
        return OutputArray;
    }

    private static int[] RotatePattern(int[] Pattern)
    {
        //Move the first value to the end, shifting everything else left
        int temp = Pattern[0];
        for (int y = 0; y < 9 - 1; y++)
        {
            Pattern[y] = Pattern[y + 1];
        }
        Pattern[9 - 1] = temp;
        return Pattern;
    }

    private static void PrintLatinSquare(int[,] LatinSquare)
    {
        //Print out the Latin Square
        for (int i = 0; i < 9; i++)
        {
            for (int j = 0; j < 9; j++)
            {
                Console.Write(LatinSquare[j, i].ToString().PadLeft(3));
            }
            Console.WriteLine();
        }
    }
}
The code is also available for download.
Time Series Regression VII: Forecasting
This example shows the basic setup for producing conditional and unconditional forecasts from multiple linear regression models. It is the seventh in a series of examples on time series regression,
following the presentation in previous examples.
Many regression models in economics are built for explanatory purposes, to understand the interrelationships among relevant economic factors. The structure of these models is usually suggested by
theory. Specification analysis compares various extensions and restrictions of the model to evaluate the contributions of individual predictors. Significance tests are especially important in these
analyses. The modeling goal is to achieve a well-specified, accurately calibrated description of important dependencies. A reliable explanatory model might be used to inform planning and policy
decisions by identifying factors to be considered in more qualitative analyses.
Regression models are also used for quantitative forecasting. These models are typically built from an initial set (perhaps empty, perhaps quite large) of potentially relevant predictors. Exploratory
data analysis and predictor selection techniques are especially important in these analyses. The modeling goal, in this case, is to accurately predict the future. A reliable forecasting model might
be used to identify risk factors involved in investment decisions and their relationship to critical outcomes like future default rates.
It is important, in practice, to distinguish the type of regression model under study. If a forecasting model is built through exploratory analysis, its overall predictive capability can be
evaluated, but not the significance of individual predictors. In particular, it is misleading to use the same data to construct a model and then to draw inferences about its components.
This example focuses on forecasting methods for multiple linear regression (MLR) models. The methods are inherently multivariate, predicting the response in terms of past and present values of the
predictor variables. As such, the methods are essentially different from the minimum mean squared error (MMSE) methods used in univariate modeling, where forecasts are based on the self-history of a
single series.
We begin by loading relevant data from the previous example Time Series Regression VI: Residual Diagnostics:
Conditional Forecasting
Regression models describe the response produced by, or conditional on, associated values of the predictor variables. If a model has successfully captured the essential dynamics of a data-generating
process (DGP), it can be used to explore contingency scenarios where predictor data is postulated rather than observed.
Models considered in this series of examples have been calibrated and tested using predictor data X0, measured at time t, and response data y0, measured at time t + 1. The time shift in the data
means that these models provide one-step-ahead point forecasts of the response, conditional on the predictors.
To forecast further into the future, the only adjustment necessary is to estimate the model with larger shifts in the data. For example, to forecast two steps ahead, response data measured at time t
+ 2 (y0(2:end)) could be regressed on predictor data measured at time t (X0(1:end-1)). Of course, previous model analyses would have to be revisited to assure reliability.
To illustrate, we use the M0 model to produce a conditional point forecast of the default rate in 2006, given new data on the predictors in 2005 provided in the variable X2005:
betaHat0 = M0.Coefficients.Estimate;
yHat0 = [1,X2005]*betaHat0;
D = dates(end);
Xm = min([X0(:);X2005']);
XM = max([X0(:);X2005']);
hold on
fill([D D D+1 D+1],[Xm XM XM Xm],'b','FaceAlpha',0.1)
hold off
ylabel('Predictor Level')
title('{\bf New Predictor Data}')
axis tight
grid on
Ym = min([y0;yHat0]);
YM = max([y0;yHat0]);
hold on
fill([D D D+1 D+1],[Ym YM YM Ym],'b','FaceAlpha',0.1)
hold off
ylabel('Response Level')
title('{\bf Forecast Response}')
axis tight
grid on
We see that the SPR risk factor held approximately constant from 2004 to 2005, while modest decreases in the AGE and BBB risk factors were offset by a drop in CPF. CPF has a negative model
coefficient, so the drop is associated with increased risk. The net result is a forecast jump in the default rate.
Unconditional Forecasting
In the absence of new predictor data (either measured or postulated), an unconditional forecast of the response may be desired.
One way to do this is to create a dynamic, univariate model of the response, such as an ARIMA model, independent of the predictors. ARIMA models depend on the existence of autocorrelations in the
series from one time period to the next, which the model can exploit for forecasting purposes. ARIMA models are discussed elsewhere in the documentation.
Alternatively, a dynamic, multivariate model of the predictors can be built. This allows new values of the predictors to be forecast rather than observed. The regression model can then be used to
forecast the response, conditional on the forecast of the predictors.
Robust multivariate forecasts are produced by vector autoregressive (VAR) models. A VAR model makes no structural assumptions about the form of the relationships among model variables. It posits only
that every variable potentially influences every other. A system of dynamic regression equations is formed, with each variable appearing on the left-hand side of one equation, and the same lagged
values of all of the variables, and possibly an intercept, appearing on the right-hand sides of all of the equations. The idea is to let the regression sort out which terms are actually significant.
For example, a VAR(3) model for the predictors in the default rate model would look like this:
$AGE_t = a_1 + \sum_{i=1}^{3} b_{11i} AGE_{t-i} + \sum_{i=1}^{3} b_{12i} BBB_{t-i} + \sum_{i=1}^{3} b_{13i} CPF_{t-i} + \sum_{i=1}^{3} b_{14i} SPR_{t-i} + \epsilon_{1t}$

$BBB_t = a_2 + \sum_{i=1}^{3} b_{21i} AGE_{t-i} + \sum_{i=1}^{3} b_{22i} BBB_{t-i} + \sum_{i=1}^{3} b_{23i} CPF_{t-i} + \sum_{i=1}^{3} b_{24i} SPR_{t-i} + \epsilon_{2t}$

$CPF_t = a_3 + \sum_{i=1}^{3} b_{31i} AGE_{t-i} + \sum_{i=1}^{3} b_{32i} BBB_{t-i} + \sum_{i=1}^{3} b_{33i} CPF_{t-i} + \sum_{i=1}^{3} b_{34i} SPR_{t-i} + \epsilon_{3t}$

$SPR_t = a_4 + \sum_{i=1}^{3} b_{41i} AGE_{t-i} + \sum_{i=1}^{3} b_{42i} BBB_{t-i} + \sum_{i=1}^{3} b_{43i} CPF_{t-i} + \sum_{i=1}^{3} b_{44i} SPR_{t-i} + \epsilon_{4t}$
The number of coefficients in the model is the number of variables times the number of autoregressive lags times the number of equations, plus the number of intercepts. Even with only a few
variables, a model with a well-specified lag structure can grow quickly to a size that is untenable for estimation using small data samples.
Equation-by-equation OLS estimation performs well with VAR models, since each equation has the same regressors. This is true regardless of any cross-equation covariances that may be present in the
innovations. Moreover, purely autoregressive estimation is numerically very stable.
The numerical stability of the estimates, however, relies on the stationarity of the variables being modeled. Differenced, stationary predictor variables lead to reliable forecasts of the
differences. However, undifferenced predictor data may be required to forecast the response from the regression model. Integrating forecast differences has the potential to produce distorted forecast
levels (see, for example, [2]). Nevertheless, the standard recommendation is to use stationary variables in the VAR, assuming that a short horizon will produce minimal reintegration errors.
VAR estimation and forecasting are carried out by the functions estimate and forecast. The following produces an unconditional point forecast of the default rate in 2006 from the M0 regression model:
% Estimate a VAR(1) model for the differenced predictors (with
% undifferenced |AGE|):
numLags = 1;
D1X0PreSample = D1X0(1:numLags,:);
D1X0Sample = D1X0(numLags+1:end,:);
numPreds0 = numParams0-1;
VARMdl = varm(numPreds0,numLags);
EstMdl = estimate(VARMdl,D1X0Sample,'Y0',D1X0PreSample);
% Forecast the predictors in D1X0:
horizon = 1;
ForecastD1X0 = forecast(EstMdl,horizon,D1X0);
% Integrate the differenced forecast to obtain the undifferenced forecast:
ForecastX0(1) = ForecastD1X0(1); % AGE
ForecastX0(2:4) = X0(end,2:4)+ForecastD1X0(2:4); % Other predictors
Xm = min([X0(:);ForecastX0(:)]);
XM = max([X0(:);ForecastX0(:)]);
hold on
fill([D D D+1 D+1],[Xm XM XM Xm],'b','FaceAlpha',0.1)
hold off
ylabel('Predictor Level')
title('{\bf Forecast Predictors}')
axis tight
grid on
% Forecast the response from the regression model:
ForecastY0 = [1,ForecastX0]*betaHat0;
Ym = min([y0;ForecastY0]);
YM = max([y0;ForecastY0]);
hold on
fill([D D D+1 D+1],[Ym YM YM Ym],'b','FaceAlpha',0.1)
hold off
ylabel('Response Level')
title('{\bf Forecast Response}')
axis tight
grid on
The result is an unconditional forecast that is similar to the conditional forecast made with actual 2005 data. The forecast depends on the number of lags used in the VAR model, numLags. The issue of
choosing an appropriate lag length is addressed in the example Time Series Regression IX: Lag Order Selection.
The forecast generated by forecast is nonstochastic, in the sense that it uses zero-valued innovations outside of the sample. To generate a stochastic forecast, with specific structure in the
innovations, use simulate or filter.
Forecast Error
Regardless of how new predictor data is acquired, forecasts from MLR models will contain errors. This is because MLR models, by their nature, forecast only expected values of the response. For
example, the MLR model
$y_t = X_t \beta + e_t,$

forecasts $y_{t+1}$ using

$\hat{y}_{t+1} = E[y_{t+1}] = X_{t+1} \hat{\beta}.$
Errors occur for two reasons:
• The forecast does not incorporate the innovation $e_{t+1}$.

• Sampling error produces a $\hat{\beta}$ that is different from $\beta$.
As discussed in the example Time Series Regression II: Collinearity and Estimator Variance, the forecast error $\hat{y}_{t+1} - y_{t+1}$ is reduced if
• The sample size is larger.

• The variation of the predictors is larger.

• $X_{t+1}$ is closer to its mean value.
The last item says that forecasts are improved when they are closer to the center of the distribution of sample values used to estimate the model. This leads to interval forecasts of nonconstant width.
Assuming normal, homoscedastic innovations, point forecasts can be converted to $N(y_{t+1} | X_t, \sigma^2)$ density and interval forecasts using standard formulas (see, for
example, [1]). As discussed in the example Time Series Regression VI: Residual Diagnostics, however, standard formulas become biased and inefficient in the presence of autocorrelated or
heteroscedastic innovations. In such situations, interval forecasts can be simulated using an appropriate series of innovations, but it is often recommended that a model be respecified to standardize
the innovations as much as possible.
It is common to hold back a portion of the data for forecast evaluation, and estimate the model with an initial subsample. A basic performance test compares the root mean square error (RMSE) of
out-of-subsample forecasts to the RMSE of a simple, baseline forecast that holds the last in-sample value of the response constant. If the model forecast does not significantly improve on the
baseline forecast, then it is reasonable to suspect that the model has not abstracted the relevant economic forces in the DGP.
For example, the following tests the performance of the M0 model:
numTest = 3; % Number of observations held out for testing
% Training model:
X0Train = X0(1:end-numTest,:);
y0Train = y0(1:end-numTest);
M0Train = fitlm(X0Train,y0Train);
% Test set:
X0Test = X0(end-numTest+1:end,:);
y0Test = y0(end-numTest+1:end);
% Forecast errors:
y0Pred = predict(M0Train,X0Test);
DiffPred = y0Pred-y0Test;
DiffBase = y0Pred-y0(end-numTest);
% Forecast comparison:
RMSEPred = sqrt((DiffPred'*DiffPred)/numTest)
RMSEBase = sqrt((DiffBase'*DiffBase)/numTest)
The model forecast does show improvement relative to the baseline forecast. It is useful, however, to repeat the test with various values of numTest. This is complicated by the influential
observation in 2001, three observations before the end of the data.
If a model passes the baseline test, it can be re-estimated with the full sample, as in M0. The test helps to distinguish the fit of a model from its ability to capture the dynamics of the DGP.
To generate new response values from a regression model, new values of the predictors are required. When new predictor values are postulated or observed, response data is extrapolated using the
regression equation. For unconditional extrapolation, new predictor values must be forecast, as with a VAR model. The quality of predictions depends on both the in-sample fit of the model, and on the
model's fidelity to the DGP.
The basic assumption of any forecasting model is that the economic data patterns described by the model will continue into the future. This is an assumption about the stability of the DGP. The social
mechanisms driving economic processes, however, are never stable. The value of a forecasting model, especially one built by exploratory data analysis, can be short-lived. A basis in sound economic
theory will improve the longevity of a model, but the volatile nature of the forecasting process must be acknowledged. This uncertainty is captured, to some degree, in models of the forecast error.
Econometric practice has shown that simple forecasting models often perform the best.
[1] Diebold, F. X. Elements of Forecasting. Mason, OH: Thomson Higher Education, 2007.
[2] Granger, C., and P. Newbold. "Forecasting Transformed Series." Journal of the Royal Statistical Society. Series B, Vol. 38, 1976, pp. 189–203.
If #f(x) = tan^24x# and #g(x) = sqrt(5x-1)#, what is #f'(g(x))#?
Answer 1
$f ' \left(g \left(x\right)\right) = 24 {\tan}^{23} \left(\sqrt{5 x - 1}\right) \times {\sec}^{2} \left(\sqrt{5 x - 1}\right)$
As #f(x)=tan^24x# and #g(x)=sqrt(5x-1)#
As #f(x)=tan^24x#
#(df)/(dx)=f'(x)=24tan^23x xx sec^2x#
and #f'(g(x))=24tan^23(sqrt(5x-1)) xx sec^2(sqrt(5x-1))#
Answer 2
Reqd. Deri. $= \left(\frac{48}{5}\right) {\tan}^{23} x \cdot {\sec}^{2} x \cdot \sqrt{5 x - 1} .$
#f(x)=tan^24x, g(x)=sqrt(5x-1)=t,# say.
Now Reqd. Deri. #=f'(g(x))=f'(t)=d/dt{f(t)}=(df)/dt#
We see that #f# is a fun. of #x#, and #x# of #t#. Hence,
reqd. Deri. #=(df)/dt=(df)/dx*dx/dt..........(1)#
Now, #f(x)=tan^24x rArr (df)/dx=24tan^23x*d/dxtanx=24tan^23x*sec^2x#.....(2)
Next, #t=sqrt(5x-1) rArr dt/dx=1/(2sqrt(5x-1))*d/dx(5x-1)=5/(2sqrt(5x-1))# #rArr dx/dt=(2sqrt(5x-1))/5....................(3)#
From #(1),(2),(3)#, Reqd. Deri. #=24tan^23x*sec^2x*(2sqrt(5x-1))/5#, #=(48/5)tan^23x*sec^2x*sqrt(5x-1).#
Answer 3
To find ( f'(g(x)) ), we first find ( f'(x) ) and then substitute ( g(x) ) into it.
Given ( f(x) = \tan^2(4x) ) and ( g(x) = \sqrt{5x - 1} ):
1. Find ( f'(x) ): ( f'(x) = \frac{d}{dx}[\tan^2(4x)] ) ( f'(x) = 2\tan(4x) \sec^2(4x) \cdot 4 )
2. Substitute ( g(x) ) into ( f'(x) ): ( f'(g(x)) = 2\tan(4\sqrt{5x-1}) \sec^2(4\sqrt{5x-1}) \cdot 4 )
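The answers above read the expression differently (tan^24(x) in Answers 1 and 2, tan^2(4x) in Answer 3). Under the first reading, Answer 1's result can be checked symbolically; a quick SymPy sketch (not from the original answers):

# Symbolic check of Answer 1 under the f(x) = tan(x)**24 reading.
import sympy as sp

x = sp.symbols('x')
f = sp.tan(x)**24
g = sp.sqrt(5*x - 1)

f_prime = sp.diff(f, x)            # 24*tan(x)**23*(tan(x)**2 + 1)
f_prime_of_g = f_prime.subs(x, g)  # f'(g(x)): f' evaluated at g(x)

claimed = 24 * sp.tan(g)**23 / sp.cos(g)**2  # 24 tan^23(sqrt(5x-1)) sec^2(sqrt(5x-1))
print(sp.simplify(f_prime_of_g - claimed))   # 0 -> the two expressions agree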
Quant - Work problem
Bea can paint a house three times faster than Alice can paint a house. If, working together, it takes Alice and Bea 24 hours to paint a house, then how many hours will it take Bea to paint a house alone?
A) 28
B) 30
C) 32
D) 36
E) 40
The correct answer is C. I tried many ways, but I don't really get the right answer.
Maybe someone can help
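One way to see it: if Alice's rate is a houses per hour, Bea's is 3a, so together 4a = 1/24 and Bea alone takes 32 hours. A quick check with exact fractions (a sketch, not from the original thread):

# Work-rate check: Bea paints 3x as fast as Alice; together they take 24 hours.
from fractions import Fraction

combined = Fraction(1, 24)  # houses per hour working together
alice = combined / 4        # a + 3a = 4a = 1/24
bea = 3 * alice             # Bea's rate is 1/32 house per hour
print(1 / bea)              # 32 -> answer C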
1.8 C) Overtime – Financial Maths – AQA GCSE Maths Foundation
Sometimes individuals who are paid hourly can get paid more money per hour for working above a certain number of hours, or for working antisocial hours (such as nights). This is known as overtime.
Sometimes you will be given the overtime as a multiple of normal pay; for example, "overtime pay is 1.3 times normal pay". Other times, you will be given the difference in overtime pay to normal pay as a percentage or a fraction; for example, "overtime pay is 40% more than normal pay".
Whenever we are working out wages when we have normal and overtime pay, we need to be careful that we are multiplying the number of hours worked at each pay rate by their respective pay rate.
Example 1
An individual gets paid £6 per hour. For working overtime, our individual gets paid a third more than their normal pay. What is the overtime hourly wage?
We are told that the individual gets paid a third more than their normal pay for working overtime. Therefore, we find what a third of their normal wage is and add the third onto the normal wage.
Their normal wage is £6 and we find out what a third of £6 is by either multiplying by a third or dividing by 3.
A third of normal pay is £2. We now add this (£2) onto the normal rate of pay (£6) to find the overtime rate of pay.
Therefore, our individual gets paid £8 per hour of overtime.
Example 2
An individual works a 13-hour shift. His normal rate of pay is £12 per hour. However, if the individual works over 8 hours, he earns 40% more for each additional hour above 8 hours worked. How much
money does our individual get paid for the 13-hour shift? This question is non-calculator.
The first step in this question is to work out the number of hours that our individual works at a normal wage and the number of hours that our individual works at an overtime wage. The question tells
us that the first 8 hours of a shift is paid at a normal wage and any additional hours over 8 hours receives the overtime rate of pay. Our individual is working for 13 hours, which means that he will
receive 8 hours at normal pay and 5 hours at the overtime pay (13 - 8 = 5).
The question tells us that the normal rate of pay is £12 and that he earns 40% more for overtime. The question is non-calculator. Therefore, the best way to find out what 40% of £12 is, is to find
out what 10% is and then multiply it by 4 to get what 40% is. We find what 10% of £12 is by dividing by 10.
We want to know what 40% is so we multiply what 10% is by 4.
The final step to find out the overtime pay is to add this 40% onto the normal rate of pay.
Therefore, our individual gets paid £16.80 for every additional hour worked over 8 hours.
We now have everything that we need to find out how much our individual earns from this 13-hour shift.
When we are multiplying by 5, you may find it easier to multiply by 10 and then half the results. When we multiply £16.80 by 10, we get £168. We then need to divide by 2, which gives us £84.
Therefore, our worker earns £180 for his 13-hour shift.
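This calculation is easy to check in code. A small Python sketch with the numbers from Example 2 (illustrative only):

# Example 2: 13-hour shift; first 8 hours at normal pay, extra hours at 40% more.
normal_rate = 12.00
overtime_rate = normal_rate * 1.40  # 40% more than normal pay -> 16.80
hours, threshold = 13, 8

normal_hours = min(hours, threshold)
overtime_hours = max(hours - threshold, 0)
pay = normal_hours * normal_rate + overtime_hours * overtime_rate
print(pay)  # 180.0 -> the worker earns £180 for the 13-hour shift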
Example 3
I work at a restaurant for 10 hours. Out of these 10 hours, 7 hours are at the normal rate of pay and 3 hours are at the overtime rate of pay. Overtime pay is 1.3 times the normal rate of pay. I earn
£105.73 for working for 10 hours. What is the normal rate of pay?
The first step in answering this question is to let the normal rate of pay be an unknown and the unknown that I am going to choose is w. We are told in the question that the overtime rate of pay is
1.3 times the normal rate of pay. As we have just let the normal rate of pay be w, we can find the overtime rate of pay by multiplying the normal rate of pay (w) by 1.3; this results in the overtime
rate of pay being 1.3w.
We are now able to sub in the values that we know into the equation below. We know the normal hours worked (7), the overtime hours worked (3 [10 - 7 = 3]) and both the normal and overtime rate of
pay. I am going to ignore units as it makes the working easier to understand.
We can multiply these out and collect like terms.
We want to find what w is and not what 10.9w is. Therefore, we need to divide both sides of the equation by 10.9.
Therefore, we can see that the normal rate of pay is £9.70.
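The same equation can be checked numerically; a short sketch with Example 3's numbers (illustrative only):

# Example 3: 7 hours at rate w plus 3 hours at 1.3w earns £105.73 in total.
# 7w + 3(1.3w) = 10.9w = 105.73, so w = 105.73 / 10.9
total_pay = 105.73
coefficient = 7 + 3 * 1.3  # 10.9
w = total_pay / coefficient
print(round(w, 2))  # 9.7 -> the normal rate of pay is £9.70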
Extra Points
It may be the case that in a question like this you are given the overtime rate of pay as a percentage increase on the normal rate of pay; for example, "overtime pay is 60% more than normal pay". If
this is the case, you should turn the percentage into a multiplier.
For the above example, we are told that overtime pay is 60% higher than normal pay. Normal pay is 100% and overtime pay is 60% higher. Therefore, we add 60% to 100%, which means that overtime pay is 160%.
The next step is to convert 160% into a multiplier, which we do by dividing by 100.
From the working above, you can see that overtime pay is 1.6 times normal pay. If we let w be normal pay, overtime pay would be 1.6w (1.6 x w).
The Globe and Mail
The Isaac Newton of logic

It was 150 years ago that George Boole published his classic The Laws of Thought, in which he outlined concepts that form the underpinnings of the modern high-speed computer. SIOBHAN ROBERTS chronicles the man and his method
By SIOBHAN ROBERTS
Saturday, March 27, 2004 - Page F9
There is nothing more ubiquitous these days than the computer, the thinking machine that has hardwired itself to our lives.
A quick Google search of "history of the computer" yields the website http://www.computerhistory.org, which pegs the computer's invention to 1945. That
year, John von Neumann, a Hungarian-born mathematician at Princeton, wrote his "First Draft of a Report on the EDVAC" (the Electronic Discrete Variable
Automatic Computer).
In his report, Von Neumann outlined the architecture of a stored-program digital computer, an ancestor of most computers in use today. (Also that year,
Grace Hopper, an admiral in the U.S. Navy, recorded the first computer "bug" -- a moth stuck between the relays of a pre-digital computer.)
But the existence of both the computer and Google can be traced to a much earlier date.
It was 150 years ago that George Boole published his literary classic The Laws of Thought, wherein he devised a mathematical language for dealing with
mental machinations of logic. It was a symbolic language of thought -- an algebra of logic (algebra is the branch of mathematics that uses letters and
other general symbols to represent numbers and quantities in formulas and equations).
In doing so, he provided the raw material needed for the design of the modern high-speed computer. His concepts, developed over the past century by other
mathematicians but still known as "Boolean algebra," form the underpinnings of computer hardware, driving the circuits on computer chips. And, at a much
higher level in the brain stem of computers, Boolean algebra operates the software of search engines such as Google.
"Boole was the first cognitive scientist," says Keith Devlin, executive director of the Center for the Study of Language and Information at Stanford
Dr. Devlin's work attempts to take Boole's concepts -- the mathematics of human thought -- and apply them to human communication. "I'm trying to take it
one step further and it's damn hard," he says. "Boole was bold and successful, and that was a mixture of genius and good luck."
How Boolean logic works isn't very difficult, or so the experts such as Dr. Devlin profess.
The most basic and tangible example is the machinations of Boolean searches, which operate on three logical operators: and, or, not.
Algebra gets factored in to this logical equation when Boole designates a multiplication sign (x) to represent "and," an addition sign (+) to represent
"or," and a subtraction sign (-) to represent "not."
For example, in a Boolean search with the terms "Martin and sponsorship," the "and" logic collates the search results to retrieve all records with both
terms. "Or" logic collates results to retrieve all the records containing one term, the other or both. "Not" logic excludes records from your search
The same "and" gates and "or" gates drive computer circuitry, with streams of electrons performing Boole's algebraic operations -- a computer's bits and
bytes operate on the binary system, as does Boole's algebra. He employs the number 1 to represent the universal class of everything (or true) and 0 to
represent the class of nothing (false).
But rather than delving any deeper in Boole's algebra (which now may seem not so simple; consult the sidebar if you're still curious), it would be
logical to examine instead the historical context in which his invention had such an impact.
"Boole's primary contribution was in showing that logic could be conceived of in a radically different way," says Jim Van Evra, an associate professor of
philosophy at the University of Waterloo.
As Prof. Van Evra chronicles in an article to be published in the Biographical Dictionary of Nineteenth Century British Scientists, logic was considered
a dead subject from the 17th to the 19th century. It was criticized as being superfluous, a device that merely stirred the pot of knowledge already at
hand. In England during the early 19th century, however, that perception began to change. Logic began to develop into a serious science.
Boole was born in Lincoln, England, in 1815, the eldest son of a poor shoemaker who also had a passion for mathematics. He was a precocious child. His
mother boasted that young George, 18 months, wandered out of the house and was found in the centre of town, spelling words for money.
Boole was fluent in Latin and Greek by the time he was 12, and subsequently self-taught in French, German, Italian and Spanish. He became the sole
support for his family (as a teacher) at the age of 16, when his father's business failed.
Having Cambridge University close at hand, he consulted the elite mathematicians of the day. They invited him to attend as a student, but he could not
afford the time or money.
"Everything he did was from his own mind. That's why he was such a trailblazer," says Desmond Mac-Hale, author of George Boole: His Life and Works and an
associate professor of mathematics at University College, Cork. "Had he gone down the standard path of schooling, he might not have hit upon such major
Cambridge mathematicians, still keen to encourage Boole, provided him access to the mathematical library. And he succeeded in publishing several papers
in the Cambridge Mathematical Journal -- one of which, published in 1844, was awarded the first-ever gold medal from London's Royal Society for a paper
in mathematics.
And though Boole was never offered a position at Cambridge, the university asked for his well-regarded opinion about whom they should hire when they were
seeking a new professor of mathematics.
In 1849, he became the founding professor of mathematics at Queen's College (now University College). In 1855, he married Mary Everest (niece of Sir
George Everest, for whom the mountain is named) and they raised five daughters in Ireland not long after the potato famine.
He was also a very religious man. According to Prof. MacHale, all evidence points to Boole's faith as Unitarian -- believing in God as one, not the
Trinity, which meshes with the prominent position he gave the number one in his work. "It's my feeling that his motivation with his logic was religious,"
he says. "He believed that the human mind was the greatest of God's creations."
Prof. MacHale also notes that subsequent to The Laws of Thought, Boole undertook to rewrite the Bible in his mathematical logic. "He was slightly out of touch with reality," he says. "It was a foolhardy project and it caused him a great deal of torment because he could never accomplish it."

One anecdote about Boole's life that comes to the mind of Geoffrey Hinton, a computer-science professor at the University of Toronto and his great-great-grandson, was the way the mathematician died.

A devoted professor to his detriment, he walked the four miles one day from his house to the college in a rainstorm. Soaking wet, he lectured all day and subsequently died of pneumonia.
As Prof. Hinton tells it, "He was killed by homeopathy. His wife wrapped him in wet sheets, thinking what caused the pneumonia would cure it."
(Tangentially, Prof. Hinton is quick to mention that his other great-great-grandfather was also famous and ahead of his time -- James Hinton founded the
first Victorian sex cult, advocating women should have fun while having sex, and profoundly influenced the work of sexologist and psychologist Havelock Ellis.)
With his PhD in artificial intelligence, it might appear that Prof. Hinton followed after Boole. But in fact, he says, "I'm entirely on the other side."
The field of artificial intelligence, in its early years circa 1950-60, was committed to the Boolean idea that symbols effectively represent human
reasoning. Since the eighties, however, artificial intelligence has come to see human reasoning as not purely logical. Rather, it is more about what is
intuitively plausible. "Boole thought the human brain worked like a pocket calculator or a standard computer," Prof. Hinton says. "I think we're more
like rats."
Despite the fact that he is universally admired, Boole does have his detractors. "People have their own heroes and they serve their heroes by being
critical of Boole," says John Corcoran, a professor of the history and philosophy of logic at the University of Buffalo.
"[Gottlob] Frege is the main hero whose worshippers denigrate Boole," he says, adding that there are five giants of logic: Aristotle, Boole, Frege, Kurt
Godel and Alfred Tarski. "Perhaps a few worshippers of Tarski or Godel will occasionally take a swipe at Boole in order to show how 'advanced' they are.
Many of the Boole-bashers are people dedicated to proving that new ideas are always better than old. Many of the Boole worshippers are also people
dedicated to proving that new ideas are always better than old, but they do not realize how old Boole's ideas really are."
Prof. Corcoran, of course, falls into the class of Boole worshippers. But not beyond all reason.
"There are major flaws in Boole's work that have come to light over the years. It's been said that Boolean algebra isn't Boole's algebra -- it's the
modern refinement of Boole's work."
With the advantage of hindsight on the occasion of the sesquicentennial of the publication of The Laws of Thought, the imperfections in his work go
undisputed; the analogy Boole drew between algebra and logic was not a perfect fit.
Prof. Corcoran addresses one flaw in a paper titled, Boole's Solutions Fallacy. "Boole did not recognize the difference between the consequences of an
equation and the solution of an equation," he says. "This mistake might seem like a technicality, but it mars a lot of Boole's thinking."
Nonetheless, Prof. Corcoran chooses to focus on Boole's positive contribution. "Boole's book is really a classic of literature," he says. "He brought
about a revolutionary paradigm shift that dramatically changed the nature of logic. He thought he was the Isaac Newton of logic, and he was."
Even Boole, dying at just 49, was well aware that The Laws of Thought would give him a lasting reputation. In a letter penned while his book was still in
progress, he betrayed what Prof. MacHale calls an uncharacteristic lack of modesty: "I am now about to set seriously to work upon preparing for the press
an account of my theory of Logic and Probabilities, which in its present state I look upon as the most valuable, if not the only valuable contribution
that I have made or am likely to make to Science and the thing by which I would desire if at all to be remembered hereafter."
Siobhan Roberts is a Toronto writer whose biography of geometer Donald Coxeter will be published by Penguin in 2005.
An idiot's guide
The following is a bit of an idiot's guide to Boolean algebra (for something more sophisticated, see John Corcoran's introduction to the latest edition
of The Laws of Thought, published by Prometheus Books, 2003).
The gist of George Boole's idea was to reduce logical thought to the mathematics taught in an elementary algebra class. He showed how the numbers 1 and 0
and the standard mathematical operations could be hijacked to perform logical reasoning -- operations such as addition, multiplication and methods for
solving equations formed his symbolic language of thought.
Boole wanted his algebra of thought to include what is called the logic of classes, which expanded on Aristotle's logic (the famous "All men are mortal"
syllogisms). And he wanted his method to encompass the logic of propositions, based on logical work originating with the Stoics.
He employed the symbols x, y, z, etc. to denote arbitrary collections of objects -- the collection of all men, the collection of all documents with the
word "Boole," and so on -- and with the number 1 representing the set of everything and 0 representing the set of nothing.
He then explained how performing algebra with the symbols corresponded to performing logical deductions.
In conducting a Boolean search, for example, an "and" operator (or a multiplication sign -- x) between two words or other values (for example, "pear and
apple") means one is searching for documents containing both of the words, not just one of them. An "or" operator (an addition sign -- +) between two
words or other values (for example, "pear or apple") means one is searching for documents containing at least one of the words, not necessarily both.
In computers based on binary operations, Boolean logic is used to describe electromagnetically charged memory locations or circuit states that are either
charged (1, or true) or not charged (0, or false). The computer can use an "and" gate or an "or" gate operation to obtain a result that can be used for
further processing.
Boole's logic of propositions, similarly, is used to derive the truth-value of a complicated proposition from the truth-values of simpler propositions.
An example might be the predicament of then-finance minister Paul Martin when the sponsorship debacle was underfoot: Suppose, for example, we want to
contemplate the proposition that Mr. Martin knew about the scandalous sponsorship slush fund "and" did nothing about it.
We first assign a value of 1 or 0 to the first proposition: Mr. Martin knew about the slush fund. That is, we compute the truth-value: 1 for true, or 0
for false.
Then we assign a value of 1 or 0 to the second proposition: Mr. Martin did nothing about it: again, 1 for true, 0 for false.
Boolean logic tells us to multiply these two truth-values together to get the truth-value of the whole, compound proposition. One possibility being, 1 x
1=1 = True: Mr. Martin knew about the sponsorship slush fund and did nothing about it.
The Prime Minister is saved from culpability for the disappeared hundreds of millions if either proposition elicits a zero.
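The sidebar's arithmetic translates directly into a few lines of code. A minimal sketch (the truth-values below are illustrative):

# Boole's 0/1 arithmetic for the sidebar's propositions.
knew = 1         # proposition 1: Mr. Martin knew (1 = true, 0 = false)
did_nothing = 1  # proposition 2: he did nothing about it

both = knew * did_nothing            # "and" is multiplication: 1 * 1 = 1 -> true
either = min(knew + did_nothing, 1)  # "or" is addition, capped at 1
not_knew = 1 - knew                  # "not" is subtraction from 1

print(both, either, not_knew)  # 1 1 0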
In cosmological thermodynamics, black hole thermodynamics is the study of the behaviors of black holes, according to measurements of quantities such as temperature, radiation, energy, entropy, etc.,
from a thermodynamics system point of view. A noted researcher in this field is American physicist Robert Wald. [2]
The term "black hole" was coined in 1967 by American theoretical physicist John Wheeler. [3] The first ideas on the thermodynamical understanding of black holes trace to early 1970s discussions
between Wheeler, the coiner of the term "black hole", and his graduate student Mexican-born Jewish physicist Jacob Bekenstein. In 1971, Wheeler pointed out to Bekenstein that black holes seem to
flout the second law of thermodynamics. [1] In 1972, to remedy this issue, Bekenstein suggested that black holes should have a well-defined entropy. On this premise, Bekenstein formulated the
generalized second law of thermodynamics, which states that:
"The sum of black-hole entropy and ordinary entropy outside a black hole never decreases."
To find and measure this "black hole entropy", Bekenstein reasoned that, because of the effect that the massive gravity of black holes pulls light, energy, and matter into its body, according to
German-born American Albert Einstein's mass-energy relation E=mc², a black hole's entropy increase must be proportional or related to its surface area. Two years later, in 1974, this postulate was
confirmed when British astrophysicist Stephen Hawking discovered that black holes radiate energy, now called Hawking radiation, and hence they must have a correlative temperature and thus an entropy
(black hole entropy). [2]
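In modern notation this area-entropy relation is written as the Bekenstein-Hawking formula (the exact constant is not given in the sources cited here):

S = kc³A / (4Għ)

where A is the surface area of the event horizon, k is Boltzmann's constant, c the speed of light, G the gravitational constant, and ħ the reduced Planck constant.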
In 1992, American particle physicist Steven Weinberg stated the following about black holes and thermodynamics: [4]
"Thermodynamics applies to black holes, not because they contain a large number of atoms, but because they contain a large number of fundamental mass units of the quantum theory of gravitation, equal to about one hundred-thousandth of a gram and known as the Planck mass. It would not be possible to apply thermodynamics to a black hole that weighed less than a hundred-thousandth of a gram."
1. Wald, Robert M. (1994). Quantum Field Theory in Curved Spacetime and Blackhole Thermodynamics. Chicago: The University of Chicago Press.
2. Baeyer, Hans Christian von. (2004). Information - the New Language of Science. Cambridge, (pgs. 205-11). Massachusetts: Harvard University Press.
3. Fabbri, Alessandro and Navarro-Salas, Jose. (2005). Modeling Black Hole Evaporation (pg. 17). Imperial College Press.
4. Weinberg, Steven. (1992). Dreams of a Final Theory: the Scientistโs Search for the Ultimate Laws of Nature (pg. 286). Random House.
Further reading
● Jacobson, Ted. (1996). "Introductory Lectures on Black Hole Thermodynamics" (PDF), (70 pgs). Institute for Theoretical Physics University of Utrecht.
● Wald, Robert M. (2001). "The Thermodynamics of Black Holes" (PDF), (40 pgs). July 09, Living Reviews in Relativity, Max Planck Institute for Gravitational Physics.
● Machamer, Peter K. (2002). The Blackwell Guide to the Philosophy of Science (section: black hole thermodynamics, pg. 189). Wiley-Blackwell.
● Chakraborty, Subenoy and Bandyopadhyay, Tanwi. (2008). "The Geometry of Black Hole Thermodynamics in Gauss-Bonnet Theory" (Abs), Classical Quantum Gravity. 25, 6-pgs.
External links
● Black hole thermodynamics - Wikipedia.
● Black hole thermodynamics - Knowing the universe and its secrets.
BeyondBenford-package {BeyondBenford} R Documentation
Compare the goodness of fit of Benford's and Blondeau Da Silva's digit distributions to a given dataset
The purpose of this package is to compare the goodness of fit of Benford's and Blondeau Da Silva's digit distributions in a dataset. The package is used to check whether the data distribution is
consistent with theoretical distributions highlighted by Blondeau Da Silva or not (through the function 'dat.distr'): this ideal theoretical distribution must be at least approximately followed by
the data for the use of Blondeau Da Silva's model to be well-founded. It also enables the user to plot histograms of digit distributions, both observed in the dataset and given by the two theoretical
approaches (with the function 'digit.distr'). Finally, it proposes to quantify the goodness of fit via Pearson's chi-squared test (with the function 'chi2').
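For context, Benford's law assigns first digit d the probability log10(1 + 1/d). The Python sketch below (not the package's own API, which is in R) illustrates the kind of chi-squared comparison the package automates; the observed counts are illustrative:

# Compare observed first-digit counts with Benford's law via a chi-squared test.
import math
from scipy.stats import chisquare

benford = [math.log10(1 + 1/d) for d in range(1, 10)]  # P(d) = log10(1 + 1/d)
observed = [312, 170, 128, 94, 80, 66, 55, 48, 47]     # illustrative counts (n = 1000)
expected = [sum(observed) * p for p in benford]

stat, pval = chisquare(observed, f_exp=expected)
print(stat, pval)  # a large p-value here -> counts consistent with Benford's law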
Blondeau Da Silva
Maintainer: Blondeau Da Silva
F. Benford (1938). The law of anomalous numbers. Proceedings of the American Philosophical Society, 78:127-131.
A. Berger and T. Hill (2015). An introduction to Benford's Law. Princeton University Press, Princeton, NJ. ISSN/ISBN: 978-0-691-16306-2.
S. Blondeau Da Silva (2020). Benford or not Benford: a systematic but not always well-founded use of an elegant law in experimental fields. Communications in Mathematics and Statistics, 8:167-201.
doi: 10.1007/s40304-018-00172-1.
S. Blondeau Da Silva (2018). Benford or not Benford: new results on digits beyond the first. https://arxiv.org/abs/1805.01291.
S. Blondeau Da Silva (2019). BeyondBenford: An R Package to Determine Which of Benford's or BDS's Distributions is the Most Relevant. https://arxiv.org/abs/1910.06104.
T. Hill (1995). The significant-digit phenomenon. The American Mathematical Monthly, 102(4):322-327.
S. J. Miller, editor (2015). Benford's Law: Theory and Applications. Princeton University Press, Princeton, NJ. ISSN/ISBN: 978-0-691-14761-1.
R. Newcomb (1881). Note on the frequency of use of the different digits in natural numbers. American Journal of Mathematics, 4:39-40.
K. Pearson (1900). On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from
random sampling. Philosophical Magazine, 50(302):157-175.
version 1.4
Introduction to Rankine Cycle in context of rankine cycle efficiency
07 Sep 2024
Journal of Energy and Power Systems
Volume 12, Issue 3, 2023
Introduction to the Rankine Cycle: Efficiency Considerations
The Rankine cycle is a fundamental thermodynamic cycle used in various power generation systems, including steam turbines and heat engines. This article provides an introduction to the Rankine cycle,
focusing on its efficiency considerations. The principles of the cycle are discussed, along with the factors that affect its efficiency.
1. Introduction
The Rankine cycle is a closed-loop thermodynamic cycle that converts thermal energy into mechanical work. It consists of four main stages: isentropic compression, constant-pressure heat addition,
isentropic expansion, and constant-pressure heat rejection. The cycle is named after William John Macquorn Rankine, who first described it in the 19th century.
2. The Rankine Cycle
The Rankine cycle can be represented by the following four stages:
1. Isentropic Compression (1-2)

s_1 = s_2

w_pump = h_2 - h_1

where s is the specific entropy, h is the specific enthalpy and w_pump is the pump work per unit mass.
In this stage, the working fluid (usually water or steam) is compressed to a higher pressure while maintaining its entropy.
2. Constant-Pressure Heat Addition (2-3)
q_{in} = h_3 - h_2
where q_in is the heat added to the system and h_3 is the specific enthalpy at the end of this stage.
In this stage, the working fluid absorbs heat from an external source, causing its temperature and pressure to rise.
3. Isentropic Expansion (3-4)

s_3 = s_4

w_turbine = h_3 - h_4

where h_4 is the specific enthalpy at the end of this stage and w_turbine is the turbine work per unit mass.
In this stage, the working fluid expands through a turbine, converting its thermal energy into mechanical work while maintaining its entropy.
4. Constant-Pressure Heat Rejection (4-1)

q_{out} = h_4 - h_1
where q_out is the heat rejected by the system and h_1 is the specific enthalpy at the beginning of this stage.
In this stage, the working fluid rejects heat to an external sink, causing its temperature and pressure to drop.
3. Efficiency Considerations
The efficiency of the Rankine cycle can be calculated using the following formula:
η = [(h_3 - h_4) - (h_2 - h_1)] / (h_3 - h_2)

where η is the thermal efficiency of the cycle.

This formula represents the ratio of the net work output (turbine work minus pump work) to the total heat added to the system. Because the pump work h_2 - h_1 is small for a liquid, it is often neglected, giving the common approximation η ≈ (h_3 - h_4) / (h_3 - h_2). The efficiency of the Rankine cycle depends on various factors, including the temperature and pressure of the working fluid, the design of the turbine and pump, and the type of heat exchanger used.
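A short numerical sketch of the efficiency calculation (the enthalpy values below are illustrative placeholders, not steam-table data):

# Rankine cycle thermal efficiency from the four state enthalpies (kJ/kg).
h1, h2 = 192.0, 200.0    # before/after the pump (illustrative)
h3, h4 = 3348.0, 2136.0  # before/after the turbine (illustrative)

w_turbine = h3 - h4  # turbine work out
w_pump = h2 - h1     # pump work in
q_in = h3 - h2       # heat added in the boiler

eta = (w_turbine - w_pump) / q_in
print(round(eta, 3))  # ~0.382, i.e. about 38% thermal efficiency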
4. Conclusion
The Rankine cycle is a fundamental thermodynamic cycle that plays a crucial role in power generation systems. Its efficiency considerations are essential for designing efficient and effective power
plants. This article has provided an introduction to the Rankine cycle, focusing on its efficiency considerations. Further research is needed to optimize the design of the Rankine cycle and improve
its efficiency.
DFBA: Distribution-Free Bayesian Analysis
A set of functions to perform distribution-free Bayesian analyses. Included are Bayesian analogues to the frequentist Mann-Whitney U test, the Wilcoxon Signed-Ranks test, Kendall's Tau Rank
Correlation Coefficient, Goodman and Kruskal's Gamma, McNemar's Test, the binomial test, the sign test, the median test, as well as distribution-free methods for testing contrasts among condition and
for computing Bayes factors for hypotheses. The package also includes procedures to estimate the power of distribution-free Bayesian tests based on data simulations using various probability models
for the data. The set of functions provide data analysts with a set of Bayesian procedures that avoids requiring parametric assumptions about measurement error and is robust to problem of extreme
outlier scores.
Version: 0.1.0
Depends: R (≥ 2.10)
Imports: methods, graphics, stats
Suggests: knitr, rmarkdown, bookdown, testthat (≥ 3.0.0), vdiffr
Published: 2023-12-13
DOI: 10.32614/CRAN.package.DFBA
Author: Daniel H. Barch [aut, cre], Richard A. Chechile [aut]
Maintainer: Daniel H. Barch <daniel.barch at tufts.edu>
License: GPL-2
NeedsCompilation: no
CRAN checks: DFBA results
Reference manual: DFBA.pdf
Vignettes: dfba_mann_whitney
Package source: DFBA_0.1.0.tar.gz
Windows binaries: r-devel: DFBA_0.1.0.zip, r-release: DFBA_0.1.0.zip, r-oldrel: DFBA_0.1.0.zip
macOS binaries: r-release (arm64): DFBA_0.1.0.tgz, r-oldrel (arm64): DFBA_0.1.0.tgz, r-release (x86_64): DFBA_0.1.0.tgz, r-oldrel (x86_64): DFBA_0.1.0.tgz
Please use the canonical form https://CRAN.R-project.org/package=DFBA to link to this page.
How is a graph represented using an adjacency matrix?
A graph is represented using an adjacency matrix by creating a square matrix where each cell represents a possible edge.
In more detail, an adjacency matrix is a square matrix used to represent a finite graph. The elements of the matrix indicate whether pairs of vertices are adjacent or not in the graph. In the
adjacency matrix, the rows and columns are labelled by graph vertices, and we fill the cell at the intersection of row v and column w with an edge that connects vertices v and w.
For an undirected graph, the adjacency matrix is symmetric because if there is an edge from vertex v to vertex w, then there is also an edge from vertex w to vertex v. Therefore, the entries of the
matrix are either 0 or 1, where 1 denotes the presence of an edge and 0 denotes the absence of an edge.
For a directed graph (also known as a digraph), the adjacency matrix need not be symmetric because the edges have a direction. In this case, the entry in the row v and column w corresponds to an edge
from vertex v to vertex w.
If the graph is a weighted graph, then the entries of the adjacency matrix can represent the weight of the edge, rather than simply whether the edge is present or not. For example, if the weight of
the edge from vertex v to vertex w is 2, then the cell at the intersection of row v and column w in the adjacency matrix would contain the number 2.
The adjacency matrix is a useful representation of a graph when we want to quickly determine if there is an edge connecting two vertices. However, it is not a space-efficient representation for
sparse graphs, where the number of edges is much less than the number of vertices squared. In such cases, an adjacency list or an edge list can be a more efficient representation.
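A small sketch of these conventions in Python (the 4-vertex graph is illustrative, not from the original answer):

# Adjacency matrices for a 4-vertex graph with edges 0-1, 0-2, 1-2, 2-3.
n = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

# Undirected: mark both (v, w) and (w, v), so the matrix is symmetric.
undirected = [[0] * n for _ in range(n)]
for v, w in edges:
    undirected[v][w] = 1
    undirected[w][v] = 1

# Weighted variant: store the edge weight instead of 1.
weights = {(0, 1): 2, (0, 2): 5, (1, 2): 1, (2, 3): 4}
weighted = [[0] * n for _ in range(n)]
for (v, w), wt in weights.items():
    weighted[v][w] = wt
    weighted[w][v] = wt

print(undirected[1][2])  # 1 -> O(1) lookup: is there an edge 1-2?
print(weighted[2][3])    # 4 -> weight of edge 2-3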
The Stacks project
Remark 20.37.3. Let $(X, \mathcal{O}_X)$ be a ringed space. Let $(K_n)$ be an inverse system in $D(\mathcal{O}_X)$. Set $K = R\mathop{\mathrm{lim}}\nolimits K_n$. For each $n$ and $m$ let $\mathcal{H}^m_n = H^m(K_n)$ be the $m$th cohomology sheaf of $K_n$ and similarly set $\mathcal{H}^m = H^m(K)$. Let us denote $\underline{\mathcal{H}}^m_n$ the presheaf
\[ U \longmapsto \underline{\mathcal{H}}^m_n(U) = H^m(U, K_n) \]
Similarly we set $\underline{\mathcal{H}}^m(U) = H^m(U, K)$. By Lemma 20.32.3 we see that $\mathcal{H}^m_n$ is the sheafification of $\underline{\mathcal{H}}^m_n$ and $\mathcal{H}^m$ is the sheafification of $\underline{\mathcal{H}}^m$. Here is a diagram
\[ \xymatrix{ K \ar@{=}[d] & \underline{\mathcal{H}}^m \ar[d] \ar[r] & \mathcal{H}^m \ar[d] \\ R\mathop{\mathrm{lim}}\nolimits K_n & \mathop{\mathrm{lim}}\nolimits \underline{\mathcal{H}}^m_n \ar[r] & \mathop{\mathrm{lim}}\nolimits \mathcal{H}^m_n } \]
In general it may not be the case that $\mathop{\mathrm{lim}}\nolimits \mathcal{H}^m_n$ is the sheafification of $\mathop{\mathrm{lim}}\nolimits \underline{\mathcal{H}}^m_n$. If $U \subset X$ is an open, then we have short exact sequences
$$\label{cohomology-equation-ses-Rlim-over-U} 0 \to R^1\mathop{\mathrm{lim}}\nolimits \underline{\mathcal{H}}^{m-1}_n(U) \to \underline{\mathcal{H}}^m(U) \to \mathop{\mathrm{lim}}\nolimits \underline{\mathcal{H}}^m_n(U) \to 0$$
by Lemma 20.37.1.
| {"url":"https://stacks.math.columbia.edu/tag/0BKQ","timestamp":"2024-11-07T03:23:26Z","content_type":"text/html","content_length":"15479","record_id":"<urn:uuid:b15e594a-1c86-4d52-9b70-8fc43410e4f8>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00155.warc.gz"} |
Analysis - Complexity, Functions, Theory | Britannica
In the 18th century a far-reaching generalization of analysis was discovered, centred on the so-called imaginary number i = √−1. (In engineering this number is usually denoted by j.) The numbers commonly used in everyday life are known as real numbers, but in one sense this name is misleading. Numbers are abstract concepts, not objects in the physical universe. So mathematicians consider real numbers to be an abstraction on exactly the same logical level as imaginary numbers.
The name imaginary arises because squares of real numbers are always positive. In consequence, positive numbers have two distinct square roots, one positive, one negative. Zero has a single square root, namely, zero. And negative numbers have no "real" square roots at all. However, it has proved extremely fruitful and useful to enlarge the number concept to include square roots of negative
numbers. The resulting objects are numbers in the sense that arithmetic and algebra can be extended to them in a simple and natural manner; they are imaginary in the sense that their relation to the
physical world is less direct than that of the real numbers. Numbers formed by combining real and imaginary components, such as 2 + 3i, are said to be complex (meaning composed of several parts
rather than complicated).
The first indications that complex numbers might prove useful emerged in the 16th century from the solution of certain algebraic equations by the Italian mathematicians Girolamo Cardano and Raphael
Bombelli. By the 18th century, after a lengthy and controversial history, they became fully established as sensible mathematical concepts. They remained on the mathematical fringes until it was
discovered that analysis, too, can be extended to the complex domain. The result was such a powerful extension of the mathematical tool kit that philosophical questions about the meaning of complex
numbers became submerged amid the rush to exploit them. Soon the mathematical community had become so used to complex numbers that it became hard to recall that there had been a philosophical problem
at all.
Formal definition of complex numbers
The modern approach is to define a complex number x + iy as a pair of real numbers (x, y) subject to certain algebraic operations. Thus one wishes to add or subtract, (a, b) ± (c, d), and to multiply, (a, b) × (c, d), or divide, (a, b)/(c, d), these quantities. These are inspired by the wish to make (x, 0) behave like the real number x and, crucially, to arrange that (0, 1)^2 = (−1, 0), all the while preserving as many of the rules of algebra as possible. This is a formal way to set up a situation which, in effect, ensures that one may operate with expressions x + iy using all the standard algebraic rules but recalling when necessary that i^2 may be replaced by −1. For example, (1 + 3i)^2 = 1^2 + 2·3i + (3i)^2 = 1 + 6i + 9i^2 = 1 + 6i − 9 = −8 + 6i. A geometric interpretation
of complex numbers is readily available, inasmuch as a pair (x, y) represents a point in the plane shown in the figure. Whereas real numbers can be described by a single number line, with negative
numbers to the left and positive numbers to the right, the complex numbers require a number plane with two axes, real and imaginary.
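To make the pair construction concrete, here is a minimal Python sketch (my own illustration, not from the article): it defines multiplication on pairs so that (0, 1)^2 = (−1, 0) and reproduces the (1 + 3i)^2 calculation above.

from dataclasses import dataclass

@dataclass(frozen=True)
class Complex:
    re: float
    im: float
    def __add__(self, other):
        return Complex(self.re + other.re, self.im + other.im)
    def __mul__(self, other):
        # (a, b) x (c, d) = (ac - bd, ad + bc), which forces (0, 1)^2 = (-1, 0)
        return Complex(self.re * other.re - self.im * other.im,
                       self.re * other.im + self.im * other.re)

i = Complex(0, 1)
print(i * i)                          # Complex(re=-1, im=0)
print(Complex(1, 3) * Complex(1, 3))  # Complex(re=-8, im=6), i.e., (1 + 3i)^2 = -8 + 6i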
Extension of analytic concepts to complex numbers
Analytic concepts such as limits, derivatives, integrals, and infinite series (all explained in the sections Technical preliminaries and Calculus) are based upon algebraic ideas, together with error
estimates that define the limiting process: certain numbers must be arbitrarily well approximated by particular algebraic expressions. In order to represent the concept of an approximation, all that
is needed is a well-defined way to measure how "small" a number is. For real numbers this is achieved by using the absolute value |x|. Geometrically, it is the distance along the real number line
between x and the origin 0. Distances also make sense in the complex plane, and they can be calculated, using Pythagorasโs theorem from elementary geometry (the square of the hypotenuse of a right
triangle is equal to the sum of the squares of its two sides), by constructing a right triangle such that its hypotenuse spans the distance between two points and its sides are drawn parallel to the
coordinate axes. This line of thought leads to the idea that for complex numbers the quantity analogous to |x| is
|x + iy| = √(x^2 + y^2).
Since all the rules of real algebra extend to complex numbers and the absolute value is defined by an algebraic formula, it follows that analysis also extends to the complex numbers. Formal
definitions are taken from the real case, real numbers are replaced by complex numbers, and the real absolute value is replaced by the complex absolute value. Indeed, this is one of the advantages of
analytic rigour: without this, it would be far less obvious how to extend such notions as tangent or limit from the real case to the complex.
In a similar vein, the Taylor series for the real exponential and trigonometric functions shows how to extend these definitions to include complex numbers: just use the same series but replace the
real variable x by the complex variable z. This idea leads to complex-analytic functions as an extension of real-analytic ones.
Because complex numbers differ in certain ways from real numbers (their structure is simpler in some respects and richer in others), there are differences in detail between real and complex analysis.
Complex integration, in particular, has features of complete novelty. A real function must be integrated between limits a and b, and the Riemann integral is defined in terms of a sum involving values
spread along the interval from a to b. On the real number line, the only path between two points a and b is the interval whose ends they form. But in the complex plane there are many different paths
between two given points (see figure). The integral of a function between two points is therefore not defined until a path between the endpoints is specified. This done, the definition of the Riemann
integral can be extended to the complex case. However, the result may depend on the path that is chosen.
Surprisingly, this dependence is very weak. Indeed, sometimes there is no dependence at all. But when there is, the situation becomes extremely interesting. The value of the integral depends only on
certain qualitative features of the path, in modern terms, on its topology. (Topology, often characterized as "rubber sheet geometry," studies those properties of a shape that are unchanged if it is
continuously deformed by being bent, stretched, and twisted but not torn.) So complex analysis possesses a new ingredient, a kind of flexible geometry, that is totally lacking in real analysis. This
gives it a very different flavour.
All this became clear in 1811 when, in a letter to the German astronomer Friedrich Bessel, the German mathematician Carl Friedrich Gauss stated the central theorem of complex analysis:
I affirm now that the integral ... has only one value even if taken over different paths, provided [the function] ... does not become infinite in the space enclosed by the two paths.
A proof was published by Cauchy in 1825, and this result is now named Cauchyโs theorem. Cauchy went on to develop a vast theory of complex analysis and its applications.
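As a rough numerical illustration of Gauss's statement (my own sketch, not part of the article), the following Python code approximates the integral of 1/z along two different paths from 1 to −1, one passing above the origin and one below. The function becomes infinite at 0, which lies between the two paths, so the two values differ; their difference approaches 2πi.

import cmath

def path_integral(f, path, n=4000):
    # Riemann-sum approximation of the integral of f along t -> path(t), t in [0, 1]
    zs = [path(k / n) for k in range(n + 1)]
    return sum(f((zs[k] + zs[k + 1]) / 2) * (zs[k + 1] - zs[k]) for k in range(n))

f = lambda z: 1 / z
above = lambda t: cmath.exp(1j * cmath.pi * t)    # 1 -> -1, passing above 0
below = lambda t: cmath.exp(-1j * cmath.pi * t)   # 1 -> -1, passing below 0
print(path_integral(f, above))   # approximately +3.14159j
print(path_integral(f, below))   # approximately -3.14159j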
Part of the importance of complex analysis is that it is generally better-behaved than real analysis, the many-valued nature of integrals notwithstanding. Problems in the real domain can often be
solved by extending them to the complex domain, applying the powerful techniques peculiar to that area, and then restricting the results back to the real domain again. From the mid-19th century
onward, the progress of complex analysis was strong and steady. A system of numbers once rejected as impossible and nonsensical led to a powerful and aesthetically satisfying theory with practical
applications to aerodynamics, fluid mechanics, electric power generation, and mathematical physics. No area of mathematics has remained untouched by this far-reaching enrichment of the number concept
Sketched below are some of the key ideas involved in setting up the more elementary parts of complex analysis. Alternatively, the reader may proceed directly to the section Measure theory. | {"url":"https://www.britannica.com/science/analysis-mathematics/Complex-analysis","timestamp":"2024-11-12T13:19:05Z","content_type":"text/html","content_length":"126007","record_id":"<urn:uuid:4a0f5f95-586d-4e70-8fe5-ce910e2bc842>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00756.warc.gz"} |
NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Reviewer 1
Summary: The main points in the paper are:
-- expected reward objective has exponentially many local maxima
-- smooth risk and hence, the new loss L(q, r, x), which are both calibrated, can be used; L is strongly convex, implying a unique global optimum
-- experiments with cost-sensitive classification on MNIST and CIFAR
-- batch contextual bandits: generalized reward imputation model, and showing that a particular surrogate loss which is strongly convex (under some imputation models) tends to perform better empirically.
Originality: The work is original.
Clarity: The paper is clear to read, except for some details in the experimental section, on page 4, where the meaning of the risk R(\pi) is not described clearly.
Significance and comments: First, in the new objective for contextual bandits, the authors mention that this objective is not the same as the trust-region or proximal objectives used in RL (line 237), but how does this compare with the maximum entropy RL objectives (for example, Haarnoja et al., Soft Q-learning and Soft Actor-Critic) with the same policy and value function/reward models? In these max-ent RL formulations, an estimator similar to Eqn 12, Page 5 is optimized. I would appreciate some discussion on this connection. Second, in both these cases, can we provide some guarantees on the test-time performance and possibly some guarantees outside of the seen (x, a, r_a) tuples? The model q is supposed to generalize over these tuples, but can we say something concrete about the performance of the policy learned using the proposed objective? Empirically, the proposed objectives perform a bit better than the other baselines and outperform methods like POEM on standard benchmarks. Overall, I like the idea of looking at surrogates for learning from logged data in a bandit setting, but I find some discussion on how this relates to existing work missing -- for example, regularized MDPs (entropy, or more generally, Bregman-divergence-regularized MDPs). Also, some discussion on how the proposed objective would "generalize" during training, and what considerations we should keep in mind at test time when using their approach, would be appreciated. What I mean by this is that some discussion on the guarantees provided on the expected return of the learned policy under an arbitrary context distribution and the action distribution of the learned policy would be interesting (E_{x \sim p(x), a \sim \pi(x)} [r(x, a)]), and not just the data distribution ((x, a, r_a) \in D).
Reviewer 2
Originality: I find the idea of using a surrogate objective instead of the expected reward very interesting and novel. The extension to the partially observable setting is interesting as the proposed form finds a common denominator to multiple estimators, but its underlying idea is not novel.
Clarity: The paper is overall very well written, and it has a nice flow. Although I am not very familiar with the topic, I certainly enjoyed reading this paper. The only comment is that it might be good to highlight more clearly the form of the convex surrogate.
Overall, this seems to me to be a good paper, and my only main concern is the experimental protocol used and the presentation of the results. Specifically:
- In Sec 2.2 the decision to show only 100 epochs is arbitrary. I would prefer if you could show the learning curves instead.
- All results (Fig 1 and 2, and Table 1) should include std or equivalent.
- Visualizing train and test error with the same scale does not really make sense; either use two different scales or separate the plots.
- The last sentence of Sec 2.2 is crucial to your finding. I would strongly encourage you to replace Fig 1 and 2 with learning curves instead and run them until convergence. Then any difference in optimization landscape will be visible as a change in the integral of the curves.
- Similar comments about learning curves apply for Sec 3.6.
- It is unclear to me if the reward estimation algorithm is actually evaluated in the experiments. Could you clarify? (It would be nice to include results.)
- Can you comment on the increased variance demonstrated by Composite on Table 2?
Additional comments:
- Abstract: "Here we ..." sounds a bit strange since you already list other contributions.
- Generally speaking, I find it curious that in a paper talking about policy optimization all the experiments consist of classification tasks "reworked" to be decision making. I wonder if it wouldn't be more interesting to use other decision-making benchmarks.
- It might also be valuable in Sec 3.6 to run experiments with different data sizes and distributions other than uniform.
- The code was available only for the CIFAR-10 experiments, and not for MNIST.
--- Based on the rebuttal, I am increasing my score. The learning curves in the rebuttal might benefit from using log-axis.
Reviewer 3
Detailed comments: The paper considers policy learning for cost-sensitive classification and contextual bandits problems. The focus is on arguing that directly minimizing the empirical risk is difficult if the policy takes the specific form \pi(x) = f(q(x)), where x is the context, q an unconstrained model, and f the softmax function. To reduce the difficulty, surrogate losses are designed which have the "calibrated" property, which means efficient minimization of the surrogate functions implies efficient minimization of the risk. In the cost-sensitive case, the surrogate is convex.
Originality: To the best of my knowledge, the two surrogate functions are original. They connect cost-sensitive classification and contextual bandits, which provides insights into both classes of problems. Theorem 1 is a nice result extending the understanding of the difficulty of optimizing empirical risk.
Quality: The main contribution of this paper is theoretical and it is of high quality. One technical question is what role the baseline v plays in the theoretical result. It is shown empirically beneficial to have such a shift; could the authors provide some theoretical intuition on why this is the case? Empirically, how is v chosen?
Clarity: The paper is mostly clear to read. One thing that can be improved is that the F function, which first appears in Proposition 2, is not defined. Later it is defined in the context of the KL divergence, but it is not clear whether in Proposition 2 it refers to the KL divergence version or not.
Significance: I think the contribution is significant to the community, suggesting a new surrogate objective for both cost-sensitive classification and contextual bandits.
Minor comments: Equation (8) has a typo: the middle expression, inside the parenthesis, should be a minus sign instead of a plus sign. Line 127: \hat{R}(\pi) does not yield the highest test error on MNIST. Am I missing something here?
09-01-2022, 08:46 AM (This post was last modified: 09-01-2022, 08:46 AM by ZerBea.)
@CyberPentester, I really want to answer your question, received by PM
I wanted to ask you directly since you are quite the expert on this field.
1. Let's say I got 2 PMKID hashes and 1 WPA Handshake, all from the same WiFi network, using hcxpcapngtool. Would hashcat crack faster if I give all 3 hashes? Or is selecting just 1 PMKID faster?
2. Do you know of any tool (maybe a hashcat option does this) that can bruteforce passwords with a mask attack based on probability? For example: A WiFi password that consists of 10 digits where
different random numbers ("1634845593") are more probable than say "1111111111" or "22224444".
But you disabled private messaging!
Please correct the following errors before continuing:
CyberPentester has private messaging disabled. You cannot send private messages to this user.
To answer 1) we have to take a closer look at the calculation of the keys:
The construction (PBKDF2 calculation) of the plainmasterkey (PMK) is the same for both hash types (PMKID and EAPOL 4-way) and takes a long period of CPU/GPU time.
This part is a really slow part.
Luckily, we need to calculate the PBKDF2 only once (for each different ESSID) and can use it for both the PMKID and the MIC (EAPOL 4-way) calculation:
PMK = PBKDF2(HMAC-SHA1, passphrase, ESSID, 4096 iterations, 256 bits)
In the second part, the PMKID calculation is much faster:
PMKID = HMAC-SHA1-128(PMK, "PMK Name" | MAC_AP | MAC_STA)
than calculating a MIC from the EAPOL (4-way handshake) for WPA1:
calculate PKE, calculate PTK, calculate MIC (encrypt message and compare MIC) for WPA1:
HMAC(EVP_sha1(), pmk, 32, pkedata, 100, ptk + p * 20, NULL);
HMAC(EVP_md5(), &ptk, 16, eapol, eapol_len, mic, NULL);
or WPA2:
calculate PKE, calculate PTK, calculate MIC (encrypt message and compare MIC) for WPA2:
HMAC(EVP_sha1(), pmk, 32, pkedata, 100, ptk + p * 20, NULL);
HMAC(EVP_sha1(), &ptk, 16, eapol, eapol_len, mic, NULL);
or WPA2 key version 3:
calculate PKE, calculate PTK, calculate MIC (encrypt message and compare MIC) for WPA2 key version 3:
HMAC(EVP_sha256(), pmk, 32, pkedata_prf, 2 + 98 + 2, ptk, NULL);
omac1_aes_128(&ptk, eapol, eapol_len, mic);
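Putting the two parts together, here is a minimal Python sketch of the PMKID computation for WPA2-PSK (my illustration; the passphrase, ESSID and MAC values are made up, and a real run would compare the result against the captured PMKID from the hash line):

import hashlib, hmac

def pmkid(psk, essid, mac_ap, mac_sta):
    # slow part, once per ESSID: PMK = PBKDF2-HMAC-SHA1(psk, essid, 4096 iterations, 32 bytes)
    pmk = hashlib.pbkdf2_hmac('sha1', psk.encode(), essid.encode(), 4096, 32)
    # fast part, once per candidate: PMKID = HMAC-SHA1-128(PMK, "PMK Name" | MAC_AP | MAC_STA)
    return hmac.new(pmk, b'PMK Name' + mac_ap + mac_sta, hashlib.sha1).digest()[:16]

print(pmkid('passphrase123', 'MyWifi',
            bytes.fromhex('a0b1c2d3e4f5'), bytes.fromhex('001122334455')).hex())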
If you only want to recover the PSK, just use the PMKID hash line (WPA*01*) and remove the EAPOL 4-way hash lines (WPA*02*) for that ESSID.
The second question is not easy to answer, because it depends on the target.
If the default password algo is known, routerkeygen (RKG) should be the first choice.
If the default key space is known, hcxpsktool could be a choice.
Additionally, you can do a picture search (e.g. DuckDuckGo or ebay) to find a possible pattern and use it as a mask file.
Find Peak Element
Problem statement:
A peak element is an element that is strictly greater than its neighbors.
Given a 0-indexed integer array nums, find a peak element, and return its index. If the array contains multiple peaks, return the index to any of the peaks.
You may imagine that nums[-1] = nums[n] = -∞. In other words, an element is always considered to be strictly greater than a neighbor that is outside the array.
You must write an algorithm that runs in O(log n) time.
Example 1:
Input: nums = [1,2,3,1]
Output: 2
Explanation: 3 is a peak element and your function should return the index number 2.
Example 2:
Input: nums = [1,2,1,3,5,6,4]
Output: 5
Explanation: Your function can return either index number 1 where the peak element is 2, or index number 5 where the peak element is 6.
* 1 <= nums.length <= 1000
* -2^31 <= nums[i] <= 2^31 - 1
* nums[i] != nums[i + 1] for all valid i.
Solution in C++
Solution in Python
Solution in Java
Solution in Javascript
Solution explanation
For this problem, we can use the binary search. Instead of comparing the mid-element to the target value, we compare it with its neighbors. Based on the comparison, we adjust the left and right
pointers accordingly:
1. Initialize left and right pointer, left = 0 and right = nums.length - 1.
2. While left < right, do the following:
a. Calculate the mid index, mid = left + (right - left) // 2.
b. If nums[mid] < nums[mid + 1], move the left pointer to mid + 1.
c. Else, move the right pointer to the mid.
3. When left >= right, the peak index is found, so return left.
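A direct Python translation of these steps (one possible implementation of the described algorithm) looks like this:

def find_peak_element(nums):
    left, right = 0, len(nums) - 1
    while left < right:
        mid = left + (right - left) // 2
        if nums[mid] < nums[mid + 1]:
            left = mid + 1      # a peak must exist to the right of mid
        else:
            right = mid         # nums[mid] itself could be the peak
    return left

print(find_peak_element([1, 2, 3, 1]))           # 2
print(find_peak_element([1, 2, 1, 3, 5, 6, 4]))  # 5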
Since we use a binary search, the algorithm runs in O(log n) time. | {"url":"https://www.freecodecompiler.com/tutorials/dsa/find-peak-element","timestamp":"2024-11-07T13:23:52Z","content_type":"text/html","content_length":"35073","record_id":"<urn:uuid:04968d55-9671-4dce-8ad4-e6fdbff9ca39>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00098.warc.gz"} |
Assets Turnover Ratio Formula, How to Calculate, Definition & Example
Essentially, it is the cost of running the fund, expressed as a percentage of its assets. The expense ratio comprises several components. Management fees, which can typically range from 0.5% to 2% of assets, cover the cost of the fund manager's expertise and decision-making. Administrative costs, such as recordkeeping and custodial services, can add another 0.2% or more. For example, if a fund has an expense ratio of 1%, it means that for every $1,000 invested, $10 will be deducted annually to cover these costs.
- Lower ratios mean that the company isn't using its assets efficiently and most likely has management or production problems.
- The company's average total assets for the year was $4 billion (($3 billion + $5 billion) / 2).
- Each industry has different norms for asset turnover ratios, so it's best to only compare companies within the same sector.
- It measures how effectively a company utilizes its assets to generate sales revenue.
- A higher ratio indicates that the company is utilizing its assets efficiently to generate sales, which is generally seen as a positive sign.
Interpreting results from the total asset turnover calculator
The total asset turnover ratio tells you how much revenue a company can generate given its asset base. The asset turnover ratio is calculated by dividing net sales by average total assets. It is only appropriate to compare the asset turnover ratio of companies operating in the same industry.
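As a minimal sketch of the formula (the $10 billion net sales figure is invented for illustration; only the $3 billion and $5 billion asset values come from the example above):

def asset_turnover_ratio(net_sales, assets_begin, assets_end):
    # net sales divided by average total assets over the period
    average_total_assets = (assets_begin + assets_end) / 2
    return net_sales / average_total_assets

print(asset_turnover_ratio(10e9, 3e9, 5e9))  # 2.5, using the $4B average total assets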
Asset turnover rate formula
For example, consider two mutual funds with identical portfolios and starting values of $100,000. Fund A has an expense ratio of 0.5%, while Fund B has an expense ratio of 1%. Assuming an annual
return of 8% before expenses, after 20 years, Fund A would have grown to roughly $424,785, while Fund B would have only reached about $386,968. This 0.5% difference in expense ratios leads to a final
difference of over $37,817, highlighting the substantial impact of expense ratios on long-term investment growth. The asset turnover ratio can also be analyzed by tracking the ratio for a single
company over time.
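The fund comparison above can be verified with a few lines of Python (an illustrative sketch that simply nets the expense ratio against the gross return, which matches the figures quoted):

def final_value(initial, gross_return, expense_ratio, years):
    # compound growth with the annual expense ratio netted against the return
    return initial * (1 + gross_return - expense_ratio) ** years

fund_a = final_value(100_000, 0.08, 0.005, 20)   # about 424,785
fund_b = final_value(100_000, 0.08, 0.010, 20)   # about 386,968
print(round(fund_a - fund_b))                    # about 37,817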
Comparisons of Ratios
- This approach requires estimating the required inventory as accurately as possible and having reliable suppliers in the supply chain.
- For instance, a utility company or construction company is more likely to have a higher number of assets than a retail company.
- Investors can look at the asset turnover ratio when evaluating the risk of investing in a company, or when comparing similar companies to one another.
- The value of a company's total assets includes the value of its fixed assets, current assets, accounts receivable, and liquid assets (cash).
- A good asset turnover ratio depends upon your industry peers and how well similar companies are doing.
For example, it would be incorrect to compare the ratios of Company A to that of Company C, as they operate in different industries. We now have all the required inputs, so we'll take the net sales for the current period and divide it by the average asset balance of the prior and current periods. Hence, it is often used as a proxy for how efficiently a company has invested in long-term assets. The answer is that a high ratio implies that a company is in good standing. It's generating value with its assets, which can signal that it may be a solid investment.
The high ATR value of companies like Walmart is attributed to their assets. Supermarkets and grocery stores generally have low profit margins and high asset turnover. Investors and analysts can use this measure to compare similar companies to know how efficiently they use their assets.
- A higher turnover ratio signals creditors and investors that the management is using the company's resources efficiently.
- Similar to cash flow, the asset turnover ratio compares the company's total assets over the course of a year to its sales.
- To calculate it, divide net sales or revenue by the average total assets.
How Useful is the Fixed Asset Turnover Ratio to Investors?
What Is the Fixed Asset Turnover Ratio?
- Be sure to check out our post on analyzing financial statement ratios for a deeper dive into understanding a company's financial statements through financial ratio analysis.
- This ratio divides net sales by net fixed assets, calculated over an annual period.
- While the income statement measures a metric across two periods, balance sheet items reflect values at a certain point of time.
- Ratio comparisons across markedly different industries do not provide a good insight into how well a company is doing.
- You can invest in stocks, exchange-traded funds (ETFs), mutual funds, alternative funds, and more.
How to improve your asset turnover ratio | {"url":"http://www.modabot.de/assets-turnover-ratio-formula-how-to-calculate","timestamp":"2024-11-07T20:22:10Z","content_type":"text/html","content_length":"86131","record_id":"<urn:uuid:0c59b03e-e5fc-4d56-a336-e393b5bb50e0>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00748.warc.gz"} |
2.8 Graphical Analysis of One-Dimensional Motion - College Physics | OpenStax
A graph, like a picture, is worth a thousand words. Graphs not only contain numerical information; they also reveal relationships between physical quantities. This section uses graphs of position,
velocity, and acceleration versus time to illustrate one-dimensional kinematics.
Slopes and General Relationships
First note that graphs in this text have perpendicular axes, one horizontal and the other vertical. When two physical quantities are plotted against one another in such a graph, the horizontal axis
is usually considered to be an independent variable and the vertical axis a dependent variable. If we call the horizontal axis the $x$-axis and the vertical axis the $y$-axis, as in Figure 2.46, a straight-line graph has the general form
$y = mx + b.$
Here $m$ is the slope, defined to be the rise divided by the run (as seen in the figure) of the straight line. The letter $b$ is used for the $y$-intercept, which is the point at which the line crosses the vertical axis.
Graph of Position vs. Time (a = 0, so v is constant)
Time is usually an independent variable that other quantities, such as position, depend upon. A graph of position versus time would, thus, have $x$ on the vertical axis and $t$ on the horizontal axis. Figure 2.47 is just such a straight-line graph. It shows a graph of position versus time for a jet-powered car on a very flat dry lake bed in Nevada.
Using the relationship between dependent and independent variables, we see that the slope in the graph above is average velocity $\bar{v}$ and the intercept is position at time zero, that is, $x_0$. Substituting these symbols into $y = mx + b$ gives
$x = \bar{v}t + x_0$
or
$x = x_0 + \bar{v}t.$
Thus a graph of position versus time gives a general relationship among displacement (change in position), velocity, and time, as well as giving detailed numerical information about a specific situation.
The slope of the graph of position $x$ vs. time $t$ is velocity $v$.
$\text{slope} = \frac{\Delta x}{\Delta t} = v$
Notice that this equation is the same as that derived algebraically from other motion equations in Motion Equations for Constant Acceleration in One Dimension.
From the figure we can see that the car has a position of 525 m at 0.50 s and 2000 m at 6.40 s. Its position at other times can be read from the graph; furthermore, information about its velocity and acceleration can also be obtained from the graph.
Determining Average Velocity from a Graph of Position versus Time: Jet Car
Find the average velocity of the car whose position is graphed in Figure 2.47.
The slope of a graph of $x$ vs. $t$ is average velocity, since slope equals rise over run. In this case, rise = change in position and run = change in time, so that
$\text{slope} = \frac{\Delta x}{\Delta t} = \bar{v}.$
Since the slope is constant here, any two points on the graph can be used to find the slope. (Generally speaking, it is most accurate to use two widely separated points on the straight line. This is
because any error in reading data from the graph is proportionally smaller if the interval is larger.)
1. Choose two points on the line. In this case, we choose the points labeled on the graph: (6.4 s, 2000 m) and (0.50 s, 525 m). (Note, however, that you could choose any two points.)
2. Substitute the $x$ and $t$ values of the chosen points into the equation. Remember in calculating change $(\Delta)$ we always use final value minus initial value.
$\bar{v} = \frac{\Delta x}{\Delta t} = \frac{2000\text{ m} - 525\text{ m}}{6.4\text{ s} - 0.50\text{ s}},$
$\bar{v} = 250\text{ m/s}.$
This is an impressively large land speed (900 km/h, or about 560 mi/h): much greater than the typical highway speed limit of 60 mi/h (27 m/s or 96 km/h), but considerably shy of the record of 343 m/s
(1234 km/h or 766 mi/h) set in 1997.
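As a quick numerical check of Step 2 (a sketch added here, not part of the text):

def slope(point1, point2):
    # rise over run between two (t, x) points on a position-time graph
    (t1, x1), (t2, x2) = point1, point2
    return (x2 - x1) / (t2 - t1)

print(slope((0.50, 525), (6.4, 2000)))  # 250.0 m/s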
Graphs of Motion when $a$ is constant but $a \neq 0$
The graphs in Figure 2.48 below represent the motion of the jet-powered car as it accelerates toward its top speed, but only during the time when its acceleration is constant. Time starts at zero for
this motion (as if measured with a stopwatch), and the position and velocity are initially 200 m and 15 m/s, respectively.
The graph of position versus time in Figure 2.48(a) is a curve rather than a straight line. The slope of the curve becomes steeper as time progresses, showing that the velocity is increasing over
time. The slope at any point on a position-versus-time graph is the instantaneous velocity at that point. It is found by drawing a straight line tangent to the curve at the point of interest and
taking the slope of this straight line. Tangent lines are shown for two points in Figure 2.48(a). If this is done at every point on the curve and the values are plotted against time, then the graph
of velocity versus time shown in Figure 2.48(b) is obtained. Furthermore, the slope of the graph of velocity versus time is acceleration, which is shown in Figure 2.48(c).
Determining Instantaneous Velocity from the Slope at a Point: Jet Car
Calculate the velocity of the jet car at a time of 25 s by finding the slope of the $x$ vs. $t$ graph in the graph below.
The slope of a curve at a point is equal to the slope of a straight line tangent to the curve at that point. This principle is illustrated in Figure 2.50, where Q is the point at $t = 25\text{ s}$.
1. Find the tangent line to the curve at $t = 25\text{ s}$.
2. Determine the endpoints of the tangent. These correspond to a position of 1300 m at time 19 s and a position of 3120 m at time 32 s.
3. Plug these endpoints into the equation to solve for the slope, $v$.
$\text{slope} = v_Q = \frac{\Delta x_Q}{\Delta t_Q} = \frac{3120\text{ m} - 1300\text{ m}}{32\text{ s} - 19\text{ s}}$
$v_Q = \frac{1820\text{ m}}{13\text{ s}} = 140\text{ m/s}.$
This is the value given in this figure's table for $v$ at $t = 25\text{ s}$. The value of 140 m/s for $v_Q$ is plotted in Figure 2.50. The entire graph of $v$ vs. $t$ can be obtained in this fashion.
Carrying this one step further, we note that the slope of a velocity versus time graph is acceleration. Slope is rise divided by run; on a $v$ vs. $t$ graph, rise = change in velocity $\Delta v$ and run = change in time $\Delta t$.
The slope of a graph of velocity $v$ vs. time $t$ is acceleration $a$.
$\text{slope} = \frac{\Delta v}{\Delta t} = a$
Since the velocity versus time graph in Figure 2.48(b) is a straight line, its slope is the same everywhere, implying that acceleration is constant. Acceleration versus time is graphed in Figure 2.48(c).
Additional general information can be obtained from Figure 2.50 and the expression for a straight line, $y = mx + b$.
In this case, the vertical axis $y$ is $V$, the intercept $b$ is $v_0$, the slope $m$ is $a$, and the horizontal axis $x$ is $t$. Substituting these symbols yields
$v = v_0 + at.$
A general relationship for velocity, acceleration, and time has again been obtained from a graph. Notice that this equation was also derived algebraically from other motion equations in Motion
Equations for Constant Acceleration in One Dimension.
It is not accidental that the same equations are obtained by graphical analysis as by algebraic techniques. In fact, an important way to discover physical relationships is to measure various physical
quantities and then make graphs of one quantity against another to see if they are correlated in any way. Correlations imply physical relationships and might be shown by smooth graphs such as those
above. From such graphs, mathematical relationships can sometimes be postulated. Further experiments are then performed to determine the validity of the hypothesized relationships.
Graphs of Motion Where Acceleration is Not Constant
Now consider the motion of the jet car as it goes from 165 m/s to its top velocity of 250 m/s, graphed in Figure 2.51. Time again starts at zero, and the initial velocity is 165 m/s. (This was the
final velocity of the car in the motion graphed in Figure 2.48.) Acceleration gradually decreases from $5.0\text{ m/s}^2$ to zero when the car hits 250 m/s. The velocity increases until 55 s and then
becomes constant, since acceleration decreases to zero at 55 s and remains zero afterward.
Calculating Acceleration from a Graph of Velocity versus Time
Calculate the acceleration of the jet car at a time of 25 s by finding the slope of the $v$ vs. $t$ graph in Figure 2.51(a).
The slope of the curve at $t = 25\text{ s}$ is equal to the slope of the line tangent at that point, as illustrated in Figure 2.51(a).
Determine the endpoints of the tangent line from the figure, and then plug them into the equation to solve for the slope, $a$.
$\text{slope} = \frac{\Delta v}{\Delta t} = \frac{260\text{ m/s} - 210\text{ m/s}}{51\text{ s} - 1.0\text{ s}}$
$a = \frac{50\text{ m/s}}{50\text{ s}} = 1.0\text{ m/s}^2.$
Note that this value for $a$ is consistent with the value plotted in Figure 2.51(b) at $t = 25\text{ s}$.
A graph of position versus time can be used to generate a graph of velocity versus time, and a graph of velocity versus time can be used to generate a graph of acceleration versus time. We do this by
finding the slope of the graphs at every point. If the graph is linear (i.e., a line with a constant slope), it is easy to find the slope at any point and you have the slope for every point.
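To illustrate this numerically (a sketch; the position data below are synthetic, generated from a constant-acceleration model rather than read from the figures), sampled position data can be differentiated twice:

import numpy as np

t = np.linspace(0, 20, 201)            # time samples, s
x = 200 + 15 * t + 2.5 * t**2          # position for x0 = 200 m, v0 = 15 m/s, a = 5 m/s^2

v = np.gradient(x, t)                  # slope of x(t) at every point: velocity
a = np.gradient(v, t)                  # slope of v(t) at every point: acceleration
print(v[100], a[100])                  # 65.0 m/s at t = 10 s, and 5.0 m/s^2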
Graphical analysis of motion can be used to describe both specific and general characteristics of kinematics. Graphs can also be used for other topics in physics. An important aspect of exploring
physical relationships is to graph them and look for underlying relationships.
A graph of velocity vs. time of a ship coming into a harbor is shown below. (a) Describe the motion of the ship based on the graph. (b) What would a graph of the ship's acceleration look like?
(a) The ship moves at constant velocity and then begins to decelerate at a constant rate. At some point, its deceleration rate decreases. It maintains this lower deceleration rate until it stops
(b) A graph of acceleration vs. time would show zero acceleration in the first leg, large and constant negative acceleration in the second leg, and constant negative acceleration. | {"url":"https://openstax.org/books/college-physics/pages/2-8-graphical-analysis-of-one-dimensional-motion","timestamp":"2024-11-13T06:46:37Z","content_type":"text/html","content_length":"610756","record_id":"<urn:uuid:692c6b99-cd22-4dbd-bd2b-37e0fe2d6152>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00633.warc.gz"} |
Comparing models
Available with Geostatistical Analyst license.
Comparison helps you determine how good the model that created a geostatistical layer is relative to another model. To compare models, you must have two geostatistical layers for comparison (created
using the ArcGIS Geostatistical Analyst extension). These two layers may have been created using different interpolation methods (for example, IDW and ordinary kriging) or by using the same method
with different parameters. In the first case, you are comparing which method is best for your data, and in the second, you are examining the effects of different input parameters on a model when
creating the output surface. To compare two models, right-click on one of their names in the table of contents and click Compare, as shown below:
The Comparison dialog box uses the cross-validation statistics discussed in Performing cross-validation and validation. However, it allows you to examine the statistics and the plots side by side.
Generally, the best model is the one that has the standardized mean nearest to zero, the smallest root-mean-squared prediction error, the average standard error nearest the root-mean-squared
prediction error, and the standardized root-mean-squared prediction error nearest to 1.
It is common practice to create many surfaces before one is identified as best and will be final in itself or will be passed into a larger model (for example, a suitability model for siting houses)
to solve an existing problem. You can systematically compare each surface with another, eliminating the worst of the two being compared, until the two best surfaces remain and are compared with one
another. You can conclude that for this particular analysis, the best of the final two surfaces is the best surface possible.
Concerns when comparing methods and models
There are two issues to consider when comparing the results from different methods and/or models: one is optimality and the other is validity.
For example, the root-mean-squared prediction error may be smaller for a particular model. Therefore, you might conclude that it is the optimal model. However, when comparing to another model, the
root-mean-squared prediction error may be closer to the average estimated prediction standard error. This is a more valid model, because when you predict at a point without data, you have only the
estimated standard errors to assess your uncertainty of that prediction. You also must check that the root-mean-square standardized is close to one. When the root-mean-square standardized is close to
one and the average estimated prediction standard errors are close to the root-mean-squared prediction errors from cross-validation, you can be confident that the model is appropriate. In the figure
above, the kriging model on the left has a lower root-mean-square and a lower average standard error than the model on the right, but the kriging model on the right should be preferred because the
root-mean-square and the average standard error are closer. Additionally, the model on the left has a very large root-mean-square standardized, which indicates severe model problems.
In addition to the statistics provided on the Comparison dialog box, you should use prior information that you have on the dataset and that you derived in ESDA when evaluating which model is best.
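For intuition, the decision rule can be written as a small computation over cross-validation results. This is only an illustrative sketch, not part of the ArcGIS API; errors and std_errors stand for the measured-minus-predicted values and the estimated prediction standard errors reported by Geostatistical Analyst.

import math

def cross_validation_summary(errors, std_errors):
    # statistics analogous to those shown on the Comparison dialog box
    n = len(errors)
    rmse = math.sqrt(sum(e ** 2 for e in errors) / n)
    avg_std_err = sum(std_errors) / n
    rms_standardized = math.sqrt(sum((e / s) ** 2 for e, s in zip(errors, std_errors)) / n)
    return rmse, avg_std_err, rms_standardized

# Prefer the model whose rmse is smallest, whose avg_std_err is closest to its rmse,
# and whose rms_standardized is closest to 1.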
1. Right-click one of the geostatistical layers you want to compare in the ArcMap table of contents and click Compare.
2. Click the second layer in the comparison in the To drop-down menu.
3. Click the various tabs to see the different results of the comparison.
4. Click Close.
| {"url":"https://desktop.arcgis.com/en/arcmap/10.3/guide-books/extensions/geostatistical-analyst/comparing-models.htm","timestamp":"2024-11-06T18:01:25Z","content_type":"text/html","content_length":"27483","record_id":"<urn:uuid:c003657a-03e6-4d28-8938-8e5d61960af7>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00620.warc.gz"} |
Approximate quantum codes
Code Description
2D bosonization code A mapping between a 2D lattice of qubits and a 2D lattice quadratic Hamiltonian of Majorana modes. This family also includes a super-compact fermionic encoding with
a qubit-to-fermion ratio of \(1.25\) [1; Table I].
2D color code Color code defined on a two-dimensional planar graph. Each face hosts two stabilizer generators, a Pauli-\(X\) and a Pauli-\(Z\) string acting on all the qubits of
the face.
2D hyperbolic surface code Hyperbolic surface codes based on a tessellation of a closed 2D manifold with a hyperbolic geometry (i.e., non-Euclidean geometry, e.g., saddle surfaces when defined
on a 2D plane).
2D lattice stabilizer code Lattice stabilizer code in two spatial dimensions.
2T-qutrit code Two-mode qutrit code constructed out of superpositions of coherent states whose amplitudes make up the binary tetrahedral group \(2T\).
3D bosonization code A mapping that maps a 3D lattice quadratic Hamiltonian of Majorana modes into a lattice of qubits which realize a \(\mathbb{Z}_2\) gauge theory with a particular
Gauss law.
3D color code Color code defined on a four-valent four-colorable tiling of 3D space. Logical dimension is determined by the genus of the underlying surface (for closed surfaces)
and types of boundaries (for open surfaces).
3D fermionic surface code A non-CSS 3D Kitaev surface code that realizes \(\mathbb{Z}_2\) gauge theory with an emergent fermion, i.e., the fermionic-charge bosonic-loop (FcBl) phase [2]. The
model can be defined on a cubic lattice in several ways [3; Eq. (D45-46)]. Realizations on other lattices also exist [4,5].
3D lattice stabilizer code Lattice stabilizer code in three spatial dimensions. Qubit codes are conjectured to admit either fracton phases or abelian topological phases that are equivalent to
multiple copies of the 3D surface code and/or the 3D fermionic surface code via a local constant-depth Clifford circuit [3].
3D subsystem surface code Subsystem generalization of the surface code on a 3D cubic lattice with gauge-group generators of weight at most three.
3D surface code A generalization of the Kitaev surface code defined on a 3D lattice.
Abelian LP code An LP code for Abelian group \(G\). The case of \(G\) being a cyclic group is a GB code (a.k.a. a quasi-cyclic LP code) [6; Sec. III.E]. A particular family with \(G
=\mathbb{Z}_{\ell}\) yields codes with constant rate and nearly constant distance.
Modular-qudit stabilizer code whose codewords realize 2D modular gapped Abelian topological order. The corresponding anyon theory is defined by an Abelian group and
Abelian TQD stabilizer code a Type-III group cocycle that can be decomposed as a product of Type-I and Type-II group cocycles; see [7; Sec. IV.A]. Abelian TQDs realize all modular gapped
Abelian topological orders [7]. Many Abelian TQD code Hamiltonians were originally formulated as commuting-projector models [8].
Abelian quantum-double stabilizer Modular-qudit stabilizer code whose codewords realize 2D modular gapped Abelian topological order with trivial cocycle. The corresponding anyon theory is defined by
code an Abelian group. All such codes can be realized by a stack of modular-qudit surface codes because all Abelian groups are Kronecker products of cyclic groups.
Abelian topological code Code whose codewords realize topological order associated with an Abelian anyon theory. In 2D, this is equivalent to a unitary braided fusion category which is also
an Abelian group under fusion [9]. Unless otherwise noted, the phases discussed are bosonic.
Amplitude-damping (AD) code Block quantum code on either qubits or bosonic modes that is designed to detect and correct qubit or bosonic AD errors, respectively.
Amplitude-damping CWS code Self-complementary CWS code that is designed to detect and correct AD errors.
Analog stabilizer code An oscillator-into-oscillator stabilizer code encoding \(k\) logical modes into \(n\) physical modes. An \(((n,k,d))_{\mathbb{R}}\) analog stabilizer code is denoted
as \([[n,k,d]]_{\mathbb{R}}\), where \(d\) is the code's distance.
Analog surface code An analog CSS version of the Kitaev surface code.
A code based on a continuous-variable (CV), or analog, cluster state. Such a state can be used to perform MBQC of logical modes, which substitutes the temporal
Analog-cluster-state code dimension necessary for decoding a conventional code with a spatial dimension. The exact analog cluster state is non-normalizable, so approximate constructs have to
be considered.
Approximate quantum Encodes quantum information so that it is possible to approximately recover that information from noise up to an error bound in recovery.
error-correcting code (AQECC)
A family of \( [[n,k,d]]_q \) CSS codes approximately correcting errors on up to \(\lfloor (n-1)/2 \rfloor\) qubits, i.e., with approximate distance approaching the
Approximate secret-sharing code no-cloning bound \(n/2\). Constructed using a non-degenerate CSS code, such as a polynomial quantum code, and a classical authentication scheme. The code can be
viewed as an \(t\)-error tolerant secret sharing scheme. Since the code yields a small logical subspace using large registers that contain both classical and quantum
information, it is not useful for practical error correction problems, but instead demonstrates the power of approximate quantum error correction.
Quantum systems can be roughly characterized by two types of noise, a bit-flip noise that maps canonical basis states into each other, and a phase-flip noise that
Asymmetric quantum code induces relative phases between superpositions of such basis states. A code cannot protect against both types of noise arbitrarily well, and there is a tradeoff
between the two types of protection. An asymmetric quantum code is one that performs much better against one type of noise than the other type. Such codes typically
have tunable distances against each noise type and include CSS codes, GKP codes, and QSCs.
Auxiliary qubit mapping (AQM) A concatenation of the JW transformation code with a qubit stabilizer code.
Family of CSS quantum codes based on products of two classical codes which share common symmetries. The balanced product can be understood as taking the usual tensor
Balanced product (BP) code /hypergraph product and then factoring out the symmetries factored. This reduces the overall number of physical qubits \(n\), while, under certain circumstances,
leaving the number of encoded qubits \(k\) and the code distance \(d\) invariant. This leads to a more favourable encoding rate \(k/n\) and normalized distance \(d/n
\) compared to the tensor/hypergraph product.
Ball color code A color code defined on a \(D\)-dimensional colex. This family includes hypercube color codes (color codes defined on balls constructed from hyperoctahedra) and 3D
ball color codes (color codes defined on duals of certain Archimedean solids).
Ball-Verstraete-Cirac (BVC) code A 2D fermion-into-qubit encoding that builds upon the JW transformation encoding by eliminating the weight-\(O(n)\) \(X\)-type string at the expense introducing
additional qubits. See [1; Sec. IV.B] for details.
Bicycle code A CSS code whose stabilizer generator matrix blocks are \(H_{X}=H_{Z}=(A|A^T)\), where \(A\) is a circulant matrix. The fact that \(A\) commutes with its transpose
ensures that the CSS condition is satisfied. Bicycle codes are the first QLDPC codes.
Multi-qubit code designed to realize gates from the binary dihedral group transversally. Can also be interpreted as a single-spin code. The codespace projection is a
Binary dihedral PI code projection onto an irrep of the binary dihedral group \( \mathsf{BD}_{2N} = \langle\omega I, X, P\rangle \) of order \(8N\), where \( \omega \) is a \( 2N \)th root
of unity, and \( P = \text{diag} ( 1, \omega^2) \).
Binomial code Bosonic rotation codes designed to approximately protect against errors consisting of powers of raising and lowering operators up to some maximum power. Binomial
codes can be thought of as spin-coherent states embedded into an oscillator [10].
Bivariate bicycle (BB) code One of several Abelian 2BGA codes which admit time-optimal syndrome measurement circuits that can be implemented in a two-layer architecture, a generalization of the
square-lattice architecture optimal for the surface codes.
A code constructed in a multi-partite quantum system, i.e., a physical space consisting of a tensor product of \(n > 1\) identical factors called subsystems, parties
Block quantum code , or bodies. The subsystems include qubits, modular qudits, Galois qudits, oscillators, or more general groups. For finite dimensional codes, the dimension of the
underlying subsystem is denoted by \(q\) and is sometimes called the local dimension.
A one-to-one mapping between basis states on \(n\) prime-dimensional qudits (of dimension \(q=p\)) and the subspace of the first \(p^n\) single-mode Fock states.
Bosonic \(q\)-ary expansion While this mapping offers a way to map qudits into a single mode, noise models for the two code families induce different notions of locality and thus qualitatively
different physical interpretations [11].
Bosonic code Encodes logical Hilbert space, finite- or infinite-dimensional, into a physical Hilbert space that contains at least one oscillator (a.k.a. bosonic mode or qumode).
A single-mode Fock-state bosonic code whose codespace is preserved by a phase-space rotation by a multiple of \(2\pi/N\) for some \(N\). The rotation symmetry
Bosonic rotation code ensures that encoded states have support only on every \(N^{\textrm{th}}\) Fock state. For example, single-mode Fock-state codes for \(N=2\) encoding a qubit admit
basis states that are, respectively, supported on Fock state sets \(\{|0\rangle,|4\rangle,|8\rangle,\cdots\}\) and \(\{|2\rangle,|6\rangle,|10\rangle,\cdots\}\).
Bosonic code whose codespace is defined as the common \(+1\) eigenspace of a group of mutually commuting displacement operators. Displacements form the stabilizers
Bosonic stabilizer code of the code, and have continuous eigenvalues, in contrast with the discrete set of eigenvalues of qubit stabilizers. As a result, exact codewords are
non-normalizable, so approximate constructions have to be considered.
Bosonization code A mapping that maps a \(D\)-dimensional lattice quadratic Hamiltonian of Majorana modes into a lattice of qubits. The resulting qubit code can realize various
topological phases, depending on the initial Majorana-mode Hamiltonian and its symmetries.
Branching MERA code Qubit stabilizer code whose encoding circuit corresponds to a branching MERA tensor network [12].
Bravyi-Kitaev superfast (BKSF) An single error-detecting fermion-into-qubit encoding defined on 2D qubit lattice whose stabilizers are associated with loops in the lattice. The code can be
code generalized to a single error-correcting code (i.e., with distance three) on graphs of degree \(\geq 6\) [13].
Bravyi-Kitaev transformation (BKT) code A fermion-into-qubit encoding that maps Majorana operators into Pauli strings of weight \(\lceil \log (n+1) \rceil\). The code can be reformulated in terms of Fenwick trees [14], and the Pauli-string weight can be further optimized to yield the segmented Bravyi-Kitaev (SBK) transformation code [15].
Brown-Fawzi random Clifford-circuit code An \([[n,k]]\) stabilizer code whose encoder is a random Clifford circuit of depth \(O(\log^3 n)\).
CSS-T code A CSS code for which a physical transversal \(T\) gate is either the identity (up to a global phase) or a logical gate. CSS-T codes are constructed from a pair of
linear binary codes via the CSS construction, with the pair satisfying certain conditions [16].
Calderbank-Shor-Steane (CSS) stabilizer code A stabilizer code admitting a set of stabilizer generators that are either \(Z\)-type or \(X\)-type operators. The two sets of stabilizer generators can often, but not always, be related to parts of a chain complex over the appropriate ring or field.
Camara-Ollivier-Tillich code A Hermitian qubit QLDPC code whose stabilizer generator matrix is constructed using two nested subgroups of \(GF(4)^n\).
Cat code Rotation-symmetric bosonic Fock-state code encoding a \(q\)-dimensional qudit into one oscillator, utilizing a constellation of \(q(S+1)\) coherent states distributed equidistantly around a circle of radius \(\alpha\) in phase space.
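The constellation itself is simple to generate numerically. The sketch below (plain Python; the values of \(q\), \(S\), and \(\alpha\) are arbitrary choices for illustration, not fixed by the code family) lists the \(q(S+1)\) coherent-state amplitudes around the circle.

```python
import numpy as np

# Illustrative sketch: amplitudes of the q(S+1) coherent states in a cat-code
# constellation, spaced equidistantly on a circle of radius alpha in phase space.
# The parameter values below are arbitrary and only for illustration.
q, S, alpha = 2, 1, 2.0          # qudit dimension, degree, circle radius
m = q * (S + 1)                  # number of coherent states in the constellation
amplitudes = alpha * np.exp(2j * np.pi * np.arange(m) / m)
print(amplitudes)                # for q=2, S=1: four points at angles 0, 90, 180, 270 degrees
```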
Category-based quantum code Encodes a finite-dimensional logical Hilbert space into a physical Hilbert space associated with a finite category. Codes on modular fusion categories are often
associated with a particular topological quantum field theory (TQFT), as the data of such theories is described by such categories.
Chamon model code A foliated type-I fracton non-CSS code defined on a cubic lattice using one weight-eight stabilizer generator acting on the eight vertices of each cube in the
lattice [3; Eq. (D38)].
Chebyshev code Single-mode bosonic Fock-state code that can be used for error-corrected sensing of a signal Hamiltonian \({\hat n}^s\), where \({\hat n}\) is the occupation number operator. Codewords for the \(s\)th-order Chebyshev code are \begin{split} |\overline{0}\rangle &= \sum_{k \text{~even}}^{[0,s]} \tilde{c}_k \left|\left\lfloor M\sin^2\left( k\pi/{2s}\right) \right\rfloor\right\rangle,\\ |\overline{1}\rangle &= \sum_{k \text{~odd}}^{[0,s]} \tilde{c}_k \left|\left\lfloor M\sin^2 \left(k\pi/{2s}\right) \right\rfloor\right\rangle, \end{split} \tag*{(1)} where \(\tilde{c}_k>0\) can be obtained by solving a system of order \(O(s^2)\) linear equations, and where \(\lfloor x \rfloor\) is the floor function. The code approaches optimality for sensing the signal Hamiltonian as \(M\) increases.
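A quick way to see which Fock states enter Eq. (1) is to evaluate the labels \(\lfloor M\sin^2(k\pi/2s)\rfloor\) directly. The sketch below uses arbitrary illustrative values of \(M\) and \(s\); the coefficients \(\tilde{c}_k\) are not computed here, since they require solving the linear system mentioned above.

```python
import math

# Illustrative sketch: Fock-state labels appearing in the s-th order Chebyshev
# codewords of Eq. (1). M and s are arbitrary illustrative values; the
# coefficients c_k would require solving O(s^2) linear equations and are omitted.
M, s = 20, 3
labels = [math.floor(M * math.sin(k * math.pi / (2 * s)) ** 2) for k in range(s + 1)]
even_support = [labels[k] for k in range(0, s + 1, 2)]  # Fock states in logical 0
odd_support = [labels[k] for k in range(1, s + 1, 2)]   # Fock states in logical 1
print(even_support, odd_support)                        # -> [0, 15] [5, 20]
```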
Checkerboard model code A foliated type-I fracton code defined on a cubic lattice that admits weight-eight \(X\)- and \(Z\)-type stabilizer generators on the eight vertices of each cube in
the lattice.
Chen-Hsin invertible-order code A geometrically local commuting-projector code that realizes beyond-group-cohomology invertible topological phases in arbitrary dimensions. Instances of the code in 4D realize the 3D \(\mathbb{Z}_2\) gauge theory with fermionic charge and either bosonic (FcBl) or fermionic (FcFl) loop excitations at their boundaries [2,17]; see Ref. [18] for a different lattice-model formulation of the FcBl boundary code.
Chiral semion Walker-Wang model code A 3D lattice modular-qudit stabilizer code with qudit dimension \(q=4\) whose low-energy excitations on boundaries realize the chiral semion topological order. The model admits 2D chiral semion topological order at one of its surfaces [19,20]. The corresponding phase can also be realized via a non-stabilizer Hamiltonian [21].
Chiral semion subsystem code Modular-qudit subsystem stabilizer code with qudit dimension \(q=4\) that is characterized by the chiral semion topological phase. Admits a set of geometrically
local stabilizer generators on a torus.
Chuang-Leung-Yamamoto (CLY) code Bosonic Fock-state code that encodes \(k\) qubits into \(n\) oscillators, with each oscillator restricted to having at most \(N\) excitations. Codewords are superpositions of oscillator Fock states which have exactly \(N\) total excitations, and are either uniform (i.e., balanced) superpositions or unbalanced superpositions.
Circuit-to-Hamiltonian approximate code Approximate qubit block code that forms the ground-state space of a frustration-free Hamiltonian with non-commuting terms. Its distance and logical-qubit number are both of order \(\Omega(n/\log^5 n)\) [22; Thm. 3.1]. The code is an approximate non-stabilizer QLWC code since the Hamiltonian consists of non-commuting weight-ten non-Pauli projectors, with each qubit acted on by order \(O(\text{polylog}(n))\) projectors.
Classical-product code A CSS code constructed by separately constructing the \(X\) and \(Z\) check matrices using product constructions from classical codes. A particular \([[512,174,8]]\) code performed well [23] against erasure and depolarizing noise when compared to other notable CSS codes, such as the asymptotically good quantum Tanner codes. These codes have been generalized to the intersecting subset code family [24].
Clifford group-representation QSC QSC whose projection is onto a copy of an irreducible representation of the single-qubit Clifford group, taken as the binary octahedral subgroup of the group \(SU(2)\) of Gaussian rotations. Its codewords consist of non-uniform superpositions of 48 coherent states.
Clifford spin code A single-spin code designed to realize a discrete group of gates using \(SU(2)\) rotations. Codewords are subspaces of a spin's Hilbert space that house irreducible
representations (irreps) of a discrete subgroup of \(SU(2)\).
Clifford subgroup-orbit QSC A \(((2^r,2,2-\sqrt{2},8))\) QSC for \(r \geq 2\) constructed using the real-Clifford subgroup-orbit code. Logical constellations are constructed by applying elements of an index-two subgroup of the real Clifford group, when taken as a subgroup of the orthogonal group [25], to \(2\) different vectors on the complex sphere. The code is known as the Witting code for \(r=2\) because its two logical constellations form vertices of Witting polytopes.
Clifford-deformed surface code (CDSC) A generally non-CSS derivative of the surface code defined by applying a constant-depth Clifford circuit to the original (CSS) surface code. Unlike the surface code, CDSCs include codes whose thresholds and subthreshold performance are enhanced under noise biased towards dephasing. Examples of CDSCs include the XY code, XZZX code, and random CDSCs.
Cluster-state code A code based on a cluster state and often used in measurement-based quantum computation (MBQC) [26] (a.k.a. one-way quantum processing), which substitutes the temporal dimension necessary for decoding a conventional code with a spatial dimension. This is done by encoding the computation into the features of the cluster state's graph.
Codeword stabilized (CWS) code A code defined using a cluster state and a set of \(Z\)-type Pauli strings defined by a binary classical code.
Coherent-parity-check (CPC) code A qubit stabilizer code for which two binary linear codes are used to directly construct encoding and decoding circuits against \(X\)- and \(Z\)-type errors,
respectively, via ZX calculus [27,28]. CPC codes can be obtained from numerical search [29].
Coherent-state constellation code Qudit-into-oscillator code whose codewords can succinctly be expressed as superpositions of a countable set of coherent states that is called a constellation. Some useful constellations form a group (see GKP, cat, or \(2T\)-qutrit codes) while others make up a Gaussian quadrature rule [30,31].
Color code Member of a family of qubit CSS codes defined on particular \(D\)-dimensional graphs.
Combinatorial PI code A member of a family of PI quantum codes whose correction properties are derived from solving a family of combinatorial identities. The code encodes one logical
qubit in superpositions of Dicke states whose coefficients are square roots of ratios of binomial coefficients.
Commuting-projector Hamiltonian code Hamiltonian-based code whose Hamiltonian terms can be expressed as orthogonal projectors (i.e., Hermitian operators with eigenvalues 0 or 1) that commute with each other.
Concatenated GKP code A concatenated code whose outer code is a GKP code. In other words, a bosonic code that can be thought of as a concatenation of an arbitrary inner code and another
bosonic outer code. Most examples encode physical qubits of an inner stabilizer code into the square-lattice GKP code.
Concatenated Steane code A member of the family of \([[7^m,1,3^m]]\) CSS codes, each of which is a recursive level-\(m\) concatenation of the Steane code. This family is one of the first to admit a concatenated threshold [32–36].
Concatenated bosonic code A concatenated code whose outer code is a bosonic code. In other words, a bosonic code that can be thought of as a concatenation of a possibly non-bosonic inner code
and another bosonic outer code.
Concatenated cat code A concatenated code whose outer code is a cat code. In other words, a qubit code that can be thought of as a concatenation of an arbitrary inner code and another cat
outer code. Most examples encode physical qubits of an inner stabilizer code into the two-component cat code.
Concatenated quantum code A combination of two quantum codes, an inner code \(C\) and an outer code \(C^\prime\), where the physical subspace used for the inner code consists of the logical subspace of the outer code. In other words, first one encodes in the inner code \(C\), and then one encodes each of the physical registers of \(C\) in the outer code \(C^\prime\). An inner \(C = ((n_1,K,d_1))_{q_1}\) and outer \(C^\prime = ((n_2,q_1,d_2))_{q_2}\) block quantum code yield an \(((n_1 n_2, K, d \geq d_1d_2))_{q_2}\) concatenated block quantum code [37].
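The parameter arithmetic above can be spelled out in a few lines. The sketch below uses a hypothetical helper (`concatenated_parameters`, not from any library) and an example pairing of well-known small codes.

```python
# Illustrative sketch of the parameter arithmetic for code concatenation [37]:
# an inner ((n1, K, d1))_{q1} code combined with an outer ((n2, q1, d2))_{q2}
# code yields ((n1*n2, K, d >= d1*d2))_{q2}.
def concatenated_parameters(n1, K, d1, n2, d2):
    return (n1 * n2, K, d1 * d2)  # third entry is a lower bound on the distance

# Example: inner five-qubit ((5, 2, 3)) code with q1 = 2, so the outer code must
# have logical dimension 2, e.g., the ((7, 2, 3)) Steane code.
print(concatenated_parameters(5, 2, 3, 7, 3))  # -> (35, 2, 9), i.e., ((35, 2, d >= 9))
```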
Concatenated qubit code A concatenated code whose outer code is a qubit code. In other words, a qubit code that can be thought of as a concatenation of an arbitrary inner code and another qubit outer code. An inner \(C = ((n_1,K,d_1))\) and outer \(C^\prime = ((n_2,2,d_2))\) qubit code yield an \(((n_1 n_2, K, d \geq d_1d_2))\) concatenated qubit code.
Conformal-field theory (CFT) code Approximate code whose codewords lie in the low-energy subspace of a conformal field theory, e.g., the quantum Ising model at its critical point [38,39]. Its
encoding is argued to perform source coding (i.e., compression) as well as channel coding (i.e., error correction) [38].
Constant-excitation (CE) code Code whose codewords lie in an excited-state eigenspace of a Hamiltonian governing the total energy or total number of excitations of the underlying quantum system. For qubit codes, such a Hamiltonian is often the total spin Hamiltonian, \(H=\sum_i Z_i\). For spin-\(S\) codes, this generalizes to \(H=\sum_i J_z^{(i)}\), where \(J_z\) is the spin-\(S\) \(Z\)-operator. For bosonic codes, such as Fock-state codes, codewords are often in an eigenspace with eigenvalue \(N>0\) of the total excitation or energy Hamiltonian, \(H=\sum_i \hat{n}_i\).
Covariant block quantum code A block code on \(n\) subsystems that admits a group \(G\) of transversal gates. The group has to be finite for finite-dimensional codes due to the Eastin-Knill theorem. Continuous-\(G\) covariant codes, necessarily infinite-dimensional, are relevant to error correction of quantum reference frames [40] and error-corrected parameter estimation.
Crystalline-circuit qubit code Code dynamically generated by unitary Clifford circuits defined on a lattice with some crystalline symmetry. A notable example is the circuit defined on a rotated square lattice with vertices corresponding to iSWAP gates and edges decorated by \(R_X[\pi/2]\), a single-qubit rotation by \(\pi/2\) around the \(X\)-axis. This circuit is invariant under space-time translations by a unit cell \((T, a)\) and all transformations of the square-lattice point group \(D_4\).
Cubic honeycomb color code 3D color code defined on a four-colorable bitruncated cubic honeycomb uniform tiling.
Cubic theory code A geometrically local commuting-projector code defined on triangulations of lattices in arbitrary spatial dimensions. Its code Hamiltonian terms include Pauli-\(Z\) operators and products of Pauli-\(X\) operators and \(CZ\) gates. The Hamiltonian realizes higher-form \(\mathbb{Z}_2^3\) gauge theories whose excitations obey non-Abelian Ising-like fusion rules.
Cyclic quantum code A block quantum code such that cyclic permutations of the subsystems leave the codespace invariant. In other words, the automorphism group of the code contains the
cyclic group \(\mathbb{Z}_n\).
Derby-Klassen (DK) code A fermion-into-qubit code defined on regular tilings with maximum degree 4 whose stabilizers are associated with loops in the tiling. The code outperforms several
other encodings in terms of encoding rate [41; Table I]. It has been extended for models with several modes per site [42].
Diatomic molecular code Approximate quantum code that encodes a qudit in the finite-dimensional Hilbert space of a rigid body with \(SO(2)\) symmetry (e.g., a heteronuclear diatomic molecule). This state space is the space of normalized functions on the two-sphere, consisting of a direct sum of all non-negative integer angular momenta. Ideal codewords may not be normalizable because the space is infinite-dimensional, so approximate versions have to be constructed in practice.
Dihedral \(G=D_m\) quantum-double code Quantum-double code whose codewords realize \(G=D_m\) topological order associated with a \(2m\)-element dihedral group \(D_m\). Includes the simplest non-Abelian order \(D_3 = S_3\), associated with the permutation group of three objects. The code can be realized as the ground-state subspace of the quantum double model, defined for \(D_m\)-valued qudits [43]. An alternative qubit-based formulation realizes the gauged \(G=\mathbb{Z}_2^3\) twisted quantum double phase [44], which is the same topological order as the \(G=D_4\) quantum double [45,46].
Dijkgraaf-Witten gauge theory code A code whose codewords realize \(D\)-dimensional lattice Dijkgraaf-Witten gauge theory [47,48] for a finite group \(G\) and a \((D+1)\)-cocycle \(\omega\) in the cohomology class \(H^{D+1}(G,U(1))\). When the cocycle is non-trivial, the gauge theory is called a twisted gauge theory. For trivial cocycles in 3D, the model can be called a quantum triple model, in allusion to being a 3D version of the quantum double model. There exist lattice-model formulations in arbitrary spatial dimension [49] as well as explicitly in 3D [50,51].
Dinur-Hsieh-Lin-Vidick (DHLV) code A family of asymptotically good QLDPC codes which are related to expander LP codes in that the roles of the check operators and physical qubits are exchanged.
Dinur-Lin-Vidick (DLV) code Member of a family of quantum locally testable codes constructed using cubical chain complexes, which are \(t\)-order extensions of the complexes underlying expander
codes (\(t=1\)) and expander lifted-product codes (\(t=2\)).
Distance-balanced code Galois-qudit CSS code constructed from a CSS code and a classical code using a distance-balancing procedure based on a generalized homological product. The initial code is said to be unbalanced, i.e., tailored to noise biased toward either bit- or phase-flip errors, and the procedure can result in a code that treats both types of errors on a more equal footing. The original distance-balancing procedure [52], later generalized [53; Thm. 4.2], can yield QLDPC codes [52; Thm. 1].
Double-semion stabilizer code A 2D lattice modular-qudit stabilizer code with qudit dimension \(q=4\) that is characterized by the 2D double semion topological phase. The code can be obtained from the \(\mathbb{Z}_4\) surface code by condensing the anyon \(e^2 m^2\) [54]. Originally formulated as the ground-state space of a Hamiltonian with non-commuting terms [55], which can be extended to other spatial dimensions [56], and later as a commuting-projector code [8,57].
Dual-rail quantum code Two-mode bosonic code encoding a logical qubit in Fock states with one excitation. The logical-zero state is represented by \(|10\rangle\), while the logical-one state is represented by \(|01\rangle\). This encoding is often realized in temporal or spatial modes, corresponding to a time-bin or frequency-bin encoding. Two different types of photon polarization can also be used.
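The loss-detection property of this encoding (a loss in either mode maps both codewords to \(|00\rangle\), outside the codespace) can be checked numerically. Below is a minimal sketch in NumPy with a two-mode Fock space truncated at one photon per mode.

```python
import numpy as np

# Minimal sketch: dual-rail codewords in a two-mode Fock basis truncated at one
# photon per mode, basis ordered |00>, |01>, |10>, |11>.
ket = {"00": 0, "01": 1, "10": 2, "11": 3}
logical0 = np.zeros(4); logical0[ket["10"]] = 1.0   # |10>
logical1 = np.zeros(4); logical1[ket["01"]] = 1.0   # |01>

# Truncated single-mode lowering operator a: a|1> = |0>, a|0> = 0.
a = np.array([[0.0, 1.0], [0.0, 0.0]])
I2 = np.eye(2)
loss_mode1 = np.kron(a, I2)   # photon loss in the first mode
loss_mode2 = np.kron(I2, a)   # photon loss in the second mode

# A loss in either mode sends both codewords to |00>, outside the codespace,
# so the error is detectable (though not correctable) via total photon number.
print(loss_mode1 @ logical0)  # -> |00>
print(loss_mode2 @ logical1)  # -> |00>
```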
Dynamical automorphism (DA) code Dynamically-generated stabilizer-based code whose (not necessarily periodic) sequence of few-body measurements implements state initialization, logical gates and
error detection.
Dynamically-generated QECC Block quantum code whose natural definition is in terms of a many-body scaling limit of a local dynamical process. Such processes, which are often non-deterministic,
update the code structure and can include random unitary evolution or non-commuting projective measurements.
Eigenstate thermalization hypothesis (ETH) code An \(n\)-qubit approximate code whose codespace is formed by eigenstates of a translationally-invariant quantum many-body system which satisfies the Eigenstate Thermalization Hypothesis (ETH). ETH ensures that codewords cannot be locally distinguished in the thermodynamic limit. Relevant many-body systems include 1D non-interacting spin chains or frustration-free systems such as Motzkin chains and Heisenberg models.
Error-corrected sensing code Code that can be obtained via an optimization procedure that ensures correction against a set \(\cal{E}\) of errors as well as guaranteeing optimal precision in locally estimating a parameter using a noiseless ancilla. For tensor-product spaces consisting of \(n\) subsystems (e.g., qubits, modular qudits, or Galois qudits), the procedure can yield a code whose parameter-estimation precision satisfies Heisenberg scaling, i.e., scales quadratically with the number \(n\) of subsystems.
Expander LP code Family of \(G\)-lifted product codes constructed using two random classical Tanner codes defined on expander graphs [58]. For certain parameters, this construction
yields the first asymptotically good QLDPC codes. Classical codes resulting from this construction are one of the first two families of \(c^3\)-LTCs.
Fermion code Finite-dimensional quantum error-correcting code encoding a logical (qudit or fermionic) Hilbert space into a physical Fock space of fermionic modes. Codes are
typically described using Majorana operators, which are linear combinations of fermionic creation and annihilation operators [59].
Fermion-into-qubit code Qubit stabilizer code encoding a logical fermionic Hilbert space into a physical space of \(n\) qubits. Such codes are primarily intended for simulating fermionic
systems on quantum computers, and some of them have error-detecting, correcting, and transmuting properties.
Fiber-bundle code A CSS code constructed by combining one code as the base and another as the fiber of a fiber bundle. In particular, taking a random LDPC code as the base and a cyclic repetition code as the fiber yields, after distance balancing, a QLDPC code with distance of order \(\Omega(n^{3/5}\text{polylog}(n))\) and rate of order \(\Omega(n^{-2/5}\text{polylog}(n))\).
Fibonacci fractal spin-liquid code A fractal type-I fracton CSS code defined on a cubic lattice [3; Eq. (D23)].
Fibonacci string-net code Quantum error correcting code associated with the Levin-Wen string-net model with the Fibonacci input category, admitting two types of encodings.
Finite-dimensional quantum error-correcting code Encodes quantum information in a \(K\)-dimensional (logical) subspace of an \(N\)-dimensional (physical) Hilbert space such that it is possible to recover said information from errors. The logical subspace is spanned by a basis comprised of code basis states or codewords.
Finite-geometry (FG) QLDPC code CSS code constructed from linear binary codes whose parity-check or generator matrices are incidence matrices of points, hyperplanes, or other structures in finite
geometries. These codes can be interpreted as quantum versions of FG-LDPC codes, but some of them [60,61] are not strictly QLDPC.
Five-qubit perfect code Five-qubit cyclic stabilizer code that is the smallest qubit stabilizer code to correct a single-qubit error.
Five-rotor code Extension of the five-qubit stabilizer code to the integer alphabet, i.e., the angular momentum states of a planar rotor. The code is \(U(1)\)-covariant and ideal
codewords are not normalizable.
Floquet color code Floquet code on a trivalent 2D lattice whose parent topological phase is the \(\mathbb{Z}_2\times\mathbb{Z}_2\) 2D color-code phase and whose measurements cycle logical quantum information between the nine \(\mathbb{Z}_2\) surface-code condensed phases of the parent phase. The code's ISG is the stabilizer group of one of the nine surface codes.
Fock-state bosonic code Qudit-into-oscillator code whose protection against AD noise (i.e., photon loss) stems from the use of disjoint sets of Fock states for the construction of each code basis state. The simplest example is the dual-rail code, which has codewords consisting of single Fock states \(|10\rangle\) and \(|01\rangle\). This code can detect a single loss error since a loss operator in either mode maps one of the codewords to a different Fock state \(|00\rangle\). More involved codewords consist of several well-separated Fock states such that multiple loss events can be detected and corrected.
Folded quantum RS (FQRS) code CSS code on \(q^m\)-dimensional Galois-qudits that is constructed from folded RS (FRS) codes (i.e., an RS code whose coordinates have been grouped together) via the
Galois-qudit CSS construction. This code is used to construct Singleton-bound approaching approximate quantum codes.
Four-qubit single-deletion code Four-qubit PI code that is the smallest qubit code to correct one deletion error.
Four-rotor code \([[4,2,2]]_{\mathbb Z}\) CSS rotor code that is an extension of the four-qubit code to the integer alphabet, i.e., the angular momentum states of a planar rotor.
Fractal surface code Kitaev surface code on a fractal geometry, which is obtained by removing qubits from the surface code on a cubic lattice. A related construction, the fractal product code, is a hypergraph product of two classical codes defined on a Sierpinski carpet graph [62]. The underlying classical codes form classical self-correcting memories [63–65].
Fracton Floquet code Floquet code whose qubits are placed on vertices of a truncated cubic honeycomb. Its weight-two check operators are placed on edges of each truncated cube, while weight-three check operators are placed on each triangle. Its ISG can be that of the X-cube model code or the checkerboard model code. On a three-torus of size \(L_x \times L_y \times L_z\), the code consists of \(n= 48L_xL_yL_z\) physical qubits and encodes \(k= 2(L_x+L_y+L_z)-6\) logical qubits.
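To make the counting concrete, the quoted formulas can be evaluated for a sample system size (a trivial sketch; the lattice dimensions are an arbitrary example):

```python
# Illustrative sketch: physical- and logical-qubit counts of the fracton
# Floquet code on an Lx x Ly x Lz three-torus, from the formulas quoted above.
def fracton_floquet_counts(Lx, Ly, Lz):
    n = 48 * Lx * Ly * Lz          # physical qubits
    k = 2 * (Lx + Ly + Lz) - 6     # logical qubits
    return n, k

print(fracton_floquet_counts(2, 2, 2))  # -> (384, 6)
```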
Fracton stabilizer code A 3D translationally invariant modular-qudit stabilizer code whose codewords make up the ground-state space of a Hamiltonian in a fracton phase. Unlike topological
phases, whose excitations can move in any direction, fracton phases are characterized by excitations whose movement is restricted.
Freedman-Meyer-Luo code Hyperbolic surface code constructed using a cellulation of a Riemannian manifold \(M\) exhibiting systolic freedom [66]. Codes derived from such manifolds can achieve distances scaling better than \(\sqrt{n}\), something that is impossible using closed 2D surfaces or 2D surfaces with boundaries [67]. Improved codes are obtained by studying a weak family of Riemannian metrics on closed 4-dimensional manifolds \(S^2\times S^2\) with \(\mathbb{Z}_2\)-homology.
Frobenius code A cyclic prime-qudit stabilizer code whose length \(n\) divides \(p^t + 1\) for some positive integer \(t\).
Frustration-free Hamiltonian code Hamiltonian-based code whose Hamiltonian is frustration free, i.e., whose ground states minimize the energy of each term.
Fusion-based quantum computing Code whose codewords are resource states used in an FBQC scheme. Related to a cluster state via Hadamard transformations.
(FBQC) code
GKP CV-cluster-state code Cluster-state code consisting of a generalized analog cluster state that is initialized in GKP (resource) states for some of its physical modes. Alternatively, it can be thought of as an oscillator-into-oscillator GKP code whose encoding consists of initializing \(k\) modes in momentum states (or, in the normalizable case, squeezed vacua) and \(n-k\) modes in (normalizable) GKP states, and applying a Gaussian circuit consisting of two-body gates \(e^{i V_{jk} \hat{x}_j \hat{x}_k }\) for some angles \(V_{jk}\). Provides a way to perform fault-tolerant MBQC, with the required number \(n-k\) of GKP-encoded physical modes determined by the particular protocol [68–71].
GKP-surface code A concatenated code whose outer code is a GKP code and whose inner code is a toric surface code [72], rotated surface code [70,73–76], or XZZX surface code [77].
GNU PI code PI code whose codewords can be expressed as superpositions of Dicke states with coefficients that are square roots of the binomial distribution.
Galois-qudit BCH code True Galois-qudit stabilizer code constructed from BCH codes via either the Hermitian construction or the Galois-qudit CSS construction. Parameters can be improved
by applying Steane enlargement [78].
Galois-qudit CSS code An \([[n,k,d]]_q \) Galois-qudit true stabilizer code admitting a set of stabilizer generators that are either \(Z\)-type or \(X\)-type Galois-qudit Pauli strings.
Codes can be defined from chain complexes over \(GF(q)\) via an extension of qubit CSS-to-homology correspondence to Galois qudits.
Galois-qudit CWS code A CWS code for Galois qudits, defined using a Galois-qudit cluster state and a set of Galois-qudit \(Z\)-type Pauli strings defined by a \(q\)-ary classical code.
Galois-qudit GRS code True \(q\)-Galois-qudit stabilizer code constructed from GRS codes via either the Hermitian construction [79–81] or the Galois-qudit CSS construction [82,83].
Galois-qudit HGP code A member of a family of Galois-qudit CSS codes whose stabilizer generator matrix is obtained from a hypergraph product of two classical linear \(q\)-ary codes.
Galois-qudit RS code An \([[n,k,n-k+1]]_q\) (with \(q>n\)) Galois-qudit CSS code constructed using two RS codes over \(GF(q)\).
Galois-qudit USt code A Galois-qudit code whose codespace consists of a direct sum of a Galois-qudit stabilizer codespace and one or more of that stabilizer code's error spaces.
Galois-qudit code Encodes \(K\)-dimensional Hilbert space into a \(q^n\)-dimensional (\(n\)-qudit) Hilbert space, with canonical qudit states \(|k\rangle\) labeled by elements \(k\)
of the Galois field \(GF(q)\) and with \(q\) being a power of a prime \(p\).
Galois-qudit color code Extension of the color code to 2D lattices of Galois qudits.
Galois-qudit expander code Galois-qudit CSS code constructed from a hypergraph product of expander codes.
Galois-qudit quantum RM code True \(q\)-Galois-qudit stabilizer code constructed from generalized Reed-Muller (GRM) codes via the Hermitian construction, the Galois-qudit CSS construction, or
directly from their parity-check matrices [84; Sec. 4.2].
Galois-qudit stabilizer code An \(((n,K,d))_q\) Galois-qudit code whose logical subspace is the joint eigenspace of commuting Galois-qudit Pauli operators forming the code's stabilizer group \(\mathsf{S}\). Traditionally, the logical subspace is the joint \(+1\) eigenspace, and the stabilizer group does not contain \(e^{i \phi} I\) for any \(\phi \neq 0\). The distance \(d\) is the minimum weight of a Galois-qudit Pauli string that implements a nontrivial logical operation in the code.
Galois-qudit surface code Extension of the surface code to 2D lattices of Galois qudits.
Generalized 2D color code Member of a family of non-Abelian 2D topological codes, defined by a finite group \( G \), that serves as a generalization of the color code (for which \(G=\mathbb{Z}_2\)).
Generalized Shor code Qubit CSS code constructed by concatenating two classical codes in a way that generalizes the Shor and quantum parity codes.
Generalized bicycle (GB) code A quasi-cyclic Galois-qudit CSS code constructed using a generalized version of the bicycle ansatz [85] from a pair of equivalent index-two quasi-cyclic linear codes. Equivalently, the codes can be constructed via the lifted-product construction for \(G\) being a cyclic group [6; Sec. III.E].
Generalized homological-product CSS code CSS code whose properties are determined from an underlying chain complex, which often consists of some type of product of other chain complexes.
Generalized homological-product code Stabilizer code whose properties are determined from an underlying chain complex, which often consists of some type of product of other chain complexes. The qubit CSS-to-homology correspondence yields an interpretation of codes in terms of manifolds, thus allowing for the use of various products from topology in constructing codes.
Generalized homological-product qubit CSS code Qubit CSS code whose properties are determined from an underlying chain complex, which often consists of some type of product of other chain complexes.
Generalized quantum Tanner code An extension of quantum Tanner codes to codes constructed from two commuting regular graphs with the same vertex set. This allows for code construction using finite
sets and Schreier graphs, yielding a broader family of square complexes.
Generalized quantum divisible code A level-\(\nu\) generalized quantum divisible code is a CSS code whose \(X\)-type stabilizers, in the symplectic representation, have zero norm and form a \((\nu,t)\)-null matrix with respect to some odd-integer vector \(t\) [86; Def. V.1]. Such codes admit gates at the \(\nu\)th level of the Clifford hierarchy. Such codes can also be level-lifted [86; Theorem V.6], \(\nu\to\nu+1\), which recursively yields towers of generalized divisible codes from a particular ground code.
Golden code Variant of the Guth-Lubotzky hyperbolic surface code that uses regular tessellations for 4-dimensional hyperbolic space.
Good QLDPC code Also called asymptotically good QLDPC codes. A family of QLDPC codes \([[n_i,k_i,d_i]]\) whose asymptotic rate \(\lim_{i\to\infty} k_i/n_i\) and asymptotic distance
\(\lim_{i\to\infty} d_i/n_i\) are both positive.
Gottesman-Kitaev-Preskill (GKP) code Quantum lattice code for a non-degenerate lattice, thereby admitting a finite-dimensional logical subspace. Codes on \(n\) modes can be constructed from lattices with \(2n\)-dimensional full-rank Gram matrices \(A\).
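For orientation, a standard single-mode example (a textbook special case, not part of the general lattice definition above) is the square-lattice GKP qubit, whose stabilizer group is generated by the commuting displacements \[ S_q = e^{2i\sqrt{\pi}\,\hat{q}}, \qquad S_p = e^{-2i\sqrt{\pi}\,\hat{p}}, \qquad [\hat{q},\hat{p}]=i~, \] which commute because the two phase-space translations enclose an area of \(4\pi\); the half-lattice displacements \(\overline{Z}=e^{i\sqrt{\pi}\hat{q}}\) and \(\overline{X}=e^{-i\sqrt{\pi}\hat{p}}\) then act as logical Pauli operators.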
Graph quantum code A stabilizer code on tensor products of \(G\)-valued qudits for Abelian \(G\) whose encoding isometry is defined using a graph [87; Eqs. (4-5)]. An analytical form of the codewords exists in terms of the adjacency matrix of the graph and bicharacters of the Abelian group [87]; see [88; Eq. (1)]. A graph quantum code for \(G=\mathbb{Z}_2\) contains a cluster state as one of its codewords and reduces to a cluster state when its logical dimension is one [89].
Group GKP code Group-based quantum code whose construction is based on nested subgroups \(H\subset K \subset G\). Logical subspace is spanned by basis states that are equal
superpositions of elements of cosets of \(H\) in \(K\), and can be finite- or infinite-dimensional.
Group-based QPC An \([[m r,1,\min(m,r)]]_G\) generalization of the QPC.
Group-based cluster-state code A code based on a group-based cluster state for a finite group \(G\) [90]. Such cluster states can be defined using a graph and conditional group multiplication operations. A group-based cluster state for \(G=GF(q)\) for prime-power \(q\) is called a Galois-qudit cluster state, while the state for \(G=\mathbb{Z}_q\) for positive \(q\) is called a modular-qudit cluster state.
Group-based quantum code Encodes a logical Hilbert space, finite- or infinite-dimensional, into a physical Hilbert space of \(L^2\)-normalizable functions on a second-countable unimodular group \(G\), i.e., a \(G\)-valued qudit or \(G\)-qudit. In other words, a group-valued qudit is a vector space whose canonical basis states \(|g\rangle\) are labeled by elements \(g\) of a group \(G\). For a \(K\)-dimensional logical subspace and for block codes defined on groups \(G^{n}\), the code can be denoted as \(((n,K))_G\). When the logical subspace is the Hilbert space of \(L^2\)-normalizable functions on \(G^{k}\), the code can be denoted as \([[n,k]]_G\). Ideal codewords may not be normalizable, depending on whether \(G\) is continuous and/or noncompact, so approximate versions have to be constructed in practice.
Group-based quantum repetition code An \([[n,1]]_G\) generalization of the quantum repetition code.
Group-representation code Code whose projection is onto an irreducible representation of a subgroup \(G\) of a group of canonical or distinguished unitary operations, e.g., transversal gates
in the case of block quantum codes, Gaussian operations in the case of bosonic codes, or \(SU(2)\) operations in the case of single-spin codes.
Groupoid toric code Extension of the Kitaev surface code from Abelian groups to groupoids, i.e., multi-fusion categories in which every morphism is an isomorphism [91]. Some models admit fracton-like features such as extensive ground-state degeneracy and excitations with restricted mobility. The robustness of these features has not yet been established.
Guth-Lubotzky code Hyperbolic surface code based on cellulations of certain four-dimensional manifolds. The manifolds are shown to have good homology and systolic properties for the purposes of code construction, with corresponding codes exhibiting linear rate.
Haah cubic code (CC) A 3D lattice stabilizer code on a length-\(L\) cubic lattice with one or two qubits per site. Admits two types of stabilizer generators with support on each cube of the lattice. In the non-CSS case, these two are related by spatial inversion. For CSS codes, we require that the product of all corner operators is the identity. We lastly require that there are no non-trivial string operators, meaning that single-site operators are a phase, and any period-one logical operator \(l \in \mathsf{S}^{\perp}\) is just a phase.
Haar-random qubit code Code whose codewords are generated in a process involving averaging over unitary operations distributed according to the Haar measure. Haar-random codes are used to prove statements about the capacity of a quantum channel to transmit quantum information [92], but encoding and decoding in such \(n\)-qubit codes quickly becomes impractical as \(n\to\infty\).
Hamiltonian-based code Code whose codespace corresponds to a set of energy eigenstates of a quantum-mechanical Hamiltonian, i.e., a Hermitian operator whose expectation value measures the energy of its underlying physical system. The codespace is typically a set of low-energy eigenstates or ground states, but can include subspaces of arbitrarily high energy. Hamiltonians whose eigenstates are the canonical basis elements are called classical; otherwise, a Hamiltonian is called quantum.
Hastings-Haah Floquet code DA code whose sequence of check-operator measurements is periodic. The first instance of a dynamical code.
Hayden-Nezami-Salton-Sanders bosonic code An \([[n,1]]_{\mathbb{R}}\) analog CSS code defined using homological structures associated with an \((n-1)\)-simplex. Relevant to the study of spacetime replication of quantum information [93].
Hemicubic code Homological code constructed out of cubes in high dimensions. The hemicubic code family has asymptotically diminishing soundness that scales as order \(\Omega(1/\log
n)\), locality of stabilizer generators scaling as order \(O(\log n)\), and distance of order \(\Theta(\sqrt{n})\).
Heptagon holographic code Holographic tensor-network code constructed out of a network of encoding isometries of the Steane code. Depending on how the isometry tensors are contracted, there
is a zero-rate and a finite-rate code family.
Hermitian Galois-qudit code An \([[n,k,d]]_q\) true Galois-qudit stabilizer code constructed from a Hermitian self-orthogonal linear code over \(GF(q^2)\) using the one-to-one correspondence
between the Galois-qudit Pauli matrices and elements of the Galois field \(GF(q^2)\).
Hermitian qubit code An \([[n,k,d]]\) stabilizer code constructed from a Hermitian self-orthogonal linear quaternary code using the \(GF(4)\) representation.
Hessian QSC Quantum spherical code encoding a logical qubit, with each codeword an equal superposition of vertices of a Hessian complex polyhedron. For the unit sphere, the codewords are \begin{split} |\overline{0}\rangle &= \frac{1}{\sqrt{27}}\left( \sum_{\mu,\nu=0}^{2} |0,\omega^{\mu},-\omega^{\nu}\rangle + |-\omega^{\nu},0,\omega^{\mu}\rangle + |\omega^{\mu},-\omega^{\nu},0\rangle \right), \tag*{(2)}\\ |\overline{1}\rangle &= \frac{1}{\sqrt{27}}\left( \sum_{\mu,\nu=0}^{2} |0,-\omega^{\mu},\omega^{\nu}\rangle + |\omega^{\nu},0,-\omega^{\mu}\rangle + |-\omega^{\mu},\omega^{\nu},0\rangle \right)~, \tag*{(3)} \end{split} where \(\omega = e^{\frac{2\pi i}{3}}\).
Figure I: Projection of the double Hessian code constellation with each copy of the Hessian logical constellation marked in a different colour.
Hexagonal GKP code Single-mode GKP qudit-into-oscillator code based on the hexagonal lattice. Offers the best error correction against displacement noise in a single mode due to the
optimal packing of the underlying lattice.
Hierarchical code Member of a family of \([[n,k,d]]\) qubit stabilizer codes resulting from a concatenation of a constant-rate QLDPC code with a rotated surface code. Concatenation allows for syndrome extraction to be performed on a 2D geometry while maintaining a threshold at the expense of a logarithmically vanishing rate. The growing syndrome-extraction circuit depth allows known bounds in the literature to be weakened [94,95].
High-dimensional expander (HDX) code CSS code constructed from a Ramanujan quantum code and an asymptotically good classical LDPC code using distance balancing. Ramanujan quantum codes are defined using Ramanujan complexes, which are simplicial complexes that generalize Ramanujan graphs [96,97]. Combining the quantum code obtained from a Ramanujan complex and a good classical LDPC code, which can be thought of as coming from a 1-dimensional chain complex, yields a new quantum code that is defined on a 2-dimensional chain complex. This 2-dimensional chain complex is obtained as the co-complex of the product of the two co-complexes. The length, dimension, and distance of the new quantum code depend on the input codes.
Holographic code Block quantum code whose features serve to model aspects of the AdS/CFT holographic duality and, more generally, quantum gravity.
Holographic hybrid code Holographic tensor-network code constructed out of alternating isometries of the five-qubit and \([[4,1,1,2]]\) Bacon-Shor codes.
Holographic tensor-network code Quantum Lego code whose encoding isometry forms a holographic tensor network, i.e., a tensor network associated with a tiling of hyperbolic space. Physical qubits are associated with uncontracted tensor legs at the boundary of the tessellation, while logical qubits are associated with uncontracted legs in the bulk. The number of layers emanating from the central point of the tiling is the radius of the code.
Homological code CSS-type extension of the Kitaev surface code to arbitrary manifolds. The version on a Euclidean manifold of some fixed dimension is called the \(D\)-dimensional "surface" or \(D\)-dimensional toric code.
Homological number-phase code A homological \(n\)-rotor code mapped into the Fock-state space of \(n\) oscillators by identifying non-negative rotor angular-momentum states with oscillator Fock
states. The resulting oscillator code can encode logical rotors or qudits due to the presence of torsion in the chain complex defining the original rotor code.
Homological product code CSS code formulated using the tensor product of two chain complexes (see Qubit CSS-to-homology correspondence).
Homological rotor code An extension of analog stabilizer codes to rotors. The code is stabilized by a continuous group of rotor \(X\)-type and \(Z\)-type generalized Pauli operators. Codes are formulated using an extension of the qubit CSS-to-homology correspondence to rotors. The homology group of the logical operators has a torsion component because the chain complexes are defined over the ring of integers, which yields codes with finite logical dimension, i.e., encoding logical qudits instead of only logical rotors. Such finite-dimensional encodings are not possible with analog stabilizer codes.
Honeycomb (6.6.6) color code Triangular color code defined on a patch of the 6.6.6 (honeycomb) tiling.
Honeycomb Floquet code Floquet code based on the Kitaev honeycomb model [98] whose logical qubits are generated through a particular sequence of measurements. A CSS version of the code has been proposed which loosens the restriction on which sequences to use [99]. The code has also been generalized to arbitrary non-chiral Abelian topological order.
Hopf-algebra cluster-state code Code based on a cluster state defined on qudits valued in a Hopf algebra.
Hopf-algebra quantum-double code Code whose codewords realize 2D gapped topological order defined on qudits valued in a Hopf algebra \(H\). The code Hamiltonian is a generalization [101,102] of the quantum double model from group algebras to Hopf algebras, as anticipated by Kitaev [43]. Boundaries of these models have been examined [103,104].
Hsieh-Halasz (HH) code Member of one of two families of fracton codes, named HH-I and HH-II, defined on a cubic lattice with two qubits per site. HH-I (HH-II) is a CSS (non-CSS) stabilizer
code family, with the former identified as a foliated type-I fracton code [3].
Hsieh-Halasz-Balents (HHB) code Member of one of two families of fracton codes, named HHB model A and B, defined on a cubic lattice with two qubits per site. Both are expected to be foliated type-I
fracton codes [3; Eqs. (D42-D43)].
Hybrid cat code A hybrid qubit-oscillator code admitting codewords that are tensor products of a single-qubit (e.g., photon polarization) state with either a cat state or a coherent state.
Hybrid qudit-oscillator code Encodes a \(K\)-dimensional logical Hilbert space into \(n_1\) modular qudits of dimension \(q\) and \(n_2 \neq 0\) oscillators, i.e., the Hilbert space of \(L^2\)-normalizable functions on \(\mathbb{Z}_q^{n_1} \times \mathbb{R}^{n_2}\). In photonic systems, states occupying multiple degrees of freedom of a photon (e.g., frequency, amplitude, and polarization) are called hyper-entangled states [105].
Hyperbolic Floquet code Floquet code whose check-operators correspond to edges of a hyperbolic lattice of degree 3.
Hyperbolic color code An extension of the color code construction to hyperbolic manifolds. As opposed to there being only three types of uniform three-valent and three-colorable lattice tilings in the 2D Euclidean plane, there is an infinite number of admissible hyperbolic tilings in the 2D hyperbolic plane [106]. Certain double covers of hyperbolic tilings also yield admissible tilings [107]. Other admissible hyperbolic tilings can be obtained via a fattening procedure [108]; see also a construction based on the more general quantum pin codes [109].
Hyperbolic surface code An extension of the Kitaev surface code construction to hyperbolic manifolds. Given a cellulation of a manifold, qubits are put on \(i\)-dimensional faces, \(X\)-type stabilizers are associated with \((i-1)\)-faces, while \(Z\)-type stabilizers are associated with \((i+1)\)-faces.
Hypergraph product (HGP) code A member of a family of CSS codes whose stabilizer generator matrix is obtained from a hypergraph product of two classical linear binary codes. Codes from hypergraph
products in higher dimension are called higher-dimensional HGP codes [110].
Hyperinvariant tensor-network (HTN) code Holographic tensor-network error-detecting code constructed out of a hyperinvariant tensor network [111], i.e., a MERA-like network admitting a hyperbolic geometry. The network is defined using two layers A and B, with constituent tensors satisfying isometry conditions (a.k.a. multitensor constraints).
Hypersphere product code Homological code based on products of hyperspheres. The hypersphere product code family has asymptotically diminishing soundness that scales as order \(O(1/\log (n)^
2)\), locality of stabilizer generators scaling as order \(O(\log n/ \log\log n)\), and distance of order \(\Theta(\sqrt{n})\).
Jordan-Wigner transformation code A mapping between qubit Pauli strings and Majorana operators that can be thought of as a trivial \([[n,n,1]]\) code. The mapping is best described as converting a
chain of \(n\) qubits into a chain of \(2n\) Majorana modes (i.e., \(n\) fermionic modes). It maps Majorana operators into Pauli strings of weight \(O(n)\).
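The mapping is short enough to write down explicitly. The sketch below (plain Python, representing Pauli strings as text) uses the standard convention \(\gamma_{2j} = Z_0\cdots Z_{j-1}X_j\) and \(\gamma_{2j+1} = Z_0\cdots Z_{j-1}Y_j\), which makes the \(O(n)\) weight of high-index Majorana operators explicit.

```python
# Sketch of the standard Jordan-Wigner convention, mapping Majorana mode
# gamma_m on n qubits to a Pauli string:
#   gamma_{2j}   = Z_0 ... Z_{j-1} X_j,
#   gamma_{2j+1} = Z_0 ... Z_{j-1} Y_j.
def majorana_to_pauli(m, n):
    j = m // 2
    op = "X" if m % 2 == 0 else "Y"
    return "".join(["Z"] * j + [op] + ["I"] * (n - j - 1))

n = 5
for m in [0, 1, 8, 9]:
    print(m, majorana_to_pauli(m, n))
# gamma_8 on 5 qubits -> "ZZZZX": the Pauli weight grows linearly with the mode index.
```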
Jump code A CE code designed to detect and correct AD errors. An \(((n,K))\) jump code is denoted as \(((n,K,t))_w\) (which conflicts with modular-qudit notation), where \(t\) is the maximum number of qubits that can be corrected after each one has undergone a jump error \(|0\rangle\langle 1|\), and where each codeword is a uniform superposition of qubit basis states with Hamming weight \(w\).
Kim-Preskill-Tang (KPT) code A quantum error-correcting code that protects the encoded interior of a black hole from computationally bounded exterior observers. Under the assumption that the Hawking radiation emitted by an old black hole is pseudorandom, there exists a subspace of the radiation system that encodes the black hole interior, entangled with the late outgoing Hawking quanta. The logical operators of this code commute with efficient operations acting on the radiation, protecting the interior up to corrections exponentially small in the black hole's entropy.
Kitaev chain code An \([[n,1,1]]_{f}\) Majorana stabilizer code forming the ground-state space of the Kitaev Majorana chain (a.k.a. Kitaev Majorana wire) in its fermionic topological phase, which is unitarily equivalent to the 1D quantum Ising model in the symmetry-breaking phase via the Jordan-Wigner transformation. The code is usually defined using the algebra of two anti-commuting Majorana operators called Majorana zero modes (MZMs) or Majorana edge modes (MEMs).
Kitaev current-mirror qubit code Member of the family of \([[2n,(0,2),(2,n)]]_{\mathbb{Z}}\) homological rotor codes storing a logical qubit on a thin Möbius strip. The ideal code can be obtained from a Josephson-junction [112] system [113].
Kitaev honeycomb code Code whose logical subspace is labeled by different fusion outcomes of Ising anyons present in the Ising-anyon topological phase of the Kitaev honeycomb model [98]. Each logical qubit is constructed out of four Majorana operators, which admit braiding-based gates due to their non-Abelian statistics and which can be used for topological quantum computation. Ising anyons also exist in other phases, such as the fractional quantum Hall phase [114].
Kitaev surface code A family of Abelian topological CSS stabilizer codes whose generators are few-body \(X\)-type and \(Z\)-type Pauli strings associated to the stars and plaquettes, respectively, of a cellulation of a two-dimensional surface (with a qubit located at each edge of the cellulation). Codewords correspond to ground states of the surface code Hamiltonian, and error operators create or annihilate pairs of anyonic charges or vortices.
Knill code A group representation code whose projection is onto an irrep of a normal subgroup of the group formed by a nice error basis. Knill codes yield stabilizer-like codes
based on error bases that are non-Pauli but that nevertheless maintain many of the useful features of Pauli-type bases.
La-cross code Code constructed using the hypergraph product of two copies of a cyclic LDPC code. The construction uses cyclic LDPC codes with generating polynomials \(1+x+x^k\) for some \(k\). Using a length-\(n\) seed code yields a \([[2n^2,2k^2]]\) family for periodic boundary conditions and a \([[(n-k)^2+n^2,k^2]]\) family for open boundary conditions.
Ladder Floquet code Floquet code defined on a ladder qubit geometry, with one qubit per vertex. The check operators consist of \(ZZ\) checks on each rung and alternating \(XX\) and \(YY\) checks on the legs.
Landau-level spin code Approximate quantum code that encodes a qudit in the finite-dimensional Hilbert space of a single spin, i.e., a spherical Landau level. Codewords are approximately orthogonal Landau-level spin coherent states whose orientations are spaced maximally far apart along a great circle (equator) of the sphere. The larger the spin, the better the performance.
Lattice stabilizer code A geometrically local modular-qudit or Galois-qudit stabilizer code with qudits organized on a lattice modeled by the additive group \(\mathbb{Z}^D\) for spatial dimension \(D\). On an infinite lattice, its stabilizer group is generated by few-site Pauli operators and their translations, in which case the code is called a translationally invariant stabilizer code. Boundary conditions have to be imposed on the lattice in order to obtain finite-dimensional versions. Lattice defects and boundaries between different codes can also be introduced.
Layer code Member of a family of 3D lattice CSS codes with stabilizer generator weights \(\leq 6\) that are obtained by coupling layers of 2D surface codes according to the Tanner graph of a QLDPC code. Geometric locality is maintained because, instead of being concatenated, each pair of parallel surface-code squares is fused (or quasi-concatenated) with perpendicular surface-code squares via lattice surgery.
Lift-connected surface (LCS) code Member of one of several families of lifted-product codes that consist of sparsely interconnected stacks of surface codes.
Lifted-product (LP) code Galois-qudit code that utilizes the notion of a lifted product in its construction. Lifted products of certain classical Tanner codes are the first (asymptotically)
good QLDPC codes.
Local Haar-random circuit qubit code An \(n\)-qubit code whose codewords are a pair of approximately locally indistinguishable states produced by starting with any two orthogonal \(n\)-qubit states and acting with a random unitary circuit of depth polynomial in \(n\). Two states are locally indistinguishable if they cannot be distinguished by local measurements. A single layer of the encoding circuit is composed of about \(n/2\) two-qubit nearest-neighbor gates run in parallel, with each gate drawn randomly from the Haar distribution on two-qubit unitaries.
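A single layer of such an encoder is easy to sketch. The code below (NumPy/SciPy; the qubit count and depth are illustrative, and `brickwork_layer` is a hypothetical helper) draws \(\lfloor n/2\rfloor\) or so Haar-random two-qubit gates for one brickwork layer, alternating the pairing offset between layers.

```python
import numpy as np
from scipy.stats import unitary_group

# Sketch of one brickwork layer of a local Haar-random circuit on n qubits:
# about n/2 two-qubit gates act in parallel on nearest-neighbor pairs, each
# drawn from the Haar measure on U(4). Stacking poly(n)-many such layers with
# alternating offsets yields the approximately locally indistinguishable
# codeword pairs described above.
def brickwork_layer(n, offset, rng=np.random.default_rng()):
    pairs = [(i, i + 1) for i in range(offset, n - 1, 2)]
    return {pair: unitary_group.rvs(4, random_state=rng) for pair in pairs}

layers = [brickwork_layer(8, depth % 2) for depth in range(4)]
print([list(layer.keys()) for layer in layers])  # gate placements per layer
```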
Long-range enhanced surface code (LRESC) Code constructed using the hypergraph product of two copies of a concatenated LDPC-repetition seed code. This family interpolates between surface codes and hypergraph codes since the hypergraph product of two repetition codes yields the planar surface code. The construction uses small \([3,2,2]\) and \([6,2,4]\) LDPC codes concatenated with \([4,1,4]\) and \([2,1,2]\) repetition codes, respectively. An example using a \([5,2,3]\) code is also presented.
Loop toric code A generalization of the Kitaev surface code defined on a 4D lattice. The code is called a \((2,2)\) toric code because it admits 2D membrane \(Z\)-type and \(X\)
-type logical operators. Both types of operators create 1D (i.e., loop) excitations at their edges. The code serves as a self-correcting quantum memory [115,116].
Lossless expander balanced-product code QLDPC code constructed by taking the balanced product of lossless expander graphs. Using one part of a quantum-code chain complex constructed with one-sided lossless expanders [117] yields a \(c^3\)-LTC [118]. Using two-sided expanders, which are only conjectured to exist, yields an asymptotically good QLDPC code family [119].
Magnon code An \(n\)-spin approximate code whose codespace of \(k=\Omega(\log n)\) qubits is efficiently described in terms of particular matrix product states or Bethe ansatz
tensor networks. Magnon codewords are low-energy excited states of the frustration-free Heisenberg-XXX model Hamiltonian [120].
Majorana box qubit An \([[n,1,2]]_{f}\) Majorana stabilizer code forming the even-fermion-parity ground-state subspace of two parallel Kitaev Majorana chains in their fermionic topological phase. The \([[2,1,2]]_{f}\) version is called the tetron Majorana code. A \([[3,2,2]]_{f}\) extension using three Kitaev chains and housing two logical qubits of the same parity is called the hexon Majorana code. Similarly, octon, decon, and dodecon codes are defined by the positive-parity subspace of \(4\), \(5\), and \(6\) fermionic modes, respectively [121].
Majorana checkerboard code A Majorana analogue of the X-cube model defined on a cubic lattice. The code admits weight-eight Majorana stabilizer generators on the eight vertices of each cube of
a checkerboard sublattice.
Majorana color code Majorana analogue of the color code defined on a 2D tricolorable lattice and constructed out of Majorana box qubit codes placed on patches of the lattice.
Majorana loop stabilizer code A single error-correcting fermion-into-qubit encoding defined on a 2D qubit lattice whose stabilizers are associated with loops in the lattice.
Majorana stabilizer code A stabilizer code whose stabilizers are products of an even number of Majorana fermion operators, analogous to Pauli strings for a traditional stabilizer code and referred to as Majorana stabilizers. The codespace is the mutual \(+1\) eigenspace of all Majorana stabilizers. In such systems, Majorana fermions may either be considered individually or paired into creation and annihilation operators for fermionic modes. Codes can be denoted as \([[n,k,d]]_{f}\) [122], where \(n\) is the number of fermionic modes (equivalently, \(2n\) Majorana modes).
Majorana surface code Majorana analogue of the surface code defined on a 2D lattice and constructed out of Majorana box qubit codes placed on patches of the lattice.
Matching code Member of a class of qubit stabilizer codes based on the Abelian phase of the Kitaev honeycomb model.
Matrix-model code Multi-mode Fock-state bosonic approximate code derived from a matrix model, i.e., a non-Abelian bosonic gauge theory with a large gauge group. The model's degrees of freedom are matrix-valued bosons \(a\), each consisting of \(N^2\) harmonic oscillator modes and subject to an \(SU(N)\) gauge symmetry.
Modular-qudit CSS code An \(((n,K,d))_q\) modular-qudit stabilizer code admitting a set of stabilizer generators that are either \(Z\)-type or \(X\)-type Pauli strings. Codes can be defined from two classical codes and/or chain complexes over the ring \(\mathbb{Z}_q\) via an extension of the qubit CSS-to-homology correspondence to modular qudits. The homology group of the logical operators has a torsion component because the chain complexes are defined over a ring, which yields codes whose logical dimension is not a power of \(q\).
Modular-qudit CWS code A CWS code for modular qudits, defined using a modular-qudit cluster state and a set of modular-qudit \(Z\)-type Pauli strings defined by a \(q\)-ary classical code
over \(\mathbb{Z}_q\).
Modular-qudit DA code Dynamically-generated stabilizer-based modular-qudit code whose (not necessarily periodic) sequence of few-body measurements implements state initialization, logical
gates and error detection.
Modular-qudit GKP code Modular-qudit analogue of the GKP code. Encodes a qudit into a larger qudit and protects against Pauli shifts up to some maximum value.
Modular-qudit USt code A modular-qudit code whose codespace consists of a direct sum of a modular-qudit stabilizer codespace and one or more of that stabilizer code's error spaces.
Modular-qudit cluster-state code A code based on a modular-qudit cluster state.
Modular-qudit code Encodes a \(K\)-dimensional Hilbert space into a \(q^n\)-dimensional (\(n\)-qudit) Hilbert space, with canonical qudit states \(|k\rangle\) labeled by elements \(k\) of the group \(\mathbb{Z}_q\) of integers modulo \(q\). Usually denoted as \(((n,K))_{\mathbb{Z}_q}\) or \(((n,K,d))_{\mathbb{Z}_q}\) whenever the code's distance \(d\) is defined, and with \(q=p\) when the dimension is prime.
Modular-qudit color code Extension of the color code to lattices of modular qudits. Codes are defined analogously to qubit color codes on suitable lattices of any spatial dimension, but a directionality is required in order to make the modular-qudit stabilizers commute. This can be done by puncturing a hyperspherical lattice [123] or constructing a star-bipartition; see [124; Sec. III]. Logical dimension is determined by the genus of the underlying surface (for closed surfaces), types of boundaries (for open surfaces), and/or any twist defects present.
Modular-qudit honeycomb Floquet code A modular-qudit extension of the honeycomb Floquet code.
Modular-qudit stabilizer code An \(((n,K,d))_q\) modular-qudit code whose logical subspace is the joint eigenspace of commuting qudit Pauli operators forming the code's stabilizer group \(\mathsf{S}\). Traditionally, the logical subspace is the joint \(+1\) eigenspace, and the stabilizer group does not contain \(e^{i \phi} I\) for any \(\phi \neq 0\). The distance \(d\) is the minimum weight of a qudit Pauli string that implements a nontrivial logical operation in the code.
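To make the qudit Pauli algebra behind such stabilizer groups concrete, here is a minimal Python sketch (using only numpy; the clock and shift matrices are standard, but the snippet is illustrative rather than taken from any referenced construction) for \(q=3\):

```python
import numpy as np

q = 3
omega = np.exp(2j * np.pi / q)
# Shift ("X-type") and clock ("Z-type") qudit Pauli operators.
X = np.roll(np.eye(q), 1, axis=0)        # X|k> = |k+1 mod q>
Z = np.diag(omega ** np.arange(q))       # Z|k> = omega^k |k>
# Generalized commutation relation: Z X = omega X Z.
print(np.allclose(Z @ X, omega * X @ Z))                      # True
print(np.allclose(np.linalg.matrix_power(X, q), np.eye(q)))   # X^q = I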
Modular-qudit surface code Extension of the surface code to prime-dimensional [43,125] and more general modular qudits [126]. Stabilizer generators are few-body \(X\)-type and \(Z\)-type Pauli strings associated to the stars and plaquettes, respectively, of a tessellation of a two-dimensional surface. Since qudits have more than one \(X\)- and \(Z\)-type operator, various sets of stabilizer generators can be defined. Ground-state degeneracy and the associated phase depend on the qudit dimension and the choice of stabilizer generators.
Molecular code Encodes finite-dimensional Hilbert space into the Hilbert space of \(L^2\)-normalizable functions on the group \(SO_3\). Construction is based on nested subgroups \(H\subset K \subset SO_3\), where \(H,K\) are finite. The \(|K|/|H|\)-dimensional logical subspace is spanned by basis states that are equal superpositions of elements of cosets of \(H\) in \(K\).
Monitored random-circuit code Error-correcting code arising from a monitored random circuit. Such a circuit is described by a series of intermittent random local projective Pauli measurements interspersed with random unitary time-evolution operators. An important sub-family consists of Clifford monitored random circuits, where unitaries are sampled from the Clifford group [127]. When the rate of projective measurements is independently controlled by a probability parameter \(p\), there can exist two stable phases, one described by volume-law entanglement entropy and the other by area-law entanglement entropy. The phases and their transition can be understood from the perspective of quantum error correction, information scrambling, and channel capacities [128,129].
Monolithic quantum code A code constructed in a single quantum system, i.e., a physical space that is not treated as a tensor product of \(n\) identical subsystems. Examples include codes
in a single qudit, spin, oscillator, or molecule.
Movassagh-Ouyang Hamiltonian code A family of codes derived via an algorithm that takes as input any binary classical code and outputs a quantum code (the framework can be extended to \(q\)-ary codes). The algorithm is probabilistic but succeeds almost surely if the classical code is random. An explicit code construction does exist for linear-distance codes encoding one logical qubit. For finite-rate codes, there is no rigorous proof that the construction algorithm succeeds, and approximate constructions are described instead.
Multi-fusion string-net code Family of codes resulting from the string-net construction but whose input is a unitary multi-fusion category (as opposed to a unitary fusion category).
NTRU-GKP code Multi-mode GKP code whose underlying lattice is utilized in variations of the NTRU cryptosystem [130]. Randomized constructions yield constant-rate GKP code families
whose largest decodable displacement length scales as \(O(\sqrt{n})\) with high probability.
Neural network code An approximate code obtained from a numerical optimization involving a reinforcement learning agent.
Number-phase code Bosonic rotation code consisting of superpositions of Pegg-Barnett phase states [131], \(|\phi\rangle \equiv \frac{1}{\sqrt{2\pi}}\sum_{n=0}^{\infty} \mathrm{e}^{\mathrm{i} n \phi} |n\rangle\). Since phase states and thus the ideal codewords are not normalizable, approximate versions need to be constructed. The codes' key feature is that, in the ideal case, phase measurement has zero uncertainty, making it a good candidate for a syndrome measurement.
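As an illustration of why approximate versions are needed, the following numpy sketch builds phase states truncated at a finite Fock cutoff (the \(1/\sqrt{N}\) normalization is a choice appropriate only for the truncated setting) and shows that two distinct phase states only become orthogonal as the cutoff grows:

```python
import numpy as np

def phase_state(phi, cutoff):
    """Truncated Pegg-Barnett phase state: sum_n e^{i n phi}|n> / sqrt(cutoff)."""
    n = np.arange(cutoff)
    return np.exp(1j * n * phi) / np.sqrt(cutoff)

# Overlap of two truncated phase states shrinks as the cutoff grows,
# mimicking the ideal (non-normalizable) limit of orthogonal phase states.
for N in (10, 100, 1000):
    print(N, abs(np.vdot(phase_state(0.0, N), phase_state(1.0, N))))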
Numerically optimized bosonic code Bosonic Fock-state code obtained from a numerical minimization procedure, e.g., from enforcing error-correction criteria against some number of losses while minimizing average occupation number. Useful single-mode codes can be determined using basic numerical optimization [10,132], semidefinite-program recovery/encoding optimization [133,134], or reinforcement learning [135,136].
One-hot quantum code Encoding of a \(q\)-dimensional qudit into the single-excitation subspace of \(q\) modes. The \(j\)th logical state is the multi-mode Fock state with one photon in
mode \(j\) and zero photons in the other modes. This code is useful for encoding and performing operations on qudits in multiple qubits [137โ141].
Oscillator-into-oscillator GKP code Multimode GKP code with an infinite-dimensional logical space. Can be obtained by considering an \(n\)-mode GKP code with a finite-dimensional logical space, removing stabilizers such that the logical space becomes infinite dimensional, and applying a Gaussian circuit.
Oscillator-into-oscillator code Encodes \(k\) bosonic modes into \(n\) bosonic modes.
Ouyang-Chao constant-excitation PI code A constant-excitation PI Fock-state code whose construction is based on integer partitions.
PI qubit code Block quantum code defined on two-dimensional subsystems such that any permutation of the subsystems leaves any codeword invariant.
Pair-cat code Two- or higher-mode extension of cat codes whose codewords are right eigenstates of powers of products of the modes' lowering operators. Many gadgets for cat codes
have two-mode pair-cat analogues, with the advantage being that such gates can be done in parallel with a dissipative error-correction process.
Pastawski-Yoshida-Harlow-Preskill (HaPPY) code Holographic code constructed out of a network of hexagonal perfect tensors that tessellates hyperbolic space. The code serves as a minimal model for several aspects of the AdS/CFT holographic duality [142] and potentially a dS/CFT duality [143]. It has been generalized to higher dimensions [144] and to include gauge-like degrees of freedom on the links of the tensor network [145,146]. All boundary global symmetries must be dual to bulk gauge symmetries, and vice versa [147].
Penrose tiling code Encodes quantum information into superpositions of rotated and translated versions of different Penrose tilings of \(\mathbb{R}^n\).
Perfect quantum code A type of block quantum code whose parameters satisfy the quantum Hamming bound with equality.
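As a worked instance of the bound, the following Python sketch checks the nondegenerate quantum Hamming bound for qubit codes, \(2^k \sum_{j=0}^{t} 3^j \binom{n}{j} \leq 2^n\) with \(t=\lfloor(d-1)/2\rfloor\), which the \([[5,1,3]]\) code saturates:

```python
from math import comb

def hamming_bound(n, k, d):
    """Nondegenerate quantum Hamming bound for qubit codes:
    2^k * sum_{j=0}^{t} 3^j C(n, j) <= 2^n, with t = (d-1)//2."""
    t = (d - 1) // 2
    lhs = 2 ** k * sum(3 ** j * comb(n, j) for j in range(t + 1))
    return lhs, 2 ** n, lhs <= 2 ** n

print(hamming_bound(5, 1, 3))   # (32, 32, True): the [[5,1,3]] code saturates it
print(hamming_bound(9, 1, 3))   # (56, 512, True): the Shor code does not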
Perfect-tensor code Block quantum code encoding one subsystem into \(n\) subsystems whose encoding isometry is a perfect tensor. This code stems from an AME\((n,q)\) state, or equivalently, an \(((n+1,1,\lfloor (n+1)/2 \rfloor + 1))_{\mathbb{Z}_q}\) code.
Permutation-invariant (PI) code Block quantum code such that any permutation of the subsystems leaves any codeword invariant. In other words, the automorphism group of the code contains the
symmetric group \(S_n\).
Planar-perfect-tensor code Block quantum code whose encoding isometry is a block perfect tensor, i.e., a tensor which remains an isometry under partitions into two contiguous components in a
fixed plane. This code stems from a planar maximally entangled state [148].
Post-selected PI code PI qubit code whose recovery succeeds at protecting against AD errors with a success probability less than one.
Prime-qudit RM code Modular-qudit stabilizer code constructed from generalized Reed-Muller (GRM) codes or their duals via the modular-qudit CSS construction. An odd-prime-qudit CSS code
family constructed from first-order punctured GRM codes transversally implements a diagonal gate at any level of the qudit Clifford hierarchy [149].
Prime-qudit RS code Prime-qudit CSS code constructed using two RS codes.
Prime-qudit triorthogonal code An \(m \times n\) matrix over \(GF(p)=\mathbb{Z}_p\) is triorthogonal if its rows \(r_1, \ldots, r_m\) satisfy \(|r_i \cdot r_j| = 0\) and \(|r_i \cdot r_j \cdot r_k| = 0\) modulo \(p\), where addition and multiplication are done on \(GF(p)\). The triorthogonal prime-qudit CSS code associated with the matrix is constructed by mapping non-zero entries in self-orthogonal rows to \(X\) operators, and \(Z\) operators for each row in the orthogonal complement [150,151].
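A small Python sketch of the triorthogonality check above (the example matrix is illustrative and not taken from [150,151]; including repeated indices in the check is one common convention):

```python
import itertools
import numpy as np

def is_triorthogonal(M, p):
    """Check |r_i . r_j| = 0 and |r_i . r_j . r_k| = 0 mod p over all index
    combinations (repeats included, one common convention)."""
    rows = range(M.shape[0])
    pair_ok = all((M[i] * M[j]).sum() % p == 0
                  for i, j in itertools.product(rows, repeat=2))
    triple_ok = all((M[i] * M[j] * M[k]).sum() % p == 0
                    for i, j, k in itertools.product(rows, repeat=3))
    return pair_ok and triple_ok

# Illustrative binary example (p = 2), not a matrix from the referenced codes:
M = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 1, 1, 1]])
print(is_triorthogonal(M, 2))  # True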
Projective-plane surface code A family of Kitaev surface codes on the non-orientable 2-dimensional compact manifold \(\mathbb{R}P^2\) (in contrast to a genus-\(g\) surface). Whereas genus-\(g\) surface codes encode \(2g\) logical qubits, qubit codes on \(\mathbb{R}P^2\) encode a single logical qubit.
Purity-testing stabilizer code A qubit stabilizer code that is constructed from a normal rational curve and that is relevant to testing the purity of an entangled Bell state stabilized by two
parties [152].
Quantum AG code A Galois-qudit CSS code constructed using two linear AG codes.
Quantum Golay code A \([[23, 1, 7]]\) self-dual CSS code with eleven stabilizer generators of each type, and with each generator being weight eight.
Quantum Goppa code A Galois-qudit CSS code constructed using two Goppa codes.
Quantum LDPC (QLDPC) code Member of a family of \([[n,k,d]]\) modular-qudit or Galois-qudit stabilizer codes for which the number of sites participating in each stabilizer generator and the number of stabilizer generators that each site participates in are both bounded by a constant \(w\) as \(n\to\infty\); can be denoted by \([[n,k,d,w]]\). Sometimes, the two parameters are explicitly stated: each site of an \((l,w)\)-regular QLDPC code is acted on by \(\leq l\) generators of weight \(\leq w\). QLDPC codes can correct many stochastic errors far beyond the distance, which may not scale as favorably. Together with more accurate, faster, and easier-to-parallelize measurements than those of general stabilizer codes, this property makes QLDPC codes interesting in practice.
Quantum Reed-Muller code A CSS code formed from a classical Reed-Muller (RM) code or its punctured/shortened versions. Such codes often admit transversal logical gates in the Clifford hierarchy.
Quantum Tamo-Barg (QTB) code A member of a family of Galois-qudit CSS codes whose underlying classical codes consist of Tamo-Barg codes together with specific low-weight codewords. Folded versions of QTB codes, or FQTB codes, defined on qudits whose dimension depends on \(n\), yield explicit examples of QLRCs of arbitrary locality \(r\) [153; Thm. 2].
Quantum Tanner code Member of a family of QLDPC codes based on two compatible classical Tanner codes defined on a two-dimensional Cayley complex, a complex constructed from Cayley graphs of groups. For certain choices of codes and complex, the resulting codes have asymptotically good parameters. This construction has been generalized to Schreier graphs [154].
Quantum check-product code CSS code constructed from an extension of check product (between two classical codes) to a product between a classical and a quantum code.
Quantum convolutional code One-dimensional translationally invariant qubit stabilizer code whose stabilizer group can be partitioned into constant-size subsets of constant support and of constant overlap between neighboring sets. Initially formulated as a quantum analogue of convolutional codes, which were designed to protect a continuous and never-ending stream of information. Precise formulations sometimes begin with a finite-dimensional lattice, with the intent to take the thermodynamic limit; logical dimension can be infinite as well.
Quantum data-syndrome (QDS) code Stabilizer code designed to correct both data qubit errors and syndrome measurement errors simultaneously due to extra redundancy in its stabilizer generators.
Quantum divisible code A level-\(\nu\) quantum divisible code is a CSS code whose \(X\)-type stabilizers form a \(\nu\)-even linear binary code in the symplectic representation and which admits a transversal gate at the \(\nu\)th level of the Clifford hierarchy. A CSS code is doubly even (triply even) if all \(X\)-type stabilizers have weight divisible by four (eight), i.e., if they form a doubly even (triply even) linear binary code.
Quantum duadic code True Galois-qudit stabilizer code constructed from \(q\)-ary duadic codes via the Hermitian construction or the Galois-qudit CSS construction.
Quantum error-correcting code (QECC) Encodes quantum information in a (logical) subspace of a (physical) Hilbert space such that it is possible to recover said information from errors that act as linear maps on the physical space.
Quantum expander code CSS code constructed from a hypergraph product of bipartite expander graphs [58] with bounded left and right vertex degrees. For every bipartite graph there is an associated matrix (the parity check matrix) with columns indexed by the left vertices, rows indexed by the right vertices, and 1 entries whenever a left and right vertex are connected. This matrix can serve as the parity check matrix of a classical code. Two bipartite expander graphs can be used to construct a quantum CSS code (the quantum expander code) by using the parity check matrix of one as \(X\) checks, and the parity check matrix of the other as \(Z\) checks.
Quantum lattice code Bosonic stabilizer code on \(n\) bosonic modes whose stabilizer group is an infinite countable group of oscillator displacement operators which implement lattice
translations in phase space.
Quantum locally recoverable code A QLRC of locality \(r\) is a block quantum code whose code states can be recovered after a single erasure by performing a recovery map on at most \(r\) subsystems.
Quantum locally testable code (QLTC) A local commuting-projector Hamiltonian-based block quantum code which has a nonzero average-energy penalty for creating large errors. Informally, QLTC error states that are far away from the codespace have to be excited states of a number of the code's local projectors that scales linearly with \(n\).
Quantum low-weight check (QLWC) code Member of a family of \([[n,k,d]]\) modular-qudit or Galois-qudit stabilizer codes for which the number of sites participating in each stabilizer generator is bounded by a constant as \(n\to\infty\).
Quantum maximum-distance-separable (MDS) code A type of block quantum code whose parameters satisfy the quantum Singleton bound with equality.
Quantum multi-dimensional parity-check (QMDPC) code High-rate low-distance CSS code whose qubits lie on a \(D\)-dimensional rectangle, with \(X\)-type stabilizer generators defined on each \((D-1)\)-dimensional rectangle. The \(Z\)-type stabilizer generators are defined via permutations in order to commute with the \(X\)-type generators.
Quantum parity code (QPC) A \([[m_1 m_2,1,\min(m_1,m_2)]]\) CSS code family obtained from concatenating an \(m_1\)-qubit phase-flip repetition code with an \(m_2\)-qubit bit-flip repetition code.
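A minimal numpy sketch of the QPC codewords, assuming the standard concatenated form in which each logical codeword is a tensor product of \(m_1\) GHZ-like blocks on \(m_2\) qubits (for \(m_1=m_2=3\) this reproduces the Shor code):

```python
import numpy as np

def qpc_logical(m1, m2, sign):
    """Logical codeword of the [[m1*m2, 1, min(m1,m2)]] quantum parity code:
    a tensor product of m1 blocks (|0...0> + sign |1...1>)/sqrt(2) on m2 qubits,
    with sign = +1 for logical 0 and sign = -1 for logical 1."""
    block = np.zeros(2 ** m2)
    block[0], block[-1] = 1.0, float(sign)
    block /= np.sqrt(2)
    state = np.array([1.0])
    for _ in range(m1):
        state = np.kron(state, block)
    return state

zero_L, one_L = qpc_logical(3, 3, +1), qpc_logical(3, 3, -1)  # Shor code
print(np.vdot(zero_L, one_L))        # 0.0: orthogonal codewords
print(np.linalg.norm(zero_L))        # 1.0: normalized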
Quantum pin code Member of a family of CSS codes that encompasses both quantum Reed-Muller and color codes and that is defined using intersections of pinned sets.
Quantum quadratic-residue (QR) code Galois-qudit \([[n,1]]_q\) pure self-dual CSS code constructed from a dual-containing QR code via the Galois-qudit CSS construction. For \(q\) not divisible by \(n\), its distance satisfies \(d^2-d+1 \geq n\) when \(n \equiv 3\) modulo 4 [155; Thm. 40] and \(d \geq \sqrt{n}\) when \(n\equiv 1\) modulo 4 [155; Thm. 41].
Quantum rainbow code A CSS code whose qubits are associated with vertices of a simplex graph with \(m+1\) colors.
Quantum repetition code Encodes \(1\) qubit into \(n\) qubits according to \(|0\rangle\to|\phi_0\rangle^{\otimes n}\) and \(|1\rangle\to|\phi_1\rangle^{\otimes n}\). The code is called a
bit-flip code when \(|\phi_i\rangle = |i\rangle\), and a phase-flip code when \(|\phi_0\rangle = |+\rangle\) and \(|\phi_1\rangle = |-\rangle\).
Quantum spatially coupled (SC-QLDPC) code QLDPC code whose stabilizer generator matrix resembles the parity-check matrix of SC-LDPC codes. There exist CSS [156] and stabilizer constructions [157]. In either case, the stabilizer generator matrix is constructed by "spatially" coupling sub-matrix blocks in chain-like fashion (or, more generally, in grid-like fashion) to yield a band matrix. The sub-matrix blocks have to satisfy certain conditions amongst themselves so that the resulting band matrix is a stabilizer generator matrix. Matrices corresponding to translationally invariant chains are called time-invariant, and otherwise are called time-variant.
Quantum spherical code (QSC) Code whose codewords are superpositions of points on an \(n\)-dimensional real or complex sphere. Such codes can in principle be defined on any configuration space
housing a sphere, but the focus of this entry is on QSCs constructed out of coherent-state constellations.
Quantum synchronizable code A qubit stabilizer code designed to protect against synchronization errors (a.k.a. misalignment), which are errors that misalign the code block in a larger block by
one or more locations.
Quantum tensor-product code CSS code constructed from a tensor code. In some cases, only one of the classical codes forming the tensor code needs to be self-orthogonal.
Quantum turbo code A quantum version of the turbo code, obtained from an interleaved serial quantum concatenation [158; Def. 30] of quantum convolutional codes.
Quantum twisted code Hermitian code constructed from twisted BCH codes.
Quantum-double code Group-GKP stabilizer code whose codewords realize 2D modular gapped topological order defined by a finite group \(G\). The code's generators are few-body operators associated to the stars and plaquettes, respectively, of a tessellation of a two-dimensional surface (with a qudit of dimension \(|G|\) located at each edge of the tessellation).
Quasi-cyclic QLDPC code A Galois-qudit stabilizer code on \(n\) subsystems such that cyclic shifts of the subsystems by \(\ell\geq 1\) leave the codespace invariant. Such codes have
circulant stabilizer generator matrices [159,160].
Quasi-cyclic quantum code A block code on \(n\) subsystems such that cyclic shifts of the subsystems by \(\ell\geq 1\) leave the codespace invariant.
Quasi-hyperbolic color code An extension of the color code construction to quasi-hyperbolic manifolds, e.g., a product of a 2D hyperbolic surface and a circle.
Qubit BCH code Qubit stabilizer code constructed from a self-orthogonal binary BCH code via the CSS construction, from a Hermitian self-orthogonal quaternary BCH code via the
Hermitian construction, or by taking a Euclidean self-orthogonal BCH code over \(GF(2^m)\), converting it to a binary code, and applying the CSS construction.
Qubit CSS code An \([[n,k,d]]\) stabilizer code admitting a set of stabilizer generators that are either \(Z\)-type or \(X\)-type Pauli strings. Codes can be defined from two classical codes and/or chain complexes over \(\mathbb{Z}_2\) via the qubit CSS-to-homology correspondence. Strong CSS codes are codes for which there exists a set of \(X\) and \(Z\) stabilizer generators of equal weight.
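The commutation requirement on the two classical codes reduces to \(H_X H_Z^T = 0 \bmod 2\) for their parity-check matrices. A short Python check of this condition, using the \([7,4,3]\) Hamming code matrix (which, taken as both \(H_X\) and \(H_Z\), yields the Steane code discussed later in this list):

```python
import numpy as np

# Parity-check matrix of the [7,4,3] Hamming code (columns = 1..7 in binary).
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

# X- and Z-type CSS generators commute iff H_X @ H_Z.T = 0 (mod 2).
H_X = H_Z = H   # valid since the Hamming code contains its dual (the simplex code)
print((H_X @ H_Z.T) % 2)   # all-zeros 3x3 block: the CSS condition holds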
Qubit code Encodes \(K\)-dimensional Hilbert space into a \(2^n\)-dimensional (i.e., \(n\)-qubit) Hilbert space. Usually denoted as \(((n,K))\) or \(((n,K,d))\), where \(d\) is
the code's distance.
Qubit stabilizer code An \(((n,2^k,d))\) qubit stabilizer code is denoted as \([[n,k]]\) or \([[n,k,d]]\), where \(d\) is the code's distance. The logical subspace is the joint eigenspace of commuting Pauli operators forming the code's stabilizer group \(\mathsf{S}\). Traditionally, the logical subspace is the joint \(+1\) eigenspace of a set of \(2^{n-k}\) commuting Pauli operators which do not contain \(-I\). The distance is the minimum weight of a Pauli string that implements a nontrivial logical operation in the code.
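Commutation of Pauli strings is conveniently computed in the binary symplectic representation; the following minimal Python sketch (illustrative, using the well-known \([[5,1,3]]\) generator \(XZZXI\) and a cyclic shift of it as test input) makes this concrete:

```python
import numpy as np

def to_symplectic(pauli):
    """Binary (x|z) vector: X -> (1|0), Z -> (0|1), Y -> (1|1), I -> (0|0)."""
    x = [int(c in 'XY') for c in pauli]
    z = [int(c in 'ZY') for c in pauli]
    return np.array(x + z)

def commute(p, q):
    n = len(p)
    a, b = to_symplectic(p), to_symplectic(q)
    return (a[:n] @ b[n:] + a[n:] @ b[:n]) % 2 == 0  # symplectic inner product

print(commute('XZZXI', 'IXZZX'))  # True: two [[5,1,3]] generators commute
print(commute('XIIII', 'ZIIII'))  # False: X and Z on one qubit anticommute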
Qudit GNU PI code Extension of the GNU PI codes to those encoding logical qudits into physical qubits. Codewords can be expressed as superpositions of Dicke states with coefficients that are square roots of polynomial coefficients, with the case of binomial coefficients reducing to the GNU PI codes.
Qudit cubic code Generalization of the Haah cubic code to modular qudits.
Qudit-into-oscillator code Encodes \(K\)-dimensional Hilbert space into \(n\) bosonic modes.
Random quantum code Quantum code whose construction is non-deterministic in some way, i.e., codes that utilize elements of randomness somewhere in their construction. Members of this class range from fully non-deterministic codes (e.g., random-circuit codes) to codes whose multi-step construction is deterministic with the exception of a single step (e.g., expander lifted-product codes).
Random stabilizer code An \(n\)-qubit, modular-qudit, or Galois-qudit stabilizer code whose construction is non-deterministic. Since stabilizer encoders are Clifford circuits, such codes
can be thought of as arising from random Clifford circuits.
Random-circuit code Code whose encoding is naturally constructed by randomly sampling from a large set of (not necessarily unitary) quantum circuits.
Raussendorf-Bravyi-Harrington (RBH) cluster-state code A three-dimensional cluster-state code defined on the bcc lattice (i.e., a cubic lattice with qubits on edges and faces).
Renormalization group (RG) cat code Code whose codespace is spanned by \(q\) field-theoretic coherent states which are flowing under the renormalization group (RG) flow of massive free fields. The code approximately protects against displacements that represent local (i.e., short-distance, ultraviolet, or UV) operators. Intuitively, this is because RG cat codewords represent non-local (i.e., long-distance) degrees of freedom, which should only be excitable by acting on a macroscopically large number of short-distance degrees of freedom.
Rhombic dodecahedron surface code A \([[14,3,3]]\) twist-defect surface code whose qubits lie on the vertices of a rhombic dodecahedron. Its non-CSS nature is due to twist defects [161] stemming from
the geometry of the polytope.
Rotated surface code Variant of the surface code defined on a square lattice that has been rotated 45 degrees such that qubits are on vertices, and both \(X\)- and \(Z\)-type check
operators occupy plaquettes in an alternating checkerboard pattern.
Rotor GKP code GKP code protecting against small angular position and momentum shifts of a planar rotor.
Rotor code Encodes a logical Hilbert space, finite- or infinite-dimensional, into a physical Hilbert space of \(L^2\)-normalizable functions on either the integers \(\mathbb{Z}\) or the circle group \(U(1)\). Ideal codewords may not be normalizable because the space is infinite-dimensional, so approximate versions have to be constructed in practice.
Rotor stabilizer code Rotor code whose codespace is defined as the common \(+1\) eigenspace of a group of mutually commuting rotor generalized Pauli operators. The stabilizer group can be either discrete or continuous, corresponding to modular or linear constraints on angular positions and momenta. Both cases can yield finite or infinite logical dimension. Exact codewords are non-normalizable, so approximate constructions have to be considered.
SYK code Approximate \(n\)-fermionic code whose codewords are low-energy states of the Sachdev-Ye-Kitaev (SYK) Hamiltonian [162,163] or other low-rank SYK models [164,165].
Self-complementary quantum code A qubit code which admits a basis of codewords of the form \(|c\rangle+|\overline{c}\rangle\), where \(c\) is a bitstring and \(\overline{c}\) is its negation a.k.a. complement. Their codewords generalize the two-qubit Bell states and three-qubit GHZ states and are often called (qubit) cat states or poor-man's GHZ states. Such codes were originally pointed out to perform well against AD noise [166].
Self-correcting quantum code A block quantum code that forms the ground-state subspace of an \(n\)-body geometrically local Hamiltonian whose logical information is recoverable for arbitrarily long times in the \(n\to\infty\) limit after interaction with a sufficiently cold thermal environment. Typically, one also requires a decoder whose decoding time scales polynomially with \(n\) and a finite energy density. The original criteria for a self-correcting quantum memory, informally known as the Caltech rules [62,167], also required finite-spin Hamiltonians.
Sierpinski fractal spin-liquid (SFSL) code A fractal type-I fracton CSS code defined on a cubic lattice [3; Eq. (D22)]. The code admits an excitation-moving operator shaped like a Sierpinski triangle [3; Fig. 2].
Single-mode bosonic code Encodes \(K\)-dimensional Hilbert space into a single bosonic mode. A trivial single-mode code encoding a qubit into the first two Fock states \(\{|0\rangle,|1\
rangle\}\) is called the single-rail encoding [168,169].
Single-shot code Block quantum qudit code whose error-syndrome weights increase linearly with the distance of the error state to the code space.
Single-spin code An encoding into a monolithic (i.e. non-tensor-product) Hilbert space that houses an irreducible representation of \(SU(2)\) or, more generally, another Lie group.
In some cases, this space can be thought of as the permutation invariant subspace of a particular tensor-product space.
Singleton-bound approaching AQECC Approximate quantum code of rate \(R\) that can tolerate adversarial errors nearly saturating the quantum Singleton bound of \((1-R)/2\). The formulation of such codes relies on a notion of quantum list decoding [170,171]. Sampling a description of this code can be done with an efficient randomized algorithm with \(2^{-\Omega(n)}\) failure probability.
Six-qubit-tensor holographic code Holographic tensor-network code constructed out of a network of encoding isometries of the \([[6,1,3]]\) six-qubit stabilizer code. The structure of the isometry is similar to that of the heptagon holographic code since both isometries are rank-six tensors, but the isometry in this case is neither a perfect tensor nor a planar-perfect tensor.
Skew-cyclic CSS code A member of a family of Galois-qudit CSS codes constructed from skew-cyclic classical codes over rings [172; Thm. 5.8]. See related study [173] that uses cyclic
codes over rings.
Small-distance block quantum code A block quantum code on \(n\) subsystems that either detects or corrects errors on at most two subsystems, i.e., have distance \(\leq 5\).
Smolin-Smith-Wehner (SSW) code A family of \(((n=4k+2l+3,M_{k,l},2))\) self-complementary CWS codes, where \(M_{k,l} \approx 2^{n-2}(1-\sqrt{2/(\pi(n-1))})\). For \(n \geq 11\), these codes have a logical subspace whose dimension is larger than that of the largest stabilizer code for the same \(n\) and \(d\). A subset of these codes can be augmented to yield codes with one higher logical dimension [174].
Spacetime circuit code Qubit stabilizer code used to correct faults in Clifford circuits, i.e., circuits made up of Clifford gates and Pauli measurements. The code utilizes redundancy in the measurement outcomes of a circuit to correct circuit faults, which correspond to Pauli errors of the code.
Spin GKP code An analogue of the single-mode GKP code designed for atomic ensembles. It was designed by using the Holstein-Primakoff mapping [175] (see also [176]) to pull back the phase-space structure of a bosonic system to the compact phase space of a quantum spin. A different construction emerges depending on which particular expression for GKP codewords is pulled back.
Spin cat code An analogue of the two-component cat code designed for a large spin, which is often realized in the PI subspace of atomic ensembles.
Spin code Encodes \(K\)-dimensional Hilbert space into a tensor-product or direct sum of factors, with each factor spanned by states of a quantum mechanical spin or, more
generally, an irreducible representation of a Lie group.
Square-lattice GKP code Single-mode GKP qudit-into-oscillator code based on the rectangular lattice. Its stabilizer generators are oscillator displacement operators \(\hat{S}_q(2\alpha)=e^{-2i\alpha \hat{p}}\) and \(\hat{S}_p(2\beta)=e^{2i\beta \hat{x}}\). To ensure that \(\hat{S}_q(2\alpha)\) and \(\hat{S}_p(2\beta)\) generate an Abelian stabilizer group, there is a constraint that \(\alpha\beta=2q\pi\), where \(q\) is an integer denoting the logical dimension.
Square-octagon (4.8.8) color code Triangular color code defined on a patch of the 4.8.8 (square-octagon) tiling, which itself is obtained by applying a fattening procedure to the square lattice [108].
Squeezed cat code Two-component cat code whose two coherent states have been squeezed in a direction perpendicular to the segment formed by the two coherent state values \(\pm\alpha\).
Squeezed Fock-state code Approximate bosonic code whose two codewords are the same Fock state squeezed in opposite directions.
Stabilizer code A code whose logical subspace is the joint eigenspace (usually with eigenvalue \(+1\)) of a set of commuting unitary Pauli-type operators forming the code's stabilizer group. Stabilizer codes can be block codes defined on tensor-product spaces of qubits or qudits, or non-block codes defined on single sufficiently large Hilbert spaces such as bosonic modes or group spaces.
Stellated color code A non-CSS color code on a lattice patch with a single twist defect at the center of the patch.
String-net code Code whose codewords realize a 2D topological order rendered by a Turaev-Viro topological field theory. The corresponding anyon theory is defined by a (multiplicity-free) unitary fusion category \(\mathcal{C}\). The code is defined on a cell decomposition dual to a triangulation of a two-dimensional surface, with a qudit of dimension \(|\mathcal{C}|\) located at each edge of the decomposition. These models realize local topological order (LTO) [177].
Surface-17 code A \([[9,1,3]]\) rotated surface code named for the sum of its 9 data qubits and 8 syndrome qubits. It uses the smallest number of qubits to perform fault-tolerant
error correction on a surface code with parallel syndrome extraction.
Surface-code-fragment (SCF) holographic code Holographic tensor-network code constructed out of a network of encoding isometries of the \([[5,1,2]]\) rotated surface code. The structure of the isometry is similar to that of the HaPPY code since both isometries are rank-six tensors. In the case of the SCF holographic code, the isometry is only a planar-perfect tensor (as opposed to a perfect tensor).
Symmetry-protected self-correcting quantum code A code which admits a restricted notion of thermal stability against symmetric perturbations, i.e., perturbations that commute with a set of operators forming a group \(G\) called the symmetry group.
Symmetry-protected topological (SPT) code A code whose codewords form the ground-state or low-energy subspace of a code Hamiltonian realizing symmetry-protected topological (SPT) order.
Tensor-network code Block quantum code constructed using a tensor-network-based graphical framework from atomic tensors a.k.a. quantum Lego blocks [178], which can be encoding
isometries for smaller quantum codes. The class of codes constructed using the framework depends on the choice of atomic Lego blocks.
Tensor-product HDX code Code constructed in a similar way as the HDX code, but utilizing tensor products of multiple Ramanujan complexes and then applying distance balancing. These improve the asymptotic code distance over the HDX codes from \(\sqrt{n}\log n\) to \(\sqrt{n}~\text{polylog}(n)\). The utility of such tensor products comes from the fact that one of the Ramanujan complexes is a collective cosystolic expander as opposed to just a cosystolic expander.
Ternary-tree fermion-into-qubit code A fermion-into-qubit encoding defined on ternary trees that maps Majorana operators into Pauli strings of weight \(\lceil \log_3 (2n+1) \rceil\).
Tetrahedral color code 3D color code defined on select tetrahedra of a 3D tiling. Qubits are placed on the vertices, edges, triangles, and in the center of each tetrahedron. The code has
both string-like and sheet-like logical operators [179].
Three-fermion (3F) Walker-Wang model code A 3D lattice stabilizer code whose low-energy excitations on boundaries realize the three-fermion anyon theory [180-182] and that can be used as a resource state for fault-tolerant MBQC [183].
Three-fermion (3F) subsystem code 2D subsystem stabilizer code whose low-energy excitations realize the three-fermion anyon theory [180-182]. One version uses two qubits at each site [54], while other manifestations utilize a single qubit per site and only weight-two (two-body) interactions [181,184]. All are expected to be equivalent to each other via a local constant-depth Clifford circuit.
Three-qutrit code A \([[3,1,2]]_3\) prime-qudit CSS code that is the smallest qutrit stabilizer code to detect a single-qutrit error, with stabilizer generators \(ZZZ\) and \(XXX\). The code defines a quantum secret-sharing scheme and serves as a minimal model for the AdS/CFT holographic duality. It is also the smallest non-trivial instance of a quantum maximum-distance-separable (QMDS) code, saturating the quantum Singleton bound.
Three-rotor code \([[3,1,2]]_{\mathbb Z}\) rotor code that is an extension of the \([[3,1,2]]_3\) qutrit CSS code to the integer alphabet, i.e., the angular momentum states of a
planar rotor.
Topological code A code whose codewords form the ground-state or low-energy subspace of a (typically geometrically local) code Hamiltonian realizing a topological phase. A topological phase may be bosonic or fermionic, i.e., constructed out of underlying subsystems whose operators commute or anti-commute with each other, respectively. Unless otherwise noted, the phases discussed are bosonic.
Toric code Version of the Kitaev surface code on the two-dimensional torus, encoding two logical qubits. Being the first manifestation of the surface code, "toric code" is often an alternative name for the general construction. Twisted toric code [185; Fig. 8] refers to the construction on a torus with twisted (a.k.a. shifted) boundary conditions.
Transverse-field Ising model (TFIM) code A 1D translationally invariant stabilizer code whose encoding is a constant-depth circuit of nearest-neighbor gates on alternating even and odd bonds that consist of transverse-field Ising Hamiltonian interactions. The code allows for perfect state transfer of arbitrary distance using local operations and classical communication.
Tree cluster-state code Code obtained from a cluster state on a tree graph that has been proposed in the context of quantum repeater and MBQC architectures.
Triangular surface code A surface code with weight-four stabilizer generators defined on a triangular lattice patch; such codes are examples of twist-defect surface codes with a single twist defect at the center of the patch. The codes use about \(25\%\) fewer physical qubits per logical qubit for a given distance compared to the surface code.
Triorthogonal code Qubit CSS code whose \(X\)-type logicals and stabilizer generators form a triorthogonal matrix (defined in the prime-qudit triorthogonal code entry above) in the symplectic representation.
True Galois-qudit stabilizer code A \([[n,k,d]]_q\) stabilizer code whose stabilizer's Galois symplectic representation forms a linear subspace. In other words, the set of \(q\)-ary vectors representing the stabilizer group is closed under both addition and multiplication by elements of \(GF(q)\). In contrast, Galois-qudit stabilizer codes admit sets of vectors that are closed under addition only.
Truncated trihexagonal (4.6.12) color code Triangular color code defined on a patch of the 4.6.12 (truncated trihexagonal or square-hexagon-dodecagon) tiling.
Twist-defect color code A non-CSS extension of the 2D color code whose non-CSS stabilizer generators are associated with twist defects of the associated lattice.
Twist-defect surface code A non-CSS extension of the 2D surface-code construction whose non-CSS stabilizer generators are associated with twist defects of the associated lattice. A related
construction [186] doubles the number of qubits in the lattice via symplectic doubling.
Twisted XZZX toric code A cyclic code that can be thought of as the XZZX toric code with shifted (a.k.a. twisted) boundary conditions. Admits a set of stabilizer generators that are cyclic shifts of a particular weight-four \(XZZX\) Pauli string. For example, a seven-qubit \([[7,1,3]]\) variant has stabilizers generated by cyclic shifts of \(XZIZXII\) [187]. Codes encode either one or two logical qubits, depending on qubit geometry, and perform well against biased noise [188].
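One can verify the claimed generator set directly; a short Python sketch checking that all cyclic shifts of \(XZIZXII\) pairwise commute (two Pauli strings commute iff they differ on an even number of non-identity sites):

```python
from itertools import combinations

def commute(p, q):
    # Pauli strings commute iff they differ on an even number of non-identity sites.
    return sum(a != b and 'I' not in (a, b) for a, b in zip(p, q)) % 2 == 0

g = 'XZIZXII'
shifts = [g[-i:] + g[:-i] if i else g for i in range(7)]
print(all(commute(a, b) for a, b in combinations(shifts, 2)))  # True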
Twisted \(1\)-group code Block group-representation code realizing particular irreps of particular groups such that a distance of two is automatically guaranteed. Groups which admit irreps with this property are called twisted (unitary) \(1\)-groups and include the binary icosahedral group \(2I\), the \(\Sigma(360\phi)\) subgroup of \(SU(3)\), the family \(\{PSp(2b, 3), b \geq 1\}\), and the alternating groups \(A_{5,6}\). Groups whose irreps are images of the appropriate irreps of twisted \(1\)-groups also yield such properties, e.g., the binary tetrahedral group \(2T\) or qutrit Pauli group \(\Sigma(72\phi)\).
Twisted quantum double (TQD) code Code whose codewords realize a 2D topological order rendered by a Chern-Simons topological field theory. The corresponding anyon theory is defined by a finite group
\(G\) and a Type-III group cocycle \(\omega\), but can also be described in a category theoretic way [189].
Two-block CSS code Galois-qudit CSS code whose stabilizer generator matrices \(H_X=(A,B)\) and \(H_Z=(B^T,-A^T)\) are constructed from a pair of square commuting matrices \(A\) and \(B\).
Two-block group-algebra (2BGA) codes 2BGA codes are the smallest LP codes LP\((a,b)\), constructed from a pair of group algebra elements \(a,b\in \mathbb{F}_q[G]\), where \(G\) is a finite group and \(\mathbb{F}_q\) is a Galois field. For a group of order \(\ell\), we get a 2BGA code of length \(n=2\ell\). A 2BGA code for an Abelian group is called an Abelian 2BGA code. A construction of such codes in terms of Kronecker products of circulant matrices was introduced in [190].
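For the circulant-matrix picture mentioned above, commutativity of the two blocks is exactly what makes the two-block CSS condition hold; a hedged numpy sketch (the first rows below are arbitrary illustrative choices, not those of [190]):

```python
import numpy as np

def circulant(first_row):
    """Binary circulant matrix whose i-th row is the first row shifted by i."""
    return np.array([np.roll(first_row, i) for i in range(len(first_row))])

A = circulant([1, 1, 0, 0, 0])   # illustrative first rows
B = circulant([1, 0, 1, 0, 0])
H_X = np.hstack([A, B])
H_Z = np.hstack([B.T, A.T])      # the sign of -A^T is immaterial mod 2
# Circulants commute (both are polynomials in the cyclic shift), so:
print(((H_X @ H_Z.T) % 2).any())  # False: X- and Z-type generators commute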
Two-component cat code Code whose codespace is spanned by two coherent states \(\left|\pm\alpha\right\rangle\) for nonzero complex \(\alpha\).
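A quick numpy illustration of the approximate orthogonality underlying such codes (the Fock-space cutoff of 40 is an arbitrary choice that suffices for \(\alpha=2\)):

```python
import numpy as np
from math import factorial

def coherent(alpha, cutoff=40):
    """Fock amplitudes of |alpha>, truncated at a photon-number cutoff."""
    n = np.arange(cutoff)
    norms = np.sqrt(np.array([float(factorial(k)) for k in n]))
    return np.exp(-abs(alpha) ** 2 / 2) * alpha ** n / norms

alpha = 2.0
plus, minus = coherent(alpha), coherent(-alpha)
print(abs(np.vdot(plus, minus)))   # ~ exp(-2|alpha|^2) ~ 3.4e-4, not exactly 0
cat = plus + minus                 # unnormalized two-component (even) cat state
print(np.linalg.norm(cat) ** 2)    # ~ 2(1 + exp(-2|alpha|^2))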
Two-gauge theory code A code whose codewords realize lattice two-gauge theory [191-196] for a finite two-group (a.k.a. a crossed module) in arbitrary spatial dimension. There exist several lattice-model formulations in arbitrary spatial dimension [197,198] as well as explicitly in 3D [199-202] and 4D [202], with the 3D case realizing the Yetter model [203-206].
Two-mode binomial code Two-mode constant-energy CLY code whose coefficients are square-roots of binomial coefficients.
Type-II fractal spin-liquid code A type-II fracton prime-qudit CSS code defined on a cubic lattice [3; Eqs. (D9-D10)].
Union stabilizer (USt) code A qubit code whose codespace consists of a direct sum of a qubit stabilizer codespace and one or more of that stabilizer code's error spaces.
Union-Jack color code Triangular color code defined on a patch of the Tetrakis square tiling (a.k.a. the Union Jack lattice).
Valence-bond-solid (VBS) code An \(n\)-qubit approximate \(q\)-dimensional spin code family whose codespace is described in terms of \(SU(q)\) valence-bond-solid (VBS) [207] matrix product states
with various boundary conditions. The codes become exact when either \(n\) or \(q\) go to infinity.
Very small logical qubit (VSLQ) code The two logical codewords are \(|\pm\rangle \propto (|0\rangle\pm|2\rangle)(|0\rangle\pm|2\rangle)\), where the total Hilbert space is the tensor product of two transmon qudits (whose ground states \(|0\rangle\) and second excited states \(|2\rangle\) are used in the codewords). Since the code is intended to protect against losses, the qutrits can equivalently be thought of as oscillator Fock-state subspaces.
W-state code Approximate block quantum code whose encoding resembles the structure of the W state [208]. This code enables universal quantum computation with transversal gates.
Walker-Wang model code A 3D topological code defined by a unitary braided fusion category \(\mathcal{C}\) (also known as a unitary premodular category). The code is defined on a cubic lattice that is resolved to be trivalent, with a qudit of dimension \(|\mathcal{C}|\) located at each edge. The codespace is the ground-state subspace of the Walker-Wang model Hamiltonian [209] and realizes the Crane-Yetter model [210-212]. A single-state version of the code provides a resource state for MBQC [183].
Wasilewski-Banaszek code Three-oscillator constant-excitation Fock-state code encoding a single logical qubit.
X-cube Floquet code Floquet code whose qubits are placed on vertices of a truncated cubic lattice. Its weight-two check operators are placed on various edges. Its ISG can be that of the
X-cube model code or that of several decoupled surface codes.
X-cube model code A foliated type-I fracton code supporting a subextensive number of logical qubits. Variants include the membrane-coupled [213], twice-foliated [214], and several
generalized [215] X-cube models.
XP stabilizer code The XP stabilizer formalism is a generalization of the XS and Pauli stabilizer formalisms, with stabilizer generators taken from the group \(\mathsf{BD}_{2N}^{\otimes n} = \langle\omega I, X, P\rangle^{\otimes n}\), which is the tensor product of the binary dihedral group of order \(8N\). Here, \(N\) is called the precision, \(\omega\) is a \(2N\)th root of unity, and \(P = \text{diag}(1, \omega^2)\). The codespace is a \(+1\) eigenspace of a set of XP stabilizer generators, which need not commute to define a valid codespace.
XS stabilizer code A type of stabilizer code where stabilizer generators are elements of the group \(\{\alpha I, X, \sqrt{Z}\}^{\otimes n}\), with \(\sqrt{Z} = \text{diag}(1, i)\). The codespace is a joint \(+1\) eigenspace of a set of stabilizer generators, which need not commute to define a valid codespace.
XY surface code Non-CSS derivative of the surface code whose generators are \(XXXX\) and \(YYYY\), obtained by mapping \(Z \to Y\) in the surface code.
XYZ color code Non-CSS variant of the 6.6.6 color code whose generators are \(XZXZXZ\) and \(ZYZYZY\) Pauli strings associated to each hexagon of the hexagonal (6.6.6) tiling. A further variation called the domain wall color code admits generators of the form \(XXXZZZ\) and \(ZZZXXX\) [216].
XYZ product code A non-CSS QLDPC code constructed from a particular hypergraph product of three classical codes. The idea is that rather than taking a product of only two classical codes to produce a CSS code, a third classical code is considered, acting with Pauli-\(Y\) operators. When the underlying classical codes are 1D (e.g., repetition codes), the XYZ product yields a 3D code. Higher-dimensional versions have been constructed [217].
XYZ ruby Floquet code Floquet code whose qubits are placed on vertices of a ruby lattice. Its weight-two check operators are placed on various edges. One third of the time during its measurement schedule, its ISG is that of the 6.6.6 color code concatenated with a three-qubit repetition code. Together, all ISGs generate the gauge group of the 3F subsystem code. A Floquet code with the same underlying subsystem code but with a different measurement schedule was developed in Ref. [218].
XYZ\(^2\) hexagonal stabilizer code An instance of the matching code based on the Kitaev honeycomb model. It is described on a hexagonal lattice with \(XYZXYZ\) stabilizers on each hexagonal plaquette. Each vertical pair of qubits has an \(XX\), \(YY\), or \(ZZ\) link stabilizer depending on the orientation of the plaquette stabilizers.
XZZX surface code Non-CSS variant of the rotated surface code whose generators are \(XZZX\) Pauli strings associated, clock-wise, to the vertices of each face of a two-dimensional
lattice (with a qubit located at each vertex of the tessellation).
Yoked surface code Member of a family of \([[n,k,d]]\) qubit CSS codes resulting from a concatenation of a QMDPC code with a rotated surface code. Concatenation does not impose additional connectivity constraints and can as much as triple the number of logical qubits per physical qubit when compared to the original surface code. Concatenation with 1D (2D) QMDPC yields codes with twice (four times) the distance. The stabilizer generators of the outer QMDPC code are referred to as yokes in this context.
Zero-pi qubit code A \([[2,(0,2),(2,1)]]_{\mathbb{Z}}\) homological rotor code on the smallest tiling of the projective plane \(\mathbb{R}P^2\). The ideal code can be obtained from a
four-rotor Josephson-junction [112] system after a choice of grounding [113].
\(((10,24,3))\) qubit code Ten-qubit CWS code that is unique and optimal for its parameters.
\(((2^m,2^{2^m-5m+1},8))\) Goethals-Preparata code Member of a family of \(((2^m,2^{2^m-5m+1},8))\) CSS-like union stabilizer codes constructed using the classical Goethals and Preparata codes.
\(((3,6,2))_{\mathbb{Z}_6}\) Euler code Six-qudit error-detecting code with logical dimension \(K=6\) that is obtained from a particular AME state that serves as a solution to the 36 officers of Euler problem. The code is obtained from a \(((4,1,3))_{\mathbb{Z}_6}\) code.
\(((5+2r,3\times 2^{2r+1},2))\) Rains code Member of a family of \(((5+2r,3\times 2^{2r+1},2))\) CWS codes; see also [219; Exam. 8] and [220,221].
\(((5,3,2))_3\) qutrit code Smallest qutrit block code realizing the \(\Sigma(360\phi)\) subgroup of \(SU(3)\) transversally. The next smallest code is \(((7,3,2))_3\).
\(((5,6,2))\) qubit code Six-qubit cyclic CWS code detecting a single-qubit error. This code has a logical subspace whose dimension is larger than that of the \([[5,2,2]]\) code, the best
five-qubit stabilizer code with the same distance [174].
\(((7,2,3))\) Pollatsek-Ruskai code Seven-qubit PI code that realizes gates from the binary icosahedral group transversally. Can also be interpreted as a spin-\(7/2\) single-spin code. The codespace projection is a projection onto an irrep of the binary icosahedral group \(2I\).
\(((9,12,3))\) qubit code Nine-qubit cyclic CWS code correcting a single-qubit error. This code has a logical subspace whose dimension is larger than that of the \([[9,3,3]]\) code, the best
nine-qubit stabilizer code with the same distance [222].
\(((9,2,3))\) Ruskai code Nine-qubit PI code that protects against single-qubit errors as well as two-qubit errors arising from exchange processes.
\(((n,1+n(q-1),2))_q\) union stabilizer code Member of a family of \(((n,1+n(q-1),2))_q\) Galois-qudit union stabilizer codes for odd \(n\).
\(((n,1,2))\) Bravyi-Lee-Li-Yoshida PI code PI distance-two code on \(n\geq4\) qubits whose degree of entanglement vanishes asymptotically with \(n\) [223; Appx. D] (cf. [224]).
\((1,3)\) 4D toric code A generalization of the Kitaev surface code defined on a 4D lattice. The code is called a \((1,3)\) toric code because it admits 1D \(Z\)-type and 3D \(X\)-type
logical operators.
\((5,1,2)\)-convolutional code Family of quantum convolutional codes that are 1D lattice generalizations of the five-qubit perfect code, with the former's lattice-translation symmetry being the extension of the latter's cyclic permutation symmetry.
\(D\)-dimensional twisted toric code Extension of the Kitaev toric code to higher-dimensional lattices with shifted (a.k.a. twisted) boundary conditions. Such boundary conditions yield qubit geometries that are tori \(\mathbb{R}^D/\Lambda\), where \(\Lambda\) is an arbitrary \(D\)-dimensional lattice. Picking a hypercubic lattice yields the ordinary \(D\)-dimensional toric code. It is conjectured that appropriate twisted boundary conditions yield multi-dimensional toric code families with linear distance and logarithmic-weight stabilizer generators [225].
\(D_4\) hyper-diamond GKP code Two-mode GKP qudit-into-oscillator code based on the \(D_4\) hyper-diamond lattice.
\(G\)-covariant erasure code A \(G\)-covariant block code that serves as a proof-of-principle construction to demonstrate the existence of \(G\)-covariant codes where \(G\) is a finite group,
and the physical space is finite-dimensional. This construction can be done for any erasure-correcting code.
\(G\)-enriched Walker-Wang model code A 3D topological code defined by a unitary \(G\)-crossed braided fusion category \(\mathcal{C}\) [226,227], where \(G\) is a finite group. The model realizes TQFTs that include two-gauge theories, those behind Walker-Wang models, as well as the Kashaev TQFT [228,229]. It has been generalized to include domain walls [230].
\(SU(3)\) spin code An extension of Clifford single-spin codes to the group \(SU(3)\), whose codespace is a projection onto a particular irrep of a subgroup of \(SU(3)\) of an
underlying spin that houses some particular irrep of \(SU(3)\).
\(U(d)\)-covariant approximate erasure code Covariant code whose construction takes in an arbitrary erasure-correcting code to yield an approximate QECC that is also covariant with respect to the unitary group.
\([[10,1,2]]\) CSS code Smallest stabilizer code to implement a logical \(T\) gate via application of physical \(T\), \(T^{\dagger}\), and \(CCZ\) gates.
\([[10,1,4]]_{G}\) tenfold code A \([[10,1,4]]_{G}\) group code for finite Abelian \(G\). The code is defined using a graph that is closely related to the \([[5,1,3]]\) code.
\([[11,1,5]]\) quantum dodecacode Eleven-qubit pure stabilizer code that is the smallest qubit stabilizer code to correct two-qubit errors.
\([[11,1,5]]_3\) qutrit Golay code An \([[11,1,5]]_3\) code constructed from the ternary Golay code via the CSS construction. The code's stabilizer generator matrix blocks \(H_{X}\) and \(H_{Z}\) are both the generator matrix of the ternary Golay code.
\([[12,2,4]]\) carbon code Self-dual twelve-qubit CSS code.
\([[13,1,5]]\) cyclic code Thirteen-qubit twisted surface code for which there is a set of stabilizer generators consisting of cyclic permutations of the \(XZZX\)-type Pauli string \
(XIZZIXIIIIIII\). The code can be thought of as a small twisted XZZX code [231; Ex. 11 and Fig. 3].
\([[144,12,12]]\) gross code A BB QLDPC code which requires less physical and ancilla qubits (for syndrome extraction) than the surface code with the same number of logical qubits and distance.
The name stems from the fact that a gross is a dozen dozen.
\([[15, 7, 3]]\) quantum Hamming code Self-dual quantum Hamming code that admits permutation-based CZ logical gates. The code is constructed using the CSS construction from the \([15,11,3]\) Hamming code and its \([15,4,8]\) dual code.
\([[15,1,3]]\) quantum Reed-Muller code \([[15,1,3]]\) CSS code that is most easily thought of as a tetrahedral 3D color code.
\([[16,6,4]]\) Tesseract color code A 4D color code defined on a tesseract, with stabilizer generators of both types supported on each cube.
\([[2^D,D,2]]\) hypercube quantum code Member of a family of codes defined by placing qubits on a \(D\)-dimensional hypercube, \(Z\)-stabilizers on all two-dimensional faces, and an \(X\)-stabilizer on all vertices. These codes realize gates at the \((D-1)\)-st level of the Clifford hierarchy. It can be generalized to a \([[4^D,D,4]]\) error-correcting family [232]. Various other concatenations give families with increasing distance.
\([[2^r, 2^r-r-2, 3]]\) Gottesman code A family of non-CSS stabilizer codes of distance \(3\) that saturate the asymptotic quantum Hamming bound.
\([[2^r-1, 2^r-2r-1, 3]]\) quantum Hamming code Member of a family of self-dual CSS codes constructed from \([2^r-1,2^r-r-1,3]=C_X=C_Z\) Hamming codes and their duals the simplex codes. The code's stabilizer generator matrix blocks \(H_{X}\) and \(H_{Z}\) are both the generator matrix for a simplex code. The weight of each stabilizer generator is \(2^{r-1}\).
\([[2^r-1, 2^r-2r-1, 3]]_p\) quantum Hamming code A family of CSS codes extending quantum Hamming codes to prime qudits of dimension \(p\) by expressing the qubit code stabilizers in local-dimension-invariant (LDI) form [233].
\([[2^r-1,1,3]]\) simplex code Member of a color code family constructed with a punctured first-order RM\((1,m=r)\) \([2^r-1,r+1,2^{r-1}-1]\) code and its even subcode for \(r \geq 3\). Each code transversally implements a diagonal gate at the \((r-1)\)st level of the Clifford hierarchy [234,235]. Each code is a color code defined on a simplex in \(r-1\) dimensions [123,236], where qubits are placed on the vertices, edges, and faces as well as on the simplex itself.
Quantum punctured Reed-Muller code Member of a CSS code family constructed with a punctured self-dual RM \([2^r-1,2^{r-1},\sqrt{2}^{r-1}-1]\) code and its even subcode for \(r \geq 2\).
\([[2m,2m-2,2]]\) error-detecting code Self-complementary CSS code for \(m\geq 2\) with generators \(\{XX\cdots X, ZZ\cdots Z\}\) acting on all \(2m\) physical qubits. The code is constructed via the CSS construction from an SPC code and a repetition code [237; Sec. III]. This is the highest-rate distance-two code when an even number of qubits is used [222].
\([[30,8,3]]\) Bring code A \([[30,8,3]]\) hyperbolic surface code on a quotient of the \(\{5,5\}\) hyperbolic tiling called Bring's curve. Its qubits and stabilizer generators lie on the
vertices of the small stellated dodecahedron. Admits a set of weight-five stabilizer generators.
\([[3k + 8, k, 2]]\) triorthogonal code Member of the \([[3k + 8, k, 2]]\) family (for even \(k\)) of triorthogonal and quantum divisible codes that admit a transversal \(T\) gate and are relevant for magic-state distillation.
\([[4,2,2]]\) four-qubit code Four-qubit CSS stabilizer code that is the smallest qubit stabilizer code to detect a single-qubit error.
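A short Python sketch verifying the detection property from the code's standard generators \(XXXX\) and \(ZZZZ\): every weight-one Pauli error anticommutes with at least one stabilizer, so it flips a syndrome and is detected:

```python
from itertools import product

def commute(p, q):
    return sum(a != b and 'I' not in (a, b) for a, b in zip(p, q)) % 2 == 0

stabilizers = ['XXXX', 'ZZZZ']
detected = []
for site, pauli in product(range(4), 'XYZ'):
    error = ['I'] * 4
    error[site] = pauli
    error = ''.join(error)
    # An error is detectable if it anticommutes with some stabilizer.
    detected.append(not all(commute(error, s) for s in stabilizers))
print(all(detected))  # True: all 12 weight-one Pauli errors are detected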
\([[4,2,2]]_{G}\) four group-qudit code \([[4,2,2]]_{G}\) group quantum code that is an extension of the four-qubit code to group-valued qudits.
\([[49,1,5]]\) triorthogonal code Triorthogonal and quantum divisible code which is the smallest distance-five stabilizer code to admit a transversal \(T\) gate [238-240]. Its \(X\)-type stabilizers form a triply even linear binary code in the symplectic representation.
\([[5,1,2]]\) rotated surface code Rotated surface code on one rung of a ladder, with one qubit on the rung and four qubits surrounding it.
\([[5,1,3]]_q\) Galois-qudit code True stabilizer code that generalizes the five-qubit perfect code to Galois qudits of prime-power dimension \(q=p^m\). It has \(4(m-1)\) stabilizer generators expressed as \(X_{\gamma} Z_{\gamma} Z_{-\gamma} X_{-\gamma} I\) and its cyclic permutations, with \(\gamma\) iterating over basis elements of \(GF(q)\) over \(GF(p)\).
\([[5,1,3]]_{\mathbb{R}}\) Braunstein five-mode code An analog stabilizer version of the five-qubit perfect code, encoding one mode into five and correcting arbitrary errors on any one mode.
\([[5,1,3]]_{\mathbb{Z}_q}\) modular-qudit code Modular-qudit stabilizer code that generalizes the five-qubit perfect code using properties of the multiplicative group \(\mathbb{Z}_q\) [241]; see also [242; Thm. 13]. It has four stabilizer generators consisting of \(X Z Z^\dagger X^\dagger I\) and its cyclic permutations.
\([[6,1,3]]\) six-qubit stabilizer code One of two six-qubit distance-three codes that are unique up to equivalence [222], with the other code being a trivial extension of the five-qubit code [243]. Stabilizer generators and logical Pauli operators are presented in Ref. [243].
\([[6,2,2]]\) \(C_6\) code Error-detecting self-dual CSS code used in concatenation schemes for fault-tolerant quantum computation. A set of stabilizer generators is \(IIXXXX\) and \(XXIIXX\),
together with the same two \(Z\)-type generators.
\([[6,2,3]]_{q}\) code Six-qudit MDS error-detecting code defined for Galois-qudit dimension \(q=3\) [244], \(q=2^2\) [245], and \(q \geq 5\) [155,244]. This code cannot exist for qubits (\(q=2\)).
\([[6,4,2]]\) error-detecting code Error-detecting six-qubit code with rate \(2/3\) whose codewords are cat/GHZ states. A set of stabilizer generators is \(XXXXXX\) and \(ZZZZZZ\). It is the unique code for its parameters, up to equivalence [222; Tab. III]. Concatenations of this code with itself yield the \([[6^r,4^r,2^r]]\) level-\(r\) many-hypercube code.
\([[6k+2,3k,2]]\) Campbell-Howard code Family of \([[6k+2,3k,2]]\) qubit stabilizer codes with quasi-transversal \(CZZ^{\otimes k}\) gates that are relevant to magic-state distillation.
\([[7, 1:1, 3]]\) hybrid stabilizer code A distance-three seven-qubit hybrid stabilizer code storing one qubit and one classical bit. Admits a stabilizer generator set with a weight-two generator, which delineates the underlying classical code [247; Eq. (3)].
\([[7,1,3]]\) Steane code A \([[7,1,3]]\) self-dual CSS code that is the smallest qubit CSS code to correct a single-qubit error [243]. The code is constructed using the classical binary \
([7,4,3]\) Hamming code for protecting against both \(X\) and \(Z\) errors.
\([[7,1,3]]\) bare code A \([[7,1,3]]\) code that admits fault-tolerant syndrome extraction using only one ancilla per stabilizer generator measurement.
\([[7,1,3]]\) twist-defect A \([[7,1,3]]\) code (different from the Steane code) that is a small example of a twist-defect surface code.
surface code
\([[7,3,3]]_{q}\) code Seven-qudit MDS error-detecting code defined for Galois-qudit dimension \(q=3\) [244] and \(q \geq 7\) [155,244]. This code cannot exist for qubits (\(q=2\)).
\([[8, 2:1, 3]]\) hybrid A code obtained from the \([[8,3,3]]\) Gottesman code by using one of its logical qubits as a classical bit. One can also use two logical qubits as classical bits,
stabilizer code obtaining an \([[8,1:2,3]]\) hybrid stabilizer code.
\([[8, 3, 3]]\) Eight-qubit Eight-qubit non-degenerate code that can be obtained from a modified CSS construction using the \([8,4,4]\) extended Hamming code and a \([8,7,2]\) even-weight code
Gottesman code [248]. The modification introduces signs between the codewords.
\([[8,2,2]]\) hyperbolic color An \([[8,2,2]]\) hyperbolic color code defined on the projective plane.
\([[8,3,2]]\) CSS code Smallest 3D color code whose physical qubits lie on vertices of a cube and which admits a (weakly) transversal CCZ gate.
\([[9,1,3]]\) Shor code Nine-qubit CSS code that is the first quantum error-correcting code.
\([[9,1,3]]_{\mathbb{R}}\) An analog stabilizer version of Shor's nine-qubit code, encoding one mode into nine and correcting arbitrary errors on any one mode.
Lloyd-Slotine code
\([[9,1,3]]_{\mathbb{Z}_q}\) Modular-qudit CSS code that generalizes the \([[9,1,3]]\) Shor code using properties of the multiplicative group \(\mathbb{Z}_q\).
modular-qudit code
\([[9,1,5]]_3\) quantum Glynn Nine-qutrit pure Hermitian code that is the smallest qutrit stabilizer code to correct two-qutrit errors.
\([[9m-k,k,2]]_3\) triorthogonal Member of the \([[9m-k,k,2]]_3\) family of triorthogonal qutrit codes (for \(k\leq 3m-2\)) that admit a transversal diagonal gate in the third level of the qudit
code Clifford hierarchy and that are relevant for magic-state distillation.
\([[k+4,k,2]]\) H code Family of \([[k+4,k,2]]\) self-dual CSS codes (for even \(k\)) with transversal Hadamard gates that are relevant to magic state distillation. The four stablizer
generators are \(X_1X_2X_3X_4\), \(Z_1Z_2Z_3Z_4\), \(X_1X_2X_5X_6...X_{k+4}\), and \(Z_1Z_2Z_5Z_6...Z_{k+4}\).'
A \(3n\)-mode bosonic Fock-state code that requires only linear optics and the \(\chi^{(2)}\) optical nonlinear interaction for encoding, decoding, and logical
\(\chi^{(2)}\) code gates. Codewords lie in Fock-state subspaces that are invariant under Hermitian combinations of the \(\chi^{(2)}\) nonlinearities \(abc^\dagger\) and \(i abc^\dagger
\), where \(a\), \(b\), and \(c\) are lowering operators acting on one of the \(n\) triples of modes on which the codes are defined. Codewords are also \(+1\)
eigenstates of stabilizer-like symmetry operators, and photon parities are error syndromes.
\(\mathbb{Z}_3\times\mathbb{Z}_9 Modular-qudit 2D subsystem stabilizer code whose low-energy excitations realize a non-modular anyon theory with \(\mathbb{Z}_3\times\mathbb{Z}_9\) fusion rules.
\)-fusion subsystem code Encodes two qutrits when put on a torus.
Modular-qudit subsystem code, based on the Kitaev honeycomb model [98] and its generalization [249], that is characterized by the \(\mathbb{Z}_q^{(1)}\) anyon theory
\(\mathbb{Z}_q^{(1)}\) subsystem [250], which is modular for odd prime \(q\) and non-modular otherwise. Encodes a single \(q\)-dimensional qudit when put on a torus for odd \(q\), and a \(q/2\)
code -dimensional qudit for even \(q\). This code can be constructed using geometrically local gauge generators, but does not admit geometrically local stabilizer
generators. For \(q=2\), the code reduces to the subsystem code underlying the Kitaev honeycomb model code as well as the honeycomb Floquet code.
Qubit stabilizer code whose \(X\)-type logicals and generators form a \(k\)-orthogonal matrix (defined below) in the symplectic representation. In other words, the
\(k\)-orthogonal code overlap between any \(k\) \(X\)-type code-preserving Paulis (including the identity) is even. The original definition is for qubit CSS codes [124], but it can be
extended to more general qubit stabilizer codes [251; Def. 1]. This entry is formulated for qubits, but an extension exists for modular qudits [124].
Code defined in a single angular-momentum subspace that is embedded in a larger direct-sum space of different angular momenta, which can arise from combinations of
ร code spin, electronic, or rotational, or nuclear angular momenta of an atom or molecule. A code is obtained by solving an over-constrained system of equations, and many
solutions can be mapped into existing codes defined on other state spaces.
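Several entries above list explicit stabilizer generators. As an illustration (added here, not part of the original listing), a minimal Python sketch in the binary symplectic representation checks that the \([[6,4,2]]\) generators \(XXXXXX\) and \(ZZZZZZ\) commute and that every single-qubit Pauli error is detected, consistent with distance 2:

import numpy as np

n = 6
# Stabilizer generators of the [[6,4,2]] code in (x|z) symplectic form:
# XXXXXX -> x = 111111, z = 000000 ; ZZZZZZ -> x = 000000, z = 111111
S = np.array([[1]*n + [0]*n,
              [0]*n + [1]*n], dtype=int)

def commute(p, q):
    '''Two Paulis commute iff their symplectic inner product is 0 mod 2.'''
    return (p[:n] @ q[n:] + p[n:] @ q[:n]) % 2 == 0

assert commute(S[0], S[1])  # the two generators commute

# Every weight-one Pauli (X_i, Y_i, Z_i) anticommutes with some generator,
# i.e. is detected, which is what distance 2 requires.
for i in range(n):
    for x_bit, z_bit in [(1, 0), (1, 1), (0, 1)]:
        e = np.zeros(2*n, dtype=int)
        e[i], e[n + i] = x_bit, z_bit
        assert not all(commute(e, s) for s in S)
print('All single-qubit Pauli errors are detected.')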
Y.-A. Chen and Y. Xu, "Equivalence between Fermion-to-Qubit Mappings in two Spatial Dimensions", PRX Quantum 4, (2023) arXiv:2201.05153 DOI
L. Fidkowski, J. Haah, and M. B. Hastings, "Gravitational anomaly of (3+1)-dimensional Z2 toric code with fermionic charges and fermionic loop self-statistics", Physical Review B 106, (2022) arXiv:2110.14654 DOI
A. Dua et al., "Sorting topological stabilizer models in three dimensions", Physical Review B 100, (2019) arXiv:1908.08049 DOI
S. Mandal and N. Surendran, "Exactly solvable Kitaev model in three dimensions", Physical Review B 79, (2009) arXiv:0801.0229 DOI
S. Ryu, "Three-dimensional topological phase on the diamond lattice", Physical Review B 79, (2009) arXiv:0811.2036 DOI
P. Panteleev and G. Kalachev, "Quantum LDPC Codes With Almost Linear Minimum Distance", IEEE Transactions on Information Theory 68, 213 (2022) arXiv:2012.04068 DOI
T. D. Ellison et al., "Pauli Stabilizer Models of Twisted Quantum Doubles", PRX Quantum 3, (2022) arXiv:2112.11394 DOI
J. C. Magdalena de la Fuente, N. Tarantino, and J. Eisert, "Non-Pauli topological stabilizer codes from twisted quantum doubles", Quantum 5, 398 (2021) arXiv:2001.11516 DOI
L. Wang and Z. Wang, "In and around abelian anyon models \({}^{\text{*}}\)", Journal of Physics A: Mathematical and Theoretical 53, 505203 (2020) arXiv:2004.12048 DOI
V. V. Albert et al., "Performance and structure of single-mode bosonic codes", Physical Review A 97, (2018) arXiv:1708.05010 DOI
S. M. Girvin, "Introduction to quantum error correction and fault tolerance", SciPost Physics Lecture Notes (2023) arXiv:2111.08894 DOI
G. Evenbly and G. Vidal, "Class of Highly Entangled Many-Body States that can be Efficiently Simulated", Physical Review Letters 112, (2014) arXiv:1210.1895 DOI
K. Setia et al., "Superfast encodings for fermionic quantum simulation", Physical Review Research 1, (2019) arXiv:1810.05274 DOI
P. M. Fenwick, "A new data structure for cumulative frequency tables", Software: Practice and Experience 24, 327 (1994) DOI
V. Havlíček, M. Troyer, and J. D. Whitfield, "Operator locality in the quantum simulation of fermionic models", Physical Review A 95, (2017) arXiv:1701.07072 DOI
E. Camps-Moreno et al., "An algebraic characterization of binary CSS-T codes and cyclic CSS-T codes for quantum fault tolerance", Quantum Information Processing 23, (2024) arXiv:2312.17518 DOI
T. Johnson-Freyd, "(3+1)D topological orders with only a \(\mathbb{Z}_2\)-charged particle", (2020) arXiv:2011.11165
L. Fidkowski, J. Haah, and M. B. Hastings, "Exactly solvable model for a 4+1D beyond-cohomology symmetry-protected topological phase", Physical Review B 101, (2020) arXiv:1912.05565 DOI
J. Haah, "Clifford quantum cellular automata: Trivial group in 2D and Witt group in 3D", Journal of Mathematical Physics 62, (2021) arXiv:1907.02075 DOI
W. Shirley et al., "Three-Dimensional Quantum Cellular Automata from Chiral Semion Surface Topological Order and beyond", PRX Quantum 3, (2022) arXiv:2202.05442 DOI
C. W. von Keyserlingk, F. J. Burnell, and S. H. Simon, "Three-dimensional topological lattice models with surface anyons", Physical Review B 87, (2013) arXiv:1208.5128 DOI
T. C. Bohdanowicz et al., "Good approximate quantum LDPC codes from spacetime circuit Hamiltonians", Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing (2019) arXiv:1811.00277 DOI
D. Ostrev et al., "Classical product code constructions for quantum Calderbank-Shor-Steane codes", Quantum 8, 1420 (2024) arXiv:2209.13474 DOI
D. Ostrev, "Quantum LDPC Codes From Intersecting Subsets", IEEE Transactions on Information Theory 70, 5692 (2024) arXiv:2306.06056 DOI
G. Nebe, E. M. Rains, and N. J. A. Sloane, "The invariants of the Clifford groups", (2000) arXiv:math/0001038
R. Raussendorf, D. Browne, and H. Briegel, "The one-way quantum computer - a non-network model of quantum computation", Journal of Modern Optics 49, 1299 (2002) arXiv:quant-ph/0108118 DOI
B. Coecke and R. Duncan, "Interacting Quantum Observables", Automata, Languages and Programming 298 DOI
B. Coecke and R. Duncan, "Interacting quantum observables: categorical algebra and diagrammatics", New Journal of Physics 13, 043016 (2011) arXiv:0906.4725 DOI
J. Roffe et al., "Protecting quantum memories using coherent parity check codes", Quantum Science and Technology 3, 035010 (2018) arXiv:1709.01866 DOI
F. Lacerda, J. M. Renes, and V. B. Scholz, "Coherent-state constellations and polar codes for thermal Gaussian channels", Physical Review A 95, (2017) arXiv:1603.05970 DOI
F. Lacerda, J. M. Renes, and V. B. Scholz, "Coherent state constellations for Bosonic Gaussian channels", 2016 IEEE International Symposium on Information Theory (ISIT) (2016) DOI
E. Knill, R. Laflamme, and W. H. Zurek, "Resilient quantum computation: error models and thresholds", Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences 454, 365 (1998) arXiv:quant-ph/9702058 DOI
A. M. Steane, "Efficient fault-tolerant quantum computing", Nature 399, 124 (1999) arXiv:quant-ph/9809054 DOI
A. M. Steane, "Overhead and noise threshold of fault-tolerant quantum error correction", Physical Review A 68, (2003) arXiv:quant-ph/0207119 DOI
K. M. Svore, B. M. Terhal, and D. P. DiVincenzo, "Local fault-tolerant quantum computation", Physical Review A 72, (2005) arXiv:quant-ph/0410047 DOI
K. M. Svore, D. P. DiVincenzo, and B. M. Terhal, "Noise Threshold for a Fault-Tolerant Two-Dimensional Lattice Architecture", (2006) arXiv:quant-ph/0604090
D. Gottesman, Surviving as a quantum computer in a classical world (2024) URL
F. Pastawski, J. Eisert, and H. Wilming, "Towards Holography via Quantum Source-Channel Codes", Physical Review Letters 119, (2017) arXiv:1611.07528 DOI
S. Sang, T. H. Hsieh, and Y. Zou, "Approximate quantum error correcting codes from conformal field theory", (2024) arXiv:2406.09555
P. Hayden et al., "Error Correction of Quantum Reference Frame Information", PRX Quantum 2, (2021) arXiv:1709.04471 DOI
C. Derby et al., "Compact fermion to qubit mappings", Physical Review B 104, (2021) arXiv:2003.06939 DOI
L. Clinton et al., "Towards near-term quantum simulation of materials", Nature Communications 15, (2024) arXiv:2205.15256 DOI
A. Yu. Kitaev, "Fault-tolerant quantum computation by anyons", Annals of Physics 303, 2 (2003) arXiv:quant-ph/9707021 DOI
B. Yoshida, "Topological phases with generalized global symmetries", Physical Review B 93, (2016) arXiv:1508.03468 DOI
M. de W. Propitius, "Topological interactions in broken gauge theories", (1995) arXiv:hep-th/9511195
L. Lootens et al., "Mapping between Morita-equivalent string-net states with a constant depth quantum circuit", Physical Review B 105, (2022) arXiv:2112.12757 DOI
R. Dijkgraaf and E. Witten, "Topological gauge theories and group cohomology", Communications in Mathematical Physics 129, 393 (1990) DOI
D. S. Freed and F. Quinn, "Chern-Simons theory with finite gauge group", Communications in Mathematical Physics 156, 435 (1993) arXiv:hep-th/9111004 DOI
A. Mesaros and Y. Ran, "Classification of symmetry enriched topological phases with exactly solvable models", Physical Review B 87, (2013) arXiv:1212.0835 DOI
J. C. Wang and X.-G. Wen, "Non-Abelian string and particle braiding in topological order: Modular SL(3,Z) representation and (3+1)-dimensional twisted gauge theory", Physical Review B 91, (2015) arXiv:1404.7854 DOI
Y. Wan, J. C. Wang, and H. He, "Twisted gauge theory model of topological phases in three dimensions", Physical Review B 92, (2015) arXiv:1409.3216 DOI
M. B. Hastings, "Weight Reduction for Quantum Codes", (2016) arXiv:1611.03790
S. Evra, T. Kaufman, and G. Zémor, "Decodable quantum LDPC codes beyond the \(\sqrt{n}\) distance barrier using high dimensional expanders", (2020) arXiv:2004.07935
T. D. Ellison et al., "Pauli topological subsystem codes from Abelian anyon theories", Quantum 7, 1137 (2023) arXiv:2211.03798 DOI
M. A. Levin and X.-G. Wen, "String-net condensation: A physical mechanism for topological phases", Physical Review B 71, (2005) arXiv:cond-mat/0404617 DOI
M. H. Freedman and M. B. Hastings, "Double Semions in Arbitrary Dimension", Communications in Mathematical Physics 347, 389 (2016) arXiv:1507.05676 DOI
G. Dauphinais et al., "Quantum error correction with the semion code", New Journal of Physics 21, 053035 (2019) arXiv:1810.08204 DOI
S. Hoory, N. Linial, and A. Wigderson, "Expander graphs and their applications", Bulletin of the American Mathematical Society 43, 439 (2006) DOI
S. B. Bravyi and A. Yu. Kitaev, "Fermionic Quantum Computation", Annals of Physics 298, 210 (2002) arXiv:quant-ph/0003137 DOI
J. Farinholt, "Quantum LDPC Codes Constructed from Point-Line Subsets of the Finite Projective Plane", (2012) arXiv:1207.0732
B. Audoux and A. Couvreur, "On tensor products of CSS Codes", (2018) arXiv:1512.07081
C. G. Brell, "A proposal for self-correcting stabilizer quantum memories in 3 dimensions (or slightly less)", New Journal of Physics 18, 013050 (2016) arXiv:1411.7046 DOI
A. Vezzani, "Spontaneous magnetization of the Ising model on the Sierpinski carpet fractal, a rigorous result", Journal of Physics A: Mathematical and General 36, 1593 (2003) arXiv:cond-mat/0212497 DOI
R. Campari and D. Cassi, "Generalization of the Peierls-Griffiths theorem for the Ising model on graphs", Physical Review E 81, (2010) arXiv:1002.1227 DOI
M. Shinoda, "Existence of phase transition of percolation on Sierpiński carpet lattices", Journal of Applied Probability 39, 1 (2002) DOI
M. H. Freedman, "Z\({}_{\text{2}}\)-Systolic-Freedom", Proceedings of the Kirbyfest (1999) DOI
E. Fetaya, "Bounding the distance of quantum surface codes", Journal of Mathematical Physics 53, (2012) DOI
N. C. Menicucci, "Fault-Tolerant Measurement-Based Quantum Computing with Continuous-Variable Cluster States", Physical Review Letters 112, (2014) arXiv:1310.7596 DOI
J. E. Bourassa et al., "Blueprint for a Scalable Photonic Fault-Tolerant Quantum Computer", Quantum 5, 392 (2021) arXiv:2010.02905 DOI
K. Fukui et al., "High-Threshold Fault-Tolerant Quantum Computation with Analog Quantum Error Correction", Physical Review X 8, (2018) arXiv:1712.00294 DOI
I. Tzitrin et al., "Fault-Tolerant Quantum Computation with Static Linear Optics", PRX Quantum 2, (2021) arXiv:2104.03241 DOI
C. Vuillot et al., "Quantum error correction with the toric Gottesman-Kitaev-Preskill code", Physical Review A 99, (2019) arXiv:1810.00047 DOI
K. Noh and C. Chamberland, "Fault-tolerant bosonic quantum error correction with the surface-Gottesman-Kitaev-Preskill code", Physical Review A 101, (2020) arXiv:1908.03579 DOI
M. V. Larsen et al., "Fault-Tolerant Continuous-Variable Measurement-based Quantum Computation Architecture", PRX Quantum 2, (2021) arXiv:2101.03014 DOI
K. Noh, C. Chamberland, and F. G. S. L. Brandão, "Low-Overhead Fault-Tolerant Quantum Error Correction with the Surface-GKP Code", PRX Quantum 3, (2022) arXiv:2103.06994 DOI
M. Lin, C. Chamberland, and K. Noh, "Closest Lattice Point Decoding for Multimode Gottesman-Kitaev-Preskill Codes", PRX Quantum 4, (2023) arXiv:2303.04702 DOI
J. Zhang, Y.-C. Wu, and G.-P. Guo, "Concatenation of the Gottesman-Kitaev-Preskill code with the XZZX surface code", Physical Review A 107, (2023) arXiv:2207.04383 DOI
G. G. La Guardia and R. Palazzo Jr., "Constructions of new families of nonbinary CSS codes", Discrete Mathematics 310, 2935 (2010) DOI
L. Jin and C. Xing, "A Construction of New Quantum MDS Codes", (2020) arXiv:1311.3009
X. Liu, L. Yu, and H. Liu, "New quantum codes from Hermitian dual-containing codes", International Journal of Quantum Information 17, 1950006 (2019) DOI
L. Jin et al., "Application of Classical Hermitian Self-Orthogonal MDS Codes to Quantum MDS Codes", IEEE Transactions on Information Theory 56, 4735 (2010) DOI
D. Aharonov and M. Ben-Or, "Fault-Tolerant Quantum Computation With Constant Error Rate", (1999) arXiv:quant-ph/9906129
Z. Li, L.-J. Xing, and X.-M. Wang, "Quantum generalized Reed-Solomon codes: Unified framework for quantum maximum-distance-separable codes", Physical Review A 77, (2008) arXiv:0812.4514 DOI
C.-Y. Lai and C.-C. Lu, "A Construction of Quantum Stabilizer Codes Based on Syndrome Assignment by Classical Parity-Check Matrices", IEEE Transactions on Information Theory 57, 7163 (2011) arXiv:0712.0103 DOI
D. J. C. MacKay, G. Mitchison, and P. L. McFadden, "Sparse-Graph Codes for Quantum Error Correction", IEEE Transactions on Information Theory 50, 2315 (2004) arXiv:quant-ph/0304161 DOI
J. Haah, "Towers of generalized divisible quantum codes", Physical Review A 97, (2018) arXiv:1709.08658 DOI
D. Schlingemann and R. F. Werner, "Quantum error-correcting codes associated with graphs", Physical Review A 65, (2001) arXiv:quant-ph/0012111 DOI
M. Grassl, A. Klappenecker, and M. Rotteler, "Graphs, quadratic forms, and quantum codes", Proceedings IEEE International Symposium on Information Theory, arXiv:quant-ph/0703112 DOI
Y. Hwang and J. Heo, "On the relation between a graph code and a graph state", (2015) arXiv:1511.05647
C. G. Brell, "Generalized cluster states based on finite groups", New Journal of Physics 17, 023029 (2015) arXiv:1408.6237 DOI
R. Brown, "From Groups to Groupoids: a Brief Survey", Bulletin of the London Mathematical Society 19, 113 (1987) DOI
"Preface to the Second Edition", Quantum Information Theory xi (2016) arXiv:1106.1445 DOI
P. Hayden and A. May, "Summoning information in spacetime, or where and when can a qubit be?", Journal of Physics A: Mathematical and Theoretical 49, 175304 (2016) arXiv:1210.0913 DOI
N. Delfosse, M. E. Beverland, and M. A. Tremblay, "Bounds on stabilizer measurement circuits and obstructions to local implementations of quantum LDPC codes", (2021) arXiv:2109.14599
N. Baspin, O. Fawzi, and A. Shayeghi, "A lower bound on the overhead of quantum error correction in low dimensions", (2023) arXiv:2302.04317
A. Lubotzky, R. Phillips, and P. Sarnak, "Ramanujan graphs", Combinatorica 8, 261 (1988) DOI
G. Davidoff, P. Sarnak, and A. Valette, Elementary Number Theory, Group Theory and Ramanujan Graphs (Cambridge University Press, 2001) DOI
A. Kitaev, "Anyons in an exactly solved model and beyond", Annals of Physics 321, 2 (2006) arXiv:cond-mat/0506438 DOI
M. Davydova, N. Tantivasadakarn, and S. Balasubramanian, "Floquet Codes without Parent Subsystem Codes", PRX Quantum 4, (2023) arXiv:2210.02468 DOI
J. Sullivan, R. Wen, and A. C. Potter, "Floquet codes and phases in twist-defect networks", (2023) arXiv:2303.17664
O. Buerschaper et al., "A hierarchy of topological tensor network states", Journal of Mathematical Physics 54, (2013) arXiv:1007.5283 DOI
B. Balsam and A. Kirillov Jr, "Kitaev's Lattice Model and Turaev-Viro TQFTs", (2012) arXiv:1206.2308
Z. Jia, D. Kaszlikowski, and S. Tan, "Boundary and domain wall theories of 2d generalized quantum double model", Journal of High Energy Physics 2023, (2023) arXiv:2207.03970 DOI
A. Cowtan and S. Majid, "Algebraic Aspects of Boundaries in the Kitaev Quantum Double Model", (2022) arXiv:2208.06317
P. G. Kwiat, "Hyper-entangled states", Journal of Modern Optics 44, 2173 (1997) DOI
E. B. da Silva and W. S. Soares Jr, "Hyperbolic quantum color codes", (2018) arXiv:1804.06382
N. Delfosse, "Tradeoffs for reliable quantum information storage in surface codes and color codes", 2013 IEEE International Symposium on Information Theory (2013) arXiv:1301.6588 DOI
H. Bombin and M. A. Martin-Delgado, "Exact topological quantum order in D=3 and beyond: Branyons and brane-net condensates", Physical Review B 75, (2007) arXiv:cond-mat/0607736 DOI
C. Vuillot and N. P. Breuckmann, "Quantum Pin Codes", IEEE Transactions on Information Theory 68, 5955 (2022) arXiv:1906.11394 DOI
W. Zeng and L. P. Pryadko, "Higher-Dimensional Quantum Hypergraph-Product Codes with Finite Rates", Physical Review Letters 122, (2019) arXiv:1810.01519 DOI
G. Evenbly, "Hyperinvariant Tensor Networks and Holography", Physical Review Letters 119, (2017) arXiv:1704.04229 DOI
S. M. Girvin, "Circuit QED: superconducting qubits coupled to microwave photons", Quantum Machines: Measurement and Control of Engineered Quantum Systems 113 (2014) DOI
C. Vuillot, A. Ciani, and B. M. Terhal, "Homological Quantum Rotor Codes: Logical Qubits from Torsion", (2023) arXiv:2303.13723
S. Bravyi, "Universal quantum computation with the ν=5/2 fractional quantum Hall state", Physical Review A 73, (2006) arXiv:quant-ph/0511178 DOI
E. Dennis et al., "Topological quantum memory", Journal of Mathematical Physics 43, 4452 (2002) arXiv:quant-ph/0110143 DOI
R. Alicki et al., "On thermal stability of topological qubit in Kitaev's 4D model", (2008) arXiv:0811.0033
M. Capalbo et al., "Randomness conductors and constant-degree lossless expanders", Proceedings of the thirty-fourth annual ACM symposium on Theory of computing (2002) DOI
T.-C. Lin and M.-H. Hsieh, "\(c^3\)-Locally Testable Codes from Lossless Expanders", (2022) arXiv:2201.11369
T.-C. Lin and M.-H. Hsieh, "Good quantum LDPC codes with linear time decoder from lossless expanders", (2022) arXiv:2203.03581
M. Gschwendtner et al., "Quantum error-detection at low energies", Journal of High Energy Physics 2019, (2019) arXiv:1902.02115 DOI
D. Litinski and F. von Oppen, "Quantum computing with Majorana fermion codes", Physical Review B 97, (2018) arXiv:1801.08143 DOI
S. Vijay and L. Fu, "Quantum Error Correction for Complex and Majorana Fermion Qubits", (2017) arXiv:1703.00459
H. Bombin, "Gauge Color Codes: Optimal Transversal Gates and Gauge Fixing in Topological Stabilizer Codes", (2015) arXiv:1311.0879
F. H. E. Watson et al., "Qudit color codes and gauge color codes in all spatial dimensions", Physical Review A 92, (2015) arXiv:1503.08800 DOI
S. S. Bullock and G. K. Brennen, "Qudit surface codes and gauge theory with finite cyclic groups", Journal of Physics A: Mathematical and Theoretical 40, 3481 (2007) arXiv:quant-ph/0609070 DOI
H. Watanabe, M. Cheng, and Y. Fuji, "Ground state degeneracy on torus in a family of Z_N toric code", Journal of Mathematical Physics 64, (2023) arXiv:2211.00299 DOI
Y. Li, X. Chen, and M. P. A. Fisher, "Measurement-driven entanglement transition in hybrid quantum circuits", Physical Review B 100, (2019) arXiv:1901.08092 DOI
S. Choi et al., "Quantum Error Correction in Scrambling Dynamics and Measurement-Induced Phase Transition", Physical Review Letters 125, (2020) arXiv:1903.05124 DOI
M. J. Gullans and D. A. Huse, "Dynamical Purification Phase Transition Induced by Quantum Measurements", Physical Review X 10, (2020) arXiv:1905.05195 DOI
J. Hoffstein, J. Pipher, and J. H. Silverman, "NTRU: A ring-based public key cryptosystem", Lecture Notes in Computer Science 267 (1998) DOI
S. M. Barnett and D. T. Pegg, "Phase in quantum optics", Journal of Physics A: Mathematical and General 19, 3849 (1986) DOI
M. H. Michael et al., "New Class of Quantum Error-Correcting Codes for a Bosonic Mode", Physical Review X 6, (2016) arXiv:1602.00008 DOI
K. Noh, V. V. Albert, and L. Jiang, "Quantum Capacity Bounds of Gaussian Thermal Loss Channels and Achievable Rates With Gottesman-Kitaev-Preskill Codes", IEEE Transactions on Information Theory 65, 2563 (2019) arXiv:1801.07271 DOI
P. Leviant et al., "Quantum capacity and codes for the bosonic loss-dephasing channel", Quantum 6, 821 (2022) arXiv:2205.00341 DOI
Z. Wang et al., "Automated discovery of autonomous quantum error correction schemes", (2021) arXiv:2108.02766
Y. Zeng et al., "Approximate Autonomous Quantum Error Correction with Reinforcement Learning", Physical Review Letters 131, (2023) arXiv:2212.11651 DOI
R. D. Somma, "Quantum Computation, Complexity, and Many-Body Physics", (2005) arXiv:quant-ph/0512209
M. R. Geller et al., "Universal quantum simulation with prethreshold superconducting qubits: Single-excitation subspace method", (2015) arXiv:1505.04990
S. McArdle et al., "Digital quantum simulation of molecular vibrations", Chemical Science 10, 5725 (2019) arXiv:1811.04069 DOI
N. P. D. Sawaya and J. Huh, "Quantum Algorithm for Calculating Molecular Vibronic Spectra", The Journal of Physical Chemistry Letters 10, 3586 (2019) arXiv:1812.10495 DOI
N. P. D. Sawaya et al., "Resource-efficient digital quantum simulation of d-level systems for photonic, vibrational, and spin-s Hamiltonians", npj Quantum Information 6, (2020) arXiv:1909.12847
T. J. Osborne and D. E. Stiegemann, "Dynamics for holographic codes", Journal of High Energy Physics 2020, (2020) arXiv:1706.08823 DOI
J. Cotler and A. Strominger, "The Universe as a Quantum Encoder", (2022) arXiv:2201.11658
M. Taylor and C. Woodward, "Holography, cellulations and error correcting codes", (2023) arXiv:2112.12468
W. Donnelly et al., "Living on the edge: a toy model for holographic reconstruction of algebras with centers", Journal of High Energy Physics 2017, (2017) arXiv:1611.05841 DOI
K. Dolev et al., "Gauging the bulk: generalized gauging maps and holographic codes", Journal of High Energy Physics 2022, (2022) arXiv:2108.11402 DOI
D. Harlow and H. Ooguri, "Symmetries in quantum field theory and quantum gravity", (2019) arXiv:1810.05338
M. Doroudiani and V. Karimipour, "Planar maximally entangled states", Physical Review A 102, (2020) arXiv:2004.00906 DOI
E. T. Campbell, H. Anwar, and D. E. Browne, "Magic-State Distillation in All Prime Dimensions Using Quantum Reed-Muller Codes", Physical Review X 2, (2012) arXiv:1205.3104 DOI
A. Krishna and J.-P. Tillich, "Towards Low Overhead Magic State Distillation", Physical Review Letters 123, (2019) arXiv:1811.08461 DOI
S. Prakash and T. Saha, "Low Overhead Qutrit Magic State Distillation", (2024) arXiv:2403.06228
H. Barnum et al., "Authentication of quantum messages", The 43rd Annual IEEE Symposium on Foundations of Computer Science, 2002. Proceedings. arXiv:quant-ph/0205128 DOI
L. Golowich and V. Guruswami, "Quantum Locally Recoverable Codes", (2023) arXiv:2311.08653
O. Å. Mostad, E. Rosnes, and H.-Y. Lin, "Generalizing Quantum Tanner Codes", (2024) arXiv:2405.07980
A. Ketkar et al., "Nonbinary stabilizer codes over finite fields", (2005) arXiv:quant-ph/0508070
M. Hagiwara et al., "Spatially Coupled Quasi-Cyclic Quantum LDPC Codes", (2011) arXiv:1102.3181
S. Yang and R. Calderbank, "Spatially-Coupled QDLPC Codes", (2023) arXiv:2305.00137
D. Poulin, J.-P. Tillich, and H. Ollivier, "Quantum serial turbo-codes", (2009) arXiv:0712.2888
M. Hagiwara and H. Imai, "Quantum Quasi-Cyclic LDPC Codes", 2007 IEEE International Symposium on Information Theory (2007) arXiv:quant-ph/0701020 DOI
K. Kasai et al., "Quantum Error Correction Beyond the Bounded Distance Decoding Limit", IEEE Transactions on Information Theory 58, 1223 (2012) arXiv:1007.1778 DOI
H. Bombin, "Topological Order with a Twist: Ising Anyons from an Abelian Model", Physical Review Letters 105, (2010) arXiv:1004.1838 DOI
S. Sachdev and J. Ye, "Gapless spin-fluid ground state in a random quantum Heisenberg magnet", Physical Review Letters 70, 3339 (1993) arXiv:cond-mat/9212030 DOI
A. Kitaev, "A simple model of quantum holography (part 2)", Entanglement in Strongly-Correlated Quantum Matter (2015) 38.
J. Kim, X. Cao, and E. Altman, "Low-rank Sachdev-Ye-Kitaev models", Physical Review B 101, (2020) arXiv:1910.10173 DOI
J. Kim, E. Altman, and X. Cao, "Dirac fast scramblers", Physical Review B 103, (2021) arXiv:2010.10545 DOI
R. Lang and P. W. Shor, "Nonadditive quantum error correcting codes adapted to the amplitude damping channel", (2007) arXiv:0712.2586
O. Landon-Cardinal et al., "Perturbative instability of quantum memory based on effective long-range interactions", Physical Review A 91, (2015) arXiv:1501.04112 DOI
H.-W. Lee and J. Kim, "Quantum teleportation and Bell's inequality using single-particle entanglement", Physical Review A 63, (2000) arXiv:quant-ph/0007106 DOI
A. P. Lund and T. C. Ralph, "Nondeterministic gates for photonic single-rail quantum logic", Physical Review A 66, (2002) arXiv:quant-ph/0205044 DOI
D. Leung and G. Smith, "Communicating over adversarial quantum channels using quantum list codes", (2007) arXiv:quant-ph/0605086
T. Bergamaschi, L. Golowich, and S. Gunn, "Approaching the Quantum Singleton Bound with Approximate Error Correction", (2022) arXiv:2212.09935
H. Q. Dinh et al., "A class of skew cyclic codes and application in quantum codes construction", Discrete Mathematics 344, 112189 (2021) DOI
M. Ashraf and G. Mohammad, "Quantum codes over Fp from cyclic codes over Fp[u, v]/⟨u^2 − 1, v^3 − v, uv − vu⟩", Cryptography and Communications 11, 325 (2018) DOI
S. Yu, Q. Chen, and C. H. Oh, "Graphical Quantum Error-Correcting Codes", (2007) arXiv:0709.1780
T. Holstein and H. Primakoff, "Field Dependence of the Intrinsic Domain Magnetization of a Ferromagnet", Physical Review 58, 1098 (1940) DOI
C. D. Cushen and R. L. Hudson, "A quantum-mechanical central limit theorem", Journal of Applied Probability 8, 454 (1971) DOI
C. Jones et al., "Local topological order and boundary algebras", (2023) arXiv:2307.12552
C. Cao and B. Lackey, "Quantum Lego: Building Quantum Error Correction Codes from Tensor Networks", PRX Quantum 3, (2022) arXiv:2109.08158 DOI
A. Kubica et al., "Three-Dimensional Color Code Thresholds via Statistical-Mechanical Mapping", Physical Review Letters 120, (2018) arXiv:1708.07131 DOI
E. Rowell, R. Stong, and Z. Wang, "On classification of modular tensor categories", (2009) arXiv:0712.1377
H. Bombin, M. Kargarian, and M. A. Martin-Delgado, "Interacting anyonic fermions in a two-body color code model", Physical Review B 80, (2009) arXiv:0811.0911 DOI
H. Bombin, G. Duclos-Cianci, and D. Poulin, "Universal topological phase of two-dimensional stabilizer codes", New Journal of Physics 14, 073048 (2012) arXiv:1103.4606 DOI
S. Roberts and D. J. Williamson, "3-Fermion Topological Quantum Computation", PRX Quantum 5, (2024) arXiv:2011.04693 DOI
H. Bombin, "Topological subsystem codes", Physical Review A 81, (2010) arXiv:0908.4246 DOI
N. P. Breuckmann and J. N. Eberhardt, "Balanced Product Quantum Codes", IEEE Transactions on Information Theory 67, 6653 (2021) arXiv:2012.09271 DOI
S. Burton, E. Durso-Sabina, and N. C. Brown, "Genons, Double Covers and Fault-tolerant Clifford Gates", (2024) arXiv:2406.09951
A. Robertson et al., "Tailored Codes for Small Quantum Memories", Physical Review Applied 8, (2017) arXiv:1703.08179 DOI
Q. Xu et al., "Tailored XZZX codes for biased noise", (2022) arXiv:2203.16486
D. Naidu and D. Nikshych, "Lagrangian Subcategories and Braided Tensor Equivalences of Twisted Quantum Doubles of Finite Groups", Communications in Mathematical Physics 279, 845 (2008) arXiv:0705.0665 DOI
A. A. Kovalev and L. P. Pryadko, "Quantum Kronecker sum-product low-density parity-check codes with finite rate", Physical Review A 88, (2013) arXiv:1212.6703 DOI
J. C. Baez and A. D. Lauda, "Higher-Dimensional Algebra V: 2-Groups", (2004) arXiv:math/0307200
J. Baez and U. Schreiber, "Higher Gauge Theory: 2-Connections on 2-Bundles", (2004) arXiv:hep-th/0412325
J. C. Baez and U. Schreiber, "Higher Gauge Theory", (2006) arXiv:math/0511710
J. C. Baez and J. Huerta, "An invitation to higher gauge theory", General Relativity and Gravitation 43, 2335 (2010) arXiv:1003.4485 DOI
S. Gukov and A. Kapustin, "Topological Quantum Field Theory, Nonlocal Operators, and Gapped Phases of Gauge Theories", (2013) arXiv:1307.4793
A. Kapustin and R. Thorngren, "Topological Field Theory on a Lattice, Discrete Theta-Angles and Confinement", (2013) arXiv:1308.2926
A. Kapustin and R. Thorngren, "Higher symmetry and gapped phases of gauge theories", (2015) arXiv:1309.4721
A. Bullivant et al., "Higher lattices, discrete two-dimensional holonomy and topological phases in (3 + 1)D with higher gauge symmetry", Reviews in Mathematical Physics 32, 2050011 (2019) arXiv:1702.00868 DOI
A. Bullivant et al., "Topological phases from higher gauge symmetry in 3+1 dimensions", Physical Review B 95, (2017) arXiv:1606.06639 DOI
C. Delcamp and A. Tiwari, "From gauge to higher gauge models of topological phases", Journal of High Energy Physics 2018, (2018) arXiv:1802.10104 DOI
C. Delcamp and A. Tiwari, "On 2-form gauge models of topological phases", Journal of High Energy Physics 2019, (2019) arXiv:1901.02249 DOI
Z. Wan, J. Wang, and Y. Zheng, "Quantum 4d Yang-Mills theory and time-reversal symmetric 5d higher-gauge topological field theory", Physical Review D 100, (2019) arXiv:1904.00994 DOI
D. N. Yetter, "TQFT's from Homotopy 2-Types", Journal of Knot Theory and Its Ramifications 02, 113 (1993) DOI
T. Porter, "Topological Quantum Field Theories from Homotopy n-Types", Journal of the London Mathematical Society 58, 723 (1998) DOI
T. Porter, "Interpretations of Yetter's Notion of G-Coloring: Simplicial Fibre Bundles and Non-Abelian Cohomology", Journal of Knot Theory and Its Ramifications 05, 687 (1996) DOI
M. Mackaay, "Finite groups, spherical 2-categories, and 4-manifold invariants", (1999) arXiv:math/9903003
I. Affleck et al., "Rigorous Results on Valence-Bond Ground States in Antiferromagnets", Condensed Matter Physics and Exactly Soluble Models 249 (2004) DOI
W. Dür, G. Vidal, and J. I. Cirac, "Three qubits can be entangled in two inequivalent ways", Physical Review A 62, (2000) arXiv:quant-ph/0005115 DOI
K. Walker and Z. Wang, "(3+1)-TQFTs and Topological Insulators", (2011) arXiv:1104.2632
L. Crane and D. N. Yetter, "A categorical construction of 4D TQFTs", (1993) arXiv:hep-th/9301062
L. Crane, L. H. Kauffman, and D. N. Yetter, "Evaluating the Crane-Yetter Invariant", (1993) arXiv:hep-th/9309063
L. Crane, L. H. Kauffman, and D. N. Yetter, "State-Sum Invariants of 4-Manifolds I", (1994) arXiv:hep-th/9409167
H. Ma et al., "Fracton topological order via coupled layers", Physical Review B 95, (2017) arXiv:1701.00747 DOI
W. Shirley, K. Slagle, and X. Chen, "Fractional excitations in foliated fracton phases", Annals of Physics 410, 167922 (2019) arXiv:1806.08625 DOI
T. Rakovszky and V. Khemani, "The Physics of (good) LDPC Codes II. Product constructions", (2024) arXiv:2402.16831
K. Tiurev et al., "Domain Wall Color Code", Physical Review Letters 133, (2024) arXiv:2307.00054 DOI
Z. Liang et al., "High-dimensional quantum XYZ product codes for biased noise", (2024) arXiv:2408.03123
A. Dua et al., "Engineering 3D Floquet Codes by Rewinding", PRX Quantum 5, (2024) arXiv:2307.13668 DOI
V. Aggarwal and A. R. Calderbank, "Boolean Functions, Projection Operators, and Quantum Error Correcting Codes", IEEE Transactions on Information Theory 54, 1700 (2008) arXiv:cs/0610159 DOI
K. Feng and C. Xing, "A new construction of quantum error-correcting codes", Transactions of the American Mathematical Society 360, 2007 (2007) DOI
A. Rigby, J. C. Olivier, and P. Jarvis, "Heuristic construction of codeword stabilized codes", Physical Review A 100, (2019) arXiv:1907.04537 DOI
A. R. Calderbank et al., "Quantum Error Correction via Codes over GF(4)", (1997) arXiv:quant-ph/9608006
S. Bravyi et al., "How much entanglement is needed for quantum error correction?", (2024) arXiv:2405.01332
G. Gour and N. R. Wallach, "Entanglement of subspaces and error-correcting codes", Physical Review A 76, (2007) arXiv:0704.0251 DOI
M. B. Hastings, "Quantum Codes from High-Dimensional Manifolds", (2016) arXiv:1608.05089
M. Barkeshli et al., "Symmetry fractionalization, defects, and gauging of topological phases", Physical Review B 100, (2019) arXiv:1410.4540 DOI
S. X. Cui, "Four dimensional topological quantum field theories from \(G\)-crossed braided categories", Quantum Topology 10, 593 (2019) arXiv:1610.07628 DOI
R. Kashaev, "A simple model of 4d-TQFT", (2014) arXiv:1405.5763
R. Kashaev, "On realizations of Pachner moves in 4D", (2015) arXiv:1504.01979
D. Bulmash and M. Barkeshli, "Absolute anomalies in (2+1)D symmetry-enriched topological states and exact (3+1)D constructions", Physical Review Research 2, (2020) arXiv:2003.11553 DOI
A. A. Kovalev, I. Dumer, and L. P. Pryadko, "Design of additive quantum codes via the code-word-stabilized framework", Physical Review A 84, (2011) arXiv:1108.5490 DOI
D. Hangleiter et al., "Fault-tolerant compiling of classically hard IQP circuits on hypercubes", (2024) arXiv:2404.19005
A. J. Moorthy and L. G. Gunderman, "Local-dimension-invariant Calderbank-Shor-Steane Codes with an Improved Distance Promise", (2021) arXiv:2110.11510
B. Zeng et al., "Local unitary versus local Clifford equivalence of stabilizer and graph states", Physical Review A 75, (2007) arXiv:quant-ph/0611214 DOI
S. X. Cui, D. Gottesman, and A. Krishna, "Diagonal gates in the Clifford hierarchy", Physical Review A 95, (2017) arXiv:1608.06596 DOI
B. J. Brown, N. H. Nickerson, and D. E. Browne, "Fault-tolerant error correction with the gauge color code", Nature Communications 7, (2016) arXiv:1503.08217 DOI
N. Rengaswamy et al., "Synthesis of Logical Clifford Operators via Symplectic Geometry", 2018 IEEE International Symposium on Information Theory (ISIT) (2018) arXiv:1803.06987 DOI
K. Betsumiya and A. Munemasa, "On triply even binary codes", Journal of the London Mathematical Society 86, 1 (2012) arXiv:1012.4134 DOI
S. Bravyi and J. Haah, "Magic-state distillation with low overhead", Physical Review A 86, (2012) arXiv:1209.2426 DOI
K. Betsumiya and A. Munemasa, "On triply even binary codes", Journal of the London Mathematical Society 86, 1 (2012) DOI
H. F. Chau, "Five quantum register error correction code for higher spin systems", Physical Review A 56, R1 (1997) arXiv:quant-ph/9702033 DOI
E. M. Rains, "Nonbinary quantum codes", (1997) arXiv:quant-ph/9703048
B. Shaw et al., "Encoding one logical qubit into six physical qubits", Physical Review A 78, (2008) arXiv:0803.1495 DOI
K. Feng, "Quantum codes [[6, 2, 3]]_p and [[7, 3, 3]]_p (p ≥ 3) exist", IEEE Transactions on Information Theory 48, 2384 (2002) DOI
Z. Wang et al., "Quantum error-correcting codes over mixed alphabets", Physical Review A 88, (2013) arXiv:1205.4253 DOI
H. Goto, "High-performance fault-tolerant quantum computing with many-hypercube codes", Science Advances 10, (2024) arXiv:2403.16054 DOI
A. Nemec and A. Klappenecker, "Infinite Families of Quantum-Classical Hybrid Codes", (2020) arXiv:1911.12260
A. M. Steane, "Simple quantum error-correcting codes", Physical Review A 54, 4741 (1996) arXiv:quant-ph/9605021 DOI
M. Barkeshli et al., "Generalized Kitaev Models and Extrinsic Non-Abelian Twist Defects", Physical Review Letters 114, (2015) arXiv:1405.1780 DOI
P. H. Bonderson, Non-Abelian Anyons and Interferometry, California Institute of Technology, 2007 DOI
S. Koutsioumpas, D. Banfield, and A. Kay, "The Smallest Code with Transversal T", (2022) arXiv:2210.14066 | {"url":"https://errorcorrectionzoo.org/list/approximate_qecc","timestamp":"2024-11-04T10:17:49Z","content_type":"text/html","content_length":"341054","record_id":"<urn:uuid:04d5322f-b7c0-427c-8ac2-0c800b0cdcf4>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00403.warc.gz"} |
The word dividend means the number that is to be divided.
• 8/2 means 'eight divided by two' and in this case 8 is the dividend, the number that we will divide by 2 to get the answer (4).
→ The above problem could alternatively be written as 8 ÷ 2 or as the fraction 8/2.
• 11/20 means 'eleven divided by twenty' and in this case 11 is the dividend.
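In code form (an illustration added here, not part of the original entry), the same vocabulary looks like this:

dividend, divisor = 11, 20
quotient = dividend / divisor  # 11 is the dividend, 20 is the divisor
print(quotient)                # 0.55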
→ Therefore it should be noted that the dividend does not have to be greater than the divisor. | {"url":"https://www.mathwarehouse.com/dictionary/D-words/definition-of-dividend.php","timestamp":"2024-11-06T06:21:51Z","content_type":"text/html","content_length":"39027","record_id":"<urn:uuid:7c8037b0-cfde-4d22-9863-704b44da6822>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00737.warc.gz"} |
Area of triangle Calculator - Online Area of triangle Calculator
Area of triangle calculator
Area of a triangle calculator is a free online tool that helps you to find the area of a triangle when the lengths of base and height are given. A triangle is a type of closed polygon that has
exactly 3 sides, 3 vertices, and 3 angles.
What is the Area of Triangle Calculator?
Area of triangle calculator computes the value of the area of a triangle when the height and base are known. Depending upon the known parameters as well as the type of triangle there are different
methods to calculate the area of a triangle. To use the area of triangle calculator, enter the values in the input boxes.
How to Use Area of Triangle Calculator?
Follow the steps given below to calculate the area of a triangle using the area of triangle calculator.
• Step 1: Go to Cuemath's online area of triangle calculator.
• Step 2: Enter the values of the height and base in the input boxes of the area of triangle calculator.
• Step 3: Click on the "Calculate" button to find the area of the triangle.
• Step 4: Click on "Reset" to clear the field and enter new values.
How Does Area of Triangle Calculator Work?
The 2-dimensional space that is enclosed within the three boundaries of a triangle is known as the area of that triangle. Triangles can be classified based on the measure of their sides and angles as
given below:
Based on Sides
• Equilateral triangle - all sides are of equal measure.
• Isosceles triangle - only two sides are equal in length.
• Scalene triangle - all sides are of unequal length.
Based on Angles
• Acute triangle - all internal angles are acute, that is, they measure less than 90 degrees.
• Right triangle - one angle is a right angle (measures 90 degrees).
• Obtuse triangle - one angle is greater than 90 degrees (obtuse angle).
If all sides of a triangle are known we use Heron's formula to calculate the area. In some special cases, we can also apply trigonometric formulas to find the area of a triangle. However, the easiest
technique to find the area of the triangle is given by the base-height formula. This is written as follows:
Area of a triangle = 1/2 × base × height (or perpendicular height).
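For readers who prefer code, a small Python sketch (not part of the original page) implements the base-height formula above, plus Heron's formula for the all-sides case mentioned earlier; the printed values match the two solved examples below:

def area_base_height(base, height):
    '''Area = 1/2 * base * perpendicular height.'''
    return 0.5 * base * height

def area_heron(a, b, c):
    '''Heron's formula for a triangle with side lengths a, b, c.'''
    s = (a + b + c) / 2  # semi-perimeter
    return (s * (s - a) * (s - b) * (s - c)) ** 0.5

print(area_base_height(4, 7))   # 14.0, matching Example 1
print(area_base_height(9, 16))  # 72.0, matching Example 2
print(area_heron(3, 4, 5))      # 6.0 for the 3-4-5 right triangle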
Solved Examples on Area of Triangle
Example 1: Find the area of the triangle whose base is 4 units and height is 7 units.
Base = 4 units.
Height = 7 units
Area of triangle = (1/2 × base × height) square units
= 1/2 × 4 × 7 square units
= 14 square units
Therefore, the area of the given triangle is 14 square units.
Example 2: Find the area of the triangle whose base is 9 units and height is 16 units.
Base = 9 units.
Height = 16 units
Area of triangle = (1/2 × base × height) square units
= 1/2 × 9 × 16 square units
= 72 square units
Therefore, the area of the given triangle is 72 square units.
Now, use the area of triangle calculator to find the area of the triangles with the following values of base and height:
โข Base = 6 units and height = 9 units
โข Base = 24 units and height = 22 unit
| {"url":"https://www.cuemath.com/calculators/area-of-triangle-calculator/","timestamp":"2024-11-02T19:01:22Z","content_type":"text/html","content_length":"209241","record_id":"<urn:uuid:6ed5f06d-7563-40e1-84bd-8d1f0aa1f1d4>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00072.warc.gz"} |
Large scale shell model calculations for double beta decay of 48Ca
Neutrinoless double beta decay provides a way to determine the effective mass of neutrinos. If the half-life of neutrinoless double beta decay is measured, the effective neutrino mass can be determined using the value of the nuclear matrix element (NME). In this talk a new NME value for the neutrinoless double beta decay of 48Ca, based on large-scale shell-model calculations, is presented and compared to the existing theoretical data. To examine the reliability of our shell-model calculations, the calculated nuclear matrix element for two-neutrino double beta decay is also presented and compared to experimental data. Consequently, a constraint on the effective neutrino mass is suggested based on the latest NME value. The impact of a sterile neutrino on the lifetime of the double beta decay of 48Ca will also be presented.
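For orientation, the standard light-neutrino-exchange relation behind this procedure (not spelled out in the abstract; conventions for the phase-space factor \(G^{0\nu}\) and the NME \(M^{0\nu}\) vary between papers) is

\[ \left[T_{1/2}^{0\nu}\right]^{-1} = G^{0\nu}\,\left|M^{0\nu}\right|^{2}\,\frac{\langle m_{\beta\beta}\rangle^{2}}{m_{e}^{2}}, \]

so a measured half-life, combined with a calculated NME, fixes the effective mass \(\langle m_{\beta\beta}\rangle\).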
References: [1] Y. Iwata, N. Shimizu, T. Otsuka, J. Menendez, Y. Utsuno, M. Honma, and T. Abe, Phys. Rev. Lett. 116 (2016) 112502. [2] Y. Iwata, N. Shimizu, T. Otsuka, J. Menendez, Y. Utsuno, M.
Honma, and T. Abe, JPS Conf.Proc. 6 (2015) 030057. | {"url":"https://indico2.riken.jp/event/2333/","timestamp":"2024-11-08T08:26:04Z","content_type":"text/html","content_length":"95957","record_id":"<urn:uuid:49cf4093-9704-45c1-af43-dfce3ee78200>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00815.warc.gz"} |
Languaging Involvement
Freedom of Choice is the essence of life.
Making a choice involves changing involvement.
By changing our involvement, we change our realisation.
What we need, in science is a way to describe our involvement.
For many years already i knew that science is incomplete without such a description.
In engineering school, i realised that the model of the outsider-observer was wrong.
In medical school, i realised that most/all diseases stem from erroneous involvement.
The following series of Essays lays a foundation for a mathematics of involvement.
In part it makes use of my earliest work towards this; plus the later occasional sequels.
Human Limits to Man Made Models, my first ever paper, already laid the foundation.
Papers such as Subjective Objectivity (Objective Subjectivity) are clear steps on the way.
The paper "The Collapse of the Vector of State" in a way presented the evident conclusion: involvement matters.
In this series of papers, i will introduce work of people who helped me see the solution.
Some of the people i met in person, of others i got to know their ideas via their books.
In each case, their work already implied the need to describe our involvement.
By bringing their ideas together, that implicit insight becomes very explicit.
As is the case for all work presented on this web-site: this is work in progress.
More names will be added to the list; as the ideas are further unfolded.
No matter what other people may have written and concluded ...
... your own life will have already proven to you that your involvement matters!
You might have done excellent research on this yourself: please contribute.
There will be many people i do not know of who have done similar research.
There will be far more people who always realised that involvement matters ...
... but did not have/take the time to write down their specific findings.
One of the clearest domains where involvement matters, is in medical/Health care.
Specifically the psychic/normalpara forms of healing are based on modulating involvement.
The ancient mystic traditions and meditative trainings are all very aware of this.
What is needed now is a direct integration of this insight in (irresponsible) science.
By claiming/pretending to be "outsiders" of the measurement, scientists became irresponsible.
They could/did not take their own involvement/response-ability into account.
We all know and can see the consequences: scientific planetary destruction/wars.
All of which will end as soon as scientists MUST take their own involvement into account.
This has profound and direct implications: differences in vision can be reconciled.
That can be used, directly, in politics and integration of different possible perspectives.
The project "Integral Health Care" (initiated decades ago) is based on that realisation.
Evidently this also means the possibility of religious integration, by complementing perspectives.
The language of involvement integrates subjective realisation with objective reality.
It is based on individual uniqueness; and our evident inherent Freedom of Choice.
It implies respect for other perspectives, and release of bias: deDogmatisation.
And thereby answers the question "What is Science?": responsible choice in involvement.
- - -
It'll be evident that whatever we produce at that level will be a mathematical model; a mental construct.
This mathematical mental model/construct must comply with all rules existing for the use of languaging, as described by Maturana & Varela, and by Grinder & Bandler (the originators of neurolinguistic programming).
The elemental mental principles described by Uri Fidelman apply; these are not only mental processes, but also brain processes and thus also body processes.
Later on we will need to connect this up with the understanding of health and the integrity of (inter)cellular communication as described by Bert Verveen and Hans Selye.
We need to look at the relational pattern which is presented by the mathematical equations.
We must understand that the equations of mathematics are in fact formulations of the relationship which we define ourselves within our context; and by which we define our self within that context.
Uri Fidelman points out that they can map onto each other only if there are actual representations of what happens within our environment and in our body.
Jon Cunnyngham points out that this means that we must adapt the mathematical formulations/calculations if they do not match our psychological experiences/findings.
We need a mathematics of involvement, in which we do not describe the findings on the basis of the parameters that we analyse in looking at the world around us.
As Dimensional Analysis shows: the errors that you make at the level of interpretation will distort our perceptions/realisation.
This will cause errors in the definitions by which we function.
The mathematics of involvement will need to be
1) a description not only of the observation but also of
2) the observation as a verb, plus also of
3) the Observer in specifying the relationship to the environment
(as noted by Cees van Schooneveld), but more fundamentally
4) we will need to (cor)relate that to the internal cellular communication functions, as they take place within (y)our own body.
This concludes the introduction on the mathematics of our participation in creation. | {"url":"https://scienceoflife.nl/html/languaging_involvement.html","timestamp":"2024-11-11T01:52:03Z","content_type":"text/html","content_length":"34885","record_id":"<urn:uuid:2ec34aba-c50b-4bfc-8c57-230461e7ebb9>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00058.warc.gz"} |
Mastering Polynomial Factoring: A Comprehensive Guide - Code With C
Mastering Polynomial Factoring: A Comprehensive Guide
Understanding Polynomial Functions for Programming Applications
Hey there, code-loving peeps! Today, we're going to unravel the world of polynomial functions and how they play a vital role in programming. So, buckle up and let's embark on this thrilling polynomial function ride together!
Getting to Know Polynomial Functions
Ah, polynomial functions! They're like these cool math superheroes with capes made of equations. So, what are these funky functions exactly and what makes them tick? Well, hang tight as we wrap our heads around the nitty-gritty of polynomial functions.
What are Polynomial Functions?
Poly-what now? Polynomial functions are mathematical expressions comprising variables, coefficients, and exponents. Picture this: 5x^2 - 3x + 7; that's a classic example right there! These functions come in different shapes and sizes, from simple linear equations (y = mx + c) to mind-bending cubic or quadratic functions. They are the bread and butter of algebra and calculus, and they are everywhere in the programming universe.
Implementing Polynomial Functions in Programming
Let's shift gears and delve into the juicy stuff: the marriage of polynomial functions and programming! These functions aren't just for acing your math test; they're the secret sauce in building real-world applications. Got your GPS guiding you through traffic? Thank polynomial functions for those sweet route optimizations!
Use of Polynomial Functions in Programming
Real-world applications
Alright, so where do polynomial functions strut their stuff in the programming world? Think about anything that involves prediction, approximation, or optimization. From creating stunning visual
effects in games to modeling natural phenomena in simulations, polynomial functions are the unsung heroes working behind the scenes.
Advantages of using Polynomial Functions
Now, why are programmers head over heels for polynomial functions? It's simple: these functions bring versatility to the table. They can smoothly adapt to various scenarios without breaking a sweat. Additionally, they are excellent tools for curve-fitting, which is a crucial aspect of data analysis and machine learning tasks.
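To make that curve-fitting point concrete, here is a hedged sketch (my addition, not from the original post) using numpy's polyfit, which recovers polynomial coefficients from noisy data:

import numpy as np

# Noisy samples of y = 3x^2 + 2x + 1
rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 50)
y = 3 * x**2 + 2 * x + 1 + rng.normal(scale=2.0, size=x.size)

coeffs = np.polyfit(x, y, deg=2)  # highest-degree coefficient first
print(coeffs)                     # roughly [3, 2, 1]
model = np.poly1d(coeffs)
print(model(5))                   # close to 86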
Challenges and Solutions
No journey is complete without facing a few road bumps, right? Let's talk about the challenges programmers face when dealing with polynomial functions and how to swerve past them like a pro driver.
Common challenges faced when using Polynomial Functions
One word: complexity. Yup, things can get hairy when dealing with high-degree polynomial functions. The algorithms become more demanding, and performance starts sulking. Add a dash of numerical
instability and whoop, there goes your smooth sailing!
Dealing with complex algorithms
Taming complex polynomial algorithms is like solving a massive puzzle. But fear not, crafty programmers have come up with smart techniques to slice through the complexity. Divide and conquer, anyone?
Overcoming performance issues
A sluggish program is no fun. Performance optimization techniques like memoization, parallel computing, and clever algorithm design can sweeten the deal. Who doesn't love zippy code, right?
Tips for Efficiency
Alright, let's shift our gears to turbo mode and uncover some top-tier hacks to craft efficient polynomial function code that'll make your programming pals go "WOW!"
Writing efficient code for Polynomial Functions
Efficiency is the name of the game! Strategically selecting the right algorithm and data structures can work wonders. Remember, nobody likes a slowpoke, especially in the programming world!
Optimizing for speed and accuracy
Speed demons and accuracy aficionados, rejoice! Techniques like Horner's method and Newton's method can give these functions an adrenaline rush, making evaluation blazing fast and far less error-prone.
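Here is what Horner's method looks like in practice (a minimal sketch, my addition): it evaluates a degree-n polynomial with n multiplications and n additions, never forming the powers of x explicitly:

def horner(coeffs, x):
    '''Evaluate a polynomial from ascending-order coefficients,
    e.g. [1, 2, 3] represents 1 + 2x + 3x^2.'''
    result = 0
    for c in reversed(coeffs):
        result = result * x + c
    return result

print(horner([1, 2, 3], 5))  # 86, same as 1 + 2*5 + 3*5**2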
Utilizing libraries and tools for Polynomial Functions
Why break a sweat reinventing the wheel? There are fantastic libraries and tools out there, ready to be your sidekick in conquering polynomial challenges. Love a good shortcut, don't you?
Future Trends and Innovations
Alright, time to put on our future-gazing glasses! The world of polynomial functions is ever-evolving, and we're about to peek into the crystal ball to see what's in store for us.
The future of Polynomial Functions in programming
With advancements in artificial intelligence, data science, and computational power, polynomial functions are set to play a pivotal role. Expect them to be the secret sauce in cutting-edge
applications like robotics, autonomous vehicles, and advanced simulations.
Potential advancements and applications
Who knows, we might witness the birth of groundbreaking algorithms that revolutionize how polynomial functions are utilized. The future is bright, my friends!
Integration with emerging technologies
As new tech leaps onto the scene, polynomial functions will cozy up with them like old pals. Quantum computing, anyone? These functions are ready to dance to the beat of anything the future holds.
In closing, our adventure into the world of polynomial functions has been nothing short of fascinating! From their humble algebraic origins to their indispensable role in programming, polynomial
functions are the unsung heroes powering our digital world. Here's to more efficient code, mind-blowing innovations, and the endless possibilities that polynomial functions bring to the table. Thanks
for joining me on this whirlwind tour, and remember, keep coding and keep rocking!
Program Code - Mastering Polynomial Factoring: A Comprehensive Guide
import numpy as np
import matplotlib.pyplot as plt

class PolynomialFunction:
    def __init__(self, coefficients):
        '''Initializes the Polynomial Function with a list of coefficients.'''
        self.coefficients = coefficients  # Coefficients are in ascending order of degrees

    def __call__(self, x):
        '''Evaluates the Polynomial at x.'''
        return sum(coef * x**degree for degree, coef in enumerate(self.coefficients))

    def derivative(self):
        '''Calculates the derivative of the Polynomial as another Polynomial.'''
        derived_coeffs = [coef * degree for degree, coef in enumerate(self.coefficients) if degree > 0]
        return PolynomialFunction(derived_coeffs)

    def integral(self, constant=0):
        '''Calculates the integral of the Polynomial as another Polynomial
        with an integration constant.'''
        integrated_coeffs = [constant] + [coef / (degree + 1) for degree, coef in enumerate(self.coefficients)]
        return PolynomialFunction(integrated_coeffs)

    def plot(self, x_range, title='Polynomial Function Plot'):
        '''Plots the polynomial function over the given range of x values.'''
        x_values = np.linspace(x_range[0], x_range[1], num=100)
        y_values = self(x_values)
        plt.plot(x_values, y_values, label='Polynomial')
        plt.title(title)  # display the supplied title
        plt.legend()
        plt.show()

# Example of usage:
# Create a polynomial f(x) = 1 + 2x + 3x^2
p = PolynomialFunction([1, 2, 3])
# Evaluate the polynomial at x = 5
value_at_5 = p(5)
# Calculate the derivative of the polynomial
p_prime = p.derivative()
# Calculate the definite integral from x = 0 to x = 5, with an integration constant of zero
p_integral = p.integral()
value_of_integral_at_5 = p_integral(5) - p_integral(0)
# Plot the polynomial and its derivative
p.plot([-10, 10], title='Original Polynomial')
p_prime.plot([-10, 10], title='Derivative of Polynomial')
Code Output:
• value_at_5 evaluates to 86, the value of the polynomial 1 + 2x + 3x^2 at x = 5.
• value_of_integral_at_5 evaluates to 155.0, the area under the curve from x = 0 to x = 5.
Code Explanation:
This code defines a PolynomialFunction class to encapsulate the behavior of polynomial functions in a programming context. Let's break it down:
1. We import the numpy library to handle arrays and mathematical operations, and matplotlib.pyplot for plotting.
2. The PolynomialFunction class is initialized with a list of coefficients, defining the polynomial.
3. The __call__ method allows the class instances to be called as functions. It evaluates the polynomial at a given x-value by summing the product of each coefficient with the x-value raised to the
corresponding degree (power of x).
4. The derivative method computes the derivative of the polynomial, a new polynomial, by multiplying each coefficient by its corresponding degree and decreasing the degree by one.
5. The integral method calculates the indefinite integral, which is also another polynomial. It divides each coefficient by its new degree (original degree + 1). It also allows for an integration constant, which defaults to zero.
6. The plot method uses Matplotlib to plot the polynomial function. It generates a range of x-values, evaluates the polynomial for each x-value, and then draws a labeled plot with the given title.
In the usage example:
• We create a polynomial p with coefficients [1, 2, 3], representing ( 1 + 2x + 3x^2 ).
• We evaluate the polynomial at x = 5.
• We compute the derivative and integral of p.
• We plot the polynomial and its derivative over the range from -10 to 10.
This class and methods make it easy to work with polynomial functions programmatically, providing a good example of how object-oriented design can be applied in a mathematical context.
Frequently Asked Questions about Polynomial Functions in Programming Applications
1. What are Polynomial Functions?
→ Polynomial functions are mathematical expressions consisting of variables raised to non-negative integer powers, multiplied by coefficients. In programming, they are commonly used to model
various phenomena.
2. How are Polynomial Functions useful in Programming Applications?
→ Polynomial functions are versatile and can approximate a wide range of functions. They are essential for tasks like curve fitting, data analysis, signal processing, and solving optimization
problems in programming.
3. Can Polynomial Functions have complex coefficients?
→ Yes, polynomial functions can have complex coefficients. This feature allows them to model complex functions encountered in fields like signal processing, control systems, and electrical engineering.
4. What is the best way to evaluate a Polynomial Function efficiently in code?
→ To efficiently evaluate a polynomial at a given point, methods like Horner's Method or libraries like NumPy in Python that provide optimized polynomial evaluation functions can be utilized in code.
5. Are there specialized libraries in programming languages for working with Polynomial Functions?
→ Yes, many programming languages offer specialized libraries like NumPy in Python or polyfit in MATLAB that provide extensive functionalities for working with polynomial functions, including
differentiation, integration, and polynomial approximation.
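As a small illustration of such a library shortcut, here is a hedged sketch using NumPy's numpy.polynomial module (coefficients in ascending order of degree, matching the class above); treat it as an example of the idea rather than this article's own code:

from numpy.polynomial import Polynomial

p = Polynomial([1, 2, 3])  # represents 1 + 2x + 3x^2
p(5)                       # evaluates to 86.0
p.deriv()                  # derivative polynomial: 2 + 6x
p.integ()                  # antiderivative: 0 + x + x^2 + x^3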
Heyyy, lovely humans and code enthusiasts! I'm CodeLikeAGirl, your go-to girl for everything tech, coding, and well, girl power! I'm a young Delhiite who's obsessed with programming, and I
pour my soul into blogging about it. When I'm not smashing stereotypes, I'm probably smashing bugs in my code (just kidding, I'm probably debugging them like a pro!). I'm a staunch believer that
anyone can code and that the tech world is big enough for all of us, regardless of gender, background, or experience level. I frequently collaborate with my friend's blog, CodeWithC.com, to share
my geeky insights, tutorials, and controversial opinions. Trust me, when you want an unfiltered, down-to-earth take on the latest programming trends, languages, and frameworks, I'm your girl! I
love tackling complex topics and breaking them down into bite-sized, digestible pieces. So whether you're a seasoned programmer or someone who's just dipped their toes in, you'll find something that
resonates with you here. So, stick around, and let's decode the world of programming together!
Latest Posts | {"url":"https://www.codewithc.com/mastering-polynomial-factoring-a-comprehensive-guide-2/","timestamp":"2024-11-07T16:17:49Z","content_type":"text/html","content_length":"147001","record_id":"<urn:uuid:8613b38b-201a-4d3c-9fb8-bb27f08cc7ec>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00810.warc.gz"} |
0 1 2 3 Multiplication Worksheets
Math, particularly multiplication, forms the foundation of many academic disciplines and real-world applications. Yet, for many students, mastering multiplication can pose a challenge. To
address this hurdle, teachers and parents have embraced an effective tool: 0 1 2 3 Multiplication Worksheets.
Introduction to 0 1 2 3 Multiplication Worksheets
Free 3rd grade multiplication worksheets including the meaning of multiplication multiplication facts and tables multiplying by whole tens and hundreds missing factor problems and multiplication in
columns No login required
Multiplication Facts up to the 9 Times Table Multiplication Facts up to the 10 Times Table Multiplication Facts up to the 12 Times Table Multiplication Facts beyond the 12 Times Table Welcome to the
multiplication facts worksheets page at Math Drills
Significance of Multiplication Practice
Understanding multiplication is pivotal, laying a solid foundation for advanced mathematical concepts. 0 1 2 3 Multiplication Worksheets offer structured and targeted practice, fostering a deeper comprehension of this fundamental arithmetic operation.
Evolution of 0 1 2 3 Multiplication Worksheets
Our multiplication worksheets are free to download easy to use and very flexible These multiplication worksheets are a great resource for children in Kindergarten 1st Grade 2nd Grade 3rd Grade 4th
Grade and 5th Grade Click here for a Detailed Description of all the Multiplication Worksheets Quick Link for All Multiplication Worksheets
Basic Multiplication 0 through 10 This page has lots of games worksheets flashcards and activities for teaching all basic multiplication facts between 0 and 10 Basic Multiplication 0 through 12 On
this page you ll find all of the resources you need for teaching basic facts through 12
From traditional pen-and-paper exercises to digital interactive formats, 0 1 2 3 Multiplication Worksheets have evolved, catering to diverse learning styles and preferences.
Types of 0 1 2 3 Multiplication Worksheets
Standard Multiplication Sheets
Simple exercises focusing on multiplication tables, helping students build a strong arithmetic base.
Word Problem Worksheets
Real-life scenarios integrated into problems, strengthening critical thinking and application skills.
Timed Multiplication Drills
Tests designed to improve speed and accuracy, aiding quick mental math.
Advantages of Using 0 1 2 3 Multiplication Worksheets
Math explained in easy language plus puzzles games quizzes videos and worksheets For K 12 kids teachers and parents Multiplication Worksheets Worksheets Multiplication Mixed Tables Worksheets
Worksheet Number Range Online Primer 1 to 4 Primer Plus 2 to 6 Up To Ten 2 to 10 Getting Tougher 2 to 12 Intermediate 3
These multiplication facts worksheets provide various exercise to help students gain fluency in the multiplication facts up to 12 x 12 Jump to your topic Multiplication facts review times tables
Multiplication facts practice vertical Multiplication facts practice horizontal Focus numbers Circle drills Missing factor questions
Enhanced Mathematical Skills
Regular practice builds multiplication proficiency, boosting overall math abilities.
Improved Problem-Solving Abilities
Word problems in worksheets develop logical thinking and strategy application.
Self-Paced Learning Advantages
Worksheets accommodate individual learning paces, fostering a comfortable and flexible learning environment.
How to Create Engaging 0 1 2 3 Multiplication Worksheets
Incorporating Visuals and Colors
Vivid visuals and colors capture attention, making worksheets visually appealing and engaging.
Including Real-Life Scenarios
Relating multiplication to everyday situations adds relevance and practicality to exercises.
Customizing Worksheets to Different Skill Levels
Tailoring worksheets to varying proficiency levels ensures inclusive learning.
Interactive and Online Multiplication Resources
Digital Multiplication Tools and Games
Technology-based resources offer interactive learning experiences, making multiplication engaging and enjoyable.
Interactive Websites and Applications
Online platforms provide varied and accessible multiplication practice, supplementing traditional worksheets.
Customizing Worksheets for Different Learning Styles
Visual Learners
Visual aids and diagrams aid comprehension for students inclined toward visual learning.
Auditory Learners
Spoken multiplication problems or mnemonics cater to students who grasp concepts through auditory means.
Kinesthetic Learners
Hands-on activities and manipulatives support kinesthetic learners in understanding multiplication.
Tips for Effective Implementation in Learning
Consistency in Practice
Regular practice reinforces multiplication skills, promoting retention and fluency.
Balancing Repetition and Variety
A mix of repetitive exercises and varied problem formats maintains interest and comprehension.
Providing Constructive Feedback
Feedback helps identify areas for improvement, encouraging continued growth.
Challenges in Multiplication Practice and Solutions
Motivation and Engagement Challenges
Dull drills can lead to disinterest; innovative approaches can reignite motivation.
Overcoming Fear of Mathematics
Negative perceptions around math can hinder progress; creating a positive learning environment is essential.
Impact of 0 1 2 3 Multiplication Worksheets on Academic Performance
Studies and Research Findings
Research shows a positive relationship between consistent worksheet use and improved math performance.
0 1 2 3 Multiplication Worksheets are versatile tools, promoting mathematical proficiency in students while accommodating diverse learning styles. From standard drills to interactive online
resources, these worksheets not only improve multiplication skills but also promote critical thinking and problem-solving abilities.
Multiplication Worksheets Basic Facts 0 10 Super Teacher Worksheets
Multiplication Worksheets Basic Facts 0 10 We have multiplication sheets for timed tests or extra practice as well as flashcards and games Most resources on this page cover basic multiplication facts
0 10 Worksheets and Games Introduction to Multiplication with Groups Count the number of groups and the number of objects in each group
Frequently Asked Questions (FAQs)
Are 0 1 2 3 Multiplication Worksheets suitable for all age groups?
Yes, worksheets can be tailored to different ages and skill levels, making them adaptable for many learners.
How frequently should students practice with 0 1 2 3 Multiplication Worksheets?
Consistent practice is key. Regular sessions, ideally a few times a week, can produce considerable improvement.
Can worksheets alone improve math skills?
Worksheets are a valuable tool but should be supplemented with diverse learning methods for comprehensive skill development.
Are there online platforms offering free 0 1 2 3 Multiplication Worksheets?
Yes, numerous educational websites offer free access to a variety of 0 1 2 3 Multiplication Worksheets.
How can parents support their children's multiplication practice at home?
Encouraging consistent practice, providing guidance, and creating a positive learning environment are valuable steps. | {"url":"https://crown-darts.com/en/0-1-2-3-multiplication-worksheets.html","timestamp":"2024-11-04T09:10:28Z","content_type":"text/html","content_length":"28090","record_id":"<urn:uuid:6da0e990-1566-4e1a-bfb9-c8f8f9f4f02f>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00423.warc.gz"}
reciprocal lattice of honeycomb lattice
K 1) Do I have to imagine the two atoms "combined" into one? {\displaystyle \left(\mathbf {b_{1}} ,\mathbf {b} _{2},\mathbf {b} _{3}\right)}. The magnitude of the
reciprocal lattice vector {\textstyle a} they can be determined with the following formula: Here, It may be stated simply in terms of Pontryagin duality. = Some lattices may be skew, which means that
their primary lines may not necessarily be at right angles. \Psi_k (r) = \Psi_0 \cdot e^{i\vec{k}\cdot\vec{r}} {\displaystyle f(\mathbf {r} )} This primitive unit cell reflects the full symmetry of
the lattice and is equivalent to the cell obtained by taking all points that are closer to the centre of . {\displaystyle \lambda } These reciprocal lattice vectors of the FCC represent the basis
vectors of a BCC real lattice. \Leftrightarrow \quad pm + qn + ro = l For an infinite two-dimensional lattice, defined by its primitive vectors , $\vec{k}=\frac{m_{1}}{N} \vec{b_{1}}+\frac{m_{2}}{N}
\vec{b_{2}}$ where $m_{1},m_{2}$ are integers running from $0$ to $N-1$, $N$ being the number of lattice spacings in the direct lattice along the lattice vector directions and $\vec{b_{1}},\vec{b_
{2}}$ are reciprocal lattice vectors. V In quantum physics, reciprocal space is closely related to momentum space according to the proportionality
Because of the translational symmetry of the crystal lattice, the number of the types of the Bravais lattices can be reduced to 14, which can be further grouped into 7 crystal system:
triclinic, monoclinic, orthorhombic, tetragonal, cubic, hexagonal, and the trigonal (rhombohedral). Since we are free to choose any basis {$\vec{b}_i$} in order to represent the vectors $\vec{k}$,
why not just the simplest one? follows the periodicity of this lattice, e.g. b
e^{i \vec{k}\cdot\vec{R} } & = 1 \quad \\ Fig.
Primitive translation vectors for this simple hexagonal Bravais lattice vectors are \Leftrightarrow \quad c = \frac{2\pi}{\vec{a}_1 \cdot \left( \vec{a}_2 \times \
vec{a}_3 \right)} Around the band degeneracy points K and K , the dispersion . Reciprocal lattice for a 1-D crystal lattice; (b). The vertices of a two-dimensional honeycomb do not form a Bravais
lattice. Using this process, one can infer the atomic arrangement of a crystal. e k It is a matter of taste which definition of the lattice is used, as long as the two are not mixed. [1] The symmetry
category of the lattice is wallpaper group p6m. 1 A non-Bravais lattice is often referred to as a lattice with a basis. Basis Representation of the Reciprocal Lattice Vectors, 4. {\
displaystyle f(\mathbf {r} )} This lattice is called the reciprocal lattice 3. 2 the function describing the electronic density in an atomic crystal, it is useful to write r we get the same value,
hence, Expressing the above instead in terms of their Fourier series we have, Because equality of two Fourier series implies equality of their coefficients,
The reciprocal lattice of a reciprocal lattice is equivalent to the original direct lattice, because the defining equations are symmetrical with respect to the vectors in real and reciprocal space.
{\displaystyle f(\mathbf {r} )} [1], For an infinite three-dimensional lattice
a One way of choosing a unit cell is shown in Figure \(\PageIndex{1}\). k , and {\textstyle {\frac {4\pi }{a}}} {\displaystyle \hbar } represents any integer, comprise a set of
parallel planes, equally spaced by the wavelength when there are j=1,m atoms inside the unit cell whose fractional lattice indices are respectively {uj, vj, wj}. 2 3] that the effective .
This is summarised by the vector equation: $\mathbf{d}^* = h\,\mathbf{a}^* + k\,\mathbf{b}^* + l\,\mathbf{c}^*$. and the subscript of integers ( 1 In this sense, the discretized $\mathbf{k}$-points do not 'generate' the honeycomb BZ, as the way you obtain them does not refer to or depend on the
symmetry of the crystal lattice that you consider. No, they absolutely are just fine. with With this form, the reciprocal lattice as the set of all wavevectors j Spiral Spin Liquid on a Honeycomb
Lattice. 3 Each node of the honeycomb net is located at the center of the N-N bond. Q A translation vector is a vector that points from one Bravais lattice point to some other Bravais lattice point.
In physical applications, such as crystallography, both real and reciprocal space will often each be two or three dimensional. Fig. Any valid form of {\displaystyle \mathbf {G} _{m}} Lattice with a
Basis Consider the Honeycomb lattice: It is not a Bravais lattice, but it can be considered a Bravais lattice with a two-atom basis I can take the "blue" atoms to be the points of the underlying
Bravais lattice that has a two-atom basis - "blue" and "red" - with basis vectors: h h d1 0 d2 h x 1 or Q 2
N. W. Ashcroft, N. D. Mermin, Solid State Physics (Holt-Saunders, 1976). {\displaystyle \phi _{0}} 3 \
vec{a}_1 &= \frac{a}{2} \cdot \left( \hat{y} + \hat {z} \right) \\ That implies, that $p$, $q$ and $r$ must also be integers. Reciprocal Lattice of a 2D Lattice c k m a k n ac f k e y nm x j i k Rj 2
2 2. a1 a x a2 c y x a b 2 1 x y kx ky y c b 2 2 Direct lattice Reciprocal lattice Note also that the reciprocal lattice in k-space is defined by the set of all points for which the k-vector
satisfies, 1. ei k Rj for all of the direct latticeRj Combining the rotation symmetry of the point groups with the translational symmetry, 72 space groups are generated. G
b R But we still did not specify the primitive-translation-vectors {$\vec{b}_i$} of the reciprocal lattice more than in eq. \Leftrightarrow \;\; [12][13] Accordingly, the
reciprocal-lattice of a bcc lattice is a fcc lattice. As will become apparent later it is useful to introduce the concept of the reciprocal lattice. In other words, it is the primitive
Wigner-Seitz-cell of the reciprocal lattice of the crystal under consideration. There are two concepts you might have seen from earlier {\displaystyle \left(\mathbf {a} _{1},\mathbf {a} _{2}\right)}
Yes. However, in lecture it was briefly mentioned that we could make this into a Bravais lattice by choosing a suitable basis: The problem is, I don't really see how that changes anything. 0
The Bravais lattice vectors go between, say, the middle of the lines connecting the basis atoms to equivalent points of the other atom pairs on other Bravais lattice sites.
with ${V = \vec{a}_1 \cdot \left( \vec{a}_2 \times \vec{a}_3 \right)}$ as introduced above.[7][8]. MathJax reference. 1 is another simple hexagonal lattice with
lattice constants 2 Because of the requirements of translational symmetry for the lattice as a whole, there are totally 32 types of the point group symmetry. . {\displaystyle (h,k,l)} The twist angle
has weak influence on charge separation and strong influence on recombination in the MoS 2 /WS 2 bilayer: ab initio quantum dynamics \begin{pmatrix} V The final trick is to add the Ewald Sphere
diagram to the Reciprocal Lattice diagram. Is there a mathematical way to find the lattice points in a crystal? where Your grid in the third picture is fine. + = In other In W- and Mo-based
compounds, the transition metal and chalcogenide atoms occupy the two sublattice sites of a honeycomb lattice within the 2D plane [Fig. {\displaystyle \mathbf {R} =0} . Fourier transform of
real-space lattices, important in solid-state physics. ( ) , and the reciprocal of the reciprocal lattice is the original lattice, which reveals the Pontryagin duality of their respective vector
spaces. denotes the inner multiplication. We probe the lattice geometry with a nearly pure Bose-Einstein condensate of 87 Rb, which is initially loaded into the lowest band at quasimomentum q = , the
center of the BZ ().To move the atoms in reciprocal space, we linearly sweep the frequency of the beams to uniformly accelerate the lattice, thereby generating a constant inertial force in the
lattice frame. a h k \label{eq:matrixEquation} The wavefronts with phases z the phase) information. Figure \(\PageIndex{5}\) (a). w {\displaystyle \mathbf {G} } My problem is, how would I express the
new red basis vectors by using the old unit vectors $z_1,z_2$. n 3 by any lattice vector v = {\displaystyle t} \eqref{eq:matrixEquation}
by $2 \pi$, then the matrix in eq. satisfy this equality for all 2 whose periodicity is compatible with that of an initial direct lattice in real space. = You could also take more
than two points as primitive cell, but it will not be a good choice, it will be not primitive. ( g The structure is honeycomb. As a starting point we consider a simple plane wave We introduce the
honeycomb lattice, cf. G g = on the reciprocal lattice, the total phase shift If I do that, where is the new "2-in-1" atom located? 1(a) shows the lattice structure of BHL.A 1 and B 1 denotes the
sites on top-layer, while A 2, B 2 signs the bottom-layer sites. 2
The above definition is called the "physics" definition, as the factor of The honeycomb point set is a special case of the hexagonal lattice with a two-atom basis. (or in this
case. 1 ( As {\displaystyle \mathbf {b} _{1}} 1 n The spatial periodicity of this wave is defined by its wavelength , defined by its primitive vectors The triangular lattice points closest to the
origin are (e 1 e 2), (e 2 e 3), and (e 3 e 1). Are there an infinite amount of basis I can choose? n . , The domain of the spatial function itself is often referred to as real space. 1D,
one-dimensional; BZ, Brillouin zone; DP, Dirac . $\DeclareMathOperator{\Tr}{Tr}$, Symmetry, Crystal Systems and Bravais Lattices, Electron Configuration of Many-Electron Atoms, Unit Cell, Primitive
Cell and Wigner-Seitz Cell, 2. R The volume of the nonprimitive unit cell is an integral multiple of the primitive unit cell. Fundamental Types of Symmetry Properties, 4.
Reciprocal lattices for the cubic crystal system are as follows. + If I draw the grid like I did in the third picture, is it not going to be impossible to find the new basis vectors? e 4 3 1
graphene-like) structures and which result from topological non-trivialities due to time-modulation of the material parameters. Figure \(\PageIndex{4}\) Determination of the crystal plane index. / {\
displaystyle k\lambda =2\pi } a in the real space lattice. {\displaystyle \mathbf {G} _{m}} . 2 What do you mean by "impossible to find", you have drawn it well (you mean $a_1$ and
$a_2$, right? ( The symmetry of the basis is called point-group symmetry. The constant ^ {\textstyle {\frac {2\pi }{c}}} Spiral spin liquids are correlated paramagnetic states with degenerate
propagation vectors forming a continuous ring or surface in reciprocal space. 1 and angular frequency Another way gives us an alternative BZ which is a parallelogram. c , and \vec
{k} = p \, \vec{b}_1 + q \, \vec{b}_2 + r \, \vec{b}_3 to any position, if , and with its adjacent wavefront (whose phase differs by % and so on for the other primitive vectors.
{\displaystyle \mathbf {G} _{m}} Figure \(\PageIndex{5}\) illustrates the 1-D, 2-D and 3-D real crystal lattices and its
corresponding reciprocal lattices. {\displaystyle \omega (v,w)=g(Rv,w)} {\displaystyle 2\pi } , The cubic lattice is therefore said to be self-dual, having the same symmetry in reciprocal space as in
real space. \eqref{eq:matrixEquation} becomes the unit matrix and we can rewrite eq. a {\textstyle c} {\displaystyle \omega } . On the other hand, this: is not a bravais lattice because the network
looks different for different points in the network. G and
. \Psi_0 \cdot e^{ i \vec{k} \cdot ( \vec{r} + \vec{R} ) }. b Because a sinusoidal plane wave with
unit amplitude can be written as an oscillatory term But I just know that how can we calculate reciprocal lattice in case of not a bravais lattice. m
Figure 1. . ) Therefore, L^ is the natural candidate for dual
lattice, in a different vector space (of the same dimension). in the direction of 2 Knowing all this, the calculation of
the 2D reciprocal vectors almost . {\displaystyle m_{1}} {\displaystyle \mathbf {G} _{m}} {\displaystyle m_{j}} {\displaystyle \mathbf {K} _{m}} ) , its reciprocal lattice can be determined by
generating its two reciprocal primitive vectors, through the following formulae, where 4.4: (C) Projected 1D arcs related to two DPs at different boundaries. a 4 , x
with $\vec{k}$ being any arbitrary wave vector and a Bravais lattice which is the set of vectors The hexagon is the boundary of the (first) Brillouin zone. a , + Rotation axis: If the cell remains
the same after it rotates around an axis with some angle, it has the rotation symmetry, and the axis is called n-fold, when the angle of rotation is \(2\pi /n\). ( \vec{b}_1 = 2 \pi \cdot \frac{\vec{a}
_2 \times \vec{a}_3}{V} Is this BZ equivalent to the former one and if so how to prove it? {\displaystyle \mathbf {p} =\hbar \mathbf {k} }
with the integer subscript 2 (b) FSs in the first BZ for the 5% (red lines) and 15% (black lines) dopings at . V ,
b The Hamiltonian can be expressed as H = J ij S A S B, where the summation runs over nearest neighbors, S A and S B are the spins for two different sublattices A and B, and J ij is the exchange
constant. $\vec{k}=\frac{m_{1}}{N} \vec{b_{1}}+\frac{m_{2}}{N} \vec{b_{2}}$, $$ A_k = \frac{(2\pi)^2}{L_xL_y} = \frac{(2\pi)^2}{A},$$, Honeycomb lattice Brillouin zone structure and direct lattice
periodic boundary conditions, Reduced $\mathbf{k}$-vector in the first Brillouin zone, Could someone help me understand the
connection between these two wikipedia entries? is the wavevector in the three dimensional reciprocal space. . {\displaystyle h} And the separation of these planes is \(2\pi\) times the inverse of
the length \(G_{hkl}\) in the reciprocal space. Thus we are looking for all waves $\Psi_k (r)$ that remain unchanged when being shifted by any reciprocal lattice vector $\vec{R}$. m \\ more, $ \
renewcommand{\D}[2][]{\,\text{d}^{#1} {#2}} $ G + G_{hkl}=\rm h\rm b_{1}+\rm k\rm b_{2}+\rm l\rm b_{3}, 3. 1 m ( {\displaystyle n=(n_{1},n_{2},n_{3})} m 1 \end{pmatrix}
Equivalently, a wavevector is a vertex of the reciprocal lattice if it corresponds to a plane wave in real space whose phase at any given time is the same (actually
differs by It is the set of all points that are closer to the origin of reciprocal space (called the $\Gamma$-point) than to any other reciprocal lattice point. {\displaystyle g^{-1}} A point (
node ), H, of the reciprocal lattice is defined by its position vector: OH = r*hkl = h a* + k b* + l c* . Z The primitive translation vectors of the hexagonal lattice form an angle of 120 and are of
equal lengths, | | = | | =. , it can be regarded as a function of both {\displaystyle m=(m_{1},m_{2},m_{3})} , + It is the locus of points in space that are closer to that lattice point than to any
of the other lattice points. , {\displaystyle 2\pi } I just had my second solid state physics lecture and we were talking about bravais lattices. represents a 90 degree rotation matrix, i.e. (cubic,
tetragonal, orthorhombic) have primitive translation vectors for the reciprocal lattice, (a) Honeycomb lattice with lattice constant a and lattice vectors a1 = a( 3, 0) and a2 = a( 3 2 , 3 2 ). R a 1
The reciprocal lattice is a set of wavevectors G such that G r = 2 integer, where r is the center of any hexagon of the honeycomb lattice. n i k In physics, the reciprocal lattice represents the
Fourier transform of another lattice (group) (usually a Bravais lattice). @JonCuster Thanks for the quick reply. where H1 is the first node on the row OH and h1, k1, l1 are relatively prime. ) v , {\
displaystyle \mathbf {R} _{n}} V The basic vectors of the lattice are 2b1 and 2b2. is the phase of the wavefront (a plane of a constant phase) through the origin a ) {\
displaystyle m=(m_{1},m_{2},m_{3})} \vec{b}_3 \cdot \vec{a}_1 & \vec{b}_3 \cdot \vec{a}_2 & \vec{b}_3 \cdot \vec{a}_3 ( : Introduction of the Reciprocal Lattice, 2.3. How to use Slater Type Orbitals
as a basis functions in matrix method correctly? Crystal directions, Crystal Planes and Miller Indices. , {\displaystyle \mathbf {b} _
{2}} , and \end{pmatrix} replaced with i G {\displaystyle x} {\displaystyle f(\mathbf {r} )} Thus, the reciprocal lattice of a fcc lattice with edge length $a$ is a bcc lattice with edge length $\
frac{4\pi}{a}$. This figure shows the original honeycomb lattice, as viewed as a Bravais lattice of hexagonal cells each containing two atoms, and also the reciprocal lattice of the Bravais lattice
(not to scale, but aligned properly). can be determined by generating its three reciprocal primitive vectors k The same can be done for the vectors $\vec{b}_2$
and $\vec{b}_3$ and one obtains . Thus, it is evident that this property will be utilised a lot when describing the underlying physics. {\displaystyle \left(\mathbf {b} _{1},\mathbf {b} _{2},\mathbf
{b} _{3}\right)}
Crystal lattice is the geometrical pattern of the crystal, where all the atom sites are represented by the geometrical points. k The lattice is hexagonal, dot. 1 o R a dynamical) effects may be
important to consider as well. ) at all the lattice point , \label{eq:b3} Each plane wave in this Fourier series has the same phase or phases that are differed by multiples of b ( a
e ) f {\textstyle {\frac {4\pi }{a}}} R are integers defining the vertex and the is the position vector of a point in real space
and now and in two dimensions, , , n Note that the easier way to compute your reciprocal lattice vectors is $\vec{a}_i\cdot\vec{b}_j=2\pi\delta_{ij}$ Share. {\displaystyle l} j m to build a potential
of a honeycomb lattice with primitive vectors a 1 = / 2 (1, 3) and a 2 = / 2 (1, 3) and reciprocal vectors b 1 = 2 . b R
If I do that, where is the new "2-in-1" atom located? -dimensional real vector space i Why do you want to express the basis vectors that are appropriate for the problem through others that are not?
{\displaystyle \mathbf {R} _{n}} 1 ) v Now take one of the vertices of the primitive unit cell as the origin. Moving along those vectors gives the same 'scenery' wherever you are on the lattice.
a @JonCuster So you are saying a better choice of grid would be to put the "origin" of the grid on top of one of the atoms? As shown in Figure
\(\PageIndex{3}\), connect two base centered tetragonal lattices, and choose the shaded area as the new unit cell. {\displaystyle \mathbf {a} _{3}} The Bravais lattice can be specified by giving
three primitive lattice vectors $\vec{a}_1$, $\vec{a}_2$, and $\vec{a}_3$. r on the reciprocal lattice does always take this form, this derivation is motivational, rather than rigorous, because it
has omitted the proof that no other possibilities exist.). is conventionally written as 1
i So it's in essence a rhombic lattice. ^ \end{pmatrix} Thus after a first look at reciprocal lattice (kinematic scattering) effects,
beam broadening and multiple scattering (i.e. , Honeycomb lattice as a hexagonal lattice with a two-atom basis. | {"url":"http://assincampo.ismea.it/wp-content/rBFcXkT/reciprocal-lattice-of-honeycomb-lattice","timestamp":"2024-11-08T02:13:47Z","content_type":"text/html","content_length":"46957","record_id":"<urn:uuid:733d3725-1a10-4d76-b5d5-b95c1b0adfa3>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00212.warc.gz"} |
Simple vs Compound Interest
Abdulla Javeri
30 years: Financial markets trader
Compounding is one of the most powerful concepts in finance. Abdulla compares simple versus compound interest and the importance of understanding the difference.
Simple vs Compound Interest
Key learning objectives:
• Apply the simple vs compound interest rate scenario to an investment
• Learn how simple and compound interest differ
With simple interest, a depositor receiving annual interest from a deposit removes the interest and just reinvests the principal and earns the annual interest rate. With compound interest, the
depositor reinvests the principal and interest every year i.e. the interest earns interest so the amount of interest received goes up each year.
How do simple and compound interest differ?
With simple interest, a depositor receiving annual interest removes the interest and just reinvests the principal, hence earns the annual interest rate. With compound interest, the depositor
reinvests the principal and interest every year - the interest earns interest so the amount received goes up each year. Simple interest of 5% earned on $1,000 over 50 years equals $2,500, giving a
total value of $3,500. With compounding, the total value at the end of the period is $11,467. The higher the interest rate the bigger the difference (albeit the same in percentage terms).
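As a quick sanity check of those figures, here is a minimal Python sketch (our illustration, not material from the video):

principal, rate, years = 1000, 0.05, 50

# Simple interest: the interest is withdrawn each year, so only the principal earns.
simple_total = principal * (1 + rate * years)      # 3500.0

# Compound interest: interest is reinvested, so the interest itself earns interest.
compound_total = principal * (1 + rate) ** years   # roughly 11,467

print(round(simple_total), round(compound_total))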
Apply the simple vs compound interest rate scenario to an investment
Investing $1,000 in equities, removing the annual dividend and earning the capital gain has the same effect from a returns perspective as earning simple interest. Reinvesting the annual dividend by
buying more equity has the same effect as interest compounding. Assuming an annual capital gain of 10% and a dividend yield of 3.5% over 50 years but not reinvesting the dividend results in a
portfolio value of $117,391 for a capital gain of $116,391 (final value minus the initial $1,000 investment) plus dividends of $40,737 giving a total gain in capital and dividends of $157,128. If
dividends were reinvested, the capital value of the portfolio at the end of 50 years is $521,101. Reinvesting the dividend makes up a substantial part of the return.
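The capital-gain and cash-dividend figures can be reproduced the same way; a hedged sketch follows (the exact reinvested-portfolio figure depends on when within each year the dividend is assumed to be reinvested, so that line is indicative only):

# Capital growth only: $1,000 growing at 10% per year for 50 years
capital_value = 1000 * 1.10 ** 50   # about 117,391 -> capital gain of about 116,391

# Dividends taken as cash each year (3.5% of that year's portfolio value)
dividends_taken = sum(0.035 * 1000 * 1.10 ** t for t in range(50))   # about 40,737

# Dividends reinvested into more equity: the dividend stream compounds as well
reinvested = 1000
for _ in range(50):
    reinvested *= 1.10    # annual capital gain
    reinvested *= 1.035   # dividend used to buy additional equity
# The final figure here depends on the reinvestment timing assumed,
# so it will not match the quoted value exactly.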
Abdulla Javeri
Abdullaโs career in the financial markets started in 1990 when he entered the trading floor of the London International Financial Futures Exchange, LIFFE, and qualified as a pit trader in equity and
equity index options. In 1996, Abdulla became a trainer for regulatory qualifications and then for non-exam courses, primarily covering all major financial products.
There are no available Videos from "Abdulla Javeri" | {"url":"https://ondemand.euromoney.com/videos/interest-rates-simple-versus-compound","timestamp":"2024-11-13T23:07:58Z","content_type":"text/html","content_length":"145518","record_id":"<urn:uuid:56592e8e-236b-4f43-a03f-5ddd1e6bcdf2>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00462.warc.gz"} |
Printable Blank Unit Circle
Printable Blank Unit Circle - Download a set of printable unit circle charts to hand out to your students. Place the degree angle measure of each angle in the dashed blanks inside the circle, and the radian measure of each angle in the solid blanks inside the circle. The unit circle is the golden key to actually understanding trigonometry. For each quadrant, state which trigonometric functions are positive. Write the ordered pairs for each point around the circle. Useful as a geometry handout/study guide. The free, printable blank unit circle charts can be downloaded as PDF or DOC. These simple settings give lengths and sin/cos/tan values that are easy to work with compared to never-ending decimals.
Use the blank unit circle worksheet to test yourself, and keep the filled-in unit circle handy for reference. Place the radian measure of each angle in the provided rectangles. Below you will find five printable unit circle worksheets; the first sheet includes all the radians, degrees, and tangents. Fill in the blanks and write the ordered pairs for each point around the circle. The printable unit circle worksheets are intended to provide high school practice in using the unit circle to find coordinates.
When You Are Studying Trigonometry You Can Use A Unit Circle Chart.
The unit circle chart gives lengths and sin/cos/tan values that are easy to work with compared to never-ending decimals. For each quadrant, state which trigonometric functions are positive, and write the ordered pairs for each point around the circle.
Place The Degree Angle Measure Of Each Angle In The Dashed Blanks Inside The Circle, And The Radian Measure Of Each Angle In The Solid Blanks Inside The Circle.
Filled-in versions of the charts are also available for checking your answers.
The Unit Circle Is The Golden Key To Actually Understanding Trigonometry.
The printable unit circle worksheets are intended to provide high school practice in using the unit circle to find coordinates.
Useful As A Geometry Handout/Study Guide.
Below you will find five printable unit circle worksheets; the free, printable blank unit circle charts can be downloaded as PDF or DOC.
Related Post: | {"url":"https://tineopprinnelse.tine.no/en/printable-blank-unit-circle.html","timestamp":"2024-11-08T21:07:41Z","content_type":"text/html","content_length":"29610","record_id":"<urn:uuid:1fbfa598-cb4e-4a5f-a1be-68881005d546>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00786.warc.gz"} |
Understanding FFTs and Windowing
The Fourier transform can be powerful in understanding everyday signals and troubleshooting errors in signals. Although the Fourier transform is a complicated mathematical function, it isn't a
complicated concept to understand and relate to your measured signals. Essentially, it takes a signal and breaks it down into sine waves of different amplitudes and frequencies. Let's take a deeper
look at what this means and why it is useful.
All Signals Are the Sum of Sines
When looking at real-world signals, you usually view them as a voltage changing over time. This is referred to as the time domain. Fourier's theorem states that any waveform in the time domain can be
represented by the weighted sum of sines and cosines. For example, take two sine waves, where one is three times as fast as the other; that is, the second signal's frequency is 1/3 that of the first. When you add them, you
can see you get a different signal.
^Figure 1: When you add two signals, you get a new signal.
Now imagine if that second wave was also 1/3 the amplitude. This time, just the peaks are affected.
^Figure 2: Adjusting the amplitude when adding signals affects the peaks.
Imagine you added a third signal that was 1/5 the amplitude and frequency of the original signal. If you continued in this fashion until you hit the noise floor, you might recognize the resulting
^Figure 3: A square wave is the sum of sines.
You have now created a square wave. In this way, all signals in the time domain can be represented by a series of sines.
Although it is pretty neat that you can construct signals in this fashion, why do you actually care? Because if you can construct a signal using sines, you can also deconstruct signals into sines.
Once a signal is deconstructed, you can then see and analyze the different frequencies that are present in the original signal. Take a look at a few examples where being able to deconstruct a signal
has proven useful:
• If you deconstruct radio waves, you can choose which particular frequency, or station, you want to listen to.
• If you deconstruct audio waves into different frequencies such as bass and treble, you can alter the tones to boost certain sounds or remove unwanted noise.
• If you deconstruct earthquake vibrations of varying speeds and strengths, you can optimize building designs to avoid the strongest vibrations.
• If you deconstruct computer data, you can ignore the least important frequencies and get more compact representations in memory, otherwise known as file compression.
Deconstructing Signals Using the FFT
The Fourier transform deconstructs a time domain representation of a signal into the frequency domain representation. The frequency domain shows the voltages present at varying frequencies. It is a
different way to look at the same signal.
A digitizer samples a waveform and transforms it into discrete values. Because of this transformation, the Fourier transform will not work on this data. Instead, the discrete Fourier transform (DFT)
is used, which produces as its result the frequency domain components in discrete values, or bins. The fast Fourier transform (FFT) is an optimized implementation of a DFT that takes less computation to
perform but essentially just deconstructs a signal.
Take a look at the signal from Figure 1 above. There are two signals at two different frequencies; in this case, the signal has two spikes in the frequency domain, one at each of the two frequencies
of the sines that composed the signal in the first place.
^Figure 4: When two sine waves of equal amplitude are added, they result in two spikes in the frequency domain.
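To see this numerically, here is a brief, hedged sketch using NumPy's FFT (our illustration, not NI code): two summed sine waves produce exactly two dominant bins, mirroring Figure 4. The 50 Hz and 150 Hz values are arbitrary choices for the example.

import numpy as np

fs = 1000                                   # sample rate in Hz
t = np.arange(0, 1, 1 / fs)                 # one second of samples
signal = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 150 * t)

spectrum = np.abs(np.fft.rfft(signal))      # magnitude of each frequency bin
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# The two largest bins sit at the two component frequencies.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks))                        # [50.0, 150.0]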
The amplitude of the original signal is represented on the vertical axis. If you look at the signal from Figure 2 above where there are two different signals at different amplitudes, you can see that
the most prominent spike corresponds to the frequency of the highest voltage sine signal. Looking at a signal in the time domain, you can get a good idea of the original signal by knowing at what
frequencies the largest voltage signals occur.
^Figure 5: The highest spike is the frequency of the largest amplitude.
It can also be helpful to look at the shape of the signal in the frequency domain. For instance, let's take a look at the square wave in the frequency domain. We created the square wave using many
sine waves at varying frequencies; as such, you would expect many spikes in the signal in the frequency domain, one for each signal added. If you see a nice ramp in the frequency domain, you know the
original signal was a square wave.
^Figure 6: The frequency domain of a square wave looks like a ramp.
So what does this look like in the real world? Many mixed-signal oscilloscopes (MSO) have an FFT function. Below, you can see what an FFT of a square wave looks like on a mixed-signal graph. If you
zoom in, you can actually see the individual spikes in the frequency domain.
^Figure 7: The original square wave and its corresponding FFT are displayed in A, while B is a zoomed in portion of the FFT where you can see the individual spikes.
Looking at signals in the frequency domain can help when validating and troubleshooting signals. For instance, say you have a circuit that is supposed to output a sine wave. You can view the output
signal on the oscilloscope in the time domain in Figure 8 below. It looks pretty good!
^Figure 8: If these two waves were added, they would look like a perfect sine wave because they are so similar.
However, when you view the signal in the frequency domain, you expect only one spike because you are expecting to output a single sine wave at only one frequency. Instead, you can see that there is a
smaller spike at a higher frequency; this is telling you that the sine wave isn't as good as you thought. You can work with the circuit to eliminate the cause of the noise added at that particular
frequency. The frequency domain is great at showing whether a clean-looking signal in the time domain actually contains crosstalk, noise, or jitter.
^Figure 9: Looking at the seemingly perfect sine wave from Figure 8, you can see here that there is actually a glitch. | {"url":"https://www.ni.com/en/shop/data-acquisition/measurement-fundamentals/analog-fundamentals/understanding-ffts-and-windowing.html","timestamp":"2024-11-07T17:34:11Z","content_type":"text/html","content_length":"117914","record_id":"<urn:uuid:affa2a34-a37f-46c7-9b6e-249ced1a22cc>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00690.warc.gz"} |
Area of Regular Polygon Calculator - Online Calculators
To calculate the area of a regular polygon, square the side length (s²) and multiply by the number of sides (n). Divide this product by 4 times the tangent of π/n. This will give you the area of the polygon.
Area of Regular Polygon Calculator
Formula: \(A = \frac{n \times s^2}{4 \times \tan\left(\frac{\pi}{n}\right)}\)
The Area of Regular Polygon Calculator is designed to help you quickly find the area of any regular polygon by inputting the number of sides and the length of each side. Whether you're calculating
the area of a 5-sided polygon or a polygon with 16 sides, this tool uses the standard formulas to provide accurate results.
You can also calculate the area using an apothem or radius if given. The calculator is ideal for students, architects, and anyone working with polygons in geometry. It simplifies the process, giving
step-by-step results for calculating the area of both regular and irregular polygons.
$A = \frac{n \times s^2}{4 \times \tan\left(\frac{\pi}{n}\right)}$
Variable Meaning
A Area of the regular polygon
n Number of sides of the polygon
s Length of each side
π Pi (approximately 3.14159)
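The same formula is a one-liner in code. A small Python sketch (the function name is my own):

import math

def regular_polygon_area(n: int, s: float) -> float:
    """Area of a regular polygon with n sides of length s."""
    return (n * s ** 2) / (4 * math.tan(math.pi / n))

print(round(regular_polygon_area(6, 5), 2))  # hexagon, side 5 -> 64.95
print(round(regular_polygon_area(8, 3), 2))  # octagon, side 3 -> 43.46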
Solved Calculations:
Example 1:
Given Values:
โข n = 6 sides (hexagon)
โข s = 5 meters
Calculation Instructions
A = (6 × 5²) / (4 × tan(π / 6)) Square the side length and multiply by the number of sides. Divide by 4 times the tangent of π/n.
A = (6 × 25) / (4 × tan(π / 6)) Calculate the side length squared: 5² = 25.
A = 150 / (4 × tan(0.5236)) Multiply by the number of sides and evaluate π/6 ≈ 0.5236.
A = 150 / (4 × 0.5774) Take the tangent: tan(0.5236) ≈ 0.5774.
A = 150 / 2.3096 Multiply by 4 and perform the final division.
A ≈ 64.95 square meters The result gives the area of the hexagon.
Answer: A ≈ 64.95 square meters
Example 2:
Given Values:
โข n = 8 sides (octagon)
โข s = 3 meters
Calculation Instructions
A = (8 × 3²) / (4 × tan(π / 8)) Square the side length and multiply by the number of sides. Divide by 4 times the tangent of π/n.
A = (8 × 9) / (4 × tan(π / 8)) Calculate the side length squared: 3² = 9.
A = 72 / (4 × tan(0.3927)) Multiply by the number of sides and evaluate π/8 ≈ 0.3927.
A = 72 / (4 × 0.4142) Take the tangent: tan(0.3927) ≈ 0.4142.
A = 72 / 1.6568 Multiply by 4 and perform the final division.
A ≈ 43.46 square meters The result gives the area of the octagon.
Answer: A ≈ 43.46 square meters
What is the Area of Regular Polygon Calculator?
Finding the area of a regular polygon involves knowing the number of sides and the length of each side. This calculator uses an accurate formula to give you correct results for any polygon shape,
whether you're working with a 4-sided or 8-sided polygon. This calculator can also be used with additional inputs, like the apothem, to provide an alternative method of calculating area.
For more complex shapes, like irregular polygons, tools like the Area of Irregular Polygon Calculator provide similar results by breaking the polygon down into triangles and calculating each area separately.
This calculator explains each step so that you can easily understand the calculations behind the results. Whether you're dealing with an n-sided polygon or a polygon with given angles, this tool is
flexible enough to handle various input formats. From geometry homework to real-world applications, it is a must-have for accurate and efficient calculations. | {"url":"https://areacalculators.com/area-of-regular-polygon-calculator/","timestamp":"2024-11-03T03:20:23Z","content_type":"text/html","content_length":"109514","record_id":"<urn:uuid:d63829f6-f24b-418d-bcdd-e617486ece77>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00071.warc.gz"} |
Max of (sin(x))^(e^x)
02-25-2018, 06:19 PM
Post: #7
lrdheat Posts: 872
Senior Member Joined: Feb 2014
RE: Max of (sin(x))^(e^x)
Not at all arguing that the CASIO is better... with fsolve, an interval of 14.13 to 14.14 is still too wide for fsolve to find the solution to d/dx ((sin(x))^(e^x))=0. Solve on the CASIO fx-CG10 finds
the answer with the much wider range of 13 to 15. (SolveN requires the much narrower range on the CASIO.)
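For anyone wanting to cross-check outside a calculator: since (sin(x))^(e^x) is positive on (4π, 5π), its derivative vanishes exactly where ln(sin x) + cot x = 0, and a root finder handles that on the same wide 13-to-15 bracket. A Python/SciPy sketch (names are mine):

import math
from scipy.optimize import brentq

# d/dx (sin x)**exp(x) = (sin x)**exp(x) * exp(x) * (ln(sin x) + cot x),
# so on (4*pi, 5*pi) the critical point is the zero of the last factor.
g = lambda x: math.log(math.sin(x)) + math.cos(x) / math.sin(x)

print(brentq(g, 13, 15))   # -> 14.137166941154069, i.e. 9*pi/2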
Astronautics Questions and Answers - Real Orbits - Third-Body Forces - Set 2 - Sanfoundry
Astronautics Questions and Answers - Real Orbits - Third-Body Forces - Set 2
This set of Astronautics Multiple Choice Questions & Answers (MCQs) focuses on "Real Orbits - Third-Body Forces - Set 2".
1. What is the energy of a satellite of mass 1 ton orbiting at a distance of 36,000 kilometers away from Earth's center, assuming only Earth's gravitational influence (Earth's mass = 5.972 × 10^24 kg)?
a) -5.5 Giga Joules
b) -5.5 Mega Joules
c) 5.5 Giga Joules
d) 5.5 Mega Joules
View Answer
Answer: a
Explanation: Given, r = 36,000 kilometers = 36 x 10^6 m; M = 5.972 × 10^24 kg; m = 1000 kg. Now, energy of the satellite is given by the following equation:
E = \(\frac{1}{2}mv^2 - \frac{GMm}{r}\)
we know that
v^2 = \(\frac{GM}{r}\)
E = \(\frac{1}{2}m (\frac{GM}{r}) - \frac{GMm}{r}\)
E = \(\frac{1}{2}\frac{GMm}{r} - \frac{GMm}{r}\)
E = \(-\frac{1}{2}\frac{GMm}{r}\)
substituting the given data, we have
E = \(-\frac{1}{2}\frac{(6.67 * 10^{-11})(5.972 * 10^{24})(1000)}{36 * 10^6}\) = -5.5324 * 10^9 Joules = -5.5 Giga Joules
2. What is the energy of the above satellite if we also consider the Moon's gravity? (Assume the Moon lies 384,400 km from Earth and has a mass of 7.348 × 10^22 kilograms.)
a) -5.55 x 10^9 Joules
b) -6.55 x 10^9 Joules
c) -5.55 x 10^10 Joules
d) 5.55 x 10^9Joules
View Answer
Answer: a
Explanation: The energy of a satellite taking only Earth's influence into consideration is
E = \(-\frac{1}{2} \frac{GM_Em}{r}\)
where M[E] is the mass of Earth. But here, we need to consider the Moon's gravitational potential energy too. So the formula becomes
E = \(-\frac{1}{2} \frac{GM_Em}{r_E} - \frac{GM_Mm}{r_M}\)
where r[E] is the distance of the satellite from Earth's center, while r[M] is the distance of the satellite from the Moon's center. We have
M[E] = 5.972 × 10^24 kg; r[E] = 36,000 kilometers = 3.6 x 10^7 meters; M[M] = 7.348 × 10^22 kg; r[M] = 384,400 - 36,000 = 348,400 kilometers = 3.484 x 10^8 meters. Substituting these values, we get
E = -5.55 x 10^9 Joules. Clearly, the change in energy of the satellite by taking the Moon's influence into consideration is not that impressive.
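Both energies are easy to verify numerically; a short Python check using the constants quoted in the questions:

G  = 6.67e-11            # gravitational constant used above
ME = 5.972e24            # Earth mass, kg
MM = 7.348e22            # Moon mass, kg
m  = 1000.0              # satellite mass, kg
rE = 3.6e7               # satellite distance from Earth's center, m
rM = 384_400e3 - rE      # satellite distance from the Moon, m

E_earth_only = -0.5 * G * ME * m / rE
E_with_moon  = E_earth_only - G * MM * m / rM

print(f"{E_earth_only:.4e}")  # -> -5.5324e+09 J, about -5.5 GJ
print(f"{E_with_moon:.4e}")   # -> -5.5465e+09 J, about -5.55 GJ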
3. Gravitational perturbations from the planets are noticeable during __________
a) close encounters with the sun
b) close encounters with Earth
c) close encounters between each external object
d) solar maximum
View Answer
Answer: b
Explanation: Third body forces on a high altitude satellite from the planets are significant during close encounters with Earth. During such encounters, the gravity from the nearby planets is more
strongly felt by Earth-orbiting satellites.
4. The close encounters of planets like Venus, Mars and Jupiter with Earth occur when _______________
a) their orbits change shape
b) their orbits change size
c) their orbits intersect with that of Earthโs
d) their angular separation with respect to Earth is minimum
View Answer
Answer: d
Explanation: The orbits of all the planets are fixed. They come close to Earth when the angular distance from our planet is minimum. The angular separation is measured in terms of the angle between
the Earth-sun line (line joining the centers of the Earth and the sun) and the planet-sun line (line joining the centers of the concerned planet and the sun).
5. Assume that at a given instant, the distance of Earth from both Mars and Venus is the same. In such a scenario, the perturbations on a high-altitude satellite are more from Mars than from Venus.
a) True
b) False
View Answer
Answer: b
Explanation: The perturbations from Venus are more due to the fact that Venus has a greater mass than the red planet, resulting in a stronger perturbing effect from Venus than from Mars.
6. At the closest point of approach, the perturbations on a geostationary satellite of mass 1 ton due to Jupiter are more than those due to Mars (assuming Jupiter is 588 million kilometers from Earth
at its nearest, while Mars is 54.6 million kilometers from Earth at its closest; also, mass of Jupiter = 1.898 × 10^27 kg, and mass of Mars = 6.39 × 10^23 kg).
a) True
b) False
View Answer
Answer: a
Explanation: In order to find out the magnitude of the perturbations, we calculate the gravitational force on the geostationary satellite due to Jupiter and Mars separately. Let us assume that the
satellite is situated along the Earth-Jupiter (as well as Earth-Mars) line and lies in between Earth and Jupiter (or Mars). Now, mass of the satellite, m = 1000 kg.
Because the satellite is a geostationary satellite, the distance of the satellite from Jupiter, r[J] = (588 million kilometers) - (radius of the satellite's orbit) = (588 x 10^9 m) - (36 x 10^6 m) =
5.87964 x 10^11 m. Also, we have the mass of Jupiter, M[J] = 1.898 × 10^27 kg. So the force on the satellite due to Jupiter's gravity is given by
\(F_J = \frac{GM_Jm}{(r_J)^2}\)
Substituting all the values, we get F[J] = 3.664 x 10^-4 N.
Now let's consider the case of Mars. Given, mass of the red planet, M[M] = 6.39 x 10^23 kg. Also, distance of the satellite from Mars, r[M] = (54.6 million km) - (36,000 km) = 5.4564 x 10^10 meters.
So, the force on the satellite due to Mars is given by
\(F_M = \frac{GM_Mm}{(r_M)^2}\)
Substituting the given values, we get F[M] = 1.4325 x 10^-5 N.
Clearly, F[J]>F[M]. The perturbing effects from Jupiter are thus greater.
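The same comparison takes a few lines of Python (using the constants from the explanation):

G, m = 6.67e-11, 1000.0            # constants used in the explanation

MJ, rJ = 1.898e27, 588e9 - 36e6    # Jupiter: mass (kg), distance (m)
MM, rM = 6.39e23, 54.6e9 - 36e6    # Mars: mass (kg), distance (m)

print(G * MJ * m / rJ**2)  # -> ~3.66e-04 N from Jupiter
print(G * MM * m / rM**2)  # -> ~1.43e-05 N from Mars: Jupiter dominates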
7. For nearly circular orbits, the equation for the rate of change in the argument of perigee of a satellite's orbit due to the Moon's influence is given by
\(\dot{w} = 0.00169 \frac{[4 - 5(sin i)^2]}{n}\)
where 'i' is the orbital inclination and n is the number of orbit revolutions per day. Here, the unit of \(\dot{w}\) is 'degrees per day', and it is clearly independent of the _____________
a) inclination
b) orbital altitude
c) number of revolutions
d) orbital orientation
View Answer
Answer: d
Explanation: Here, the rate of change of the argument of perigee is dependent on the inclination and orbital altitude (the number of revolutions per day depends on the height of the satellite above
the Earth). The orientation of the orbit, however, does not affect the value of \(\dot{w}\) (given a fixed inclination and orbital height).
8. For nearly circular orbits, the equation for the rate of change in the argument of perigee of a satellite's orbit due to the sun's influence is given by
\(\dot{w}_{sun} = 0.00077 \frac{[4 - 5(sin i)^2]}{n}\)
where 'i' is the orbital inclination and n is the number of orbit revolutions per day. For what value of the inclination does the precession rate of the argument of perigee due to the sun become zero?
a) 56 degrees
b) 34 degrees
c) 63 degrees
d) 89 degrees
View Answer
Answer: c
Explanation: Observing the equation, we see that \(\dot{w}_{sun}\) becomes zero when the numerator becomes zero. So \([4 - 5(\sin i)^2] = 0\), or \(5(\sin i)^2 = 4\), implying \((\sin i)^2 = 4/5\), or \(\sin i = 0.894\), so \(i = \sin^{-1}(0.894)\), which gives the value of 'i' as 63.43 degrees, which is approximately 63 degrees.
9. For a circular orbit, the secular rate of change of the right ascension of the ascending node caused by the Moon is given by ___________
\(\dot{\Omega} = -0.00338 \frac{cos i}{n}\)
where 'i' is the orbital inclination and n is the number of orbit revolutions per day. Here, the unit of \(\dot{\Omega}\) is 'degrees per day'. What happens to the rate of change of the right ascension
for a polar orbit?
a) Becomes \(\dot{\Omega} = -0.00338 \frac{1}{n}\)
b) Becomes 0
c) Becomes \(\dot{\Omega} = -0.00338\)
d) Becomes \(\dot{\Omega} = -0.00338 (cos i)\)
View Answer
Answer: b
Explanation: For a polar orbit, the inclination 'i' is 90 degrees, meaning that cos(i) = 0. So \(\dot{\Omega}\) becomes 0.
10. If we consider a circular orbit, the secular rate of change of the right ascension of the ascending node caused by the Moon is given by
\(\dot{\Omega}_{moon} = -0.00338 \frac{cos i}{n}\)
while that caused by the sun is given by
\(\dot{\Omega}_{sun} = -0.00154 \frac{cos i}{n}\)
The value of \( (\frac{\dot{\Omega}_{moon}}{\dot{\Omega}_{sun}})\) is a constant
a) True
b) False
View Answer
Answer: a
Explanation: We have, for a given inclination and orbital altitude,
\( (\frac{\dot{\Omega}_{moon}}{\dot{\Omega}_{sun}}) = \frac{(-0.00338 \frac{cos i}{n})}{(-0.00154 \frac{cos i}{n})}\)
\( (\frac{\dot{\Omega}_{moon}}{\dot{\Omega}_{sun}}) = \frac{-0.00338}{-0.00154} = 2.1948\)
This is clearly a constant value.
Sanfoundry Global Education & Learning Series - Astronautics.
To practice all areas of Astronautics, here is the complete set of Multiple Choice Questions and Answers.
Transitive Group -- from Wolfram MathWorld
Transitivity is a result of the symmetry in the group. A group G is called transitive if its group action (understood to be a subgroup of a permutation group acting on a given set) is transitive. In other words, G is transitive if the group orbit of any element is equal to the entire set.
A group is called k-transitive if there exists a set of elements on which the group acts faithfully and k-transitively. Note that the transitivity of a group is a property of a particular action rather than of the abstract group alone: the Higman-Sims group has both a 2-transitive representation of degree 176 and a 1-transitive
representation of degree 100.
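As a quick concrete illustration (a SymPy sketch of my own; the 5-cycle example is not from the original entry):

from sympy.combinatorics import Permutation, PermutationGroup

# The cyclic group generated by the 5-cycle (0 1 2 3 4) acts on 5 points.
G = PermutationGroup(Permutation([1, 2, 3, 4, 0]))

print(G.is_transitive())  # True: the orbit of any point is everything
print(G.orbit(0))         # {0, 1, 2, 3, 4}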
The symmetric group S_n is n-transitive and the alternating group A_n is (n-2)-transitive. The multiply transitive groups were classified as a consequence of the classification theorem of finite groups. Except for some sporadic examples, the multiply transitive groups fall into infinite families. Certain subgroups of the
affine group on a finite vector space, including the affine group itself, are 2-transitive. Some of these are summarized below.
The multiply transitive groups fall into six infinite families, and four classes of sporadic groups. In the following enumeration, q denotes a power of a prime.
1. Certain subgroups of the affine group on a finite vector space, including the affine group itself, are 2-transitive.
2. The projective special linear groups PSL_n(q) are 2-transitive in their action on the points of projective space.
3. The symplectic groups defined over the field of two elements have two distinct actions which are 2-transitive.
4. A field involution allows a Hermitian form to be defined on a vector space; the unitary group preserves it, and the projective special unitary group acts 2-transitively on the set of isotropic vectors.
5. The Suzuki groups of Lie type act 2-transitively as the automorphism group of a Steiner system, an inversive plane of order q.
6. The Ree groups of Lie type act 2-transitively as the automorphism group of a Steiner system, a unital of order q.
7. The Mathieu groups M_11, M_12, M_22, M_23, and M_24 are 4-, 5-, 3-, 4-, and 5-transitive, respectively.
8. The projective special linear group PSL_2(11) has a 2-transitive action related to the Witt geometry.
9. The Higman-Sims group HS is 2-transitive.
10. The Conway group Co_3 is 2-transitive.
Other 3-transitive groups include the projective general linear groups PGL_2(q).
How To Learn Trigonometry Intuitively
Trig mnemonics like SOH-CAH-TOA focus on computations, not concepts:
TOA explains the tangent about as well as $x^2 + y^2 = r^2$ describes a circle. Sure, if you're a math robot, an equation is enough. The rest of us, with organic brains half-dedicated to vision
processing, seem to enjoy imagery. And "TOA" evokes the stunning beauty of an abstract ratio.
I think you deserve better, and here's what made trig click for me.
โข Visualize a dome, a wall, and a ceiling
• Trig functions are percentages relative to the three shapes
Motivation: Trig Is Anatomy
Imagine Bob The Alien visits Earth to study our species.
Without new words, humans are hard to describe: "There's a sphere at the top, which gets scratched occasionally" or "Two elongated cylinders appear to provide locomotion".
After creating specific terms for anatomy, Bob might jot down typical body proportions:
โข The armspan (fingertip to fingertip) is approximately the height
โข A head is 5 eye-widths wide
โข Adults are 8 head-heights tall
How is this helpful?
Well, when Bob finds a jacket, he can pick it up, stretch out the arms, and estimate the owner's height. And head size. And eye width. One fact is linked to a variety of conclusions.
Even better, human biology explains human thinking. Tables have legs, organizations have heads, crime bosses have muscle. Our biology offers ready-made analogies that appear in man-made creations.
Now the plot twist: you are Bob the alien, studying creatures in math-land!
Generic words like "triangle" aren't overly useful. But labeling sine, cosine, and hypotenuse helps us notice deeper connections. And scholars might study haversine, exsecant and gamsin, like
biologists who find a link between your tibia and clavicle.
And because triangles show up in circles…
…and circles appear in cycles, our triangle terminology helps describe repeating patterns!
Trig is the anatomy book for "math-made" objects. If we can find a metaphorical triangle, we'll get an armada of conclusions for free.
Sine/Cosine: The Dome
Instead of staring at triangles by themselves, like a caveman frozen in ice, imagine them in a scenario, hunting that mammoth.
Pretend you're in the middle of your dome, about to hang up a movie screen. You point to some angle "x", and that's where the screen will hang.
The angle you point at determines:
โข sine(x) = sin(x) = height of the screen, hanging like a sign
• cosine(x) = cos(x) = distance to the screen along the ground ["cos" ~ how "close"]
• the hypotenuse, the distance to the top of the screen, is always the same
Want the biggest screen possible? Point straight up. It's at the center, on top of your head, but it's big dagnabbit.
Want the screen the furthest away? Sure. Point straight across, 0 degrees. The screen has "0 height" at this position, and it's far away, like you asked.
The height and distance move in opposite directions: bring the screen closer, and it gets taller.
Tip: Trig Values Are Percentages
Nobody ever told me in my years of schooling: sine and cosine are percentages. They vary from +100% to 0 to -100%, or max positive to nothing to max negative.
Let's say I paid \$14 in tax. You have no idea if that's expensive. But if I say I paid 95% in tax, you know I'm getting ripped off.
An absolute height isn't helpful, but if your sine value is .95, I know you're almost at the top of your dome. Pretty soon you'll hit the max, then start coming down again.
How do we compute the percentage? Simple: divide the current value by the maximum possible (the radius of the dome, aka the hypotenuse).
That's why we're told "Sine = Opposite / Hypotenuse". It's to get a percentage! A better wording is "Sine is your height, as a percentage of the hypotenuse". (Sine becomes negative if your angle
points "underground". Cosine becomes negative when your angle points backwards.)
Let's simplify the calculation by assuming we're on the unit circle (radius 1). Now we can skip the division by 1 and just say sine = height.
Every circle is really the unit circle, scaled up or down to a different size. So work out the connections on the unit circle and apply the results to your particular scenario.
Try it out: plug in an angle and see what percent of the height and width it reaches.
The growth pattern of sine isn't an even line. The first 45 degrees cover 70% of the height, and the final 10 degrees (from 80 to 90) only cover 2%.
This should make sense: at 0 degrees, you're moving nearly vertical, but as you get to the top of the dome, your height change levels off.
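A few lines of Python reproduce the same percentages (my own snippet, playing the role of the plug-in-an-angle exercise above):

import math

for angle in (0, 45, 80, 90):
    rad = math.radians(angle)
    print(f"{angle} deg: height {math.sin(rad):.0%}, distance {math.cos(rad):.0%}")

# 0 deg: height 0%, distance 100%
# 45 deg: height 71%, distance 71%
# 80 deg: height 98%, distance 17%
# 90 deg: height 100%, distance 0%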
Tangent/Secant: The Wall
One day your neighbor puts up a wall right next to your dome. Ack, your view! Your resale value!
But can we make the best of a bad situation?
Sure. What if we hang our movie screen on the wall? You point at an angle (x) and figure out:
โข tangent(x) = tan(x) = height of screen on the wall
โข distance to screen: 1 (the screen is always the same distance along the ground, right?)
โข secant(x) = sec(x) = the โladder distanceโ to the screen
We have some fancy new vocab terms. Imagine seeing the Vitruvian "TAN GENTleman" projected on the wall. You climb the ladder, making sure you can "SEE, CAN'T you?". (Yeah, he's naked… won't forget
the analogy now, will you?)
Let's notice a few things about tangent, the height of the screen.
• It starts at 0, and goes infinitely high. You can keep pointing higher and higher on the wall, to get an infinitely large screen! (That'll cost ya.)
• Tangent is just a bigger version of sine! It's never smaller, and while sine "tops off" as the dome curves in, tangent keeps growing.
How about secant, the ladder distance?
โข Secant starts at 1 (ladder on the floor to the wall) and grows from there
• Secant is always longer than tangent. The leaning ladder used to put up the screen must be longer than the screen itself, right? (At enormous sizes, when the ladder is nearly vertical, they're
close. But secant is always a smidge longer.)
Remember, the values are percentages. If you're pointing at a 50-degree angle, tan(50) = 1.19. Your screen is 19% larger than the distance to the wall (the radius of the dome).
(Plug in x=0 and check your intuition that tan(0) = 0, and sec(0) = 1.)
Cotangent/Cosecant: The Ceiling
Amazingly enough, your neighbor now decides to build a ceiling on top of your dome, far into the horizon. (What's with this guy? Oh, the naked-man-on-my-wall incident…)
Well, time to build a ramp to the ceiling, and have a little chit chat. You pick an angle to build and work out:
โข cotangent(x) = cot(x) = how far the ceiling extends before we connect
โข cosecant(x) = csc(x) = how long we walk on the ramp
โข the vertical distance traversed is always 1
Tangent/secant describe the wall, and COtangent and COsecant describe the ceiling.
Our intuitive facts are similar:
• If you pick an angle of 0, your ramp is flat (infinite) and never reaches the ceiling. Bummer.
• The shortest "ramp" is when you point 90-degrees straight up. The cotangent is 0 (we didn't move along the ceiling) and the cosecant is 1 (the "ramp length" is at the minimum).
Visualize The Connections
A short time ago I had zero "intuitive conclusions" about the cosecant. But with the dome/wall/ceiling metaphor, here's what we see:
Whoa, it's the same triangle, just scaled to reach the wall and ceiling. We have vertical parts (sine, tangent), horizontal parts (cosine, cotangent), and "hypotenuses" (secant, cosecant). (Note: the
labels show where each item "goes up to". Cosecant is the full distance from you to the ceiling.)
Now the magic. The triangles have similar facts:
From the Pythagorean Theorem ($a^2 + b^2 = c^2$) we see how the sides of each triangle are linked.
And from similarity, ratios like "height to width" must be the same for these triangles. (Intuition: step away from a big triangle. Now it looks smaller in your field of view, but the internal ratios
couldn't have changed.)
This is how we find out "sine/cosine = tangent/1".
I'd always tried to memorize these facts, when they just jump out at us when visualized. SOH-CAH-TOA is a nice shortcut, but get a real understanding first!
Gotcha: Remember Other Angles
Psst… don't over-focus on a single diagram, thinking tangent is always smaller than 1. If we increase the angle, we reach the ceiling before the wall:
The Pythagorean/similarity connections are always true, but the relative sizes can vary.
(But, you might notice that sine and cosine are always smallest, or tied, since they're trapped inside the dome. Nice!)
Summary: What Should We Remember?
For most of us, I'd say this is enough:
• Trig explains the anatomy of "math-made" objects, such as circles and repeating cycles
โข The dome/wall/ceiling analogy shows the connections between the trig functions
• Trig functions return percentages that we apply to our specific scenario
You don't need to memorize $1^2 + \cot^2 = \csc^2$, except for silly tests that mistake trivia for understanding. In that case, take a minute to draw the dome/wall/ceiling diagram, fill in the labels
(a tan gentleman you can see, can't you?), and create a cheatsheet for yourself.
In a follow-up, we'll learn about graphing, complements, and using Euler's Formula to find even more connections.
Appendix: The Original Definition Of Tangent
You may see tangent defined as the length of the tangent line from the circle to the x-axis (geometry buffs can work this out).
As expected, at the top of the circle (x=90) the tangent line can never reach the x-axis and is infinitely long.
I like this intuition because it helps us remember the name "tangent", and there are nice interactive trig guides online to explore.
Still, it's critical to put the tangent vertical and recognize it's just sine projected on the back wall (along with the other triangle connections).
Appendix: Inverse Functions
Trig functions take an angle and return a percentage. $\sin(30) = .5$ means a 30-degree angle is 50% of the max height.
The inverse trig functions let us work backwards, and are written $\sin^{-1}$ or $\arcsin$ ("arcsine"), and often written asin in various programming languages.
If our height is 25% of the dome, what's our angle?
Plugging asin(.25) into a calculator gives an angle of 14.5 degrees.
Now what about something exotic, like inverse secant? Often times it's not available as a calculator function (even the one I built, sigh).
Looking at our trig cheatsheet, we find an easy ratio where we can compare secant to 1. For example, secant to 1 (hypotenuse to horizontal) is the same as 1 to cosine:
$\displaystyle{\frac{\sec}{1} = \frac{1}{\cos}}$
Suppose our secant is 3.5, i.e. 350% of the radius of the unit circle. What's the angle to the wall?
\begin{aligned} \frac{\sec}{1} &= \frac{1}{\cos} = 3.5 \\ \cos &= \frac{1}{3.5} \\ \arccos(\frac{1}{3.5}) &= 73.4 \end{aligned}
Appendix: A Few Examples
Example: Find the sine of angle x.
Ack, what a boring question. Instead of "find the sine" think, "What's the height as a percentage of the max (the hypotenuse)?".
First, notice the triangle is "backwards". That's ok. It still has a height, in green.
What's the max height? By the Pythagorean theorem, we know
\begin{aligned} 3^2 + 4^2 &= \text{hypotenuse}^2 \\ 25 &= \text{hypotenuse}^2 \\ 5 &= \text{hypotenuse} \end{aligned}
Ok! The sine is the height as a percentage of the max, which is 3/5 or .60.
Follow-up: Find the angle.
Of course. We have a few ways. Now that we know sine = .60, we can just do:
$\displaystyle{\arcsin(.60) = 36.9}$
Here's another approach. Instead of using sine, notice the triangle is "up against the wall", so tangent is an option. The height is 3, the distance to the wall is 4, so the tangent height is 3/4 or
75%. We can use arctangent to turn the percentage back into an angle:
$\displaystyle{\tan = \frac{3}{4} = .75 }$
$\displaystyle{\arctan(.75) = 36.9}$
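Both routes give the same angle, which a couple of lines of Python confirm (variable names are mine):

import math

height, width = 3, 4
hyp = math.hypot(height, width)                 # 5.0, by Pythagoras

print(math.degrees(math.asin(height / hyp)))    # 36.869... via sine
print(math.degrees(math.atan(height / width)))  # 36.869... via tangent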
Example: Can you make it to shore?
You're on a boat with enough fuel to sail 2 miles. You're currently .25 miles from shore. What's the largest angle you could use and still reach land? Also, the only reference available is Hubert's
Compendium of Arccosines, 3rd Ed. (Truly, a hellish voyage.)
Ok. Here, we can visualize the beach as the "wall" and the "ladder distance" to the wall is the secant.
First, we need to normalize everything in terms of percentages. We have 2 / .25 = 8 "hypotenuse units" worth of fuel. So, the largest secant we could allow is 8 times the distance to the wall.
We'd like to ask "What angle has a secant of 8?". But we can't, since we only have a book of arccosines.
We use our cheatsheet diagram to relate secant to cosine: Ah, I see that "sec/1 = 1/cos", so
\begin{aligned} \sec &= \frac{1}{\cos} = 8 \\ \cos &= \frac{1}{8} \\ \arccos(\frac{1}{8}) &= 82.8 \end{aligned}
A secant of 8 implies a cosine of 1/8. The angle with a cosine of 1/8 is arccos(1/8) = 82.8 degrees, the largest we can afford.
Not too bad, right? Before the dome/wall/ceiling analogy, I'd be drowning in a mess of computations. Visualizing the scenario makes it simple, even fun, to see which trig buddy can help us out.
In your problem, think: am I interested in the dome (sin/cos), the wall (tan/sec), or the ceiling (cot/csc)?
Happy math.
Update: The owner of Grey Matters put together interactive diagrams for the analogies (drag the slider on the left to change the angle).
TR04-045 | 15th April 2004 00:00
Quantum and Classical Strong Direct Product Theorems and Optimal Time-Space Tradeoffs
A strong direct product theorem says that if we want to compute k independent instances of a function, using less than k times the resources needed for one instance, then our overall success probability will be exponentially small in k.

We establish such theorems for the classical as well as quantum query complexity of the OR function. This implies slightly weaker direct product results for all total functions. We prove a similar result for quantum communication protocols computing k instances of the Disjointness function.

Our direct product theorems imply a time-space tradeoff $T^2 \cdot S = \Omega(N^3)$ for sorting N items on a quantum computer, which is optimal up to polylog factors. They also give several tight time-space and communication-space tradeoffs for the problems of Boolean matrix-vector multiplication and matrix multiplication.
By Nicolas Gambardella
In the absence of information regarding the structure of variability (whether intrinsic noise, technical error or biological variation), one very often assumes, consciously or not, a normal
distribution, i.e. a "bell curve". This is probably due to an intuitive application of the central limit theorem, which stipulates that when independent random variables are added, their normalized
sum tends toward such a normal distribution, even if the original variables themselves are not normally distributed. The reasoning then goes that any biological process is the sum of many
sub-processes, each with its own variability structure; therefore, its "noise" should be Gaussian.
Although that sounds almost common sense, alarm bells start ringing when we use such distributions with molecular measurements. Firstly, a normal distribution ranges from -∞ to +∞. And there is no
such thing as negative amounts. So, at most, the variability would follow a truncated normal distribution, starting at 0. Secondly, the normal distribution is symmetrical. However, in everyday
conversation, biologists will talk of a variability "reaching twofold". For a molecular measurement, a two-fold increase and a two-fold decrease do not represent the same amount. So there is an
asymmetric notion here. We are talking about linking the addition and removal of the same "quantum of variability" to a multiplication or division by the same number. Immediately, logarithms come to
mind. And log2 fold changes are indeed one of the most used methods to quantify differences. Populations of molecular measurements can also be fitted, sometimes reasonably, with log-normal
distributions. Of course, several other distributions have been used to better fit cellular contents of RNA and protein, including the gamma, Poisson and negative binomial distributions, as well as
more complicated mixtures.
Let's look at some single-cell gene expression measurements. Below, I plotted the distribution of read counts (read counts per million reads, to be accurate) for four genes in 232 cells. The asymmetry
is obvious, even for NDUFAB1 (the acyl carrier protein, central to lipid metabolism). This dataset was generated using a SmartSeq approach and Illumina HiSeq sequencing. It is therefore likely that
many of the observed 0 are "dropouts", possibly due to the reverse transcriptase stochastically missing the mRNAs. This problem is probably even amplified with methods such as Chromium, which are
known to detect fewer genes per cell. Nevertheless, even if we remove all 0, we observe extremely similar distributions.
One of the important consequences of the normal distribution's symmetry is that the mean and median of the distribution are identical. In a population, we should have the same number of samples
presenting less and presenting more substance than the mean. In other words, a "typical" sample, representative of the population, should display the mean amount of the substance measured. It is easy
to see that this is not the case at all for our single-cell gene expressions. The numbers of cells expressing more than the mean of the population are 99 for ACP (not hugely far from the 116 of the
median), 86 for hexokinase, 78 for histone acetyl transferase P300 and 30 for actin 2. In fact, in the latter case, the median is 0, mRNAs having been detected in only 50 of the 232 cells! So, if we
take a cell randomly in the population, most of the time it presents a count of 0 CPM of actin 2. The mean expression of 52.5 CPM is certainly not representative!
If we want to model the cell type, and provide initial concentrations for some messenger RNAs, we must use the median of the measurements, not the mean (of course, the best course of action would be
to build an ensemble model, cf. below). The situation would be different if we wanted to model the tissue, that is, a sum of non-individualised cells representative of the population.
To explain how such asymmetric distributions can arise from noise following normal distributions, we can build a small model of gene expression. mRNA is transcribed at a constant flux, with a rate
constant kT. It is then degraded following a unimolecular decay with rate kdeg (chosen to be 1 on average, for convenience). Both rate constants are computed from energies, following the Arrhenius
equation, k = A e^(-E/RT), where R is the gas constant (8.314) and T is the temperature, which we set at 310 K (37 deg C). To simplify, we'll just set the scaling factor A to 1, assuming it is included in
the reference energy. E is 0 for degradation, and we modulate the reference transcription energy to control the level of transcript. Both transcription and degradation energy will be affected by
normally distributed noises that represent differences between cells (e.g. concentration and state of enzymes). So Ei = E + noise. Because of the Arrhenius equation, the normal distributions of energy
are transformed into lognormal distributions of rates. Below I plot the distributions of the noises in the cells and the resulting rates.
The equilibrium concentration of the mRNA is then kT/kdeg (we could run stochastic simulations to add temporal fluctuations, but that would not change the message). The number of molecules is
obtained by multiplying by volume (1e-15 l) and Avogadro's number. Each panel presents 300 cells. The distribution on the top-left looks kind of intermediate between those of hexokinase and ACP above.
To get the values on the top-right panel, we simulate an overall increase of the transcription rate by twofold, using a decrease of the energy by 8.314*310*ln(2). In this specific case, the observed
ratios between the two medians and between the two means are both about 2.04, close to the "truth". So we could correctly infer a twofold increase by looking at the means. In the bottom panels, we
increase the variability of the systems by doubling the standard deviation of the energy noises. Now the ratio of the medians is 1.8, inferring an 80% increase, while the ratio of the means is 2.53,
inferring an increase of 153%!
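A minimal Python sketch of this simulation; the reference energy and the noise SDs below are my own illustrative choices, so the exact ratios will differ from the figures:

import numpy as np

R, T = 8.314, 310.0
RT = R * T
V, NA = 1e-15, 6.022e23            # cell volume (l) and Avogadro's number
rng = np.random.default_rng(1)

def copy_numbers(E_T, sd, n_cells=300):
    """Steady-state mRNA counts per cell, with normal noise (SD sd) on energies."""
    kT   = np.exp(-(E_T + rng.normal(0.0, sd, n_cells)) / RT)   # transcription
    kdeg = np.exp(-(0.0 + rng.normal(0.0, sd, n_cells)) / RT)   # degradation
    return kT / kdeg * V * NA      # concentration kT/kdeg -> molecule count

E_ref = RT * 16.6                  # illustrative: ~37 molecules when noise is zero
base    = copy_numbers(E_ref, sd=2000)
doubled = copy_numbers(E_ref - RT * np.log(2), sd=2000)  # 2x transcription

# With only 300 noisy cells, both estimates scatter around the true 2.0,
# and the mean-based ratio drifts more than the median-based one.
print(np.median(doubled) / np.median(base), np.mean(doubled) / np.mean(base))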
In summary:
1. Means of single cell molecular measurements are not a good way of getting a value representing the population;
2. Comparing the means of single measurements in two populations does not provide an accurate estimation of the underlying changes; | {"url":"https://www.ascistance.co.uk/blog/tag/histogram/","timestamp":"2024-11-07T15:57:39Z","content_type":"text/html","content_length":"42634","record_id":"<urn:uuid:613f2ada-833f-4ef4-b59e-557f7fff8a7b>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00187.warc.gz"} |
5 Ways to Calculate the Square Root in Excel
The square root of a number is used in many mathematical, statistical, engineering, and other types of formulas.
It's no doubt the reason why many people need to calculate the square root of numbers in Excel.
With such a common task, there are many ways to get it done in Excel.
In this post, I'll show you 5 ways to calculate the square root of a number.
What is a Square Root of a Number?
The square root of a number is another number that produces the original number when multiplied by itself.
You would say A is the square root of B if and only if A x A = B.
For example, 3 is the square root of 9 because 3 x 3 = 9.
1.5 is the square root of 2.25 because 1.5 x 1.5 = 2.25.
Calculate the Square Root with the Carat Operator
One way you can use to calculate the square root of a number is using the carat (^) operator.
This is Microsoft Excel's exponentiation operator and will allow you to raise a number to an exponent or power.
Finding the square root of a number is the same as raising that number to a power of 1/2 = 0.5. This is because A^1/2 x A^1/2 = A^(1/2+1/2) = A^1 = A.
= B3 ^ ( 1 / 2 )
You can use the carat operator as above to raise the number in cell B3 to the power of 1/2 or 0.5 which will produce the square root.
Calculate the Square Root with the SQRT Function
Another useful way to get the square root is with the SQRT function.
In fact, the sole purpose of this function is to return the square root of a number you supply it.
Syntax for the SQRT Function
= SQRT ( number )
โข number is the positive number of which you would like to calculate the square root.
Note: If you try to find the square root of a negative number, the function will return a #NUM! error. This is because the square root does not exist unless you consider the complex numbers.
Example of the SQRT Function
= SQRT ( B3 )
The above example will return the square root of the number in cell B3. In this case, it returns 2 as the result.
Calculate the Square Root with the POWER Function
The SQRT function is a very focused function with a single-use case of calculating the square root.
There is a more generalized function that will allow you to calculate the exponent of any number including the square root value.
Syntax for the POWER Function
= POWER ( base, exponent )
โข base is the number which you would like to calculate the square root.
โข exponent is the power to which you would like to raise the base value.
Example of Calculating the Square Root with the POWER Function
= POWER ( B3, 1 / 2 )
If you want to use the POWER function to find the square root, then you can use an exponent of 1/2 = 0.5.
The example above will return the square root of the value in cell B3.
Calculate the Square Root with the SERIESSUM Function
SERIESSUM is a specialized function that allows you to evaluate a series.
a[1]x^n + a[2]x^(n+m) + a[3]x^(n+2m) +...+ a[i]x^(n+(i-1)m)
A series is just the sum of a sequence of terms like the above generic formula. The SERIESSUM function will evaluate this sum.
Since x^(1/2) is a special case of this generic formula, you can use the SERIESSUM function to evaluate the square root of a value.
Syntax for the SERIESSUM Function
= SERIESSUM ( x, n, m, coefficients )
โข x is the input value of the series. This is the value which you would like to evaluate the series at.
โข n is the starting power of the series.
โข m is the step by which the power will increase in the series.
โข coefficients is an array of values (a[1], a[2], a[3],โฆ,a[i]) to multiply each term of the series.
Example of Calculating the Square Root with the SERIESSUM Function
Now if you'd like to use the SERIESSUM function to calculate the square root of a number, then all you need to do is use a single value instead of an array of values for the coefficients.
This way the series will only have a single term.
You can then set the starting power as 1/2 and the step increases as 0.
= SERIESSUM ( B3, 1 / 2, 0, 1 )
In the above formula, cell B3 contains the value you wish to find the square root of.
The starting power of the series is n = 1/2 and this will increase by a factor of m = 0 for each term in the series.
Because the coefficients = 1, there will only be one term in the series which will be the square root of B3.
Calculate the Square Root with Power Query
Power query is the best way to import and transform data in Excel.
If your data is from an external source, then you might be using a power query to get the data into Excel.
Power query would also be a great place to add any calculations, such as a square root, during the import.
If your data is already in Excel, you can also use a power query.
Add your data into an Excel table. Select the data and press Ctrl + T to create a table.
Go to the Data tab and press the From Sheet command. This will open up the power query editor with your data.
Go to the Add Column tab and click on Custom Column to create a new column with the square root calculation.
= Number.Sqrt([Numbers])
This will open the Custom Column menu. Give the column a name such as Square Root, and insert the above formula into the formula editor, then press the OK button.
In this example, the Numbers column contains the number which you want to find the square root.
This will create a new column in the data!
You can now go to the Home tab and press the Close and Load button to load the data back into Excel.
The Import Data menu will appear and you can select to load the data into a Table, then choose the location where you'd like to load the data. In this example, the data is being loaded into an
Existing worksheet in cell D2.
Your data will load into another Excel table with the additional Square Root column.
Excel offers many different options when calculating the square root!
This blog post showed you five different ways you can find the square root of a number.
The carat operator, SQRT function, POWER function, SERIESSUM functions, and power query can all be used to calculate the square root.
Do you know any other methods? Let me know in the comments section below!
4 Comments
Pedro Wave on 2021-12-05 at 05:11
Two more methods to calculate the square root of cell B3:
John MacDougall on 2021-12-07 at 16:40
I would group these both under the carat operator as they are algebraic expressions that simplify to B3^(1/2).
Pedro Wave on 2021-12-05 at 05:35
It can also be calculated with:
=--IMSQRT(B3)
which returns the square root of a complex number in text format, so it is converted to a number by prepending two minus signs.
John MacDougall on 2021-12-07 at 16:36
Nice one! I might add this to the post when I have the time.
Talk:Tensor product
Jump to navigation Jump to search
WikiProject Mathematics (Rated C-class, High-importance)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project
page, where you can join the discussion and see a list of open tasks.
Mathematics rating: Field: Algebra
This page is poorly constructed[edit]
Well, maybe that's an overstatement, but it appears as though mathematicians have clogged this page up with soggy, pointless math jargon and made it irritatingly difficult to use as a reference for,
you know, actually calculating things with tensor products. If all else fails, can somebody actually push the examples to the top as a quick reference and push the frivalous tedium to the bottom, or
even better, package the frivalous tedium into a completely separate page? Or give me permission to do so.
Definitions confounded by construction[edit]
The definition used in this article badly confuses the construction of a tensor product with the definition of tensor product. For an excellent intrinsic definition of the tensor product see the one
on PlanetMath for example. In the current state the definition mixes a method of building a tensor product with an abstract definition of the tensor product. โ Preceding unsigned comment added by
139.48.54.241 (talk) 16:04, 24 August 2016 (UTC)
I partly agree that the focus of this article is not ideal. However, I do not think that the planetmath is really much better, with the over-reliance on the universal property. Is there no way to
make something reasonably explicit that is also satisfactory as a definition (even an intuitive one)? Sลawomir Biaลy (talk) 17:34, 24 August 2016 (UTC)
The "definition" also badly confuses 'definition' and 'motivation'. Mixing in motivational remarks into the text of a definition shows negligence for the aesthetics and rigor of precise
mathematical language. โ Preceding unsigned comment added by 93.207.197.239 (talk) 17:24, 6 September 2016 (UTC)
Can't we define the tensor product the same way polynomials are handled? Quoting from there
A polynomial is an expression that can be built from constants and symbols called indeterminates or variables by means of addition, multiplication and exponentiation to a non-negative integer
power. Two such expressions that may be transformed, one to the other, by applying the usual properties of commutativity, associativity and distributivity of addition and multiplication are
considered as defining the same polynomial.
If it were done like in this article, we'd have first built a free ring involving finitely-supported sequences of coefficients, and then built a big equivalence relation for "can be algebraically
manipulated into one another", and then quotiented out by the equivalence class of 0. This is crazy, but it's what this article does for tensors! Patterning a definition after the above could give
An element of the tensor product V โ W is an expression that can be built from vectors in V and vectors in W by vector addition, subtraction, scalar multiplication, and application of a formal
variable F representing a multilinear map whose domain is V ร W. Two such expressions that may be transformed, one to the other, by applying the usual properties of linear algebra and multilinear
maps are considered as defining the same element.
which should immediately be followed by a simple example IMO (ex: 2 F(v,w), F(v,w) + F(v,w), F(v+v,w), and F(v,2 w) are all elements of the tensor product and are equal).
This would easily segue into the universal property, because the map : V โ W โ Z is just substitution of h for the formal variable F.
If we have to have a formalish definition (again, polynomial doesn't have one; even polynomial ring doesn't go to the extent we do here) couldn't it come after general motivational remarks like this?
64.92.17.6 (talk) 16:28, 13 May 2017 (UTC)
"generalises the outer product"?[edit]
The lead states:
"... the tensor product ${\displaystyle V\otimes W}$ of two vector spaces V and W is itself a vector space, together with an operation of bilinear composition denoted by ${\displaystyle \
otimes }$ from ordered pairs in the Cartesian product ${\displaystyle V\times W}$ into ${\displaystyle V\otimes W}$, in a way that generalizes the outer product."
Surely the outer product is the operation being referred to (confusingly also often called the tensor product)? There is no generalization of the outer product as an operation. โQuondum 16:08, 6
January 2017 (UTC)
I usually think of the outer product as defined for coordinate vectors in ${\displaystyle \mathbb {R} ^{n}}$, and the tensor product as the generalization for arbitrary pairs of vector spaces.
Sลawomir Biaลy (talk) 17:52, 6 January 2017 (UTC)
Ah, okay, makes sense. Just like the dot product is essentially defined on coordinate vectors (despite being used in other senses), in contrast to terms like inner product. Perhaps we should
simply emphasize which of the two meanings of tensor product is meant when used, such as by referring to the tensor product of vectors (or tensors) or the tensor product operator when the
bilinear operator is meant. The article Tensor product should then also clearly define and distinguish both in the lead (currently it simply avoids using the term tensor product for the
operation, even though it refers to it). I've tweaked Outer product to be a little clearer in this sense by my understanding; feel free to change/revert. โQuondum 18:33, 6 January 2017 (UTC)
"Quick Sense"[edit]
I object to including this because the reader is left wondering what "subject to" means, and a definition for arbitrary modules is far less elementary than for vector spaces. By the way, header
titles are not capitalized, per WP:MOSHEADER. @Gedt11: I also object to it on the grounds that anyone wanting a quick definition should get a rigorous one as a quotient module - other readers will
need more introduction. Plus it's more confusing to readers because a tensor product is more than an abelian group. R has to act on it in a suitable way as well.--Jasper Deng (talk) 18:18, 18
December 2017 (UTC)
Reorganization Suggestions[edit]
The current definition of a tensor product is just the construction โ instead of this, the definition should be given by the universal property, and then from the definition of a tensor product, a
vector space representing it should be constructed. Check out definition 3.1 in http://www.math.uconn.edu/~kconrad/blurbs/linmultialg/tensorprod.pdf โ Preceding unsigned comment added by Username6330
(talk โข contribs) 02:05, 20 December 2017 (UTC)
@Username6330: So, the idea is that in Wikipedia we prefer to give a concrete down-to-earth definition first even if it is not correct for theoresitsโ point of views. We in fact give the
universal property def in tensor product of modules since the target audience of the latter needs to see the correct definition first. โ Taku (talk) 00:51, 30 December 2017 (UTC)
(irony = on) Agreed. The idea of Wikipedia in many cases unfortunately is to first give a wrong description to have people get an incorrect idea and only then, when they understood the wrong
definition to provide the correct one with the effect that they then no longer understand the correct concept (irony = off). It would be much better to give a motivation (and to make it clear
that this is only a motivation) and then show how the precise definition evolves from that motivation. All good books do so. Just Wikipedia fails to do so in many places. :-/ โ Preceding
unsigned comment added by 217.95.169.8 (talk) 15:32, 13 January 2019 (UTC) | {"url":"https://static.hlt.bme.hu/semantics/external/pages/tenzorszorzatok/en.wikipedia.org/wiki/Talk_Tensor_product.html","timestamp":"2024-11-12T16:58:39Z","content_type":"text/html","content_length":"50679","record_id":"<urn:uuid:781133bb-e0da-4b54-bfb2-51153a6ca634>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00310.warc.gz"} |
Tangent Ratio Calculator
What is the Tangent Ratio Calculator?
The Tangent Ratio Calculator helps you find the tangent value of a given angle. By providing the angle and selecting whether it's expressed in degrees or radians, this tool computes the tangent for
you instantly. It's especially useful for students, engineers, and anyone needing quick trigonometric calculations.
Applications of the Tangent Ratio Calculator
This calculator is invaluable in several fields like engineering, physics, architecture, and even gaming. For example, it can help civil engineers determine the slope of a terrain, or architects can
use it to calculate the inclination of a roof. Gamers can use it to develop more accurate simulations involving projectile motion or trajectory.
Benefits of Using the Tangent Ratio Calculator
The primary benefit is convenience; manual calculations can be error-prone and time-consuming. This calculator eliminates the risk of errors and provides instant results. It also helps in verifying
results obtained through manual calculations, ensuring accuracy in academic or professional settings.
Deriving the Answer
The tangent of an angle is derived using a trigonometric function that compares two sides of a right triangle: the length of the side opposite the angle and the length of the side adjacent to it. The
formula used accounts for whether the angle is given in degrees or radians and ensures that the calculations adhere to trigonometric principles. This ensures you get the most accurate result every
time you use the calculator.
Relevant Information
Angles like 90 degrees or π/2 radians make the tangent ratio undefined because at these points, the value grows infinitely. It's important to understand the limitations and constraints dictated by
trigonometry to use the calculator effectively and interpret its results correctly.
Let's suppose you have an angle of 45 degrees: When you enter this into the Tangent Ratio Calculator, it converts the angle to radians, applies the tangent function, and outputs the value which is
approximately 1. This quick and efficient process helps you spend more time applying the results rather than calculating them.
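Under the hood, the computation is only a few lines. A Python sketch (the function name and the undefined-angle guard are my own choices, not necessarily how the site implements it):

import math

def tangent_ratio(angle, unit="degrees"):
    """Tangent of an angle given in degrees or radians."""
    rad = math.radians(angle) if unit == "degrees" else angle
    if math.isclose(math.cos(rad), 0.0, abs_tol=1e-12):
        return None               # undefined at 90 degrees, pi/2, etc.
    return math.tan(rad)

print(tangent_ratio(45))                      # 0.9999999999999999, i.e. ~1
print(tangent_ratio(math.pi / 4, "radians"))  # same angle in radians
print(tangent_ratio(90))                      # None: tangent undefined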
What is the Tangent Ratio?
The tangent ratio is a trigonometric function that represents the ratio of the length of the side opposite an angle to the length of the adjacent side in a right triangle. It is crucial in various
mathematical and engineering applications.
How do I enter angles into the Tangent Ratio Calculator?
You can input angles in degrees or radians. Select the appropriate option before entering your angle to ensure accurate results.
What happens if I enter an angle of 90 degrees or π/2 radians?
The tangent ratio for these angles is undefined because the value becomes infinite. The calculator will indicate that the tangent is undefined for these specific angles.
Why is the Tangent Ratio Calculator useful?
This calculator simplifies trigonometric calculations, saving time and reducing the risk of manual errors. It provides quick and accurate results, which are essential in fields like engineering,
physics, and architecture.
Can I use the Tangent Ratio Calculator for academic purposes?
Absolutely. The calculator is designed to assist with academic work and can be used to verify the results of manual calculations, ensuring both speed and accuracy in your studies.
Is the calculator accurate?
Yes, the Tangent Ratio Calculator uses precise mathematical algorithms to ensure the accuracy of its results, whether you input angles in degrees or radians.
Are there any limitations to using this calculator?
The primary limitation is for angles where the tangent ratio is undefined, such as 90 degrees or รโฌ/2 radians. It is important to understand these constraints to use the calculator effectively.
How do I convert degrees to radians if needed?
To convert degrees to radians, multiply the number of degrees by π and then divide by 180. For example, 45 degrees is 45 × π / 180, which equals π/4 radians.
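For instance, the same conversion and tangent evaluation can be done in a few lines of C (an illustrative sketch only; M_PI is assumed to be provided by the compiler's math.h, as it is on POSIX systems):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double degrees = 45.0;
    /* Convert degrees to radians: multiply by pi, divide by 180. */
    double radians = degrees * M_PI / 180.0;   /* 45 degrees = pi/4 radians */
    printf("tan(%.0f degrees) = %f\n", degrees, tan(radians));   /* prints 1.000000 */
    return 0;
}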
Can I use the Tangent Ratio Calculator for non-right triangles?
No, the tangent ratio specifically applies to right triangles. For non-right triangles, other trigonometric laws like the Sine and Cosine laws may be used.
What should I do if the calculator gives an unexpected result?
Double-check your input to ensure that you have selected the correct unit (degrees or radians) and entered the angle correctly. If the issue persists, review the mathematical principles to ensure
proper understanding. | {"url":"https://www.onlycalculators.com/other/tangent-ratio-calculator/","timestamp":"2024-11-09T22:50:59Z","content_type":"text/html","content_length":"242167","record_id":"<urn:uuid:42989030-ceab-4905-83e0-4f852bba8e1c>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00690.warc.gz"} |
The program calculates and prints the sum of all numbers between two numbers input by the user.
The code begins with including the standard input-output header file using #include <stdio.h>.
The main() function is defined, and three integer variables d, m, and j are declared. Variable j is initialized to 0 to store the sum of the numbers.
The program prints a description message for the user to understand its functionality.
The user is prompted to enter two numbers between which they want to find the sum of all the numbers. The input is read using scanf("%d %d", &d, &m);.
If the user enters the larger number first, a conditional statement swaps the values of d and m to ensure d is always the smaller number:
The program calculates the sum of all numbers between d and m using a while loop:
The program prints the final sum stored in j and ends with printf("\n"); to print a new line and return 0; to indicate successful execution.
/*
 * -----------------------------------------------------------
 * Logic Building with Computer Programming (CSU1128)
 * Instructor: Dr. Pankaj Vaidya | Author: Divya Mohan
 *
 * This code is a part of the educational initiative by dmj.one
 * with the aim of empowering and inspiring learners in the field of
 * Computer Science and Engineering through the respective courses.
 *
 * (c) 2022, Divya Mohan for dmj.one. All rights reserved.
 * -----------------------------------------------------------
 */
#include <stdio.h>

int main()
{
    int d, m, j = 0;
    printf("\n\n Program to calculate and print the sum of all numbers between the numbers that the user chooses. Example 1 to 100. \n\n");
    printf("Enter two numbers between which you want to find the sum of all the numbers - (Example: 2 100) - and press enter: ");
    scanf("%d %d", &d, &m);
    // Swap so that d holds the smaller number if the user enters the bigger one first.
    if (d > m)
    {
        int o = d;
        d = m;
        m = o;
    }
    printf("\nSum of all the numbers between %d and %d is ", d, m);
    while (d <= m)
    {
        j += d;   // accumulate the running sum
        d++;
    }
    printf("%d\t", j);
    printf("\n");
    return 0;
}
Program to calculate and print the sum of all numbers between the numbers that the user chooses. Example 1 to 100.
Enter two numbers between which you want to find the sum of all the numbers - (Example: 2 100) - and press enter: 22 34 Sum of all the numbers between 22 and 34 is 364 | {"url":"https://dmj.one/edu/su/course/csu1128/program/p17","timestamp":"2024-11-10T21:36:51Z","content_type":"text/html","content_length":"9074","record_id":"<urn:uuid:899e90a3-a4c1-468a-a35d-b4fb8aa1eebf>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00493.warc.gz"} |
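As a quick sanity check on that sample run, the closed-form sum of consecutive integers gives the same answer: 22 + 23 + ... + 34 = (22 + 34) × 13 / 2 = 364.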
Tutoring for CS 211 programming assignments; Computer Architecture explained.
CS 211: Computer Architecture, Spring 2024
Programming Assignment 1: Introduction to C (50 points)
Instructor: Prof. Santosh Nagarakatte
Due: February 9, 2024 at 5pm Eastern Time.
The goal of this assignment is to get you started with programming in C, as well as compiling,
linking, running, and debugging. Your task is to write 5 small C programs. Your program must
follow the input-output guidelines listed in each section exactly, with no additional or missing
output. You can assume that we will provide well-defined test cases.
No cheating or copying will be tolerated in this class. If you use any large language models (LLMs)
such as ChatGPT/Bard/LLama or other models and as a result, your code is similar to another
student's code in the class, it will be considered a violation of academic integrity. You should not
be using large language models to copy-paste your code. Your assignments will be automatically
checked with plagiarism detection tools that are pretty powerful. Hence, you should not look at your
friend's code or copy-paste any code from LLMs or the Internet. See the CS department's academic
integrity policy at:
First: Is the Input a Product of 2 or 3 Given Numbers? (5 Points)
You have to write a program that given an array of integers determines if a particular integer input
being queried is a product of 2 or 3 numbers in the array. If it is such a product, then you have to
output yes. Otherwise, you will output no.
Input-Output format: Your program will take the file name as input. The first line in the input
file provides the total number of integers in the input array. The next line will provide the list of
these input integers. The third line in the input file provides the number of inputs queried. The
subsequent lines are the queries, each providing the specific input and the number 2 or 3. For
example, if you have a query line of the form: 21 2, then you are checking if 21 is a product of 2
numbers from the input integer array. If so, output yes.
Here is a sample input file. Let us call file1.txt
In the above file, the input array has 5 integers whose entries are 3, 7, 8, 11, and 17 respectively.
There are 4 queries being done. The first query asks if 21 is a product of 2 integers in the array.
The output is yes because 21 = 3 × 7. In contrast, the answer is no for the query: 12 2. This is
because 12 is not a product of any two integers in the input array.
Your output will contain the same number of lines as the number of query lines in the input file.
Each line will either say yes if the corresponding input in the query is a product of the specified
numbers or no if the corresponding input is not a product of the specified numbers.
The sample execution is as shown.
$./first file1.txt
We will not give you improperly formatted files. You can assume that the files exist and all the
input files are in proper format as above. See the submission organization format at the end of the
assignment for more details.
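One possible brute-force approach for this part (an illustrative sketch only, not the official solution; the function name is made up) is to try all pairs of array entries, and all triples when the query asks for a product of 3:

/* Returns 1 if target is a product of k (2 or 3) entries of arr
   taken at distinct positions, 0 otherwise. Illustrative sketch. */
int is_product(const int *arr, int n, int target, int k)
{
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
        {
            if (k == 2 && arr[i] * arr[j] == target)
                return 1;
            if (k == 3)
                for (int l = j + 1; l < n; l++)
                    if (arr[i] * arr[j] * arr[l] == target)
                        return 1;
        }
    return 0;
}

For the array sizes implied by the examples, the cubic cost of the triple loop is negligible.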
Second: Ordered Linked List (10 points)
In this part, you have to implement a linked list that maintains a list of integers in sorted order.
For example, if a list already contains 2, 5 and 8, then 1 will be inserted at the start of the list, 3
will be inserted between 2 and 5 and 10 will be inserted at the end.
Input format: This program takes a file name as an argument from the command line. The file
contains successive lines of input. Each line contains a string, either INSERT or DELETE, followed
by a space and then an integer. For each of the lines that starts with INSERT, your program should
insert that number in the linked list in sorted order if it is not already there. Your program should
not insert any duplicate values. If the line starts with a DELETE, your program should delete the
value if it is present in the linked list. Your program should silently ignore the line if the requested
value is not present in the linked list. After every INSERT and DELETE, your program should print
the content of the linked list. The values should be printed in a single line separated by a single
space. There should be no leading or trailing white spaces in each line of the output. You should
print EMPTY if the linked list is empty.
Output format: At the end of the execution, your program should have printed the content of
the linked list after each INSERT or DELETE operation. Each time the content is printed, the values
should be on a single line separated by a single space. There should be no leading or trailing white
spaces in each line of the output.You should print EMPTY if the linked list is empty. You can assume
that there will be at least one INSERT or DELETE in each file.
Example Execution:
Let's assume we have two text files with the following contents:
INSERT 1
INSERT 2
DELETE 1
INSERT 3
INSERT 4
DELETE 4
INSERT 5
DELETE 5
INSERT 1
DELETE 1
INSERT 2
DELETE 2
INSERT 3
DELETE 3
INSERT 4
DELETE 4
INSERT 5
DELETE 5
Then the result will be:
$./second file1.txt
$./second file2.txt
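The sorted-insertion step at the heart of this part can be sketched as follows (illustrative only; the names are made up, and file parsing, printing the list, and DELETE handling are omitted):

#include <stdlib.h>

typedef struct node { int value; struct node *next; } node;

/* Insert value in sorted order, silently ignoring duplicates.
   Uses a pointer-to-pointer so the head case needs no special handling. */
node *insert_sorted(node *head, int value)
{
    node **cur = &head;
    while (*cur != NULL && (*cur)->value < value)
        cur = &(*cur)->next;
    if (*cur != NULL && (*cur)->value == value)   /* duplicate: do nothing */
        return head;
    node *n = malloc(sizeof *n);
    n->value = value;
    n->next = *cur;
    *cur = n;
    return head;
}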
Third: Matrix Exponentiation (10 points)
This program will test your ability to manage memory using malloc() and provide some experience
dealing with 2D arrays in C. Your task is to create a program that computes M^n, where M is a square matrix (the dimensions of the matrix will be k × k, where k is the number of rows) and a number n ≥ 0. In summary, you need to multiply the matrix with itself n times.
Input format: The program will take the file name as input. The first line in the file will provide
the number of rows in the matrix. The subsequent lines will provide the contents of the matrix.
The numbers are tab separated. The last line in the file after the contents of the matrix will contain
the exponent n. For example, a sample input file "file.txt":
The first number (3) refers to the number of rows in the square matrix. The dimensions of the
matrix will be 3 × 3. The exponent is 2. Hence, the program is required to compute M^2. You can
assume that the input will be properly formatted. The output on executing the program with the
above input is shown below. The output numbers should be tab separated. There should not be
extra tabs or spaces at the end of the line or the end of the file.
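One way to organize the computation is a matrix-multiply helper applied n times, starting from the identity (which also covers n = 0). The sketch below is illustrative only; the names are made up, and it assumes the products fit in a long for the given test cases:

#include <stdlib.h>
#include <string.h>

/* Multiply two k-by-k matrices stored as flat row-major arrays. */
static void matmul(const long *a, const long *b, long *out, int k)
{
    for (int i = 0; i < k; i++)
        for (int j = 0; j < k; j++)
        {
            long sum = 0;
            for (int t = 0; t < k; t++)
                sum += a[i * k + t] * b[t * k + j];
            out[i * k + j] = sum;
        }
}

/* Compute M^n into result; result starts as the identity (M^0). */
void matpow(const long *m, long *result, int k, int n)
{
    long *tmp = malloc((size_t)k * k * sizeof *tmp);
    memset(result, 0, (size_t)k * k * sizeof *result);
    for (int i = 0; i < k; i++)
        result[i * k + i] = 1;
    for (int e = 0; e < n; e++)
    {
        matmul(result, m, tmp, k);
        memcpy(result, tmp, (size_t)k * k * sizeof *result);
    }
    free(tmp);
}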
Fourth: Binary Search Tree (10 Points)
You have to implement a binary search tree. The tree must satisfy the binary search tree property:
the key in each node must be greater than all keys stored in the left sub-tree, and smaller than all
keys in right sub-tree. You have to dynamically allocate space for each node and free the space for
the nodes at the end of the program.
Input format: This program takes a file name as an argument from the command line. The file
is either blank or contains successive lines of input. Each line starts with a character, either 'i' or 's', followed by a tab and then an integer. For each line that starts with 'i', your program should insert that number in the binary search tree if it is not already there. If it is already present, you will print "duplicate" and not change the tree. If the line starts with an 's', your program should
search for the value.
Output format: For each line in the input file, your program should print the status/result of
the operation. For an insert operation, the program should print either "inserted" followed by a single space and a number, the height of the inserted node in the tree, or "duplicate" if the value is already present in the tree. The height of the root node is 1. For a search, the program should either print "present", followed by the height of the node, or "absent", based on the outcome of the search.
Example Execution: Let's assume we have a file file1.txt with the following contents:
i 5
i 3
i 4
i 1
i 6
s 1
Executing the program in the following fashion should produce the output shown below:
$./fourth file1.txt
inserted 1
inserted 2
inserted 3
inserted 3
inserted 2
present 3
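The insert-and-report logic can be sketched recursively by threading the current height through the recursion (illustrative only; the names are made up, and input parsing and the search operation are omitted):

#include <stdio.h>
#include <stdlib.h>

typedef struct tnode { int key; struct tnode *left, *right; } tnode;

/* Insert key and print "inserted h" or "duplicate".
   depth is the height of the current node; the root has height 1. */
tnode *insert(tnode *root, int key, int depth)
{
    if (root == NULL)
    {
        tnode *n = malloc(sizeof *n);
        n->key = key;
        n->left = n->right = NULL;
        printf("inserted %d\n", depth);
        return n;
    }
    if (key == root->key)
        printf("duplicate\n");
    else if (key < root->key)
        root->left = insert(root->left, key, depth + 1);
    else
        root->right = insert(root->right, key, depth + 1);
    return root;
}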
Fifth: Matrix Determinant (15 points)
In linear algebra, the determinant is a value that can be computed from a square matrix. The determinant describes some properties of the square matrix. Determinants are used for solving
linear equations, computing inverses, etc., and is an important concept in linear algebra. In the fifth part of the assignment, you will write a program that computes the determinant of any n × n
matrix. You will have to carefully manage malloc and free instructions to successfully compute
the determinants.
Given a square n × n matrix M, we will symbolize the determinant of M as Det(M). You can compute Det(M) as follows:

1 × 1 matrix: The determinant of a 1 × 1 matrix is the value of the element itself. For example,
$$\mathrm{Det}\begin{pmatrix} 3 \end{pmatrix} = 3$$

2 × 2 matrix: The determinant of a 2 × 2 matrix can be computed using the following formula:
$$\mathrm{Det}\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc$$
For example,
$$\mathrm{Det}\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = 1 \times 4 - 2 \times 3 = 4 - 6 = -2$$

3 × 3 matrix: The determinant of a 3 × 3 matrix can be computed modularly. First, let's define a 3 × 3 matrix:
$$M = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}$$
The formula for computing the determinant of M is as follows:
$$\mathrm{Det}(M) = a \times \mathrm{Det}(M_a) - b \times \mathrm{Det}(M_b) + c \times \mathrm{Det}(M_c)$$
The matrix M_a is a 2 × 2 matrix that can be obtained by eliminating the row and column that a belongs to in M. More specifically, since a is on the first row and first column, we eliminate the first row and first column from M. This gives us a 2 × 2 matrix for M_a:
$$M_a = \begin{pmatrix} e & f \\ h & i \end{pmatrix}$$
M_b can be computed similarly. Since b is on the first row and second column, we eliminate the first row and second column from M. This gives us a 2 × 2 matrix for M_b:
$$M_b = \begin{pmatrix} d & f \\ g & i \end{pmatrix}$$
M_c can be computed by removing the first row and the third column from M, since c is on the first row and third column. Thus,
$$M_c = \begin{pmatrix} d & e \\ g & h \end{pmatrix}$$
Finally, the formula for computing the determinant of M is:
$$\mathrm{Det}(M) = a \times \mathrm{Det}\begin{pmatrix} e & f \\ h & i \end{pmatrix} - b \times \mathrm{Det}\begin{pmatrix} d & f \\ g & i \end{pmatrix} + c \times \mathrm{Det}\begin{pmatrix} d & e \\ g & h \end{pmatrix}$$
For example, we can compute the determinant of the matrix
$$M = \begin{pmatrix} 2 & 7 & 6 \\ 9 & 5 & 1 \\ 4 & 3 & 8 \end{pmatrix}$$
as follows:
$$\mathrm{Det}(M) = 2 \times \mathrm{Det}\begin{pmatrix} 5 & 1 \\ 3 & 8 \end{pmatrix} - 7 \times \mathrm{Det}\begin{pmatrix} 9 & 1 \\ 4 & 8 \end{pmatrix} + 6 \times \mathrm{Det}\begin{pmatrix} 9 & 5 \\ 4 & 3 \end{pmatrix} = 2(37) - 7(68) + 6(7) = -360$$

n × n matrix: Computing the determinant of an n × n matrix can be considered as a scaled version of computing the determinant of a 3 × 3 matrix. First, let's say we're given an n × n matrix,
$$M = \begin{pmatrix} x_{1,1} & x_{1,2} & x_{1,3} & \dots & x_{1,n} \\ x_{2,1} & x_{2,2} & x_{2,3} & \dots & x_{2,n} \\ x_{3,1} & x_{3,2} & x_{3,3} & \dots & x_{3,n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_{n,1} & x_{n,2} & x_{n,3} & \dots & x_{n,n} \end{pmatrix}$$
In essence, we have to pivot each element in the first row and create an (n − 1) × (n − 1) matrix for each pivot element (in the case of computing the determinant of a 3 × 3 matrix, we had M_a corresponding to a, etc.).
For example, when we pivot x_{1,1}, we create the corresponding (n − 1) × (n − 1) matrix for x_{1,1} by deleting the 1st row and 1st column:
$$M_{1,1} = \begin{pmatrix} x_{2,2} & x_{2,3} & \dots & x_{2,n} \\ x_{3,2} & x_{3,3} & \dots & x_{3,n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n,2} & x_{n,3} & \dots & x_{n,n} \end{pmatrix}$$
Similarly, we can create M_{1,2}, M_{1,3}, and so on, by pivoting x_{1,2}, x_{1,3}, etc.:
$$M_{1,2} = \begin{pmatrix} x_{2,1} & x_{2,3} & \dots & x_{2,n} \\ x_{3,1} & x_{3,3} & \dots & x_{3,n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n,1} & x_{n,3} & \dots & x_{n,n} \end{pmatrix}, \qquad M_{1,3} = \begin{pmatrix} x_{2,1} & x_{2,2} & x_{2,4} & \dots & x_{2,n} \\ x_{3,1} & x_{3,2} & x_{3,4} & \dots & x_{3,n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_{n,1} & x_{n,2} & x_{n,4} & \dots & x_{n,n} \end{pmatrix}$$
Finally, you can compute the determinant of M using the following formula:
$$\mathrm{Det}(M) = x_{1,1} \times \mathrm{Det}(M_{1,1}) - x_{1,2} \times \mathrm{Det}(M_{1,2}) + x_{1,3} \times \mathrm{Det}(M_{1,3}) - x_{1,4} \times \mathrm{Det}(M_{1,4}) + \dots$$
The above formula can be shortened to:
$$\mathrm{Det}(M) = \sum_{i=1}^{n} (-1)^{i-1}\, x_{1,i} \times \mathrm{Det}(M_{1,i})$$
This general formula for computing the determinant of an n × n matrix applies to all n. The formulas for computing the determinant of a 2 × 2 and a 3 × 3 matrix are exactly the same as this formula.
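A direct translation of this cofactor expansion into C might look like the sketch below (illustrative only, not the official solution). Note that the recursion runs in factorial time, so although the spec allows n up to 20, this naive form is only practical for small matrices; Gaussian elimination scales far better but requires care with non-integer arithmetic.

#include <stdlib.h>

/* Cofactor expansion along the first row. m is an n-by-n matrix
   stored as an array of n row pointers. */
long det(int **m, int n)
{
    if (n == 1)
        return m[0][0];
    long result = 0;
    /* Scratch space for the (n-1)x(n-1) minor, freed before returning. */
    int **minor = malloc((size_t)(n - 1) * sizeof *minor);
    for (int i = 0; i < n - 1; i++)
        minor[i] = malloc((size_t)(n - 1) * sizeof **minor);
    for (int p = 0; p < n; p++)             /* pivot column in row 0 */
    {
        for (int r = 1; r < n; r++)          /* build the minor of m[0][p] */
            for (int c = 0, mc = 0; c < n; c++)
                if (c != p)
                    minor[r - 1][mc++] = m[r][c];
        result += (p % 2 == 0 ? 1 : -1) * (long)m[0][p] * det(minor, n - 1);
    }
    for (int i = 0; i < n - 1; i++)
        free(minor[i]);
    free(minor);
    return result;
}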
Input-Output format:
Your program should accept a file as command line input. The format of a sample file test3.txt
is shown below:
The first number (3) corresponds to the size of the square matrix (n). The dimensions of the matrix
will be n x n. You can assume that n will not be greater than 20. The rest of the file contains the
content of the matrix. Each line contains a row of the matrix, where each element is separated by
a tab. You can assume that there will be no malformed input and the matrices will always contain
valid integers.
Your program should output the determinant of the n ร n matrix provided by the file.
Example Execution
A sample execution with above input file test3.txt is shown below:
$./fifth test3.txt
Structure of your submission folder
All files must be included in the pa1 folder. The pa1 directory in your tar file must contain 5
subdirectories, one each for each of the parts. The name of the directories should be named first
through fifth (in lower case). Each directory should contain a c source file, a header file (if you
use it) and a Makefile. For example, the subdirectory first will contain, first.c, first.h (if you create
one) and Makefile (the names are case sensitive).
|- first
|-- first.c
|-- first.h (if used)
|-- Makefile
|- second
|-- second.c
|-- second.h (if used)
|-- Makefile
|- third
|-- third.c
|-- third.h (if used)
|-- Makefile
|- fourth
|-- fourth.c
|-- fourth.h (if used)
|-- Makefile
|- fifth
|-- fifth.c
|-- fifth.h (if used)
|-- Makefile
You have to e-submit the assignment using Canvas. Your submission should be a tar file named
pa1.tar. To create this file, put everything that you are submitting into a directory (folder)
named pa1. Then, cd into the directory containing pa1 (that is, pa1โs parent directory) and run
the following command:
tar cvf pa1.tar pa1
To check that you have correctly created the tar file, you should copy it (pa1.tar) into an empty
directory and run the following command:
tar xvf pa1.tar
This should create a directory named pa1 in the (previously) empty directory.
The pa1 directory in your tar file must contain 5 subdirectories, one each for each of the parts.
The name of the directories should be named first through fifth (in lower case). Each directory
should contain a C source file, a header file and a Makefile. For example, the subdirectory first will contain first.c, first.h and Makefile (the names are case sensitive).
We provide a custom autograder to test your assignment. The custom autograder is provided as
pa1_autograder.tar. Executing the following command will create the autograder folder.
$tar xvf pa1_autograder.tar
There are two modes available for testing your assignment with the custom autograder
First mode
Testing when you are writing code with a pa1 folder
(1) Lets say you have a pa1 folder with the directory structure as described in the assignment.
(2) Copy the folder to the directory of the autograder (i.e., pa1_autograder)
(3) Run the custom autograder with the following command
$python3 pa1_autograder.py
It will run your programs and print your scores.
Second mode
This mode is to test your final submission (i.e, pa1.tar)
(1) Copy pa1.tar to the pa1_autograder directory
(2) Run the autograder with pa1.tar as the argument.
The command line is
$python3 pa1_autograder.py pa1.tar
The autograder will print out information about the compilation and the testing process. At the
end, if your assignment is completely correct, the score will be something similar to what is given below:
You scored
5.0 in second
5.0 in fourth
5.0 in third
7.5 in fifth
2.5 in first
Your TOTAL SCORE = 25.0 /25
Your assignment will be graded for another 25 points with test cases not given to you
Grading Guidelines
This is a large class so that necessarily the most significant part of your grade will be based on
programmatic checking of your program. That is, we will build the binary using the Makefile and
source code that you submitted, and then test the binary for correct functionality against a set of
inputs. Thus:
• You should not see or use your friend's code either partially or fully. We will run state-of-the-art plagiarism detectors. We will report everything caught by the tool to the Office of Student Conduct.
• You should make sure that we can build your program by just running make.
• Your compilation command with gcc should include the following flags: -Wall -Werror -fsanitize=address,undefined -g
• You should test your code as thoroughly as you can. For example, programs should not crash with memory errors.
• Your program should produce the output following the example format shown in previous sections. Any variation in the output format can result in up to 100% penalty. Be especially careful not to add extra whitespace or newlines. That means you will probably not get any credit if you forget to comment out some debugging message.
• Your folder names in the path should not have any spaces. The autograder will not work if any of the folder names have spaces.
Be careful to follow all instructions. If something doesnโt seem right, ask on discussion forum. | {"url":"http://7daixie.com/2024021310013850001.html","timestamp":"2024-11-03T21:31:43Z","content_type":"application/xhtml+xml","content_length":"72255","record_id":"<urn:uuid:a24c45c6-87ad-4a34-b49d-5b83e2d92e2c>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00458.warc.gz"} |
EViews 10 New Features
EViews 10 New Econometrics and Statistics: Estimation
EViews 9 introduced Threshold Regression (TR) and Threshold Autoregression (TAR) models, and EViews 10 expands upon these models by adding Smooth Threshold Regression (STR) and Smooth Threshold Autoregression as options.
In STR models, the regime switching that occurs when an observed variable crosses unknown thresholds happens smoothly. As a result, STR models are often considered to have more "realistic" dynamics than their discrete TR model counterparts.
EViews' implementation of STR includes features such as:
• Estimation of parameters for both the shape and location of the smooth threshold.
• Model selection for the threshold variable.
• Specification of both regime-varying and regime non-varying regressors.
EViews has included both White and Heteroskedasticity and Autocorrelation Consistent Covariance (HAC) estimators of the least-squares covariance matrix for over twenty years.
EViews 10 expands upon these robust standard error options with the addition of a family of heteroskedasticity-consistent covariance estimators and clustered standard errors.
EViews 10 increases the options for heteroskedasticity-consistent covariance estimators beyond the familiar White estimator available in previous versions. The class of estimators supported belongs to the HC family described by Long and Ervin (2000) and Cribari-Neto and da Silva (2011).
The estimators differ in their choice of observation-specific weights used to improve the finite sample properties of the residual error covariance.
Specifically, EViews supports the following estimators and weight choices:
$$
\begin{array}{|l|c|}
\hline
\text{Method} & \text{Weight}\\
\hline
\text{HC0 - White} & 1\\
\hline
\text{HC1 - White with d.f. correction} & \sqrt{T/(T-k)}\\
\hline
\text{HC2 - bias corrected} & (1-h_t)^{-1/2}\\
\hline
\text{HC3 - pseudo-jackknife} & (1-h_t)^{-1}\\
\hline
\text{HC4 - relative leverage} & (1-h_t)^{-\delta_t/2}\\
\hline
\text{HC4m} & (1-h_t)^{-\gamma_t/2}\\
\hline
\text{HC5} & (1-h_t)^{-\delta_t/4}\\
\hline
\text{User - user specified} & \text{arbitrary}\\
\hline
\end{array}
$$
where $h_t = X_t^\top \left(X^\top X\right)^{-1}X_t$ are the diagonal elements of the familiar "hat matrix" $H = X\left(X^\top X\right)^{-1}X^\top$, and $\delta_t$ and $\gamma_t$ are discount factors.
In many settings, observations may be grouped into different groups or โclustersโ where errors are correlated for observations in the same cluster and uncorrelated for observations in different
clusters. EViews 10 offers support for consistent estimation of coefficient covariances that are robust to either one and two-way clustering.
As with the HC estimators, EViews supports a class of cluster-robust covariance estimators, with each estimator differing on the weights it gives to observations in the cluster.
The weighting of each estimator is as follows:
$$
\begin{array}{|l|c|}
\hline
\text{Method} & \text{Weight}\\
\hline
\text{CR0 - Ordinary} & 1\\
\hline
\text{CR1 - finite sample corrected (default)} & \sqrt{\frac{G}{(G-1)} \cdot \frac{(T-1)}{(T-k)}}\\
\hline
\text{CR2 - bias corrected} & (1-h_t)^{-1/2}\\
\hline
\text{CR3 - pseudo-jackknife} & (1-h_t)^{-1}\\
\hline
\text{CR4 - relative leverage} & (1-h_t)^{-\delta_t/2}\\
\hline
\text{CR4m} & (1-h_t)^{-\gamma_t/2}\\
\hline
\text{CR5} & (1-h_t)^{-\delta_t/4}\\
\hline
\text{User - user specified} & \text{arbitrary}\\
\hline
\end{array}
$$
where $h_t = X_t^\top \left(X^\top X\right)^{-1}X_t$ are the diagonal elements of the familiar "hat matrix" $H = X\left(X^\top X\right)^{-1}X^\top$, $\delta_t$ and $\gamma_t$ are discount factors, and $G$ is the number of clusters.
The basic $k$-variable VAR(p) specification has $k(pk+d)$ coefficients, so even moderately sized VARs require estimation of a large number of parameters; for instance, a six-variable VAR with four lags and a constant ($k=6$, $p=4$, $d=1$) already has $6(24+1)=150$ coefficients. When VARs are applied to macroeconomic data with limited sample sizes, model over-parameterization is a frequent problem, as there are too few observations to estimate the VAR parameters precisely.
EViews now offers support for the linear restriction approach to handling this over-parameterization problem.
One of the key elements behind Structural VAR estimation is the necessary imposition of restrictions on the residual structure matrices.
These restrictions generally take the form of restrictions on the factorization matrices, A and B, restrictions on the short-run impulse response matrix S, or restrictions on the long-run impulse
response matrix F (or C), or a combination of the above.
Previous versions of EViews only allowed restrictions on A and B, or on F. EViews 10 broadens the restriction engine by allowing restrictions on any of the four matrices, adding linear restrictions,
and adds a new interface allowing easier specification of the restrictions.
In EViews 10 you may now, from an estimated standard VAR, easily perform historical decomposition, the innovation-accounting technique proposed by Burbidge and Harrison (1985).
Historical decomposition decomposes forecast errors into components associated with structural innovations (computed by weighting ordinary residuals).
Dynamic forecasting using simulation methods is now supported from the equation forecast dialog.
Autoregressive Distributed Lag (ARDL) estimation has been drastically improved for EViews 10. In particular, EViews now allows absolute control over lag specification.
Any of the variables (dependent or regressor) can be specified with a custom lag, and you can mix the specification, allowing certain variables to have fixed custom lags and the remainder having their
lags chosen via model selection methods.
Moreover, in the context of the ARDL approach to the Bounds Cointegration Test of Pesaran, Shin and Smith (2001) (PSS), EViews now offers inference under all 5 deterministic cases considered in PSS. Also, alongside the asymptotic critical values provided in PSS, EViews now offers finite-sample critical values from Narayan (2005).
Finally, in addition to the Bounds F-test, EViews now also reports the appropriate Banerjee, Dolado and Mestre (1998) (BDM) t-bounds test.
American Mathematical Society
Chromatic expansions in function spaces
Trans. Amer. Math. Soc. 366 (2014), 4097-4125
Chromatic series expansions of bandlimited functions have recently been introduced in signal processing with promising results. Chromatic series share similar properties with Taylor series insofar as
the coefficients of the expansions, which are called chromatic derivatives, are based on the ordinary derivatives of the function, but unlike Taylor series, chromatic series have a better rate of
convergence and more practical applications.
The $n$-th chromatic derivative $K^n(f)$ of an analytic function $f(t)$ is a linear combination of the ordinary derivatives $f^{(k)}(t), 0\leq k\leq n,$ where the coefficients of the combination are
based on systems of orthogonal polynomials. In addition to their practical applications, chromatic series expansions have useful theoretical and mathematical applications. For example, functions in
the Paley-Wiener space can be completely characterized by their chromatic series expansions associated with the Legendre polynomials.
The purpose of this paper is to show that chromatic series expansions can be used to characterize other important function spaces. We show that functions in weighted Bergman spaces $\mathfrak {B}_\
gamma$ can be characterized by their chromatic series expansions that use chromatic derivatives associated with the Laguerre polynomials, while functions in the Bargmann-Segal-Foch space $\mathfrak
{F}$ can be characterized by their chromatic series expansions that use chromatic derivatives associated with the Hermite polynomials. Another goal of this article is to show that each one of these
spaces has an orthonormal basis that is generated from one single function $\psi$ by applying successive chromatic derivatives to it, that is, both $\mathfrak {B}_\gamma$ and $\mathfrak {F}$ have an
orthonormal basis of the form $\left \{K^n\psi \right \}_{n=0}^\infty .$
Additional Information
• Ahmed I. Zayed
• Affiliation: Department of Mathematical Sciences, DePaul University, Chicago, Illinois 60614
• Email: azayed@condor.depaul.edu
• Received by editor(s): August 17, 2011
• Received by editor(s) in revised form: July 20, 2012
• Published electronically: March 24, 2014
• © Copyright 2014 American Mathematical Society. The copyright for this article reverts to public domain 28 years after publication.
• Journal: Trans. Amer. Math. Soc. 366 (2014), 4097-4125
• MSC (2010): Primary 41A58, 42C15; Secondary 44A15, 42B35
• DOI: https://doi.org/10.1090/S0002-9947-2014-05991-6
• MathSciNet review: 3206453
(3.) A boy buys 9 apples for Rs 9.60 and sells them at 11 for R... | Filo
Question asked by Filo student
(3.) A boy buys 9 apples for Rs 9.60 and sells them at 11 for Rs 12. Find his gain or loss percent.
4. The cost price of 10 articles is equal to the selling price of 9 articles. Find the profit percent.
5. A retailer buys a radio for Rs 225. His overhead expenses are Rs 15. If he sells the radio for Rs 300, determine his profit percent.
6. A retailer buys a cooler for Rs 1200 and overhead expenses on it are Rs 40. If he sells the cooler for Rs 1550, determine his profit percent.
7. A dealer buys a wristwatch for Rs 225 and spends Rs 15 on its repairs. If he sells the same for , find his profit percent.
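A quick worked check for question 3: the cost price per apple is Rs 9.60 ÷ 9 = Rs 16/15, and the selling price per apple is Rs 12 ÷ 11. Since 12/11 > 16/15, there is a gain, and gain % = (12/11 − 16/15) ÷ (16/15) × 100 = (180 − 176)/176 × 100 = 25/11 ≈ 2.27%.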
How to Find Common Elements in Three Sorted Arrays Using Python Dictionary Intersection
Problem Formulation: Given three sorted arrays, the task is to identify the common elements present in all three arrays efficiently. For example, suppose we have arrays A = [1,2,5,6], B =
[2,3,5,7], and C = [2,4,5,8]. The common elements in these arrays are [2,5], which should be the return value of the solution.
Method 1: Use Dictionary from Counters
To find common elements in three sorted arrays using Python, you can create dictionaries from arrays by utilizing the collections.Counter class, which counts the occurrences of each element in each
array. By performing dictionary intersection, you can efficiently obtain the common elements.
Here's an example:
from collections import Counter

def common_elements(arr1, arr2, arr3):
    dict1 = Counter(arr1)
    dict2 = Counter(arr2)
    dict3 = Counter(arr3)
    result_dict = dict1 & dict2 & dict3
    return list(result_dict.elements())

# Example usage
arr1 = [1, 2, 5, 6]
arr2 = [2, 3, 5, 7]
arr3 = [2, 4, 5, 8]
print(common_elements(arr1, arr2, arr3))
The output:
[2, 5]
The function common_elements creates Counter dictionaries from the three input arrays. The & operator computes the intersection of these Counters, keeping each key with its minimum count across the three arrays. The result is converted back to a list using the elements() method and returned. This method is fast and concise, and it also preserves duplicate common elements up to their minimum count.
Method 2: Dictionary Intersection from Dictionary Comprehensions
This method involves creating dictionaries with element frequency using dictionary comprehensions and then performing the intersection operation. This allows for exploitation of dictionary
intersection methods which can identify common keys quickly and efficiently.
Here's an example:
def common_elements(arr1, arr2, arr3):
    dict1 = {i: arr1.count(i) for i in arr1}
    dict2 = {i: arr2.count(i) for i in arr2}
    dict3 = {i: arr3.count(i) for i in arr3}
    common_keys = dict1.keys() & dict2.keys() & dict3.keys()
    return list(common_keys)

# Example usage
arr1 = [1, 2, 5, 6]
arr2 = [2, 3, 5, 7]
arr3 = [2, 4, 5, 8]
print(common_elements(arr1, arr2, arr3))
The output:
[2, 5]
The function common_elements builds dictionaries mapping each element to its frequency. The intersection of the keys of these dictionaries yields the common elements. Dictionary key lookups are fast, but note that each list.count() call scans the whole array, so building these dictionaries takes quadratic time; Method 1's Counter produces the same counts in a single pass.
Method 3: Using Set Intersection
Set intersection is a straightforward and effective approach to finding common elements. Convert each array to a set and apply the intersection (&) operator to these sets to find common elements.
Here's an example:
def common_elements(arr1, arr2, arr3):
    set1 = set(arr1)
    set2 = set(arr2)
    set3 = set(arr3)
    common_elements_set = set1 & set2 & set3
    return list(common_elements_set)

# Example usage
arr1 = [1, 2, 5, 6]
arr2 = [2, 3, 5, 7]
arr3 = [2, 4, 5, 8]
print(common_elements(arr1, arr2, arr3))
The output:
[2, 5]
The function common_elements first converts the input arrays into sets. It then uses set intersection to find common elements. While converting to a set loses the sorted order (and any duplicates) of the arrays, this method is typically very efficient due to Python's optimized set operations.
Method 4: Traditional Loop and Comparison
In this method, we use a more traditional approach: we walk through the three arrays simultaneously, keeping one index per array and comparing the current elements. It requires more code than the set-based methods, but it runs in a single pass over the arrays and needs no additional data structures.
Here's an example:
def common_elements(arr1, arr2, arr3):
    common = []
    i, j, k = 0, 0, 0
    while i < len(arr1) and j < len(arr2) and k < len(arr3):
        if arr1[i] == arr2[j] == arr3[k]:
            common.append(arr1[i])  # record the common element
            i += 1
            j += 1
            k += 1
        elif arr1[i] < arr2[j]:
            i += 1
        elif arr2[j] < arr3[k]:
            j += 1
        else:
            k += 1
    return common

# Example usage
arr1 = [1, 2, 5, 6]
arr2 = [2, 3, 5, 7]
arr3 = [2, 4, 5, 8]
print(common_elements(arr1, arr2, arr3))
The output:
[2, 5]
The common_elements function iterates through the arrays simultaneously while maintaining three separate indices. When a common element is found, it is added to the result list, and the indices are
incremented. If elements don't match, the smallest one is advanced, which keeps the search efficient by leveraging the sorted nature of the input arrays.
Bonus One-Liner Method 5: Using List Comprehension with Set Intersection
If you prefer a more concise one-liner, Python allows you to combine set intersection with list comprehension to find common elements in sorted arrays.
Here's an example:
arr1 = [1, 2, 5, 6]
arr2 = [2, 3, 5, 7]
arr3 = [2, 4, 5, 8]
common_elements = list(set(arr1) & set(arr2) & set(arr3))
print(common_elements)
The output:
[2, 5]
This line of code is a concise and pythonic way to compute the intersection of three arrays by converting them into sets and performing the intersection operation. It is then converted back into a
list and printed out. This method benefits from the efficiency of set operations in Python.
• Method 1: Using Dictionary from Counters. This method is highly efficient and leverages Python's optimized Counter class. However, it involves the creation of temporary dictionary structures.
• Method 2: Dictionary Intersection from Dictionary Comprehensions. Similar advantages to Method 1, but it can be less performance-efficient because each element is counted with a separate list.count() call, which makes building the dictionaries quadratic in the array length.
• Method 3: Using Set Intersection. Extremely efficient for finding common elements due to Python's optimized set operations. However, converting to a set does not maintain the order of elements.
• Method 4: Traditional Loop and Comparison. Straightforward, uses no additional data structure, preserves the sorted order of the output, and runs in linear time over the three arrays, though it requires the most code.
• Bonus One-Liner Method 5: Using List Comprehension with Set Intersection. Offers elegance and conciseness but, like Method 3, loses the order of elements.
Cover Page
The handle http://hdl.handle.net/1887/53199 holds various files of this Leiden University dissertation.
Author: Zuiden, B.C. van
Title: Topology and geometry in chiral liquids
Issue Date: 2017-09-27
Chapter 2.
Topological chiral sound in active liquids
Polar active fluids are fluids formed by particles that propel themselves in a certain direction. The study of self-propelled particles has been a popular subject of research for the last two
decades. Early theoretical progress [33, 52, 69] has been accompanied by the engineering of soft materials made of self-propelled polymers, colloids, emulsions, and grains [11, 43, 56, 59, 76, 79,
84, 85], which exhibit novel nonequilibrium phenomena.
Prominent examples include phase separation of repulsive spheres, giant number fluctuations away from criticality, and long-range orientational order in two-dimensional flocks [27, 110, 148].
On the other hand, topological states have been a topic of intense study in the quantum domain. One of the most notable topological effects in the quantum realm is the so-called quantum Hall effect, which is a precursor of a class of topological insulators.
Schematically, the quantum Hall effect is found when a two-dimensional electron gas is put in the presence of a strong external magnetic field, breaking time-reversal symmetry [82]. The magnetic
field causes the electrons to form classical cyclotron orbits. At the edges of the system, however, the electrons are interrupted in their orbits, causing an edge current. Upon analysing this system, it turns out that its bulk is insulating, as the system is band-gapped. At the edge, however, the system will conduct. This property can be linked to the existence of a topological
invariant that makes it robust to various perturbations that do not change the topology. This robustness is often referred to as topological protection.
It turns out these topological states are not exclusive to the quantum domain but are also present in the classical domain [14, 25, 28, 49, 57, 95].
Recent studies have shown that interesting topological quantum electronic effects often translate to similarly interesting topological acoustic effects [16, 18, 19, 34]. Here, we will combine
topological mechanics and active matter to find topological insulating states. As active matter naturally breaks time-reversal- symmetry, it is an excellent candidate to achieve a self-assembled
analogue of a certain class of topological insulators.
2.1 Classical quantum Hall fluids
Liquids composed of self-propelled particles have been experimentally realized using molecular, colloidal, or macroscopic constituents [8, 39, 43, 79, 85].
These active liquids can flow spontaneously even in the absence of an external drive [52, 69, 110]. Unlike spontaneous active flow [7, 9], the propagation of density waves in confined active liquids
is not well explored. Here, we exploit a mapping between density waves on top of a chiral flow and electrons in a synthetic gauge field [14, 31] to lay out design principles for artificial structures
termed topological active metamaterials. We design metamaterials that break time-reversal symmetry using lattices composed of annular channels filled with a spontaneously flowing active liquid. Such
active metamaterials support topologically protected sound modes that propagate unidirectionally, without backscattering, along either sample edges or domain walls and despite overdamped particle
dynamics. Our work illustrates how parity-symmetry breaking in metamaterial structure combined with microscopic irreversibility of active matter leads to novel functionalities that cannot be achieved
using only passive materials.
We design active metamaterials with transport properties akin to those of quantum Hall fluids [82] by confining active liquids in periodic geometries that generate gapped density-wave spectra. Recent
studies of topological acoustics have revealed that spectral bands characterized by topological invariants host (in their spectral gaps) robust mechanical states [18, 34, 49] and sound modes that
propagate unidirectionally along sample edges and interfaces [4, 13, 14, 16, 25, 28, 31, 95]. However, the translation of topological-acoustic designs from macroscopic prototypes to soft materials
has so far proven challenging, because overdamped particle dynamics overcome inertia and suppress the propagation of ordinary sound waves at the microscale. To address this challenge, we
elucidate the relationship between emergent active flow and the spectrum of topological density waves in a confined liquid composed of self-propelled particles that have overdamped dynamics and align their velocities, i.e., a confined polar active liquid.
In order to obtain generic results, we use a continuum-mechanics description of polar active flow. The analog of the Navier-Stokes equations that describe a one-component fluid of self-propelled particles (with overdamped particle dynamics, see section 2.2.3) are the Toner-Tu equations [52, 110, 148], which in their simplest form read
$$\partial_t \rho + v_0 \nabla \cdot (\rho \mathbf{p}) = D_\rho \nabla^2 \rho, \qquad (2.1)$$
$$\partial_t \mathbf{p} + \lambda v_0 (\mathbf{p} \cdot \nabla)\mathbf{p} = \left(\alpha - \beta |\mathbf{p}|^2\right)\mathbf{p} - v_1 \nabla \rho + \nu \nabla^2 \mathbf{p}, \qquad (2.2)$$
where $\rho(\mathbf{r}, t)$ is the density of active particles that fluctuates around its mean value $\rho_0$. The polarization field of the material, $\mathbf{p}(\mathbf{r}, t)$, denotes the local average orientation of the velocities of the self-propelled units which, when isolated, all move at the same speed $v_0$. The effective viscosity, $\nu$, the diffusivity, $D_\rho$, and the other (positive) hydrodynamic coefficients $\lambda$, $v_1$, $\alpha$, and $\beta$ have been computed from a number of microscopic models in Suzuki et al. [26], Bricard et al. [43], Solon et al. [61], Farrell et al. [65], and Bertin et al. [102]; $\alpha$ and $\beta$ are the Landau coefficients used to model the spontaneous breaking of rotational symmetry; $v_1$ relates pressure and density. In section 2.3, we provide a concise introduction to the Toner-Tu model and explain how the left-hand side of eq. (2.2) originates from overdamped dynamics of $\mathbf{p}$ and not from momentum conservation.
Numerically solving eq. (2.1) and eq. (2.2) in the connected-annuli geometry of fig. 2.1 a, we find the emergence of a uniform steady chiral flow in each annulus. As this flow is a consequence of
spontaneous symmetry breaking, left-handed and right-handed orientations are equally likely to occur.
These general continuum-mechanics results are confirmed by particle-velocity maps measured from a prototypical microscopic model shown in fig. 2.1 b, see section 2.2.3. As particle velocities align
in the region shared between two adjacent annuli, the fluid within these annuli circulates in opposite directions, in analogy with either engaged counter-rotating gears or antiferromagnetic spins.
Similar behavior was observed in bacterial fluids experiments [7] and simulations of agent-based models [20].
Figure 2.1: Steady states of polar active liquids in coupled annular channels. (a) Steady state of a polar active liquid described by the hydrodynamic eqs. (2.1) and (2.2), in a confinement geometry based on the Lieb lattice. Note that the inter-annular coupling leads to a stable steady-state order reminiscent of either engaged gears or spins in an antiferromagnet. The colors indicate the azimuthal component $v_\theta$ of the velocity field (also shown in arrows) around the center of the corresponding annulus. (b) Steady state of the same liquid simulated using a particle-based model that is described in section 2.2.3. (Dashed lines indicate periodic boundary conditions.)
When a homogeneous polar liquid flows through interconnected annuli, the channel geometry determines the mean polarization $\mathbf{p}_0(\mathbf{r})$, which is proportional to the steady-state velocity field. We now
elucidate how this emergent spontaneous flow impacts sound propagation. We linearize eq. (2.1) and eq. (2.2) deep in the polar liquid phase, in which case both $\alpha$ and $\beta$ are only weakly dependent on $\rho$. We define $\delta\mathbf{p}(\mathbf{r}, t) = \mathbf{p}(\mathbf{r}, t) - \mathbf{p}_0(\mathbf{r})$ and $\delta\rho(\mathbf{r}, t) = \rho(\mathbf{r}, t) - \rho_0$, and confirm that density waves propagate over a finite range of wave numbers $q$: $|\alpha|/c \ll q \ll c/(\nu + D_\rho)$, where $c \equiv \sqrt{v_0 v_1}$ sets the magnitude of the speed of sound, see [52, 110] and section 2.4. In this regime, density fluctuations obey a wave equation that depends on $\mathbf{p}_0$:
$$[\partial_t + \lambda v_0 (\mathbf{p}_0 \cdot \nabla)][\partial_t + v_0 (\mathbf{p}_0 \cdot \nabla)]\,\delta\rho = c^2 \nabla^2 \delta\rho. \qquad (2.3)$$
Whereas (acoustic) density waves in simple driven fluids [14, 31] arise only in systems with inertial dynamics, such waves in polar active liquids survive even in the overdamped limit; in the latter case, these waves originate from Goldstone modes associated with broken rotational symmetry, see [110] and section 2.4. Figure 2.2 a shows the dispersion relation of density waves for a homogeneous polar liquid uniformly flowing along the $x$-direction ($\mathbf{p}_0(\mathbf{r}) = p_0 \hat{x}$). Note that the speed of sound depends on the orientation of the wavevector $\mathbf{q}$ relative to $\mathbf{p}_0$, because Galilean invariance is broken in eq. (2.2).
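For the homogeneous case this asymmetry can be made explicit (a short check, using only the notation defined above): substituting a plane wave $\delta\rho \propto e^{i(\mathbf{q}\cdot\mathbf{r} - \omega t)}$ into eq. (2.3) with $\mathbf{p}_0 = p_0\hat{x}$ gives
$$(\omega - \lambda v_0 p_0 q_x)(\omega - v_0 p_0 q_x) = c^2 q^2,$$
whose two roots,
$$\omega_\pm(\mathbf{q}) = \tfrac{1}{2}(\lambda + 1) v_0 p_0 q_x \pm \sqrt{c^2 q^2 + \tfrac{1}{4}(\lambda - 1)^2 v_0^2 p_0^2 q_x^2},$$
are the two branches plotted in fig. 2.2 a; the term linear in $q_x$ tilts the spectrum and makes $\omega(\mathbf{q}) \neq \omega(-\mathbf{q})$.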
Figure 2.2: Dispersion of density waves in active metamaterials either with or without topological edge states. (a) Dispersion of density waves in a homogeneous, uniformly flowing polar liquid: the waves have a linear dispersion, reminiscent of pressure waves in a simple fluid. The spectrum is asymmetric due to the breaking of Galilean invariance by $\mathbf{p}_0$ (bottom row, a-c: corresponding steady-state flow). (b) Dispersion of density waves described by eq. (2.3) in the square-lattice geometry. Because the system retains time-reversal symmetry (TRS), bands cross at the point M in the Brillouin zone and the bands' Chern numbers are not well defined. (c) Dispersion of these density waves in the Lieb-lattice geometry. Due to broken TRS, the bands generically do not cross. (d) Zoom-in of the dispersion of these waves in a quasi-one-dimensional waveguide based on the Lieb lattice, with free edges on the top and bottom (also see fig. 2.3 d). The bulk modes (blue) correspond to the bands in (c). In addition, we observe chiral topological edge states, plotted in red and green, colors which indicate state chirality (defined by the group velocity $\partial\omega/\partial q$) and, correspondingly, the edge on which these modes are located. These states inhabit gaps between bands with well-defined Chern number $C_n \neq 0$. (Below: a density-wave eigenmode of a finite Lieb-lattice sample.)
Our design of topological metamaterials exploits (i) microscopic irreversibility induced by activity and (ii) parity-symmetry breaking of the structure. To highlight how the interplay between activity and structural design leads to metamaterials that globally break time-reversal symmetry, we contrast two simple geometries of interconnected channels: one
based on the square lattice, fig. 2.2 b, and one based on the Lieb lattice, fig. 2.2 c. Solving eq. (2.3) numerically in a square lattice geometry (see section 2.2.2), we show that the wave spectrum
contains degeneracies at the edge of the Brillouin zone where two spectral branches intersect (point M). Note that the corresponding steady-state flow is invariant with respect to simultaneously
inverting the arrow of time and performing a lattice translation. By contrast, the degeneracy at point M is lifted for the Lieb lattice and a gap opens. Unlike the square lattice, the Lieb lattice
has an odd number of rings per unit cell and, therefore, a net circulation of steady-state flow. Heuristically, the spectral-gap opening stems from the frequency difference between density waves
propagating along versus opposite to flow with a non-vanishing net circulation. As a result, a gap opens only for unit cells that are chiral. In the limit ๐ฃ0๐0/๐ โช 1, we rewrite eq. (2.3) as
(โ โ ๐A)^2+ ๐^2/๐^2]๏ธ
๐ = 0, (2.4)
where A โก ๐(๐ + 1)๐ฃ0p[0]/(2๐^2), and note that the steady-state velocity field ๐ฃ0p0 couples minimally to the wavenumber of the density wave [31]. The emergent chiral flow plays the role of a
synthetic gauge field for a charged quantum particle, whereas its curl, the vorticity, acts analogously to a magnetic field that lifts spectral degeneracies.
We establish the topological nature of the band structure corresponding to eq. (2.3) in the Lieb lattice by calculating (for each band) an integer-valued topological invariant called the Chern number, $C_n$, see section 2.2.1 and Hasan et al. [82] for an introduction. For almost all of the bands in the spectrum, and for a wide range of values of the mean polarization $p_0$, we find that $C_n \neq 0$, fig. 2.2 c–d. As $C_n$ is an integer, it cannot vary smoothly from within the sample to the exterior (where $C_n = 0$). Therefore, $C_n$ can only change if the band gap closes along the sample edge, locally enabling edge-mode propagation [82].
Such edge modes, shown in fig. 2.2 d, are called topologically protected because they arise from the presence of topological invariants in the bulk, irrespective of the sample's shape or disorder. As in quantum Hall fluids, the topological edge modes are chiral, i.e., they propagate along a single direction. The chirality of the edge modes reflects the chirality of the flow within the unit cell. The system edge acts as a robust acoustic diode – topological density waves, unlike ordinary waves, propagate unidirectionally along the boundary and do not backscatter even if obstacles or sharp corners are introduced, as demonstrated in fig. 2.3 a.
Similarly, along the boundary between two regions of distinct flow chirality, $C_n$ varies from one integer value to another. Therefore, in this region of space, the band gap must vanish, which leads to the existence of topologically protected waves along this interface. A topological waveguide can be sculpted in the bulk by deleting a row of annuli, as in fig. 2.3 b. For this sample, topologically robust density waves propagate through the irregularly shaped domain wall in the bulk of the metamaterial. However, if the domain wall has both a row deletion and a half-column displacement, then the chirality of flow does not change across the wall. Consequently, modes associated with the domain wall are not topologically protected and do backscatter in the bulk, as exemplified in fig. 2.3 c.
Whereas the existence of edge waves in polar active liquids is topologically protected, their penetration depth into the bulk can be tuned by changing the flow speed. As shown in section 2.6, by considering the minimal coupling form of eq. (2.4) relevant to the motion of density waves in the limit $v_0 p_0/c \ll 1$, we expect the penetration to be exponential with a penetration depth $\ell$ scaling as $\ell \sim |\mathbf{A}|^{-1} \sim ca/(v_0 p_0)$, where $a$ is the lattice spacing. We stress that this spatial structure differs from the Gaussian profiles of quantum Hall states that share similar topological properties. These predictions are in good agreement with the numerical resolution of the full equations of motion: as shown in fig. 2.3 d, the penetration of the edge modes is exponential and decreases with the mean-flow speed.
Having explored the phenomenology of chiral states in confined active liquids, we can now compare this realization of a topological metamaterial with those achieved in driven liquids [14, 31]. First, in both cases, to achieve a small penetration depth it is necessary that the speed of flow be appreciable relative to the speed of sound. For a simple fluid, this is a limitation – driving the fluid at speeds near the speed of sound leads to flow instabilities either in the bulk or in the boundary layer of the fluid. By contrast, for active liquids, the speed of flow $v_0 p_0$ and the speed of sound $c$ are distinct parameters entering the hydrodynamic eq. (2.2) and may, in general, be comparable, so that the chiral edge state may be readily observable. Second, whereas metamaterials composed of driven fluids require motors at each component to provide the drive, for an active liquid the drive is provided by the particles composing the liquid, whereas the confining channels prescribe the emergent chiral flow. Third, topological density waves in polar active liquids originate from Goldstone modes due to broken rotational symmetry. As a consequence, they can propagate even if particle dynamics are overdamped – paving the way towards colloidal and other soft matter realizations of mechanical topological insulators.
We examined topological sound in metamaterials based on polar active liquids, but our approach can be applied to wave propagation in other time-reversal-symmetry-broken active systems. Our results epitomize the defining feature of topological active metamaterials: they combine the microscopic irreversibility inherent in active matter with structural design to achieve functionalities absent in passive materials.
2.2 Methods
2.2.1 Chern numbers
We establish the topological nature of the active-liquid metamaterial by calculating (for the Lieb-lattice spectrum) an integer-valued topological invariant called the Chern number associated with each band, see [82]. The Chern number $C_n$ is analogous to the Euler characteristic of a closed surface with Gaussian curvature. Using the Gauss–Bonnet theorem, we can compute $C_n$ by integrating a curvature called the Berry curvature $B_n(\mathbf{q})$ over a closed surface formed by the first Brillouin zone (which by construction is periodic in both directions):

$$C_n \equiv \frac{1}{2\pi} \int_{\mathrm{BZ}} B_n(\mathbf{q})\, d^2q, \tag{2.5}$$

where $B_n(\mathbf{q}) \equiv \nabla \times A_n(\mathbf{q})$ and $A_n(\mathbf{q}) \equiv i (u^n_{\mathbf{q}})^\dagger \cdot (\nabla_{\mathbf{q}} u^n_{\mathbf{q}})$ is the Berry connection calculated from the eigenstate $u^n_{\mathbf{q}}$ of band $n$ and wavenumber $\mathbf{q}$. For our discrete data set, we use the gauge-choice-independent protocol described in Fukui et al. [46] to efficiently calculate the Chern number using a coarse discretization of the first Brillouin zone.
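A minimal sketch of such a lattice Chern-number computation (our own illustration of the Fukui–Hatsugai–Suzuki idea; the array layout and function names are assumptions, not from [46]): link variables between neighboring Brillouin-zone points are multiplied around each plaquette, and the winding of their phases sums to the Chern number.

```python
import numpy as np

def chern_number(u):
    """Lattice Chern number of one band on a periodic BZ grid.
    `u` has shape (Nx, Ny, dim): the band's normalized eigenvector
    at each discretized wavenumber."""
    def link(a, b):
        # U(q) = <u(q)|u(q')> normalized to unit modulus
        ov = np.sum(np.conj(a) * b, axis=-1)
        return ov / np.abs(ov)

    Ux = link(u, np.roll(u, -1, axis=0))
    Uy = link(u, np.roll(u, -1, axis=1))
    # Field strength per plaquette; np.log folds each phase into (-pi, pi]
    F = np.log(Ux * np.roll(Uy, -1, axis=0)
               * np.conj(np.roll(Ux, -1, axis=1)) * np.conj(Uy))
    return int(np.round(F.imag.sum() / (2 * np.pi)))
```

Because each plaquette phase is folded into $(-\pi, \pi]$, the result is an exact integer even on a coarse grid, which is why this protocol is gauge-choice independent.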
2.2.2 Finite element simulations
We solve eq. (2.3) for both a finite geometry and for a unit cell with Floquet boundary conditions (i.e., periodic boundary conditions with an additional phase factor) using COMSOL Multiphysics finite element analysis simulations on a highly refined mesh. To obtain the dispersion relations shown in fig. 2.2 b–c, we perform a sweep through the wavenumbers $(q_x, q_y)$ along the $M\Gamma M$ cut and assign the appropriate phase factors for (Floquet) boundary conditions across the unit cell. Then, we solve the corresponding eigenvalue problem at each wavenumber and plot the corresponding bands. We numerically obtain the solutions in the form of frequencies $\omega_n(\mathbf{q})$ as well as the density eigenstates $\hat\phi(n, \mathbf{q})$, for which the density waves are $\phi(\mathbf{x}, t) = \hat\phi(n, \mathbf{q})\, e^{i(\omega t - \mathbf{q}\cdot\mathbf{x})}$. Unless otherwise noted, to obtain good numerical accuracy, we use for the corresponding background flow a simplified model with constant $|\mathbf{v}| = p_0 v_0 = 0.5c$, pointed along the azimuthal direction of the corresponding annulus (see visualizations in insets of fig. 2.3 b–c). In the regions of annular overlap, we patch the flow field using an interpolation that is linear along the $x$-direction, and then normalize the result. For fig. 2.2 d, we begin with a quasi-one-dimensional lattice geometry (also see fig. 2.3 d), and impose a phase factor only along the periodic boundaries in the $x$-direction. Again, the eigenvalues are plotted, and those forming a solid region corresponding to the bulk bands are shaded in blue. For parts of fig. 2.2 d and fig. 2.3 a–c, we use a finite geometry and plot a single eigenmode located in the band gap that contains topological states.
2.2.3 Particle-based model
We used a particle-based model as an illustrative example to check the steady-state flow that we obtained from the Toner–Tu equations. We emphasize that the conclusions obtained in section 2.1 are based on the continuum Toner–Tu equations, which form a description that has a more general applicability than the specific particle-based model presented below. We choose a continuous-time model that includes Vicsek-like alignment interactions and repulsive interactions that prevent clustering [128, 149]. The position $\mathbf{x}_i$ and velocity $\mathbf{v}_i$ of the $i$-th particle are evolved using a symplectic Euler integrator† for Newton's laws of motion with the force term

$$\mathbf{F}_i = m \dot{\mathbf{v}}_i = -\gamma \mathbf{v}_i + \frac{F_0}{\mathcal{N}_i}\Big(\hat{\mathbf{v}}_i + \sum_{\langle i,j \rangle} \hat{\mathbf{v}}_j\Big) - \nabla_i \sum_j U(|\mathbf{x}_i - \mathbf{x}_j|) + \sqrt{2\gamma k_B T}\, \hat{\boldsymbol{\eta}}_i(t), \tag{2.6}$$

† The symplectic Euler integration scheme is described in appendix K.
where $\gamma$ is a friction coefficient, $m$ is the particle mass, $F_0$ is the active force such that $v_0 = F_0/\gamma$, and $\mathcal{N}_i$ normalizes the orientational average. The neighbors in the $F_0$ term are denoted as $\langle \mathbf{x}_i, \mathbf{x}_j \rangle$ and include all particles $\mathbf{x}_j$ within a distance $R$ ($= a/20$) of $\mathbf{x}_i$, see fig. 2.4. We use a Yukawa potential $U(r)$:

$$U(r) = \frac{A}{r}\, e^{-\kappa r} \tag{2.7}$$

to account for excluded-volume effects, where $\kappa^{-1} = R/6$ sets the repulsion range, and $A = 4 \times 10^3 F_0/\kappa^2$ sets the Yukawa coupling constant. The white-noise stochastic forcing term $\hat{\boldsymbol{\eta}}_i(t)$, where $\langle \hat\eta_i(t)\, \hat\eta_j(t') \rangle = \delta_{ij}\, \delta(t - t')$, mimics thermal fluctuations. The temperature is set by $k_B T = 2 \times 10^{-2}\, m v_0^2$. The nonlinear forcing term $F_0 \hat{\mathbf{v}}$, where $\hat{\mathbf{v}} \equiv \mathbf{v}/|\mathbf{v}|$, breaks the equilibrium fluctuation-dissipation relation for this far-from-equilibrium system. The overdamped limit is defined as the regime for which the velocity relaxation time $\tau_p \equiv m/\gamma$ is much smaller than the characteristic oscillation time $\tau_U$ associated with the interaction potential: $\tau_U \equiv \sqrt{m A^{-1} \rho^{-3/2}}$, where $\rho$ is the particle density. Time integration is done using the time step $\Delta t = 10^{-5}\, m/\gamma$, where $m$ is the mass of an individual particle.
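For concreteness, one symplectic (semi-implicit) Euler step has the following shape (our own sketch; the scheme actually used is the one described in appendix K):

```python
import numpy as np

def symplectic_euler_step(x, v, force, m, dt):
    """Advance positions x and velocities v (N x 2 arrays) by one step dt.
    `force(x, v)` returns the total force of eq. (2.6) on each particle."""
    v = v + (dt / m) * force(x, v)  # update velocities first ...
    x = x + dt * v                  # ... then positions with the new velocities
    return x, v
```

Updating positions with the already-updated velocities is what distinguishes this scheme from the ordinary (non-symplectic) Euler step.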
Both the square and Lieb lattices have lattice spacing $a = 120\kappa^{-1}$ and are implemented by confining particles in overlapping annuli.‡ A single annulus has an inner radius $r_{\mathrm{in}} = a/3$ and $r_{\mathrm{out}} = 2 r_{\mathrm{in}}$. The confining boundaries are implemented using a steep one-sided harmonic repulsive potential $\frac{1}{2} k_w x^2$ with $k_w = 3.14 \times 10^5\, \gamma^2/m$ experienced by all particles. The area fraction of particles is

$$\phi = \frac{N R^2}{r_{\mathrm{out}}^2 - r_{\mathrm{in}}^2} \approx 6.17, \tag{2.8}$$

where $N$ is the number of particles per annulus. (We choose units in which $m = 1$, $R = 6$, and $a/v_0 = 60$.)

‡ For a possible algorithm to generate square lattices and Lieb lattices respectively, see appendix sections Q.1 and Q.4.
In the steady state, the flow for each annulus is measured by the azimuthal component of the velocity field

$$v_\phi = \frac{1}{v_0} \left\langle \frac{1}{N} \sum_i \mathbf{v}_i \cdot \hat{\boldsymbol{\phi}}_i \right\rangle, \tag{2.9}$$

where $\hat{\boldsymbol{\phi}}_i$ is the azimuthal unit vector around the annulus center, and $\langle \ldots \rangle$ denotes a time average over 8000 configurations, with 1000 timesteps between subsequent configurations. $v_\phi = \pm 1$ for an ideally flowing system; the sign indicates flow chirality. Two examples of steady states are plotted in fig. 2.5 and the dependence of $v_\phi$ on $\phi$ within the Lieb lattice is plotted in fig. 2.6. At low area fraction, the particles undergo the alignment transition for their velocities, whereas at high area fraction they jam. The flowing steady state occurs over a wide range of intermediate area fractions.
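A sketch of this measurement for a single stored configuration (function and variable names are ours):

```python
import numpy as np

def azimuthal_order(x, v, center, v0):
    """Instantaneous v_phi of eq. (2.9) for one annulus.
    x, v: (N, 2) arrays of particle positions and velocities;
    center: (2,) array with the annulus center."""
    r = x - center
    # Azimuthal unit vector (-y, x)/|r| at each particle position
    phi_hat = np.stack([-r[:, 1], r[:, 0]], axis=1)
    phi_hat /= np.linalg.norm(r, axis=1, keepdims=True)
    return np.mean(np.sum(v * phi_hat, axis=1)) / v0
```

Averaging the returned value over the stored configurations gives the time average in eq. (2.9).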
2.3 Toner–Tu hydrodynamics of polar active liquids
The hydrodynamic equations of a passive polar liquid take into account three slow variables: the usual density, $\rho(\mathbf{r}, t)$, and velocity, $\mathbf{v}(\mathbf{r}, t)$, fields, as well as a broken-symmetry field, the polarization $\mathbf{p}(\mathbf{r})$, defined as the local average of the particle orientations. When the polar units that form the liquid propel themselves on a solid surface, momentum is no longer conserved, because the substrate acts as a momentum sink. Such systems are referred to as dry active matter, even though the particles may propel in a fluid medium as in, e.g., Bricard et al. [43] and Schaller et al. [85]. The substrate enables preferential alignment of the particles' velocities with their polar orientation.
The hydrodynamic equations of the resulting polar active liquid read

$$\partial_t \rho + \partial_i(\rho v_i) = D^\rho \nabla^2 \rho, \tag{2.10}$$
$$\partial_t(\rho v_i) + \partial_j(\rho v_i v_j) = \partial_j \sigma_{ij} - \Gamma^v (v_i - v_0 p_i), \tag{2.11}$$
$$\partial_t p_i + v_j \partial_j p_i + \omega_{ij} p_j = \nu_1 v_i + \nu_2 v_{ij} p_j - \Gamma^p \frac{\delta \mathcal{H}}{\delta p_i}, \tag{2.12}$$

where we have introduced the symmetric part of the strain-rate tensor $v_{ij} \equiv \frac{1}{2}(\partial_i v_j + \partial_j v_i)$ and the vorticity tensor $\omega_{ij} \equiv \frac{1}{2}(\partial_i v_j - \partial_j v_i)$. Note that the components of the velocity vector $v_i$ for this one-fluid model are the coarse-grained velocities of the self-propelled particles composing the active liquid and not of the potential surrounding fluid (e.g., air or solvent). The first (continuity) equation reflects mass conservation and includes a diffusive term $D^\rho \nabla^2 \rho$. The second equation includes the liquid stress tensor $\sigma$ as well as an active frictional term proportional to $\Gamma^v$. This term differentiates eq. (2.11) from the usual Navier–Stokes equations as it explicitly breaks momentum conservation. For the sake of simplicity, we consider here a linear coupling between the velocity and the polarization (the hydrodynamic coefficient $v_0$ has the dimensions of a speed and scales with the speed of an isolated active particle).
Figure 2.4: One configuration of the particle-based simulation in a periodic geometry based on the Lieb lattice. For each particle, the radius of the short-range repulsive interaction is indicated in green. For a few chosen particles, the radius of the longer-range alignment interaction is indicated in pink. For some particles, their instantaneous velocity is indicated by a red arrow.

Figure 2.6: (a) The average normalized azimuthal component $v_\phi/v_0$ (measured relative to the center of each annulus) as a function of particle density $\phi$ in the Lieb lattice geometry, measured relative to the (large) radius of the alignment interaction (see section 2.2.3).
Equation (2.12) describes the relaxational dynamics of the polarization, see [52]. The left-hand side of eq. (2.12) contains the comoving (2nd term) and corotational (3rd term) time-derivatives of the polarization. The right-hand side of eq. (2.12) includes the effective Hamiltonian $\mathcal{H}$ and the dissipative coefficient $\Gamma^p$ along with two frictional terms. The first frictional term in eq. (2.12) contains the friction coefficient $\nu_1$ and describes the friction between particle and substrate – this term is responsible for the "weathercock effect", i.e., the polar particles' local alignment with the flow, see, e.g., Kumar et al. [39] and Brotto et al. [44]. The second friction term in eq. (2.12) contains the friction coefficient $\nu_2$ and originates from the friction between an individual polar particle and the surrounding active fluid (itself composed of polar particles). The sign and the magnitude of $\nu_2$ control the strength of alignment of the particle polarization with the local elongation (or compression) axis of the flow.
We can also consider eqs. (2.10) to (2.12) in the limit for which the frictional $\Gamma^v$ term dominates eq. (2.11). In this overdamped limit, eq. (2.11) reduces to a constraint equation, $\mathbf{v} = v_0 \mathbf{p}$, and the hydrodynamics is fully captured by mass conservation and the dissipative dynamics of the polarization field. A gradient expansion of $\mathcal{H}$ then yields [110, 148]:

$$\partial_t \rho + v_0 \nabla \cdot (\rho \mathbf{p}) = D_0 \nabla^2 \rho, \tag{2.13}$$
$$\partial_t \mathbf{p} + \lambda v_0 (\mathbf{p} \cdot \nabla)\mathbf{p} = -(\alpha + \beta |\mathbf{p}|^2)\mathbf{p} - \frac{v_1}{\rho_0} \nabla \rho + \nu \Delta \mathbf{p} + \lambda_2 v_0 \nabla |\mathbf{p}|^2 - \lambda_2 v_0 \mathbf{p} (\nabla \cdot \mathbf{p}), \tag{2.14}$$

where all of the hydrodynamic coefficients depend a priori on the local density. Note that whereas for a system with Galilean invariance, $\lambda = 1$ and $\lambda_2 = 0$, for the polar active liquid, which lacks this symmetry, these parameters may be arbitrary. Studies of realistic microscopic models have found $\lambda$ to be positive, less than, and of order 1, and for the numerical computations performed in this work, we assume $\lambda = 0.8$ [43, 65, 102]. The lack of Galilean invariance as well as momentum conservation leads to the $\alpha$ and $\beta$ terms in eq. (2.14), which suggest a preference for either zero or nonzero velocity – depending on the sign of $\alpha$. In the article, for the sake of simplicity we focus on the case in which the $\lambda_2$ terms are negligible. In this case, eqs. (2.13) and (2.14) reduce to eqs. (2.1) and (2.2), and are reproduced here:

$$\partial_t \rho + v_0 \nabla \cdot (\rho \mathbf{p}) = D_0 \nabla^2 \rho, \tag{2.15}$$
$$\partial_t \mathbf{p} + \lambda v_0 (\mathbf{p} \cdot \nabla)\mathbf{p} = -(\alpha + \beta |\mathbf{p}|^2)\mathbf{p} - \frac{v_1}{\rho_0} \nabla \rho + \nu \Delta \mathbf{p}. \tag{2.16}$$
Equations (2.15) and (2.16) are sufficient to capture the phenomena associated with linear density waves in a polar active liquid relevant to our analysis. In that limit, the polarization field itself defines the fluid velocity, so that the coupling between the polarization field and the density gradient has an effect analogous to that of a pressure gradient in an equilibrium liquid.
2.4 Linear density waves for Toner–Tu liquids
For the case $\alpha < 0$, the Toner–Tu equations result in a steady state of the fluid with spontaneous flow in the bulk, such that $p_0^2 \equiv |\mathbf{p}_0|^2 = -\alpha/\beta$. Although in the bulk the spontaneous flow direction $\hat{\mathbf{p}}_0$ could be arbitrary, in physical realizations of active liquids, the boundaries fix $\hat{\mathbf{p}}_0$. For example, in an open channel, $\hat{\mathbf{p}}_0$ is parallel to the channel walls. In the Lieb lattice geometry, we have solved eqs. (2.15) and (2.16) for a sufficient time for the dynamics to relax to a steady state. We find that this steady state, plotted in fig. 2.1 a (also see fig. 2.5), has the features of the spatial profile observed from our particle-based simulations, although the particle-based simulations lead to a smoother profile, fig. 2.1 b.

In the analysis performed for the density wave computations, we take a particularly simple form of the steady state, based on the profile we observe. We postulate that the polarization has magnitude unity everywhere and is oriented azimuthally, i.e., perpendicular to the vector connecting the position of the fluid to the nearest annulus center. In the regions of overlap between annuli, we linearly interpolate between the two annular flow profiles. This spatial profile is plotted in a large sample in fig. 2.1 a.
Thus, given a spontaneous steady-state flow field $\mathbf{p}_0$, we expand $\mathbf{v}(\mathbf{r}, t) = \mathbf{p}(\mathbf{r}, t) - \mathbf{p}_0(\mathbf{r})$ and $\phi(\mathbf{r}, t) = \rho(\mathbf{r}, t) - \rho_0$ to find

$$\partial_t \phi + v_0 (\mathbf{p}_0 \cdot \nabla)\phi = -v_0 \rho_0 \nabla \cdot \mathbf{v} + D_0 \nabla^2 \phi, \tag{2.17}$$
$$\partial_t \mathbf{v} + \lambda v_0 (\mathbf{p}_0 \cdot \nabla)\mathbf{v} = -(v_1/\rho_0) \nabla \phi + 2\alpha (\mathbf{v} \cdot \hat{\mathbf{p}}_0)\hat{\mathbf{p}}_0 + \nu \nabla^2 \mathbf{v}. \tag{2.18}$$

For the case of propagating waves, the right-hand side can be decomposed as the sum of a dominant anti-Hermitian matrix that governs wave dispersion and a perturbatively small Hermitian matrix that governs wave attenuation.
As we are interested in the behavior of an active fluid deep in the ordered phase, we have assumed $\alpha$ to be constant. The $\rho$ dependence of $\alpha$, which leads to additional dissipative terms, would be most significant near the phase transition from an isotropic to a flowing steady state. There are two notable differences for the propagation of density waves in an active liquid compared to a simple fluid: (1) the $\alpha$ term acts as an additional dissipative term for sound in an active liquid, and (2) one of the convection terms contains the coefficient $\lambda$ ($\neq 1$). Due to this second difference, the equation of motion can no longer be "Galilean boosted" into a different reference frame by replacing the lab-frame derivative $\partial_t$ by a convective derivative.
To closely examine the mode structure in eqs. (2.17) and (2.18), we split the vector $\mathbf{v}$ into components $v_\parallel$ and $v_\perp$, respectively parallel and perpendicular to $\hat{\mathbf{p}}_0$. Note that due to the confinement of the active liquid inside a channel, we can assume that the density waves only propagate along the channel and, therefore, the derivatives of $\mathbf{p}$ and $\rho$ along the direction perpendicular to $\hat{\mathbf{p}}_0$ can be ignored. Under this assumption, eqs. (2.17) and (2.18) reduce to:

$$\partial_t \phi + v_0 p_0 \partial_\parallel \phi = -v_0 \rho_0 \partial_\parallel v_\parallel + D_0 \partial_\parallel^2 \phi, \tag{2.19}$$
$$\partial_t v_\parallel + \lambda v_0 p_0 \partial_\parallel v_\parallel = -(v_1/\rho_0) \partial_\parallel \phi + 2\alpha v_\parallel + \nu \partial_\parallel^2 v_\parallel, \tag{2.20}$$
$$\partial_t v_\perp + \lambda v_0 p_0 \partial_\parallel v_\perp = \nu \partial_\parallel^2 v_\perp. \tag{2.21}$$
Let us now consider eqs. (2.19) to (2.21) for the density and the longitudinal velocity modes in an active liquid. Ignoring, for now, the effects of the flow, we can calculate the dispersion relation for density waves in active liquids in the limit $q \ll c/(D_0 + \nu)$, where $q$ is the wavenumber of the density wave and $c \equiv \sqrt{v_0 v_1}$ is the speed of sound. The frequency is then given by

$$\omega(q) = i|\alpha| + \sqrt{q^2 v_0 v_1 - \alpha^2} + i(D_0 + \nu)q^2/2$$

(also see fig. 2.2 a). Whereas the real component of $\omega$ governs the propagation of sound waves, the imaginary component governs their dissipation. Due to spontaneous flow, the sound wave frequency may, generically, have a real component, which is plotted in fig. 2.2 a. Furthermore, in the limit $q \gg |\alpha|/c$, the imaginary component will always be much smaller than the real component. Thus, in the regime $|\alpha|/c \ll q \ll c/(D_0 + \nu)$, there exist longitudinal density waves that propagate and decay slowly in the ordered active liquid. Provided we are in this regime, for phenomena on sufficiently short time scales, we may ignore the dissipative terms and concentrate on the wave-like solutions to the equations of motion.
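As a numerical illustration of this propagating window (the parameter values below are arbitrary choices of ours, not taken from the text):

```python
import numpy as np

alpha, v0, v1, D0, nu = -0.1, 1.0, 1.0, 0.01, 0.01
c = np.sqrt(v0 * v1)          # speed of sound; |alpha|/c = 0.1, c/(D0+nu) = 50

q = np.linspace(0.0, 10.0, 400)   # well inside |alpha|/c << q << c/(D0+nu)
omega = (1j * abs(alpha)
         + np.sqrt((q**2 * v0 * v1 - alpha**2).astype(complex))
         + 0.5j * (D0 + nu) * q**2)

# Propagation (Re) dominates dissipation (Im) throughout this window
print(omega.real[-1], omega.imag[-1])
```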
2.5 Analogy with Schrödinger equation
Note that when the dissipative components of the density wave equation can be neglected, eq. (2.3) may be recast as a single wave equation. By applying the convective derivative $\partial_t + \lambda v_0 (\mathbf{p}_0 \cdot \nabla)$ to the continuity equation, eq. (2.19), and then substituting the velocity equation of motion, eq. (2.20), one obtains:

$$[\partial_t + \lambda v_0 (\mathbf{p}_0 \cdot \nabla)][\partial_t + v_0 (\mathbf{p}_0 \cdot \nabla)]\phi = c^2 \nabla^2 \phi. \tag{2.22}$$

The eigenvalue problem for the above wave equation has solutions in terms of the frequency $\omega$ of a time-dependent oscillation $\phi(\mathbf{x}, t) = \hat\phi(\mathbf{x}, \omega)\, e^{i\omega t}$. The corresponding equation has the form, provided that $\epsilon \equiv v_0 p_0/c \ll 1$,

$$\left[c^2 \nabla^2 + \omega^2 - i\omega(\lambda + 1)v_0\, \mathbf{p}_0 \cdot \nabla\right]\hat\phi = 0, \tag{2.23}$$

or,

$$\left[(\nabla - i\mathbf{A})^2 + \omega^2/c^2\right]\hat\phi = 0, \tag{2.24}$$

where $\mathbf{A} \equiv \omega(\lambda + 1)v_0\mathbf{p}_0/(2c^2)$. This shows that the velocity field $v_0 \mathbf{p}_0$ acts as an effective vector potential for the propagation of density waves.
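To see the equivalence at leading order, one can expand the minimally coupled operator – a check of ours, assuming a locally uniform $\mathbf{p}_0$ (so that $\nabla \cdot \mathbf{A} = 0$) and dropping $\mathbf{A}^2 = O(\epsilon^2)\,\omega^2/c^2$:

$$(\nabla - i\mathbf{A})^2 \hat\phi = \left[\nabla^2 - 2i\,\mathbf{A} \cdot \nabla - i(\nabla \cdot \mathbf{A}) - \mathbf{A}^2\right]\hat\phi \approx \left[\nabla^2 - \frac{i\omega(\lambda + 1)v_0}{c^2}\, \mathbf{p}_0 \cdot \nabla\right]\hat\phi,$$

so multiplying eq. (2.24) by $c^2$ reproduces eq. (2.23).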
2.6 Scaling argument for penetration depth
From the form of eq. (2.24), we can deduce the following scaling argument for the penetration depth of a topological edge state in the relevant limit $v_0 p_0/c \ll 1$. Consider the first term, $(\nabla - i\mathbf{A})^2$, which shows the minimal coupling between density gradients and spontaneous flow [31]. The penetration depth is a lengthscale that originates from density gradients and therefore scales as $\ell \sim |\mathbf{A}|^{-1}$. Furthermore, $|\mathbf{A}| \sim v_0 p_0 \omega/c^2$ and depends on a characteristic frequency $\omega \sim c/a$, where $c$ is the speed of sound and $a$ is a characteristic lengthscale of the material, i.e., the lattice spacing. In addition, $\mathbf{A}$ is approximately the same from one unit cell to the next. Combining these scaling relations, $\ell \sim ca/(v_0 p_0)$. The length $\ell$ diverges as the flow velocity goes to zero and, therefore, as the material loses its bulk bandgap.

We also note that we expect and observe topological edge states to be localized near the edge with an exponential profile, see fig. 2.3 d. To see why we expect eq. (2.24) to lead to exponentially localized states, note that if we assume $\phi \sim f(x/\ell)$, with the scaling law derived above for $\ell$, $\ell \sim ca/(v_0 p_0)$, eq. (2.24) predicts $f'' \sim f$, with a dimensionless proportionality constant. An exponential profile satisfies this approximate scaling form. Such a profile is a consequence of the fact that $\mathbf{A}$ does not vary over lengthscales larger than a unit cell, an argument that relies on the metamaterial structure of the topological state. By contrast, in the quantum Hall fluid, the frequency scale depends on the field strength and $\mathbf{A}$ varies over large distances, which leads to both a Gaussian profile of states in a Landau level as well as a different scaling for the penetration depth [172].
2.7 Conclusion
We have shown how polar active fluids confined in a two-dimensional Lieb lattice form a time-asymmetric steady state. Upon analyzing the dispersion of density waves in this system, we find it is gapped between bands with a well-defined Chern number. In these gaps we find chiral topological edge states, similar to those of the quantum Hall effect, yet in the classical domain. To probe the robustness of these states, we show that these edge states persist in systems with defects or with domain walls separating regions of different topological phase.
Additionally, we show how the edge states of these systems are exponentially localized, with a penetration depth controlled by the flow speed of the active fluid. | {"url":"https://5dok.net/document/y8gxepd4-cover-page-the-handle.html","timestamp":"2024-11-14T08:46:11Z","content_type":"text/html","content_length":"200409","record_id":"<urn:uuid:90e3f536-8d1f-4927-bf17-afec4990fea9>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00204.warc.gz"} |
Pi Day
Pi Day is celebrated on March 14th (3/14) around the world. Pi (Greek letter "π") is the symbol used in mathematics to represent a constant – the ratio of the circumference of a circle to its diameter – which is approximately 3.14159.

Pi has been calculated to over one trillion digits beyond its decimal point. As an irrational and transcendental number, it will continue infinitely without repetition or pattern. While only a handful of digits are needed for typical calculations, Pi's infinite nature makes it a fun challenge to memorize, and to computationally calculate more and more digits.
Pi (π) is the ratio of a circle's circumference to its diameter. Pi is a constant number, meaning that for all circles of any size, Pi will be the same.
The diameter of a circle is the distance from edge to edge, measuring straight through the center. The circumference of a circle is the distance around.
By measuring circular objects, it has always turned out that a circle is a little more than 3 times its width around. In the Old Testament of the Bible (1 Kings 7:23), a circular pool is referred to as being 30 cubits around, and 10 cubits across. The mathematician Archimedes used polygons with many sides to approximate circles and determined that Pi was approximately 22/7. The symbol (Greek letter "π") was first used in 1706 by William Jones. A "p" was chosen for "perimeter" of circles, and the use of π became popular after it was adopted by the Swiss mathematician Leonhard Euler in 1737. In recent years, Pi has been calculated to over one trillion digits past its decimal. Only 39 digits past the decimal are needed to accurately calculate the spherical volume of our entire universe, but because of Pi's infinite & patternless nature, it's a fun challenge to memorize, and to computationally calculate more and more digits.
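As a small taste of computing digits yourself (one of many possible series; this sketch is our own illustration, using exact fractions to avoid rounding error):

```python
from fractions import Fraction

# Nilakantha series: pi = 3 + 4/(2*3*4) - 4/(4*5*6) + 4/(6*7*8) - ...
pi = Fraction(3)
sign = 1
for k in range(1, 200):
    n = 2 * k
    pi += sign * Fraction(4, n * (n + 1) * (n + 2))
    sign = -sign

print(float(pi))  # 3.14159265... (about 7 correct digits after 199 terms)
```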
The number pi is extremely useful when solving geometry problems involving circles. Here are some examples:
The area of a circle.
A = πr²

Where "r" is the radius (distance from the center to the edge of the circle). Also, this formula is the origin of the joke "Pies aren't square, they're round!"
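In code, with an arbitrary example radius:

```python
import math

r = 3.0                     # radius, arbitrary example value
area = math.pi * r ** 2     # A = pi * r^2
print(area)                 # 28.274333882308138
```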
The volume of a cylinder.
V = πr²h
To find the volume of a rectangular prism, you calculate length × width × height. In that case, length × width is the area of one side (the base), which is then multiplied by the height of the prism.
Similarly, to find the volume of a cylinder, you calculate the area of the base (the area of the circle), then multiply that by the height (h) of the cylinder. | {"url":"https://doylestownautorepairs.com/blog/pi-day/","timestamp":"2024-11-11T10:48:08Z","content_type":"text/html","content_length":"59993","record_id":"<urn:uuid:dfd26d41-267a-4b6e-8953-defa0dd28dbe>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00557.warc.gz"} |
Calculation Shows Why Heavy Quarks Get Caught up in the Flow
New results will help physicists interpret experimental data from particle collisions at RHIC and the LHC and better understand the interactions of quarks and gluons
June 7, 2023
Nuclear Theory Group Leader Peter Petreczky at the STAR detector of the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory.
UPTON, NY – Using some of the world's most powerful supercomputers, a group of theorists has produced a major advance in the field of nuclear physics – a calculation of the "heavy quark diffusion coefficient." This number describes how quickly a melted soup of quarks and gluons – the building blocks of protons and neutrons, which are set free in collisions of nuclei at powerful particle colliders – transfers its momentum to heavy quarks.
The answer, it turns out, is very fast. As described in a paper just published in Physical Review Letters, the momentum transfer from the "freed up" quarks and gluons to the heavier quarks occurs at the limit of what quantum mechanics will allow. These quarks and gluons have so many short-range, strong interactions with the heavier quarks that they pull the "boulder"-like particles along with their flow.
The work was led by Peter Petreczky and Swagato Mukherjee of the nuclear theory group at the U.S. Department of Energy's Brookhaven National Laboratory, and included theorists from the Bielefeld, Regensburg, and Darmstadt Universities in Germany, and the University of Stavanger in Norway.
The calculation will help explain experimental results showing heavy quarks getting caught up in the flow of matter generated in heavy ion collisions at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven and the Large Hadron Collider (LHC) at Europe's CERN laboratory. The new analysis also adds corroborating evidence that this matter, known as a "quark-gluon plasma" (QGP), is a nearly perfect liquid, with a viscosity so low that it also approaches the quantum limit.
"Initially, seeing heavy quarks flow with the QGP at RHIC and the LHC was very surprising," Petreczky said. "It would be like seeing a heavy rock get dragged along with the water in a stream. Usually, the water flows but the rock stays."
The new calculation reveals why that surprising picture makes sense when you think about the extremely low viscosity of the QGP.
Frictionless flow
The low viscosity of matter generated in RHIC's collisions of gold ions, first reported on in 2005, was a major motivator for the new calculation, Petreczky said. When those collisions melt the boundaries of individual protons and neutrons to set free the inner quarks and gluons, the fact that the resulting QGP flows with virtually no resistance is evidence that there are many strong interactions among the quarks and gluons in the hot quark soup.
"The low viscosity implies that the 'mean free path' between the 'melted' quarks and gluons in the hot, dense QGP is extremely small," said Mukherjee, explaining that the mean free path is the distance a particle can travel before interacting with another particle.

"If you think about trying to walk through a crowd, it's the typical distance you can get before you bump into someone or have to change your course," he said.
With a short mean free path, the quarks and gluons interact frequently and strongly. The collisions dissipate and distribute the energy of the fast-moving particles and the strongly interacting QGP exhibits collective behavior – including nearly frictionless flow.

"It's much more difficult to change the momentum of a heavy quark because it's like a train – hard to stop," Mukherjee noted. "It would have to undergo many collisions to get dragged along with the flow."

But if the QGP is indeed a perfect fluid, the mean free path for the heavy quark interactions should be short enough to make that possible. Calculating the heavy quark diffusion coefficient – which is proportional to how strongly the heavy quarks are interacting with the plasma – was a way to check this understanding.
Crunching the numbers
The calculations needed to solve the equations of quantum chromodynamics (QCD) – the theory that describes quark and gluon interactions – are mathematically complex. Several advances in theory and powerful supercomputers helped to pave the way for the new calculation.
The data points on this graph show that the interactions of heavy quarks (Q) with the quark-gluon plasma (QGP) are strongest and have a short mean free path (zig zags) right around the transition
temperature (T/Tc = 1). The interaction strength (the heavy quark diffusion constant) decreases, and the mean free path lengthens, at higher temperatures.
"In 2010/11 we started using a mathematical shortcut, which assumed the plasma consisted only of gluons, no quarks," said Olaf Kaczmarek of Bielefeld University, who led the German part of this effort. Thinking only of gluons helped the team to work out their method using lattice QCD. In this method, scientists run simulations of particle interactions on a discretized four-dimensional space-time lattice. Essentially, they "place" the particles on discrete positions on an imaginary 3D grid to model their interactions with neighboring particles and see how those interactions change over time (the 4th dimension). They use many different starting arrangements and include varying distances between particles.
After working out the method with only gluons, they figured out how to add in the complexity of the quarks.
The scientists loaded a large number of sample configurations of quarks and gluons onto the 4D lattice and used Monte Carlo methods – repeated random sampling – to try to find the most probable distribution of quarks and gluons within the lattice.
"By averaging over those configurations, you get a correlation function related to the heavy quark diffusion coefficient," said Luis Altenkort, a University of Bielefeld graduate student who also worked on this research at Brookhaven Lab.
As an analogy, think about estimating the air pressure in a room by sampling the positions and motion of the molecules. "You try to use the most probable distributions of molecules based on another variable, such as temperature, and exclude improbable configurations – such as all the air molecules being clustered in one corner of the room," Altenkort said.
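In spirit (and only in spirit – real lattice QCD codes are vastly more sophisticated), that sampling-and-averaging step resembles this toy Metropolis example; the one-dimensional "configuration" and all names here are our own invention:

```python
import math, random

def metropolis_average(energy, observable, beta, n_samples=20000):
    """Toy Metropolis sampler over a single real-valued configuration x."""
    x, total = 0.0, 0.0
    for _ in range(n_samples):
        x_new = x + random.uniform(-1.0, 1.0)
        # Accept the move with probability min(1, exp(-beta * dE)),
        # which weights configurations by their Boltzmann factor
        d_e = energy(x_new) - energy(x)
        if d_e <= 0 or random.random() < math.exp(-beta * d_e):
            x = x_new
        total += observable(x)
    return total / n_samples

# Example: estimate <x^2> for E(x) = x^2/2 at beta = 1 (exact answer: 1)
print(metropolis_average(lambda x: 0.5 * x * x, lambda x: x * x, beta=1.0))
```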
In the case of the QGP, the scientists were trying to simulate a thermalized system – where even on the tiny-fraction-of-a-second timescale of heavy ion particle collisions, the quarks and gluons come to some equilibrium temperature.
They simulated the QGP at a range of fixed temperatures and calculated the heavy quark diffusion coefficient for each temperature to map out the temperature dependence of the heavy quark interaction
strength (and the mean free path of those interactions).
"These demanding calculations were possible only by using some of the world's most powerful supercomputers," Kaczmarek said. The computing resources included Perlmutter at the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility located at Lawrence Berkeley National Laboratory; Juwels Booster at the Juelich Research Center in Germany;
Marconi at CINECA in Italy; and dedicated lattice QCD GPU clusters at Thomas Jefferson National Accelerator Facility (Jefferson Lab) and at Bielefeld University.
As Mukherjee noted, "These powerful machines don't just do the job for us while we sit back and relax; it took years of hard work to develop the codes that can squeeze the most efficient performance out of these supercomputers to do our complex calculations."
The codes were developed as part of a larger collaborative effort known as Fundamental Nuclear Physics at the Exascale and Beyond, which is jointly funded by the DOE Office of Science, Office of
Advanced Scientific Computing Research and Office of Nuclear Physics through the Scientific Discovery through Advanced Computing (SciDAC) program.
Rapid thermalization, short-range interactions
The calculations show that the heavy quark diffusion coefficient is largest right at the temperature at which the QGP forms, and then decreases with increasing temperatures. This result implies that
the QGP comes to an equilibrium extremely rapidly.
"You start with two nuclei, with essentially no temperature, then you collide them and in less than one quadrillionth of a second, you get a thermal system," Petreczky said. Even the heavy quarks get thermalized.
For that to happen, the heavy quarks have to undergo many scatterings with other particles very quickly – implying that the mean free path of these interactions must be very small. Indeed, the calculations show that, at the transition to QGP, the mean free path of the heavy quark interactions is very close to the shortest distance allowable. That so-called quantum limit is established by the inherent uncertainty of knowing both a particle's position and momentum simultaneously.
This independent "measure" provides corroborating evidence for the low viscosity of the QGP, substantiating the picture of its perfect fluidity, the scientists say.

"The shorter the mean free path, the lower the viscosity, and the faster the thermalization," Petreczky said.
Simulating real collisions
Now that scientists know how the heavy quark interactions with the QGP vary with temperature, they can use that information to improve their understanding of how the actual heavy ion collision
systems evolve.
"My colleagues are trying to develop more accurate simulations of how the interactions of the QGP affect the motion of heavy quarks," Petreczky said. "To do that, they need to take into account the dynamical effects of how the QGP expands and cools down – all the complicated stages of the collisions."

"Now that we know how the heavy quark diffusion coefficient changes with temperature, they can take this parameter and plug it into their simulations of this complicated process and see what else needs to be changed to make those simulations compatible with the experimental data at RHIC and the LHC."
This effort is the subject of a major collaboration known as the Heavy-Flavor Theory (HEFTY) for QCD Matter Topical Theory Collaboration.
"We'll be able to better model the motion of heavy quarks in the QGP, and then have a better theory-to-data comparison," Petreczky said.
The work was funded by the DOE Office of Science, Office of Nuclear Physics, and by other funders for individual collaborators listed in the scientific paper.
Brookhaven National Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences
in the United States and is working to address some of the most pressing challenges of our time. For more information, visit science.energy.gov.
Follow @BrookhavenLab on Twitter or find us on Facebook
2023-21223 | INT/EXT | Newsroom | {"url":"https://www.bnl.gov/newsroom/news.php?a=121223","timestamp":"2024-11-05T12:37:35Z","content_type":"text/html","content_length":"39164","record_id":"<urn:uuid:f81be177-0df1-4e7d-b407-ca86f53df6f1>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00590.warc.gz"} |
Two-Body Problems and Graphical Analysis: Unit 1: Kinematics - Motion in One Direction
Two-Body Problems and Graphical Analysis
Study Guide
Unit 1: Kinematics - Motion in One Direction
Problems Involving Two Bodies
Frequently a problem will be given which involves the motion of two objects with different speeds, accelerations, starting points, starting times, etc. These objects will then move until one of them catches or meets the other. Since each problem of this type will be different, it is difficult to give concrete rules regarding their solution. However, the following suggestions may give you some help in attacking the problems. List everything known about the motion of the objects. You will need a separate list for each object. Frequently it will be necessary to give a quantity for one object in terms of the same quantity for the other. For example, if the problem involved a car catching up with another car which had a head start of 500 ft, you might give ΔX for one car as Z and ΔX for the other as Z + 500. You may need to express more than one variable this way. After listing all known quantities and expressing all unknown quantities as variables, you should start writing equations. You may end with a set of simultaneous equations which you solve using algebra. Again, you learn best by doing it. Study the following examples and then work on the assigned problems.
Sample Problem #1
Two boys are in a bicycle race. John can accelerate with a constant acceleration of 0.5 m/s² and Paul can accelerate with a constant acceleration of 0.75 m/s². Paul gives John a head start of 5 seconds.

(a) Find how long before Paul catches John.
(b) Calculate the speed of both riders when Paul catches John.
(c) How far do they travel before Paul catches John?
Sample Solution #1
Note that this time we have expressed the two unknowns by variables. Use equation 5 to express the distance traveled by both John and Paul in terms of a and t. Then solve the set of equations you get for T and Z.

For John: Z = ½(0.5)T²

For Paul: Z = ½(0.75)(T − 5)²

Combine these equations to obtain

½(0.5)T² = ½(0.75)(T − 5)²

You should show that the solution to this equation is T = 27.3 s and Z = 186 m. Therefore Paul catches John 27.3 seconds after John starts. When they are even both will have traveled 186 m. You could check your work by using Paul's time (22.3 s) and acceleration (0.75 m/s²) to see if his distance is also 186 m.
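A quick numerical check of this algebra (a sketch in Python; the coefficients come from expanding the equation above):

```python
import numpy as np

# (1/2)(0.5)T^2 = (1/2)(0.75)(T - 5)^2  ->  -0.125 T^2 + 3.75 T - 9.375 = 0
T = max(np.roots([-0.125, 3.75, -9.375]).real)  # keep the root with T > 5
Z = 0.5 * 0.5 * T**2
print(T, Z)  # -> about 27.3 s and 186 m, matching the solution
```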
Graphical Analysis
Frequently a graph can be a useful tool in studying the motion of an object. The graphs do not always need to be constructed. Often a good sketch will help in the analysis of the motion. The two
graphs which are generally most likely to be useful are graphs of position vs. time or speed vs. time. The following facts should be kept in mind:
1. In a position vs. time graph the slope of the line at any point represents the speed at that instant.
2. In a speed vs. time graph the slope of the line at any point represents the acceleration of the object at that instant.
3. In a graph of speed vs. time the area between the line and the time (horizontal) axis between two points is the distance traveled by the object between those two times.
4. In an acceleration vs. time graph the area between the line and the time axis between two points is the change in speed during that time interval. (A short numerical illustration of facts 2 and 3 follows this list.) The use of graphs in solving problems is shown in the problems below. Study these problems carefully and be alert for places in your problem solving where these techniques can be helpful.
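Here is that short illustration (the numbers are hypothetical, not taken from any problem on this page):

```python
import numpy as np

# Hypothetical speed-vs-time samples for uniform acceleration
t = np.linspace(0.0, 10.0, 101)   # time, s
v = 0.5 * t                       # speed, ft/s

accel = np.gradient(v, t)         # fact 2: slope of the v-t line
dist = np.trapz(v, t)             # fact 3: area under the v-t line

print(accel[0], dist)             # -> 0.5 ft/s^2 and 25.0 ft
```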
Sample Problem #2
Represent the motion of the two boys in Sample Problem #1 by graphs and use the graphs to solve the problem.
Sample Solution #2
The most useful graph is frequently the graph of speed vs. time, since acceleration, speed, and displacement can all be obtained from it. Sketch a graph showing the motion of both objects on the same set of axes.
Figure 1.4.1
Study Figure 1.4.1 carefully. The line representing John's motion has a constant slope of 0.5 m/s² starting at the beginning of the motion. The line representing Paul's motion remains at zero until 5 s have elapsed and then has a constant slope of 0.75 m/s². John's speed at any time is the slope multiplied by the elapsed time (0.5T). Paul's speed will be zero until 5 s have elapsed and after that will be 0.75(T − 5). The point at which the two lines cross is the time when the two boys have the same speed; it is not the point at which they have moved the same distance. Before that point in time John is moving faster than Paul and is pulling ahead of him. After that time Paul is moving faster than John and is gaining on him. Paul will catch up with John when the areas under the two lines are equal. The two areas involved are both triangles. The area for John is a triangle with a base T (where T is the time they come together) and a height of 0.5T. The area for Paul's motion is a triangle with a base of (T − 5) and a height of 0.75(T − 5). Since we wish to know when the boys are at the same distance from the start, we set the two areas equal to obtain

½(0.5)T² = ½(0.75)(T − 5)²
Note that this is the same equation which was obtained by different reasoning the first time we solved this problem. The rest of the problem proceeds much as before. We really have introduced
nothing new but have only shown a different way of seeing the motion.
Sample Problem #3
You have a car which can accelerate from 0 to 60 mph (88 ft/s) in seven seconds. A car traveling at a constant 40 mph passes you just as you start up from a stop light. The speed limit is 60 mph. Doing your transmission no favor, but obeying the law:

(a) how long will it take you to catch the other car?
(b) how far will you have gone?
Sample Solution #3
It is probably best in a problem such as this to express all speeds in ft/s and all distances in ft. Since 60 mph is 88 ft/s, 40 mph must be 58.7 ft/s. Your car can accelerate from 0 to 88 ft/s in 7 s, so its acceleration must be 88/7 = 12.6 ft/s². The motion of the two cars is shown in Figure 1.4.2.

Figure 1.4.2
Please note that this problem has an added complication: not only does it involve the motion of two objects, the motion of one of them (your car) has to be divided into two parts. The other car has a steady speed of 58.7 ft/s as shown by the horizontal line. Your car starts at rest and has a constant acceleration of 12.6 ft/s² for seven seconds and then travels at a constant speed of 88 ft/s until it catches the other car. You will catch the other car when the areas under the two curves are equal. The area under the curve for the other car is a rectangle with a base of T (the time you catch him) and a height of 58.7 ft/s (his speed). The area under the curve for your car is the sum of two areas: a triangle with a base of 7 seconds and a height of 88 ft/s, and a rectangle with a base of (T − 7) seconds and a height of 88 ft/s. Setting these two areas equal gives:

58.7T = ½(7)(88) + 88(T − 7)

The student can show that the solution to this equation is T = 10.5 s. Use this value of T to obtain the distance moved by either of the cars, which will come out to be 616 ft.
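The same kind of numerical check works here (the equation is the one just above):

```python
# 58.7*T = 0.5*7*88 + 88*(T - 7)  ->  a linear equation in T
T = (0.5 * 7 * 88 - 88 * 7) / (58.7 - 88)
print(T, 58.7 * T)  # -> about 10.5 s and about 617 ft (616 ft rounding T to 10.5 s)
```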
Two motorcycles are at rest and separated by 24.5 ft. They start simultaneously in the same direction, the one in the back having an acceleration of 3.00 ft/sec² and the one in the front with an acceleration of 2.00 ft/sec². How long does it take for the faster cycle to overtake the slower? How far does the faster machine go before it catches up? How fast is each cycle going at this time?

Superman is standing in a window of a building 100 ft above the street. A baby hurtles past, having fallen from a window 50.0 ft higher. With what acceleration must Superman descend to catch the baby just before it is too late?

A balloon rises from the earth with a constant speed of 10.0 ft/sec. A stone dropped from the balloon reaches the ground in 3.00 sec. Find the height of the balloon at the instant the stone was dropped; the height of the balloon when the stone reaches the ground; the speed of the stone as it reaches the ground.

A man is working in a basket attached to a balloon which is rising at a uniform rate of 32.2 ft/sec with respect to the ground, when he accidentally drops overboard a hammer which weighs 0.279 lb. The hammer strikes the earth 10.0 sec later. What is the velocity of the hammer with respect to the ground at the instant it is dropped? How high was the balloon at the instant the hammer was dropped? If the balloon continues at a constant speed, how high will it be when the hammer strikes the ground?

Who would win a 100 yd dash, a runner who can cover the distance in 10.0 sec, or an automobile which can accelerate to 60 mi/hr (88 ft/sec) from rest in 16.0 sec? By how much?

An automobile and a truck start from rest at the same instant, with the automobile initially at some distance behind the truck. The truck and automobile have constant accelerations of 4.0 and 6.0 ft/sec², respectively, and the automobile overtakes the truck after the truck has moved 150 ft. How long does it take the auto to overtake the truck? How far was the auto behind the truck initially? What is the speed of each when they are abreast?

A truck goes through a red light and continues to travel at 30 mi/hr. At the instant the truck passes it at the intersection, a waiting car assumes that the light has changed and starts up in the same direction as the truck with a constant acceleration of 12 ft/sec². Two seconds later a motorcycle officer leaves the intersection with a constant acceleration. If he wishes to catch both offenders, what should be the value of his acceleration to reach the car just as it is passing the truck?
The site administrator fields questions from visitors. | {"url":"https://www.theproblemsite.com/reference/science/physics/study-guide/kinematics/two-body-problems-and-graphical-analysis","timestamp":"2024-11-06T14:48:59Z","content_type":"text/html","content_length":"41963","record_id":"<urn:uuid:38d18ccf-c65b-435e-a6b4-433af7a57aa8>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00670.warc.gz"} |