Section 2.11.1: Spin-Orbit Coupling - Development of Potential Energy

2.11 Spin-Orbit Coupling

The rotation of planets and natural satellites is affected by the gravitational forces from other celestial bodies. As an extended application of the Lagrangian method for forced rigid bodies, we consider the rotation of celestial objects subject to gravitational forces. In this section, we will develop expressions for the potential energy of the gravitational interaction of an "extended body" (i.e. not a point mass) with an external point mass. Combined with the rigid-body kinetic energy, this can be used to create Lagrangians that model a number of systems.

2.11.1 Development of the Potential Energy

The rigid body can be modeled as a collection of particles subject to rigid constraints. As with the kinetic energy described in the previous section, the potential energy of a rigid body can be expressed in terms of the moments of inertia and, later, in terms of generalized coordinates.

Figure 2.9: The gravitational potential energy of a point mass and a rigid body is the sum of the gravitational potential energy of the point mass with each constituent mass element of the rigid body.

The gravitational potential energy of a rigid body is the sum of the potential energies of the individual point masses that make up the rigid body:

$$ V = - \sum_\alpha \frac{G M' m_\alpha}{r_\alpha} $$

where $M'$ is the mass of the external point mass that is the source of the gravity, $r_\alpha$ is the distance from the point mass to the constituent particle in the rigid body, and $G$ is the universal gravitational constant. The position of the constituent particle can be resolved as:

$$ \vec{r}_\alpha = \vec{R} - \vec{\xi}_\alpha $$

where $\vec{R}$ is the vector from the external point mass to the center of mass of the rigid body.
With this formulation, the distance to the particle becomes $r_\alpha = \left(R^2 + \xi^2_\alpha - 2R\xi_\alpha \cos\theta_\alpha\right)^{1/2}$, where $\theta_\alpha$ is the angle between the lines from the center of mass to the constituent particle and to the point mass (see Figure 2.9). Since this is a three-dimensional body, $\xi_\alpha$ and $\theta_\alpha$ cannot uniquely specify the position of the constituent particle. However, these are the only parameters required to define the potential energy, since it is simply a function of the distance between the particle and the distant point mass. The potential energy is therefore:

$$ \begin{eqnarray} V &=& -GM' \sum_\alpha \frac{m_\alpha}{\left(R^2 + \xi^2_\alpha - 2R\xi_\alpha \cos\theta_\alpha\right)^{1/2}}\\ &=& -GM' \sum_\alpha m_\alpha \left(R^2 + \xi^2_\alpha - 2R\xi_\alpha \cos\theta_\alpha\right)^{-1/2} \end{eqnarray} $$

This can be expanded using Legendre polynomials. The Legendre polynomials $P_l$ may be obtained by expanding the expression $(1 + y^2 - 2yx)^{-1/2}$ as a power series in $y$; the coefficient of $y^l$ is $P_l(x)$. The first few Legendre polynomials are $P_0(x) = 1$, $P_1(x) = x$, $P_2(x)=\frac{3}{2}x^2-\frac{1}{2}$, and so on. With $y = \xi_\alpha/R$ and $x = \cos\theta_\alpha$, we get the following expression for the potential energy:

$$ V = -\frac{GM'}{R} \sum_\alpha m_\alpha \sum_l \frac{\xi^l_\alpha}{R^l}P_l(\cos\theta_\alpha) $$

Successive terms in this expansion typically fall off quickly, since the distances between celestial bodies are vastly larger than the bodies themselves. Each term in the series has an upper bound: the Legendre polynomials all have magnitude at most one for arguments between $-1$ and $1$ (which $\cos\theta_\alpha$ satisfies), and the distances $\xi_\alpha$ are all less than some maximum size $\xi_{max}$. Therefore, the sum over $m_\alpha$ times the upper bound of each term is just $M$ times that upper bound.
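The generating-function property of the Legendre polynomials, $(1 - 2yx + y^2)^{-1/2} = \sum_l y^l P_l(x)$ for $|y| < 1$, can be checked numerically. The sketch below (illustrative, not from the workbook) evaluates $P_l$ with Bonnet's recurrence in pure Python:

```python
def legendre(l, x):
    # Bonnet's recurrence: (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x)
    p_prev, p_curr = 1.0, x
    if l == 0:
        return p_prev
    for n in range(1, l):
        p_prev, p_curr = p_curr, ((2*n + 1) * x * p_curr - n * p_prev) / (n + 1)
    return p_curr

def generating_series(x, y, lmax=60):
    # Partial sum of sum_l y^l P_l(x); converges for |y| < 1
    return sum(y**l * legendre(l, x) for l in range(lmax + 1))

# x plays the role of cos(theta_alpha), y the role of xi_alpha / R
x, y = 0.3, 0.2
closed_form = (1 + y**2 - 2*y*x) ** -0.5
```

Here `y = 0.2` stands in for a body whose size is a fifth of the separation; for real celestial bodies the ratio, and hence the convergence, is far better.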
Therefore,

$$ \left|\sum_\alpha m_\alpha \frac{\xi^l_\alpha}{R^l}P_l(\cos\theta_\alpha)\right| \leq M \frac{\xi^l_{max}}{R^l} $$

so successive terms decrease by a factor of $\frac{\xi_{max}}{R}$. A body whose gravity is sufficiently strong overcomes the material strength of the rigid body and pulls it into a sphere over time; the higher-order terms in the series measure the deviation of the rigid body from a sphere. We can truncate the series at different values of $l$ depending on the fidelity required. For $l=0$, the sum over $\alpha$ gives the total mass $M$ of the rigid body. For $l=1$, the sum is zero because $\vec{\xi}_\alpha$ is defined relative to the center of mass. For $l=2$, the sum can be written in terms of the moments of inertia of the body:

$$ \begin{align*} \sum_\alpha m_\alpha \xi^2_\alpha P_2(\cos\theta_\alpha) &= \sum_\alpha m_\alpha \xi^2_\alpha\left(\frac{3}{2}\cos^2\theta_\alpha -\frac{1}{2} \right)\\ &= \sum_\alpha m_\alpha \xi^2_\alpha\left(\frac{3}{2}(1 - \sin^2\theta_\alpha) -\frac{1}{2} \right)\\ &= \sum_\alpha m_\alpha \xi^2_\alpha\left(1 - \frac{3}{2}\sin^2\theta_\alpha \right)\\ &= \frac{1}{2}\left( A + B + C - 3I \right)\tag{2.97} \end{align*} $$

where $A, B, C$ are the principal moments of inertia and $I$ is the moment of inertia of the body about the line between the center of mass of the body and the external point mass. (The last step uses $A + B + C = 2\sum_\alpha m_\alpha \xi^2_\alpha$ together with $I = \sum_\alpha m_\alpha \xi^2_\alpha \sin^2\theta_\alpha$.) Note that $I$ depends on the orientation of the body with respect to the line between the bodies. Therefore, the potential energy of the body with terms up to $l=2$ is:

$$ V = -\frac{GM M'}{R} - \frac{GM'}{2R^3} \left( A + B + C - 3I \right)\tag{2.98} $$

This is also called MacCullagh's formula.

Figure 2.10: The orientation of the rigid body is specified by the three angles from the line between the centers and the principal axes.

Figure 2.10 shows a method for computing $I$ in terms of the principal moments of inertia.
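MacCullagh's approximation can be sanity-checked against the exact particle sum. The sketch below (a toy body with hypothetical masses and distances, not from the text) uses the quadrupole correction $-\frac{GM'}{2R^3}(A + B + C - 3I)$ and compares it with the direct sum over constituent particles:

```python
import math

# Toy rigid body: six unit point masses in symmetric pairs along the
# principal axes, so the center of mass sits at the origin.
G, Mp = 1.0, 1.0                      # units with G = 1; Mp is M'
a, b, c = 1.0, 0.7, 0.4
body = [( a, 0, 0), (-a, 0, 0),
        ( 0, b, 0), ( 0,-b, 0),
        ( 0, 0, c), ( 0, 0,-c)]
masses = [1.0] * 6
M = sum(masses)

# Principal moments of inertia about the x, y, z axes.
A = sum(m*(y*y + z*z) for m, (x, y, z) in zip(masses, body))
B = sum(m*(x*x + z*z) for m, (x, y, z) in zip(masses, body))
C = sum(m*(x*x + y*y) for m, (x, y, z) in zip(masses, body))

# External point mass at distance R along the unit vector n.
R = 50.0
n = (1/3, 2/3, 2/3)

# Moment of inertia about the line joining the centers.
I = sum(m*(x*x + y*y + z*z - (x*n[0] + y*n[1] + z*n[2])**2)
        for m, (x, y, z) in zip(masses, body))

# Exact potential: direct sum over the constituent particles.
P = tuple(R*ni for ni in n)
V_exact = -sum(G*Mp*m / math.dist(P, p) for m, p in zip(masses, body))

# Truncation through l = 2 (MacCullagh).
V_mac = -G*M*Mp/R - G*Mp/(2*R**3) * (A + B + C - 3*I)
```

With the body a factor of ~50 smaller than the separation, the residual beyond $l=2$ is of order $(\xi_{max}/R)^2$ relative to the quadrupole term, so the two potentials agree to many digits while the quadrupole correction itself is still resolvable.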
Let $\theta_a$, $\theta_b$, and $\theta_c$ be the angles of the principal axes $\hat{a}, \hat{b}, \hat{c}$, respectively, from the line connecting the center of mass and the point mass. Then $I$ can be found to be:

$$ I = A \cos^2\theta_a + B \cos^2\theta_b + C \cos^2\theta_c $$

The potential energy is then:

$$ V = -\frac{GM M'}{R} - \frac{GM'}{2R^3} \left[ (1-3\cos^2\theta_a)A + (1-3\cos^2\theta_b)B + (1-3\cos^2\theta_c)C\right]\tag{2.99} $$
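Equation (2.99) follows from (2.98) by substituting the direction-cosine expression for $I$, since $A + B + C - 3I = (1-3\cos^2\theta_a)A + (1-3\cos^2\theta_b)B + (1-3\cos^2\theta_c)C$. A quick numerical check of that identity, with illustrative values only:

```python
# Hypothetical principal moments and direction cosines (chosen so the
# cosines form a unit vector: (1/3)^2 + (2/3)^2 + (2/3)^2 = 1).
A, B, C = 2.0, 3.0, 4.0
ca, cb, cc = 1/3, 2/3, 2/3   # cos(theta_a), cos(theta_b), cos(theta_c)

# Moment of inertia about the line joining the centers.
I = A*ca**2 + B*cb**2 + C*cc**2

# The bracketed combination appearing in (2.99).
bracket = (1 - 3*ca**2)*A + (1 - 3*cb**2)*B + (1 - 3*cc**2)*C
```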
Explore printable derivatives of exponential functions worksheets for 12th Grade

Derivatives of exponential functions worksheets for Grade 12 are an essential resource for teachers looking to enhance their students' understanding of calculus concepts. These worksheets provide a variety of problems that challenge students to apply their knowledge of exponential functions and their derivatives, helping them to develop a strong foundation in this critical area of mathematics. As a teacher, you know that practice is key to mastering any subject, and these worksheets offer ample opportunities for your Grade 12 students to hone their skills. By incorporating these worksheets into your lesson plans, you can ensure that your students are well-prepared for the challenges of advanced math courses and the rigors of college-level calculus. In addition to derivatives of exponential functions worksheets for Grade 12, teachers can also utilize Quizizz to create engaging and interactive learning experiences for their students. Quizizz is an online platform that allows you to create quizzes, polls, and other interactive activities that can be easily integrated into your lesson plans.
With Quizizz, you can create custom quizzes that align with your curriculum and target specific skills, such as understanding exponential functions and their derivatives. Furthermore, Quizizz offers a vast library of pre-made quizzes and resources, covering a wide range of topics in math, calculus, and other subjects. By incorporating Quizizz into your teaching toolkit, you can provide your Grade 12 students with a dynamic and engaging way to practice and reinforce their understanding of derivatives of exponential functions and other essential calculus concepts.
Grade 6 2024-2025 6th Grade Supply List

English Language Arts Curriculum

To build a foundation for college and career readiness, students must read widely and deeply from among a broad range of high-quality, increasingly challenging literary and informational texts. Through extensive reading of stories, dramas, poems, and myths from diverse cultures and different time periods, students gain literary and cultural knowledge as well as familiarity with various text structures and elements. By reading texts in history/social studies, science, and other disciplines, students build a foundation of knowledge in these fields that will also give them the background to be better readers in all content areas.

Into Literature: Into Literature provides sixth graders with appropriately rigorous and high-quality texts, which students have the option to read or follow along with audio. The questions and tasks support close reading and critical analysis. The materials support knowledge building as well as attending to growing vocabulary and independence in literacy skills.

Fountas & Pinnell: The Fountas & Pinnell Benchmark Assessment Systems are accurate and reliable tools PS 86 teachers use to identify the instructional and independent reading levels of students. This assessment tool is also used to document student progress through one-on-one formative and summative assessments.

Students will learn about the writing process as they publish writing pieces throughout the year to prepare them for Performance-Based Assessments (PBAs). Sixth graders will be exposed to various writing genres: narrative fiction, non-fiction, poetry, and opinion writing.

Math Curriculum

Into Math: Into Math uses an approach focused on a growth mindset for students and real feedback from teachers to drive growth for each and every learner. It prepares students to tackle any problem, supported by a teacher who has the tools and instructional techniques needed to ensure success.
Every lesson begins with rigor right from the start. Independent learning tasks encourage students to practice productive perseverance by jumping into a new challenge or working collaboratively to solve problems while teachers guide and differentiate instruction. In Grade 6, instructional time should focus on four critical areas: (1) connecting ratio and rate to whole number multiplication and division and using concepts of ratio and rate to solve problems; (2) completing understanding of division of fractions and extending the notion of number to the system of rational numbers, which includes negative numbers; (3) writing, interpreting, and using expressions and equations; and (4) developing understanding of statistical thinking. 1. Students use reasoning about multiplication and division to solve ratio and rate problems about quantities. By viewing equivalent ratios and rates as deriving from, and extending, pairs of rows (or columns) in the multiplication table, and by analyzing simple drawings that indicate the relative size of quantities, students connect their understanding of multiplication and division with ratios and rates. Thus students expand the scope of problems for which they can use multiplication and division to solve problems, and they connect ratios and fractions. Students solve a wide variety of problems involving ratios and rates. 2. Students use the meaning of fractions, the meanings of multiplication and division, and the relationship between multiplication and division to understand and explain why the procedures for dividing fractions make sense. Students use these operations to solve problems. Students extend their previous understandings of number and the ordering of numbers to the full system of rational numbers, which includes negative rational numbers, and in particular negative integers. They reason about the order and absolute value of rational numbers and about the location of points in all four quadrants of the coordinate plane. 3. 
Students understand the use of variables in mathematical expressions. They write expressions and equations that correspond to given situations, evaluate expressions, and use expressions and formulas to solve problems. Students understand that expressions in different forms can be equivalent, and they use the properties of operations to rewrite expressions in equivalent forms. Students know that the solutions of an equation are the values of the variables that make the equation true. Students use properties of operations and the idea of maintaining the equality of both sides of an equation to solve simple one-step equations. Students construct and analyze tables, such as tables of quantities that are in equivalent ratios, and they use equations (such as 3x = y) to describe relationships between quantities. 4. Building on and reinforcing their understanding of numbers, students begin to develop their ability to think statistically. Students recognize that a data distribution may not have a definite center and that different ways to measure center yield different values. The median measures center in the sense that it is roughly the middle value. The mean measures center in the sense that it is the value that each data point would take on if the total of the data values were redistributed equally, and also in the sense that it is a balance point. Students recognize that a measure of variability (interquartile range or mean absolute deviation) can also be useful for summarizing data because two very different sets of data can have the same mean and median yet be distinguished by their variability. 5. Students learn to describe and summarize numerical data sets, identifying clusters, peaks, gaps, and symmetry, considering the context in which the data were collected. Students in Grade 6 also build on their work with area in elementary school by reasoning about relationships among shapes to determine area, surface area, and volume. 
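The observation above, that two very different data sets can have the same mean and median yet be distinguished by their variability, can be illustrated with a small hypothetical example using Python's standard statistics module (the data values are invented for illustration):

```python
import statistics

# Two data sets with the same mean and median but different spread.
a = [4, 5, 6, 7, 8]
b = [0, 2, 6, 10, 12]

assert statistics.mean(a) == statistics.mean(b) == 6
assert statistics.median(a) == statistics.median(b) == 6

def mad(data):
    # Mean absolute deviation: average distance of points from the mean.
    m = statistics.mean(data)
    return statistics.mean(abs(x - m) for x in data)
```

The mean absolute deviation separates the two sets (1.2 for the tight set, 4.0 for the spread-out one) even though their centers coincide.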
They find areas of right triangles, other triangles, and special quadrilaterals by decomposing these shapes, rearranging or removing pieces, and relating the shapes to rectangles. Using these methods, students discuss, develop, and justify formulas for areas of triangles and parallelograms. Students find areas of polygons and surface areas of prisms and pyramids by decomposing them into pieces whose area they can determine. They reason about right rectangular prisms with fractional side lengths to extend formulas for the volume of a right rectangular prism to fractional side lengths. They prepare for work on scale drawings and constructions in Grade 7 by drawing polygons in the coordinate plane.

Cooperative Problem Solving: Twice a month, students will work in groups to solve challenging math problems. Students will work on collaboration, questioning, and presentation skills in addition to developing critical thinking.

Problem of the Day: Students are given a daily word problem that is repeated practice of previously learned material. Problem of the day helps students build automaticity in math through continuous practice. Students use a math rubric to self-assess their work and the work of their peers.

Grade 6 Amplify Science

The Amplify Science Grade 6 Course includes seven units that support students in meeting the NGSS. The following unit summaries demonstrate how students engage in three-dimensional learning to answer and solve real-world questions and problems.

Unit 1: Harnessing Human Energy How can rescue workers get energy for their equipment during rescue missions? Energy-harvesting backpacks, rocking chairs, and knee braces are just a few of the devices that have been created to capture human energy and use it to power electrical devices. Students assume the role of student energy scientists in order to help a team of rescue workers find a way to get energy to the batteries in their equipment during rescue missions.
To do so, students learn about potential and kinetic energy, energy conversions, and energy transformations. Unit 2: Thermal Energy Which heating system will best heat Riverdale School? In their role as student thermal scientists, students work with the principal of a fictional school, Riverdale School, in order to help the school choose a new heating system. They compare a system that heats a small amount of water with one that uses a larger amount of cooler groundwater. Students discover that observed temperature changes can be explained by the movement of molecules, which facilitates the transfer of kinetic energy from one place to another. As they analyze the two heating system options, students learn to distinguish between temperature and energy, and to explain how energy will transfer from a warmer object to a colder object until the temperature of the two objects reaches equilibrium. Unit 3: Populations and Resources What caused the size of the moon jelly population in Glacier Sea to increase? Glacier Sea has seen an alarming increase in the moon jelly population. In the role of student ecologists, students investigate reproduction, predation, food webs, and indirect effects to discover the cause. Jellyfish population blooms have become common in recent years and offer an intriguing context to learn about populations and resources. Unit 4: Matter and Energy in Ecosystems Why did the biodome ecosystem collapse? Students examine the case of a failed biodome, an enclosed ecosystem that was meant to be self-sustaining but which ran into problems. In the role of ecologists, students discover how all the organisms in an ecosystem get the resources they need to release energy. Carbon cycles through an ecosystem due to organisms' production and use of energy storage molecules. Students build an understanding of this cycling, including the role of photosynthesis, as they solve the mystery of the biodome collapse.
Unit 5: Weather Patterns Why have recent rain storms in Galetown been so severe? Weather is a complex system that affects our daily lives. Understanding how weather events, such as severe rainstorms, take place is important for students to conceptualize weather events in their own community. Students play the role of student forensic meteorologists as they discover how water vapor, temperature, energy transfer, and wind influence local weather patterns in a fictional town called Galetown. They use what they have learned to explain what may have caused rainstorms in Galetown to be unusually severe in recent years. Unit 6: Ocean, Atmosphere, and Climate During El Niño years, why is Christchurch, New Zealand’s air temperature cooler than usual? Students act as student climatologists helping a group of farmers near Christchurch, New Zealand figure out the cause of significantly colder air temperatures in New Zealand during the El Niño climate event. To solve the puzzle, students investigate what causes regional climates. They learn about energy from the sun and energy transfer between Earth’s surface and atmosphere, ocean currents, and prevailing winds. Unit 7: Earth's Changing Climate Why is the ice on Earth’s surface melting? In the role of student climatologists, students investigate what is causing ice on Earth’s surface to melt in order to help the fictional World Climate Institute educate the public about the processes involved. Students consider claims about changes to energy from the sun, to the atmosphere, to Earth’s surface, or in human activities as contributing to climate change.
Geometry - Teach Think Elementary Every teacher knows that even the best kids go a little loopy around holidays. And the bigger the holiday (or the more sugar involved!) the more crazy things can get. It can be super tempting to just throw on a movie or slap together some craftivities. And honestly,... Most of us naturally tend to think to teach Geometry math units at the end of the school year. Most commercial math curricula leave it for last. Even the Common Core Math Standards put Geometry at the end of the list. It makes sense to our teacher brains- leave the... What is a Geometry Sort? A geometry sort is when students are classifying shapes into categories based on the geometric attributes of those shapes. Why Use Geometry Sorts to Teach Geometric Attributes? -Geometry sorts help students focus on the identifying geometric... Classifying quadrilaterals is one of those skills that adults love to make fun of with those, ‘I didn’t learn anything useful, but I learned the difference between a rhombus and a trapezoid.’ jokes that teachers hate. The thing is, when students learn to... I love when math strands overlap and students can see the connections between them. Fractions and Geometry overlap quite a bit. There are three ways to think about fractions: 1. The Set Model: This model works with fractions of groups, like ¼ of 24, so ¼... Visit my Teachers Pay Teachers store:
Get some topic labels using a function

Calls GetProbableTerms with some rules to get topic labels. This function is in "super-ultra-mega alpha"; use at your own risk/discretion.

Usage:

LabelTopics(assignments, dtm, M = 2)

Arguments:

assignments: A documents-by-topics matrix similar to theta. This will work best if this matrix is sparse, with only a few non-zero topics per document.

dtm: A document term matrix of class matrix or dgCMatrix. The columns of dtm should be n-grams whose colnames have a "_" where spaces would be between the words.

M: The number of n-gram labels you want to return. Defaults to 2.

Value:

Returns a matrix whose rows correspond to topics and whose j-th column corresponds to the j-th "best" label assignment.

Examples:

# make a dtm with unigrams and bigrams
m <- nih_sample_topic_model

# zero out small topic weights and renormalize each row
assignments <- t(apply(m$theta, 1, function(x){
  x[x < 0.05] <- 0
  x / sum(x)
}))

assignments[is.na(assignments)] <- 0

labels <- LabelTopics(assignments = assignments, dtm = m$data, M = 2)
#> dtm does not appear to contain ngrams. Using unigrams but ngrams will work much better.
What is the diameter of an atom?

The diameter of an atom is typically around 0.1 nm, or 1 × 10⁻¹⁰ m. The thickness of a piece of paper is typically around 0.05 mm, or 5 × 10⁻⁵ m. Therefore, a piece of paper is about half a million atoms thick.

What determines the diameter of an atom?

Because electrons are what take up space in atoms, the size of the biggest filled orbital determines the size of the atom or ion. Sizes of orbitals depend on the quantum numbers (n = 1, n = 3, etc.) and also on the effective nuclear charge.

What is the diameter of the atom and the nucleus?

The diameter of a nucleus is about 2 × 10⁻¹⁵ m and the diameter of an atom is 1 × 10⁻¹⁰ m.

What is the average diameter of an atom?

The atom is about 10⁻¹⁰ meters (or 10⁻⁸ centimeters) in size. This means a row of 10⁸ (or 100,000,000) atoms would stretch a centimeter, about the size of your fingernail.

What is the average size of an atom?

Size: Atoms have an average radius of about 0.1 nm. About 5 million hydrogen atoms could fit into a pin head. The nucleus of an atom is 10,000 times smaller than the atom.

What is the diameter of a typical molecule?

The diameter of a molecule, assuming it to be spherical, has a numerical value of 10⁻⁸ centimeter multiplied by a factor dependent on the compound or element. Gases with a larger molecular diameter diffuse slower across the prepared membrane [21-23].

What is the radius of an atom?

The atomic radius is defined as one-half the distance between the nuclei of identical atoms that are bonded together. Figure 1: The atomic radius (r) of an atom can be defined as one half the distance (d) between two nuclei in a diatomic molecule. The units for atomic radii are picometers, equal to 10⁻¹² meters.

What is the diameter of a cell nucleus?

The nucleus is the largest organelle in animal cells. In mammalian cells, the average diameter of the nucleus is approximately 6 micrometres (µm).
What is an average radius of an atom?

The radius of an atom is about 0.1 nm (1 × 10⁻¹⁰ m).

What is the diameter of an electron?

It is concluded that the diameter of the electron is comparable in magnitude with the wavelength of the shortest γ-rays. Using the best available values for the wavelength and the scattering by matter of hard X-rays and γ-rays, the radius of the electron is estimated as about 2 × 10⁻¹⁰ cm.

What makes up the size of an atom?

Atomic size is the distance from the nucleus to the valence shell where the valence electrons are located. It is affected by the separation that occurs because electrons have the same charge, by the number of protons in the nucleus, and by the core electrons in an atom, which interfere with the attraction of the nucleus for the outermost electrons.

What is the kinetic diameter of oxygen?

List of kinetic diameters:

Molecule            Formula   Kinetic diameter (pm)
Oxygen              O2        346
Hydrogen sulfide    H2S       360
Hydrogen chloride   HCl       320

How do you increase the size of an atom?

The more pull, the tighter the electrons orbit. The other trend of atomic radius or atom size occurs as you move vertically down an element group. This direction increases the size of the atom. Again, this is due to the effective charge at the nucleus.

What is the unit used to measure the diameter of an atom?

Generally the radius and diameter of an atom are measured in angstrom units: 1 angstrom (Å) = 10⁻¹⁰ m. We can't measure the radius or diameter of an atom using a single atom.

What determines the size of the atom?

The size of an atom depends on how many protons and neutrons it has, as well as whether or not it has electrons. A typical atom size is around 100 picometers, or about one ten-billionth of a meter.

What part of an atom determines its size?

The nucleus has most of the mass of the atom but has little size, so the atom's size is determined by the electrons.
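The order-of-magnitude claims above (a sheet of paper about half a million atoms thick; about 10⁸ atoms in a centimeter) can be checked with a few lines of arithmetic, using the typical values quoted:

```python
atom_diameter_m = 1e-10        # typical atomic diameter (~0.1 nm)
paper_thickness_m = 5e-5       # typical paper thickness (~0.05 mm)

# A sheet of paper is about half a million atoms thick.
atoms_per_sheet = paper_thickness_m / atom_diameter_m

# A 1 cm row holds about 10^8 atoms.
atoms_per_cm = 1e-2 / atom_diameter_m
```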
Free Printable Pythagorean Theorem Worksheets

Math Geometry Pythagorean Theorem Worksheets: These printable worksheets have exercises on finding the leg and hypotenuse of a right triangle using the Pythagorean theorem. Pythagorean triple charts with exercises are provided here, and word problems on real-life applications are available. Our Pythagorean Theorem worksheets are free to download, easy to use, and very flexible. They are a great resource for children in 6th Grade, 7th Grade, and 8th Grade. Click here for a Detailed Description of all the Pythagorean Theorem Worksheets.

Free Printable Pythagorean Theorem Worksheet With Answers
Level 1: Identify the Right Angle Triangles
Level 2: Identify the Pythagorean Triples
Level 3: Find the Missing Side Length of Right Angle Triangles
Level 4: Real World Pythagorean Theorem Word Problems (2 Pages)
The Man, the Legend: Pythagoras of Samos

Get into high gear with our free printable Pythagorean theorem worksheets. Pythagoras's theorem plays a role in topics like trigonometry. Our Pythagorean theorem worksheet PDFs include finding the hypotenuse, identifying Pythagorean triples, identifying a right triangle using the converse of the theorem, and more. According to the definition of the Pythagorean theorem, the formula is written as c² = a² + b². When a triangle has a right angle, we can use the sum of the squares of each leg of the triangle to find the squared value of the hypotenuse. The formula can be rearranged to find the length of any of the sides. These Pythagorean Theorem worksheets are downloadable and printable, and come with corresponding printable answer pages. In mathematics, the Pythagorean theorem is a relation in Euclidean geometry among the three sides of a right triangle.

There are five sets of Pythagorean Theorem worksheets: find the length of the hypotenuse; find the length of a side; find the length in two steps; check for a right triangle; and Pythagorean Theorem word problems. The Pythagorean Theorem, also called the Pythagoras Theorem, is a fundamental principle in geometry that relates the sides of a right triangle.

Sample exercise (The Pythagorean Theorem, Kuta Software): Do the following lengths form a right triangle? 1) 6, 8, 9: No. 2) 5, 12, 13: Yes. 3) 6, 8, … Create your own worksheets like this one with Infinite Pre-Algebra; a free trial is available at KutaSoftware.

This collection of worksheets finishes with more complex Pythagorean theorem problems for triangles, point-distance calculations, and Pythagorean theorem word problems. Try some of the Pythagorean theorem worksheets below, or scroll down for tips and strategies for teaching the Pythagorean theorem. Each printable Pythagoras worksheet is visual, differentiated, and fun, and a range of useful free teaching resources is included. Browse printable Pythagorean Theorem worksheets: award-winning educational materials designed to help kids succeed. Pythagoras's theorem is frequently used in advanced mathematics, and it helps find the relationship between different sides of a right triangle. One can download free Pythagoras theorem worksheets in order to practice questions consistently and score well.
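The two exercise types above (finding a hypotenuse, and the converse check "do three lengths form a right triangle?") are easy to script. A minimal Python sketch using the c² = a² + b² relation, with the sample values from the worksheet:

```python
import math

def hypotenuse(a, b):
    """Length of the hypotenuse given the two legs."""
    return math.hypot(a, b)  # computes sqrt(a**2 + b**2)

def is_right_triangle(x, y, z):
    """Converse of the Pythagorean theorem: the square of the longest
    side must equal the sum of the squares of the other two."""
    a, b, c = sorted((x, y, z))
    return math.isclose(c * c, a * a + b * b)

print(hypotenuse(3, 4))             # 5.0
print(is_right_triangle(6, 8, 9))   # False
print(is_right_triangle(5, 12, 13)) # True
```

Sorting the sides first means the caller does not have to pass the hypotenuse last.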
Aspiring Data Scientists! Start to learn Statistics with these 6 books! - 911 WeKnow

Statistics is difficult. Of course it is, as it's most of the actual science part in data science. But that doesn't mean that you couldn't learn it by yourself if you are smart and determined enough. In this article, I am going to list 6 books that I recommend starting with if you want to learn statistics. The first three are lighter reads. These books are really good for setting your mind to think more numerically, mathematically and statistically. They also do a good job of presenting why statistics is exciting (it is!). The second three books are more scientific, with formulas and Python or R code. Don't get intimidated, though! Mathematics is like LEGO: if you build the small pieces up right, you won't have trouble with the more complex parts either! Let's see the list!

1. You Are Not So Smart, by David McRaney

When I first saw the title, I loved it already! This is a very well written book, containing many stories, and everything in it is based on real experiments and real scientific research. David McRaney introduces one sad but true fact of life: that our brain constantly tricks us and we are not even smart enough to realize it. For an aspiring data scientist, this book is essential, because it lists many common statistical bias types. It points out classic mistakes like the self-serving bias, the availability heuristic, and the confirmation bias. It also shows why people tend to be tricked by fake news or scams and why people don't always help when seeing someone having a heart attack on a busy street. Being aware of these biases should be basic, but I see even practicing data professionals fall for them from time to time. (I wrote a detailed article about Statistical Bias Types. Find it here.) You can buy the book: here (affiliate link).

2. Think Like a Freak,
by Dubner & Levitt

The previous book was about why we are not so smart. But this one is about how to be smarter! Think Like a Freak shows us how critical and unconventional thinking can lead to huge success... and, hey, that's something that, as a data scientist, you should practice every day. The book lists a bunch of case studies from everyday life, goes into details and analyzes why a solution for a problem is good or bad. Reading it will definitely boost your analytical thinking. You can buy the book: here (affiliate link).

3. Innumeracy, by John Allen Paulos

If you hated mathematics in middle or high school, it was for one reason: you had a bad teacher. A good teacher turns mathematical equations into mystical puzzles, probability theory into detective stories, and linear algebra into the ultimate solution for all the big questions in life. Luckily, I had really good math teachers, so I was always generally excited by mathematics and statistics. Looking back, this really affected my life. If you didn't have a good math teacher, John Allen Paulos is here to make up the loss for you: he's the awesome teacher you wish you'd had. Innumeracy focuses mostly on one specific segment of statistics: probability theory and calculations. It explains the math behind it, shows the formulas and puts everything into a very logical context. And it does it by showing the real-life applications of these calculations, so you can immediately understand the advantage of being more math-minded. You can buy the book: here (affiliate link).

4. Naked Statistics, by Charles Wheelan

I have already highlighted this book in my previous article, but I can't resist adding it to this list either. It's the perfect transition between the previous light-read statistics books and the next two more scientific ones.
Reading it, you can easily understand basic concepts like mean, median, mode, standard deviation, variance, and standard error, or the more advanced things like the central limit theorem, normal distribution, correlation analysis or regression analysis. Almost needless to say, all of these are packed into metaphors for ease of understanding. You can buy the book: here (affiliate link).

5. Practical Statistics for Data Scientists, by Andrew & Peter Bruce

This is a relatively new book and it contains everything that a Junior Data Scientist has to know about the practical part of statistics. In my opinion, the biggest advantage of the book is the structure. It really makes it clear how things are built on top of each other. But it also goes into detail on the most common prediction and classification models, and it talks a bit about Machine Learning and Unsupervised Learning too. The book comes with R code examples, but if you don't know R, that's not a problem; you can simply skip those parts. You can buy the book: here (affiliate link).

6. Think Stats, by Allen B. Downey

Topic-wise, Think Stats is really similar to Practical Statistics for Data Scientists. I wanted to have it on the list, though, because even if the topic is the same, different writers usually approach things differently. On a topic as complex as data science, I think it's worth looking at different angles and having things explained by two different data professionals. Plus, this is a book from 2011. It's good to see how much the interpretation of (even these standard) things has changed in as short as six years. Oh, and I almost forgot to mention that Think Stats is available for free in PDF format, here: http://greenteapress.com/thinkstats/ Or you can buy the book: here (affiliate link). And that's it! By reading these 6 books you can get a solid understanding of Statistics for Data Science! What's the next step in becoming a data scientist?
Well, first of all: I've created a comprehensive (free) online video course to help you get started with Data Science. Click here for more info: How to Become a Data Scientist. You can read even more books: here's my 7 favorite data books. Or you can start to learn coding in SQL or in Python. If you want to learn even faster, check out my new 6-week online data science course: The Junior Data Scientist's First Month. If you think this list is missing something, let me know in the comment section below! Thanks for reading! Enjoyed the article? Please just let me know by clicking the button below. It also helps other people see the story!

Tomi Mester
my blog: data36.com
my Twitter: @data36_com
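The basic descriptive measures these books keep coming back to (mean, median, standard deviation) can be tried out right away with Python's standard library. A small illustration with made-up sample data:

```python
import statistics

# Made-up sample data, purely for illustration.
data = [2, 4, 4, 4, 5, 5, 7, 9]

print(statistics.mean(data))    # arithmetic mean: 5
print(statistics.median(data))  # middle value: 4.5
print(statistics.pstdev(data))  # population standard deviation: 2.0
```

For sample (rather than population) standard deviation, `statistics.stdev` divides by n − 1 instead of n.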
What Is Linear Algebra? | Khan Academy Blog - Speak Rights | A Hub of Content

What is linear algebra?

Well, if you’re like most people, you might think it’s just another boring math subject you thankfully avoided in school. But trust us, it’s a little more interesting than that. Linear algebra is essentially the study of mathematical structures that can be defined in terms of linear equations. So, in a nutshell, it’s all about lines. Think of it as the cooler, more sophisticated cousin of geometry.

What do you do in linear algebra?

But what do you actually do in linear algebra? Well, you might work with things like vectors, matrices, and tensors, and use different operations to manipulate them. For example, you could multiply two matrices together, or find the inverse of a matrix.

Why is linear algebra important?

Now, you might be wondering: why would anyone want to study this stuff? Linear algebra is actually pretty important in a lot of fields. For example, it’s used in computer graphics to help render 3D images, and in data science to build machine learning models. So next time someone asks you what linear algebra is, you can give them the lowdown. Or, if you want to sound really smart, just tell them it’s the study of vector spaces and linear mappings between them. That should do the trick.

Want to Learn Linear Algebra for Free?

Khan Academy has hundreds of lessons for free. No ads, no subscriptions.

Source link
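The two operations mentioned above (multiplying two matrices, finding an inverse) can be sketched in Python with NumPy; the matrices here are arbitrary examples:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

product = A @ B             # matrix multiplication
inverse = np.linalg.inv(A)  # inverse of A (A must be non-singular)

print(product)                              # [[2. 1.] [4. 3.]]
print(np.allclose(A @ inverse, np.eye(2)))  # True: A times its inverse is the identity
```

Note that `np.linalg.inv` raises `LinAlgError` for a singular matrix, so a real program would check that case.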
Non-tradability interval for heterogeneous rational players in the option markets

This study uses theoretical and empirical approaches to analyze a number of phenomena observed on trading floors, such as the changes in trading volumes and Bid–Ask spreads as a function of the moneyness level and the remaining time until the option’s expiration. A mathematical model for pricing options is developed that assumes two players with heterogeneous beliefs, where the objective of each player is to maximize their profit on the expiration day. By solving a system of algebraic equations, which takes into consideration the subjective beliefs of the players regarding the price of the underlying asset on expiration day, a feasible price domain is constructed that defines the boundaries within which a transaction may be executed. The developed model is applied to the special case in which the distribution of the underlying asset price on expiration day is uniform, and a sensitivity analysis for selected parameters is presented. An interesting theoretical result that emerges from the proposed model is the existence of an interval near the expiration day during which there is no tradability. The existence of this interval offers an explanation for the decrease in the apparent trading volumes of out-of-the-money (OTM) options, together with an increase in Bid–Ask spreads, as the expiration day approaches. The main parameters that affect the point in time after which there will be no trading are those that represent the players’ subjective beliefs about the distribution of the expiration values, and the cost of trading.

Bibliographical note: Publisher Copyright © 2021, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.

Keywords:
• Bid–Ask spreads
• Heterogeneous players
• Non-tradability interval
• Option pricing
CLASS 10 MATHS NCERT SOLUTION CHAPTER 8 PDF Introduction to Trigonometry » ATG Study

NCERT Solutions for Class 10 Maths All Chapters

Find NCERT Solutions for Class 10 Maths for all chapters, provided by expert maths teachers. You just need to click the exercise-wise links below to get the solutions. You can get solutions for all topics of the NCERT Class 10 book here, in both offline and online modes. These chapter-wise NCERT Solutions for Class 10 Maths are also available in both Hindi and English medium. Just download the Class 10 Maths chapter-wise solutions in PDF here, and share these solutions with your friends to help them. Here you also get NCERT Exemplar Class 11 Maths Solutions, which give you the best idea of your board exam question pattern, so that you can solve all the questions easily in your exam. By practising well with these free Class 10th Maths solutions, you can pass your 10th board exam with a good percentage. Join us for the latest CBSE Class 10 information and other educational news.

NCERT Class 10 Maths All Chapters Solution

There are 15 chapters in the NCERT Class 10 book, and here you can get solutions for all chapters, provided by skilled maths teachers. Practising these textbook exercises will help the students in their preparation for the board examination. Class 10 Maths is the most important subject, and you need to practise well so that you can solve the higher classes' maths sums easily. The chapters covered in these NCERT Solutions for Class 10 Maths include Real Numbers, Polynomials, Pair of Linear Equations in Two Variables, Quadratic Equations, Arithmetic Progressions, Triangles, Some Applications of Trigonometry, Constructions, Surface Areas and Volumes, Statistics, and Probability.
UP Board students also use NCERT books, so they can download the UP Board Solutions for Class 10 as well. There are two exercises in this chapter. NCERT Solutions for Class 10 Maths Chapter 15, Probability, cover the concept of probability, simple problems on finding the probability of an event, the difference between experimental probability and theoretical probability, etc. Download free, unlimited NCERT solutions for Class 10 mathematics Probability in PDF here, and enjoy flexible learning at any time. Each and every solution is well solved by experienced maths teachers. Here students can also download the Chapter Probability Class 10 Maths solutions in Hindi medium and English medium.

How to Prepare Class 10 Maths All Chapters

Chapter 1: Real Numbers

In NCERT Solutions for Class 10 Maths Chapter 1, Real Numbers, students learn about real numbers. In mathematical language, there are two kinds of numbers: one is the "real number", and the other is the "imaginary number". In the number system, real numbers are just the combination of rational and irrational numbers, and they can be represented on a number line. Imaginary numbers, by contrast, are the unreal numbers: they cannot be expressed on the real number line, and they are used to represent complex numbers. A real number can be positive or negative, and the set of real numbers is denoted by the symbol "R". Summary: Definition, Set of real numbers, Chart, Properties, Commutative, Distributive, and Identity.

Chapter 2: Polynomials

In NCERT Solutions for Class 10 Maths Chapter 2, Polynomials, you learn about polynomials, which are algebraic expressions composed of one or more terms. "Poly" means many and "nominal" means term. Polynomials are composed of constants (such as 1, 2, 3), variables (such as g, h, x, y), and exponents (such as the 3 in x³). Here you also learn that the degree of a polynomial is defined as the highest degree of a monomial within the polynomial.
Polynomials are of 3 different types: monomial, binomial, and trinomial. Polynomial properties and theorems are also explained in this chapter. Summary: Definition, Degree, Terms, Types, Properties and Theorems, Equations, Function, Solving Polynomials, Operations, etc.

Chapter 3: Pair of Linear Equations in Two Variables

In NCERT Solutions for Class 10 Maths Chapter 3, Pair of Linear Equations in Two Variables, students learn about pairs of linear equations of the form ax + by + c = 0. In this equation a, b, and c are real numbers, and where a and b are not both zero, the equation is known as a linear equation in two variables. Here you also learn about word problems on linear equations in two variables. Summary: Definition, Linear Equations Formula, Linear Equations in Two Variables Example, Linear Equations in Two Variables Word Problems, etc.

Chapter 4: Quadratic Equations

In NCERT Solutions for Class 10 Maths Chapter 4, Quadratic Equations, you learn about quadratic equations. The name "quadratic" comes from "quad", meaning square, as the variable gets squared (like x²). A quadratic equation is also called an "equation of degree 2". The standard form of a quadratic equation is ax² + bx + c = 0, where a, b, and c are known values and "a" can't be 0. Summary: Definition, Hidden Quadratic Equations, Using the Quadratic Formula, Complex Solutions.

Chapter 5: Arithmetic Progression

In NCERT Solutions for Class 10 Maths Chapter 5, Arithmetic Progression, you learn about arithmetic progressions, also known as "AP". An AP is a sequence in which the difference of any two successive numbers is a constant value; the odd numbers and the even numbers are simple examples, each with a common difference of 2. You come across arithmetic progressions throughout this chapter. Summary: Definition, Common Difference, First Term, General Form, Nth Term, Sum of Nth Term, Formula List, etc.
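The AP formulas summarised above, the nth term aₙ = a + (n − 1)d and the sum Sₙ = n(2a + (n − 1)d)/2, can be checked with a short Python sketch:

```python
def nth_term(a, d, n):
    """nth term of an AP with first term a and common difference d."""
    return a + (n - 1) * d

def sum_of_terms(a, d, n):
    """Sum of the first n terms of the AP (n(2a + (n-1)d) is always even)."""
    return n * (2 * a + (n - 1) * d) // 2

# AP: 3, 7, 11, 15, ...  (a = 3, d = 4)
print(nth_term(3, 4, 5))      # 19
print(sum_of_terms(3, 4, 5))  # 55  (3 + 7 + 11 + 15 + 19)
```

With a = 1 and d = 1 this reproduces the familiar 1 + 2 + … + 100 = 5050.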
Chapter 6: Triangles

In NCERT Solutions for Class 10 Maths Chapter 6, Triangles, the student learns about triangles. A triangle is a three-sided polygon that consists of 3 edges and 3 vertices. Here you also learn that a key property of a triangle is that the sum of its internal angles is equal to 180 degrees. Triangles are of 6 types. Learn about their properties, Heron's formula to find the area, and various formulas such as perimeter and area defined for the triangle. Summary: Definition, Types of Triangles, Properties of Triangle, Pythagoras Theorem, Triangle Formula.

Chapter 7: Coordinate Geometry

In NCERT Solutions for Class 10 Maths Chapter 7, Coordinate Geometry, students learn about coordinate geometry, which describes the link between geometry and algebra through the use of graphs, curves, and lines: you can solve geometric problems algebraically. Coordinate geometry is defined as the study of geometry using coordinate points. You can find the distance between two points, divide line segments, find midpoints, compute the area of a triangle, etc. Summary: Coordinate Geometry Definition, Distance Formula, Mid-Point Theorem, Section Formula, Equation of a Line in Cartesian Plane, Coordinate Geometry Formulas and Theorems, etc.

Chapter 8: Introduction to Trigonometry

In NCERT Solutions for Class 10 Maths Chapter 8, Introduction to Trigonometry, you study the complete concept of trigonometry. Here you learn all about trigonometric ratios, opposite and adjacent sides in a right-angled triangle, visualization of trigonometric ratios using a unit circle, relations between trigonometric ratios, and the range of trigonometric ratios from 0 to 90 degrees: tan θ and sec θ are not defined at 90°, while cot θ and cosec θ are not defined at 0°. You also get to know the identities sin(90° − θ) = cos θ, cos(90° − θ) = sin θ, tan(90° − θ) = cot θ, and many more.
Summary: Introduction to Trigonometry, Trigonometric Ratios, Visualization of Trigonometric Ratios Using a Unit Circle, Relation between Trigonometric Ratios, Trigonometric Ratios of Specific Angles.

Chapter 9: Some Applications of Trigonometry

In NCERT Solutions for Class 10 Maths Chapter 9, Some Applications of Trigonometry, you study trigonometry, which is defined as calculation with triangles, including the study of lengths, heights, and angles. Trigonometry functions in our daily life, from distances between landmarks and astronomy to the measurement of satellite navigation systems. Trigonometry is used in many fields such as engineering, physics, surveying, architecture, and astronautics. Most interestingly, trigonometry is also used in developing computer music. Summary: What is Trigonometry, Trigonometry Applications in Real Life, Trigonometry to Measure Height of a Building or a Mountain, Trigonometry in Aviation, Trigonometry in Criminology, and Trigonometry in Marine Biology, etc.

Chapter 10: Circles

In NCERT Solutions for Class 10 Maths Chapter 10, Circles, you learn about the circle, which is a special kind of ellipse whose eccentricity is zero. A circle is drawn through the points equidistant from the centre; the distance from the centre of the circle is known as the radius, and twice the radius is known as the diameter. The circle has rotational symmetry around the centre for every angle. Annulus, arc, sector, segment, centre, chord, diameter, secant, and tangent are the terminologies of the circle. Summary: Circle Definition, How to draw a circle, Circles Terminologies, Circle Formulas for Area and Circumference, Circle Area Proof, Properties of Circles, etc.
Chapter 11: Construction

In NCERT Solutions for Class 10 Maths Chapter 11, Construction, you learn how to construct the division of a line segment, constructions of triangles using a scale factor, and construction of tangents to a circle. You learn how to divide a line segment, including: bisecting a line segment, constructing a similar triangle with a scale factor, drawing tangents to a circle, the number of tangents to a circle, and drawing tangents to a circle from a point outside the circle. By practising this "Construction" chapter well, you will learn how to construct the various diagrams. Summary: Constructions, Drawing Tangents to a Circle, Similar Triangle with a scale factor Constructions, Construction Procedure, etc.

Chapter 12: Areas Related to Circles

In NCERT Solutions for Class 10 Maths Chapter 12, Areas Related to Circles, you get full knowledge about areas related to circles, such as the circumference, segment, sector, and angle. The area of a circle is πr², where π ≈ 22/7; π is the ratio of the circumference of a circle to its diameter. Here you also learn about the area of a sector of a circle. Summary: Area of a Circle, Circumference of a circle, the segment of a circle, The angle of a Sector, Area of a Sector of a Circle, Area of a Segment of a Circle, etc.

Chapter 13: Surface Areas and Volumes

In NCERT Solutions for Class 10 Maths Chapter 13, Surface Areas and Volumes, you study the concepts of surface area and volume. Here you learn about solid shapes: the cube, cuboid, cone, cylinder, etc. The procedures to find the volume and surface area of these solids, and of combinations of different solid shapes, are also learned in this chapter. Summary: Cuboid and its Surface Area, Cube and its Surface Area, Cylinder and its Surface Area, Right Circular Cone and its Surface Area, Sphere and its Surface Area, Volume of a Cuboid, etc.
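The circle and solid formulas from the two chapters above (area πr², circumference 2πr, and the volume and surface area of a cylinder) can be combined in a small Python sketch; `math.pi` is used here rather than the 22/7 approximation:

```python
import math

def circle_area(r):
    return math.pi * r ** 2

def circle_circumference(r):
    return 2 * math.pi * r

def cylinder_volume(r, h):
    return math.pi * r ** 2 * h       # base area times height

def cylinder_surface_area(r, h):
    return 2 * math.pi * r * (r + h)  # two circular bases plus the curved side

print(round(circle_area(7), 2))          # 153.94
print(round(cylinder_volume(7, 10), 2))  # 1539.38
```

Using r = 7 makes it easy to compare with the textbook's 22/7 approximation, which would give exactly 154 for the area.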
Chapter 14: Statistics

In NCERT Solutions for Class 10 Maths Chapter 14, Statistics, you study statistics, which is the study of the collection, presentation, analysis, and interpretation of data. Mathematical statistics is the application of mathematics to statistics, and it is used in the collection and analysis of facts about a country, such as its economy, military, and population. In this chapter, you can also come across related tools such as linear algebra, stochastic analysis, and differential equations. Summary: Scope, Methods, Types of Data, Types of quantitative data, Types of Statistics, Application, Statistics Examples, etc.

Chapter 15: Probability

In NCERT Solutions for Class 10 Maths Chapter 15, Probability, students get knowledge of probability, which means the possibility of the occurrence of a random event. Probability has been introduced to predict how likely events are to happen. Basic probability theory is also used in the probability distribution. Here you learn the possible outcomes of a random experiment. Summary: Definition, Formula, Examples, Equally Likely Events, Probability Density Function, Solved Problems, etc.

Importance of Learning NCERT Solutions for Class 10 Maths from Tiwari Academy

NCERT Solutions for Class 10 Maths from Tiwari Academy are designed by expert maths teachers with many years of experience. They benefit Class 10 students by providing topic-wise and chapter-wise detailed knowledge in an easy manner. There are also many other reasons to learn from these NCERT Solutions for Class 10 Maths. Let us discuss them below:

1. Strengthens Your Basics

All Class 10 Maths chapter solutions are well designed by our experienced and professional maths experts. All topic-wise and chapter-wise answers are thoroughly researched and reviewed. Students can get more marks in their board exam or in any competitive exam by using these solutions.

2.
A Quick and Effective Revision Tool

NCERT solutions are created in such a manner that they properly help students in the event of queries or doubts. The NCERT book is handy during exam preparation, as it is the best quick and effective revision tool for your board exam.

3. Develops a Student's Ability to Solve Sums

All maths solutions are explained comprehensively, along with relevant and easy-to-understand examples. Students can get an idea of how to solve sums from the step-by-step instructions.

4. Increases Students' Confidence

All maths solutions are provided with proper illustrations and examples. This helps build a conceptual understanding of all exercises by providing sufficient cases, and if students practise well before the exam, they can be highly confident.

How Are Class 10 Maths Solutions of NCERT Helpful for Board Exams?

Class 10 Maths Solutions of NCERT are very helpful for board exams, as all the questions for basic mathematics are picked directly from NCERT textbooks. These solutions are well designed by experienced maths teachers. You can find many examples related to the topics, so that students can clear up all concepts. It is a perfect guide to help you score good marks in board exams. All solutions are based on the latest CBSE syllabus.
Payment pools and co-ownership of coinbase reward

Name: Lorban
Institution: HRF
GitHub: https://github.com/lorbax/

Name: Fi3
Institution: DHN
GitHub: https://github.com/fi3/

Name: Rachel Rybarczyk
Institution: Galaxy Digital

Abstract. In this RFC we present a possible solution for the payout scheme of a non-custodial pool, in which the pool's miners are the participants of the payment pool associated with the mining pool. Our scheme does not rely on off-chain technology and assumes BIP0118.

2.3 Merging two payment pools
2.4 Sending funds to a payment pool
2.5 Withdrawal of funds from a payment pool
2.6 Payment pool with large number of participants
2.8 Non-collaborative participants and DOS attacks
3. Payment pools for mining pools
3.1 Implementation for miners
3.2 Operations on a payment pool for a mining pool
3.3 Withdrawal from a payment pool
3.4 Average weight on the blockchain
3.5 Collaboration between participants of a payment pool in a mining pool
3.6 Possible attacks to a payment pool
4. Technical considerations about payment pools for mining pools
4.3 Qualitative anatomy of transactions that use ANYPREVOUT
5. Payment pools using CTV and TLUV
5.3 Payment pools for mining pools using ANYPREVOUT and TLUV

1. Introduction

In pooled mining, there is a pool server that sits between the mining devices and the Bitcoin network. The pool server has two main responsibilities. The first is handling all the communications that would otherwise be done between a mining device and the network: it provides miners with the information needed to commence mining on a candidate block, and it receives a miner's submitted solution for validation and network confirmation. The means of communication between pool and miner is specified by a mining protocol.
The mining protocol ensures that the miner is provided with enough information to build a candidate Bitcoin block header and perform the necessary hashing to yield a resulting solution: a block header whose hash is below the network target. The second main responsibility of the pool is the handling of the miners' reward payouts, something that is historically not specified in the mining protocol but is instead left up to the pool to decide. In pooled mining, the means by which a miner receives their reward varies greatly from the payment received directly from the network in solo mining. During the construction of the block template, the pool formats the coinbase transaction such that the block reward is paid to the pool's Bitcoin address. Such a centralization of funds exposes mining to a critical risk. Because the payout mechanism is left to the centralised, 3rd-party pool service, miners are forced to fully trust that the pool is paying them a fair payout and cannot verify that the pool is not withholding a portion of their rewards from them, an act called pool skimming. In fact, there have been some occurrences of pool skimming throughout the years, but in general this is a phenomenon that is difficult to detect. The only recourse a miner has is to switch to a different pool, but this offers little solace, as there are only a handful of pools to join that are large enough to offer consistent rewards. One possible solution for this problem is to adopt a payout scheme in which the coinbase reward is collected directly by the miners, in such a way that the pool never has control of the miners' funds. A pool with this property is called non-custodial. The main non-custodial pool that appeared in Bitcoin history was P2Pool. The payout scheme of P2Pool was simple but inefficient. Indeed, each miner was paid with funds locked to his address directly in the coinbase transaction outputs. This resulted in a large coinbase.
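The P2Pool-style payout just described (one coinbase output per miner, proportional to contributed work) can be illustrated with a hypothetical sketch. The function name, share counts, and reward value below are invented for the example; a real coinbase construction involves far more detail:

```python
def proportional_payouts(reward_sats, shares_by_miner):
    """Split a block reward proportionally to each miner's share count.

    Illustrative only: integer division rounds down per miner, so a few
    satoshis of remainder may be left over, and a real scheme must
    decide where those go.
    """
    total_shares = sum(shares_by_miner.values())
    return {
        miner: reward_sats * shares // total_shares
        for miner, shares in shares_by_miner.items()
    }

# Hypothetical miners and share counts.
shares = {"miner_a": 50, "miner_b": 30, "miner_c": 20}
payouts = proportional_payouts(625_000_000, shares)  # 6.25 BTC, in satoshis
print(payouts)  # one coinbase output per miner: the coinbase grows with the miner count
```

The point of the sketch is the output count: every entry of `payouts` becomes a separate coinbase output, which is exactly why this scheme produces a large coinbase as the number of miners grows.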
As miners' payouts will in the future rely more on fees than on freshly generated BTC, this is not a forward-looking solution. In this RFC we present a payout scheme for a non-custodial pool that should be better than the P2Pool scheme. The benefits are comparable with those of a centralised KYC pool, and this payout system seems always to be convenient, provided that the pool is large enough. Our scheme is introduced through the concept of a payment pool, where the participants are the miners of the mining pool. The presented payment pool scheme uses ANYPREVOUT, does not rely on any off-chain technology, and is trustless, in the sense that a participant does not have to trust in the collaboration of all the other participants: a non-collaborating participant is automatically ejected from the payment pool and is not a threat to the accessibility of funds. Our study assumes the pool to be centralised, but it can be generalised to decentralised pools. Our payment pool scheme is meant to be a future extension of the Stratum V2 mining protocol.

Summary of chapters. In Section 2.1 the desired properties that a generic payment pool should have are introduced. In Section 2.2 three possible solutions for a payment pool are analysed: the first two are easy to implement but are expensive in terms of fees or computational resources; the third is the actual proposal. The rest of the chapter is devoted to the development, from a theoretical point of view, of the proposed payment pool scheme introduced in Section 2.2. Guidelines for several pool procedures are given, including the merging of two payment pools (Section 2.3), the sending of funds to a payment pool (Section 2.4), the withdrawal of funds from a payment pool (Section 2.5), and the ejection of non-collaborative participants, fees, and attacks (Section 2.8). All the payment pools considered have 8 participants; larger payment pools are treated in Section 2.6.
In Section 2.7 we discuss why and where ANYPREVOUT [1] is needed in our scheme. In Chapter 3 the proposed scheme of Chapter 2 is implemented for the case where the participants of the payment pools are the miners of a mining pool that share the rewards of successfully mined blocks. In Section 3.4 we show that our scheme is very convenient (in terms of space occupied on the blockchain) for KYC pools and is still better than P2Pool when the number of participants is large. In the following sections of the chapter we discuss the collaboration needed to make our scheme work, possible attacks, and a fee system to prevent some of them. In Chapter 4 we analyse some technical issues encountered during the design of the scheme: how to deal with the fact that the coinbase reward remains frozen for 100 blocks (Section 4.1), how to create co-owned addresses using MuSig2 (Section 4.2), and a qualitative description of transactions that use ANYPREVOUT (Section 4.3). In Chapter 5 payment pools based on CTV (Section 5.1), TLUV (Section 5.2) and TLUV & ANYPREVOUT (Section 5.3) are analysed. A conclusion is presented in Chapter 6.

2. Payment pool structure

According to [2], a payment pool is a multi-party construction enabling multiple participants to share the ownership of a single on-chain UTXO among many off-chain/promised balances. Though intuitive, this definition is not technical and is prone to interpretation.

Definition 2.1.1. Let n be a positive integer. A payment pool is a set of n participant addresses such that all the participants co-own (in n-of-n multisig) a single Bitcoin address, to which some (>0) funds are allocated.

Unfortunately, when we talk about payment pools, everyone seems to agree that a payment pool should be a structure richer than the above, even though it seems difficult to define which are the good-to-have properties for a payment pool.
In this context, our aim is not to define the best notion of payment pool in general, but rather a group of properties that a payment pool should have in order to be adopted by the miners of a mining pool.

1. Fund security: A participant MUST NOT be required to be online to ensure that their funds are not lost or stolen.
2. Co-ownership: The funds of the payment pool MUST be collectively co-owned by all its participants.
3. Access to funds: Each participant can unilaterally withdraw his funds at any time.
4. Cumulative: Multiple payment pools MUST be able to merge into a single payment pool.

This whole work is an attempt to provide a payment pool structure such that these properties are clearly satisfied from a technical point of view. We give some notation that will be used in all the following. If S is the set of addresses of a payment pool's participants, we denote with P_S the associated Bitcoin address in n-of-n multisig. We use this notation because the associated Bitcoin address is derived from the aggregated Schnorr pubkeys of all the participants, as we will see in Section 4.2. In order to keep the exposition readable, we avoid unnecessary generalisations. For example, we assume that n = 8 and that the participants' addresses are A, B, C, D, E, F, G, H, so P_ABCDEFGH is the co-owned address. This is no loss of generality, as the procedures that we will describe also work with larger payment pools, as we will see in Section 2.6.

2.1 Naive payment pools

In this section we describe two possible "naive" ways to construct a payment pool. Though easy to deploy and understand, these two schemes are not feasible. Assume the notation of the previous section.

Figure 2.2.1. First naive scheme.

The first scheme that we report here is actually quite important, as it is basically the payout scheme of P2Pool, the main decentralised mining pool in Bitcoin history. This is also the payout scheme of Laurentia pool, see [15]. For simplicity, as we are talking about general payment pools, we do not refer to any specific aspect of P2Pool.
The idea of this scheme is the following. Suppose that some funds are locked to P_ABCDEFGH. To guarantee everyone access to their funds, there is a transaction that redistributes the funds to every participant. The input of this transaction is the funds locked to P_ABCDEFGH, and there is exactly one output for every participant. See Figure 2.2.1. As long as no one withdraws, the funds stay locked to P_ABCDEFGH. Consider the scenario in which a participant, say H, desires to withdraw their funds from the payment pool. Then H publishes the transaction and gets his funds. In this way, all other participants are forced to withdraw. This is unsuitable, as in general we want that only funds claimed by their owners leave the pool. Moreover, the transaction would have an output for every participant of the pool, making it very heavy in terms of vB, and it would eventually occupy a large amount of blockchain space. This makes the scheme impractical and also very expensive if the pool is large. This is enough to discard it for general-purpose use.

Figure 2.2.2. Second naive scheme.

Assume the same notation as above. In the first naive example, the one-exit-all-exit behaviour is very inconvenient. A possible solution, described in Figure 2.2.2, is to set up, for each participant, a transaction in which that participant withdraws his funds and the remaining funds go to a second payment pool set up by the remaining participants. Clearly, to avoid that the withdrawing participant takes all the funds, this transaction must be pre-signed by all the participants at the moment of setting up the payment pool. So, there must be such a pre-signed transaction for each participant. Thus, if f(n) denotes the number of presigned transactions needed for a payment pool with n participants, the number of presigned transactions is equal to f(n) = n · (1 + f(n − 1)), where f(n − 1) is the number of presigned transactions needed in the second payment pool, which consists of all participants except the one that withdrew its funds. So, it is easy to see that f(n) ≥ n!. As n! rapidly grows in size for increasing n, the number of presigned transactions that has to be computed rapidly becomes too large.
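This blow-up can be checked with a short sketch. Writing f(n) (our notation, not from the original text) for the number of presigned transactions the second naive scheme needs with n participants:

```python
# f(n) = presigned transactions for n participants under the second naive
# scheme: each of the n participants needs one presigned exit transaction,
# and each exit leaves a sub-pool of n - 1 participants needing f(n - 1).
import math

def presigned_txs(n: int) -> int:
    if n <= 1:
        return 0  # a lone participant spends the output directly
    return n * (1 + presigned_txs(n - 1))

for n in range(2, 9):
    print(n, presigned_txs(n), math.factorial(n))
# presigned_txs(n) >= n! for every n >= 2, so the count grows at least factorially
```

Already at 8 participants the scheme would require 69,280 presigned transactions, against 40,320 = 8!.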
This problem seems not to have been considered in [14], as the change that goes back to the pool must be an output of the Withdraw transaction defined in Section 5.1 of that whitepaper, and it is not explained why the already existing withdrawal transactions should also be valid for spending this second change output.

2.2 The proposed scheme

In this section we describe a possible solution for a payment pool which is more flexible and avoids the issues of the naive implementations above. As before, we assume that the payment pool has 8 participants with addresses A, B, C, D, E, F, G, H. We stress that this is no loss of generality; the topology of a payment pool with more participants is described in Section 2.6. Set the following addresses:

• P_ABCDEFGH, co-owned by all participants,
• P_ABCD, co-owned by A, B, C and D,
• P_EFGH, co-owned by E, F, G and H,
• P_AB, P_CD, P_EF, P_GH, co-owned respectively by A and B, C and D, and so on.

The co-owned funds are locked to the address P_ABCDEFGH. Moreover, there are the following presigned transactions.

• T_ABCDEFGH is the transaction that spends the funds locked to P_ABCDEFGH and has two outputs, to the addresses P_ABCD and P_EFGH.
• T_ABCD is the transaction that spends the funds locked to P_ABCD and has two outputs, to the addresses P_AB and P_CD.
• T_EFGH is the transaction that spends the funds locked to P_EFGH and has two outputs, to the addresses P_EF and P_GH.
• T_AB is the transaction that spends the funds locked to P_AB and has two outputs, to the addresses A and B.
• T_CD is the transaction that spends the funds locked to P_CD and has two outputs, to the addresses C and D.
• T_EF is the transaction that spends the funds locked to P_EF and has two outputs, to the addresses E and F.
• T_GH is the transaction that spends the funds locked to P_GH and has two outputs, to the addresses G and H.

Figure 2.2.3. Payment pool with payment tree.

The idea is the following: the funds are locked in 8-of-8 multisig to P_ABCDEFGH.

Definition 2.2.4.
1. A set of transactions as in Figure 2.2.3 is called a payment tree. We allow payment trees in which the transactions' input references are left empty and whose signatures are ANYPREVOUT.
2.
From now on, we assume that a payment pool is a structure as in Definition 2.1.1 which has a payment tree.

It is very important to keep in mind the difference between a payment tree and a payment pool. A payment tree is just a set of transactions, which may be incomplete and may have no funds allocated. On the other hand, as soon as some funds are allocated to the root address, all the transactions' inputs can be filled and the payment tree becomes a payment pool. This distinction will come up often in the following. We remark that if the number of participants is not 8, the payment tree (and therefore the payment pool) is constructed according to Section 2.6. If some transactions of the payment pool are published, the payment pool splits, and the resulting payment pools are called sub payment pools. For example, suppose that the transaction T_ABCDEFGH is published. Then the funds locked to P_ABCDEFGH are divided into the balance of P_ABCD and the balance of P_EFGH. Therefore, the following are sub payment pools.

• The pool of A, B, C and D, with the transactions T_ABCD, T_AB and T_CD.
• The pool of E, F, G and H, with the transactions T_EFGH, T_EF and T_GH.

The details of this scheme may differ depending on the desired implementation, so they will be discussed in Chapter 3. It is immediately seen that there are a total of seven transactions, which are global in the sense that they are enough to guarantee to every participant the withdrawal of their funds. In other words, the payment tree gives everyone access to their funds. The following sections analyse the properties and the operations that can be performed with this scheme: the merging of payment pools (Section 2.3), sending funds to a payment pool (Section 2.4), the withdrawal of funds (Section 2.5) and the unilateral ejection of non-collaborative participants (Section 2.8). In Section 2.7 we discuss why we need ANYPREVOUT.

2.3 Merging two payment pools

Suppose that there are two payment pools. The first manages a UTXO co-owned by the participants having addresses A, …, H. The output is locked to the address P_ABCDEFGH.
The second payment pool is set up by a second set of eight participants and manages a UTXO co-owned by them.

Figure 2.3.1.

Suppose that we want to merge these two payment pools into a third payment pool, set up by all sixteen participants. We proceed with the following steps.

1. All the participants set up a payment tree where the input references are left empty and the signatures use ANYPREVOUT (see Section 2.7). The address at the root of the payment tree is the address co-owned by all sixteen participants. In Figure 2.3.2 below, the payment tree is constructed according to Section 2.6.

Figure 2.3.2.

2. The participants construct and publish a merging transaction, a transaction that merges the two UTXOs into a single output, locking it to the root address.
3. Now the root address has some funds (namely the output of the merging transaction), and the inputs can be added to all the transactions of the payment tree. So, this payment tree becomes a payment pool, called the merging of the former payment pools.

Note that without any modification we can generalise the merging to more than two payment pools. As a single address with some funds locked to it can be considered a payment pool, the merging of payment pools may be used to let a new participant join an already existing payment pool. A transaction as in Figure 2.3.1 is called a merging transaction, or the transaction that merges the payment pools. Figure 2.3.3 below gives an overview of the whole procedure of merging two payment pools.

Figure 2.3.3.

2.4 Sending funds to a payment pool

Payment pools will be used in the context of the co-ownership of the coinbase rewards by the miners of a mining pool. In this spirit, this section addresses how to apply the payment pool scheme introduced in Section 2.2 to manage the situation where some funds are sent to the participants of a payment pool by a third party.
Figure 2.4.1.

In this case, the payment pool in Figure 2.4.1 is the already existing payment pool, which manages an output U_1. A new payment tree is created (with ANYPREVOUT; this will be discussed in Section 2.7), with the same participants, see Figure 2.4.2. Similarly to the merging of payment pools, the inputs of the transactions of this payment tree are left empty. Then, the third party sends the funds to the root address of this payment tree with a funding transaction, whose output is denoted U_2. Thus, the inputs can be added to the transactions of the payment tree, which becomes the payment pool that manages U_2.

Figure 2.4.2.

Now, these two payment pools are merged into a single payment pool, where the balance of every participant is updated.

2.5 Withdrawal of funds from a payment pool

We consider two forms of withdrawal. The unilateral withdrawal is performed by a participant that unilaterally decides to withdraw his funds regardless of the other participants. The coordinated withdrawal is performed during the merging of two payment pools.

Unilateral withdrawal. Consider the payment pool in Figure 2.2.3. Suppose that H decides to withdraw his funds. As the transactions of the payment tree are presigned, it is enough that H publishes the transactions T_ABCDEFGH, T_EFGH and T_GH. So, transaction T_ABCDEFGH spends the funds locked to P_ABCDEFGH, splitting them into two outputs, locked to P_ABCD and P_EFGH. Likewise, transaction T_EFGH spends the funds locked to P_EFGH and has two outputs, locked to the addresses P_EF and P_GH. Eventually, H gets control of his funds by publishing the transaction T_GH.

Figure 2.5.1.

Now, we can see that there are funds locked to some vertices of the payment tree. These nodes, which in this case are the addresses P_ABCD, P_EF and G, have control of the remaining funds. In particular:

• P_ABCD owns a single UTXO,
• P_EF owns a single UTXO, and
• G owns a single UTXO.

Moreover, the address P_ABCD and the transactions T_ABCD, T_AB and T_CD form a sub payment pool. The same is true for P_EF with the transaction T_EF, and for G with the empty transaction.
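The path of presigned transactions that a withdrawing participant must publish can be sketched as a small function; the labels (T_ followed by the co-owning set) are illustrative names for the transactions of the tree:

```python
# Sketch: in an 8-participant payment tree, a unilateral withdrawal
# publishes the presigned transactions on the root-to-leaf path of the
# withdrawing participant. Labels are illustrative, not normative.

def withdrawal_path(group: str, target: str) -> list[str]:
    """Transactions to publish so `target` reaches a single-key output."""
    path = []
    while len(group) > 1:
        mid = len(group) // 2
        left, right = group[:mid], group[mid:]
        path.append(f"T_{group}")  # splits P_{group} into P_{left}, P_{right}
        group = left if target in left else right
    return path

print(withdrawal_path("ABCDEFGH", "H"))
# -> ['T_ABCDEFGH', 'T_EFGH', 'T_GH']
```

Three transactions suffice for eight participants, matching the log2(8) = 3 height of the tree.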
Figure 2.5.2 shows the sub payment pools after H performs a unilateral withdrawal.

Figure 2.5.2.

Now it is expected that the participants with addresses A, …, G, which are exactly the addresses involved in the sub payment pools, want to rejoin a single payment pool. To do so, they collectively sign a transaction that spends the funds in the roots of the remaining sub payment pools and has one output, locked to the address co-owned by all seven of them, through the process of merging payment pools described in Section 2.3. This type of withdrawal is called unilateral because H gets control of his funds regardless of the other participants, who are left with the task of rejoining another payment pool. We remark that the payment tree serves as insurance that everyone has access to their funds, also in the case that the payment pool is compromised.

Coordinated withdrawal. Suppose that the participants having addresses A, …, H have two payment pools that manage two co-owned UTXOs, which we denote U_1 and U_2.

Figure 2.5.3.

Assume that the participants want to merge these two payment pools into a single payment pool, and that there is a participant, say with address H, that wants to withdraw his funds. This participant may take advantage of the situation by withdrawing his funds during the merging of the two payment pools. Indeed, we can construct a modified version of the merging transaction, where

• U_1 and U_2 are the inputs,
• there is an output locked to H, corresponding to H's funds, and an output locked to the root of the newly constructed payment tree.

This type of withdrawal requires cooperation between the participants.

Figure 2.5.4.

Comparison between the two types of withdrawal. If the payment pool will not be updated in any way (through merging with a second payment pool, for example), unilateral withdrawal is the only way for a participant to get his funds.
However, in situations where a funding transaction appears periodically, like the reward of a mined block, coordinated withdrawal is more convenient because it occupies less space on the blockchain.

2.6 Payment pools with a large number of participants

We have so far considered payment pools with eight participants. If the number of participants is 2^k, with k a positive integer, then a payment pool can be defined in the same way, using a binary tree as in Figure 2.2.3, but with height k in place of 3.

Figure 2.6.1.

Note that in this case, every participant that wants to withdraw his funds must publish k transactions and pay all the associated fees. If the number of participants is m, not a power of 2 (as in Figure 2.6.1), we proceed in the following way. Suppose that a_1, …, a_m are the addresses of the participants. Then we produce the addresses b_1, …, b_⌈m/2⌉, where b_i is the address co-owned by a_{2i−1} and a_{2i}; if m is odd, the last address b_⌈m/2⌉ is a_m itself. Then we produce c_1, …, c_⌈⌈m/2⌉/2⌉ with the same procedure, and so on. The first level that consists of a single address (which exists, as the sizes of the levels are strictly decreasing) gives an address co-owned by every participant of the pool. We remark that we tacitly assumed that an address produced at a later level is co-owned by all the participants whose addresses were combined to produce it.

Proposition 2.6.2. Suppose that a payment pool has m participants. To perform a unilateral withdrawal or to eject a non-collaborating participant (see Section 2.8), at most ⌈log2 m⌉ transactions need to be published.

Proof. We prove it for the ejection of a participant; for unilateral withdrawal the proof is the same. The ejection is done by publishing some transactions of the payment tree, following the scheme in Section 2.5: viewing the payment tree as an oriented finite tree with a root, the other participants publish the transactions along the path from the root down to the leaf of the participant to be ejected, which forces that participant to withdraw his funds.
The number of transactions to be published is exactly the height of the payment tree, which is at most ⌈log2 m⌉.

2.7 Why ANYPREVOUT

Imagine that there are some participants and assume that they want to receive some funds at a co-owned address P, set up in n-of-n multisig. Once the funds have arrived on P, it is immediately seen that if one of the participants vanishes, the funds are lost for everyone. One possible mitigation is to set a lower threshold, like a k-of-n multisig. But this seems inappropriate, as the participants, in this context, are the miners of a mining pool: every participant's balance is a piece of the output locked to P, and the miners with less funds may steal the funds of the participants with more funds. One possible way to solve this problem is to set up in advance a payment tree that manages the funds locked to P, in such a way that when the funds arrive at the co-owned address, a payment tree that manages them is already deployed. One way to do that is to use ANYPREVOUT. Indeed, consider the transaction T_ABCDEFGH in the payment tree of Figure 2.2.3. As this transaction, which spends the funds locked to P_ABCDEFGH, has to be compiled before the transaction that actually locks the funds to P_ABCDEFGH, the input reference of T_ABCDEFGH is left empty and the signature in the witness of T_ABCDEFGH does not cover the input, i.e. it is ANYPREVOUT. Therefore, the id of T_ABCDEFGH is not determined, and the same happens to the transactions T_ABCD and T_EFGH that spend its outputs. This means that all the transactions in the payment tree have their inputs missing and their signatures ANYPREVOUT. Clearly, once the funds are locked to P_ABCDEFGH through a funding transaction, all the transactions in the payment tree can be filled in and become valid. So, this payment tree becomes a payment pool.

The exact same problem also appears if the participants are the miners of a mining pool. During the mining of a block, the miners agree on the reward repartition and compile a payment tree.
So, using any ordinary transaction format, the first transaction of the temporary payment tree would depend on the id of the coinbase transaction. The id of the coinbase is the hash of the serialised transaction, which therefore commits also to the extranonce. So, for every extranonce used by any miner, there would have to be a different payment tree. As a single miner tries many extranonces per second, the number of payment trees to be computed would become too large. This is why the temporary payment tree in Section 3.1 consists of transactions as above, where the input references are empty and the signatures are ANYPREVOUT.

The same problem arises every time some co-owned funds are moved. Suppose that the participants want to merge the two payment pools in Figure 2.3.1. The two outputs are safe, as there is a payment tree that manages each of them. These outputs have to be merged into a single output, locked to the new root address. If all the participants compiled a transaction for this purpose and a participant then went offline, the merged output would be unspendable. Therefore, a payment tree with incomplete transactions and ANYPREVOUT signatures, as in Figure 2.3.2, has to be constructed before the transaction that moves the funds into the new output. Suppose now that some participant wants to perform a coordinated withdrawal (see Section 2.5). Then, looking at Figure 2.5.3, all the participants must first collaborate to create a payment tree (where the transactions have empty inputs and ANYPREVOUT signatures) that manages the remaining funds, and only after this can they compile the merging transaction with withdrawal.

2.8 Non-collaborative participants and DOS attacks

Non-collaborative participants. Our proposal benefits from collaboration between the participants of the payment pool whenever coordinated withdrawal is possible (for example, when the participants are the miners of a mining pool).
Nevertheless, collaboration has an impact only on the efficiency of the implementation of a payment pool and is not strictly required for its security. If there is an honest participant, with address H, that is non-collaborative (because, for whatever reason, he went offline), then there are two possible scenarios: either H's sibling in the binary tree is the address of another single participant, say G, or it is an address co-owned by two or more other participants. In both cases, a way to overcome the inactivity of the offline participant is that the participants owning the sibling address withdraw their funds. In this way, the offline participant is forced to withdraw his funds. The other participants can later merge the remaining sub payment pools into a new payment pool, without H. In this case, the fees of the transactions needed are not paid by H, who is offline, but are instead shared by the remaining participants of the pool. This ensures the payment pool conforms to the Fund security property (Chapter 2, Property 1), allowing participants to have access to their funds without requiring every participant to be online.

Dishonest participants and DOS attacks. Now suppose that a participant with address H goes offline with malicious intent. In this case, following the procedure above, the remaining participants have to expel H from the payment pool, and they have to pay the fees of at most ⌈log2 m⌉ transactions, by Proposition 2.6.2 (recall also the merging transaction). If the malicious participant rejoins the pool and then goes offline again many times, this amounts to a DOS attack. To deter this behaviour, every participant must pay an entrance fee to join a payment pool. The fee is a small amount of burned funds. It is possible to limit the number of times a participant can go offline and rejoin.
After reaching this limit, they must pay a second entrance fee by again burning funds, making a DOS attack expensive. Note that this problem appears only in the case where the pool is not KYC (know your customer).

2.9 Remarks

In this section we check how the payment pool properties from the beginning of Chapter 2 are fulfilled by our scheme. First of all, the scheme is secure, because funds can never remain unspendable because of a non-collaborating participant. It respects the idea of co-ownership by construction. Every participant has access to his funds at any moment through a unilateral withdrawal, even though unilateral withdrawals are discouraged. Finally, it is cumulative (see Sections 2.3 and 2.4). It is worth mentioning that many features of our payment pool scheme in this chapter, and of its implementation in the next chapter, overlap with the ideas in [14].

3. Payment pools for mining pools

In this chapter we adapt the scheme of Chapter 2 to implement a payment pool for mining pools. The payment pool's participants are the miners, and every time some participant mines a block, the coinbase reward goes to the payment pool in such a way that every participant is paid in proportion to the hash power he has contributed. Though this chapter works with the current mining software Stratum, we remark that our scheme is designed to be deployed in Stratum V2. We assume that the pool is centralised, and we remark that what follows can be generalised to decentralised mining pools. All the transactions are full taproot, in the sense that the inputs and the outputs are P2TR (see the taproot soft-fork BIPs 340, 341 and 342 [5]). This has several benefits. First of all, it is possible to use Schnorr signatures (SDSS), a type of signature that allows key aggregation and batch verification, through the new multisignature protocol MuSig.
This protocol requires three rounds of communication, which can be reduced to two using a variation of the scheme known as MuSig2 (see [6] and [7]). This will be discussed in Section 4.2. In the following, we assume that the proportion of hashpower contributed, and consequently the coinbase repartition among miners, remains constant from block to block. This is not true in practice, but our scheme can be adapted to situations in which the hashpower contributions are not constant. Agreement on the coinbase redistribution is a whole problem of its own, and a detailed implementation that takes this aspect into account is out of the scope of this work.

3.1 Implementation for miners

In the following, we assume that BIP 118 is active and that the miners are mining block number n. As before, suppose that the participants have addresses A, B, C, D, E, F, G, H and hold the following balances in a payment pool.

• A: 5 BTC
• B: 5 BTC
• C: 10 BTC
• D: 10 BTC
• E: 15 BTC
• F: 15 BTC
• G: 20 BTC
• H: 20 BTC

These funds, which sum to 100 BTC, appear on chain as an unspent output locked to the address P_ABCDEFGH. We implement the payment tree of Section 2.2, so there are 7 precompiled transactions. This payment pool is called the n-th payment pool and changes only if a pool's miner finds a new block.

Figure 3.1.1.

The participants contribute to the pool's hashpower. The contribution is measured by the shares that the miners provide to the pool. We assume that the coinbase reward (fees included) is 10 BTC and that it is partitioned as follows.

• A: 2 BTC
• B: 0.5 BTC
• C: 0.5 BTC
• D: 1.5 BTC
• E: 1 BTC
• F: 1.5 BTC
• G: 1 BTC
• H: 2 BTC

The temporary payment tree is a tree that, in case some pool's miner finds a block, redistributes the coinbase output among the participants, just like the payment tree of a payment pool.
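With the balances and the coinbase repartition listed above, simple per-participant bookkeeping (just arithmetic on the figures above, not part of the protocol itself) gives the balances that the pool holds after the reward is absorbed:

```python
# Bookkeeping sketch: n-th pool balances plus the coinbase repartition
# give the per-participant balances of the next payment pool.
balances = {"A": 5.0, "B": 5.0, "C": 10.0, "D": 10.0,
            "E": 15.0, "F": 15.0, "G": 20.0, "H": 20.0}   # sums to 100 BTC
coinbase = {"A": 2.0, "B": 0.5, "C": 0.5, "D": 1.5,
            "E": 1.0, "F": 1.5, "G": 1.0, "H": 2.0}       # sums to 10 BTC

merged = {k: balances[k] + coinbase[k] for k in balances}
print(merged)                 # e.g. A now holds 7.0 BTC, H holds 22.0 BTC
print(sum(merged.values()))   # 110.0 BTC in total
```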
Nevertheless, the temporary payment tree is not a payment pool, because it has no funds allocated to its root until some miner of the pool finds a block. In this tree, all the transactions have empty inputs and the witness signatures do not cover the inputs, namely they are ANYPREVOUT. The temporary payment tree is calculated by the pool once and then stored, as the hashrate of each participant is assumed to be constant.

Figure 3.1.2.

Now, when the n-th block is mined, the miners immediately start to mine the (n+1)-th block with the temporary payment tree. Nevertheless, the (n+1)-th payment pool depends on which of the two following cases occurs:

1. the block is mined outside the pool;
2. the block is mined by some pool's miner.

Suppose that the n-th block is mined outside the pool. Since no funds have arrived, the (n+1)-th payment pool is equal to the n-th one. See Figure 3.1.3.

Figure 3.1.3.

Suppose now that some pool's miner successfully mines the n-th block. The coinbase transaction is a funding transaction in the sense of Section 2.4, and we can apply the procedure "Sending funds to a payment pool". In particular, the temporary payment tree and the n-th payment pool are now both payment pools (since both have some funds locked to their root address). Now we only need to merge these two payment pools, according to Sections 3.2 and 2.3, using a merging transaction. The resulting payment pool is the (n+1)-th payment pool.

Figure 3.1.4.

We point out that, for safety reasons, the payment tree that manages the output of the merging transaction should be constructed before the merging transaction itself, using ANYPREVOUT and omitting the input references. See Section 3.2.

3.2 Operations on a payment pool for a mining pool

Merging of payment pools. Note that among the miners of a mining pool there may be more than one payment pool. For example:

1. if the pool finds a new block, the temporary payment pool and the n-th payment pool need to be merged;
2.
if a participant performs a unilateral withdrawal, or if he is expelled from the payment pool (these operations are described below), there are remaining payment pools that need to be merged.

Suppose we are in the second case. Conceptually, both situations can be viewed as a unilateral withdrawal (the procedure is the same: some transactions are published in order to let a participant withdraw). So, the payment pools that have to be merged are the sub payment pools, and we follow the procedure described in Section 2.5, paragraph "Unilateral withdrawal".

Figure 3.2.1.

Suppose we are in the first case. Then there are exactly two payment pools with funds: the temporary payment tree, which manages the output of the coinbase transaction, and the n-th payment pool, which manages the pool's previous output. It is possible to merge these payment pools using a merging transaction. As both outputs are P2TR, the merging transaction (see Figure 3.2.1 and Section 2.3 for the definition) spends them via the key-path. This requires the collaboration of all participants. If a participant is not collaborating (for example, he does not provide the signatures required for the merging transaction), he can be expelled from both payment pools, and the other participants can rejoin in a payment pool (see the next paragraph). We recall that, for safety reasons, a payment tree that manages the output of the merging transaction should be constructed, using ANYPREVOUT, before the merging transaction is compiled.

Expelling a non-collaborating participant. If a participant is not collaborating at some point, then he can be unilaterally ejected from the payment pool by the other participants. After the ejection, there are remaining sub payment pools. The participants of these payment pools can rejoin in a single payment pool by merging the sub payment pools. We remark that the costs of the whole procedure (in terms of transaction fees) can be shared by the participants.

Large payment pools.
If the number of participants is greater than 8, the payment trees can be implemented using the construction in Section 2.6.

3.3 Withdrawal from a payment pool

In Section 2.5 we saw two types of withdrawal, the coordinated and the unilateral one. Though the unilateral withdrawal always gives direct access to funds, the other participants are left with the task of regrouping all the remaining sub payment pools into a single payment pool, which needs a merging transaction with up to ⌈log2 m⌉ inputs, where m is the number of miners. Assuming that every participant withdraws funds regularly, choosing the unilateral withdrawal as the standard would lead to a large average occupied space on the blockchain (see Section 3.4). The same is true if a participant makes many withdrawals of small amounts of funds. Therefore, we set the following constraints.

1. Coordinated withdrawal is the standard type of withdrawal, and unilateral withdrawal is considered an attack on the payment pool (see Section 3.6).
2. Every participant should withdraw all his funds at every withdrawal.

Suppose that at block n the n-th payment pool has to be merged with the temporary payment tree, which has some locked funds. Call U_1 and U_2 respectively the co-owned UTXOs of these two payment pools, as in Figure 3.3.1.

Figure 3.3.1.

Suppose that some participants, say A and B, want to withdraw their funds. Their withdrawal is done through the merging transaction: the inputs of this transaction are U_1 and U_2, and the outputs are the funds that remain in the payment pool, locked to the root of the new payment tree, plus the funds of A and B respectively. Recall that, before this transaction is compiled, the participants have to construct the payment tree for the remaining funds.

Figure 3.3.2.

Figure 3.3.3.

To conclude this section, we remark that the payment tree serves as insurance that every participant has direct access to their funds, even in the case that the entire payment pool is compromised.
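The modified merging transaction used for a coordinated withdrawal can be sketched as follows; the field names and the fee handling are illustrative assumptions, not part of the specification:

```python
# Sketch of the modified merging transaction: inputs are the two
# co-owned UTXOs; outputs are one payout per withdrawing participant
# plus the root of the new payment tree. Fees are ignored here.

def merging_tx(u1: float, u2: float, balances: dict, withdrawing: set) -> dict:
    outputs = {p: balances[p] for p in withdrawing}   # direct payouts
    outputs["new_root"] = u1 + u2 - sum(balances[p] for p in withdrawing)
    return {"inputs": [u1, u2], "outputs": outputs}

# A and B withdraw their (post-coinbase) balances of 7.0 and 5.5 BTC
tx = merging_tx(100.0, 10.0, {"A": 7.0, "B": 5.5}, {"A", "B"})
print(tx["outputs"]["new_root"])  # 97.5 BTC stays in the pool
```

The balance invariant is that the outputs sum to the two inputs; in a real transaction, fees would be subtracted from one of the outputs.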
3.4 Average weight on the blockchain In P2Pool, the miners’ payout is the scheme in Figure 2.2.1, where every miner is paid directly through an output locked to his address in the coinbase transaction. So, the coinbase transaction has as many outputs as the number of the pool’s miners. As P2Pool is the only large non-custodial pool that has appeared, we make a comparison between our pool and the P2Pool payout scheme in terms of average space occupied on the blockchain, which is the average number of vB per block that the pool occupies for miners’ payout. We introduce some constants. • : on average, the pool has participants. • : on average, the pool finds a block every blocks. • : on average, for every block there are participants that want to withdraw. • : on average, every block there are participants that go offline. • : the weight in vB of a P2TR transaction input. • : the weight in vB of a P2TR transaction output. Now we calculate the average space per block on the blockchain of P2Pool. For the architecture of the pool, a miner is considered offline if he remains unresponsive for blocks in a row. If the pool finds a block, the miners are paid directly with coinbase outputs. So, the coinbase size is approximately and the average space occupied on the blockchain is The average space per block occupied on the blockchain by our scheme is a bit more difficult to calculate. First of all, the space occupied varies depending on whether or not the pool is KYC. This leads to two calculations. Note furthermore that withdrawal is supposed to be only of the coordinated type. Moreover, unresponsive participants are expelled by the payment pool; nevertheless, the number of participants of the payment pool is supposed to be constant. We remark that adding a participant to the pool has no impact on the blockchain, unless he withdraws his funds (in a coordinated or unilateral way) or becomes unresponsive. Average weight of merging and withdrawal.
As the withdrawal is performed during the merging of the payment pool and the temporary payment tree, we calculate the weight of both procedures at the same time. This boils down to calculating the size of the modified version of the merging transaction in Figure 3.3.2. This transaction has two inputs. The root of the new payment pool is an output of this transaction. On average, every blocks there are participants that want to withdraw. Therefore, this transaction has size . Therefore, the average weight per block is Average weight of expelling a non-collaborating participant. In Section 3.2 we saw that to expel a non-collaborating participant, we have to publish transactions of the payment tree. Each of these transactions has one input and two outputs. Then, we need a transaction to merge the remaining sub-payment pools. This transaction has inputs and one output. Therefore, if, on average, there are participants per block that go offline (and need to be expelled by the pool), the average weight of expelling is This is the most expensive operation, one that could potentially disrupt the payment pool structure, so it is important to minimise it. One easy way to do this is to make the pool KYC, which would make this cost zero. Average weight of the coinbase. This transaction has one input and one output, so, according to [10], the coinbase has a fixed weight of 111 vB. Since there is on average one coinbase every blocks, the average weight of the coinbase is . Non-KYC pool. If the pool is non-KYC, the total average weight is: Therefore, our pool’s payout scheme is better than the one of P2Pool if Let’s rewrite the inequality, where : This is equivalent to the following With easy asymptotic analysis, we can see that there is a large such that, for all , the inequality above is satisfied. KYC pool.
In the case of a KYC pool, the total average weight is In this case, our pool’s payout scheme is better than the one of P2Pool if The such that the inequality is satisfied for all is much smaller than the one in the non-KYC case. In the following analysis, we present numerical data using four hypothetical mining pools. Our assumptions include a value of k equal to 0.005, which means that every miner wants to withdraw their earnings after every 200 blocks (33 hours). The data presented here should be considered preliminary and incomplete, as it serves only to provide a rough estimate. The values of z are only relevant in cases where the pool is non-KYC; if the pool is KYC-compliant, z is always equal to zero. 1. The first pool we examined had a value of z equal to 0.01, indicating that on average, one miner goes offline every 100 blocks. The pool had 1000 participants (n=1000) with a high hashrate (br=10), which resulted in a block being found every 10 blocks. 2. The second pool had a value of z equal to 1, meaning that on average, one miner goes offline per block. The pool had 10000 participants (n=10000) and a lower hashrate (br=100), which resulted in a block being found every 100 blocks. 3. The third pool had a value of z equal to 0.01, with 1000 participants (n=1000) and a hashrate of 10 (br=10). 4. The fourth pool is equal to the third, but with a value of z equal to 1. The first pool has few participants who are large and reliable miners. The second pool has a low hashrate and many small and unreliable miners. The third pool is large, with many reliable miners, and the fourth is similar to the third, but with unreliable miners. We compare each pool under three cases based on the payout scheme: P2Pool, the proposed payout scheme, and the proposed scheme with z=0 (i.e., the pool is KYC-compliant).
Below we include the results for three kinds of payment schemes: the P2Pool scheme, our scheme with z=0 (KYC), and our scheme with z not equal to 0 (non-KYC).

         Pool 1    Pool 2    Pool 3     Pool 4
P2Pool   4305.75   4300.58   43005.75   43005.75
KYC      27.115    2.905     27.12      27.12
non-KYC  47.58     2716.74   54.25      2740.95

3.5 Collaboration between participants of a payment pool in a mining pool Collaboration is needed every time some multisig funds have to be moved, such as when merging payment pools or during a coordinated withdrawal. Nevertheless, it is not strictly necessary to make the funds accessible to everyone. Indeed, this scheme is conceived in such a way that no participant can hold hostage funds in multisig if he does not collaborate. For example, suppose that the current block (that is, the n-th block) is mined by some participant of the mining pool; the -th payment pool and the temporary payment tree manage the outputs and (see Figure 3.1.4) and are calculated before these outputs appear. According to Section 2.7, these payment trees work as a sort of insurance, making it impossible for a participant to hold hostage funds locked to a co-owned address. Indeed, if A is non-collaborative during the process of merging and in the output with the merging transaction , then A does not participate in the multisig process. In this case, A would be ejected from all the payment pools he is involved in, and the remaining participants rejoin into the new payment pool, according to Section 2.8. We remark that the temporary payment trees are constructed by the pool, and a miner agrees to mine with a particular temporary payment tree only after he has checked the precompiled transactions in the tree’s branches that require his signature. So, it is necessary and sufficient that the pool sends the entire payment tree to every miner at the same time, in order to avoid some miner finding a block using an incomplete payment tree.
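As a rough sanity check, the P2Pool figures in the table above can be reproduced with a short sketch. The constants are an assumption on our part (43 vB per P2TR output and 57.5 vB per P2TR input, the values given by the calculator in [10]), and the fixed per-transaction overhead is ignored, which the table values appear to omit; this is an illustration, not necessarily the authors' exact calculation.

```python
# Assumed P2TR sizes in vB (from the transaction-size calculator in [10]);
# the fixed transaction overhead is deliberately ignored here.
P2TR_IN_VB = 57.5
P2TR_OUT_VB = 43.0

def p2pool_avg_weight(n, br):
    """Average vB per block: one coinbase paying n miners, found every br blocks."""
    coinbase_vb = n * P2TR_OUT_VB + P2TR_IN_VB
    return coinbase_vb / br

print(p2pool_avg_weight(1000, 10))    # pool 1: 4305.75
print(p2pool_avg_weight(10000, 100))  # pool 2: 4300.575
```

With these assumed constants the result matches the P2Pool row for pools 1 and 2 to two decimal places.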
3.6 Possible attacks on a payment pool In this section we analyse the possible attacks that can be performed against our proposed scheme for a payment pool. An attacker would join the payment pool and then disrupt the normal activity of the pool. According to Section 2.8, two possible ways to do so are unilateral withdrawals and going offline many times. Then, the pool would split into many sub-payment pools that have to be re-merged. This requires collaboration between participants to sign the merging transactions and, moreover, the pool has to create a new payment tree. Every time that a participant goes offline, he does not collaborate in signing rounds, and this disrupts many processes, first of all the merge of payment pools. A possible way to disincentivize these two behaviours is to manage a form of payment pool entry fee. Call an amount of BTC. A new miner that wants to join the payment pool has to burn . Assume that the miner is already in the pool. If he exits the pool for some reason at the -th block (either by unilateral withdrawal or by being expelled as unresponsive), to rejoin the pool he has to pay a fee of burned funds that corresponds to the following amount: Where is the average hashrate of the miner and is the blocks’ interval between the n-th block and the smallest block before n at which the miner was online (namely the maximal interval during which the miner was online before block n). The functions and are strictly monotone and . We set this fee with this idea: if a participant wants to perform a DoS attack, he must use a mining device with great hashrate (so that becomes small) and take time to gain the “trust” of the pool (so that becomes small). A detailed fee scheme is out of the scope of this work. We remark that if a state performs this attack, the costs would be immaterial to it. A possible solution for this is to make the pool KYC, which would make this type of attack impossible.
3.7 Fees In a normal context, to deploy a payment pool, the participants must pay only the fees of the merging transaction in Figure 3.3.2. To expel an offline participant, transactions must be published, and merging the remaining sub-payment pools needs a transaction with as many inputs as the remaining sub-payment pools and with a single output. The fees can be calculated 4. Technical considerations about payment pools for mining pools 4.1 Wait 100 confirmations The coinbase funds can be spent only after 100 block confirmations. There are two solutions that can mitigate this problem. The first is to keep in memory at most payment pools, where there is one payment pool, that we call special, that collects the funds, and the other payment pools are waiting for block confirmations to be merged with the special payment pool. The second is to split the participants. If a payment pool has more than of the bitcoin hashrate, then the pool can split the participants into sub-mining pools, so that the probability for a sub-mining pool to find two blocks at distance is very low. 4.2 Aggregating pubkeys We assume also that there is a third party that runs the MuSig2 procedure, generating all the addresses and all the transactions as in the tree in Figure 2.2.3. One advantage of a centralised pool is that this task can be fulfilled by the pool for all the participants. We denote this situation by saying that the pool plays the Aggregator role. We proceed step-by-step with the construction of the payment pool with the tree in Figure 2.2.3 and a shared UTXO that is the output of a transaction emitted by the payer’s address . • Each participant communicates to the Aggregator the needed data, that is, his PubKey, the Nonces according to the MuSig2 protocol, and the amount of funds. • The Aggregator aggregates the participants’ PubKeys and creates the corresponding bitcoin addresses , etc. • The Aggregator creates the transactions that correspond to the tree’s edges as above.
• Each participant signs these transactions with ANYPREVOUT and sends back the signature. • The Aggregator aggregates the signatures and sends back to each participant the batched signature of every transaction that he is involved in. • The Aggregator creates the funding transaction that sends funds to the . 4.3 Qualitative anatomy of transactions that use ANYPREVOUT For completeness, we include a description of a transaction that spends an input signed with ANYPREVOUT. Consider, for example, the transaction in Figure 2.2.3; we now give a qualitative description of how this transaction would look. According to [12], spends the P2TR output of the coinbase transaction via the script path. This output is of the form 1 <q>, where 1 is the witness version and q is the tweaked pubkey, namely the x-component of the point , where is the generator of the elliptic curve, is the hash function, is the point that corresponds to the aggregated public key of the participants ( for an even number ) and h is the -byte hash of a script s that is described below (in general, h is the Merkle root of some Merkle tree). Observe that the address starts with bc1p <bech32 encoding of q> <6-characters-checksum>. The witness of the transaction input is of the form <n> <d> <s> <c>, where n is the number of witness elements (that in this case is always , see the Specifications of BIP 342 [5]), d is the data for the following item s, that is the script used to tweak the public key p, and c is the control block. The control block is of the form 0xc0 <p> <b>, where p is the internal pubkey. Now, d is of the form where sig is the signature of the transaction that has a specific hash_type flag, indicating that it is committed to all the outputs and the amount of the input, but not to any reference of the input.
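The tweak step that turns the internal pubkey p and the script hash h into the tweaked key q uses the BIP-341 "TapTweak" tagged hash. Below is a minimal sketch of that step; the byte values of p and h are placeholders, and the final point addition q = P + t·G is only described in a comment, since it requires a secp256k1 library.

```python
import hashlib

def tagged_hash(tag: str, msg: bytes) -> bytes:
    """BIP-340/341 tagged hash: SHA256(SHA256(tag) || SHA256(tag) || msg)."""
    tag_digest = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_digest + tag_digest + msg).digest()

# Placeholder 32-byte values: in the real construction, p is the x-only
# aggregated internal pubkey (from MuSig2) and h is the Merkle root of the
# script tree.
p = bytes(32)
h = bytes(32)
t = tagged_hash("TapTweak", p + h)
# q is then the x-coordinate of the point P + t*G; the elliptic-curve
# arithmetic is omitted here because it needs a secp256k1 library.
print(len(t))  # → 32
```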
Now, s contains a script that checks the signature sig against p, which is an aggregated Schnorr public key, namely a script of the type <p> OP_CHECKSIG This transaction has two outputs that correspond to the first two branches of the payment tree in Figure 2.2.3. The first output is of the form 1 <q1>, where q1 is the tweaked public key of the aggregated participants and . The script used for creating q1 is <p1> OP_CHECKSIG where p1 is the internal public key that corresponds to q1. The second output is similar, with the aggregated pubkey of replaced with the pubkey of . 5. Payment pools using CTV and TLUV 5.1 CTV payment pools In this paragraph we analyse the construction of a payment pool using the opcode OP_CHECKTEMPLATEVERIFY (OP_CTV for brevity). The coinbase has an output with a ScriptPubKey of the form ScriptPubkey: “0 H1”, where H1 is of the form <32-bytes-hash>, namely a P2WSH. To spend this output, a witness of the form Witness: “H2 OP_CTV” must be presented, where H2 is of the form <32-bytes-hash>. If the hash of the witness coincides with H1, then the script is executed and the output is spent. The value H2 is the hash committed to the following list of fields: 1. nVersion, 2. nLockTime, 3. scriptSig hash (for the non-segwit case), 4. input count, 5. sequences hash, 6. output count, 7. outputs hash, 8. input index. These values refer to the transaction that spends the funding output. We can see that in this transaction there is no signature required. So, one may be tempted to use CTV transactions to implement a payment pool as in Chapter 2, where the miners’ pubkeys appear in the leaf transactions. This would not need any round of signatures. Nevertheless, with this implementation, money can be spent only by flowing through the tree in Figure 2.2.3, ending up at the mining pool participants. So, if one participant wants to withdraw, everyone is forced to withdraw.
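To make the field-list commitment concrete, here is an illustrative sketch of hashing those fields into a template hash. This is deliberately simplified and is NOT the exact BIP-119 DefaultCheckTemplateVerifyHash serialization (the real encoding and field handling differ); it only shows why any change to a committed field yields a different hash, templating the spending transaction in advance.

```python
import hashlib
import struct

def simplified_ctv_hash(n_version, n_locktime, sequences, outputs, input_index):
    """Illustrative commitment over the fields listed above.
    NOTE: simplified sketch, not the exact BIP-119 serialization."""
    sequences_hash = hashlib.sha256(
        b"".join(struct.pack("<I", s) for s in sequences)).digest()
    outputs_hash = hashlib.sha256(b"".join(outputs)).digest()
    payload = (
        struct.pack("<i", n_version)          # 1. nVersion
        + struct.pack("<I", n_locktime)       # 2. nLockTime
        + struct.pack("<I", len(sequences))   # 4. input count
        + sequences_hash                      # 5. sequences hash
        + struct.pack("<I", len(outputs))     # 6. output count
        + outputs_hash                        # 7. outputs hash
        + struct.pack("<I", input_index)      # 8. input index
    )
    return hashlib.sha256(payload).digest()

# Changing any output changes the hash, so the spender is fully templated.
h1 = simplified_ctv_hash(2, 0, [0xFFFFFFFF], [b"output-to-payment-tree"], 0)
h2 = simplified_ctv_hash(2, 0, [0xFFFFFFFF], [b"different-output"], 0)
assert h1 != h2
```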
As a consequence of this, it is not possible to merge payment pools, and it follows that this implementation is actually worse than directly redistributing the funds by adding the address of every participant to the outputs of the transaction that spends the output of the funding transaction. Indeed, if the pool has participants, this implementation requires at least transactions that would occupy more space on the blockchain (even though more efficiently). One proposal of a payment pool that uses CTV appears in [8], on day 13 of the Bitcoin Advent Calendar of Jeremy Rubin, the author of BIP-119. On the other hand, that solution uses a multisignature scheme, so there are no real benefits compared to the scheme we proposed in Chapter 2. 5.2 TLUV payment pools In this section we examine the possibility of implementing a payment pool using TLUV, proposed on the bitcoin mailing list in [11]. The BIP proposal has a new opcode, TLUV. This opcode allows modifying the Merkle tree, trimming some branches or adding a new script, and removing signatures from the aggregated internal pubkey. The syntax for TLUV is the following Y H C TLUV where Y represents the tweak of the internal pubkey, namely the internal pubkey is modified in terms of Y (this is used to remove signatures from an internal aggregated pubkey), H is the leaf script that is added to the Merkle tree and C is an integer that represents “how much” the Merkle tree gets trimmed. If Y=0 or H=0, then, respectively, no modification is performed to the internal pubkey or no script is added to the Merkle tree. The BIP contains another new opcode, IN_OUT_AMOUNT, that pushes two items onto the stack: the amount from this input’s UTXO and the amount in the corresponding output; it then expects anyone using TLUV to use maths operators to verify that funds are being appropriately retained in the updated scriptPubKey. We give a brief description of how TLUV may be used to deploy a payment pool.
We follow the scheme that appears in the original mail of the author on the bitcoin mailing list [13]. Suppose that are participants of a payment pool that share a UTXO O1. There are scripts S1, S2, S3, and so on, such that Si is the script that allows participant to withdraw his funds. In other words, if wants to withdraw his funds, he provides the valid script Si, which verifies the correct amount of funds returned to the pool using IN_OUT_AMOUNT and contains something like <pubkey of X> 0 2 TLUV where the pubkey of is removed from the internal aggregated pubkey, 0 means that no leaf is added to the Merkle tree and 2 means that the current script is removed from the leaves of the Merkle tree. This payment pool has an advantage, as participants can withdraw whenever they want with just a single transaction, while in the payment pool scheme that we described and implemented in Chapters 2 and 3, withdrawal is performed during the merging of payment pools or unilaterally, disrupting the activity of the payment pool. Nevertheless, if a participant becomes inactive, then all the others have to exit the payment pool, each using his script, and rejoin a new one. This requires transactions, while ejecting a participant with the previous scheme needs transactions. If some of the scripts S1, S2, S3, etc. need the of the previous transaction, a TLUV-only payment pool has to calculate a taproot address for every extranonce that every miner is mining with, and this is infeasible (see Section 2.7). Using TLUV it is therefore possible to implement the second naive payment pool scheme of Section 2.2 without presigning transactions, where is the number of participants of the payment pool. Again, this implementation would be convenient as long as every participant is online. If a participant does not collaborate, then all the other participants have to exit using their scripts and rejoin another payment pool.
Unfortunately, this requires transactions, while with our scheme, ejecting a non-collaborative participant would require only transactions to be published. 5.3 Payment pools for mining pools using ANYPREVOUT and TLUV In this section we discuss the use of TLUV and ANYPREVOUT in synergy in order to deploy a payment pool that has the benefits of both proposals. Assume that the payment pool is deployed among the miners of a mining pool who want to share their funds in a single UTXO and want to add the coinbase reward to the payment pool. We use the notation of Section 3.1. Firstly, we can assume that the -th payment pool uses TLUV to allow everyone to withdraw their funds with the corresponding scripts, and a payment tree, built with ANYPREVOUT, that allows the pool to eject a non-collaborative participant. The temporary payment tree is constructed as in Section 3.1. If some miner finds a block, then the temporary payment tree and the -th payment pool have to be merged into a third payment pool. The latter uses TLUV to allow cheap withdrawal and a payment tree to allow ejecting non-collaborative participants; the payment tree has to be constructed earlier, using ANYPREVOUT. If a participant does not collaborate, then he can be ejected using the payment trees and the participants can rejoin into a new payment pool. So, assuming TLUV and ANYPREVOUT, it is possible to implement a payment pool for a mining pool’s miners (that we suppose to be in number) with the following characteristics. • Participants can withdraw using one single transaction at any time. • If some miner of the pool finds a new block, the coinbase output can be merged with the previous payment pool into a new payment pool. • If a participant does not collaborate, he can be unilaterally ejected with transactions. 6.
Conclusions From Section 5.1, it seems that the rigidity of CTV makes it difficult to deploy a payment pool that is suitable for our case, even though we leave open the possibility of some clever implementation we are not aware of. It is the opinion of the authors that the following are the most interesting schemes for a payment pool suitable for our purposes. 1. Using only ANYPREVOUT 2. Using ANYPREVOUT and TLUV 3. Using only TLUV Note that, as TLUV is in a very early phase of development, it is not clear whether it is affected by Problem 1 of Section 4.1. In the worst case, TLUV alone may not be enough to implement a payment pool suitable for our purposes; in this case, ANYPREVOUT would be needed. We present a summary of the pros and cons of the schemes analysed.

ANYPREVOUT — Pros: requires only one BIP; cheapest for unilateral ejection; low communication. Cons: expensive for withdrawal.
TLUV — Pros: easiest to deploy; cheapest for withdrawal. Cons: expensive for unilateral ejection; lightly discussed; may not be enough.
ANYPREVOUT+TLUV — Pros: cheapest for withdrawal and unilateral ejection; low communication. Cons: difficult to deploy; requires 2 BIPs, one of which is in a very early stage.

So, we think that, for our purposes, an ANYPREVOUT-only payment pool would be the best choice to implement first, leaving space for a TLUV update. We also remark that our scheme is suitable for KYC pools, as the average weight on the blockchain occupied by the normal activity of the payment pool is very limited (see Section 3.4).
[1] https://anyprevout.xyz/ [2] https://www.mail-archive.com/bitcoin-dev@lists.linuxfoundation.org/msg11383.html [3] https://braiins.com/stratum-v2 [4] https://github.com/bitcoin/bips/blob/master/bip-0119.mediawiki [5] https://github.com/bitcoin/bips/blob/master/bip-0342.mediawiki [6] https://github.com/jonasnick/bips/blob/musig2/bip-musig2.mediawiki [7] Jonas Nick, Tim Ruffing, and Yannick Seurin, “MuSig2: Simple Two-Round Schnorr Multi-Signatures”, Cryptology ePrint Archive, 2020/1261. [8] https://rubin.io/bitcoin/2021/12/10/advent-13/ [9] https://bitcoinops.org/en/topics/cpfp/ [10] https://bitcoinops.org/en/tools/calc-size/ [11] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-September/019419.html [14] G. Naumenko, A. Riard, “CoinPool: efficient off-chain payment pools for Bitcoin”, https://coinpool.dev/v0.1.pdf. [15] https://laurentiapool.org/wp-content/uploads/2020/05/laurentiapool_whitepaper.pdf
Common Core: Geometry Archives – SCOPES-DF
Common Core: Geometry
• Know the formulas for the volumes of cones, cylinders, and spheres and use them to solve real-world and mathematical problems.
• Verify experimentally the properties of rotations, reflections, and translations:
• Lines are taken to lines, and line segments to line segments of the same length.
• Understand that a two-dimensional figure is congruent to another if the second can be obtained from the first by a sequence of rotations, reflections, and translations; given two congruent figures, describe a sequence that exhibits the congruence between them.
• Describe the effect of dilations, translations, rotations, and reflections on two-dimensional figures using coordinates.
• Understand that a two-dimensional figure is similar to another if the second can be obtained from the first by a sequence of rotations, reflections, translations, and dilations; given two similar two-dimensional figures, describe a sequence that exhibits the similarity between them.
• Use informal arguments to establish facts about the angle sum and exterior angle of triangles, about the angles created when parallel lines are cut by a transversal, and the angle-angle criterion for similarity of triangles. For example, arrange three copies of the same triangle so that the sum of the three angles appears to form … (8.G.A5)
• Apply the Pythagorean Theorem to determine unknown side lengths in right triangles in real-world and mathematical problems in two and three dimensions.
• Apply the Pythagorean Theorem to find the distance between two points in a coordinate system.
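The last two standards above (applying the Pythagorean Theorem to side lengths and to coordinate distance) can be illustrated with a one-line function:

```python
import math

def distance(p1, p2):
    """Distance between two points, via the Pythagorean Theorem."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

print(distance((0, 0), (3, 4)))  # → 5.0
```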
What Is Unit Weight | What Is Density | What Is Unit Weight Material | Unit Weight Building Materials What Is Unit Weight? The ratio of the weight of a material to its volume is its unit weight, sometimes termed specific weight or weight density. The unit weight of water, γw, is 9.81 kN/m^3 in the SI system and 62.4 lb/ft^3 in the English system. What Is Density? The term density is used herein to denote the mass-to-volume ratio of the material. However, some references, particularly older ones, use the term to describe unit weight. Density is denoted by ρ. Because m = W/g, the unit weight terms defined above can be converted to mass densities as follows: ρ = M/V, where ρ = density, M = mass, V = volume. In the SI system, mass densities are commonly expressed in Mg/m^3, kg/m^3, or g/ml. The mass density of water can therefore be expressed as ρw = 1000 kg/m^3 = 1 Mg/m^3 = 1 g/ml. The mass density of soil solids typically ranges from 2640 to 2750 kg/m^3. Where mass or mass density values (g, kg, or kg/m^3) are given or measured, they must be multiplied by g (9.8 m/s^2) to obtain weights or unit weights before performing stress calculations. In the English system, mass density values are virtually never used in geotechnical engineering and all work is performed in terms of unit weights (lb/ft^3). What Is Unit Weight Material? The unit weight of a material, also known as its specific weight, is the weight of the material per unit volume. Volume is measured in litres or m^3 and weight is measured in kg or kN, so the unit weight is expressed in kg/L, kg/m^3 or kN/m^3. For easy reference, we organized all the building materials’ unit weights in a table. This list is a collective effort.
Unit Weight of Building Materials (all values in Kg/m^3)

1. A.C. Sheet — 17
2. Aerocon Bricks — 551 to 600
3. Alcohol — 780
4. Aluminum — 2739
5. Anthracite Coal — 1550
6. Ashes — 650
7. Ballast — 1720
8. Birch Wood — 670
9. Bitumen — 1040
10. Bituminous Concrete — 2243
11. Bituminous Macadam — 2400
12. Brick — 1600 to 1920
13. Brick Jelly — 1420
14. Brick Masonry — 1920
15. Cast Iron — 7203
16. Cement Slurry — 1442
17. Cement Concrete Block — 1800
18. Cement Grout — 1500 to 1800
19. Cement Mortar — 2000
20. Cement Plaster — 2000
21. Cement — 1400
22. Chalk — 2220
23. Clay (damp) — 1760
24. Clay (dry) — 1600
25. Clinker — 750
26. Coal Tar — 1200
27. Coarse Aggregate — 1680 to 1750
28. Cobalt — 8746
29. Copper — 8940
30. Crude Oil — 880
31. Cuddapa — 2720
32. Diesel — 745
33. Dry Rubble Masonry — 2080
34. Earth (dry, loose) — 1200
35. Fly Ash — 1120 to 1500
36. Fly Ash Brick Masonry — 2000 to 2100
37. Fly Ash Bricks — 1468 to 1700
38. Galvanized Iron Steel (0.56 mm) — 5
39. Galvanized Iron Steel (1.63 mm) — 13
40. Gasoline — 670
41. Geopolymer Concrete — 2400
42. Glass Reinforced Concrete — 2000 to 2100
43. Granite Stone — 2460 to 2800
44. Graphite — 1200
45. Gravel Soil — 2000
46. Green Concrete — 2315 to 2499
47. Heavy Charcoal — 530
48. Ice — 910
49. Igneous Rocks (felsic) — 2700
50. Igneous Rocks (mafic) — 3000
51. Kerosene — 800
52. Larch Wood — 590
53. Laterite Stone — 1019
54. Lead — 11340
55. Light Charcoal — 300
56. Lightweight Concrete — 800 to 1000
57. Lime Concrete — 1900
58. Lime Plaster — 1700
59. Limestone — 2400 to 2720
60. M Sand — 1540
61. Magnesium — 1738
62. Mahogany — 545
63. Mangalore Tiles with Battens — 65
64. Maple — 755
65. Marble Stone — 2620
66. Metamorphic Rocks — 2700
67. Mud — 1600 to 1920
68. Nickel — 8908
69. Nitric Acid (91 percent) — 1510
70. Oak — 730
71. Peat — 750
72. Petrol — 720
73. Pitch — 1100
74. Plain Cement Concrete — 2300
75. Plaster of Paris — 881
76. Plastics — 1250
77. Quarry Dust — 1300 to 1450
78. Quartz — 2320
79. Quick Lime — 33450
80. Rapid Hardening Cement — 1250
81. Red Wood — 450 to 510
82. Reinforced Cement Concrete — 2400
83. Rubber — 1300
84. Rubble Stone — 1600 to 1750
85. Sal Wood — 990
86. Sand — 1440 to 1700
87. Sandstone — 2250 to 2400
88. Sedimentary Rocks — 2600
89. Shale Gas — 2500
90. Silt — 2100
91. Slag — 1500
92. Stainless Steel — 7480
93. Steel — 7850
94. Sulphuric Acid (87 percent) — 1790
95. Teak — 630 to 720
96. Tin — 7280
97. Water — 1000
98. Zinc — 7135

Unit Weight of Materials The unit weight or specific weight of any material is its weight per unit volume; that is, how much weight of the material fits in a unit volume. Volume is measured in litres or cubic metres, and weight is expressed in kg or kilonewtons. Unit Weight Unit weight is the weight per unit volume of a material: Unit Weight = Weight of Material / Volume of Material. Example: U.W. = 103.2 lb per cubic foot = 103.2 pcf. What Is Unit Weight? The unit weight of a substance is calculated by dividing its weight by its volume. In the International System of Units (SI), the unit weight is typically expressed in newtons per cubic meter (N/m³) or kilograms per cubic meter (kg/m³). In the United States customary units, it is commonly expressed in pounds per cubic foot (lb/ft³). Rubble Density Kg/m3 The bulk density ranges from 1.86 g/cm³ to 2.33 g/cm³.
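The definitions above (ρ = M/V, then unit weight γ = ρ·g) can be sketched in a few lines, using water as the worked example; the conversion factor to lb/ft³ is a standard one, added here for illustration.

```python
# Minimal sketch of the definitions above: density rho = M / V, then
# unit weight gamma = rho * g. Water is used as the worked example.
G = 9.80665  # standard gravity in m/s^2

def unit_weight_kn_m3(mass_kg, volume_m3):
    """Unit (specific) weight in kN/m^3 from mass and volume."""
    rho = mass_kg / volume_m3       # density in kg/m^3
    return rho * G / 1000.0         # gamma = rho * g, then N/m^3 -> kN/m^3

def density_to_lb_ft3(rho_kg_m3):
    """Convert a density from kg/m^3 to lb/ft^3 (1 kg/m^3 ≈ 0.062428 lb/ft^3)."""
    return rho_kg_m3 * 0.062428

# Water: 1000 kg per cubic metre
print(round(unit_weight_kn_m3(1000.0, 1.0), 2))  # → 9.81 (kN/m^3)
print(round(density_to_lb_ft3(1000.0), 1))       # → 62.4 (lb/ft^3)
```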
Unit Weight Definition Unit weight, also known as specific weight or weight density, is a measure of the weight of a substance per unit volume. It quantifies the amount of mass in a given volume of a material and is expressed in units of force per unit volume, such as newtons per cubic meter (N/m³) or pounds per cubic foot (lb/ft³). What Is Density? Density is the substance’s mass per unit of volume. The symbol most often used for density is ρ, although the Latin letter D can also be used. Mathematically, density is defined as mass divided by volume: where ρ is the density, m is the mass, and V is the volume. Unit Weight Means The specific weight, also known as the unit weight, is the weight per unit volume of a material. A commonly used value is the specific weight of water on Earth at 4 °C (39 °F), which is 9.807 kilonewtons per cubic metre or 62.43 pounds-force per cubic foot. Cement Unit Weight Normal weight concrete is in the range of 140 – 150 lbs./cu. ft. For normal weight concrete, a change in unit weight of 1.5 lbs./cu. ft. Define Unit Weight The specific weight, also known as the unit weight, is the weight per unit volume of a material. A commonly used value is the specific weight of water on Earth at 4 °C, which is 9.807 kilonewtons per cubic metre or 62.43 pounds-force per cubic foot. Density of Rubble Stone The density of rubble stone can range from approximately 1500 to 2500 kilograms per cubic meter (kg/m³) or 94 to 156 pounds per cubic foot (lb/ft³). However, it’s important to note that these values are approximate and can vary based on the specific type of rubble stone, its porosity, and other factors. Density Vs Unit Weight Density is mass per unit volume, whereas unit weight is force per unit volume. In this standard, density is given only in SI units. After the density has been determined, the unit weight is calculated in SI or inch-pound units, or both. What Is a Unit Weight? 
The unit of measurement for weight is that of force, which in the International System of Units (SI) is the newton. For example, an object with a mass of one kilogram has a weight of about 9.8 newtons on the surface of the Earth, and about one-sixth as much on the Moon.

Density of Laterite Stone
Laterite weighs 1.02 g/cm³, or 1,019 kg/m³, i.e. the density of laterite is 1,019 kg/m³ (a unit weight of about 10.0 kN/m³).

Unit Weight of Cement
The unit weight of cement can vary slightly depending on the specific type and brand. As a general guideline, the typical unit weight of cement is around 1440 kilograms per cubic metre (kg/m³), or 90 pounds per cubic foot (lb/ft³).

Unit Weight of Construction Materials
Unit weight of a material is its weight per unit volume; in everyday practice it is used interchangeably with density (mass per volume, M/V), though strictly the two differ by the factor g, as noted under "Density vs Unit Weight" above.
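Since the table above lists densities in kg/m³ while design work often needs unit weight in kN/m³, the conversion γ = ρ·g can be sketched as follows (an illustrative snippet; the function name is made up and not part of the original page):

```python
# Sketch: converting density (kg/m^3) to unit weight (kN/m^3) via gamma = rho * g.
# The material values used below are illustrative figures from the table above.

G = 9.80665  # standard gravity, m/s^2

def unit_weight_kn_per_m3(density_kg_per_m3: float) -> float:
    """Unit weight (kN/m^3) = density (kg/m^3) * g / 1000."""
    return density_kg_per_m3 * G / 1000.0

print(round(unit_weight_kn_per_m3(1000), 2))   # water: 9.81 kN/m^3
print(round(unit_weight_kn_per_m3(7850), 1))   # steel: 77.0 kN/m^3
```

This also recovers the laterite figure quoted above: 1,019 kg/m³ converts to roughly 10.0 kN/m³.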
{"url":"https://civiljungles.com/unit-weight-building-materials/","timestamp":"2024-11-12T13:40:20Z","content_type":"text/html","content_length":"169101","record_id":"<urn:uuid:c34dc114-a077-4ab4-91e8-4c77de1f4f5e>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00291.warc.gz"}
10 - Ix: the moment of inertia for the right-angle triangle, Case 2.
Last Updated on March 5, 2024 by Maged kamel
The moment of inertia Ix of the case-2 right-angle triangle, using a horizontal strip.
To get the value of the area and moment of inertia for regular shapes, we can refer to the NCEES reference handbook, section 3.50. The moment of inertia Ix for Case 2 is the first item in the table. The right-angle triangle on the right side of the next slide represents case No. 2; please refer to the next slide image.
Step-by-step guide for the calculation of the moment of inertia, Case 2.
1. To estimate Ix for Case 2, a horizontal strip is used; the strip thickness is dy and it sits at height y.
2. Since the strip starts from the base and intersects with line AC, the y-value satisfies the equation of line AC, y = h*x/b.
3. The moment of inertia due to that strip is dA*y^2. The strip's left edge is at distance x.
4. The width of the strip is (b - x) and its depth is dy.
5. We substitute the value of dA and express every term as a function of y, since we will integrate from y = 0 to y = h for the Ix value.
6. After performing the integration, we get the moment of inertia of the case-2 right-angle triangle as Ix = b*h^3/12 about the x-axis passing through the base: Ix = (base)*(height)^3/12.
7. For Ix at the CG, we use the parallel-axis theorem and deduct the product A*(y-bar)^2.
8. Here x-bar = b/3 and y-bar = h/3. Finally, we get Ixg = b*h^3/36.
The moment of inertia of the case-2 right-angle triangle, using a vertical strip as an alternative method.
1. For the moment of inertia of the case-2 right-angle triangle, the alternative method uses a vertical strip; the strip width is dx and its height is y.
2. Since the strip height starts from the base and intersects with line BC, the y-value satisfies the equation of line BC.
3. The inertia due to that strip is dx*y^3/3, as derived from the case of the rectangle.
4. Integrate from x = 0 to x = b; the final answer is Ix = b*h^3/12, the same as with the horizontal strip.
The radius of gyration can be calculated as the square root of (Ix/A). For the estimation of Ixg at the CG, we subtract the product A*(y-bar)^2. We have the area A = (1/2)*b*h, while (y-bar)^2 = (h/3)^2. We get Ixg = b*h^3/36, from which K^2g = h^2/18, as shown in the next slide.
This is the pdf file used in the illustration of this post.
For an external resource with solved problems, see the definition of the moment of inertia (the second moment of area).
This is a link to the playlist for all videos on inertia.
The next post is the moment of inertia for the right-angle triangle, Iy, Case 2.
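The two results above can be checked numerically. The sketch below (illustrative code, not from the post) places the triangle with its base on the x-axis and legs b = 3, h = 4, integrates the horizontal strips, and then applies the parallel-axis theorem:

```python
# Numeric check of Ix = b*h^3/12 about the base and Ixg = b*h^3/36 about the
# centroid, for the case-2 right-angle triangle. From y = h*x/b and strip
# width (b - x), the strip width at height y is b*(1 - y/h).

b, h = 3.0, 4.0
n = 200_000
dy = h / n

# Ix about the base: integral of width(y) * y^2 dy, midpoint rule
Ix = sum(b * (1 - (i + 0.5) * dy / h) * ((i + 0.5) * dy) ** 2 * dy
         for i in range(n))

A = 0.5 * b * h
ybar = h / 3
Ixg = Ix - A * ybar ** 2        # parallel-axis theorem

print(round(Ix, 3))    # ~ b*h^3/12 = 16.0
print(round(Ixg, 3))   # ~ b*h^3/36 = 5.333
```

The same check with A*(x-bar)^2 in place of A*(y-bar)^2 would not reproduce b*h^3/36, which is why step 7 above must use the centroid distance measured perpendicular to the x-axis.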
{"url":"https://magedkamel.com/10-ix-the-moment-of-inertia/","timestamp":"2024-11-02T17:28:26Z","content_type":"text/html","content_length":"196062","record_id":"<urn:uuid:61fc2097-4950-4863-a0b7-9b501d412bfa>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00710.warc.gz"}
From Divide and Conquer to Parallelization
As computers take over the world, algorithms increasingly rule it. I'm not saying we should use algorithms for everything, at least not as Dr. Sheldon Cooper uses an algorithm to make friends. Great concepts such as divide and conquer or parallelization have developed a lot and found plenty of applications, including in the way we deal with real-life issues. Consider Facebook, Youtube or Wikipedia, whose success is based on these concepts, distributing the responsibility of feeding them with content. What's less known is that astronomy also went distributed, as it has asked everyone to contribute to its development by classifying galaxies on this website. Let's also add Science4All to that list (we'll get there!), which counts on each and every one of you, not only to contribute by writing articles, but also to share the pages and tell your friends! But what has come is nothing compared to what's coming. In his book Physics of the Future, the renowned theoretical physicist Michio Kaku predicts the future of computers. Today, computers have only one chip, which may be divided into two or four cores. Chips give computers their computing power. But in the future, chips will be so small and so cheap that they will be everywhere, just like electricity today. Thousands of chips will take care of each and every one of us. I'll let you imagine the infinite possibilities of parallelization. In this article, we'll present how the divide and conquer approach improves sorting algorithms. Then we'll talk about parallelization and the major fundamental problem $P=NC$.
Sorting algorithms
The problem of sorting data is crucial, and not only in computer science. Imagine Sheldon wanted to hire a friend to take him to the comic store. He'd make a list of his friends, ranked by how good friends they are, then he'd ask each friend to come with him.
If the friend refuses, he'd have to go to the next friend on the list. Thus, he needs to sort his friends. This can take a while, even for Dr. Sheldon Cooper. Really? I mean, Sheldon only has a handful of friends, doesn't he? Even if he only has 6 friends, if he does the sorting badly, it can take him a while. For instance, he could use the bogosort: write his friends' names on cards, throw the cards in the air, pick them up, test if they are sorted, and, if they are not, repeat from step 1. Even though this will eventually work, it will probably take a while. On average, it will take as many throws in the air as the number of ways to order 6 friends, that is $6! = 6 \times 5 \times 4 \times 3 \times 2 \times 1 = 720$. But don't underestimate Dr. Cooper's circle. In the pilot of the show, he claims to have 212 friends on MySpace (and that doesn't even include Penny and Amy)! Sorting 212 friends will take a while. With the bogosort, it will take on average about $212! \approx 10^{400}$ throws in the air, which cannot be done in billions of years even with very powerful computers… Obviously, he'll need to do it smartly. Can't he just take his most favorite friend that hasn't refused yet until finding one? Yes he can. At the first iteration, he'd have to look through the 212 friends. Then, if the favorite friend refuses, through the remaining 211 friends. Then, if the second favorite friend refuses, through the remaining 210 friends… and so on. In the worst case where all his friends refuse (which is a scenario Sheldon really should consider…), he'll have to do $212+211+210+…+1$ basic operations. That is equal to $212 \times 213/2 = 22,578$. Even for Dr. Cooper, this will take a while. What he's done is almost the selection sort algorithm. This algorithm uses two lists: one called the remaining list, initially filled with all friends, and the other, called the sorted list, initially empty.
At each iteration, the most favorite friend of the remaining list is selected and removed from that list, and appended to the sorted list. If the number of friends is $n$, the number of operations required by this algorithm is about $n^2/2$. We say that it has a quadratic time complexity. There are several other sorting algorithms with quadratic time complexity that you can think of, including the insertion sort, the bubble sort and the gnome sort. But there are better algorithms, mainly based on the divide and conquer principle. Divide and conquer? What do you mean? Divide and conquer is an extremely powerful way of thinking about problems. The idea is to divide a main problem into smaller subproblems that are easier to solve. Solving the main problem then reduces to dividing it into subproblems, solving the subproblems, and merging their results. This is particularly powerful in the case of parallel algorithms, which I'll get back to later in this article. In our case, we're going to divide the problem of sorting all of the friends into two subproblems of sorting each half of the friends. Now, there are mainly two ways of handling the dividing and merging phases. Either we focus on the dividing phase, and we'll be describing the quick sort, or we focus on the merging phase, and this will lead us to the merge sort. Although I actually prefer the merge sort (and I'll explain why later), I will be describing the quick sort. As I said, the idea of the quick sort is to focus on the dividing phase. What we'll be doing in this phase is dividing the list of friends into a list of relatively good friends and a list of relatively bad friends. In order to do that, we need to define a friend that's in between the two lists, called the pivot. This pivot is usually the first element of the list or an element picked randomly in the list. Friends preferred to the pivot will go into the first list, the others into the second list.
In each list, we solve a subproblem that gets it sorted. Merging will become very easy as it will simply consist in appending the second list to the first one. OK, I know how to divide and merge. But how do we solve the subproblem? The subproblem is identical to the main problem… Obviously, if a list has only 1 or 2 elements, we don’t need to apply a quick sort to sort it… But in other cases, we can use the quick sort for the subproblems! That means that we will apply a quick sort to our two lists, which will sort them. Can we use an algorithm in its own definition? Yes, we can, because we use it for a strictly simpler problem, and if the problem is already too simple (that is, we’re trying to sort a list of 1 or 2 elements), we don’t use the algorithm. Let’s apply the quick sort on Sheldon’s example if we only consider his close friends. And you’re saying that this algorithm performs better than the selection sort? Yes that’s what I’m saying. Let’s do a little bit of math to prove that. Suppose we have $n$ friends to sort. Denote $T(n)$ the number of operations to perform the quick sort. The dividing phase takes $n$ operations. Once divided, we get two equally smaller problems. Solving one of them corresponds to sorting $n/2$ friends with the quick sort, which takes $T(n/2)$. Solving the two of them therefore takes twice as long, that is $2T(n/2)$. The merging phase can be done very quickly with only 1 operation. That’s why, we have the relation $T(n) = 2T(n/2)+n+1$. Solving this equation leads you to approximate $T(n)$ by $n \log(n)/\log(2)$. For great values of $n$, this number is much much less than $n^2/2$. In Sheldon’s example in which he has 212 friends, the number of operations required by the quick sort is about 1,600, which is much less than the 22 thousand operations required by the selection sort! And if you’re not impressed yet, imagine the problem Google faces when sorting 500 billion web pages. 
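The selection sort and the quick sort discussed above can be sketched as follows (illustrative code, not the article's; friends are numeric scores, higher meaning more preferred, and a comparison counter makes the ~n²/2 versus ~n·log₂(n) behavior visible for n = 212):

```python
# Two sorting sketches with comparison counters. Lists come out in
# decreasing order of preference (most favorite friend first).
import random

def selection_sort(friends):
    remaining, ordered, comparisons = list(friends), [], 0
    while remaining:
        best = 0
        for i in range(1, len(remaining)):   # scan for the current favorite
            comparisons += 1
            if remaining[i] > remaining[best]:
                best = i
        ordered.append(remaining.pop(best))
    return ordered, comparisons              # exactly n*(n-1)/2 comparisons

def quick_sort(friends):
    if len(friends) <= 1:
        return list(friends), 0
    pivot = random.choice(friends)           # random pivot
    better = [f for f in friends if f > pivot]
    equal  = [f for f in friends if f == pivot]
    worse  = [f for f in friends if f < pivot]
    sb, cb = quick_sort(better)
    sw, cw = quick_sort(worse)
    # dividing costs ~n comparisons; merging is a simple concatenation
    return sb + equal + sw, cb + cw + len(friends)

friends = random.sample(range(10_000), 212)
_, c_sel = selection_sort(friends)
_, c_quick = quick_sort(friends)
print(c_sel, c_quick)   # 22,366 (= 211*212/2) vs roughly 1,600 on average
```

Running it a few times shows the quick sort count fluctuating with the random pivots while staying far below the fixed quadratic count of the selection sort.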
Using a selection sort for that would take millions of years… Wait a minute, you did assume we had two "equally smaller problems"… Yes I did. It can be shown that, by picking the pivot randomly, the average complexity is nearly the one we wrote. We talk about average complexity. But in the worst case, the complexity is not as good. In fact, the worst-case complexity of the quick sort is the same as for the selection sort: it is quadratic. That's why I prefer the merge sort, which, by making sure the two subproblems are equally small, guarantees that the complexity is always about $n \log n$, even in the worst case. The reason why I wanted to talk about the quick sort is to show the importance of randomness in the algorithm. If we used the first element as pivot, then sorting an already-sorted list would be the worst case. Yet, that case will probably happen quite often… Randomness makes it possible to obtain a good complexity even in that case. Read more on randomness with my article on probabilistic algorithms!
P = NC?
The problem $P=NC$ is definitely a less-known problem, but I actually find it just as crucial as the $P=NP$ problem (see my article on P=NP). Obviously, if both were proven right, then $P=NC=NP$, which would mean that complicated problems could be solved… almost instantaneously. I really like the $P=NC$ problem as it highlights an important new way of thinking that needs to be applied in every field: parallelization. What do you mean by parallelization? I know about parallel lines… The term parallel here is actually opposed to sequential. It is also known as distributed. For instance, suppose you want to compare the prices of milk, lettuce and beer in the shops of New York City. You could go into each store, check the price and get out, but, as you probably know, NYC is a big city and this could take you months, if not years.
What Brian Lehrer did on WNYC was apply the concept of parallelization to the problem of comparing prices in NYC shops: he asked his listeners to check the prices as he opened a web page to gather the information. In a matter of days the problem was solved. Check out the obtained map. Hey, that sounds quite like divide and conquer… It surely does! The concept of divide and conquer was already extremely powerful in the case of a sequential algorithm, so I'll let you imagine how well it performs in the case of parallelized algorithms. You still haven't explained what a parallelized algorithm is… You're right! In a parallelized algorithm, you can run subproblems on different computers simultaneously. It's kind of like task sharing in a company… Except that there are no problems of egos with computers! Parallelization is a very important concept as more and more computers enable it. As a matter of fact, your computer (or even your phone) probably has a dual-core or a quad-core processor, which means that algorithms can be parallelized into 2 or 4 subproblems that run simultaneously. But that's just the tip of the iceberg! Many algorithms now run in cloud computing, which means that applications are running on several servers at the same time. The number of subproblems you can run is now simply enormous, and it will keep growing. Parallelization is now a very important way of thinking, because we now have the tools to actually do it. And to do it well. So I guess that NC stands for parallelized algorithms… Almost. $NC$, named "Nick's class" after Nick Pippenger, is the set of decision problems that can be solved using a polynomial number of processors in polylogarithmic time, that is, with fewer than $(\log n)^k$ steps, where $k$ is a constant and $n$ is the size of the input. That means that parallelization would enable us to solve NC problems very, very quickly. In a matter of seconds if not less, even for extremely large inputs. And what does the "P" stand for?
$P$ stands for Polynomial. It's the set of decision problems that can be solved in a polynomial time with a sequential algorithm. I won't be dwelling too much on these definitions; you can read my future article on $P=NP$ for better definitions. As any parallelized problem can be sequenced by solving parallelized subproblems sequentially, it can easily be proved that any $NC$ problem is a $P$ problem. The big question is proving whether a $P$ problem is necessarily an $NC$ problem or not. If $P=NC$, then any problem that we can solve with a single machine in reasonable time could be solved almost instantaneously with cloud computing. Applications in plenty of fields would be extraordinary. But if $P \neq NC$, which is, according to Wikipedia, what scientists seem to suspect, that means that some problems are intrinsically not parallelizable. Do we have the concept of P-completeness to prove P=NC? Yes we do, just like we have the concept of NP-completeness to prove $P=NP$! There are a few problems that have been proved to be P-complete, that is, problems that are at least as hard as any other $P$ problem. If one of them is proven to be in $NC$, then every other $P$ problem will be in $NC$. Proving that a P-complete problem is in $NC$ would therefore settle the question $P=NC$. One of these problems is the decision problem associated with the very classical linear programming problem. This problem is particularly interesting because it has a lot of applications (and there would be an awful lot more if it could be parallelized!). Read my article on linear programming to learn about it! So I guess, once again, scientists suspect that $P \neq NC$ because they have not succeeded in parallelizing linear programming… I guess so… Still, a few sequentially polynomial problems have been parallelized and can now be solved much more quickly. For instance, let's get back to sort algorithms.
In the two divide and conquer algorithms we have described, subproblems are generated and can easily be parallelized. The number of subproblems cannot be more than the size of the list; it is therefore polynomial in the size of the list at any time. In the case of nearly equal subproblems, the number of iterations is about the logarithm of the size of the list. Therefore these sort algorithms can be run on a polynomial number of processors in almost polylogarithmic time. The only problem is the time complexity of the merging phase for the merge sort, and of the dividing phase for the quick sort. However, those two difficulties have been overcome. In 1988, Richard Cole found a smart way to parallelize the merging phase of the merge sort. In 1991, David Powers used implicit partitions to parallelize the dividing phase of the quick sort. In both cases, this led to a logarithmic complexity. With huge datacenters, Google can now sort web pages in a matter of seconds. Impressive, right? I'm sure there is a lot more to say on parallelization (and definitely a lot more research to be done), especially in terms of protocols, which leads to problems of mechanism design, as each computer may act for its own purpose. I'm no expert in parallelization and, if you are, I encourage you to write an article on that topic.
Let's sum up
Computer science has already totally changed the world. For the new generation that includes myself, it's hard to imagine how people used to live without computers, how they wrote reports or sought information. Yet, this is only the beginning, and parallelization will change the world in a way I probably can't imagine. The advice I'd give you is to think with parallelization. These are powerful concepts that need to be applied in various fields, including in company strategies. In fact, I have applied divide and conquer by choosing the apparent lack of structure of the articles for Science4All.
Classical structures would have involved a tree structure with categories of articles and subcategories, probably by themes or fields. But I'm betting on a more horizontal structure, where the structure is created by the links between the articles, via related articles or via
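As a footnote to the sorting discussion above, the chunk-sort-merge flavor of parallelization can be sketched as follows (an illustrative snippet, not Cole's or Powers' algorithm: chunks are sorted concurrently, while the merge here stays sequential):

```python
# Sketch of parallelized divide and conquer for sorting: split the list into
# equally sized chunks, sort each chunk in its own worker, then merge the
# sorted runs. A true NC-style sort would also parallelize the merge.
import heapq
import random
from concurrent.futures import ThreadPoolExecutor

def parallel_sort(data, workers=4):
    chunk = max(1, len(data) // workers)
    pieces = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        runs = list(pool.map(sorted, pieces))   # sort chunks concurrently
    return list(heapq.merge(*runs))             # merge the sorted runs

data = random.sample(range(10_000), 1_000)
assert parallel_sort(data) == sorted(data)
```

In CPython the thread pool mostly illustrates the structure; swapping in a process pool (or real distributed workers) is what delivers the speedup the article describes.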
{"url":"http://www.science4all.org/article/divide-and-conquer/","timestamp":"2024-11-03T21:46:43Z","content_type":"text/html","content_length":"63129","record_id":"<urn:uuid:41f6af56-9477-498f-bcde-0adc306fa760>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00893.warc.gz"}
Where can I find experts to handle my linear programming tasks? | Linear Programming Assignment Help
Where can I find experts to handle my linear programming tasks? Are there expert generalists who have taught linear programming? Is anyone searching for generalists who know general mathematics or linear algebra to evaluate linear programming? Other than that, are you alone with this issue and a topic that is new to me? Is it too much of a conceptual question to ask? It seems like a lot, with great big questions like why linear algebra and linear programming problems can be solved in algebra. And indeed, many topics go too far. The question: where can I find an expert for linear programming? But now I understand the problem, so I don't care what answer you give. Moreover, when presented with an answer, only you can comment on whether it is true or not.
1. Your initial question about how to solve linear algebra is very vague when you answer it with one answer or two answers. You certainly didn't say it, so it doesn't help. Please, before getting more specific: does this question have a clear outline, or should I clarify / edit it? You can read my other comments on your version and find out my decision. Please email me your suggestions to comment on which answer you stated you had a high score to my best judgment. I don't know if this question has a focus on mathematics (at least if it turns out to be linear or algebraic), but it is very vague in its topic.
2. Your book, Chapter One, describes linear algebra and shows some of the various ways linear algebra can be solved by equation. The proof of this case is that both linear and quiver methods require the opposite but same concept of algorithm. Does there exist a simple way to solve linear algebra?
Where can I find information about algorithms that solve linear algebra? Where can I find experts to handle my linear programming tasks? I can suggest some quick and efficient tools that I could develop for my more complex linear programming tasks. But you aren't missing anything, so I encourage you to create a notebook/web-based library for your linear programming tasks. Let me know if you want to test anything and I'll let you know as soon as I have an idea of where you'll land my features. If you'd like to read my full tutorial and my short "one-to-one" learning style for linear programming:
2.1.1 My Tutorial
You will find two ways to learn about linear programming, from beginner to advanced, through these topics for those who are getting stuck:
Listing 1, by William C. Grossman, "Linear Programming", with David Grossman, New York: Fordham University Press, 1998.
6.1 Linear Programming Model
The model for linear programming is used by physicists and engineers as a stepping stone. "l" stands for length, and "a" means "an": the number of lines that are longer than the length of a line. Typically, researchers use short, one-line measurements to measure angles in real space. In my lab, the 2D planes described earlier are measured by just one (or two) lines of a cylinder. In this case, my theory works almost instantly as long as the measured angle (y) is still shorter than the length of a cylinder.
5.1 Linear Programming Model
The Model for Linear Programming is used by physicists and engineers as a stepping stone.
Where can I find experts to handle my linear programming tasks?
Hi all! I tried two different methods to find experts: implementing my programs using my professor's software source code (at irc.pci), and maintaining my hardware programming on my laptop. To interface my program with my software sources, I installed the .NET browser and looked for a .NET IDE. It didn't seem to be a good solution. So I tried using a .NET SDK for my program. This gave me: it's just a couple of hours old, and I'm not sure if it's a project that's much simpler to work with in the .NET framework and .NET components. To keep up with the progress. For me, .NET isn't exactly the full functional model, either. I noticed every time I try to understand how .NET works, it just doesn't know where I put my programs, since that requires the proper .NET framework's built-in library too: http://pastie.org/115572
For my code, it works fine, and I can't get any .NET modules to compile regardless. For the current project, I'm just looking forward to a few more hours of effort with a few more questions at the end:
What does "mvc4" mean when I put a program into my project's main directory? Does "mvc" mean anything except "mvc4"? What do you mean by "mvc4"? And how is "mvc4" used as such? The meaning of "mvc" is largely unclear, and the MSDN site is hard to work with. I'm not trying to make a .NET program, but to explain more deeply why I use it as a verb. I'm not really interested in trying to explain how things
{"url":"https://linearprogramminghelp.com/where-can-i-find-experts-to-handle-my-linear-programming-tasks","timestamp":"2024-11-13T09:01:08Z","content_type":"text/html","content_length":"115236","record_id":"<urn:uuid:b1cc2a12-0159-45d2-9bc7-6f1952a8e5d9>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00330.warc.gz"}
QPointF Class
The QPointF class defines a point in the plane using floating point precision. More...
Header: #include <QPointF>
qmake: QT += core
Note: All functions in this class are reentrant.
Public Functions
QPointF()
QPointF(qreal xpos, qreal ypos)
QPointF(const QPoint &point)
bool isNull() const
qreal manhattanLength() const
qreal & rx()
qreal & ry()
void setX(qreal x)
void setY(qreal y)
CGPoint toCGPoint() const
QPoint toPoint() const
QPointF transposed() const
qreal x() const
qreal y() const
QPointF & operator*=(qreal factor)
QPointF & operator+=(const QPointF &point)
QPointF & operator-=(const QPointF &point)
QPointF & operator/=(qreal divisor)
Static Public Members
qreal dotProduct(const QPointF &p1, const QPointF &p2)
QPointF fromCGPoint(CGPoint point)
Related Non-Members
bool operator!=(const QPointF &p1, const QPointF &p2)
const QPointF operator*(const QPointF &point, qreal factor)
const QPointF operator*(qreal factor, const QPointF &point)
const QPointF operator+(const QPointF &p1, const QPointF &p2)
const QPointF operator+(const QPointF &point)
const QPointF operator-(const QPointF &p1, const QPointF &p2)
const QPointF operator-(const QPointF &point)
const QPointF operator/(const QPointF &point, qreal divisor)
QDataStream & operator<<(QDataStream &stream, const QPointF &point)
bool operator==(const QPointF &p1, const QPointF &p2)
QDataStream & operator>>(QDataStream &stream, QPointF &point)
Detailed Description
A point is specified by an x coordinate and a y coordinate, which can be accessed using the x() and y() functions. The coordinates of the point are specified using floating point numbers for accuracy. The isNull() function returns true if both x and y are set to 0.0. The coordinates can be set (or altered) using the setX() and setY() functions, or alternatively the rx() and ry() functions, which return references to the coordinates (allowing direct manipulation).
Given a point p, the following statements are all equivalent:
QPointF p;
p.setX(p.x() + 1.0);
p += QPointF(1.0, 0.0);
A QPointF object can also be used as a vector: addition and subtraction are defined as for vectors (each component is added separately). A QPointF object can also be divided or multiplied by an int or a qreal. In addition, the QPointF class provides a constructor converting a QPoint object into a QPointF object, and a corresponding toPoint() function which returns a QPoint copy of this point. Finally, QPointF objects can be streamed as well as compared.
See also QPoint and QPolygonF.
Member Function Documentation
QPointF::QPointF(qreal xpos, qreal ypos)
Constructs a point with the given coordinates (xpos, ypos).
See also setX() and setY().
QPointF::QPointF(const QPoint &point)
Constructs a copy of the given point.
See also toPoint().
QPointF::QPointF()
Constructs a null point, i.e. with coordinates (0.0, 0.0).
See also isNull().
[static] qreal QPointF::dotProduct(const QPointF &p1, const QPointF &p2)
QPointF p( 3.1, 7.1);
QPointF q(-1.0, 4.1);
qreal dotProduct = QPointF::dotProduct(p, q); // dotProduct becomes 26.01
Returns the dot product of p1 and p2.
This function was introduced in Qt 5.1.
[static] QPointF QPointF::fromCGPoint(CGPoint point)
Creates a QPointF from CGPoint point.
This function was introduced in Qt 5.8.
See also toCGPoint().
bool QPointF::isNull() const
Returns true if both the x and y coordinates are set to 0.0 (ignoring the sign); otherwise returns false.
qreal QPointF::manhattanLength() const
Returns the sum of the absolute values of x() and y(), traditionally known as the "Manhattan length" of the vector from the origin to the point.
This function was introduced in Qt 4.6.
See also QPoint::manhattanLength().
qreal &QPointF::rx()
Returns a reference to the x coordinate of this point.
Using a reference makes it possible to directly manipulate x. For example:
QPointF p(1.1, 2.5);
p.rx()--; // p becomes (0.1, 2.5)
See also x() and setX().
qreal &QPointF::ry() Returns a reference to the y coordinate of this point. Using a reference makes it possible to directly manipulate y. For example: QPointF p(1.1, 2.5); p.ry()++; // p becomes (1.1, 3.5) See also y() and setY(). void QPointF::setX(qreal x) Sets the x coordinate of this point to the given x coordinate. See also x() and setY(). void QPointF::setY(qreal y) Sets the y coordinate of this point to the given y coordinate. See also y() and setX(). CGPoint QPointF::toCGPoint() const Creates a CGPoint from a QPointF. This function was introduced in Qt 5.8. See also fromCGPoint(). QPoint QPointF::toPoint() const Rounds the coordinates of this point to the nearest integer, and returns a QPoint object with the rounded coordinates. See also QPointF(). QPointF QPointF::transposed() const Returns a point with x and y coordinates exchanged: QPointF{1.0, 2.0}.transposed() // {2.0, 1.0} This function was introduced in Qt 5.14. See also x(), y(), setX(), and setY(). qreal QPointF::x() const Returns the x coordinate of this point. See also setX() and rx(). qreal QPointF::y() const Returns the y coordinate of this point. See also setY() and ry(). QPointF &QPointF::operator*=(qreal factor) Multiplies this point's coordinates by the given factor, and returns a reference to this point. For example: QPointF p(-1.1, 4.1); p *= 2.5; // p becomes (-2.75, 10.25) See also operator/=(). QPointF &QPointF::operator+=(const QPointF &point) Adds the given point to this point and returns a reference to this point. For example: QPointF p( 3.1, 7.1); QPointF q(-1.0, 4.1); p += q; // p becomes (2.1, 11.2) See also operator-=(). QPointF &QPointF::operator-=(const QPointF &point) Subtracts the given point from this point and returns a reference to this point. For example: QPointF p( 3.1, 7.1); QPointF q(-1.0, 4.1); p -= q; // p becomes (4.1, 3.0) See also operator+=(). 
QPointF &QPointF::operator/=(qreal divisor)
Divides both x and y by the given divisor, and returns a reference to this point. For example:
QPointF p(-2.75, 10.25);
p /= 2.5; // p becomes (-1.1, 4.1)
See also operator*=().
Related Non-Members
bool operator!=(const QPointF &p1, const QPointF &p2)
Returns true if p1 is sufficiently different from p2; otherwise returns false.
Warning: This function does not check for strict inequality; instead, it uses a fuzzy comparison to compare the points' coordinates.
See also qFuzzyCompare.
const QPointF operator*(const QPointF &point, qreal factor)
Returns a copy of the given point, multiplied by the given factor.
See also QPointF::operator*=().
const QPointF operator*(qreal factor, const QPointF &point)
This is an overloaded function.
Returns a copy of the given point, multiplied by the given factor.
const QPointF operator+(const QPointF &p1, const QPointF &p2)
Returns a QPointF object that is the sum of the given points, p1 and p2; each component is added separately.
See also QPointF::operator+=().
const QPointF operator+(const QPointF &point)
Returns point unmodified.
This function was introduced in Qt 5.0.
const QPointF operator-(const QPointF &p1, const QPointF &p2)
Returns a QPointF object that is formed by subtracting p2 from p1; each component is subtracted separately.
See also QPointF::operator-=().
const QPointF operator-(const QPointF &point)
This is an overloaded function.
Returns a QPointF object that is formed by changing the sign of both components of the given point. Equivalent to QPointF(0,0) - point.
const QPointF operator/(const QPointF &point, qreal divisor)
Returns the QPointF object formed by dividing both components of the given point by the given divisor.
See also QPointF::operator/=().
QDataStream &operator<<(QDataStream &stream, const QPointF &point)
Writes the given point to the given stream and returns a reference to the stream.
See also Serializing Qt Data Types.
bool operator==(const QPointF &p1, const QPointF &p2)
Returns true if p1 is approximately equal to p2; otherwise returns false.
Warning: This function does not check for strict equality; instead, it uses a fuzzy comparison to compare the points' coordinates. See also qFuzzyCompare. QDataStream &operator>>(QDataStream &stream, QPointF &point) Reads a point from the given stream into the given point and returns a reference to the stream. See also Serializing Qt Data Types.
A python library for time-series smoothing and outlier detection in a vectorized way.

tsmoothie computes, in a fast and efficient way, the smoothing of single or multiple time-series. The smoothing techniques available are:

• Exponential Smoothing
• Convolutional Smoothing with various window types (constant, hanning, hamming, bartlett, blackman)
• Spectral Smoothing with Fourier Transform
• Polynomial Smoothing
• Spline Smoothing of various kinds (linear, cubic, natural cubic)
• Gaussian Smoothing
• Binner Smoothing
• LOWESS
• Seasonal Decompose Smoothing of various kinds (convolution, lowess, natural cubic spline)
• Kalman Smoothing with customizable components (level, trend, seasonality, long seasonality)

tsmoothie provides the calculation of intervals as a result of the smoothing process. This can be useful to identify outliers and anomalies in time-series. Depending on the smoothing method used, the interval types available are:

• sigma intervals
• confidence intervals
• prediction intervals
• kalman intervals

tsmoothie can carry out a sliding smoothing approach to simulate online usage. This is possible by splitting the time-series into equal-sized pieces and smoothing them independently. As always, this functionality is implemented in a vectorized way through the WindowWrapper class.

tsmoothie can operate time-series bootstrap through the BootstrappingWrapper class. The supported bootstrap algorithms are:

• non-overlapping block bootstrap
• moving block bootstrap
• circular block bootstrap
• stationary bootstrap

pip install tsmoothie

The module depends only on NumPy, SciPy and simdkalman. Python 3.6 or above is supported.

Usage: smoothing

Below are a couple of examples of how tsmoothie works. Full examples are available in the notebooks folder.
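For intuition, the first technique in the list (exponential smoothing) can be sketched in a few lines of plain Python. This is a generic illustration of the idea, not tsmoothie's ExponentialSmoother API:

```python
def exponential_smooth(series, alpha=0.3):
    """Classic exponential smoothing: each output value is a weighted
    blend of the new observation and the previous smoothed value."""
    smoothed = [series[0]]
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

data = [10, 12, 9, 14, 30, 13, 11]   # the 30 is a spike
print(exponential_smooth(data))      # the spike is damped toward its neighbours
```

Smaller `alpha` gives heavier smoothing (more weight on history); `alpha=1` reproduces the input unchanged.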
# import libraries
import numpy as np
import matplotlib.pyplot as plt
from tsmoothie.utils_func import sim_randomwalk
from tsmoothie.smoother import LowessSmoother

# generate 3 randomwalks of length 200
data = sim_randomwalk(n_series=3, timesteps=200, process_noise=10, measure_noise=30)

# operate smoothing
smoother = LowessSmoother(smooth_fraction=0.1, iterations=1)
smoother.smooth(data)

# generate intervals
low, up = smoother.get_intervals('prediction_interval')

# plot the smoothed timeseries with intervals
for i in range(3):
    plt.plot(smoother.smooth_data[i], linewidth=3, color='blue')
    plt.plot(smoother.data[i], '.k')
    plt.title(f"timeseries {i+1}")
    plt.xlabel('time')
    plt.fill_between(range(len(smoother.data[i])), low[i], up[i], alpha=0.3)

# import libraries
import numpy as np
import matplotlib.pyplot as plt
from tsmoothie.utils_func import sim_seasonal_data
from tsmoothie.smoother import DecomposeSmoother

# generate 3 periodic timeseries of length 300
data = sim_seasonal_data(n_series=3, timesteps=300, freq=24, measure_noise=30)

# operate smoothing
smoother = DecomposeSmoother(smooth_type='lowess', periods=24)
smoother.smooth(data)

# generate intervals
low, up = smoother.get_intervals('sigma_interval')

# plot the smoothed timeseries with intervals
for i in range(3):
    plt.plot(smoother.smooth_data[i], linewidth=3, color='blue')
    plt.plot(smoother.data[i], '.k')
    plt.title(f"timeseries {i+1}")
    plt.xlabel('time')
    plt.fill_between(range(len(smoother.data[i])), low[i], up[i], alpha=0.3)

Usage: bootstrap

# import libraries
import numpy as np
import matplotlib.pyplot as plt
from tsmoothie.utils_func import sim_seasonal_data
from tsmoothie.smoother import ConvolutionSmoother
from tsmoothie.bootstrap import BootstrappingWrapper

# generate a periodic timeseries of length 300
data = sim_seasonal_data(n_series=1, timesteps=300, freq=24, measure_noise=15)

# operate bootstrap
bts = BootstrappingWrapper(ConvolutionSmoother(window_len=8, window_type='ones'),
                           bootstrap_type='mbb', block_length=24)
bts_samples = bts.sample(data, n_samples=100)

# plot the bootstrapped timeseries
plt.plot(bts_samples.T, alpha=0.3, c='orange')
plt.plot(data[0], c='blue', linewidth=2)

• Polynomial, Spline, Gaussian and Binner smoothing are carried out by building a regression on custom basis expansions. These implementations are based on the amazing intuitions of Matthew Drury available here
• Time Series Modelling with Unobserved Components, Matteo M. Pelagatti
• Bootstrap Methods in Time Series Analysis, Fanny Bergström, Stockholms universitet
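The moving block bootstrap ('mbb') used above can be sketched generically in pure Python. This illustrates the resampling idea only, not BootstrappingWrapper's internals:

```python
import random

def moving_block_bootstrap(series, block_length, rng=random):
    """One MBB resample: concatenate randomly chosen overlapping blocks
    of the original series until the resample is as long as the input,
    then trim to the original length."""
    n = len(series)
    starts = list(range(n - block_length + 1))  # every valid block start
    out = []
    while len(out) < n:
        s = rng.choice(starts)
        out.extend(series[s:s + block_length])
    return out[:n]

sample = moving_block_bootstrap(list(range(10)), block_length=3)
print(sample)  # same length as the input, built from contiguous blocks
```

Because blocks are kept contiguous, short-range autocorrelation in the series is preserved inside each block, which is the point of block bootstraps over naive i.i.d. resampling.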
Basic Mathematics Quiz 2

MCQs about Basic Business and Applied Mathematics for the preparation of exams related to CA, CIMA, ICMAP, and MBA. The MCQs cover many business-related fields (such as Business Administration, Commerce, and chartered accountancy-related institutes) in which the subject of Business Mathematics is taught. Let us start with the Basic Mathematics Quiz. This quiz covers topics related to Business and Applied Mathematics such as selling price, revenue, cost, profit, retail price, marked price, rates, ratio, and basic arithmetic.

1. (Cost price – Loss) is equal to
2. (Cost Price – Selling Price) is equal to
3. If SP is the selling price and CP is the cost price, we get a loss when
4. (Profit + Cost price) is equal to
5. During a sale, a shop offers a discount of 8% on the marked price. If the marked price is $5500, then the purchase price of an oven should be
6. The price at which a particular item is purchased by a shopkeeper is known as
7. Sara and Ali earned a profit of $500,000 from a business and their ratio of investment was 5:8, respectively. The profit of each should be
8. The marked price of a fan is £850; it is sold for £800. The percentage discount allowed is
9. A phone was purchased for £4000 and sold for £4800. The profit percentage should be
10. A trader sold a television for $1500. The price at which he should sell it to get a profit of 20% is
11. If the selling price of an item is greater than its cost price, then we earn
12. The annual income of a person is £530,000 and the exempted amount is £280,000. The income tax payable at the rate of 0.75% would be
13. (Discount ⁄ MP) $\times$ 100 is equal to
14. (Profit ⁄ Cost Price) $\times$ 100 is equal to
15. If the sales price is 672 and the profit is 5%, then the cost price should be
16. (Selling Price – Cost Price) is called
17. Marked price – sales price is equal to
18.
If the capital of partners is invested for the same length of time, the partnership is said to be
19. A deduction that is offered on the MP or the list price of items by the seller to the purchaser is called
20. (Loss ⁄ Cost Price) $\times$ 100 is equal to

Visit for MCQs about Basic Mathematics MCQs Statistics Quiz with Answers
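Most of the numeric items above reduce to a handful of formulas. A quick check in plain Python, using the figures from the questions themselves:

```python
# Profit % = (SP - CP) / CP * 100: phone bought for 4000, sold for 4800
cp, sp = 4000, 4800
print((sp - cp) / cp * 100)               # 20.0

# Discount % = (MP - SP) / MP * 100: fan marked 850, sold for 800
print(round((850 - 800) / 850 * 100, 2))  # 5.88

# Purchase price after an 8% discount on a marked price of 5500
marked = 5500
print(marked - marked * 8 / 100)          # 5060.0

# Profit of 500000 split in the investment ratio 5:8
print(round(500000 * 5 / 13, 2))          # 192307.69 (Sara's share)

# Income tax at 0.75% on the taxable amount 530000 - 280000
print((530000 - 280000) * 0.75 / 100)     # 1875.0
```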
Spacetime Geometry and General Relativity by Neil Lambert Publisher: King's College London 2011 Number of pages: 48 This course is meant as an introduction to what is widely considered to be the most beautiful and imaginative physical theory ever devised: General Relativity. It is assumed that you have a reasonable knowledge of Special Relativity as well as tensors. Download or read it online for free here: Download link (360KB, PDF) Similar books Mass and Angular Momentum in General Relativity, J.L. Jaramillo, E. Gourgoulhon (arXiv). We present an introduction to mass and angular momentum in General Relativity, after briefly reviewing energy-momentum for matter fields, first in the flat Minkowski case (Special Relativity) and then in curved spacetimes with or without symmetries. General Relativity Without Calculus, Jose Natario (Springer). This book was written as a guide for a one-week course aimed at exceptional students in their final years of secondary education. The course was intended to provide a quick but nontrivial introduction to Einstein's general theory of relativity. Beyond Partial Differential Equations: A course on linear and quasi-linear abstract hyperbolic evolution equations, Horst R. Beyer (arXiv). This course introduces the use of semigroup methods in the solution of linear and nonlinear (quasi-linear) hyperbolic partial differential equations, with particular application to wave equations and Hermitian hyperbolic systems. Schwarzschild and Kerr Solutions of Einstein's Field Equation: an introduction, Christian Heinicke, Friedrich W. Hehl (arXiv). Starting from Newton's gravitational theory, we give a general introduction into the spherically symmetric solution of Einstein's vacuum field equation, the Schwarzschild solution, and into one specific stationary solution, the Kerr solution.
In-Network View Synthesis for Interactive Multiview Video Systems Pascal Frossard, Laura Toni, Xue Zhang, Yao Zhao Multiview applications let end users freely navigate within 3D scenes with minimal delay. A real feeling of scene navigation is enabled by transmitting multiple high-quality camera views, which can be used to synthesize addition ...
We analyze the well-posedness of certain field-only boundary integral equations (BIEs) for frequency-domain electromagnetic scattering from perfectly conducting spheres. Starting from the observations that (1) the three components of the scattered electric field E^scat(x) and (2) the scalar quantity E^scat(x) \cdot x are radiative solutions of the Helmholtz equation, we see that novel boundary integral equation formulations of electromagnetic scattering from perfectly conducting obstacles can be derived using Green's identities applied to the aforementioned quantities and the boundary conditions on the surface of the scatterer. The unknowns of these formulations are the normal derivatives of the three components of the scattered electric field and the normal component of the scattered electric field on the surface of the scatterer; these formulations are therefore referred to as field-only BIEs. In this paper we use the combined field methodology of Burton and Miller within the field-only BIE approach, and we derive new boundary integral formulations that feature only Helmholtz boundary integral operators, which we subsequently show to be well posed for all positive frequencies in the case of spherical scatterers. Relying on the spectral properties of Helmholtz boundary integral operators in spherical geometries, we show that the combined field-only boundary integral operators are diagonalizable in the case of spherical geometries and that their eigenvalues are nonzero for all frequencies. Furthermore, we show that for spherical geometries one of the field-only integral formulations considered in this paper exhibits eigenvalues clustering at one, a property similar to second-kind integral equations. Keywords: electromagnetic scattering; integral equations; spherical harmonics.
Anna University - Electromagnetic Theory (EMT) - Question Bank - All Units

Electromagnetic Theory

Part A
1. State the divergence theorem.
2. State Stokes' theorem.
3. What is the del operator? How is it used in the divergence, curl and gradient?
4. Define the vector product of two vectors.
5. Write down expressions for x, y, z in terms of the spherical co-ordinates r, θ and φ.
6. Write down the expression for the differential volume element in terms of spherical co-ordinates.
7. What is the divergence of the curl of a vector?
8. Write expressions for differential length in cylindrical and spherical co-ordinates.
9. Find the divergence of F = xy ax + yx ay + zx az.
10. Define a vector and its value in Cartesian co-ordinate axes.
11. Verify that the vectors A = 4ax − 2ay + 2az and B = −6ax + 3ay − 3az are parallel to each other.
12. List out the sources of electromagnetic fields.
13. When is a vector field solenoidal and irrotational?

Part B
14. (i) State and prove the divergence theorem. (ii) For a vector field A, show explicitly that ∇·(∇×A) = 0; that is, the divergence of the curl of any vector field is zero.
15. (i) State and prove Stokes' theorem. (ii) Show that the vector H = (y² − z² + 3yz − 2x) ax + (3xz + 2xy) ay + (3xy − 2xz + 2z) az is both irrotational and solenoidal.
16. Using the divergence theorem, evaluate ∫∫ E·ds where E = 4xz ax − y² ay + yz az, over the cube bounded by x = 0, x = 1, y = 0, y = 1, z = 0, z = 1.
17. What are the different co-ordinate systems used to represent field vectors? Discuss them in brief.
18. (i) Given A = 5ax and B = 4ax + t ay, find t such that the angle between A and B is 45°. (ii) Using the divergence theorem, evaluate ∫∫ A·ds where A = 2xy ax + y² ay + 4yz az and S is the surface of the cube bounded by x = 0, x = 1; y = 0, y = 1; and z = 0, z = 1.
19. (i) Determine the divergence and curl of the vector A = x ax + y ay + z az. (ii) Determine the gradient of the scalar field A = 25 r sin φ, defined in the cylindrical co-ordinate system, at P(√2, π/2, 5).
20.
Given point P(−2, 6, 3) and vector A = y ax + (x + z) ay, evaluate A at P in the Cartesian, cylindrical and spherical systems.

Part A
1. State Coulomb's law.
2. State Gauss's law.
3. Define dipole moment.
4. Define electric flux and flux density.
5. Define electric field intensity (electric field).
6. What is a point charge?
7. Write the Poisson and Laplace equations.
8. Define potential and potential difference.
9. Give the relationship between potential gradient and electric field.
10. Define current density.
11. State the point form of Ohm's law.
12. Define polarization.
13. Express the value of capacitance for a coaxial cable.
14. What is meant by displacement current?
15. State the boundary conditions at the interface between two perfect dielectrics.
16. Write down the expressions for the capacitance between (a) two parallel plates, (b) two coaxial cylinders.
17. Calculate the capacitance of a parallel plate capacitor having an electrode area of 100 cm². The distance between the electrodes is 3 mm, the dielectric used has a relative permittivity of 3.6, and the applied potential is 80 V. Also compute the charge on the plates.
18. An infinite line charge charged uniformly with a line charge density of 20 nC/m is located along the z-axis. Find E at (6, 8, 3) m.

Part B
19. (i) Derive an expression for the electric field due to an infinite line charge from first principles. (ii) Derive the boundary conditions at the charged interface of two dielectric media.
20. Find the electric field intensity due to a co-axial cable with an inner conductor of surface charge density ρs C/m² and an outer conductor of −ρs C/m².
21. What is a dipole? Derive the expressions for the potential and electric field intensity due to a dipole.
22. (i) Compare and explain conduction current and displacement current. (ii) A circular disc of radius 'a' metres is charged uniformly with a charge density ρs C/m². Find the electric field at a point 'h' metres from the disc along its axis.
23.
A circular disc of 10 cm radius is charged uniformly with a total charge of 10^-6 C. Find the electric field intensity at a point 30 cm away from the disc along its axis.
24. (i) Derive the expression for the electric field intensity due to a circular surface charge. (ii) Two parallel plates with equal and opposite uniform surface charge density have an area of 2 m² and a separation of 2.5 mm in free space. A steady potential of 200 V is applied across the capacitor formed. If a dielectric of width 1 mm is inserted into this arrangement, what is the new capacitance if the dielectric is a perfect non-conductor?
25. (i) State and prove Gauss's law. (ii) Derive an expression for the energy density in electrostatic fields.
26. (i) Derive the Poisson and Laplace equations. (ii) Three concentrated charges of 0.25 µC are located at the vertices of an equilateral triangle of 10 cm side. Find the magnitude and direction of the force on one charge due to the other two charges.
27. (i) Using Laplace's equation, find the potential V between two concentric circular cylinders, if the potential on the inner cylinder of radius 0.1 cm is 0 V and that on the outer cylinder of radius 1 cm is 100 V. (ii) A point charge of 5 nC is located at (−3, 4, 0) while the line y = 1, z = 1 carries a uniform charge of 2 nC/m. If V = 0 V at O(0, 0, 0), find V at A(5, 0, 1).

Part A
1. State Ampere's circuital law.
2. State the Biot-Savart law.
3. State the Lorentz law of force.
4. Define magnetic scalar potential.
5. Write down the general, integral and point forms of Ampere's law.
6. What is the field due to a toroid and a solenoid?
7. Define magnetic flux density.
8. Write down the magnetic boundary conditions.
9. Give the force on a current element.
10. Define magnetic moment.
11. Give the torque on a solenoid.
12. State Gauss's law for the magnetic field.
13. Define magnetic dipole.
14. Define magnetization.
15. Define magnetic susceptibility.
16. What are the different types of magnetic materials?
17.
What is the inductance per unit length of a long solenoid of N turns and length L metres? Assume that it carries a current of I amps.
18. A parallel plate capacitor with a plate area of 5 cm² and plate separation of 3 mm has a voltage 50 sin 10³t V applied to its plates. Calculate the displacement current assuming ε = 2ε₀.

Part B
19. (i) Derive an expression for the force between two current-carrying wires. Assume that the currents are in the same direction. (ii) State and explain the Biot-Savart law.
20. Obtain an expression for the magnetic field around a long straight wire using the magnetic vector potential.
21. (i) Obtain expressions for the magnetic flux density and field intensity due to a finite long current-carrying conductor. (ii) Give a brief note on magnetic materials.
22. Derive the expression for the magnetic field intensity on the axis of a solenoid at (a) the centre and (b) the end point of the solenoid.
23. (i) State and explain Ampere's circuital law. (ii) State and prove the boundary conditions for the magnetic field.
24. Derive expressions for the inductance of a solenoid and a toroid.
25. Derive an expression for the inductance per metre length of two transmission lines.
26. Obtain the expression for the energy stored in a magnetic field and also derive an expression for the magnetic energy density.
27. (i) Derive an expression for the self-inductance of a co-axial cable of inner radius a and outer radius b. (ii) A circular loop located on x² + y² = 9, z = 0 carries a direct current of 10 A along aφ. Determine H at (0, 0, 4) and (0, 0, −4).
28. An air coaxial transmission line has a solid inner conductor of radius 'a' and a very thin outer conductor of inner radius 'b'. Determine the inductance per unit length of the line.

Part A
1. State Faraday's law of electromagnetic induction.
2. Define self-inductance.
3. Define mutual inductance.
4. Define coupling coefficient.
5. Define reluctance.
6. Give the expression for the lifting force of an electromagnet.
7.
Give the expression for the inductance of a solenoid.
8. Give the expression for the inductance of a toroid.
9. What is the energy density in a magnetic field?
10. Define permeance.
11. Distinguish between a solenoid and a toroid.
12. Write down the general, integral and point forms of Faraday's law.
13. Distinguish between transformer emf and motional emf.
14. Compare the energy stored in an inductor and in a capacitor.
15. State Lenz's law.
16. Define magnetic flux.
17. Write Maxwell's equations from Ampere's law in both integral and point forms.
18. Write Maxwell's equations from Faraday's law in both integral and point forms.
19. Write Maxwell's equations for free space in point form.
20. Write Maxwell's equations for free space in integral form.
21. Determine the force per unit length between two long parallel wires separated by 5 cm in air and carrying currents of 40 A in the same direction.

Part B
22. (i) State and explain Faraday's law. (ii) Compare field theory and circuit theory.
23. Develop an expression for the induced emf of a Faraday disc generator.
24. Derive and explain Maxwell's equations for free space in integral and point forms.
25. Derive Maxwell's equations from Faraday's law and Gauss's law and explain them.
26. Derive Maxwell's equations in phasor differential form.
27. Derive Maxwell's equations in phasor integral form.
28. Derive and explain Maxwell's equations in point form and integral form using Ampere's circuital law and Faraday's law.

Part A
1. Define a wave.
2. Mention the properties of a uniform plane wave.
3. Define intrinsic impedance (characteristic impedance).
4. Calculate the characteristic impedance of free space.
5. Define propagation constant.
6. Define skin depth.
7. Define polarization.
8. Define linear polarization.
9. Define elliptical polarization.
10. Define the Poynting vector.
11. What is the complex Poynting vector?
12. State the Slepian vector.
13. State the Poynting theorem.
14. State Snell's law.
15. What is the Brewster angle?
16.
Define surface impedance.
17. Write the wave equation in a conducting medium.
18. Compute the reflection and transmission coefficients of an electric field wave travelling in air and incident normally on a boundary between air and a dielectric having a relative permittivity of 4.
19. Calculate the depth of penetration in copper at 10 MHz, given that the conductivity of copper is 5.8 × 10⁷ S/m and its permeability is 1.26 µH/m.

Part B
1. (i) Obtain the electromagnetic wave equation for free space in terms of the electric field. (ii) Derive an expression for the Poynting vector.
2. (i) Obtain the electromagnetic wave equation for free space in terms of the magnetic field. (ii) Calculate the intrinsic impedance, the propagation constant and the wave velocity for a conducting medium in which σ = 58 MS/m and µr = 1, at a frequency f = 100 MHz.
3. (i) Derive the expression for the characteristic impedance from first principles. (ii) Show that the intrinsic impedance of free space is 120π Ω. Derive the necessary equation.
4. (i) Explain wave propagation in a good dielectric with the necessary equations. (ii) Define depth of penetration and derive its expression.
5. (i) Derive the expressions for the input impedance and standing wave ratio of a transmission line. (ii) Find the skin depth at a frequency of 1.6 MHz in aluminium, with σ = 38.2 MS/m and µr = 1.
6. (i) State and prove the Poynting theorem. (ii) Define surface impedance and derive its expression.
7. Define the Brewster angle and derive its expression. Also define the loss tangent of a medium.
8. Determine the reflection coefficient for oblique incidence on a perfect dielectric for parallel polarization.
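For the penetration-depth questions (copper at 10 MHz, aluminium at 1.6 MHz), the working reduces to the skin-depth formula δ = 1/√(π f µ σ). A quick numeric check in plain Python, taking µr = 1 for both metals as the questions state:

```python
import math

def skin_depth(f, sigma, mu_r=1.0):
    """Skin depth δ = 1/sqrt(pi * f * mu * sigma) for a good conductor."""
    mu0 = 4e-7 * math.pi  # permeability of free space, H/m
    return 1.0 / math.sqrt(math.pi * f * mu_r * mu0 * sigma)

# Copper at 10 MHz (σ = 5.8e7 S/m): about 21 µm
print(skin_depth(10e6, 5.8e7))
# Aluminium at 1.6 MHz (σ = 38.2e6 S/m): about 64 µm
print(skin_depth(1.6e6, 38.2e6))
```

The micrometre-scale results explain why RF conductors are often silver- or copper-plated: only a thin surface layer carries the current.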
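The vector identity asked for in Part B above (∇·(∇×A) = 0 for any vector field) can also be spot-checked numerically with central finite differences. This is a generic sketch with an arbitrary test field, not a proof:

```python
def curl(F, p, h=1e-3):
    """Numerical curl of vector field F at point p via central differences."""
    def d(i, j):  # ∂F_i/∂x_j
        q1 = list(p); q1[j] += h
        q2 = list(p); q2[j] -= h
        return (F(q1)[i] - F(q2)[i]) / (2 * h)
    return [d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1)]

def div(F, p, h=1e-3):
    """Numerical divergence of vector field F at point p."""
    total = 0.0
    for i in range(3):
        q1 = list(p); q1[i] += h
        q2 = list(p); q2[i] -= h
        total += (F(q1)[i] - F(q2)[i]) / (2 * h)
    return total

def A(p):
    # arbitrary smooth test field A = (xy, yz, zx)
    x, y, z = p
    return [x * y, y * z, z * x]

curlA = lambda p: curl(A, p)
print(div(curlA, [1.0, 2.0, 3.0]))  # ≈ 0, up to floating-point noise
```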
CHAPTER ONE

1.1 INTRODUCTION

Generally, differential equations arise from a wide range of practical science and engineering problems. Any equation connecting a variable x, some function of x, f(x), and certain derivatives of f(x) is called a differential equation. Differential equations are classified into two types, namely:
i. Ordinary Differential Equations (O.D.E.) and
ii. Partial Differential Equations (P.D.E.).
A differential equation containing an independent variable x and total derivatives only is called an ordinary differential equation. If a differential equation contains partial derivatives, it is called a partial differential equation (P.D.E.). A differential equation is said to be linear if the unknown function and its derivatives appear with power one; note that products of them are not allowed. Differential equations are well recognized in engineering, economics and other disciplines because of their great usefulness: they establish a deterministic relation between continuously varying quantities (expressed as functions) and their rates of change in space or time (expressed as derivatives). Mathematically, differential equations are studied from several perspectives, mostly concerned with their solutions. Among these are the analytical methods, which put emphasis on qualitative analysis of systems described by differential equations, and the numerical methods, which determine solutions to a given degree of accuracy. Differential equations form a very important field of study and are treated widely in pure and applied mathematics, engineering, and physics, because all of these disciplines are concerned with the many properties of differential equations of various types.
Existence and uniqueness of solutions are mainly considered in pure mathematics, while applied mathematics deals with the proper justification of methods for approximating solutions. Differential equations are also applicable to real-life problems; in such cases the solutions may be approximated using numerical methods, because the problems may not be directly solvable. Many important laws in science, such as those of physics and chemistry, can be expressed as differential equations. Another importance of differential equations is their usefulness in modelling the behaviour of complex systems in biology and economics. The theory of differential equations is well developed, and the methods used to solve them vary significantly with the type of equation. Linear systems are best studied and solved by the use of matrices, of which, however, only modest knowledge will be needed here. Matrix is the Latin word for womb, and it retains that sense in English: generally, a matrix means any place in which something is formed or produced. In mathematics, a matrix is a rectangular array of numbers, symbols, or expressions arranged in rows and columns. The items in a matrix are called its elements or entries. Matrices of the same size can be added or subtracted. The horizontal and vertical lines in a matrix are called rows and columns, respectively. To specify the size of a matrix, a matrix with m rows and n columns is called an m-by-n (m×n) matrix, with m and n its dimensions.

1.2 STATEMENT OF PROBLEM

The solution of non-homogeneous linear differential equations is rarely treated in detail, and for this reason many students see it as a difficult aspect of mathematics. This research work is focused on solving non-homogeneous linear third order ordinary differential equations using the matrix method.
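The conversion this project studies can be illustrated concretely: a third-order linear ODE y''' + a₂y'' + a₁y' + a₀y = g(t) becomes a first-order system x' = Ax + b with a companion matrix A and state vector x = (y, y', y''). A generic sketch in plain Python, with illustrative coefficients:

```python
def companion_matrix(a0, a1, a2):
    """Companion matrix A for y''' + a2*y'' + a1*y' + a0*y = g(t),
    acting on the state vector x = (y, y', y'')."""
    return [
        [0.0,  1.0,  0.0],
        [0.0,  0.0,  1.0],
        [-a0, -a1, -a2],
    ]

def derivative(A, x, g):
    """Right-hand side x' = A x + (0, 0, g), where g is the
    non-homogeneous forcing term at the current time."""
    b = [0.0, 0.0, g]
    return [sum(A[i][j] * x[j] for j in range(3)) + b[i] for i in range(3)]

# Example: y''' + 2y'' + 3y' + 4y = 5 with y = 1, y' = 0, y'' = 0
A = companion_matrix(4.0, 3.0, 2.0)
print(derivative(A, [1.0, 0.0, 0.0], 5.0))  # [0.0, 0.0, 1.0]
```

Once in this form, standard matrix techniques (eigenvalue analysis, matrix exponentials, or numerical one-step methods) apply directly to the system.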
1.3 AIMS OF THE STUDY

The aims of this research work are as follows:
· To show the procedure for applying the matrix method to the solution of non-homogeneous linear third order differential equations.
· To solve problems involving linear third order ordinary differential equations using the matrix method.
· To explain how differential equations can serve as a problem-solving tool in real-life situations.

1.4 OBJECTIVES OF THE STUDY

This research work was undertaken in order to provide an understanding of non-homogeneous linear third order differential equations and to give a method for solving them. The objectives of this study are:
· To solve applied problems that involve linear third order differential equations.
· To find the solution of a third order linear differential equation.

1.5 LIMITATION AND SCOPE OF STUDY

This work is focused on solving non-homogeneous linear third order differential equations using the matrix method. Conversion of a differential equation into matrix form and application of the method of interest to solving the equations shall be considered. The limitations encountered in carrying out this research work were the lack of power supply and of materials in terms of textbooks.

1.6 LAYOUT OF THE WORK

This research work does not cover in detail everything about the solution of non-homogeneous linear third order ordinary differential equations, but it will serve as background knowledge for anyone who is willing to go further in the subject of study.
Are we made of math? Is math real? [This is a transcript of the video embedded below.] There’s a lot of mathematics in physics, as you have undoubtedly noticed. But what’s the difference between the math that we use to describe nature and nature itself? Is there any difference? Or could it be that they’re just the same thing, that everything *is* math? That’s what we’ll talk about today. I noticed in the comments to my earlier video about complex numbers that many people said oh, numbers are not real. But of course numbers are real. Here’s why. You probably think I am “real”. Why? Because the hypothesis that I am a human being standing in front of a green screen trying to remember that the “h” in “human” isn’t silent explains your observations. And it explains your observations better than any other hypothesis, for example, that I’m computer generated, in which case I’d probably be better looking, or that I’m a hallucination, in which case your subconscious speaks German und das macht irgendwie keinen Sinn, oder? We use the same notion of “reality” in physics, that something is real because it’s a good explanation for our observations. I am not trying to tell you that this is The Right Way to define reality, it’s just, for all I can tell, how we use the word. We can’t actually see elementary particles, like the Higgs-boson, with our own eyes. We say they are real because certain mathematical structures that we have come up with describe our observations. Same thing with gravitational waves, or black holes, or the particle spin. And numbers are just like that. Of course we don’t see numbers as objects walking around, but as attributes of objects, like the spin that is a property of certain particles, not a thing in and of itself. If you see three apples, three describes what you see, therefore it’s real.
Again, if that is not a notion of reality you want to use, that’s totally okay, but then I challenge you to come up with a different notion that is consistent and agrees with how most people actually use the word. Interestingly enough, not all numbers are real. The example I just gave was for integers. But if you look at all numbers with infinitely many digits after the decimal point, we don’t actually need all those digits to describe observations, because we cannot measure anything with infinite accuracy. In reality we only ever need a finite number of digits. Now, all these numbers with infinitely many digits are called the real numbers. Which means, odd as it may sound, we don’t know whether the real numbers are, erm, real. But of course physics is more difficult than just numbers. For all we currently know, everything in the universe is made of 25 particles, held together by four fundamental forces: gravity, the electromagnetic force, and the strong and weak nuclear force. Those particles and their forces can be mathematically described by Einstein’s Theory of General Relativity and Quantum Field Theory, theories which have been remarkably successful in explaining what we observe. As far as the science is concerned, I’d say that’s it. But people often ask me things like “what is space-time?” “what is a particle?” And I don’t know what to do with questions like this. Space-time is a mathematical structure that we use in our theories. This mathematical structure is defined by its properties. Space-time is a differentiable manifold with Lorentzian signature, it has a distance measure, it has curvature, and so on. It’s a math thing. We call it “real” because it correctly describes our observations. It’s a similar story for the particles. A particle is a vector in a Hilbert space that transforms under certain irreducible representations of the Poincare group. That’s the best answer we have to the question what a particle is.
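The transcript's claim that observations never require infinitely many digits can be made concrete with a small numerical check (an illustration added here, with an assumed error bar, not part of the video):

```python
from decimal import Decimal
import math

# Pretend we "measure" a ratio whose ideal value is pi, with a
# measurement uncertainty of 1e-6.
measured = math.pi
error_bar = 1e-6

# A six-decimal-digit number already lies within the error bar, so
# the infinite tail of digits is never observationally needed.
finite = float(Decimal(measured).quantize(Decimal("0.000001")))
print(finite, abs(finite - measured) < error_bar)   # 3.141593 True
```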
Again we call those particles “real” because they correctly describe what we observe. So when physicists say that space-time is real or the Higgs-boson is real, they mean that a certain mathematical structure correctly describes observations. But many people seem to find this unsatisfactory. Now that may partly be because they’re looking for a simple answer and there just isn’t one. But I think there’s another reason, it’s that they intuitively think there must be something more to space-time and matter, something that distinguishes the math from the physics. Something that makes the math real or, as Stephen Hawking put it, “breathes fire into the equations”. But those mathematical structures in our theories already describe all our observations. This means just going by the evidence, you don’t need anything more. It’s therefore possible that reality actually is math, that there is no distinction between them. This idea is not in conflict with any observation. The origin of this idea goes all the way back to Plato, which is why it’s often called Platonism, though Plato thought that the ideal mathematical forms are somehow beyond human recognition. The idea has more recently been given a modern formulation by Max Tegmark who called it the Mathematical Universe Hypothesis. Tegmark’s hypothesis is actually more, shall we say, grandiose. He doesn’t just claim that actually reality is math but that all math is real. Not just the math that we use in the theories that describe our observations, but all of it. The exponential function, Mandelbrot sets, the number 18, they’re all as real as you and I. If you believe Tegmark. But should you believe Tegmark? Well, as we have seen earlier, the justification we have for calling some mathematical structures real is that they describe what we observe. This means we have no rationale for talking about the reality of mathematics that does not describe what we observe, therefore the mathematical universe hypothesis isn’t scientific.
This is generally the case for all types of the multiverse. The physicists who believe in this argue that unobservable universes are real because they are in their math. But just because you have math for something doesn’t mean it’s real. You can just assume it’s real, but this is unnecessary to describe what we observe and therefore unscientific. Let me be clear that this doesn’t mean it’s wrong. It isn’t wrong to say the exponential function exists, or there are infinitely many other universes that we can’t see. It’s just that this is a belief-based statement, not supported by evidence. What’s wrong is to claim that science says so. Then what about the question whether we are made of math? Well, you can’t falsify this hypothesis. Suppose you had an observation that you can’t describe by math, it could always be that you just haven’t found the right math. So the idea that we’re made of math is also not wrong but unscientific. You can believe it if you want. There’s no evidence for or against it. I want to finish by saying I am not doing these videos to convince you to share my opinion. I just want to introduce you to some topics that I think are thought-stimulating, and give you a starting point, in the hope it will give you something interesting to think about. 317 comments: 1. mm... if we observe three of "something" then that 3-something most or many people consider to be real. And maybe also this "three" connected to and taken apart from the "something" many or some might consider to be real. Two questions pop up: what about the number 3 not connected to "something", just as an abstract (platonic) entity? Do most persons consider this abstract number 3 to be real or not? Some might, some might not, it seems to me a philosophical issue. Secondly, the mind notoriously almost always puts everything in three dimensions of space and one dimension of time.
That is fine with Newton’s Laws, but we run into difficulties (and in my case headache) if we try to model General Relativity and Quantum Theory in our limited minds or brains. That creates a kind of havoc with the common sense definition that something is considered real because of our observations. These observations become indirect and reality becomes like a veiled bride. And this veil is heavily shrouded in math.... Maybe we need another definition of "reality" not so strongly connected to our observations... These are just some thoughts, just freewheeling. 1. 3 is a nice number for juggling. Siteswap 3 is like a regular braid, or a standing wave traveling in time. And in space, the situation is ideal. 3 is just large enough to be interesting and small enough to comprehend. For juggling in general, the time dimension crashes through combinatorics. The spatial dimension crashes through graph theory. Though knots prefer 3 dimensions, juggling is not just braid theory, which might be considered a 3rd perspective after time and space. Thesis, antithesis, and synthesis; 3 elements of philosophy like a dreamspace of string 2. This comment has been removed by the author. 3. I think one of the messages of this video is to explain, like Feynman also did, that to avoid your headache you must not try to map the “real” of science onto the “real” in your mind. Yes, I understand that this is very unsatisfactory, but this is the best that science can do at the moment. I do not think that math is shrouding things. I think it is the opposite. I think the math is indicating that the underlying mechanism is a logical mechanism. I avoid my headache with that. 2. An excellent video! You had me laughing out loud (too loud!) with your German-speaking-subconscious hallucination comments. 1. Hi Terry, I Google-translated her remark in German: 'and that kind of makes no sense, does it?'
Given my dreams, that is something I would do, if I haven't already done so then forgotten, given how absolutely nonsensical my dream life can be. The language would not have previously existed, though. 2. Hi C Thompson, I want to tell you, and you math geeks, about a repetitive dream that I used to have. I’d be walking in a park and come across a bench with an open book lying face down on it. I’d pick the book up and turn it over to see what was written inside. But I’d always wake up as I was turning it over before I could read it. The dream persisted for a long time, and I learned to recognize it as a dream and would try to force myself to remain asleep so that I could read the book. One day it worked!!! I turned the book over and it was full of maths equations that I couldn’t understand. I never had the dream again. P.S. How are you doing? 3. Hi Jonathan, I'm good, just relaxing at Mum's place with my cat. I'm enjoying being out in the countryside and eating and sleeping as I please, talking with Mum and having my usual strange sleep salad dreams. I hope you're keeping cool. :) 4. C Thompson, wow, thanks! I did not realize that Google provided voice translations. In retrospect that makes perfect sense given how good their voice recognition is. I do use the very cool text-image-translation feature they provide, since for multi-lingual Zoom meetings it lets me use my phone to read a comment without noodling around with the computer image. That one is arguably a vastly more difficult conversion task than voice translation! Mr. Jonathan Camp, your story reminded me of two dreams. The first one was back in college when I was taking several math classes. I was in my room sleeping badly, and in my dream (and in real life) I was cold because I had kicked the covers off. In the dream my cover had become a matrix, and if I solved the matrix-blanket problem I would get warm again. I solved it multiple times, and got very frustrated when I nonetheless stayed cold.
The other was when I was in a hospital and, shall we say, not in very good shape (I have myasthenia gravis, nicely controlled now). The combination of drugs I was getting induced incredibly detailed, brightly colored hallucinations, and in one of them I was researching my own condition, looking for insights in a large book that might help my treatment. The book was filled with vivid, excruciatingly detailed script, and as I looked at it I realized “All this text is just nonsense being generated by my brain. I will never find any answers here because my brain doesn’t contain the information I need for my own treatment, so even if I could read this it has no value.” One result of that period of time was bafflement on my part: Why would anyone want to be in a state of mind where everything they did was unreal and disconnected from reality? My illness-forced experience of this world of psychedelic hallucinations was uniformly unpleasant and distressing. I like reality, it’s a gift! 5. I just used the text translator and copy-pasted. I have it permanently set to German for Dr. Hossenfelder's and some of her followers' German writing. My German is mostly from a semester's worth in 8th grade, I didn't learn much more. I know how to pronounce the words but I can't ask where the toilets are, so Dr. Hossenfelder etc. are light-years ahead of me there. I do want to memorise Dr. H's phrase and casually toss it out in conversation. I've a friend who has had vivid hallucinations involving complex geometry and mystic imagery, and he's drawn amazingly detailed artwork inspired by these visions. Also at Jonathan Camp: If anyone is interested in dreams in general, one of my favourite sites is the World Dream Bank, which focuses on dreams and dream-based art but is also an almost life-long record of the bloke who runs it.
Be aware though that there's much potentially-offensive and not-safe-for-work material, but there are categories like Space, Mathematics and Science that play with ideas in bizarre and entertaining ways. 3. Hi Sabine, this is a great video. As you rightly note, a large portion of the mystery is circumscribed by definition: how do we define what it means to be real? You define 'real' to mean 'that which is a good explanation for our observations' because this seems to you how we use the word in everyday language. I'm going to take your challenge to come up with another definition that is consistent and agrees with how most people use the word. I think it is a better definition than the one you are using. That which is 'real' is 'non-deceptive in its appearance and requires no further explanation to match it with our observations no matter how much it is analysed.' Here are some examples of how this better matches common usage: * Most people say that dreams are unreal while waking life is reality. * Most people say that illusions are unreal while something that is not an illusion and non-deceptive is real. * Most people say that psychotic hallucinations are unreal while those perceptions of neuro-typical people are to be regarded as real. * Most people say that the hairs that one sees because of cataracts of the eye are unreal because they require further explanation... i.e., the hairs are unreal. * Most people say that a rope that is mistaken for a snake on a moonlit night is unreal even though someone walking along might jump in fright after initially seeing it. I hope you can see that my preferred definition better meets these examples? With this updated def. I would say that none of the Higgs boson, gravitational waves, black holes, or particle spin are real. They emphatically do meet our observations, and perhaps they are our best explanations, but they are still deceptive and require further elaboration. In this same way, math is not real.
Even the math that best explains our observations or that which we use to predict the future behavior of physical systems is unreal. 4. Math is not real. It is the reverse, math is derived from reality by humans. There is only our-math. Reality is more universal. Very dangerous going the other way so are those doomed s-theorists 5. Very interesting and convincing. I like to say that math is just thinking, thinking is math, but I may have to expand that a bit. Platonism always struck me as going too far. Take the old fox-goose-rice example, where you have to cross a river with them in a boat which will only carry you plus one of them. I consider that a math problem, to be solved by considering all the cases and finding the one that works. Under Platonism, it seems to me all those possible cases, right and wrong, would be part of the Platonic universe, which seems somewhat silly to me. That is, there is bad math and good math, and what doesn't work (i.e., bad) in one situation might work in another, so who decides what is and isn't in the Platonic universe? Whereas in your concept, math is part of our universe, and good and bad are determined by what works and doesn't work right here. I like that better. Neal Stephenson's novel "Anathem" is based on the Platonic Universe concept, though, and he makes it seem vaguely plausible. Enough to carry a good story, anyway. I consider it his best novel, better than "Snow Crash", better than "Cryptonomicon". 1. There is a confusion in Sabine’s video which I tried to explain on her Twitter post for this video. There is a difference between the platonic concept of maths, and how it’s used in physics or in your example. From a platonic perspective, there is a sense in which maths has a real nature beyond our use of it. Yes, there are areas of maths that don’t seem to be fundamental to nature, but the core areas including everything from pi to imaginary numbers are seen as discoveries. 
Yes they can be partial, such as Euclid, but mostly it’s an uncovering of something that is changeless and absolute. Sometimes physicists feel they “discover” a theory, but in reality they create a better way to describe and predict what nature does. It will always be an abstraction, not the thing itself. However platonic entities are the thing itself, and it’s the ways we describe and use them which is the abstraction. I’ve probably not explained this very well, but I do think this is an important distinction which is not clear in Sabine’s video. 6. Maybe math is not about numbers and structures but more about relations between such entities. This is what makes reality mathematical. 7. Alice in Wonderland and rabbit holes come to mind. Is there a reality beyond my perception ... I assume so? When I say an apple is red, I really mean I perceive it as red. I agree that mathematical descriptions can provide some incredibly accurate descriptions of reality and provide some interesting ways of thinking about this thing called reality. Are mathematics and reality one and the same? I will suspend belief for the moment. 8. “I see three apples!” What could be a more definitive example of the Tegmark idea that math is reality? Of course integers are real! A fruit farmer enters the room and informs you that the fruit on the left is a quince, not an apple. Are there still three apples? You get annoyed by such nitpicking (apple picking?) and decide to eliminate any confusion by building a mathematically formal apple recognition system. Instead, the situation gets worse. Aspects of recognizing apples that are easy for a brain whose ancestors’ lives depended on finding edible fruits prove complicated for a formal system. You find you need programs to recognize the concept of an “object” and sophisticated optical spectrum scanners to analyze its surface. Even then, the results are ambiguous. You give up and try counting simpler objects. Atoms! What could be simpler than atoms?
And indeed, you find that the criteria for defining entities with smaller total numbers of quantum-level properties are less complex than gigantic entities such as apples. Reality becomes more countable at the fermion level. Relief! But hold on: Your sensor equipment became truly massive to get that level of resolution of reality. Hmm. It’s almost as if there is a trade-off, one in which “simple” integer counting requires either a great deal of forgiveness and sloppiness in defining the object (e.g., apples) or a great deal of sensory perception augmentation to get close to more precise, quantum-defined numbers. That’s odd. Aren’t integers supposed to be the most straightforward and precise of all mathematical constructs? Why do they require so much equipment to perceive and define? Is more going on? John Wheeler once famously claimed that reality is made from bits [1], the computer science basis of integers. Here’s what Julian Barbour says to that [2]: “Wheeler’s thesis mistakes abstraction for reality. Try eating a 1 that stands for an apple. A ‘bit’ is merely part of the huge interconnected phenomenological world that we call the universe and interpret by science; it has no meaning separated from that complex.” So back to the question of Sabine’s excellent video: Is math real? The universe certainly has rules that enable incredibly complex constructs in the natural world. You are one such construct. If these rules are what you call math, math is as real as anything we can discern with our senses. But I would argue that a much better name for this particular set of rules is physics. Why are smoothness and limits in calculus so difficult to prove formally? It’s because both derive from the inability of quantum mechanics to support infinite information density. This limit seeps into our math almost by osmosis.
You cannot prove such properties in any meaningful way without first tipping your hat to the brutally unforgiving limits on detail imposed by quantum mechanics. Even more unsettling is this: In sharp contrast to the already-given rules of physics, the rules of mathematics depend inextricably on the levels of cognitive and information processing complexity seen in entities such as cells (amazing things, cells), humans, and computers. Thus, as demonstrated by the unexpected complexities of defining integers, math and cognition form an inextricably bound duality. Is math real? Maybe. But only if you first accept that math is a subset of both physics and cognition, one that is necessarily guided and limited by cognitive processes. We have a more humble and more resource-conscious name for this precise form of cognition in computer science. We call it programming. [1] Wheeler, A. Toward “It From Bit”. Quantum Coherence and Reality Conference, University of South Carolina, Columbia, December 10-2, 1992. [2] Barbour, J. Bit from It. FQXi Essay Contest 2010-2011 (2011). 1. There are FOUR apples! 2. Terry, I feel like I'm halfway between a physically unique entity and a bunch of traits that can be recognised and explained as a conglomerate of habits, Attention Deficit Hyperactive Disorder, bad life experiences glued together with Douglas Adams references, song lyric quotes, etc., and now I'm also attached to everything by maths rules. This is a comment to cogitate on. @Jonathan: There is no spoon! 3. "It's easy to eat apple pie but hard to eat pi apples!" (not by Groucho Marx) 4. :3 9. The problem is largely that mathematics is not an empirical subject. Contrary to Sesame Street and the old film “Donald Duck in Math Land,” mathematical objects do not lie around us for direct observation, detection or measurement. In the case of three apples, we use the number 3 to describe a quantity of objects that are categorically the same or similar.
We use this independent of whether the apple is a Fuji apple or a Golden Delicious. So there is something at work here with our ability to make such categorical assignments. Grothendieck developed étale cohomology, a form of cohomology of categories, to understand, at least within mathematics, how this happens. How this occurs with our assignment of mathematics to physical systems is not as clear. I think spacetime is built from entanglements. With quantum mechanics we have the issues of action per ħ and in statistical mechanics we have entropy per k. The action gives us a time derivative ∂S/∂t = iH, a form of the Hamilton-Jacobi equation, and similarly with entropy, ∂S/∂t = (∂S/∂β)(∂β/∂t) = C, which is complexity, and ∂β/∂t for black hole gravitation is a form of the geodesic deviation equation. Indeed, the Schrödinger equation is a form of geodesic deviation. Strominger worked a hydrodynamic approach to general relativity, and what is curious is the ratio of viscosity to entropy is η/S = 4π. The viscosity of spacetime η = s√(ρc^4/8πG) is a direct measure of the quantum entropy that forms space or spacetime by von Neumann S = -k Tr[ρ log(ρ)]. Here ρ in the viscosity equation is the vacuum energy density and in the second ρ means the density matrix ρ = |ψ〉〈ψ|. Of course, in doing this I shift the geometric meaning of spacetime from Riemannian geometry to the geometry of entanglements that involves Riemann in addition to Teichmüller and Mirzakhani. So, this still leaves us a missing ontological junction between the physical world and mathematics. 1. I find myself in rare but total disagreement with Dr. Crowell's statement that math is not empirical. The case I like to point to is Andrew Wiles' proof of Fermat's Last Theorem. It was based on an empirical conjecture of a correspondence between the characteristic numbers of two different fields of mathematics.
Proving that conjecture was true was the last link in a long chain leading to the proof, and it arose from empirical observation of the calculated characteristic numbers. I have found some others agreeing with my position, as follows: Greg Chaitin, co-founder of Kolmogorov-Chaitin Information Theory: “For years I’ve been arguing that information-theoretic incompleteness results inevitably push us in the direction of a quasi-empirical view of math, one in which math and physics are different, but maybe not as different as most people think. As Vladimir Arnold provocatively puts it, math and physics are the same, except that in math the experiments are a lot cheaper!” Tim Johnson, at the Magic, Maths and Money Blog: A more sophisticated misunderstanding relates to the way mathematics is conducted. The error originates in how mathematicians present their work, as starting with definitions and assumptions from which ever more complex theorems are deduced. This is the convention that Euclid established in his Elements of Geometry and led Kant to believe that synthetic a priori knowledge was possible. Euclid actually started with Pythagoras’ Theorem, and all the other geometric ‘rules’ that had emerged out of practice, and broke them into their constituent parts until he identified the elements of geometry. It was only having completed this analysis did he then reconstruct geometry in a systematic way in The Elements. Today the consensus within mathematics is that the discipline is analytic, from observations, not synthetic. Outside of mathematics there persists a belief in the power of pure deductive, synthetic a priori reasoning. John Von Neumann: “Mathematical ideas originate in empirics. But, once they are conceived, the subject begins to live a peculiar life of its own and is … governed by almost entirely aesthetical motivations.
In other words, at a great distance from its empirical source, or after much “abstract” inbreeding, a mathematical subject is in danger of degeneration. Whenever this stage is reached the only remedy seems to me to be the rejuvenating return to the source: the reinjection of more or less directly empirical ideas.” Scott Aaronson on a mathematical breakthrough in complexity theory: "It’s yet another example of something I’ve seen again and again in this business, how there’s no substitute for just playing around with a bunch of examples." 2. Jim V: I thought about mentioning this idea of mathematical empiricism that Chaitin has written about. Time and so forth caused me to not mention this. This is a type of empiricism, and with theorem proving assistant algorithms such as Coq this is becoming more prevalent. I think this is still qualitatively different from standard scientific empiricism. The measurement of the orbit of a planet to frame classical mechanics or the properties of an atom involves systems that are completely independent of anything we fabricate. What mathematics we employ in theoretical developments are what we impose, not what nature imposes. Mathematical empiricism involves the use of symbolic structures that are then cast as algorithms. Computers, computer languages etc are not something nature generates. There is of course the issue that Seth Lloyd brought to the fore. He makes the argument that the universe is a sort of computer. I tend to look at this a bit more guardedly. The Lie algebra of a gauge interaction has properties parallel to logical operations. This means that complex interactions can have some complex processing-like structure. However, for the most part this is just a random set of operations. The set of all possible coin tosses of length N has 2^N possible binary strings. Some of these could be binary codes you run on a computer. 
The set of elementary particles in the universe has a gauge group containing R roots, R = 2 for SU(2) and R = 6 for SU(3) (these have 1 and 2 weights filling out the dimension of these). There are then R^N possible symbol strings. In the observable universe N = 10^{80} or for those entering a black hole N ≈ 10^{60}, and so the set of possible strings means there are computing algorithms. If you enter a black hole you could sample all of these in principle. The SU(3) root system connects with the hexacode Golay code. Given though that these occur by randomness, it is not quite the same as a person framing a mathematical problem on a computer. To think these are equivalent might be compared to a sort of Platonism, and I am completely agnostic on that. As Garrison Keillor put it about “Guy Noir”: “One man on the 13th floor of the Atlas Building seeks answers to life’s perplexing questions. Guy Noir, Private Eye.” The quote is something like that. I see the relationship between physics and mathematics as something we may never know, and I suspect we can never know. It is something we can discuss in metaphysical conversations over scotch and cigars. 3. Papa57: It is not clear what bearing this has on the relationship between physics and mathematics. 4. @JimV "Euclid actually started with Pythagoras’ Theorem, and all the other geometric ‘rules’ that had emerged out of practice, and broke them into their constituent parts until he identified the elements of geometry. It was only having completed this analysis did he then reconstruct geometry in a systematic way in The Elements." Nice example. And exactly that happens during the development of thinking as described by (some good) physiologists (which also agrees with personal observations and contemplations). The first phase of reflexes, stimuli, is habitually skipped and later forgotten. When a child (some at least) first learns the placeholder (the name), e.g. Vadim, and from observation after others - "Who did this?
Peter did this." - learns to answer when asked, "Who did this?" with, "Vadim did this". While in effect it means such and such stimuli led to processing and to such and such a response. As when a parent says, "Do that," it is the parent who stimulates the reflex; the child processes and responds in some way. After a while the child starts to associate and identify with the placeholder and learns to mark it cumulatively as 'I', i.e. starts to skip the initiating stimuli (as if it were the initiator of the action from the beginning, identification with agency, or 'I') and takes the processual phase of the reflex (stimulus-process-response) to be its [= appropriating agency] will. This 'will' is the feeling of the processual phase of the reflex (while the brain computes output based on all input known to it). When the brain computes the expected output, it feels pleasant. When not, not pleasant, so it tries to reach some closure. So 'I' and 'free will' are born exactly out of forgetfulness about the first phase of the reflex, namely stimuli, and out of appropriating the initiation of action to oneself (and so pride and shame are born). As if it suddenly magically happened all-ready out of some miracle well (pun intended for Zen connoisseurs). Then come the ceaseless tractates about the 'greatness of The Will' and how it 'makes us humans so special'. While in effect it is just a habituated forgetfulness.

"I see the relationship between physics and mathematics as something we may never know, and I suspect we can never know." Never is a long time. As I understand it, Jonathan Gorard does some research on this subject and finds some interesting patterns from mapping math & physics from the perspective of that "proof space" (I have myself been thinking that 'insights' in that space may be represented by lightlike paths, and voila! That also riffs with your entanglement spaces & knowledge sharing). And he calls it...
metamathematics :-) But I wouldn't count it alongside philosophical musings on metaphysics; it's just another level of math abstraction.

5. Geometry is classified as mathematics. There are several areas of mathematics, and subjects such as number theory and algebra have little in the way of mental pictures, while geometry has mental imagery. Of course, we know there are connections between these two, where algebra often describes geometric constructions. The Langlands conjecture means that number theory may become as central to the geometry of physics as algebra is now. Space and spacetime are an oddity. If spacetime is an emergence from quantum entanglement, this is suggestive of space as a purely mental construction. As I outlined above, the perspective of spacetime as a hydrodynamic fluid defined by entropy of entanglement gives a ratio of this fluid's viscosity to entropy of η/s = 1/(4π). I also think that entanglement of states is conserved in spacetime for these states on paths, think of a sum over paths in a path integral, as geodesics on the spacetime. Proper time is then a parameterization of paths that conserve entanglement. Time and space are then emergent structures from entanglement. Does this mean that we should just dismiss space and spacetime as illusion? Whether space and spacetime are considered ontologically real may depend on your perspective on this. The Fermi and INTEGRAL spacecraft found that spacetime is smooth down to a scale below the Planck scale. This measurement was due to the simultaneous detection of a range of electromagnetic radiation from very distant (billions of light years) gamma-ray bursts. As such this is a choice of measurement with an extreme IR probe. An extreme UV probe may likely find spacetime very broken or foamy by contrast. This is not unlike a choice of measurement of a quantum state. In a duality between reality and locality, most measurements are set up to select realism of measurements without locality.
Of course, of late the opposite form of the Bell inequalities has been of interest, but for most work we choose locality. I would say much the same of space and time. An operational perspective would say that space, time, and spacetime are real, or "real enough" for all practical purposes. If these IR-probe measurements of EM radiation from distant sources continue to hold up, we gain some confidence that our standard ideas of space in point-set topology and calculus are reasonable models. The mathematics then has operational "truth-value," and mathematicians may not be errant in saying this geometry is real. Of course, we cannot prove this. The dual UV perspective is more discrete, or knot-topological in foam and the like. This dual perspective is operationally real as well. It is interesting how much of string theory and related physics has such close analogues in condensed matter physics with lattice and finite-element structures.

6. I don't think of spacetime as mere illusion but as a precise abstraction suitable for its purposes (for an emergence of a specific type of observer, which is currently implicit, and of a specific theory of observations and appropriate geometry). So "real enough" is fine with me. I enjoy your considerations about hydrodynamics but some areas seem to dissonate. If spacetime is emergent (which I think it is), then there is something funky going on with matter (I don't know the Higgs mechanism and am incompetent to think of spinors, didn't assimilate the math, but I am thinking of some kind of frequency difference which zaps in the dynamic field; I enjoyed Penrose's stress that in QM in the Hamiltonian instead of momentum we use the differential *operator*; but then he goes on to construct this very peculiar twistor object, which contains intrinsic and extrinsic attributes from the get-go while representing 'pure holomorphicity', then as a result gets rid of massless field equations in that formalism...
that's some witchcraft; unfortunately I cannot check the whole structure, but it's also surprising in similar ways). So eventually the observed spacetime behavior is smooth, but isn't that because of our choice of states and our computation of normalized states, while something in between *may* be lost? So regarding entanglement states, thinking proceeds as follows: it may be alright that entanglement is conserved for geodesics in spacetime. But is this necessarily the only way? I.e., may it also be conserved for some non-local processes (from the point of view of a specific measurement, i.e. knowledge intrinsic to the observer and in the measurement, so otherwise-entangled)? In that scenario, what the observer sees as non-local in spacetime happens in some projection space by definition (alert for hand-waving crap, but intuition is like that: some fiber bundle, with the image on fibers representing knowledge of the observer and the projection of that image representing emergent spacetime). Yet non-locality is only conditional for the type of the observer, as spacetime is apparent, provisional (meaning the whole 'observer vector space' from the bundle, not just a single image on it, not single human knowledge, but all knowledge of that kind). So entanglement is conserved, yet the state is not necessarily always observed as local, so not always on spacetime geodesics, but lives according to the entanglement-space laws. That of course might mean 'we will never know', and it may be hidden by computational irreducibility, as the effective expression of this phenomenon will be not 'how and what' to observe but 'where to look' in the first place, i.e. something termed scientific or mathematical intuition. And all working methods on paper will be expressed in constructive math (including a model of the *local* observer, i.e. local consciousness), yet there will be that 'hunch' for non-ergodic patterns which may stem from that spacetime-non-local part. Of course, that is not new.
In fact, it's exactly the principle of Brahman (universal knowing, or 'differential operator', or computation; btw, not unlike the Jewish basic principle). It emanates as a type of personal deity with the world (spacetime, an instance of that type; it's irrelevant here whether worlds are one or many; the base space of the fiber bundle), and as a type of personal soul with the body (local human consciousness, an instance of this type, or the image on the vector space of the fiber bundle). In fact, the definition of 'real' in the Upanishads is 'that which is unchanging among change'. Some tried to ontologize it into a thing to simplify matters (Terry's 'bits' analogy). But what is meant is more akin to the universal function of 'knowing'. Oh, boy... :-)

10. I read James Gleick's book on Chaos Theory as a young adult and it blew my mind in a way nothing else had; it was a major paradigm shift for me. (Now I have smaller ones once every few years.) That so many disparate structures and phenomena in the Universe can be described and modeled mathematically, including how the world functions on a mathematical level, in a way I was previously unaware of, was wondrous. All of the 'math things' that many of you describe here feel like they're on another layer of reality that I can sometimes see through the deep fog of my ignorance (which I am slowly dissipating), and I glimpse those same wonders. I think mathematics is real/exists insofar as it is something thought of, used and recorded. We're well aware, though, that a description of something may not correlate with reality as it manifests. *looks at String Theory and certain political ideologies, while thinking at least String Theory hasn't actually harmed anyone yet*. But then, I've photographed my bowl of miso soup in a restaurant because the settled solids and seaweed pieces reminded me of a Calabi-Yau manifold, so... :)

11. To me the question "Is math real?" is one of those pseudoscientific quandaries with no meaningful way to resolve it.
Some say that math would exist, I suppose in some form of Platonism (the theory that numbers and other abstract objects are objective, timeless entities, independent of the physical world and the symbols used to represent them), even if there were no intelligent life in the universe. Even if that were the case, what purpose would math serve? Could math alone create stars, planets, rocks, much less living, thinking beings?

12. Three things: A. Maths are human-made and do not actually exist in Nature. B. Aren't mathematicians employed by physicists to create the maths necessary to falsify (if I'm using the term correctly) their hypotheses? C. Are there any other animals in the world that demonstrate the use of maths in their daily lives?

1. Hi jonathan. All good things are... Regarding Point C (the other two points are too high for me) I think: in a certain way, all of them. Flagellates alone, which move towards a light source, apply "the knowledge of nature" about gradients in order to improve their chances of reproducing. Another example: cells in general apply the concept of inside and outside. And the "knowledge" about the validity of the efficiency of structures that follow the Fibonacci sequence is realized innumerable times in nature, too.

2. At least some species of mammals can count to a simple degree, but I don't know if that's the extent of their abilities.

13. Thank you for giving me something interesting to think about. That is why I'm here.

14. If one chooses to think of both math and science as human inventions, then it could be asked, "Is science or math more important?" It's hard to imagine we could have science, at least modern-day science, without the support of mathematics, and yet math has grown and evolved with every new discovery in science, making it even more useful in the study of science.
The title of a book published in 1951 by Scottish-born mathematician and science fiction writer Eric Temple Bell (1883-1960), "Mathematics: Queen and Servant of Science," seems to capture the essence of the dual importance of each.

15. It seems that the concepts of 'real' and 'existence' (implicitly linked with identification with the concept of an independent agent, i.e. 'I', and its inevitable derivative of 'free will') are what cause most issues in considerations of that sort. To be more precise, people make assumptions (or take some things for granted, so they just don't know that they took some initial assumptions) and make further propositions while staying in zeroth epistemology. "C'mon, everyone knows what we are talking about." While in fact 'reality' and 'existence' are words which may cover very broad maps of meaning and hide some unclear areas. So most of the time they cover unexplored areas taken for granted as obvious. The same relates to questions of the 'why' category (extending to 'what', i.e. the search for essence-of-things, beginning-end, grand-purpose, etc., like the mentioned 'yes, but what *is* space-time, *really*?'): the attempt to conceptually grasp what something *really is*. It may be alright as a thought-stimulating exercise or conversation (being aware of the process of abstracting, in order to examine some process in detail). The trouble is, we only make sense of things in relations, technically through a functional structure of relations: what relates to what and how (one may add "extensionally," but it would complicate matters unnecessarily). And a concept (and accordingly any definition, however good or bad it may be) does not cover the structure of relations. In that sense, one may separate empty concepts (illusions) from abstractions. The first category may be quite elaborate but eventually dissonates with the observed phenomena in some major ways (or is superseded by a more precise structure).
So fairy tales, myths, and religions as instruments to 'explain' life (and socio-engineer the behavior of a group of people) are the earliest examples of that sort of thinking. They also played the function of sharing knowledge in the group (so not only primitive superstitions or instruments of control; not so simple). The second category is scrupulously and painfully built from observations and elaborate developments in an attempt to capture the relational structure of the surrounding processes. So it represents the real, actual, relevant knowledge. Yes, the dichotomy is not so black and white: some phenomena that started as 'explainers' (superstitions, etc.), i.e. as empty concepts, will be examined, studied and find their way into knowledge, but through means of developing a proper relational structure (language), i.e. a condensing of abstractions; most others will be revoked (the majority do not hold to the principle of energy). In that sense, empty concepts are illusions, while abstractions are worked-out and written-down relations, which represent the best knowledge we have got so far. The difficulty for a beginning thinker is to distinguish one from the other (and current systems of education do not help in that): abstractions are really developed, like vehicles or airplanes, which required many bright minds, thousands of work-hours, etc., to condense knowledge. I.e. they are literally built; they represent something tangible. In that sense a precise design of an airplane is as 'real' (if not more so, as it potentially contains more degrees of freedom) as the implemented product itself. But that is due to the fact that it was literally developed out of the graveyard of unsuccessful prototype models (and not on the 'spirits of ancestors', as people in a cargo cult believe). So that developed knowledge of relations is 'real'. I like to think of math as the science of relations (yes, science).
But it's still puzzling that many (even) bright people still formulate those questions in such terms (of 'real' and 'existence') and keep the confusion going.

16. I’m in complete agreement Sabine, though it seems to me that you haven’t provided the curious with an answer to their question. If mathematics isn’t in itself “real” (contra Plato and Tegmark of course), then what shall we call it? I have a suggestion. Mathematics may be termed “real”… in the capacity of a human language. This is to say in the sense that French and English are also “real”. Math is a strange language, since while natural languages evolved into our species, math was recently invented with the advent of civilization. (If math had evolved into us then I think we’d automatically know the sorts of things that a pocket calculator tells us when we punch in the right keys.) It’s strange to me how rarely people in academia speak of mathematics as a language. This seems to open the door to all sorts of funky notions.

1. There is a range of interpretations of mathematics. Most mathematicians think there is some sort of objective aspect to mathematics that is outside human constructions. This is in some way Platonism, and there is no way to prove this. Brouwer advanced intuitionism, which says mathematics is the creation of the human mind. So mathematics is in that setting just a set of rules, similar to chess. There are animals that understand numbers. Corvid birds and parrots can count and even add and subtract. This means mathematics is trans-species. Is it something other intelligent life works out? If we find signals from some ETI out there with numerical or mathematical coding, then I think there is more universality. I have sympathies with the idea that mathematics has some universality to it that is beyond any mental processing. The problem is that I suspect we can never understand how this comes about.

2.
Thanks Lawrence. It may be that if we fully endorsed mathematics as a language construct, then this would also suggest how any universality might exist in mathematics. Try this: First observe that every statement in the language of mathematics may also be phrased in the language of English. Why? Because English is infinitely expandable and we tend to talk about the math we do. So any new mathematical symbol will also be given an English name so we might reference it orally. Second, does it matter that non-humans are sometimes able to develop symbolic representations from which to think and even communicate? I’m not sure this changes anything. A dog might know its name, or a crow might even count. Symbols may be tools for other animals just as they are for us. I’d expect advanced alien species to both name themselves and count, and not because names or numbers are beyond invented constructs, but rather because conscious forms of function should tend to have certain similar desires. So that might be a reasonable answer. In our language of mathematics there should be various tools that tend to be useful for other reasonably advanced forms of conscious function, and so if advanced enough they should also tend to develop the language that we call mathematics.

3. What's wrong with just calling it unreal and leaving it at that? Math isn't real. It requires human explanation and elaboration for there to be any awareness at all of it. See my definition above for 'real' and ask yourself whether it meets the expectations of worldly language.

4. Take a look above. I just wrote a post on the reality of geometry in light of how it may be emergent from quantum entanglements. There is a lot there I will not repeat here. I agree in one sense that from an operational perspective this is possible. I would say that if we received a message from some extraterrestrial intelligence (ETI), we may have some confidence that math is universal.
First off, they are using the electromagnetic field in communications technology. This means these beings are using Maxwell theory, which is represented mathematically. If they also encode things in some form of numerics, this also suggests a universality to mathematics. I would say then from an operational perspective that mathematics appears, not proven mind you, to have some possible universality. In this sense this may be good enough FAPP. BTW, when it comes to dogs, I am amazed how a creature that has such complex social behavior and considerable memory abilities is so hopelessly unmathematical. Dogs have almost no spatial reasoning. For evidence of that, just tie two or more dogs up outside and watch. Even one dog gets wound up and seems unable to just walk back the other winding direction. Dogs also display no numerical ability.

5. Manyoso, you can always define something to not be real and argue that this is done from a generally useful definition, though someone else might define it to be real and argue that this is done from a generally useful definition. From my position it seems productive to accept either definition and try to understand whether anything useful is being said in a given case. I’d say that most people here would call English “real” in the sense that it’s the language that we’re speaking right now. Furthermore, when I tell you “2 + 2 = 4”, it might be said that math is real in the same sense. Right?

6. Thanks Lawrence, that’s interesting. I’m not entirely sure if you’re agreeing or disagreeing with me, however. I’m saying that mathematics exists as a language that humans created. Furthermore, I’d expect any ETI that is advanced enough to also develop what we’d call the same language, as well as various other terms that may be translated into English. I propose that any universality that we might observe here should stem from our similar need/desire for such lingual tools.

7.
Philosopher Eric: I would say most physicists consider mathematics a sort of language, or a set of rules similar to those of chess. Though there are differences between math and games. I honestly tend to think mathematics is more than this, but I have no proof. Noam Chomsky developed transformational grammars that are math models of languages. These turn out to be applicable in computer science more than in understanding human language. I might conjecture that human language is some informal set of structures with some mathematical roots. When it comes to ETI, unfortunately I suspect the SETI effort will probably never find any hint of such. I suspect it is too rare and distant. Too bad in many ways if so.

8. I think I understand, Lawrence. Your advanced grasp of mathematics tells you that it’s far less arbitrary than something like the game of Chess. So you’d rather not say that math is only a language, but you have no proof of this. Well, note that an ETI shouldn’t need Chess to advance, though it should need mathematics. To me that seems like a plausible way to support your inclination. It seems to me that math should have been discovered rather than invented like one of our games. Still, I’m hesitant to call math anything more than a language even so. All sorts of platonic foolishness seem to result from that, such as the ideas of Tegmark. It sounds like we’re in agreement on the Fermi paradox. Consider my own answer to this riddle. Yes, there should be countless places out there that harbor robust ecosystems that produce advanced intelligences. There are two things which make me doubt that we’ll ever have any direct evidence of this, however. One is that advanced intelligences should tend to kill themselves off pretty fast in a geological sense once they become powerful enough. Apparently we’re the first such generation on Earth, and I’d expect more to come after us. The time of the next one should depend upon how much life we kill off beyond ourselves.
The net effect should be tiny blips of intelligent EM radiation that quickly degrade to nothing intelligible in space, separated by long periods of silence even here. Secondly, despite all the sci-fi fun we have regarding space exploration, I think we’re confined to this ecosystem just as the intelligent products of other ecosystems should be confined to theirs. Some realize how fragile biology is, and so imagine that our robots could become self-sustaining elsewhere. I doubt this, however. Even conscious robots should require the resources of this planet to become self-sustaining.

9. Strange: To me what Sabine said did answer the curious, and what you say is equivalent to what Sabine says.

10. I would say the difference between mathematics and games is that games are finite. Some years back the number of possible checkers games was computed. The number of possible chess games should also be finite. Most of mathematics is not closed in this manner, where induction and other methods involve a sort of infinitude. When it comes to humanity, I would almost say we are collectively obsessed with committing mass suicide. What is disappointing is that it is clear we are pretty badly screwing this up. In fact we almost could not do much worse. If one pauses for a moment and thinks about the big problems, pollution, energy, resource loss, and societal ones such as nuclear weapon proliferation, drug abuse and the like, through my lifetime I would say we have not solved anything. We have done some ameliorative actions and solved parts of these, but largely through my lifetime I can bear witness to the fact that we humans have not solved a single damned thing. Maybe other ETI are not so collectively insane. If we communicate with them, it is clear they would control nature and use resources. Whether they become addicted to this as we have is unknown.
I think one thing that will prevent ETI communications is that they are rare, and the closest one might be tens or hundreds of millions of light years away.

11. Lawrence, that sounds right to me. Consider my own assessment of why we seem so collectively fatalistic. It’s that personally feeling good each instant is what constitutes the value of existing for anything. I consider this the purpose which evolution uses to drive the conscious form of function. Before consciousness, life should have essentially functioned robotically and thus without purpose. While non-conscious life could deal reasonably well with “closed” environments (such as a checkers board), it needed a purpose-based element to advance further regarding the open-ended circumstances that were more standard. This purpose I speak of is sometimes referred to as “qualia”. The essential difference between us and other conscious forms of life, I think, is that language and hard science have made us extremely powerful before we could grasp sufficiently effective ways to use that power. Though I do suspect that we’ll kill ourselves off in the next 100,000 years or so, I also have some hope that we’ll get a reasonable way down that road. I suspect that our soft sciences will finally begin progressing and so help balance our amazing power with better understandings of our nature itself. If our nature gets reasonably figured out academically, I suspect that a true world government, along with continued technological advancement, could spare us for quite a while. I presume that other intelligent life out there exists under similar constraints.

17. Is math evidence?

1. This comment has been removed by the author.

2. Hi Steve, I was trying to come up with a cogent question yesterday and failed. What I was trying to ask was: what would maths be evidence of, and in what form?

18.
I saw an exchange between Tegmark and Massimo Pigliucci, who made the point that his idea was metaphysics rather than science, and Tegmark agreed. Tegmark does sometimes talk as though the mathematical universe were actually science.

19. This comment has been removed by the author.

1. I should learn to wait until I've fully thought out what I want to say before I push the send button.

20. In continuation of the topic of metaphysics, I looked up a well-directed excerpt from a reflection of the ingenious physiologist Ivan Sechenov, "Who is to elaborate on the problems of psychology, and how?", where he examines metaphysical developments in thinking (at the end of the 19th century) and which may be of interest: "But why does the metaphysical method of studying psychical phenomena lead to absurd deductions? Does the fallacy lie in the logical form of metaphysical reasoning, or only in the objects of investigation? We are already acquainted with the logical side of reasoning: it consists in comparing two objects (i.e., either two concrete forms, or the whole and one of its parts, or two parts of one and the same form, or two separate forms) and in their commensuration from the point of view of similarities, dissimilarities, causal relationships, etc. Besides, we can detect by intuition any, at least serious, fallacy in logical reasoning; in such cases we say: "the inference is illogical", "the reasoning is inconsistent", etc. Metaphysics, however, cannot be accused of inconsistency; otherwise its doctrines would not have held sway for such a long time. On the contrary, it is the consistency of metaphysical reasoning, along with the universality of the problems it undertakes to solve, that attracts most. Hence the error must lie in the objects of metaphysical investigation.
This circumstance is of extreme importance to us, because it convincingly shows that the real substrata of all psychical processes are invariable, no matter whether our reasoning is based on reality or on pure metaphysical abstractions. But what kind of error is contained in the objects of metaphysical investigation? When the metaphysician in his desire to obtain more profound knowledge ignores the world of real impressions (which for him are a kind of profanation of the essence of things by our sense organs), and turns of necessity to the world of ideas and concepts (since there is no other place to which he can retire), and does so with the conviction that that which is truly ideal, that is, the least real, is what really matters, he inevitably deals with abstractions; he forgets that these abstractions are fractions, i.e., conventional values, and, without a moment's hesitation, objectivises or transforms them into essences. I say, and I say it with deep conviction and without any exaggeration, that the metaphysician tries to prove that 1/2=1, 1/10=1, 1/20=1, etc. He does the very thing a mathematician would do if he were to take it into his head to isolate a mathematical point or an imaginary value without acknowledging their conventional character. What is more, conventional mathematical values even in their isolated form are still abstractions, while the ultimate objects of metaphysics, or its essences, are products of decomposition not of real impressions but of their verbal expressions. This is the second deadly sin of metaphysics, a striking example of which is confounding the name of an object, i.e. mere sounds, with the object itself, for instance, the name Peter with the man Peter; this lapse is rooted in the peculiarities of language and in the attitude of the human mind towards its elements."
So, in essence, when nerds don't have unis/labs/places to hang out and be at peace, they get pissed off by the people around them and start churning out heavy metaphysics... Moral? Better to keep nerds happy.

21. I wrote to Max Tegmark and asked him: “Bearing in mind that I am a poet whose work often reflects a fascination with science and not the other way 'round, does the following make any sense at all? Having just finished [your book] Our Mathematical Universe, I would say that the MUH [your Mathematical Universe Hypothesis] holds that our universe, while far more complex, is no more real than a triangle, and I do not mean a triangular physical object, nor three objects forming the vertices of a triangle, nor three objects forming the sides of a triangle, but a triangle.” He replied:

22. If the maths is real, do stars have consciousness?

1. I 'fess up: ArtPulseDynamics asked me that question as a joke after watching the video; I dared him to ask here. It was funny at the time.

23. Dear Sabine. This made me giggle. I think, reading the last paragraph, that even you started to doubt yourself.

24. It's quite natural to assume that a multiverse of all possible algorithms exists. It's doubtful whether a multiverse of all mathematical objects as we conventionally define them exists, simply because most of them require an infinite amount of information to be specified unambiguously. See e.g. this article: https://arxiv.org/abs/math/0411418 The reason why this is plausible is that we are algorithms ourselves. Your existence is due to the universe running, via your brain, whatever algorithm defines your identity. But the implementation doesn't matter. If someone were to run an exact simulation of your brain, then that would generate the exact same conscious experience that the real brain generates, provided the simulation is precise enough to capture the algorithm precisely.
We can then consider a thought experiment where not only the brain is simulated but also the rest of the body and the local environment, causing you to be conscious of the virtual environment generated by the computer instead of the real world. The simulation then doesn't have to run in real time. The only thing that matters is that the simulation has to actually run. But because the way the simulation is implemented doesn't matter, this means that the computer can render it in some scrambled form without that affecting the conscious experience. Since our own universe counts as a computation rendering our consciousness, it then follows that the scrambling due to applying a time evolution operator should be irrelevant. This then implies eternalism, because if we assume that the present moment exists, then applying a time evolution operator to the present moment maps it to future or past states that then exist in a scrambled form inside the present moment. Conscious experiences of past or future observers therefore exist. One can then speculate that quantum mechanics is an effective theory of a multiverse of algorithms. In the conventional formulation, a system on which we intend to perform a measurement should be described by a complete set of commuting observables. But one can argue that strictly speaking we only ever observe our own brain states, therefore there should exist a commuting set of observables for brain states, and this is then going to represent the algorithm implemented by the brain. Such a set of observables then defines a sector of the multiverse where a particular observer is present. One can then speculate that the quantum multiverse is not the real multiverse, but that the real multiverse is the set of all algorithms. Quantum mechanics then yields a local linear approximation of this multiverse. 1.
You see, there is a big difference between, "we are algorithms ourselves" (ontology, for both 'us', 'algorithms' and identity between them) and, "we can be represented by algorithms" (epistemology, if 'us' is taken weakly). In that perspective, we can talk about good enough emulation of local conscious experience (AGI). Yet, it may not be just about the skin-bag (not meaning anything extra-goo-goo like soul, but information shared or linked through the environment is enough), so we may discover (a conjecture) that the computational effort that is needed to cross that ultimate barrier is either more expensive than is available to us (or not worth it, as it's cheaper to work directly with bio-material already plugged-in into the environment) or computationally irreducible in the paradigm of computation (which may in itself turn out to be the best paradigm of all, of which I'm not so sure). So, if "it quacks like a duck", it only means that "FAPP of current human knowledge, it quacks like a duck". As we may never know that we may trigger some extinction event just by eradicating ducks, because they were transferring some wasp parasite, helping some bees to survive, then those bees pollinate some plant, and yada-yada-yada. So Nature may be running top speed already, we may only humbly ask for a ride. Some observations support this: I haven't checked the paper, but if you map 'the present moment' (whatever it is) to states, for the future you will need to select a finite basis and normalize beforehand what you want to know according to the energy available, so that's already losses. And the past may not be recoverable at all (WF collapse for entangled states). And in order to talk about past or future conscious observers you need a theory of the conscious observer (even a local one is enough, as any other may not be feasible).
In that sense, it's not clear what an algorithm in a theoretically unknown space (multiverse, entangled space, spinor network, twistor space, whatever) is, and it's doubtful that such a construct (if ontologically defined) may generalize well. The best that is currently done is groping for an elephant (attempts to recover QFT & GR) by its waste products. Or pray to the almighty Differential Operator. Infinities and singularities may be regarded by local observers (=limited computation) e.g. *as* asymptotically running functions (as well as real/transcendental numbers, etc.), which may be used with care in order to evaluate some limits, induce, negate unfruitful areas, etc. We don't necessarily need to "forget all about Cantor crap" and all switch to constructive math. Whatever works. All our math is hacking after all. Just some remember it, and some seem to forget it. In other words, to get an egg one needs a universe. I am suspecting that all such attempts to ontologize anything on the macro scale and replicate it will hit the wall of the second law. PS Yet, there is some new voodoo (which does not seem to be what it claims, namely to "evade the second law", maybe locally under specific conditions, etc., interesting if Sabine might find it interesting and cover it): 25. Galileo Galilei said that the language of the universe is written in mathematics. I would add that, like any other language, mathematics can be used to write fiction. 1. ... Which might be more interesting to mathematicians and some physicists than to anyone else, likely. 2. Although I shouldn't presume that because I don't actually know. 26. For me, any definition of reality must minimally include some reference to observations *and* reproducibility. Something along the lines of "that which can be observed and reproducibly verified by others". Numbers seem to fit that definition but numbers can't be observed independently of the thing you are observing, which makes them tricky.
Does something need to be observable independently to be real? Is that even possible? An analogous example is the colour red. Real or not? People can reproducibly observe three apples to be red, but where or what is the redness of a thing? You can describe those apples down to the fundamental particles and you won't find red. Just some photons with a particular (number!) range of wavelengths to which we attach the word red. Red is never independent of those photons, so is it real? So what about the three-ness of those apples? Like others have said above (and like the redness), three is just a property of the apples, like red. Three is no less real than red and also can't be observed independently. The difference is that three is far more widely applicable as a property. Does that make it more or less real than red? 1. Dear Plato, So if there is "nothing" (no dimensions, no space, no time, no "somethings" (which might create spacetime)), are there no abstract Platonic number threes? Can an abstract number 3 exist other than as a representation (for instance in a brain)? So no universe, no Platonic "space" either? 27. This comment has been removed by the author. 28. Hi Sabine, as I see it, physics needs math and a process of calculus for prediction, and concepts for humans to communicate and (most of the time) to believe they understand something about the presupposed "real thing". And existence is the only point that cannot be denied - hence a "reality". But physics from its beginning is about separating objects, variables, fields, observables, whatever is needed. So, after hundreds of years of separation physics is stuck in it, and the only unity that still appears in physics is the mathematical structures of the calculus (and theory).
So with respect to your question, I first understand that the unity of nature - which can be seen anywhere any time - cannot be gotten rid of with math. That is first level, but the question also relates to the most scary philosophical question, which is also the foundation of religions: why is there something rather than nothing? And it is answered with the preexistence of math. So it looks to me like math becomes another name for God - (maybe with logical power instead of magical in the eye of a scientist). I guess Tegmark is not a religious person. But should he write "Math"? 29. I tend to believe math was invented not discovered. Yet the famous Fibonacci sequence has captivated mathematicians, artists, designers, and scientists for centuries. Closely tied to the Golden Ratio, its ubiquity and astounding functionality in nature suggest its importance as a fundamental characteristic of the universe. Leonardo Fibonacci came up with the sequence when calculating the ideal expansion of rabbits over the course of one year. Today its emergent patterns and ratios (phi = 1.61803...) can be seen from the microscale to the macroscale, and right through to biological systems and inanimate objects. While the Golden Ratio doesn't account for 'every' structure or pattern in the universe, it's certainly a major player. Some examples are in the structures of many types of seeds, flowers, fruits and vegetables, tree branches, shells, hurricanes, faces, spiral galaxies; even the microscopic realm is not immune to Fibonacci. The DNA molecule measures 34 angstroms long by 21 angstroms wide for each full cycle of its double helix spiral. These numbers, 34 and 21, are numbers in the Fibonacci series, and their ratio 1.6190476 closely approximates phi 1.6180339. Ref: 15 Uncanny Examples of the Golden Ratio in Nature 30. Hi Sabine, I find your examples somewhat confusing. First, you claim that not all real numbers are "real". Then, you say that space-time, as a differentiable manifold, is "real".
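The arithmetic in comment 29 is easy to check. A minimal sketch of my own (not from the comment): ratios of consecutive Fibonacci numbers converge to phi, and 34/21 gives the 1.6190476 cited for the DNA helix.

```python
# Illustrative sketch: ratios of consecutive Fibonacci numbers
# converge to the golden ratio phi = (1 + sqrt(5)) / 2.
from math import sqrt

def fibonacci(n):
    """Return the first n Fibonacci numbers, starting 1, 1."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq

phi = (1 + sqrt(5)) / 2            # 1.6180339...
fib = fibonacci(12)                # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
ratios = [b / a for a, b in zip(fib, fib[1:])]

print(fib[8], "/", fib[7], "=", fib[8] / fib[7])  # 34 / 21 = 1.6190476..., the cited DNA ratio
print(abs(ratios[-1] - phi))       # successive ratios get ever closer to phi
```

The convergence is fast: by 144/89 the ratio already agrees with phi to four decimal places, which is why the Fibonacci numbers and phi show up together so often.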
Doesn't the existence of a differential manifold presuppose the existence of real numbers? 1. Well, physicists arguably construct space-time over the real numbers. But ask yourself if you'd notice any difference if you replaced the real with the rational numbers. Does completeness play any role for anything observable? I don't think it does. 2. Hi, Sabine. I don't want to poop currants again, but wasn't the opposite kind of your main argument against the simulation hypothesis?* As usual: no need to answer me included. ^.^ The fact that Heisenberg made it clear to us that complete observability is not possible underpins your claim here, but also reminds me of the definition: "Truth is the sum of the influences that bring you to your next decision." I still like the idea, even if it doesn't correspond to the usual opinion. 3. "I don't want to poop currants again, but wasn't the opposite kind of your main argument against the simulation hypothesis?" In case you mean the discreteness argument, the rational numbers are dense in the real numbers. That wasn't my main argument though. 4. Sabine, you said: Does completeness play any role for anything observable? I don't think it does. I agree emphatically, but the full implications of that simple statement on physics and math are surprisingly broad and profound. For physics, if completeness never impacts observable physics, then models that begin conceptually at the continuum level and "back off" in complicated ways to accommodate the imprecision in the real universe cannot be the best way to represent reality. But these models work! Yes, emphatically. The trick is to realize that the stepwise and thus inherently imprecise algorithms used to implement such models are also capturing subtle details of the physics involved and must be explicitly included in the models.
Quantum mechanics even provides a delightfully generic heuristic for doing just that, which is this: The level of detail is proportional to available mass-energy. An example of this would be to merge General Relativity with the algorithmic calculation processes used to make actual GR predictions. This merger provides GR with new variables, where previously it assumed space always has infinite precision, regardless of available mass-energy or other factors. For example, if you link the algorithmic side of GR to the assumption that matter creates spacetime — which I would suggest is not that radical since you cannot "see" any aspect of spacetime without first having matter — then the ability to bend spacetime also changes. In cosmic voids devoid of matter, space loses resolution and becomes a bit blocky and excruciatingly flat. That's another way of saying that cosmic voids are explosive. They lose the ability to compete with regions of space where matter content enables curvier and thus more attractive space. The universe becomes less stable at large scales, not more. At galactic levels, such variable levels of "space resolution" could easily play a role in why galaxies have such a lovely diversity of beautiful forms. MOND? Probably, since spacetime stiffness would also extend the reach of large gravity wells. But unlike standard MOND, this approach would not violate GR, only give it new variables. My brother Gary, a lawyer, came up with an excellent phrase for capturing the idea that matter creates spacetime and thus affects its level of resolution: "In spacetime, matter matters." For math, if completeness never impacts reality, then a good deal of physics-related math that is assumed to be okay goes out the window. My favorite example is the entire topological concept of manifolds. (Somewhere out there: "What? Terry, dude, manifolds are rock solid and the basis of all sorts of physics, including General Relativity!" Add spicy words to taste.
:) The problem is that continuum thinking allows shapes such as balloons in 3-space to be “thinned down” until the inner and outer faces merge at some infinitesimally small scale, thus creating a 2-space. Topologists then discard the embedding space and focus only on the internal connectivity properties of lower-dimensional space. It’s even called a 2-sphere, which can be confusing since the surface of a 3-ball is a 2-sphere! One cannot do infinitesimal manifold surface mergers in a universe that lacks completeness. The best alternative is polarized manifolds with embedding spaces, such as a 2-sphere as the polarized (ball on one side, air on the other) surface of a 3-ball. Would such a shift impact physics theory? Sure. If nothing else, it would make all embedding space very much real. A hypersphere universe would require more than four dimensions, for example. Other impacts, such as on particle physics, are much more subtle. So again: While the assertion that mathematical completeness has no impact on anything observable seems innocuous, the devil is in the details. This particular feisty little devil ends up flipping many rather significant issues in physics and math on their heads. 5. It is a similar question with the simulation argument. Would you notice the difference if all the things you observed were approximated using computable numbers? What calculation would you make to show that it wasn't, if all you have to work with is also computable numbers? 6. Hi Sabine, An even more basic question: is the number π "real" according to your definition? 7. Tamás, No, Pi isn't real because you'll never need all those digits. I actually used this as an example in the video. 8. Robin, "It is a similar question with the simulation argument. Would you notice the difference if all the things you observed were approximated using computable numbers? What calculation would you make to show that it wasn't, if all you have to work with is also computable numbers?" 
Well, to say the obvious, that depends on how good the approximation is. But I don't know what you think this has to do with the simulation argument. The laws of nature aren't numbers. 9. Sabine, I find this interesting because for me, the existence of π is a textbook example of a "good explanation for our observations" (of course it is true that we cannot observe all the digits, but we can prove that they must be there.) 10. @Tamas: I don't think manifolds presuppose the existence of real numbers; it simply turns out that in our description of them, historically speaking, they were required. To be more precise, it ought to be possible to characterise the category of manifolds by a list of properties. This is what would normally be called a 'universal' property in category theory. I put the term 'universal' in quotes as I find this term confusing; personally, I find the term 'characterising' more descriptive. It's not the first time that the naming of a concept in mathematics is obscure. Maths is hard enough without making it harder by not naming concepts well. 11. "it is true that we cannot observe all the digits, but we can prove that they must be there" no you can't 12. @Mozibur: I was referring to differentiable manifolds. You can try to define what "differentiable" means without using the field of real numbers, but isn't that an unnecessary nuisance? It is so much simpler with reals. @Sabine: maybe there is a misunderstanding, but you define a mathematical object to be real if it offers "a good explanation for our observations". Observation: if I try to draw circles, no matter how small or large, and try to measure the ratio of the circumference and the diameter, then I get slightly different values, always close to 3.14. Explanation: in Euclidean space and for perfect circles, the ratio does not depend on the size of the circle. We call this ratio π.
Since our world locally resembles Euclidean space, my circles somewhat resemble perfect circles, and my measurements are more-or-less precise, I always get something close to π. Don't you agree that this is a good explanation for my observations? Do you have a better explanation? 13. All the Maths only exists in our minds as far as we know. It is just an artefact of the evolved brain in the physical world, which then unsurprisingly works well in describing the physical world. 1, π, a differentiable manifold with Lorentzian signature, a vector in a Hilbert space that transforms under certain irreducible representations of the Poincare group, are all apparently precisely defined although not known to be consistent. The maths structures fit the physics observations only up to the precision of measurement. So π is as real as a differentiable manifold with Lorentzian signature. 14. Terry Bollinger 1:35 PM, August 01, 2021 I still haven't completed my homework from your last comments, but... "For math, if completeness never impacts reality, then a good deal of physics-related math that is assumed to be okay goes out the window." Isn't the proof in the pudding? If a better model for the physical data is achievable by moving to discrete Maths, where is this better model with its better fitting results? 15. Steven Evans 5:36 AM, August 03, 2021 That is a meaningful point of view. What I find confusing is Sabine's claim that a differentiable manifold with Lorentzian signature is real, while π is not real. 16. Tamas, I do not know a theory that explains our observations which does not use a differentiable manifold. I know that if you cut off Pi after 10^300 digits, that'll still describe our observations. 17. Sabine, My take on this is that Pi as a mathematical object is conceptually simpler than "the first 10^300 digits of Pi" as a mathematical object. So while we can use the latter to describe our observations, it will just become a more complicated description. 18.
Sabine Hossenfelder 7:23 AM, August 03, 2021 "I know that if you cut off Pi after 10^300 digits, that'll still describe our observations." But isn't the 2 in E=mc^2 conceptual and experimentally E=mc^1.99999999, say, would do? Equally for some circular orbit C=πD is conceptual, but C = 3.141592654 x D would do experimentally. Isn't the point that we have reasons to think that the 2 and the π are the safest numbers to put in the theories in terms of maintaining their validity however much precision in measurement increases? But as concepts, 2 and π both only exist in our minds as far as we know? 19. Steven Evans said: Isn't the proof in the pudding? If a better model for the physical data is achievable by moving to discrete Maths, where is this better model with its better fitting results? Yes: If such ideas have merit, they must produce experimentally verifiable predictions by which they can be tested. They should also provide simpler and more computationally efficient models of known phenomena. Probably the best test candidate for mass-indexed, precision-aware algorithms is predicting the large-scale structure of the universe. No other topic in physics offers a more extreme disparity in the spatial distribution of ordinary matter. The hand-wavy prediction is that adding an algorithm-level scaling factor to code that implements general relativity equations should enable accurate prediction of large-scale cosmic structure, and do so without the need for dark matter or dark energy. The net impact of such a scaling factor in GR code would be to make the emptiest regions of spacetime "blockier" and less capable of supporting gravity. The resulting rule of thumb is this: The larger a cosmic void grows, the more aggressively it expands. Instead of dark matter pulling things in, emptier regions in general and cosmic voids in particular should push ordinary matter away from them, while ordinary matter will try to stick to itself.
Thus regions containing matter and regions empty of matter should behave somewhat like incompatible fluids, with one of the fluids -- the one that contains ordinary matter -- "stickier" than the other fluid. That difference alone should provide testable differences in predictions of how the universe looks at very large scales. Closer to home, matter-indexing of quantum field theory could provide an intriguing new twist on renormalization, specifically in how and why the importance of some virtual loops fades off. By "intriguing" I mean it sort of flips the entire interpretation upside down? The total mass-energy available in a situation becomes a fundamental given, not a derived value. How "real" virtual loops become then depends on how distant they are in derivation from the mass-energy that enables their existence. If mass indexing is valid in the context of quantum field theory, then the math should end up simpler and have fewer arbitrary assumptions, yet still produce the same predictions of particle and field interactions. One final note: While precision-aware approaches have a concept of scale or granularity attached to them, they are definitely not the same as discrete (e.g., cellular automata) approaches. Bits, precise numbers, and well-defined lattices are all emergent, limited-resolution artifacts in a low-resolution holographic universe. Anything "discrete" thus cannot be fundamental in such a framework. 20. Addendum: I just realized that my own assessment of a matter-indexed universe as "two immiscible fluids, one self-sticky, the other one expanding," has a nicely mundane interpretation: The large-scale universe should look like a fluffy loaf of bread. 21. Steven, "But isn't the 2 in E=mc^2 conceptual and experimentally E=mc^1.99999999, say, would do?" Yes, but those are both rational numbers so I don't get the point.
And in any case, as you certainly know, E=mc^2 isn't the right equation, it's actually a scalar product of two vectors, that is, a contraction with a two tensor, and the reason there's a two in that exponent is the same reason gravitational waves have spin 2. Without that two, all of General Relativity wouldn't work. Thus, the 1.9999999 might be compatible with some observations, but not with the vast majority of them because that'd make the theory inconsistent. 22. Sabine, In my view, Pi as a mathematical entity is simpler than "the first 10^300 digits of Pi" as a mathematical entity. The most economic way to define "the first 10^300 digits of Pi" is to define Pi first. Thus, if you replace Pi by the first 10^300 digits in your theories, then you get a more complicated theory, not a simpler one. Besides, just as in the case of replacing 2 by 1.99999, this would make the math inconsistent. 23. Let me return to this question of Sabine for a moment: "Sabine Hossenfelder 8:42 AM, August 01, 2021 Well, physicists arguably construct space-time over the real numbers. But ask yourself if you'd notice any difference if you replaced the real with the rational numbers. Does completeness play any role for anything observable? I don't think it does." Actually there is a huge difference. Consider the function that is 0 on rationals smaller than Pi, and 1 on rationals larger than Pi. If we consider only rational numbers, then this function is continuously differentiable, and its derivative is the all-0 function. So you can have arbitrary fluctuations in differentiable functions with all-0 derivative. I think this is a huge difference. 24. Terry Bollinger said: "Thus regions containing matter and regions empty of matter should behave somewhat like incompatible fluids, with one of the fluids -- the one that contains ordinary matter -- "stickier" than the other fluid.
That difference alone should provide testable differences in predictions of how the universe looks at very large scales." In fact, that's the conclusion reached in this paper https://academic.oup.com/ptp/article/69/1/89/1836044?login=true and a series of follow-up papers published by H. Sato and K. Maeda in the early eighties. And that leads to my own assertion that all avenues to an overarching theory that combines Newton/Einstein and MOND haven't been fully explored. If there were only one Void in the Universe and one gravitationally bound matter structure, the expanding Void would just push on it and that'd be the end of it. But, there are thousands of Voids and thousands of matter structures. So when an expanding Void 'pushes' against a matter structure there are others 'pushing' back from other directions. And while the net effect is overall expansion, what's underappreciated are the junctions between the expanding Voids and the bound matter structures. I contend that junction is curved! It lenses, looks and acts like a weak gravitational field. And though it's too weak to alter the dynamics of the densest regions, it can affect the less dense regions. And like you said, no Dark Matter is needed. 25. @Terry "Somewhere out there: "What? Terry, dude, manifolds are rock solid and the basis of all sorts of physics, including General Relativity!"" BI (Before Internet) folks think stones are real and quantum states are creepy. AI folks think quantum states are real and stones are creepy. Transitional folks think they are in an asylum, so anything goes :-) 26. Tamas, "Actually there is a huge difference. Consider the function that is 0 on rationals smaller than Pi, and 1 on rationals larger than Pi. If we consider only rational numbers, then this function is continuously differentiable, and its derivative is the all-0 function. So you can have arbitrary fluctuations in differentiable functions with all-0 derivative.
I think this is a huge difference." Of course it's MATHEMATICALLY a huge difference whether you define a function over the real or rational numbers. I was, needless to say, referring to the difference for our observations. "In my view, Pi as a mathematical entity is simpler than "the first 10^300 digits of Pi" as a mathematical entity. The most economic way to define "the first 10^300 digits of Pi" is to define Pi first." Possibly correct, but physicists don't actually use the first 10^300 digits of Pi. I didn't quite anticipate I'd have to spell this out, sorry. I don't know what inconsistency you might be referring to. 27. Brad, thanks, what a fascinating reference! And it's from way back in 1983! Here's a quick quote: "For example, the perturbed region with a density less than the critical density expands forever but the closed universe itself shrinks to zero volume within a finite time." I wonder if Roger Penrose is familiar with these papers? Their idea that the universe shrinks to zero volume as a void expands is at least reminiscent of Penrose's latest CCC concepts. Penrose might find these papers quite interesting. I will definitely look at these papers in more detail. It's interesting that their premise seems surprisingly modest: the geometry of a closed universe unavoidably leads to void instability. Have you published anything on your colliding-voids variant of the idea? Again, thanks! 28. Sabine, "Of course it's MATHEMATICALLY a huge difference whether you define a function over the real or rational numbers. I was, needless to say, referring to the difference for our observations." The mathematical difference that I mentioned implies, among other things, that we can no longer use differential equations to describe our observations (or at least we have to make some annoying extra effort to get rid of all the pathological solutions). "Possibly correct, but physicists don't actually use the first 10^300 digits of Pi.
I didn't quite anticipate I'd have to spell this out, sorry. I don't know what inconsistency you might be referring to." I don't see how physicists' current computational methods have anything to do with what is real and what is not. The mathematical inconsistency is simply the contradiction we get with the definition of the sine function if we assume that Pi is rational. 29. Tamas, "The mathematical difference that I mentioned implies, among other things, that we can no longer use differential equations to describe our observations (or at least we have to make some annoying extra effort to get rid of all the pathological solutions)." That's just wrong. For solving a differential equation it's completely irrelevant whether you postulate the functions are defined over the real or rational numbers. Your example above isn't even in the solution space. Of course if you have a function over the rational numbers it doesn't have values for the real numbers, so what "pathological" solutions are you referring to? The solutions are physically entirely indistinguishable, which is my entire point. "The mathematical inconsistency is simply the contradiction we get with the definition of the sine function if we assume that Pi is rational." Of course you do not assume that Pi is rational. What are you even talking about? 30. sorry, I meant irrational when I wrote real 31. I think my argument can be summarized as follows: The question is not whether we _need_ irrational numbers to describe our observations - it is whether we can use them to give _simpler_ (and thus better) mathematical models that describe our observations. I'm trying to convince you that this is the case: replacing the reals by rationals sometimes makes the mathematical models more complicated. 32. "Your example above isn't even in the solution space. Of course if you have a function over the rational numbers it doesn't have values for the real numbers, so what "pathological" solutions are you referring to?"
My example didn't have values on irrational numbers. Let me repeat: Consider the function that is 0 on rationals smaller than Pi, and 1 on rationals larger than Pi. This is defined only on the rational numbers. 33. Tamas, I understand what you say. I am saying it's wrong. It makes no difference whether you use the rational or real numbers for anything in physics. Using the real numbers makes nothing simpler - it makes no difference. The easiest way to see this is that physicists never ever use any properties of the real numbers specifically (that rational numbers wouldn't also have) for anything. As to your example, I don't know what you think this shows. Can you define discontinuous functions on the rational numbers? Yes, you can. So what? 34. Sabine, it still seems to me that you misunderstand my example. It is: defined only on rational numbers; continuous (every rational number has a neighborhood where the function is constant); and its derivative is 0 everywhere (again, every rational number has a neighborhood where the function is constant). 35. Sabine, Some of my answers are not showing up, so it is a bit hard to argue, but I'll try. You are right that "physicists never ever use any properties of the real numbers specifically". So why do their rational computational methods for solving differential equations work, if there are pathological solutions over the rationals? Here is why: 1) We know from math that, under certain assumptions, the differential equations have a unique smooth solution over the reals. Crucially, this is not true over the rationals, where the smooth solutions can be rather pathological, as my example shows. 2) We can also prove using math that, again under certain assumptions, the physicists' rational computational methods converge to this unique smooth real solution. We can even give quantitative bounds on the rate of convergence.
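Tamás's function can be sketched numerically. A minimal illustration of my own (not from the thread), using Python's exact rationals; the float approximation of π stands in for the irrational cut, which is faithful as long as the rationals involved are not within double precision (~1e-16) of π itself:

```python
# Sketch of the example from comment 23: over the rationals, the function
# f(q) = 0 if q < pi else 1 is locally constant around every rational q,
# because pi is irrational and therefore never equals any rational.
from fractions import Fraction
from math import pi   # float pi stands in for the irrational cut

def f(q: Fraction) -> int:
    return 0 if q < pi else 1

# Around any rational q, a step h smaller than |q - pi| cannot cross the
# cut, so the difference quotient is exactly 0: "derivative 0 everywhere".
q = Fraction(314159, 100000)     # a rational just below pi
h = Fraction(1, 10**9)           # step much smaller than |q - pi|
assert abs(q - pi) > h           # the step stays on one side of the cut

quotient = (f(q + h) - f(q)) / h
print(quotient)                       # 0
print(f(Fraction(3)), f(Fraction(4))) # 0 1 -- yet f is not constant
```

The numerical check mirrors the analytic point: every difference quotient small enough to stay on one side of π vanishes, while the function still jumps from 0 to 1 across the (missing) point π.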
So, you are right, the physicists only use rational numbers, but if we want to understand why this works, then we have to use math with real numbers.

36. Addendum 2: The large-scale universe should resemble Ciabatta bread [1], only stringier. Voids in Ciabatta bread expand fastest while enclosed by bubble walls that contain expanding gases. However, in a Ciabatta universe, there is no gas pressure. Thus, the rate of void expansion is driven solely by the absence of matter, not the presence of walls. This difference means that in a Ciabatta universe, walls are unstable with respect to collapse into filaments, and filaments are unstable with respect to collapse into "compact," roughly spherical galaxy superclusters.
[1] https://en.wikipedia.org/wiki/Ciabatta#/media/File:Ciabatta_cut.JPG

37. Tamás to Sabine: … I'm trying to convince you that … replacing the reals with rationals sometimes makes the mathematical models more complicated.

Tamás, I have a question for you: Can you give me a single example of a physics experiment that used real numbers to predict the results? If your instant first thought was "yes," please consider what you are saying. Every numeric prediction ever made in physics, whether back in the days of hand calculation or more recently by computer, was made using finite numbers of digits. Likewise, every fraction calculated in any physics problem used ratios of finite strings of digits. Even the decimal fractions used to describe such values are nothing but large integer numbers over powers of ten. Another name for these omnipresent ratios of finite numbers of digits is rational numbers. Hiding the rational-number foundations of all physics calculations by calling the more abstract parts "real numbers in equations" and the iterative, rational-number parts that do the actual work "algorithms that find approximate solutions to the equations" does not make the rational numbers disappear, nor does it simplify anything.
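The claim that every machine number is a ratio of finite digit strings can be checked directly in any language with exact rationals. A minimal Python sketch (my own illustration, not from the thread):

```python
from fractions import Fraction

# Every IEEE-754 double is exactly an integer over a power of two,
# i.e. a rational number. Converting one to an exact Fraction shows
# the rational that the hardware actually stores for "0.1".
f = Fraction(0.1)
print(f)  # a large odd integer over a power of two, not 1/10

# The denominator is a power of two, and the stored value is close
# to, but not equal to, the decimal fraction 1/10.
is_power_of_two = f.denominator & (f.denominator - 1) == 0
print(is_power_of_two)        # True
print(f == Fraction(1, 10))   # False: "0.1" was rational all along,
                              # just a different rational than 1/10
```

Nothing irrational ever enters the computation; the "real number" 0.1 the physicist types is, under the hood, one specific rational.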
In fact, I would point out that a head-in-the-sand strategy regarding real numbers has done astonishing damage to physics by encouraging sloppy models that are chock full of calculation noise posing as "theory." Calculation noise is what you get when you extend your predictive precision far beyond the ruthlessly hard information limits established a century ago by quantum mechanics.

Examples of such damage include: string theory; many-worlds nonsense (that one's not even quantum, it's just "I flunked coding theory!" infinite wave noise); the idea that we might be in a simulated universe because, hey, you know, you can stack simulations without regard to resource limits; the idea that an electron is a "real" point hiding somewhere in its Schrödinger wave function, versus the softer, fuzzier, and energy-paradox-free Dirac field (think orbitals); and even erudite Kruskal-Szekeres coordinates (ouch!) with their "free" infinite density of points at the center, resulting in a Jedi mind trick that falsely convinces readers that K-S has "solved" the infinite time dilation paradox of black hole event horizons.

All of the above assume, at levels so implicit that physicists usually are not aware of it, that Platonic perfection is part of reality and thus "free" when modeling that reality. Quantum mechanics, when expressed in terms of available information, does not support this premise. Ironically, even wave models of quantum mechanics fall into this trap by showing pristine, exquisitely precise waves of probability when, from an information perspective, there are at most just a few bits of "real" structure available.

38. Time for a confession: I'm responsible for addicting physicists to real numbers! Well, maybe not just me, but certainly my ilk. We are an evil lot, we electronics and computer and software types.
Decades ago, we looked out on a world chock full of engineers and scientists and mathematicians and game players (especially game players) who shared a deep yearning for the perfection of Platonic structure, for the reality of real numbers, for the soft differentiable smoothness and seductive curves of manifold smoothness, and thought, "Wow! What an opportunity to fleece some rubes!"

And so it began. At first, we offered our clientele just a taste, just a bit here, a few more bits there, enough to get them hooked. Then we poured it on! They all wanted perfection, a world that doesn't exist, a Reality of Reals, smooth and seductive, offering infinite precision at no cost, the stuff of dreams.

So did we give it to them? Of course we didn't! Come on, no such world exists! But oh, how almost-real and almost-free we could make that world seem, just by offering a little more silicon for just a few more bucks, Visa and Mastercard accepted! Dreams Come True, Reals Made Reals, Differentials Made Smooth. After Jurassic Park and Terminator 2, we even got the movie industry hooked! And every bit (heh, get it, "bit"?) of our supposed supply of "real" real numbers was a Unix pipe (heh, get it, "Unix pipes?") dream!

All we really gave them for their oh-so-many bucks was lots of bits, rational numbers, and time-bound iterative algorithms. But how they fell for it! Even physicists who should have known became so entranced with our costly-contrived cotton-candy visions of Platonic reality that they started spouting off about simulated universes inside of simulated universes because, you know, real numbers must be free after all, wow! (It helped that the bills for new computers always went to accounting, not them.)

And so the sad conclusion: We, the computer industry that knows in gruesome detail what is going on under the hood, chose to use our knowledge to addict the entire world to the fantasy of real numbers while conning them with nothing but rational ones.
We put a computer in every pocket, typically at the cost of many chickens indeed. If we did it over, might we have had mercy and left physicists out of our numbers-con, thus saving the world a few billion bucks in wasted research money and lives? Yeah, maybe. Plus university physicists can be so darned slow in paying our invoices! So forget physicists, it's still gamers that are the true core of our real-numbers con. Long live MMOG!

39. Tamas, sorry, I don't know what you mean. You have constructed the function so that clearly the left limit to Pi isn't the same as the right limit. Why do you think the function is continuous? There is no \delta for which, etc etc

40. Hi again Tamas, sorry, I found a whole bunch of comments (from you and other people) in the junk folder. Not sure why. In any case, they should all have appeared now. It occurs to me there's an easier way to answer your question. If you are worried about us supposedly getting flooded by discontinuous solutions to differential equations, why do you think we never have this problem when we solve the equations numerically, which we arguably don't do on the real numbers? (Not even on the rationals.)

41. Sabine Hossenfelder 1:14 AM, August 05, 2021: The function is continuous because Pi does not exist, so it cannot be discontinuous there. I think the point is that in math, you cannot both have your cake and eat it, i.e., you cannot say that only rational numbers exist, and at the same time pretend that their topological space is a line. No, the topological space of rational numbers is a much more complicated structure; that's why we have these strange continuous functions.

42. Sabine Hossenfelder 1:34 AM, August 05, 2021: I think I have answered this as well as I can in Tamás 5:37 PM, August 04, 2021. In a nutshell, we don't have this problem because we can mathematically prove that the numerical solution methods converge to a unique smooth solution that exists over the real numbers.

43.
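The convergence claim in comment 42 can be illustrated on a toy case (my own hedged sketch, not from the thread): the forward Euler method for y' = y, y(0) = 1, carried out entirely in exact rational arithmetic, still approaches the irrational value of the real solution, y(1) = e.

```python
from fractions import Fraction
import math

def euler_rational(n):
    """Forward Euler for y' = y on [0, 1] with y(0) = 1,
    using only exact rational arithmetic (no reals anywhere)."""
    h = Fraction(1, n)          # rational step size
    y = Fraction(1)             # rational initial value
    for _ in range(n):
        y += h * y              # y_{k+1} = y_k + h * y_k
    return y                    # equals (1 + 1/n)**n exactly

# Every iterate is a rational number, yet the sequence converges
# to e, which is irrational: the limit lives only in the reals.
for n in (10, 100, 1000):
    approx = euler_rational(n)
    print(n, float(approx), abs(float(approx) - math.e))
```

The computation is "physicist-style" rational throughout, while the statement that it converges, and to what, is a theorem about the reals - which is roughly the division of labor being argued over above.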
Tamas, I hadn't seen your earlier answer, sorry. I still don't know what you want, though. You seem to agree that the difference between defining a function on the reals and rationals doesn't matter for the physics, so what's your point? As to the discontinuity: in the end it's a matter of definition. If you don't want to call the function discontinuous because the point isn't in the space you defined the function over, then call it singular. I'd argue that since there is a sequence (defined on the rationals) that converges to a single value whereas the value of the function doesn't, it's discontinuous. Either way you put it, we seem to agree that such cases of course won't show up in solutions to differential equations just by replacing R with Q, hence you don't need R.

31. I've often thought about the apparent paradox of infinite precision and true infinities in pure math, and the limited opportunity for the real universe to make anything of these things due to the limited amount of energy available to it. I reached a sort of compromise based on this energy, saying the potential is there, but to explore it must take energy; therefore math is as real as we decide to make it. The unexplored depths of real numbers will wait forever and will always be there if we decide to go searching.

32. Suppose that math is the investigation of axiomatic systems. It doesn't really make sense to ask whether math is real; we should ask whether a particular axiomatic system is a complete and accurate model of our universe. It seems that no empirical evidence will ever be able to tell us whether we need all of the real numbers or whether we need the axiom of choice. There may be infinitely many axiomatic systems that are consistent with empirical evidence, or we might find that we are always refining our axiomatic models, getting closer and closer to the perfect model, but never reaching the final model, because it is impossible to fully describe the universe with axioms.
It seems there are questions we will never be able to answer.

33. I've always believed that the set of axioms chosen for our "math" matters (as shown by Kurt Gödel). A particular "choice" "limits" what you can do with any given system based on those axioms. So how do we know that we even have the "right" set of axioms? And I think we know there are truths inexpressible by the axioms we use. Do you think that affects physics in some way?

34. I think the imagination of reality we have is based on our sensory perception of our surroundings: the retina producing images, the cochlea forming the sounds and so on. Mathematics is just a production of a very odd structure we call consciousness, and as yet I have found no satisfactory definition for it. But apart from being undefinable, it produces mathematics, doesn't it? So in the end I believe mathematics is a product of our consciousness - that some parts of it describe the reality we perceive (even the reality only our technical detectors can perceive) is a mystery that I have been wondering about all my life (65 long years). Klaus Gasthaus

35. @Sabine Thanks for an interesting and balanced post. As I understand it (in shorthand), reality in math is some function of utility. I find that easy to accept from a practical viewpoint and life experience. So say some math is useful to explain a hypothesis but not make a prediction (yet), or to actually lead to actualizing the math physically. A possible conclusion might be that the utility is subjective. Not a very useful or insightful thought, but still... Maybe the tendency to jump to motivation isn't completely wrong?

36. Gödel's incompleteness theorems are two theorems of mathematical logic that are concerned with the limits of provability in formal axiomatic theories. These results, published by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics.
The theorems are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible. Ref: Wikipedia, 'Gödel's Incompleteness Theorems'.

For the sake of argument, let's suppose Gödel's theorems do prove that it's impossible to find a complete and consistent set of axioms for all mathematics. What does that say about theories in science? In modern-day theoretical science all theories are based on postulates, which in turn are based on mathematical axioms. That would seem to put a damper on finding a complete and consistent Universal Theory of Everything in science.

37. It's at least possible to offer a working definition of 'real' that facilitates a discussion - but only within a specific context. The word 'physical' is more problematic. I'd say it means nothing. It's a placeholder for a concept many people seem to wish they had; but it's really just sound in the air.

1. Agreed. The video is great, but the entire conversation is rooted in the word 'real', which is wide open to subjective interpretation and misinterpretation.

2. As I said, if you want to use a different definition of "real", fine, but then please make sure it's consistent and meaningful.

3. Would you be satisfied with: "What is real is what can be measured"?*

4. Who measures? What is a measurement? Do you actually need to measure something, or does it just have to be possible to measure it? How do you know it is possible to measure it? I don't know what this definition even means.

5. I think I'm gonna take that as a NO! ^.^

6. On second thought... 1) any field 2) energy transfer 3) in this context it has to happen 4) I was hoping that you could tell. Do you like currant cake?*

7. Good morning, muck, der and Dr. Hossenfelder. I was typing out an answer to Sabine full of explanations, then I remembered I was replying to a physicist with a degree in mathematics.
I'll just add, I think the measurement is taken by someone or something that collects and collates the data - the observer who hears the tree fall in the forest and registers the resulting vibrations as the sound the tree made. Muck, should we be suspicious of currant cake? :)

8. Good morning, C Thompson (8). A little mistrust rarely hurts, as long as it doesn't lead to prejudice in the next step, I think... and I think that collecting the data by a sensor in your example is actually another measurement, at least according to my definition (x I'm really not the type who loves to define things but I actually really like currant cake x)

9. Got it. I'm wondering why one would not want to poop currants, figuratively speaking. Google is too literal with its results.

10. (8) °[ I was wondering where to write this. So many brilliant thoughts that one would like to build on. I think I'll stick with the currants... "You probably think I am "real". Why? Because the hypothesis that I am a human being... explains your observations. And it explains your observations better than any other hypothesis...", [from the YouTube clip linked at top, ~ 0:37 - 0:56] How can one be sure that this statement is true?* I mean: Sabine claims that this is "the best we got", regarding our observations. I dare to think that this is just one side of the medal, or in other words a cherry-picked hypothesis, which misses an - at least for me - important aspect: the "self-interest" {but I really do not like this term, so let me say:} "the hope of self-realization due to interaction", or more simply, "the wishful thinking" of the recipient. I think it is not only the "fits the observation best" assumption but also (if not primarily) the assumption which gives the observer the maximum of possible interest... With the intended meaning of: interest <> that which is between.
Following this train of thought it becomes - at least for me - even more clear that the "Multiverse Hypothesis" is not only unscientific but also without any interest, because it terminates the process of self-realization... at least outside the community of its believers and wannabe believers.

38. I guess for me the problem with the Platonist route is a horse-and-cart thing, something like this: does an electron consult the Standard Model to decide how to move? As I see it, the electron just gets bumped along by what's around it and "cares not one whit" for the Standard Model. We know by observation that the way electrons get bumped has regularities, so we can construct a mathematical artefact to describe what they do. Actually, a series of artefacts: currently the Standard Model, later on probably something else. The next version might use a quite different formulation, like particles>waves>fields>the next thing, with a different gestalt and a different set of consequences.

From my point of view, the equating of reality with equations is a kind of conceit leading to anything, like the idea that other universes must exist because you have written down an equation. The most egregious example of this in physics (absolute doozies abound elsewhere) would have to be Everettian Many Worlds, which generates a complete universe, or at least a light cone, whenever anything at all happens - and actually, when things don't happen but might have. All in service of some equation. Models and out-there reality are completely different things. We can't talk or think about reality - or do physics - without models, but they are just different things.

39. Sabine, "But what's the difference between the math that we use to describe nature and nature itself? Is there any difference?" There is an obvious difference. If you want to go from A to B you need a car. A description of a car is not enough. A car you can drive is real. It can take you from A to B. A picture of that car is also real.
You can look at it, place it inside an album, etc. But the picture is different from the car; you cannot drive it. A mathematical description of that car is also real, but it's real as a mathematical description, not as a car. You can store it on a computer, copy it, make a simulation using it, etc., but you cannot drive it.

1. Andrei, I think you're confused about what "description" means. Of course we do not have a description of a car that is as complex as a real car and that does the same thing as a car, etc. A picture of a car is but a rough description of some features of a car. You have to ask yourself the following: what properties does a car have that you cannot describe by math? The fact that no one *does* describe a car by math is rather irrelevant.

2. Even if the description were to include all details, down to subatomic particles, you still cannot drive it. It's just some data on a computer or paper. A perfectly accurate description of a car is still not a car. There is no property of a car that cannot be described by math. But that description, even if perfect, is still not a car. You can use the description to build a car, true, but the building process is an absolute requirement. The description only contains the information needed to build the car; you need actual matter (electrons, protons and other particles) to make a car.

3. "There is no property of a car that cannot be described by math. But that description, even if perfect, is still not a car." If you can't find a difference between the description and the real thing, then talking about a difference isn't scientific. As I already said, you are confused about what a "description" is. It's not just an equation or a list of properties. It's the complete mathematical structure. I hope you understand that an equation isn't the same as a solution to the equation, which isn't the same as the embedding of that solution into space-time?

4. Btw, Tegmark explains this very nicely in his book.

5.
Sabine, “If you can't find a difference between the description and the real thing then talking about a difference isn't scientific.”

There is a difference. A car moves, powered by its engine; it evolves/changes in time. A description of the car, no matter how complex, is static. A mathematical structure corresponding to a car does not move, powered by the mathematical structure corresponding to its engine. There is no time evolution there. An electron accelerates in a magnetic field. The mathematical structure corresponding to an electron does not.

Tegmark understands this problem, and tries to solve it by an appeal to the block-universe concept in chapter 11 of his book, “Is time an illusion?” You would expect to find here an explanation for this illusion of time, but there is nothing there. He says: “If the history of our Universe were a movie, the mathematical structure would correspond not to a single frame but to the entire DVD.” The problem is that a DVD is not a movie. You need a DVD player, which is a real device, evolving in time, to transform the static information burned on the DVD into a movie. Tegmark does not explain anything; he simply asserts that an observer inside the mathematical structure we call the universe would have this “illusion of time”. He does not deduce our observations from his postulates. He postulates our observations as well. It’s the same trick employed by the many-worlds proponents. They can’t explain Born’s rule from their postulates, so they simply assert that our observations are in agreement with Born’s rule. A scientific theory should explain what we observe starting from its postulates, not postulate the observations themselves.

In the chapter “Description Versus Equivalence” Tegmark makes the argument you are referring to: “Remember that two mathematical structures are equivalent if you can pair up their entities in a way that preserves all relations.
If you can thus pair up every entity in our external physical reality with a corresponding one in a mathematical structure (“This electric-field strength here in physical space corresponds to this number in the mathematical structure,” for example), then our external physical reality meets the definition of being a mathematical structure—indeed, that same mathematical structure.”

OK, let’s analyze his “argument”:
P1. Two mathematical structures are equivalent if you can pair up their entities in a way that preserves all relations. - OK, I agree with that.
P2. “You can thus pair up every entity in our external physical reality with a corresponding one in a mathematical structure.” - OK, I agree with that too.
C. From P1 and P2: “our external physical reality meets the definition of being a mathematical structure.”

Can you spot the problem here? P1 is about two mathematical structures, not about physical reality, so C does not follow. Nice try, Dr. Tegmark!

6. Andrei, a function of time is a mathematical structure. And, no, I don't see a problem with that argument. It's logically correct.

7. @Andrei "A car moves, powered by its engine, it evolves/changes in time. A description of the car, no matter how complex is static." Think how you've developed the knowledge of math in the first place, what it takes to share it and teach it, i.e. to propagate it further, and you will see a dynamic picture, though of a different kind. The design represented by abstractions is itself constantly evolving (in its own 'knowledge space'). People are just habitually thinking that "abstractions are just words, not real", so some fancy of an imaginative quick brain that got lucky while others didn't. Or "Don't know the meaning of the word? Look in the dictionary!" Yet, it does not work that way. If I look up "Dirac's sea" in the dictionary w/o an appropriately developed structure of knowledge, I will start questioning the sanity of people who do that stuff (or my own, it doesn't matter FAPP).
I think the best analogy for abstractions that we haven't got in the discourse (and words in general) is that an abstraction is not some static textual concept; we can approach it as a program that one has to install (yet carefully, as there are also viruses and malware) in order to learn something. It does create a possibility for operational usage, but that may as well not be expressed. Yet, if one does not learn the design of a car (or does not repeat the whole development in knowledge which that design compresses in itself), there is no car. That type of knowledge does require certain organized processes with a high level of cooperation and concentration of bright brains. If you still say 'that's just mental!' - then it just does not contain any information concerning how you would propagate knowledge (e.g. "how to teach Cargo Cult people to build real airplanes? how to organize their society (with the assumption that most adults will not change their views, they are calcified, so won't uninstall the malware)? where to begin, even?").

Yet even that argument is not precise, as when we learn new information, new synaptic connections are physically forming in the brain, and the funniest part of that is that we rarely understand how they will actually manifest (yes, AI and neuroscience will help, but up to a point, and there will be a "coming back to the drawing board" moment). So at the very least, you can always say that by learning the design of a car (math), you do change your synaptic connections, so there are dynamic processes involved (and which you can check already); it's only that they are not what you expect, so to speak, not under your nose.

8. I guess the argument above may lead astray. The import is that P2 is possible at all, i.e. compression can be done from observations to a structure of functional relations. Nature enables that feature!
And transfer of knowledge is therefore possible, which is the source of wonders (at least to me, as it's not at all an obvious feature). So if it weren't an a priori built-in feature of Nature, the compression itself would be in question (in fact, I don't think there would be anything that would start pondering about it). So, although I don't know anything about Max Tegmark, it's understandable that some people wonder about that miracle and postulate mathematical foundations for Nature itself. So what we call math is just a small subset that we've pried so far from Nature.

9. Andrei, to me it seems you are mixing up two different "real"s or realities. What I think Sabine is talking about is the reality of a (mathematical) model, and not the physical reality that you seem to be talking about.

10. Vadim, "The design represented by abstractions itself is constantly evolving (in its own 'knowledge space')." The design is not evolving by itself. You need a physical brain to evolve it. The car is moving by itself (assuming it's autonomous). So the design cannot be the same thing as the car. There is a correspondence, but not identity. Likewise, a DVD does not play itself. You need a physical DVD player to do that. There is a correspondence between the movie and the information stored on the DVD, but the DVD is not a movie.

Sabine said that "a function of time is a mathematical structure." True, but you still need a physical device, a computer, to make it into an evolving structure. Again, there is a correspondence between the function and the changing graph on the computer screen, but the function is not the changing graph. Tegmark simply postulates that it is in fact the case that the DVD IS the movie and no DVD player is necessary. The DVD is just undergoing the illusion of being played on a DVD player. Tegmark does not explain how a static structure can have illusions, and why we have the illusion of time and not some different one.
As such, his theory is devoid of any value.

"The import is that P2 is possible at all, i.e. compression can be done from observations to a structure of functional relations. Nature enables that feature!" If you are able to observe something, your brain has to be able to store that information, so, at the very least, a description in terms of brain patterns must be possible. Hence, the fact that we can provide descriptions for our observations does not look unexpected to me. But anyway, my point is that Tegmark fails to provide any evidence that physical reality IS math. As my above examples show, a correspondence does not imply identity.

11. @Andrei "The design is not evolving by itself. You need a physical brain to evolve it. The car is moving by itself (assuming it's autonomous). So the design cannot be the same thing as the car. There is a correspondence, but not identity." The point is that it's the same with the car (it does need a brain; the fact that it is a kind of a "flywheel" (memory) that just reproduces environmental stimuli is enough). You see, the argument is not about going into symbolic (or conceptual) space vs perceptual space, and not about their successive products (reduced to abstractions, which are not static, so represented by relations), but that both spaces manifest patterns, which are captured by relations, sometimes curious and surprisingly similar according to mathematical structures. And that is what I was getting at.

The design of a car is not static, because the car is not what it seems when perceived by a monkey. And if you postulate that 'an autonomous, detached-from-the-network, AI-controlled car independent of its environment, an evolution of neg-entropy-pumping molecules organized in curious patterns, etc., is more real than the operation of all the structures that are necessary to produce them', you are simply returning to square one, i.e. you are equating such an appearance to Nature, which... it is!
:-) Some say, "it's just a product of human thought", but that's a half-baked, head-in-the-sand argument. Do you know where this thought came from? And to whom? Do you really think you have agency over it?

Concerning correspondence and identity: identity in itself is a murky operation (it often may relate *a section* of one process to another process, making something static and then forgetting about it, if the person using it is not careful). Especially when we take some postulates out of context (that's why I prefer considerations; postulates or axioms, i.e. definitions, are rarely useful for thinking). I don't know what Max Tegmark considers in his text, but P2 is semi-correctly formulated the way I see it, i.e. "...reality *meets the definition* of being a mathematical structure", which to me reads as "for all we know, phenomena *can be seen as* mathematical representation, so they must rely on some orderly structural foundation". I.e. my light mistrust goes to "definition", but it can simply mean, "Nature exhibits orderly, hence mathematical, behavior". Yet, if he strongly identifies and strongly ontologizes such concepts, it's indeed a confusion.

Sometimes it's used because it sells well or makes it easier to get a grant; compare two statements: "You are an automaton which can be uploaded through the wires! And I work to make it a reality!" with "FAPP, conscious behavior can be emulated and represented by a model, which can be uploaded through wires". Which in colloquial tones may be expressed as: designs, cars, triangles, manifolds, trees, apples, you, I, the universe are all "real enough" in that sense. They are abstractions. Yet of different orders. It's just often difficult enough to express something in words, so I tend not to pick and choose by excerpts (while the import seems coherent). And I thought it was, so I attempted to add a few cents.
But I personally do not like the concept of "real" (and "existence") itself, as it doesn't tell you anything, and for synthetic generalization one can always use Nature (or Universe, Cosmos, life, etc.). But I'm not "against it", as it can be used as a thinking-process generator and helps to bring up other matters and clear the air a bit. So it's like a philosophising operator (oh goodness, not a philosopher).

40. This comment has been removed by the author.

1. Jonathan, I tried to organise an MRI scan at an imaging centre with staff trained to work with patients with pacemakers in Canberra (closest to Mum's place outside of Sydney, which has a rampaging COVID-19 outbreak), and then to organise the right doctor from the hospital I was in to expedite my appointment, and I couldn't manage to think of what to say properly, so Mum did it for me. Concurrently, I am perfectly capable of cogitating on such abstractions as 'Is maths real? What is 'real', even?' and following along somewhat, so go figure. I'm here all month. :} I think 'it's aliens!' is now a catchphrase of the S.H. fandom.

2. I'm leaving my now non-sequitur reply to the removed comment to baffle future readers.

41. Anyone at all interested in the Multiverse, or who wants to discuss it, should read an excellent book about the Multiverse written by someone who is a) extremely knowledgeable on it and many other topics and b) has no personal stake in the debate. We need to be careful that opinions are not based on the loudest sound-bites.

1. Phillip Helbig 8:58 AM, August 02, 2021: "To begin with, it seems hard to deny the possibility in principle that a multiverse theory might hold." So what is the scientifically testable formulation of a multiverse theory? Oh, he doesn't have one - it's not even a scientific question. And how do we calculate the "possibility" of a theory, even of "just" being non-zero and so possible "in principle"? Oh, he doesn't know, because he's just spewing bullsh*t.
More complete drivel from another hopeless moron, and the CUP are only too happy to stick a "cool" graphic on the cover and hawk yet more of this stinking rubbish. After reading the 210 pages of this comic, what extra facts will we learn about the physical universe? Answer: zero. This guy and the CUP are outright frauds. Just churning out vaguely plausible-sounding, pseudo-intellectual rubbish at the taxpayers' expense. Disgraceful, incompetent parasites, the lot of them. 2. The multiverse is a consequence of inflation. Inflationary cosmology has some empirical support. It is consistent with the structure of the CMB, and the Λ-CDM predictions appear to be in line, at least with respect to the accelerated expanding universe as some physical vacuum with expansion that transitioned from an unstable vacuum with a much larger acceleration. That inflationary acceleration was 60 e-folds, or about a 10^{26} expansion in 10^{-30} sec. The multiverse comes about when one considers that the transition to the physical vacuum occurred in a causal bubble, and was not something that occurred throughout the de Sitter spacetime. So this inflationary bubble, as a transition from an unstable vacuum to a physical stable vacuum, should then just be one of a vast number of these. Inflation is consistent with the data, though as yet we do not have data on B-modes or other clear evidence, and the multiverse is then a consistent derivation from inflation. As a result the multiverse is probably at least a 50% proposition.
And the noddy little Philosophy 101 ideas like the "inverse gambler's fallacy" are of no interest to Physicists with the LHC, LIGO, Hubble, etc. So this book is a ménage à trois of pre-Galilean confusion, Iron Age myth and undergraduate "philosophy", not "excellent". ** There is no model in existence that includes fine-tuning/a multiverse which fits the data better than a model without them ** That's the point, as made by Galileo half a millennium ago. 4. The inverse gambler's fallacy may bolster your case. If you witness an unlikely outcome, say throwing 6 coins and they all come up heads, the fallacy is to assume there has been some prior set of trials. The error can be seen in an application of Bayes' rule P(X|Y) = P(X) P(Y|X)/P(Y). If you observe a single trial that is exceptional, then P(Y|X) = P(Y), the trial being independent of previous trials, and this leads to P(X|Y) = P(X). Learning Y does not update our belief about X. This in ways takes the fine tuning and argument by design down a notch or two. Your arguments here amount to a lot of yelling and thrashing around, but really do little to add any real content. The multiverse has some physics basis to it. This in no way is a proof for other cosmologies. I suspect the great majority of these are off-shell terms in quantum cosmology. This means they can be dismissed as not physically real. I do not know whether this argument can remove all other cosmologies and leave only the observable one. 5. Lawrence Crowell6:18 PM, August 03, 2021 We've been through this before, Lawrence. Inflation and the multiverse are not falsifiable theories so are unscientific. There are 100 versions of inflation predicting B-modes and another 100 that don't. Consistent with the data? So is God. Λ-CDM? Contains a lot of energy in early unobservable times. Doesn't make it true even if it's the current best model. Revisit Galileo - no jumping to Platonic conclusions. Transitioning from unstable vacua to physical vacua in innumerable bubbles??
All unfalsifiable pseudoscience. "As a result the multiverse is probably at least a 50% proposition." You would be having a laugh if you'd written 1%. The multiverse is not even a scientific proposition. It's currently meaningless. 6. Who says these are not falsifiable? Inflation most certainly is falsifiable. If there is something found in the universe that falls outside of inflation then it is wrong. The multiverse, while it is a consequence of inflation, is more nuanced. The main prospect for finding evidence of the multiverse is to find signatures of interactions between different pocket universes. You are making category errors. One does not need to directly observe something. All one needs is to observe and measure consequences of something. In the case of the vacua transition, that is the mechanism of inflation. So far that is consistent with data, and with further work maybe detection of B-modes can reach the 5-sigma level. That is all one needs. We do not observe quantum waves, but rather measure physics to obey properties predicted by quantum waves. If the multiverse is not testable, then neither is a single-universe hypothesis. 7. Lawrence, Inflation is not falsifiable because the word "inflation" doesn't define the theory. You can choose any potential you want and make that fit to whatever observation comes. And in any case, finding evidence for inflation in *our* universe tells you nothing about the reality of universes that we can't observe. Lots of physicists are seriously confused about this. Just because you have math for something in your theory doesn't mean it's real. We assign reality to something when we observe it. If you can't observe it, you have no rationale for calling it real. Seems to be difficult to grasp. 8. Lawrence Crowell7:08 AM, August 04, 2021 But the inverse gambler's fallacy is irrelevant to physics anyway as we only have 1 observed example of physics and cannot say whether it is exceptional or not.
No probabilities can be assigned. Fine-tuning and the argument by design are unfalsifiable and therefore not scientific, so they are on the bottom notch from the get-go. Again, the inverse gambler's fallacy is irrelevant. Remove all other cosmologies? What other cosmologies? There is only 1 observed cosmology. Inflation, the multiverse, fine-tuning, string theory, "God" are all unfalsifiable and therefore unscientific. They all deal with "spaces" beyond physical data, and so you can tweak the "theories" at will to try to make them join with the actual physical data. From a scientific POV they are all literally meaningless. God created the universe 4,500 years ago. But the universe is at least 13.7 bn years old. Oh, OK, God created the universe 13.7 bya. This is the kind of blatant moving of goalposts that is being perpetrated in all these "theories". It is not Physics. The CUP book ignores what Galileo told us half a millennium ago, bizarrely mentions a mythical character from an Iron Age fairy tale, and introduces irrelevant ideas from undergraduate philosophy. It cannot be overstated how utterly, utterly ludicrous it is that this book has been published. Given its content, it should have been written with a quill on parchment, or maybe published on stone tablets. What will CUP be publishing next: "Voodoo: A concert pianist's take"? "Witchcraft and sorcery: Your local newsagent's outlook"? It's anybody's guess. 42. “This idea is not in conflict with any observation. The origin of this idea goes all the way back to Plato, which is why it’s often called Platonism … “ Plato’s basic understanding was that the world is determined by structures. The objects which we see are more or less unimportant stuff. And the rules which we observe in the world are not physical laws as we have understood them since Newton, but the consequence of the dominance of these structures. Like the motion of the planets, which follows the basic structure “circuit”.
It had some influence on physics that the German educational system a century (or so) back was based on this position of Plato's. When it was detected, with some helplessness, that particles behave differently than expected, Werner Heisenberg stated that the only solution for physics could be to go back to the concept of Plato. So he developed and enforced a structure-based understanding of it in his version of quantum mechanics, in contrast to Schrödinger and de Broglie, who wanted a more physical solution. Similarly, Einstein was influenced by this spirit (even though he didn’t like it and left the school early). Einstein, too, developed a structure-based relativity, in contrast to Lorentz, who likewise aimed at a more physical solution. So, Plato is more a part of our physical world view than most of us are aware of. 1. “So, Plato is more a part of our physical world view than most of us are aware of.” antooneo, that’s an excellent observation. Any mathematical use of words such as “point,” “line,” or “surface” also amounts to an invocation of the perfect structures of Platonism, since none of these concepts have exact physical representations under the known rules of our universe. When Plato postulated such perfect structures, his hypothesis was so effective at explaining observations that it remained largely unchallenged for nearly two millennia. It was not until the 1920s that the overwhelming evidence for atomicity and quantum blurriness destroyed any serious hope for uncovering real-world examples of Platonic perfection. Yet as you noted, even quantum theory founder Heisenberg reacted to emerging quantum theory not by embracing uncertainty but by recommitting himself to the path of Platonic perfection and structure. But why did Heisenberg take such a position in the face of quantum uncertainty?
My suspicion — nothing more — is that even though Heisenberg, by his statement, “formed his mind” by studying Plato, it was the subtler influence of Newton and Leibniz’s calculus that most inclined him towards recommitting to Platonic perfection. For both Heisenberg and most scientifically inclined folks, the deeper question is this: How can the centuries-old masterpiece of calculus with its intrinsic reliance on infinitely detailed lines and surfaces work so well if there is not some deeper Platonic world of pure structure residing at the end of its infinitesimal limits? Aren’t the mundane algorithms of numeric calculation nothing more than imperfect lenses for glimpsing that ultimate, timeless perfection? Here’s a different interpretation: Far from accessing some timeless world of perfect structure, the calculus is just a type of compiler. Its rules transform one formal expression into a new formal expression that gives the same result — has the same “meaning” — when it is “executed.” After all, taking a limit merely creates a new formalism that, with the application of sufficient calculation resources, gets you a bit closer to the still-unreachable goal of Platonic perfection. In this interpretation, the issue of limits becomes a heuristic — and not always a good one, as demonstrated by the false dualities of manifolds — for ensuring that meanings of the two forms remain compatible even when iterated to “infinity.” For anyone dedicated to Platonic perfection, the most unsettling feature of this interpretation is that the Taylor series you use to calculate a result may be closer to the actual physics than any timeless expression of classical Platonic perfection can ever hope to be. 2.
Terry Bollinger, you say: “Any mathematical use of words such as “point,” “line,” or “surface” also amounts to an invocation of the perfect structures of Platonism, since none of these concepts have exact physical representations under the known rules of our universe.” If Plato’s concepts do not have exact representations in our universe, are they then necessary for us in understanding the universe? Maybe these exact representations are beyond what our universe contains. And has his hypothesis really been so effective? Take the example which I have mentioned: In history, the followers of Plato did not see a reason to leave the Ptolemaic system. Because they did not see a reason to ask WHY the planetary motion is as it is. It was Newton who found this, and Newton’s goal was not to find more abstract structures but to understand the rules of the mechanical motion. And yes, I find the view of Plato in our present physics, but not as an advantage. If we look at quantum mechanics in the way of Heisenberg and at relativity in the way of Einstein, it reflects Plato in the way that both acted to find structures but failed to find the causes of the physical observations. And I see here the master reason why present physics is in a type of a crisis. You suspect that the world has somewhere a perfect structure which we have to find. Maybe it is that way, even though I doubt this. But I see the blockage of the development of a better understanding if this is our only or our exclusive view. For example, it is possible to understand particle properties by direct understanding and make calculations yielding precise results, even though Heisenberg has stated that this is impossible and should not even be attempted. And on the other hand it is possible to find the physical causes of relativity, which helps to solve open problems like dark matter and dark energy. But unfortunately well-educated physicists do not even attempt to do this.
So, Plato is still with us, but in my view this is more a load than an advantage. 3. "How can the centuries-old masterpiece of calculus with its intrinsic reliance on infinitely detailed lines and surfaces work so well if there is not some deeper Platonic world of pure structure residing at the end of its infinitesimal limits?" To those who feel this way (which I understand is not Dr. Bollinger's position), I would reply that calculus is just the limit of finite-difference systems as the minimum increment goes to zero. Therefore in a universe governed by finite-difference equations, provided the minimum increments were small enough not to force themselves on people's attention, the notion of calculus would still be natural to derive. (And in fact, calculus is used as an approximation for many discrete systems, such as fluid flow and electricity.) I for one do not see the logical necessity assumed by the above quote. It seems to me the inverse of an argument that since large numbers exist, infinity must exist, in some higher plane. I much prefer to think that our universe has certain properties which allow certain things and relationships to exist and work, and others not to work, so our Platonic universe and our physical universe are one and the same. For example, conservation of energy on the macroscopic scale says that some things tend to exist long enough to count, so integers work. In another universe, this might not be possible, or one plus one might equal zero, if every particle was its own anti-particle. So in order for the Platonic plane to apply to all conceptual universes it might have to contain contradictory relationships such as 1=-1. If we limit it to things which work well enough to be useful (if only conceptually) in this universe, then, voila. 4. antooneo: You suspect that the world has somewhere a perfect structure which we have to find. Please read my entire comment. 
Respectful restatement of an idea with which one disagrees more than anyone else on this planet is not quite the same as being an advocate of that idea... :) 5. JimV: One little point. You say: “I much prefer to think that our universe has certain properties which allow certain things and relationships to exist and work, and others not to work, … . For example, conservation of energy on the macroscopic scale says that some things tend to exist long enough … .” The conservation of energy seems to me a good example for this discussion about Plato. We know this physical law, but has anyone ever asked for the cause of it? To my knowledge not. And this too is part of the follow-up of Plato. On the other hand, I know a particle model from which the conservation of energy follows. That has an important consequence. Quantum mechanics uses the model of exchange particles to describe forces. Now the permanent emission of these exchange particles (for instance for the electric force) means a permanent violation of the energy law, because any exchange particle can transfer energy onto a charged object, maybe after a very long time and distance. In view of this, it makes sense to deduce the energy law from the internal setup of a particle, because in this case the law is only effective for structures from an elementary particle upward, and so there is no logical conflict with this exchange process. 43. (1+2¹+3²+5³+1/2¹*3²/5³)⁻¹ = 137.036⁻¹ 44. Nope, math is just a language like any other. We all speak it. Some to a greater extent than others, and there are different dialects. I cite Reverse Polish Notation as one of them. I speak 4-function math with little understanding beyond that. Others have a much greater vocabulary than I do. Just like spoken languages, maths change with time. Wasn’t the order of operations different before the twentieth century? New *words* are added occasionally to aid in the conveyance of ideas.
Isn’t 𝚿 just shorthand for a much longer wave function equation? It didn’t exist until that Schrödinger guy came along and added it to our vocabulary. Just like spoken words, 𝚿 probably has several different meanings depending on the context. Maths can be used to write stories, and that includes fiction. It can be used to write history in financial ledgers. It can be used to write forecasts. It’s used to describe Nature around us. Someone once said that figures don’t lie, but liars can figure. So yes, it is used to deceive others as well. Pyramid schemes come to mind. Someone else once said that mathematics is the universal language. I say, universal only if we’re talking about planet earth. Other species may find our calculus childish. If we find something that can’t be described by maths, it’s probably aliens. 1. I like this one. :) (Schrödinger = 'That cat guy again') 2. Random cogitation: I was watching Olympic show-jumping earlier and contemplating how different mathematic modalities (is that the word?) are like how different subjects have different specialised vocabulary to distinguish and describe different things. Perhaps it's like how someone familiar with horses (as a child I was obsessed with equines) can look at the coats of different animals and think: chestnut, sorrel, dun, bay, blood bay, liver - but someone else might look at the same variety of colourings and think, 'brown, or brown and black.' Many of the people here know of a variety of breeds and colourings of these maths, but I'm looking at the field and thinking, 'it looks like ... math things' 45. This comment has been removed by the author. 46. I always think of physics as being maths with units, where it's important to know the quantity you're talking about, and not just the mathematical equation. So Pythagoras' theorem has a nice mathematical form, which is useful when dealing with lengths, but not quite so useful when dealing with densities or magnetic fields.
I know that many equations in physics work with dimensionless quantities where the units are spirited away but, under the bonnet, there is always some actual physical quantity at play. If you see three apples, then three describes part of what you see; but you also need the units, apples in this case. I remember at school when I would get zero marks for the answer to a physics question whenever I left off the units. Overall I think I am more impressed with the world being explicable in terms of four base units (mass, length, time, electric charge - and the other few) than being explicable in terms of mathematics. And, of course, in any mathematical equation describing the world, the units have to balance. So, along with the question "Is maths real?", perhaps we should also ask the question "Are units real?". 47. What about solving a quadratic equation involving motion, where one of the two solutions is negative? 48. I think the imagination of reality we have is based on our sensual perception of our surroundings: the retina producing images, the cochlea forming the sounds and so on. Mathematics is just a production of a very odd structure we call consciousness, and until now I have found no satisfactory definition for it. But apart from being undefinable, it produces mathematics, doesn’t it? So in the end I believe that mathematics, as a product of our consciousness - the fact that some parts of it describe the reality we perceive (even the reality only our technical detectors can perceive) - is a mystery that I have been wondering about for all my 65 years. 49. A precursor to Plato was Pythagoras; however, we know so little about his philosophy that it's hard to say anything concrete. However, Plato was known to belong to certain Pythagorean circles and I think it is fair to say that his philosophy is heavily influenced by Pythagoreanism. In fact, I'd say, if you're interested in what Pythagorean philosophy is like, read Plato.
I'd also distinguish mathematical Platonism from Platonism per se, as the former is a more recent creation and a much truncated version of Platonism. Plato thought of mathematics as real and a stepping stone to his theory of forms/ideas, whose steps are the dialectic. Hegel described this dialectic in his *Phenomenology*, but starting in the reverse order, from Being/Non-Being or what the Pythagoreans and Plato would have called the Monad or the One. Two of the lower forms of the One are the forms of the Good and of Justice. In Christian or Islamic Platonism this is identified with the attributes of God/Allah. It's why when Martin Luther King said, "the universe bends towards justice", he meant something like this and not simply nice-sounding language. As an Islamic Platonist, I wholeheartedly concur. For Plato, mathematics is an aspect of necessity. In ancient Greek religion, this would be Ananke, commonly described as holding a spindle in order to weave the fabric of reality. Physics would then be the art of physical necessity, and physicists are driven to find the irreducible minima of physical reality and the best way to describe it. Since Einstein, this has been held to be geometric. But now I'm not so sure. Einstein himself regarded the geodesic equation in General Relativity as a unification of gravity and inertia, rather than a geometric phenomenon. I'd also like to point out that the philosophical position in direct opposition to mathematical Platonism is mathematical nominalism. This states that mathematical entities are not real and only name things. For example, the number two is an abbreviation that describes two things, for instance, two bowls, two chairs or two trees. I think it pays to name philosophical concepts in the same way it pays to name things in physics, especially when they're common enough concepts, because they are common to all.
Feynman thought so when he retired his idiosyncratic notation for tan - a large T with the overhanging bar stretched over the argument. 50. ... Finally, I think it's worth adding that the mathematical forms in Plato's philosophy are impressed upon the substance of the world. In a sense, they can be described as part of natural law. And although they are real, their reality should be distinguished from the matter of the world. How these two differing ontological categories actually interact is a puzzle, rather like the mind-body distinction in Cartesian philosophy. Except of course, in the latter both mind acts upon body and body acts upon mind; whilst in the former, the action is only one way - mathematical form is the acting agent and matter the substance that allows itself to be acted upon by mathematical form. Presumably this is why Aristotle conceptualised his notion of force in the manner he did, that is, without conceiving that there could be a reaction (back-reaction). So in a sense, we can say necessity *forces* matter to act in the way that it does.
And as a final punchline, I'd say that freedom cannot obviously be an aspect of necessity - it is antithetical to it, that is, it is its opposite. It is an aspect of the Good, as freedom is a good (as is necessity!) And it manifests itself in the free will of human beings (and animals - and less obviously, plants). This is one way of resolving the paradox of human free will arising from a deterministic world. In fact, I'd say what we call freedom is a dialectic between pure freedom and pure necessity. Pure freedom doesn't manifest itself in this world. It's what Heraclitus would have called the unity of opposites. Aristotle refers to it too, and this is why he says that all philosophers (preceding him) would say that the roots of Being are contraries, his name for these unities. It's also why *sublation* figures so prominently in Hegel's philosophy; it's his name for the unity of opposites. And then of course Marx borrowed the concept for his material dialectic - he cut out the spiritual. It's why he's said to have turned Hegel upside down. But what else is new? Every modern philosopher has been busy doing the same: Kant, Schopenhauer, Freud, Jung & Nietzsche and more recently, Dawkins and modern physicists (or as I like to call them, neo-Epicureans. After all, one of the first atheist philosophies was that of Democritus, adumbrated by Epicurus, valorised by Lucretius and revived in the renaissance and evangelised in the modern era. Although none of the earlier philosophers are as thoroughgoing materialists as those of the modern era. They still believed in some kind of spiritual/divine reality). And OK, really finally - the One of Pythagoreanism can be identified with the Dao of Daoism, and the Two of Pythagoreanism, aka the unity of opposites, can be identified with the taijitu, the Yin-Yang symbol in Daoism. The same one that Bohr put on his coat of arms when he was knighted, with the motto Contraria Sunt Complementa - contraries are complementary.
Personally, I find it eye-opening that two different philosophical traditions have come up with essentially the same philosophy of reality. This is real metaphysics and not the sad little strawman that modern atheists are busy knocking down, again and again. (Sorry for the long post). 1. Thank you for sharing your ideas, Mozibur. :) 2. It's difficult for me to walk through ontological forests of the ancient Greeks with clear-cut edges and identities inhabiting the worlds, but at times amusing in its own way, as they indeed set the cultural discourse on rails, which has not much evolved since (surprisingly, considering relativity), so in a sense they present all the thoughts one may encounter in leisurely conversation, just expressed more fully. But even in the model you presented, that which is expressed by "free will", "freedom" (as another aspect of the Good) is then not per se necessarily what is meant by it. I.e. from a local unit perspective, it's not, "I'm free to do what I want!" but the Good's freedom (God's will), so to speak. As it then means the freedom is a dynamic characteristic in accord with the Good. (even by definition, can something closed and static in whatever space be free?) In that case, a unit's will (whether in accord with the Good or not) is only an appearance. So one necessarily ends up splitting wills and dichotomies of 'unreadiness' and 'readiness' to understand what freedom is really about. And that would be quite alright if it were understood that way (and not dogmatically pursued and propagated as the only way). The difficulty is - it isn't. And if a scheme doesn't work as it is already (meaning not more simple to digest and accessible to any discourse), why not check what our developments in physiology and otherwise tell us instead.
Which also of necessity will bring humility to anyone, yet the person will be more aligned with the modern language and structure of knowledge (as it's not by itself, at least in principle, more complex than the metaphysics those Greeks were weaving). Another comment is on the opposites. I haven't read enough to comment on what they thought. But Aristotle in the Ethics seems to stress and develop the idea of the mean. That is, neither plus nor minus. One has to find equilibrium. But it's not just moderation; rather, something tangible develops out of that discrimination of the mean, and that results in the ethos of a person. In other words, ethos is not given, not guaranteed just because one is born human, i.e. it can only be considered as potentiality. It must be worked out. That is completely in accord with the Buddha's Middle Way. I.e. it's not stupid moderation, the same normalization for all, but something that must be worked out individually given one's conditions. And that is profound. Considering relations, I find that part most revealing and deep, even though they dance around postulating ontologies. But one principle that Aristotle mentions is implicitly elucidating of his approach: that ontologies by themselves are *only instrumental* (but that may be my reading). The principle is: "Arguing towards the first principles, not from them." Basically, indicating that it is a theory, a model, adequate for an occasion (e.g. implanting an understanding of ethics in students, etc., whatever the situation might have been). But acknowledging that it's always approaching the unknown and must be done with diligence. Considering further developments. As if nothing new happened in the discourse. I think it's not so, it just doesn't seem to get such wide acclaim. E.g. Alfred Korzybski considered and integrated many confusing subjects, and I assume Whitehead, Wittgenstein and other philosophers of mathematics contributed to everyday language (albeit subtly).
Overall, thinking is just very-very conservative (whatever seems to be the case). It does not change w/o work. 51. This comment has been removed by the author. 52. I agree with Mr. Jonathan Camp12:13 PM, August 02, 2021 Math is a human language. And like any human language it does three things at once: it describes what is observed, it limits/defines the scope of observation, and it justifies observation. And lo and behold, that's what physics does: 1) experiments and observation, 2) computer simulations, and 3) math. Math is used both to analyse the data and to specify the design of a simulation. With math (a language!) it thus shortcuts the circuit of inquiry and justifies itself. Like any human language it is a tautological enterprise. In the transcription of Sabine's video it is summed up “…something is real because it's a good explanation for our observations.” Physics cannot and will never close the gap between the language-independent physical world we live by and the language-dependent inquiry we make. Sabine's double question in the video “are we made of math?/is math real?” leaves out the gap. 53. Math is real? No, and it is rather important to see the difference! It is a question of metabolism, throughput in calories. You can't "breathe fire" into an equation, except metaphorically. Music, in this sense, is more real than mathematics. Now, I am in awe of mathematics, a feeling perhaps akin to walking through a medieval cathedral, such integrated complexity and reminder of something beyond my understanding. But for all its high flight, there are strings attached. To serve in physical theory, mathematics must be grounded, be exactly referenced and replicable in 'real' object or process.
Thus, the kilogram is determined by an exactingly machined chunk of the platinum alloy Pt-10Ir, and the interval between tick and tock is defined as being equal to the time duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the unperturbed ground state of the caesium-133 atom. The International System of Units is an exacting and expensive endeavor that serves to secure the abstraction of mathematics to those things which breathe fire.
54. I'm reminded of an M&M Candies commercial. On Christmas Eve our two M&M Candies, Yellow & Red, walk into their living room and discover Santa Claus standing there. The Yellow one exclaims, He is real! Faints and falls over. Santa turns around and, upon seeing the M&M candies, exclaims, They are real! Faints and falls over. Am I to live in fear that someday I'll walk into my kitchen for a midnight snack and find a quadratic equation standing there?
1. Suppose a series of equations were identified that correctly explained the creation, appearance, motion, smell, taste, sound and digestion of the M&M. In such a circumstance what would the distinction be between the M&M and those equations? I suspect that you are caught up in an error by assuming that those equations wouldn't be taking you into account and therefore couldn't be real. I say this because your final sentence gives you, the observer, a prominent but false role. The M&M would exist as it does whether you found it in the middle of the night or not. Take yourself out of the picture completely and Dr. Hossenfelder's observation may seem clearer to you.
2. @Jonathan: You'll wander into the kitchen and unthinkingly open the fridge and gaze within, realising that you proved the assertion of 'no free will' correct because you didn't even know you were fridge-ward until its pallid interior light was upon you. As you prepare a snack, you'll muse that your brain made a pretty pantomime of choosing the ingredients.
As you munch, your jaw will seem a device mechanical, of angles and forces. You'll feel slightly disembodied as your skeleton and musculature, blood and gristle, array themselves in an elaborate series of mere pivots, beams, swivels and pipes. Several hours later, you'll rip the most satisfying wind and laugh, blowing the ghastly spectre of disembodied mathematics clean away.
55. A good explanation for our observations doesn't mean that it's real. Some of us have recently discussed the future possibility of AI cranial implants. It would be neat to have a chip installed in my brain that would turn me into a maths wizard. Or would it? Other than to impress my friends, what would I do with it? I already know all the maths that I need for my daily life. I can use a calculator, a spreadsheet, and program computers for almost anything else beyond that. But I want to get to my point about maths not being real here. Maths is real in the sense that it is a human-made language, and nothing more. But that's not my point. I like intelligent women! So do you, and don't tell me that you don't. But there is a difference between real intelligence and implants. I don't know about you, but I prefer real.
1. Think about something you believe is real and try to explain why you think it's real. Maybe then you'll understand why I say what I say.
2. Hi Jonathan (8) Yesterday I watched "Ghost in the Shell" in the 2017 film version. In the story, our conventional understanding of the term implantation is turned upside down. The question of the meaning of 'knowing one's own story' is also asked... The story draws the conclusion that the character of the heroine, without knowing the way in which she was formed, asserts herself over an illusion that has been given to her in order to make her controllable. I think the story is great and worth mentioning here. ^.^
3. Sabine, I have a variety of hammers which I consider to be real; I could send pictures.
The reason I believe them to be real is that when I accidentally hit my thumb (not so often these days) I feel pain due to the transfer of energy to soft tissue containing an abundance of nerve endings. While I have suffered considerable discomfort due to your postings on the subject of determinism, I have yet to be hit with an actual equation that causes pain. My point is, the criterion for distinguishing between what is real and any of its abstract representations is energy throughput in an appropriate metric. That is an observable condition. So, is math real? You posed the question and did not seem to come down firmly on the affirmative. I do not understand why you say what you say.
4. Dear Dr. Hossenfelder, Wow! Sabine Hossenfelder has asked me a question!! "Think about something you believe is real and try to explain why you think it's real. Maybe then you'll understand why I say what I say." The question may be rhetorical, but I feel an obligation to reply. No off-the-cuff comment here, only to be deleted later. This is my opportunity and I will not back down from it. Yes, Sabine, I have been paying attention. I accept your test. I would ask for your patience as I seldom have more than an hour when I post to your blog. I will compose my answer off-line and upload it later. You will not be disappointed.
5. "The reason I believe them to be real is that when I accidentally hit my thumb (not so often these days) I feel pain due to the transfer of energy to soft tissue containing an abundance of nerve endings." The pain is an observation (so is the visual input etc.) which is well explained by an object you call "hammer". Whether you have an equation for that is entirely irrelevant.
56. This comment has been removed by the author.
1. I deleted this comment because it was in the wrong place. I reposted it where it belongs.
57. One can have 1/3, which is a rational number with an infinite decimal expansion. One needs infinite precision to measure something as 1/3. The square root of two...
the same. Can one make something that is the square root of two? Yes: the diagonal of a square of side length one. Can one define something of length one? No, because immersed in the reals, one needs infinite precision. Nobody has a tool to measure with infinite precision. So the use of real numbers, or rational or complex number systems, as models is based upon the idealization that one has in principle infinite precision. But this is just an idealization or an approximation. There is also a metaphor in this. But all this works well, until now. If we say that on the table are two apples, we do not see the problem of the idealization. Two apples or three apples are totally different... or not? After all, an apple is also an idealization, because an apple is composed of molecules that can fly partially into the air, making the apple fuzzy, and mixing with the other apple... There is also a metaphor and idealization here. And still, we are able to perceive two apples rather than three apples with the maximum precision possible, which is +- 1. And if there is just half an apple? This is not an apple, one can say. If not, one can imagine marbles... which are much harder :). All this is to say that at least integer arithmetic is real; it is present coherently and undoubtedly in Nature. Humans (among other beings) can recognize it and its arithmetic. Can it be that some beings can undoubtedly recognize further mathematical structures? Can we perceive them as we can perceive arithmetic? It could be. The capacity of perception of mathematical structures in Nature should then be taken as a sufficient criterion for a real mathematical structure. Thus the truth must lie somewhere on the spectrum between "arithmetic is real" and "all mathematics is real".
58. "So when physicists say that stuff is real, they mean that a certain mathematical structure correctly describes observations."
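Comment 57's point about finite precision is easy to demonstrate in any floating-point system; a small Python illustration (added here for concreteness, not from the original thread):

```python
from fractions import Fraction
import math

# Exact rational arithmetic has no trouble with 1/3 ...
third = Fraction(1, 3)
assert third * 3 == 1

# ... but binary floating point only approximates such values,
# which is why familiar identities fail at the last bit:
assert 0.1 + 0.2 != 0.3

# And sqrt(2) is irrational, so no finite-precision value
# squares back to exactly 2:
assert math.sqrt(2) ** 2 != 2
```

Any measured or computed real number is a finite-precision stand-in for the ideal one, which is exactly the "idealization" the commenter describes.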
This clear description of what "real" means in science is what I find missing in virtually all outreach science communications, while I think this is critical if the purpose of this outreach is promoting understanding of science. Take for example the book "A Brief History Of Time" by Stephen Hawking. In chapter 1 it says basically the same: "a theory is just a model that exists only in our minds and does not have any other reality". But then in chapter 2 this seems to be contradicted by things like: "We must accept that time is not completely separate from and independent of space" and "the fact that space is curved". It is then left to the reader, mostly laypeople mind you, to remember that this is just a model that does not have any other reality. No wonder many laypeople get confused about science. But I get the impression it is worse than this. I get the impression that many physicists are also confused about this, and I wonder if Stephen Hawking was one of them?
59. Yes, Sabine's video/transcript is perfectly clear. Verbatim: - "We have no rationale for talking about the reality of mathematics that does not describe what we observe, therefore the mathematical universe hypothesis isn't scientific" - "So the idea that we're made of math is also not wrong but unscientific. You can believe it if you want. There's no evidence for or against it." - "Just because you have math for something doesn't mean it's real." BTW, I guess no hands-on researcher will take her/his model for anything else than scientific [i.e. language-dependent] reality. But I agree, Leon, that scientists (physicists included) tend to be sloppy in making/explaining the difference. If I'm not mistaken it was the (not so) late physicist Steven Weinberg who quipped that the universe speaks in numbers. I bet he meant that the objects he observed provided answers in terms of the numbers that his inquiry was based upon. Oh well. I guess our host SH reads this too.
Signora Hossenfelder: sing us a song about it. 😎
60. Sabine, I get your point about trying to explain why anything we think is real is actually real. René Descartes (1596–1650) famously said "I think, therefore I am". That is most likely the only thing anyone can ever be certain of.
1. Howard, Yes, that's right. But we arguably don't only use the word "real" for our own thoughts. Even if you're a solipsist, we organize our thoughts in other categories, talking about things and their properties and so on.
61. Dear all, If we take the perspective of our brains, there are (at least) two "realities". One is the "outside world" which we perceive through our senses (eyes, ears, nose, touch, skin and such). Where different senses confirm one another, we safely assume "something is out there". The belief is strengthened when we communicate with others who confirm that they have similar perceptions and convictions. I think that most people don't believe that this "world outside" is just cooked up by our brain, or by our act of observing something. That the moon is there, even when no one is looking. The interface of our senses might be a bit tricky at times. When a bomb explodes it creates waves in the air, but when there is no creature to perceive these waves as sound, then there was no sound, there were only waves. But these are details. There is a second "reality" in our brain, which only indirectly results from our senses. These are our emotional feelings and thoughts and so on. Mathematics is a result of our thoughts and in my view firmly belongs in this "second reality in our brains". Sometimes the mind plays tricks on us, and some people see or hear things which are not really there. But again, these are details. In our day-to-day living experience, where our attention lies, all the realities in our brain are combined into one.
It is sometimes a bit difficult to disentangle what is cooked up by ourselves, and what is safe to assume to be the direct result of something "out there". So we get all kinds of discussions about the nature of reality. Is God real, is math real. I think a "brain-based analysis" could bring these discussions on a somewhat firmer footing, or at least offer a scientific perspective.
62. If Euclid (...probably lived in the 3rd century B.C.) was still looking for plausible intuition for mathematical foundations and thus made an interdisciplinary connection that could be evaluated as right or wrong, in modern mathematics the question of right or wrong does not arise. Euclid's definitions are explicit, referring to extra-mathematical objects of "pure contemplation" such as points, lines, and surfaces. "A point is what has no width. A line is length without width. A surface is what has only length and width." When David Hilbert (1862–1943) axiomatized geometry again in the 20th century, he used only implicit definitions. The objects of geometry were still called "points" and "straight lines", but they were merely elements of sets not further explicated. Hilbert said that instead of points and straight lines one could always speak of tables and chairs without disturbing the purely logical relationship between these objects. But to what extent axiomatically based abstractions couple to real physical objects is another matter altogether. Mathematics does not create (new) phenomenology, even if theoretical physicists like to believe this within the framework of the standard models of cosmology and particle physics.
63. This comment has been removed by the author.
64. Sabine Hossenfelder 12:25 AM, August 04, 2021 I see. Physicists don't currently need the irrationals but do need the rationals.
Decoherent Histories Quantum Mechanics Generalizations in Fixed Spacetimes
Quantum mechanics is arguably the most successful framework for prediction in the history of physics. Why seek to generalize it? There are at least two reasons. Alternative theories that are close to quantum mechanics, but not quantum mechanics itself, help motivate and analyze the experiments that test it. Beyond that there is cosmology. An application of quantum mechanics to the whole universe is an extension vastly beyond the theory's domain of successes. Textbook quantum theory must be generalized to apply to a closed system like the universe. Decoherent histories as formulated in [xxxx] is an adequate framework for this when gross quantum fluctuations in the geometry of spacetime can be neglected. But when they can't be neglected, as in the very early universe, a further generalization is needed [yyy]. This page is one of two devoted to generalizing the decoherent histories quantum theory discussed on [DH]. On this page quantum spacetime is neglected; it is dealt with in the papers on the other page [yyy].
Generalized Quantum Theory [some chapters of Les Houches]
Time Symmetry and Asymmetry in Quantum Mechanics and Quantum Cosmology [96] (with Murray Gell-Mann)
We investigate a generalized quantum mechanics for cosmology that utilizes both an initial and a final density matrix to give a time-neutral formulation without a fundamental arrow of time. Time asymmetries can arise for particular universes from differences between their initial and final conditions. Theories for both would be a goal of quantum cosmology. A special initial condition and a final condition of indifference would be sufficient to explain the observed time asymmetries of the universe.
In this essay we ask under what circumstances a completely time-symmetric universe, with T-symmetric initial and final conditions, could be consistent with the time asymmetries of the limited domain of our experience.
Unitarity and Causality in Generalized Quantum Mechanics for Non-Chronal Spacetimes [101]
Spacetime must be foliable by spacelike surfaces for quantum mechanics to be formulated in terms of a unitarily evolving state vector defined on spacelike surfaces. When a spacetime cannot be foliated by spacelike surfaces, as in the case of spacetimes with closed timelike curves, a more general formulation of quantum mechanics is required. In such generalizations the transition matrix between regions of spacetime where states can be defined may be non-unitary. This paper describes a generalized quantum mechanics that can be applied to such situations. The usual notion of a state on a spacelike surface is lost in this generalization. The generalization is acausal in the sense that the existence of non-chronal regions of spacetime in the future can affect the probabilities of alternatives today.
Spacetime Alternatives Extended over Time
Textbook quantum mechanics gives probabilities for alternatives at a definite moment of time. But this is an idealization. Measurements extend over some time interval. And in quantum gravity, where spacetime geometry is not fixed but varying quantum mechanically, there is no precise meaning to 'at a moment of time'. For these reasons textbook quantum mechanics needs to be generalized to give probabilities for alternatives that extend over time. The papers below show how to do that.
Spacetime Coarse Grainings in Non-Relativistic Quantum Mechanics [95]
A sum-over-histories generalization of non-relativistic quantum mechanics is constructed from the following ingredients: i) a set of fine-grained histories that are Feynman paths, ii) coarse grainings defined as arbitrary partitions of these paths into classes, not necessarily those defined by alternatives at definite moments of time, and iii) a decoherence functional defined by sums over the histories in coarse-grained classes. This is used to analyze the decoherence and probabilities of simple spacetime alternatives. An example is the set consisting of two histories defined by whether a particle crosses a fixed spacetime region sometime, or never.
Nearly Instantaneous Alternatives in Quantum Mechanics [109] (with R. Micanek)
This paper shows how alternatives extended over time reduce to those at one time as the time over which they are extended gets smaller and smaller. Decoherence becomes automatic, and the probabilities approach the usual ones at a single moment of time.
Representations of Spacetime Alternatives and their Classical Limit [133] (with G. Bosse)
Like all classical quantities, spacetime alternatives that extend over time can be represented by different quantum operators. For example, operators representing a particular value of the time average of a dynamical variable can be constructed in two ways: first, as the projection onto the value of the time-averaged Heisenberg-picture operator for the dynamical variable; second, as the class operator defined by a sum over those histories of the dynamical variable that have the specified time-averaged value. We show both by explicit example and by general argument that the predictions of these different representations agree in the classical limit, and that sets of histories represented by them decohere in that limit.
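The decoherence functional these abstracts refer to has, in the sum-over-histories formulation of Gell-Mann and Hartle, a standard form; the notation below is supplied here for orientation and is not quoted from this page:

```latex
% Decoherence functional for two coarse-grained histories \alpha, \alpha',
% with class operators C_\alpha (chains of projections, or restricted
% path integrals over the paths in the class c_\alpha):
D(\alpha', \alpha) = \operatorname{Tr}\!\left[ C_{\alpha'}\, \rho\, C_{\alpha}^{\dagger} \right].
% A set of histories decoheres when the off-diagonal elements are
% negligible; the diagonal elements are then the probabilities:
p(\alpha) = D(\alpha, \alpha).
```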
Words to avoid in your investment communications with regular folks - Susan Weiner Investment Writing
Big words make your readers work harder to grasp your message. This is particularly true of jargon, such as “duration,” unless your piece is strictly for investment professionals. Below are some words to avoid when communicating with regular folks. Most of them are financial jargon. Others—like “mitigate”—are unnecessarily long or confusing. Replace jargon and long words with shorter, less technical words that pack more punch. They also make it easier for readers to absorb your message.
• Accommodative monetary policy
• Active share
• Alpha
• Barbell
• Basis points
• Beat, when used as a noun to refer to beating analyst forecasts
• Bet
• Comp
• Compute as a noun or adjective
• Conditional value at risk (CVAR)
• Constructive, as in “we are constructive on small-cap stocks”
• Contango
• Convexity
• Correction
• Dead money
• De-gross
• Disseminate
• Downside deviation
• Drawdown
• Duration
• Ecosystem
• Efficient frontier
• Ex-, as in “ex-Japan”
• Ex-growth
• Expected return
• Exposure
• Externality
• Fiscal
• Flight to quality
• Growth wall
• Headwinds/tailwinds
• Inverted yield curve
• Kurtosis and other statistical terms (copula, eigenvectors, semi-deviation, subadditivity, etc.)
• Leverage
• Levered names
• Liquidity
• Long/short
• Mean-variance optimization
• Mitigate
• Modern Portfolio Theory
• Monte Carlo analysis
• Orthogonal, which apparently is used to mean “uncorrelated,” although that doesn’t appear in the dictionary definition of the word
• Pricing power
• Rerate
• Reversion to the mean
• Risk assets
• Risk on/risk off
• Risk premium
• Risks to the upside
• Runway, when not referring to an airport runway
• Secular
• Sharpe ratio
• Size up
• Spanning a broad risk/return spectrum
• Spread product—A Google Alert on “spread product” yielded results related to margarine and Vegemite.
• Spend (as a noun)
• Stack ranking
• Tranche
• Universal asset owner
• Use case
• Value at risk (VAR)
• Value traps
On a related note, don’t use acronyms without first defining them. This means words such as AUM, CAGR, CAPM, CLO, DOL, EBITDA, EPS, LIBOR, MBS, MLP, TTM, YOY, and YTD. It’s often best to avoid acronyms completely. I’ve discussed this in “How to capitalize financial acronyms.”
If you’re writing an educational piece for regular folks
It’s okay, even admirable, to educate your regular Jane or Joe investors about complex financial concepts. When you write to explain technical vocabulary, make sure you:
• Define your terms using plain language. You can introduce the technical terms and then define them using the techniques in “Plain language: Let’s get parenthetical.”
• Mention the WIIFM (what’s in it for me) so readers know why they should slog through the explanation.
• Explain the benefits of the complex financial concept for regular folks. For example, don’t use a multi-billion dollar pension fund as your key example unless your readers are participants in a similar plan.
• Use analogies, where possible, because they’ll stick in your readers’ minds better than dry explanations.
Must you bore sophisticates?
You may worry that your content will bore sophisticated readers if you go easy on technical vocabulary. No, you won’t. Not if you do it right. Read “How to make one quarterly letter fit clients at different levels of sophistication” for my take on how to keep everybody happy.
If you’re communicating with other investment professionals
Some jargon is okay if your communications go exclusively to other investment professionals. In that context, jargon can act as a kind of shorthand.
For example, “basis points” can be used in a way that’s more precise than “percent.” “Spread product” is more concise than the definition of “spread product.” However, if you’re targeting institutional investors, don’t assume that they’re all sophisticated consumers of investment content. An investment committee, for example, can include less sophisticated members. Still, there’s no need to make your professional communications overly complex or wordy.
Your suggestions for words to avoid?
If you can suggest words to avoid in your investment communications, please share them in an email or social media post to me.
Updates: I updated this on April 6, 2017, and Dec. 20, 2019 to add words suggested by my readers. I also updated on Dec. 16 and Dec. 23, 2019; Jan. 2, 2020; Jan. 29, 2021; July 27, 2023; March 3, 2024; April 24, 2024. I appreciate the support of my readers. Thank you!
Image courtesy of Sira Anamwong at FreeDigitalPhotos.net
10 replies
Matt Underwood says: Clients do not do well with statistical terms such as kurtosis, distribution, correlation, etc.
Susan Weiner says: Thank you for adding those terms–especially kurtosis. I must update my list.
Theresa Hamacher says: I’d add “correction.” Somehow, I don’t think that most people would agree that a decline in prices is a “correction.” They’d say that market values declined, fell or dipped, but not that they “corrected.”
Susan Weiner says: Great point–I hadn’t thought about that.
Susan Weiner, CFA says: Here’s another to drop: “ex,” as in “Asia ex-Japan.” A regular person may ask, “Does that mean that Japan is no longer part of Asia?” Well, in a sense it does, but why not say “Asia, except for Japan” when communicating with the general public?
Dan Sondhelm says: Great article and feedback from others. I teach portfolio managers not to say anything to do with a “bet” or “exposure.” Let’s leave bets to blackjack, and while music exposure for a child is positive, exposure to chicken pox is not. Say “we are finding opportunity” instead.
Susan Weiner, CFA says: Thank you for your additions to the list!
Suzanne Wagner says: I hate the overuse of “leverage,” when “make the most of” or “take advantage of” works just fine.
Harriett Magee says: Great list, Susan! And I agree, Suzanne, that it’s best to avoid leverage with regular readers. How about “lever” as a verb–yes, investment types love it, but it’s not even a real verb unless you’re talking about moving rocks. My experience has been that the more junior the writer, the greater the tendency to load on the jargon and technical terms. “Seasoned” investment professionals immediately make me think of steaks. And we’re not fooling anyone when we call performance “suboptimal.” It’s just plain poor. Susan’s comment on “spread product” was hilarious. When I lived in Australia, people ate Vegemite sandwiches with their morning coffee, always with skinny white bread. More nutritious than a jelly doughnut, right?
Susan Weiner says: Thank you for adding to the list, Suzanne and Harriett!
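The avoid-list lends itself to a mechanical first pass over a draft; a minimal Python sketch (the function name and the small sample of terms below are mine, not from the post):

```python
# A minimal jargon flagger. JARGON is a small, illustrative sample
# of the post's avoid-list, not the full list.
JARGON = {"basis points", "duration", "drawdown", "kurtosis", "mitigate"}

def flag_jargon(text: str) -> list[str]:
    """Return the jargon terms that appear in `text` (case-insensitive)."""
    lowered = text.lower()
    return sorted(term for term in JARGON if term in lowered)

print(flag_jargon("We mitigate drawdown risk by trimming 50 basis points."))
# → ['basis points', 'drawdown', 'mitigate']
```

A real checker would match word boundaries and inflected forms ("mitigated", "drawdowns"); this substring version only conveys the idea.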
BAS 1.7.3
• reverted to code using SETLENGTH (issue #82) to address stack imbalance issues seen in interactive checks and not flagged in R CMD check.
BAS 1.7.2
• added method="AMCMC" for bas.lm to use adaptive independent Metropolis-Hastings for sampling models. With the option importance.sampling = TRUE the adaptive independent proposal can be used for importance sampling, with improved estimation of model probabilities and inclusion probabilities based on the Horvitz-Thompson / Hajek estimator.
• added unit tests for link functions implemented in family.c
• fixed (issue #81) removed legacy definitions of ‘PI’ and ‘Free’ and replaced them with ‘M_PI’ and ‘R_Free’ to comply with ‘STRICT_R_HEADERS’ and prevent package removal on 9/23/2024
• fixed (issue #82) avoid SETLENGTH as a non-API function when truncating vectors
BAS 1.7.1
Minor Improvements and Fixes
• Initialized vector se via memset and disp = 1.0 in fit_glm.c (issue #72)
• Initialized variables in hyp1f1.c flagged by testthat (issue #75)
• Removed models that have zero prior probability in bas.lm and bas.glm (issue #74)
• Fixed error in bayesglm.fit to check arguments x or y for correct type before calling C, and added a unit test (issue #67)
BAS 1.6.6
New Features
• Added support for Gamma regression for bas.glm, with unit tests and an example (code contributed by @betsyberrson)
• added an error if the supplied initial model for the bas.lm sampling methods “MCMC” and “MCMC+BAS” has prior probability zero.
• fixed printing problems identified via checks
• fixed an indexing error that caused bas.lm with method = "MCMC+BAS" to crash with a segmentation fault if bestmodel is not NULL or the null model. GitHub issue #69
• fixed error in predict.bas with se.fit=TRUE if there is only one predictor.
GitHub issue #68 reported by @AleCarminati; added a unit test to test-predict.R
• Fixed error in coef for bas.glm objects when using a betaprior of class IC, including AIC and BIC. GitHub issue #65
• Fixed error when using the Jeffreys prior in bas.glm with the include.always option, and added a unit test in test-bas-glm.R. GitHub issue #61
• Fixed error when extracting coefficients from the median probability model when a formula is passed as an object rather than a literal, and added a unit test to test-coefficients.R. GitHub issues #39 and #56
BAS 1.6.4
• skipped a test on CRAN that fails to show a warning in the non-full-rank case when pivot=FALSE for bas.lm; the default uses pivoting, and the documentation indicates that pivot=FALSE should only be used in the full-rank case, so users should not encounter this issue in practice. Users will continue to see a warning if NA’s are returned, but should be aware that not all platforms may produce a warning (such as M1mac). GitHub issue #62
BAS 1.6.3
• Added checks and unit tests to see if modelprior is of class ‘prior’, resolving GitHub issue #57
• Removed polevl.c, psi.c and gamma.c from Cephes as they are no longer used after switching to R’s internal functions
BAS 1.6.2
• replaced deprecated DOUBLE_EPS with DBL_EPSILON for the R 4.2.0 release (in two places) so the package is restored on CRAN
BAS 1.6.1
• replaced deprecated DOUBLE_EPS with DBL_EPSILON for the R 4.2.0 release
• fixed warnings from CRAN checks under R-devel (use of | and if with class)
• added a function trCCH that uses integration to compute the normalizing constant in the Truncated Compound Confluent Hypergeometric distribution; it provides the correct normalizing constant for Gordy (1998) and is more stable for large values than the current phi1 function. This is now used in the TCCH prior for bas.glm.
• Rewrote the phi1 function to use direct numerical integration (phi1_int) when the Wald statistic is large, so that marginal likelihoods are not NA, as suggested by Daniel Heeman and Alexander Ly (see below). This should improve the stability of estimates of Bayes factors and model probabilities from bas.glm that used the HyperTwo function, including the coefficient priors hyper.g.n(), robust(), and intrinsic(). Added additional unit tests.
• Added thin as an option for bas.glm
• added unit tests and examples to show the connections between the special functions trCCH, phi1, 1F1 and 2F1
Bug Fixes
• added an internal function phi1_int for when the original HyperTwo function returns NA. Issue #55. See more details above.
• corrected the shrinkage estimate under the CCH prior, which did not include terms involving the beta function.
BAS 1.6.0
• updated FORTRAN code to be compliant with USE_FC_LEN_T for character strings
Bug Fixes
• fixed warning in src code for log_laplace_F21, which had an uninitialized variable leading to NaN being returned from the R function hypergeometric2F1
BAS 1.5.5
• Fixed WARNING under fedora-clang-devel. Added the climate.dat file to the package for building the vignette so that the package does not violate CRAN’s policy on accessing internet resources, and is more permanent if the file location/url changes.
• Fixed testthat errors under Solaris. The default setting for force.heredity is set back to FALSE in bas.lm and bas.glm so that methods work on all platforms. For Solaris, users who wish to impose the force.heredity constraint may use the post-processing function.
BAS 1.5.4
• Modified prior probabilities to adjust for the number of variables always included when using include.always. Pull request #41 by Don van de Bergh.
Issue #40

Bug Fixes

• Fixed valgrind error in src/ZS_approx_null_np.c for an invalid write noted in CRAN checks
• fixed function declaration type-mismatch and argument errors identified by LTO noted in CRAN checks
• Added contrast=NULL argument to bas.lm and bas.glm so that non-NULL contrasts do not trigger a warning in model.matrix as of R 3.6.0. Bug #44
• Added check for sample size equal to zero due to subsetting or missing data. Bug #37
• Put ORCID in quotes in author list (per R-dev changes)

BAS 1.5.3

Bug Fixes

Fixed errors identified in CRAN checks https://cran.r-project.org/web/checks/check_results_BAS.html

• initialize R2_m = 0.0 in lm_mcmcbas.c (led to NA's with clang on Debian and Fedora)
• switch to a default of pivot = TRUE in bas.lm, adding tol as an argument to control the tolerance in cholregpivot for improved stability across platforms with singular or nearly singular designs.
• valgrind messages: "Conditional jump or move depends on uninitialized value(s)". Initialize vectors allocated via R_alloc in lm_deterministic.c and glm_deterministic.c.

BAS 1.5.2

• Included an option pivot=TRUE in bas.lm to fit the models using a pivoted Cholesky decomposition to allow models that are rank-deficient. Enhancement #24 and Bug #21. Currently coefficients that are not estimable are set to zero so that predict and other methods will work as before. The vector rank is added to the output (see documentation for bas.lm) and the degrees of freedom for methods that assume a uniform prior for obtaining estimates (AIC and BIC) are adjusted to use rank rather than size.
• Added option force.heredity=TRUE to force lower order terms to be included if higher order terms are present (hierarchical constraint) for method='MCMC' and method='BAS' with bas.lm and bas.glm. Updated vignette to illustrate. Enhancement #19. Checks to see if parents are included using include.always now pass. Issue #26.
• Added option drop.always.included to image.bas so that variables that are always included may be excluded from the image. By default all are shown. Enhancement #23
• Added options drop.always.included and subset to plot.bas so that variables that are always included may be excluded from the plot showing the marginal posterior inclusion probabilities (which=4). By default all are shown. Enhancement #23
• update fitted.bas to use predict so that the code covers both GLM and LM cases with type='link' or type='response'
• Updates to package for CII Best Practices Badge certification
• Added Code Coverage support and more extensive tests using test_that.
• fixed issue #36: errors in prior = "ZS-null" when R2 is not finite or out of range due to the model being not full rank. Change in the gexpectations function in file bayesreg.c
• fixed issue #35 for method="MCMC+BAS" in bas.glm in glm_mcmcbas.c when no values are provided for MCMC.iterations or n.models and defaults are used. Added unit test in test-bas-glm.R
• fixed issue #34 for bas.glm where variables in include.always had marginal inclusion probabilities that were incorrect. Added unit test in test-bas-glm.R
• fixed issue #33 for Jeffreys prior where marginal inclusion probabilities were not renormalized after dropping the intercept model
• fixed issue #32 to allow vectorization for the phi1 function in R/cch.R and added a unit test to "tests/testthat/test-special-functions.R"
• fixed issue #31 to coerce g to be a REAL for g.prior and IC.prior in bas.glm; added unit-test "tests/testthat/test-bas-glm.R"
• fixed issue #30: added n as a hyper-parameter if NULL and coerced it to be a REAL for the intrinsic prior in bas.glm; added unit-test
• fixed issue #29: added n as a hyper-parameter if NULL and coerced it to be a REAL for the beta.prime prior in bas.glm; added unit-test
• fixed issue #28: fixed length of MCMC estimates of marginal inclusion probabilities; added unit-test
• fixed issue #27 where expected shrinkage with the JZS prior was greater than 1.
Added unit test.
• fixed output of include.always to always include the intercept (issue #26) so that drop.always.included = TRUE drops the intercept and any other variables that are forced in. include.always and force.heredity=TRUE can now be used together with method="BAS".
• added warning if marginal likelihoods/posterior probabilities are NA with the default model fitting method, with the suggestion that models be rerun with pivot = TRUE. This uses a modified Cholesky decomposition with pivoting so that if the model is rank deficient or nearly singular the dimensionality is reduced. Bug #21.
• corrected count for the first model with method='MCMC', which led to a potential model with 0 probability and errors in image.
• coerced predicted values to be a vector under BMA (was a matrix)
• fixed size when using method=deterministic in bas.glm (was not updated)
• fixed problem in confint with horizontal=TRUE when intervals are a point mass at zero.
• suppress warning when sampling probabilities are 1 or 0 and the number of models is decremented. Issue #25
• changed force.heredity.bas to re-normalize the prior probabilities rather than to use a new prior probability based on heredity constraints. For the future, add new priors for models based on heredity. See comment on issue #26.
• Changed License to GPL 3.0

BAS 1.5.1 June 6, 2018

• added S3 method variable.names to extract variable names in the highest probability model, median probability model, and best predictive model for objects created by predict.
• Fixed incorrect documentation in predict.basglm, which stated that type = "link" was the default for prediction. Issue #18

BAS 1.5.0 May 2, 2018

• add na.action for handling NA's for predict methods. Issue #10
• added include.always as a new argument to bas.lm. This allows a formula to specify which terms should always be included in all models. By default the intercept is always included.
• added a section to the vignette to illustrate weighted regression and the force.heredity.bas function to group levels of a factor so that they enter or leave the model together.
• fixed problem if there is only one model for the image function; github issue #11
• fixed error in bas.lm with non-equal weights where R2 was incorrect. Issue #17

Deprecated

• deprecated the predict argument in predict.bas, predict.basglm and internal functions as it is not utilized

BAS 1.4.9 March 24, 2018

• fixed bug in confint.coef.bas when parm is a character string
• added parentheses in betafamily.c line 382 as indicated in CRAN check for R devel
• added option to determine k for Bayes.outlier if the prior probability of no outliers is provided

BAS 1.4.8 March 10, 2018

• fixed issue with scoping in eval of data in predict.bas if dataname is defined in the local env.
• fixed issue #10 in github (predict for estimator='BPM' failed if there were NA's in the X data); NA's are now deleted before finding the closest model.
• fixed bug in 'JZS' prior - merged pull request #12 from vandenman/master
• fixed bug in bas.glm when the default betaprior (CCH) is used and inputs were INTEGER instead of REAL
• removed warning with use of 'ZS-null' for backwards compatibility

Features added

• updated print.bas to reflect changes in print.lm
• Added Bayes.outlier function to calculate posterior probabilities of outliers using the method from Chaloner & Brant for linear models.

BAS 1.4.7 October 22, 2017

• Added new method for bas.lm to obtain marginal likelihoods with the Zellner-Siow prior for prior = 'JZS' using QUADPACK routines for numerical integration. The optional hyper parameter alpha may now be used to adjust the scaling of the ZS prior where g ~ G(1/2, alpha*n/2) as in the BayesFactor package of Morey, with a default of alpha=1 corresponding to the ZS prior used in Liang et al (2008). This also uses more stable evaluations of log(1 + x) to prevent underflow/overflow.
• The prior ZS-full for bas.lm is planned to be deprecated.
• replaced math functions to use portable C code from Rmath and consolidated header files

BAS 1.4.6 May 24, 2017

• Added force.heredity.interaction function to allow higher order interactions to be included only if their "parents" (lower order interactions or main effects) were included. Currently tested with two-way interactions. This is implemented post-sampling; future updates will add this at the sampling stage, which will reduce memory usage and sampling times by reducing the number of models under consideration.
• Fixed unprotected ANS in C code in glm_sampleworep.c and sampleworep.c after call to PutRNGstate, and possible stack imbalance in glm_mcmc.
• Fixed problem with predict for estimator=BPM when newdata was one row

BAS 1.4.5 March 28, 2017

• Fixed non-conformable error with predict when new data was from a dataframe with one row.
• Fixed problem with missing weights for prediction using the median probability model with no new data.

BAS 1.4.4 March 14, 2017

• Extract coefficient summaries, credible intervals and plots for the HPM and MPM in addition to the default BMA by adding a new estimator argument to the coef function. The new n.models argument to coef provides summaries based on the top n.models highest probability models to reduce computation time. 'n.models = 1' is equivalent to the highest probability model.
• use of newdata that is a vector is now deprecated for predict.bas; newdata must be a dataframe or missing, in which case fitted values based on the dataframe used in fitting are used
• factor levels are handled as in lm or glm for prediction when there may be only one level of a factor in the newdata
• fixed issue for prediction when newdata has just one row
• fixed missing id in plot.bas for which=3

BAS 1.4.3 February 18, 2017

• Register symbols for foreign function calls
• bin2int is now deprecated
• fixed default MCMC.iterations in bas.lm to agree with documentation
• updated vignette to include more examples, outlier detection, and finding the best predictive probability model
• set a flag for MCMC sampling, renormalize, that selects whether the Monte Carlo frequencies are used to estimate posterior model and marginal inclusion probabilities (default renormalize = FALSE) or whether marginal likelihoods times prior probabilities, renormalized to sum to 1, are used (the latter is the only option for the other methods); new slots for probne0.MCMC, probne0.RN, postprobs.RN and postprobs.MCMC.

Bug fixes

• fixed problem with prior.bic, robust, and hyper.g.n where the default had a missing n that was not set in hyperparameters
• fixed error in predict and plot for GLMs when family is provided as a function

BAS 1.4.2 October 12, 2016

• added df to the object returned by bas.glm to simplify the coefficients function.

Bug Fixes

• corrected expected value of shrinkage for intrinsic, hyper-g/n and TCCH priors for glms

BAS 1.4.1 September 17, 2016

Bug Fixes

• the modification in 1.4.0 to automatically handle NA's led to errors if the response was transformed as part of the formula; this is fixed
• added subset argument to bas.lm and bas.glm

BAS 1.4.0 August 25, 2016

New features

• added na.action for bas.lm and bas.glm to omit missing data.
• new function to plot credible intervals created by confint.pred.bas or confint.coef.bas. See the help files for an example or the vignette.
• added se.fit option in predict.basglm.
• Added testBF as a betaprior option for bas.glm to implement Bayes Factors based on the likelihood ratio statistic's distribution for GLMs.
• DOI for this version is http://dx.doi.org/10.5281/zenodo.60948

BAS 1.3.0 July 15, 2016

New Features

A vignette has been added at long last! This illustrates several of the new features in BAS such as

• new functions for computing credible intervals for fitted and predicted values: confint.pred.bas()
• new function for adding credible intervals for coefficients: confint.coef.bas()
• added posterior standard deviations for fitted values and predicted values in predict.bas()
• deprecated use of type to specify the estimator in fitted.bas and replaced it with estimator so that predict() and fitted() are compatible with other S3 methods.
• updated functions to be of class bas to avoid NAMESPACE conflicts with other libraries

BAS 1.2.2 June 29, 2016

New Features

• added option to find the "Best Predictive Model" or "BPM" for fitted.bas or predict.bas
• added local Empirical Bayes prior and fixed g-prior for bas.glm
• added diagnostic() function for checking convergence of bas objects created with method = "MCMC"
• added truncated power prior as in Yang, Wainwright & Jordan (2016)

Minor Changes

• bug fix in plot.bas that appears with Sweave
• bug fix in coef.bma when there is just one predictor

BAS 1.2.1 April 16, 2016

• bug fix for method="MCMC" with truncated prior distributions where the MH ratio was incorrect, allowing models with 0 probability to be sampled.
• fixed error in Zellner-Siow prior (ZS-null) when n=p+1 or the saturated model, where the log marginal likelihood should be 0

BAS 1.2.0 April 11, 2016

• removed unsafe code where Rbestmarg (input) was being overwritten in .Call, which would end up in corruption of the constant pool of the byte-code (Thanks to Tomas Kalibera for catching this!)
• fixed issue with dimensions for use with Simple Linear Regression

BAS 1.1.0 March 31, 2016

New Features

• added truncated Beta-Binomial prior and truncated Poisson (works only with MCMC currently)
• improved code for finding fitted values under the Median Probability Model
• deprecated method = "AMCMC" and issue a warning message

Minor Changes

• Changed S3 method for plot and image to use class bas rather than bma to avoid name conflicts with other packages

BAS 1.09

- added weights for linear models
- switched LINPACK calls in bayesreg to LAPACK
- fixed bug in intercept calculation for glms
- fixed inclusion probabilities to be a vector in the global EB methods for linear models

BAS 1.08

- added intrinsic prior for GLMs
- fixed problems for linear models for p > n where R2 was not correct

BAS 1.07

- added phi1 function from Gordy (1998): the confluent hypergeometric function of two variables, also known as one of the Horn hypergeometric functions or Humbert's phi1
- added Jeffreys prior on g
- added the general tCCH prior and special cases of the hyper-g/n.
- TODO: check shrinkage functions for all

BAS 1.06

- new improved Laplace approximation for hypergeometric1F1
- added class basglm for predict
- predict function now handles glm output
- added dataframe option for newdata in predict.bas and predict.basglm
- renamed coefficients in output to be 'mle' in bas.lm to be consistent across lm and glm versions so that predict methods can handle both cases.
(This may lead to errors in other external code that expects object$ols or object$coefficients)
- fixed bug with initprobs that did not include an intercept for bas.lm

BAS 1.05

- added thinning option for MCMC method for bas.lm
- returned posterior expected shrinkage for bas.glm
- added option for initprobs = "marg-eplogp" for using marginal SLR models to create starting probabilities or order variables, especially for the p > n case
- added standalone function for hypergeometric1F1 using the Cephes library and a Laplace approximation
- Added class "BAS" so that predict and fitted functions (S3 methods) are not masked by functions in the BVS package; TODO: modify the rest of the S3 methods.

BAS 1.04

- added bas.glm for model averaging/selection using mixtures of g-priors for GLMs. Currently limited to Logistic Regression
- added Poisson family for glm.fit

BAS 1.0

- cleaned up MCMC method code

BAS 0.93

- removed internal print statements in bayesglm.c
- Bug fixes in AMCMC algorithm

BAS 0.92

- fixed glm-fit.R so that the hyper parameter for BIC is numeric

BAS 0.91

- added new AMCMC algorithm

BAS 0.91

- bug fix in bayes.glm

BAS 0.90

- added C routines for fitting glms

BAS 0.85

- fixed problem with duplicate models if n.models was > 2^(p-1) by restricting n.models
- save original X as part of object so that fitted.bma gives the correct fitted values (broken in version 0.80)

BAS 0.80

- Added `hypergeometric2F1` function that is callable by R
- centered X's in bas.lm so that the intercept has the correct shrinkage
- changed predict.bma to center newdata using the mean(X)
- Added new Adaptive MCMC option (method = "AMCMC") (this is not stable at this point)

BAS 0.7

- Allowed pruning of model tree to eliminate rejected models

BAS 0.6

- Added MCMC option to create starting values for BAS (`method = "MCMC+BAS"`)

BAS 0.5

- Cleaned up all .Call routines so that all objects are duplicated or allocated within code

BAS 0.45

- fixed ch2inv that prevented building on Windows in bayes glm_fit

BAS 0.4
- fixed FORTRAN calls to use F77_NAME macro
- changed allocation of objects for .Call to prevent some objects from being overwritten.

BAS 0.3

- fixed EB.global function to include prior probabilities on models
- fixed update function

BAS 0.2

- fixed predict.bma to allow newdata to be a matrix or vector with the column of ones for the intercept optionally included.
- fixed help file for predict
- added modelprior argument to bas.lm so that users may now use the beta-binomial prior distribution on model size in addition to the default uniform distribution
- added functions uniform(), beta.binomial() and Bernoulli() to create model prior objects
- added a vector of user specified initial probabilities as an option for the argument initprobs in bas.lm and removed the separate argument user.prob
OpenStax College Physics for AP® Courses, Chapter 11, Problem 15 (Problems & Exercises)

The greatest ocean depths on the Earth are found in the Marianas Trench near the Philippines. Calculate the pressure due to the ocean at the bottom of this trench, given its depth is 11.0 km and assuming the density of seawater is constant all the way down.

This question is licensed under CC BY 4.0.

Final Answer

$1.10 \times 10^8 \textrm{ Pa}$

$1090 \textrm{ atm}$

Solution video

Video Transcript

This is College Physics Answers with Shaun Dychko. This question asks us to find the pressure at the bottom of the deepest part of the ocean, which is the Marianas Trench in the Pacific near the Philippines. So it's going to be the gauge pressure anyway, which you know is all that really matters here, because the pressure due to the atmosphere will be negligible in comparison to the enormous pressure of the high column of water. Now the pressure will be the density of the sea water times g times the height. So the density of sea water is 1.025 times ten to the three kilograms per cubic meter, and we make the assumption here that this density is constant over this great height, which is probably not exactly true (it's likely that density gets a bit bigger at the bottom, but never mind, it's close enough), times 9.8 newtons per kilogram, times 11 kilometers which is 11 times ten to the three meters, and this gives 1.10 times ten to the eight Pascals. Now that number is hard for us to understand because it's just some big number, but we can turn it into a unit that we can relate to a bit better by multiplying by one atmosphere for every 101 times ten to the three Pascals; this works out to 1090 atmospheres, so that's really high pressure.
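The arithmetic in the solution (P = ρgh) can be checked with a short script. This is a sketch only: the variable names are mine, and the conversion uses the standard 101.325 kPa per atmosphere rather than the rounded 101 kPa quoted in the video.

```python
# Gauge pressure at the bottom of the Marianas Trench: P = rho * g * h.
rho = 1.025e3    # density of seawater, kg/m^3 (assumed constant with depth)
g = 9.8          # gravitational field strength, N/kg
h = 11.0e3       # trench depth, m

P = rho * g * h          # pressure in pascals
P_atm = P / 101.325e3    # standard atmosphere: 1 atm = 101.325 kPa

print(f"P = {P:.3g} Pa")       # P = 1.1e+08 Pa
print(f"P = {P_atm:.3g} atm")  # ~1.09e+03 atm, i.e. about 1090 atmospheres
```

To three significant figures this reproduces the stated final answers of 1.10 × 10⁸ Pa and roughly 1090 atm.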
TDS Calculation in Mortgage Archives | Alberta Real Estate School

Today, we will understand what Total Debt Service or TDS Ratio is and how to calculate the TDS Ratio in the mortgage application for a real estate property. If you have ever applied for a Mortgage or have come across a Mortgage Professional, you must have heard of 2 main ratios – Gross Debt Service or GDS Ratio & Total Debt Service or TDS Ratio. Mortgage professionals use these 2 ratios to determine if borrowers can afford to pay off the mortgage for a specific real estate property that they are dealing with. TDS Ratio is thus an essential indicator of mortgage affordability and approval. If you are planning to become a Mortgage Professional, you need to understand what GDS and TDS are and how to calculate these ratios. For now, let's understand the concept and calculation of TDS Ratio step-by-step. First of all, let's understand what Debt Service Ratios are.

So, what are Debt Service Ratios (DSRs)?

Debt Service Ratio (DSR) or Debt Service Coverage Ratio (DSCR) is used in the calculation of mortgage approval for a real estate property. It is a popular benchmark used in the measurement of an entity's ability to produce enough cash to cover its debt payments, including repayment of principal and interest (on the mortgage) on both short-term and long-term debt. This ratio is often used when the entity applying for a mortgage has any borrowings on its account such as bonds, loans, or lines of credit. It is also commonly used in a leveraged buyout transaction to evaluate the debt capacity of the target company, along with other credit metrics such as the total debt/EBITDA multiple, net debt/EBITDA multiple, interest coverage ratio, and fixed charge coverage ratio. Thus, as we understood, there are 2 types of Debt Service Ratios:

1. GDS (Gross Debt Service) Ratio
2. TDS (Total Debt Service) Ratio

What is GDS Ratio?

GDS refers to Gross Debt Service Ratio.
As we understood, it helps us determine whether a person or an entity is eligible for the intended amount of mortgage or not. GDS is the percentage of your monthly household income that covers your housing costs and not any other debts, unlike TDS. It includes housing costs like Principal (P), Interest (I), Property Taxes (T), and Heating Costs (H). It also includes 50% of the Condominium Fees, if the property is a Condominium.

What is TDS Ratio?

TDS refers to Total Debt Service Ratio. As we understood, it helps us determine whether a person or an entity is eligible for the intended amount of mortgage or not. TDS Ratio is the percentage of your income needed to cover all of your debts. Thus, it is GDS (PITH) + Other Debt.

Secured and Unsecured Debt

Secured Debt

Secured Debt is debt which is backed by a security. The lender has financial security over the debt he has offered to the borrower. For instance, when a borrower applies for a mortgage for his property, the property is the security for the lender. If the borrower defaults on mortgage payments or goes bankrupt and is unable to pay the mortgage, the lender will take over the security, which is the property itself in this case, and will recover his mortgage from that. Examples of Secured Debt include: Mortgage on a property, Car Loan, Secured Line of Credit, etc.

Secured Debt Calculation in TDS Ratio – For Secured Debts in the TDS Ratio Calculation, we will include a Monthly Payment of 1% of the Outstanding Balance.

For Example: If $10,000 is outstanding on a Secured Line of Credit, then the payment included in the TDS Ratio Calculation will be: $10,000 x 1% = $100

Unsecured Debt

Unsecured Debt, on the other hand, is debt which is not backed by a security. The lender does not have financial security for the debt he has offered to the borrower.
For instance, when someone makes a payment from their Credit Card at a bank, the bank does not have strong financial security from the person who is using their credit card. If the borrower defaults on his credit card payments, the bank may charge him fees for late or non-payments, but it cannot recover the money if his account is empty or if he moves to a different country. Examples of Unsecured Debt include: Student Loans, Unsecured Line of Credit, Credit Card Payments, etc.

Unsecured Debt Calculation in TDS Ratio – For Unsecured Debts in the TDS Ratio Calculation, we will include a Monthly Payment of 3% of the Outstanding Balance.

For Example: If $10,000 is outstanding on an Unsecured Line of Credit, then the payment included in the TDS Ratio Calculation will be: $10,000 x 3% = $300

Factors affecting TDS Ratio

The factors that affect TDS Ratio include:

1. Principal Amount (P)
2. Interest Rate (I)
3. Taxes on the Property (T)
4. Heating Costs (H)
5. 50% Condominium Fees (C) – if the property is a Condominium
6. Other Debt (O) – Secured or Unsecured

TDS Formula:

TDS = (P + I + T + H + C + O) / Gross Monthly Income

Now, let's understand the calculation of TDS Ratio with some examples.

Sample Questions

Example 1

Merissa and David Smith wish to purchase a house subject to financing. Their yearly gross income is $72,000. Their monthly mortgage payment is $1,400, which includes the principal and interest. The property taxes are $4,200 for the year. The estimated monthly heating costs for the property they are interested in buying are $120.00. In addition to this, they had borrowed $12,000 from their secured line of credit to cover their wedding expenses last year. Would Merissa and David qualify for the mortgage on the desired property?

The Variables are:

1. Gross Household Monthly Income: $72,000 per year = 72,000 / 12 = $6,000 per month
2. Mortgage Installment (Principal + Interest): $1,400 per month
3. Property Tax: $4,200 per year = 4,200 / 12 = $350 per month
4. Heating Cost: $120 per month
5.
Secured Line of Credit: $12,000 x 1% (as it is secured debt) = $120 per month for TDS Calculation

Step 1: Total Monthly Housing Expenses = PITHO = $1,400 + $350 + $120 + $120 = $1,990.00
Step 2: TDS = PITHO / Gross Monthly Income = $1,990 / $6,000 = 0.3317

In this example, the TDS Ratio is less than 42%. Therefore, the couple qualifies for the mortgage when applying the TDS Calculation.

Example 2

Jonathan and Lia want to buy a condominium property in Calgary. They have applied for a mortgage on that property. Their joint annual income is $150,000. The purchase price of the property is $725,000 with a monthly payment of $3,000 including principal and interest. The property taxes for the property are $6,000 per year. Other property expenses include monthly condominium fees of $300 and a monthly heating cost estimated at $250.00. They also have a car payment of $400 and credit card debt of $15,000. Would Jonathan and Lia qualify for the mortgage?

The Variables are:

1. Gross Household Monthly Income: $150,000 per year = $150,000 / 12 = $12,500 per month
2. Mortgage Installment (Principal + Interest): $3,000.00 per month
3. Property Tax: $6,000 per year = $6,000 / 12 = $500 per month
4. 50% of Condo Fees: $300 x 50% = $150 per month
5. Heating Cost: $250 per month
6. Car Payment: $400 per month
7. Credit Card Payment: $15,000 x 3% (as it is unsecured debt) = $450 per month for TDS Calculation

Step 1: Total Monthly Housing Expenses = PITHOC = $3,000 + $500 + $250 + $150 + $400 + $450 = $4,750.00
Step 2: TDS = PITHOC / Gross Monthly Income = $4,750 / $12,500 = 0.38

In this example, the TDS Ratio is less than 42%. Therefore, the couple qualifies for the mortgage when applying the TDS Calculation.

So, this was TDS Calculation for you guys. Stick around for more of such calculations and mortgage related topics. Get our Focused Study Guides, Exclusive Video Courses and "In-demand" Tutoring Sessions to get you through the Real Estate Exams on the first attempt!
Get in touch at 587.936.7779 or support@albertarealestateschool.com. Happy Studying!
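The two worked examples above can be reproduced with a short script. This is a sketch only: the function names are mine, while the 1%/3% payment rules and the 42% benchmark are taken from the article itself.

```python
# TDS ratio calculation, following the article's rules.

def monthly_debt_payment(balance, secured):
    """Monthly payment used in the TDS calculation: 1% of the
    outstanding balance for secured debt, 3% for unsecured debt."""
    return balance * (0.01 if secured else 0.03)

def tds_ratio(gross_annual_income, monthly_housing_costs, other_monthly_debt=0.0):
    """TDS = (PITH [+ 50% condo fees] + other debt) / gross monthly income."""
    gross_monthly = gross_annual_income / 12
    return (sum(monthly_housing_costs) + other_monthly_debt) / gross_monthly

# Example 1: Merissa and David
tds1 = tds_ratio(
    72_000,
    [1_400, 4_200 / 12, 120],                    # P+I, taxes, heating
    monthly_debt_payment(12_000, secured=True),  # secured line of credit
)
print(round(tds1, 4))   # 0.3317 -> qualifies (below 42%)

# Example 2: Jonathan and Lia
tds2 = tds_ratio(
    150_000,
    [3_000, 6_000 / 12, 250, 0.5 * 300],                # P+I, taxes, heating, 50% condo fees
    400 + monthly_debt_payment(15_000, secured=False),  # car + credit card
)
print(round(tds2, 2))   # 0.38 -> qualifies (below 42%)
```

Both results match the hand calculations in the examples.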
What were Einstein's predictions?

Interview with Lord Martin Rees, University of Cambridge

Gravitational waves: this is something that Einstein predicted 100 years ago with his theory of general relativity, and scientists have since been scouring the universe for them. This month, the team at the Laser Interferometer Gravitational-Wave Observatory, LIGO, announced they'd finally done it... But what are these gravitational waves, how were they found, and why does the discovery matter? To explain, first we need to go back several hundred years to the birth of the concept of gravity itself and the Cambridge scientist Isaac Newton. He was at Trinity College and - appropriately enough - so is the Astronomer Royal Martin Rees. He spoke to Chris Smith about the history of gravity, beginning with Newton's groundbreaking insights...
Martin - Well Einstein didn't really overthrow Newton, he extended and transcended Newton. And his theory allows us to correctly describe what happens under extremes of strong gravity and high speed, but also it gave us a deeper understanding into what gravity was. It wasn't really clear to Newton why it should be inverse square law, why all objects should fall at the same speed whatever they were made of, but that became natural when Einstein saw that this was really a consequence of space itself. Space interacts with mass and the mantra is: matter tells space how to curve; space tells matter how to move; and as an interaction between the behaviour of space and the matter in it. Chris - Einstein puts forward the idea of this concept of space time, where the fabric of the universe is the notional entity space time and big things that are very gravitationally active will exert an effect or an influence on that space time... Martin - Yes, space itself becomes a sort of active arena where things happen and the strongest gravity is around black holes. And if gravity changes, if for instance two black holes fall together, then there was an issue in that we thought that nothing could travel faster than light. So, if two black holes crash together, for instance, something must go at the speed of light in order to cause a change in the gravitational pull felt by distant objects, and so there must be some sort of wave that transmits information. And so, Einstein generally predicted that if things change, then they must emit gravitational waves and the trouble is that these waves are extremely weak and they're only emitted by very violent events indeed. That's why they've been so hard to find. Chris - And how do we know that Einstein got it right? Martin - Well, there've been lots of tests of Einstein's theory. 
Classically, soon after he proposed the theory there were tests of how light was bent when it passed close to the sun during an eclipse and astronomers have found evidence for black holes, but we'd really like to have detailed models for what black holes are like. Theorists can calculate what a black hole ought to be like, what shape it would be. But it's been this discovery of gravitational waves which has really helped to clinch that because what's been discovered is that we get this chirp of gravitational radiation that we've just heard earlier on and that's thought to be due to two black holes spiraling together. They orbit around each other, they omit gravitational waves that takes away energy and eventually they coalesce and merge and then form a single black hole, and this effect is predicted by Einstein's theory. We can calculate what ought to happen and what's marvelous is that what's been observed to happen is exactly what you would expect.
Lesson 2.6 Introduction Lesson 2 Subject: Sociology Lesson 2.6 Course Objectives This lesson will address the following course outcomes: · 9. Compare proportional relationships represented in different ways, considering units when doing so. Specific Objectives Students will understand that · a relative change is different from an absolute change. · a relative measure is always a comparison of two numbers. Students will be able to · calculate a relative change. · explain the difference between relative change and absolute change. Measuring Change When a quantity, such as population, changes, we can calculate the absolute change and also the relative change. Absolute change is the new value of the quantity minus the original value. Relative change is the absolute change divided by the original value. Note that the “quantity” values are always positive (at least in almost all contexts). But the absolute change can turn out to be a negative number or a positive number. · If a quantity increases (has gotten larger), then the absolute change is positive. Why? When the new value is larger, then the new value minus the original value is positive, and thus the absolute change is positive. · If a quantity decreases (has gotten smaller), then the absolute change is negative. Why? When the new value is smaller, then the new value minus the original value is negative, and thus the absolute change is negative. The relative change’s sign (negative or positive) is the same as the sign of the absolute change. This is true since the relative change is found by dividing the absolute change by the original value, and the original value is positive. Another way to talk about negative change (either absolute or relative): A negative change can be said to be a decrease by the positive number. Example A: In 2013 Jerry received 12 speeding tickets. Since then, his driving has improved and in 2014 he only had one ticket.
What is the absolute change in tickets? New value – original value = 1 – 12 = −11. The absolute change in his number of tickets from 2013 to 2014 is −11. Another way to say this is: The absolute change in his number of tickets from 2013 to 2014 is a decrease of 11. Jerry got 11 fewer tickets in 2014 compared to his 12 tickets in 2013. What is the relative change? Relative change = absolute change / original value = −11/12 ≈ −0.92 = −92%. In other words: The number of tickets he received in 2014 decreased by about 92% from 2013. Example B: Suppose that when Sasha started college her school had 8,210 students. By the time she graduated there were 9,440 students. What was the absolute change in students? New value – original value = 9,440 – 8,210 = 1,230. The college’s enrollment increased by 1,230 students during the years Sasha attended the school. What was the relative change in students? Relative change = absolute change / original value = 1,230/8,210 ≈ 0.1498 = about 15%. The number of students increased by about 15% over the years Sasha was enrolled. Problem Situation: How the Census Affects the House of Representatives Every 10 years, the United States conducts a census. The census tells how many people live in each state. You can also find how much population has changed over time from the census data. The original purpose of the census was to decide on the number of representatives each state would have in the House of Representatives. Census data continue to be used for this purpose, but now have many other uses. For example, governments may use the data to plan for public services such as fire stations and schools. You will be given a list of states and their populations in 2000 and 2010. You will be asked to calculate the population growth for each state. You will examine how this affects the number of representatives each state has in the House of Representatives.
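The two definitions above translate directly into code; this sketch reproduces Examples A and B (the function names are mine, not from the lesson):

```python
def absolute_change(original, new):
    # Absolute change = new value minus original value
    return new - original

def relative_change(original, new):
    # Relative change = absolute change divided by the original value
    return (new - original) / original

# Example A: Jerry's speeding tickets, 12 in 2013 -> 1 in 2014
print(absolute_change(12, 1))                 # -11
print(round(relative_change(12, 1), 2))       # -0.92, i.e. about -92%

# Example B: enrollment, 8,210 -> 9,440 students
print(absolute_change(8210, 9440))            # 1230
print(round(relative_change(8210, 9440), 2))  # 0.15, i.e. about 15%
```

Note how the sign of the relative change always matches the sign of the absolute change, as the lesson states.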
From the last page, for your reference: Absolute change is the new value of the quantity minus the original value. Relative change is the absolute change divided by the original value. #1 Points possible: 10. Total attempts: 5 In 2000, the population of Nevada was 1,998,257. In 2010, the population had grown to 2,700,551. Compute the absolute and relative change in the population from 2000 to 2010. The absolute change was: people The relative change was: % (rounded to 2 decimal places) #2 Points possible: 24. Total attempts: 5 Compute the absolute and relative change for the states below. State 2000 Population 2010 Population Absolute Change Relative Change (to 2 decimal places) New York 18,976,457 19,378,102 % Texas 20,851,820 25,145,561 % Florida 15,982,378 18,801,310 % Michigan 9,938,444 9,883,640 % #3 Points possible: 10. Total attempts: 5 Of the five states you've now calculated the absolute and relative change for, a) which has had the largest absolute change in population? b) which has had the largest relative change in population? #4 Points possible: 8. Total attempts: 5 Why are the answers to the two parts of the last question different? Select all that are true. · A large absolute change may not be a large relative change if the starting population was large. · A large absolute change may not be a large relative change if the starting population was small. · A large relative change may not be a large absolute change if the starting population was large. · A large relative change may not be a large absolute change if the starting population was small. #5 Points possible: 6. Total attempts: 5 Michigan’s population changed from 9,938,444 to 9,883,640. Which are correct ways to describe the change? 
(select all that are correct) · Michigan's population increased by 54,804 people · Michigan's population increased by -54,804 people · Michigan's population decreased by 54,804 people · Michigan's population decreased by -54,804 people · Michigan's population changed by 54,804 people · Michigan's population changed by -54,804 people The number of Representatives each state has in the House of Representatives is based on the size of the population in the state. Since the number of representatives is fixed at 435, when the census was done in 2010 some states gained representatives and others lost representatives. You can see which states gained and lost representatives in this map from the Census Bureau. It's a common misconception that a state that lost representatives must have lost population. As you can see, New York lost two representatives, even though your calculations earlier showed the population increased. #6 Points possible: 15. Total attempts: 5 Look back at your calculations for Nevada and Florida a) Which of the two had a larger absolute change? b) Which of the two had a larger relative change? Based on the 2010 census, Florida gained two representatives, and Nevada gained one. c) Does it appear that absolute or relative change matters more when determining the number of representatives gained or lost? HW 2.6 #1 Points possible: 5. Total attempts: 5 Which of the following was one of the main mathematical ideas of the lesson? · Absolute change is measured as a quantity (for example, an increase of $3). Relative change is measured as a percentage compared to the reference value (for example, an increase of 3%). · The population of a state determines how many representatives that state has in the House of Representatives. · To calculate a percent, divide the comparison value by the reference value. · Consider this situation: Quantity 1 increases by 15%. Quantity 2 increases by 20%. Quantity 2 must have increased by a larger amount than Quantity 1. 
#2 Points possible: 8. Total attempts: 5 The following headlines all refer to change. Identify the change as absolute or relative . a. “Enrollments at Northeastern University are expected to increase by 1,500!” · Absolute change · Relative change b. “Another 14% tuition increase is expected.” · Absolute change · Relative change c. “A new proposal has sales tax rates dropping from 3% to 1%, a drop of only two percentage points.” · Absolute change · Relative change d. “A new proposal has sales tax rates dropping from 3% to 1%, a 67 percent drop!” · Absolute change · Relative change Questions 3 and 4 refer to data taken from the U.S. Census1. The dollar values take into account the changes in the economy over the years (i.e., inflation). Inflation is a complicated issue, but for Questions 3 and 4, you do not need to worry about it. #3 Points possible: 5. Total attempts: 5 A typical high-income household in 1980 earned $125,556. A similar household in 2009 earned $180,001. What was the relative increase in income for these households from 1980 to 2009? Round to the nearest one percent. % #4 Points possible: 5. Total attempts: 5 A typical middle-income household in 1980 earned $34,757. A similar household in 2009 earned $38,550. What was the relative increase in income for these households from 1980 to 2009? Round to the nearest one percent. % #5 Points possible: 8. Total attempts: 5 Due to temporary tax cuts in 2010, a person with typical deductions earning $50,000 per year would have saved 2% of their income plus $850 in federal taxes. a. How much money would this person save? $ b. What percent did this person save on her income? Round to the nearest tenth of a percent. % #6 Points possible: 8. Total attempts: 5 Due to the same law, a person earning $500,000 per year with typical deductions would save 2% of the first $106,800 they earned plus $14,250 in federal taxes. Fill in the blanks to complete the statement below. 
Round to the nearest dollar and to the nearest tenth of a percent. A person earning $500,000 a year saved $ or % of their income.
Debye-Huckel limiting law, Ionic strength, Activity and Activity coefficient Ionic strength • Ionic strength of a solution can be defined as the total concentration of ions present in the solution. • In other words, ionic strength measures the concentration of the ionic atmosphere in a solution. • It is a dimensionless quantity; it has no unit. • It is denoted by ‘µ’. Q) Calculate the ionic strength of a 0.1M solution of aluminium sulphate. Q) Calculate the ionic strength of 0.1M KCl and 0.1M CaCl2 solutions. Activity and Activity coefficient In an electrolytic solution, the experimentally determined concentration of ions is less than the actual concentration. The effective concentration of ions or electrolyte in a solution is called the activity. It is denoted by the symbol ‘a’. Mathematically, activity ‘a’ is taken as the product of the actual concentration in molarity or molality and the activity coefficient ‘f’. i.e. a = Cf ——– (i) C = concentration in molarity or molality, f = activity coefficient. • For a very dilute solution, the activity coefficient is nearly equal to one, so the activity becomes equal to the actual concentration, i.e. a = C. • For a concentrated solution, the activity coefficient is less than one. Rearranging equation (i), f = a/C. Thus, the activity coefficient is defined as the ratio of the activity to the actual concentration. The activity ‘a’ of the electrolyte is taken as the product of the activities of the cation and anion, i.e. a = a+ a–, where a+ = activity of cation and a– = activity of anion. Similarly, the activity coefficient of an electrolyte is taken as the product of the activity coefficients of the cation and anion, i.e. f = f+ f–, where f+ = activity coefficient of cation and f– = activity coefficient of anion. The activity and activity coefficient can’t be measured experimentally, but their mean values can be determined.
Debye-Huckel limiting law: Expression for the activity coefficient of an electrolyte in terms of ionic strength • Debye-Huckel limiting law relates the mean activity coefficient of an electrolyte with the valency of its ions and the ionic strength of the solution. • This law is applicable only for very dilute solution, so this law is called the limiting law. • Mathematically, this law can be expressed as: log f = −A z+ z− √µ, where f is the mean activity coefficient, A is a constant, z+ and z− are the valencies of the cation and anion, and µ is the ionic strength. If −log f is plotted against √µ, a straight line passing through the origin having slope equal to A z+ z− is obtained. Application of Debye-Huckel limiting law: Debye-Huckel limiting law can be used to calculate the mean activity coefficient of an electrolyte if the ionic strength or concentration of the solution and the value of ‘A’ is known.
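The working formula for ionic strength, µ = ½ Σ cᵢzᵢ², lets the two practice questions above be answered in a few lines. This is a sketch; the value A ≈ 0.509 (water at 25 °C) is my assumption, not from the note:

```python
from math import sqrt

def ionic_strength(ions):
    # mu = 1/2 * sum(c_i * z_i^2); ions is a list of (concentration, charge) pairs
    return 0.5 * sum(c * z ** 2 for c, z in ions)

# Worked answers to the two questions above:
mu_kcl = ionic_strength([(0.1, +1), (0.1, -1)])      # 0.1M KCl      -> 0.1
mu_cacl2 = ionic_strength([(0.1, +2), (0.2, -1)])    # 0.1M CaCl2    -> 0.3
mu_al2so43 = ionic_strength([(0.2, +3), (0.3, -2)])  # 0.1M Al2(SO4)3 -> 1.5

# Debye-Huckel limiting law: log10(f) = -A * |z+ * z-| * sqrt(mu)
A = 0.509  # assumed value for water at 25 degrees C

def mean_activity_coefficient(z_plus, z_minus, mu):
    return 10 ** (-A * abs(z_plus * z_minus) * sqrt(mu))
```

For 0.1M KCl this gives a mean activity coefficient of roughly 0.69, illustrating the law's prediction that f falls below one as ionic strength rises.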
How can I measure my own power to weight ratio? in context of cycling power to weight ratio 30 Aug 2024 Title: Measuring Your Own Power-to-Weight Ratio: A Guide for Cyclists Abstract: The power-to-weight ratio (PWR) is a crucial metric in cycling, reflecting an athlete’s ability to generate power relative to their body weight. While professional teams and coaches often have access to sophisticated equipment to measure PWR, individual cyclists can also estimate this value using simple calculations and readily available data. This article provides a step-by-step guide on how to calculate your own PWR. Introduction: The power-to-weight ratio is calculated by dividing an athlete’s maximum power output (Pmax) by their body weight (BW). In cycling, Pmax is typically measured in watts (W), and BW is expressed in kilograms (kg). Calculating Power Output (Pmax): To estimate your Pmax, you can use one of the following methods: 1. 20-minute all-out test: This involves riding at maximum intensity for 20 minutes while wearing a heart rate monitor or power meter. The average power output during this period is taken as Pmax. 2. 5-second sprint test: This method involves sprinting at maximum effort for exactly 5 seconds, and the peak power output is recorded using a power meter. Note that power is already a rate (energy per unit time), so no further division by time is needed: Pmax = average power in watts over the 20-minute test, or Pmax = peak power in watts during the 5-second sprint. Calculating Body Weight (BW): Your BW can be measured using a bathroom scale or by consulting your medical records. BW = (Weight in kilograms) Calculating Power-to-Weight Ratio (PWR): Now that you have Pmax and BW, you can calculate your PWR by dividing the former by the latter. PWR = Pmax / BW For example: PWR = (Watts) / (kg) Conclusion: Measuring your own power-to-weight ratio requires minimal equipment and straightforward calculations.
By following these steps, individual cyclists can estimate their PWR and gain a better understanding of their performance capabilities. This metric can be used to inform training decisions, set realistic goals, and track progress over time. Note: The formulas provided are in ASCII format for clarity and readability.
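The final formula is simple enough to script. The numbers below (300 W averaged over the 20-minute test, a 75 kg rider) are illustrative assumptions, not values from the article:

```python
def power_to_weight_ratio(p_max_watts, body_weight_kg):
    # PWR in W/kg: maximum power output divided by body weight
    return p_max_watts / body_weight_kg

# A hypothetical rider averaging 300 W for 20 minutes at 75 kg:
print(power_to_weight_ratio(300, 75))  # 4.0 W/kg
```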
How do you convert seconds to days? | HIX Tutor How do you convert seconds to days? Answer from HIX Tutor When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
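For reference, the conversion in the headline question is direct: one day is 24 × 60 × 60 = 86,400 seconds, so dividing a number of seconds by 86,400 gives days. A minimal sketch:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds in a day

def seconds_to_days(seconds):
    return seconds / SECONDS_PER_DAY

print(seconds_to_days(86400))   # 1.0
print(seconds_to_days(604800))  # 7.0 (one week)
```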
This Blog is Systematic There has been a very interesting discussion on twitter, relating to some stuff said by Sam Bankman-Fried (SBF), who at the time of writing has just completely vaporized billions of dollars in record time via the medium of his crypto exchange FTX, and provided a useful example to future school children of the meaning of the phrase nominative determinism*. * Sam, Bank Man: Fried. Geddit? Read the whole thread from the top: TLDR the views of SBF can be summarised as follows: • Kelly criterion maximises log utility • I don't have a log utility function. It's probably closer to linear. • Therefore I should bet at higher than Kelly. Up to 5x would be just fine. I, and many others, have pointed out that SBF is an idiot. Of course it's easier to do this when he's just proven his business incompetence on a grand scale, but to be fair I was barely aware of the guy until a week ago. Specifically, he's wrong about the chain of reasoning above*. * It's unclear whether this is specifically what brought SBF down. At the time of writing he appears to have taken money from his exchange to prop up his hedge fund, so maybe the hedge fund was using >>> Kelly leverage, and this really is the case. In this post I will explain why he was wrong, with pictures. To be clearer, I'll discuss how the choice of expectation and utility function affects optimal bet sizing. I've discussed parts of this subject briefly before, but you don't need to read the previous post. Scope and assumptions To keep it tight, and relevant to finance, this post will ignore arguments seen on twitter related to one off bets, and whether you should bet differently if you are considering your contribution to society as a whole. These are mostly philosophical discussions which it's hard to solve with pictures. 
So the set up we have is: • There is an arbitrary investment strategy, which I assume consists of a data generating process (DGP) producing Gaussian returns with a known mean and standard deviation (this ignores parameter uncertainty, which I've banged on about often enough, but effectively would result in even lower bet sizing). • We make a decision as to how much of our capital we allocate to this strategy for an investment horizon of some arbitrary number of years, let's say ten. • We're optimising L, the leverage factor, where L = 1 would be full investment, 2 would be 100% leverage, 0.5 would be 50% in cash 50% in the strategy and so on. • We're interested in maximising the expectation of f(terminal wealth) after ten years, where f is our utility function. • Because we're measuring expectations, we generate a series of possible future outcomes based on the DGP and take the expectation over those. Note that I'm using the continuous version of the Kelly criterion here, but the results would be equally valid for the sort of discrete bets that appear in the original discussion. Specific parameters Let's take a specific example. Set mean = 10% and standard deviation = 20%, which is a Sharpe ratio of 0.5, and therefore Kelly should be maxed at 50% risk, equating to L = 50/20 = 2.5. SBF optimal leverage would be around 5 times that, L = 12.5. We start with wealth of 1 unit, and compound it over 10 years.
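For the Gaussian DGP above there is also a closed-form check. The annual growth rate of log wealth at leverage L is approximately g(L) = Lµ − ½(Lσ)², which is maximised at L* = µ/σ². A quick sketch confirming the numbers used here (the helper is mine, not from the post):

```python
mu, sigma = 0.10, 0.20  # annual mean and standard deviation from the post

def growth_rate(L):
    # Approximate annual growth rate of log wealth at leverage L
    return L * mu - 0.5 * (L * sigma) ** 2

L_star = mu / sigma ** 2  # closed-form Kelly leverage: 0.1 / 0.04 = 2.5

# At the Kelly leverage the growth rate is 0.125 per year;
# at twice Kelly it falls back to exactly zero.
```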
I don't normally paste huge chunks of code in these blog posts, but this is a fairly short chunk:

import pandas as pd
import numpy as np
from math import log

ann_return = 0.1
ann_std_dev = 0.2
BUSINESS_DAYS_IN_YEAR = 256
daily_return = ann_return / BUSINESS_DAYS_IN_YEAR
daily_std_dev = ann_std_dev / (BUSINESS_DAYS_IN_YEAR**.5)
years = 10
number_days = years * BUSINESS_DAYS_IN_YEAR

def get_series_of_final_account_values(monte_return_streams, leverage_factor=1):
    account_values = [account_value_from_returns(returns, leverage_factor)
                      for returns in monte_return_streams]
    return account_values

def get_monte_return_streams():
    monte_return_streams = [get_return_stream() for __ in range(10000)]
    return monte_return_streams

def get_return_stream():
    return np.random.normal(daily_return, daily_std_dev, number_days)

def account_value_from_returns(returns, leverage_factor: float = 1.0):
    one_plus_return = np.array([1 + return_item * leverage_factor
                                for return_item in returns])
    cum_return = one_plus_return.cumprod()
    return cum_return[-1]

monte_return_streams = get_monte_return_streams()

Utility function: Expected log(wealth) [Kelly]

Kelly first. We want to maximise the expected log final wealth:

def expected_log_value(monte_return_streams, leverage_factor=1):
    series_of_account_values = get_series_of_final_account_values(
        monte_return_streams=monte_return_streams,
        leverage_factor=leverage_factor)
    log_values_over_account_values = [log(account_value)
                                      for account_value in series_of_account_values]
    return np.mean(log_values_over_account_values)

And let's plot the results:

def plot_over_leverage(monte_return_streams, value_function):
    leverage_ratios = np.arange(1.5, 5.1, 0.1)
    values = []
    for leverage in leverage_ratios:
        values.append(value_function(monte_return_streams,
                                     leverage_factor=leverage))
    leverage_to_plot = pd.Series(values, index=leverage_ratios)
    leverage_to_plot.plot()
    return leverage_to_plot

leverage_to_plot = plot_over_leverage(monte_return_streams, expected_log_value)

In this plot, and nearly all of those to come, the x-axis shows the leverage L and the y-axis shows the value of the expected utility.
To find the optimal L we look to see where the highest point of the utility curve is. As we'd expect: • Max expected log(wealth) is at L=2.5. This is the optimal Kelly leverage factor. • At twice optimal we expect to have log wealth of zero, equivalent to making no money at all (since starting wealth is 1). • Not plotted here, but at SBF leverage (12.5) we'd have a hugely negative expected log(wealth) and have lost pretty much all of our money.

Utility function: Expected (wealth) [SBF?]

Now let's look at a linear utility function, since SBF noted that his utility was 'roughly close to linear'. Here our utility is just equal to our terminal wealth, so it's purely linear.

def expected_value(monte_return_streams, leverage_factor=1):
    series_of_account_values = get_series_of_final_account_values(
        monte_return_streams=monte_return_streams,
        leverage_factor=leverage_factor)
    return np.mean(series_of_account_values)

leverage_to_plot = plot_over_leverage(monte_return_streams, expected_value)

You can see where SBF was coming from, right? Utility gets exponentially higher and higher, as we add more leverage. Five times leverage is a lot better than 2.5 times, the Kelly criterion. Five times Kelly, or 2.5 * 5 = 12.5, would be even better.

Utility function: Median(wealth)

However there is an important assumption above, which is the use of the mean for the expectation operator. This is dumb. It would mean (pun, sorry), for example, that of the following: 1. An investment that lost $1,000 99 times out of 100; and paid out $1,000,000 1% of the time 2. An investment that is guaranteed to gain $9,000 ... we would theoretically prefer option 1 since it has an expected value of $9,010, higher than the trivial expected value of $9,000 for option 2. There might be some degenerate gamblers who prefer 1 to 2, but not many. (Your wealth would also affect which of these you would prefer. If $1,000 is a relatively trivial amount to you, you might prefer 1.
If this is the case consider if you'd still prefer 1 to 2 if the figures were 1000 times larger, or a million times larger). I've discussed this before, but I think the median is the more appropriate expectation operator. What the median implies in this context is something like this: Considering all possible future outcomes, how can I maximise the utility I receive in the outcome that will occur half the time? I note that the median of option 1 above is zero, whilst the median of option 2 is $9,000. Option 2 is now far more attractive.

def median_value(monte_return_streams, leverage_factor=1):
    series_of_account_values = get_series_of_final_account_values(
        monte_return_streams=monte_return_streams,
        leverage_factor=leverage_factor)
    return np.median(series_of_account_values)

leverage_to_plot = plot_over_leverage(monte_return_streams, median_value)

The spooky result here is that the optimal leverage is now 2.5, the same as the Kelly criterion. Even with linear utility, if we use the median expectation, Kelly is the optimal strategy. The reason why people prefer to use mean(log(wealth)) rather than median(wealth), even though they are equivalent, is that the former is more computationally attractive. Note also the well known fact that Kelly also maximises the geometric return. With Kelly we aren't really making any assumptions about utility function: our assumption is effectively that the median is the correct expectations operator. The entire discussion about utility is really a red herring. It's very hard to measure utility functions, and everyone probably does have a different one; I think it's much better to focus on the choice of expectation operator.

Utility function: Nth percentile(wealth)

Well you might be thinking that SBF seems like a particularly optimistic kind of guy. He isn't interested in the median outcome (which is the 50% percentile). Surely there must be some percentile at which it makes sense to bet 5 times Kelly? Maybe he is interested in the 75% percentile outcome?
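The 'spooky result' can also be sanity-checked without a Monte Carlo. For this DGP the median terminal wealth is approximately exp(T(Lµ − ½(Lσ)²)), the exp of the median log wealth, and a grid search over L recovers the Kelly leverage. This is a rough sketch under that approximation, not the post's simulation:

```python
import numpy as np

mu, sigma, T = 0.10, 0.20, 10  # annual parameters and horizon from the post

def median_terminal_wealth(L):
    # Median wealth = exp(median log wealth), using the approximate
    # annual log growth rate L*mu - 0.5*(L*sigma)**2
    return np.exp(T * (L * mu - 0.5 * (L * sigma) ** 2))

grid = np.arange(0.5, 13.0, 0.01)
best_L = grid[np.argmax(median_terminal_wealth(grid))]
# best_L comes out at the Kelly leverage of 2.5, while at L = 12.5
# the median outcome is essentially total ruin.
```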
QUANTILE = .75

def value_quantile(monte_return_streams, leverage_factor=1):
    series_of_account_values = get_series_of_final_account_values(
        monte_return_streams=monte_return_streams,
        leverage_factor=leverage_factor)
    return np.quantile(series_of_account_values, QUANTILE)

leverage_to_plot = plot_over_leverage(monte_return_streams, value_quantile)

Now the optimal is around L=3.5. This is considerably higher than the Kelly max of L=2.5, but it is still nowhere near the SBF optimal L of 12.5. Let's plot the utility curves for a bunch of different quantile points:

list_over_quantiles = []
quantile_ranges = np.arange(.4, 0.91, .1)
for QUANTILE in quantile_ranges:
    leverage_to_plot = plot_over_leverage(monte_return_streams, value_quantile)
    list_over_quantiles.append(leverage_to_plot)

pd_list = pd.DataFrame(list_over_quantiles)
pd_list.index = quantile_ranges

It's hard to see what's going on here, legend floating point representation notwithstanding, but you can hopefully see that the maximum L (hump of each curve) gets higher as we go up the quantile scale, as the curves themselves get higher (as you would expect). But at none of these quantiles are we anywhere near reaching an optimal L of 12.5. Even at the 90% quantile - evaluating something that only happens one in ten times - we have a maximum L of under 4.5. Now there will be some quantile point at which L=12.5 is indeed optimal. Returning to my simple example: 1. An investment that lost $1,000 99 times out of 100; and paid out $1,000,000 1% of the time 2. An investment that is guaranteed to gain $9,000 ... if we focus on outcomes that will happen less than one in a million times (the 99.9999% quantile and above) then yes sure, we'd prefer option 1. So at what quantile point does a leverage factor of 12.5 become optimal? I couldn't find out exactly, since to look at extremely rare quantile points requires very large numbers of outcomes*. I actually broke my laptop before I could work out what the quantile point was.
* for example, if you want ten observations to accurately measure the quantile point, then for the 99.99% quantile you would need 10 * (1/(1-0.9999)) = 100,000 outcomes. But even for a quantile of 99.99% (!), we still aren't at an optimal leverage of 12.5! You can see that the optimal leverage is 8 (around 3.2 x Kelly), still way short of 12.5. Rather than utility functions, I think it's easier to ask people the likelihood of outcome they are concerned about. I'd argue that sensible people would think about the median outcome, which is what you expect to happen 50% of the time. And if you are a bit risk averse, you should probably consider an even lower quantile. In contrast SBF went for bet sizing that would only make sense in the set of outcomes that happens significantly less than 0.01% of the time. That is insanely optimistic; and given he was dealing with billions of dollars of other people's money it was also insanely irresponsible. Was SBF really that recklessly optimistic, or dumb? In this particular case I'd argue the latter. He had a very superficial understanding of Kelly bet sizing, and because of that he thought he could ignore it. This is a classic example of 'a little knowledge is a dangerous thing'. A dumb person doesn't understand anything, but reads on the internet somewhere that half Kelly is the correct bet sizing. So they use it. A "smart" person like SBF glances at the Kelly formula, thinks 'oh but I don't have log utility' and leverages up five times Kelly and thinks 'Wow I am so smart look at all my money'. And that ended well... A truly enlightened person understands that it isn't about the utility function, but about the expectation operator. They also understand about uncertainty, optimistic backtesting bias, and a whole bunch of factors that imply that even 0.5 x Kelly is a little reckless. I, for example, use something south of a quarter Kelly.
Which brings us back to the meme at the start of the post: Note I am not saying I am smarter than SBF. On pure IQ, I am almost certainly much, much dumber. In fact, it's because I know I am not a genius that I'm not arrogant enough to completely follow or ignore the Kelly criterion without first truly understanding it. Whilst this particular misunderstanding might not have brought down SBF's empire, it shows that really really smart people can be really dumb - particularly when they think that they are so smart they don't need to properly understand something before ignoring it*. * Here is another example of him getting something completely wrong Postscript (16th November 2022) I had some interesting feedback from Edwin Teejay on twitter, which is worth addressing here as well. Some of the feedback I've incorporated into the post already. (Incidentally, Edwin is a disciple of Ergodic Economics, which has a lot of very interesting stuff to say about the entire problem of utility maximisation) First he commented that the max(median) = max(log) relationship is only true for a long sequence of bets, i.e. asymptotically. We effectively have 2,560 bets (256 business days times ten years) in our ten year return sequence. As I said originally, I framed this as a typical asset optimisation problem rather than a one off bet (or small number of one off bets). He then gives an example of a one off bet decision where the median would be inappropriate: 1. 100% win $1 2. 51% win $0 / 49% win $1'000'000 The expected values (mean expectation) are $1 and $490,000 respectively, but the medians are $1 and $0. But any sane person would pick the second option. My retort to this is essentially the same as before - this isn't something that could realistically happen in a long sequence of bets. Suppose we are presented with making the bet above every single week for 5 weeks. The distribution of wealth outcomes for option 1 is single peaked - we earn $5.
The distribution of wealth outcomes for option 2 will vary from $0 (with probability 3.4%) to $5,000,000 (with a slightly lower probability of 2.8% - I am ignoring 'compounding', eg the possibility to buy more bets with money we've already won), with a mean of $2.45 million. But the median is pretty good: $2 million. So we'd definitely pick option 2. And that is with just 5 bets in the sequence. So the moment we are looking at any kind of repeating bet, the law of large numbers gets us closer and closer to the median being the optimal decision. We are just extremely unlikely to see the sort of payoff structure in the bet shown in a series of repeated bets. Now what about the example I posted:

1. An investment that lost $1,000 99 times out of 100; and paid out $1,000,000 1% of the time
2. An investment that is guaranteed to gain $9,000

Is it realistic to expect this kind of payoff structure in a series of repeated bets? Well consider instead the following:

1. An investment that lost $1 most of the time; and paid out $1,000,000 0.001% of the time
2. An investment that is guaranteed to gain $5

The means of these bets are ~$9 and $5, and the medians are -$1 and $5. Is this unrealistic? Well, these sorts of payoffs do exist in the world - they are called lottery tickets (albeit it is rare to get a lottery ticket with a $9 positive mean!). And this is something closer to the SBF example, since I noted that he would have to be looking at somewhere north of the 0.01% quantile to choose 5x Kelly leverage. Now what happens if we run the above as a series of 5000 repeated bets (again with no compounding for simplicity). We end up with the following distributions:

1. An investment that lost $5,000 95.1% of the time, and makes $1 million or more 5% of the time
2. An investment that is guaranteed to gain $25,000

Since there is no compounding we can just multiply up the individual numbers to get the means ($45,000 and $25,000 respectively). The medians are -$5,000 and $25,000.
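The 5000-bet lottery numbers can be checked directly (win probability 0.001% and payouts as stated above; no compounding):

```python
# Single bet: lose $1 with prob 1 - p, win $1,000,000 with prob p
p = 0.00001          # 0.001% chance of winning
n = 5000             # number of repeated bets

p_no_win = (1 - p) ** n                  # chance we lose every single bet
mean_single = p * 1_000_000 - (1 - p) * 1
mean_total = n * mean_single             # no compounding, so just multiply up

print(f"lose every bet: {p_no_win:.1%}")          # ~95.1%: we end down $5,000
print(f"mean of one bet: ${mean_single:.2f}")     # ~$9
print(f"mean over {n} bets: ${mean_total:,.0f}")  # ~$45,000
```

Because the no-win probability is above 50%, the median outcome of the whole sequence is indeed a loss of $5,000, despite the much higher mean.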
Personally, I still prefer option 2! You might still prefer option 1 if spending $5,000 on lottery tickets over 10 years reflects a small proportion of your wealth, but I refer you to the previous discussion on this topic. So I would argue that in a long run of bets we are more likely in real life to get payoff structures of the kind I posited, than the closer to 50:50 bet suggested by Edwin. Ultimately, I think we agree that for long sequences of bets the median makes more sense (with a caveat). I personally think long run decision making is more relevant to most people than one off bets. What is the caveat? Edwin also said that the choice of the median is 'arbitrary'. I disagree here. The median is 'what happens half the time'. I still think for most people that is a logical reference point for 'what I expect to happen', as well as in terms of the maths: both median and mean are averages after all. I personally think it's fine to be more conservative than this if you are risk averse, but not to be more aggressive - bear in mind that will mean you are betting at more than Kelly. But anyway, as Matt Hollerbach, whose original series of tweets inspired this post, said: "The best part of Robs framework is you don't have to use the median, 50%. You could use 60% or 70% or 40% if your more conservative. And it intuitively tells you what the chance of reaching your goal is. You don't get duped into a crazy long shot that the mean might be hiding in." (typos corrected from original tweet) This fits well into my general framework for thinking about uncertainty. Quantify it, and be aware of it. Then if you still do something crazy/stupid, well at least you know you're being an idiot...

Few people are brave enough to put their entire net worth into a CTA fund or home grown trend following strategy (my fellow co-host on the TTU podcast, Jerry Parker, being an honorable exception with his 'Trend following plus nothing' portfolio allocation strategy).
Most people have considerably less than 100% - and I include myself firmly in that category. And it's probably true that most people have less than the sort of optimal allocation that is recommended by portfolio optimisation engines. Still it is a useful exercise to think about just how much we should allocate to trend following, at least in theory. The figure that comes out of such an exercise will serve as both a ceiling (you probably don't want any more than this), and a target (you should be aiming for this). However any sort of portfolio optimisation based on historical returns is likely to be deeply flawed. I've covered the problems involved at length before, in particular in my second book and in this blogpost, but here's a quick recap:

1. Standard portfolio optimisation techniques are not very robust
2. We often assume normal distributions, but financial returns are famously abnormal
3. There is uncertainty in the parameter estimates we make from the data
4. Past returns distributions may be biased and unlikely to repeat in the future

As an example of the final effect, consider the historically strong performance of equities and bonds in a 60:40 style portfolio during my own lifetime, at least until 2022. Do we expect such a performance to be repeated? Given it was driven by a secular fall in inflation from high double digits, and a resulting fall in interest rates and equity discount rates, probably not. Importantly, a regime change to lower bond and equity returns will have varying impact on a 60:40 long only portfolio (which will get hammered), a slow trend following strategy (which will suffer a little), and a fast trend following strategy (which will hardly be affected). Consider also the second issue: non Gaussian return distributions. In particular equities have famously negative skew, whilst trend following - especially the speedier variation - is somewhat positive in this respect.
Since skew affects optimal leverage, we can potentially 'eat' extra skew in the form of higher leverage and returns. In conclusion then, some of the problems of portfolio optimisation are likely to be especially toxic when we're looking at blends of standard long only assets combined with trend following. In this post I'll consider some methods we can use to alleviate these problems, and thus come up with a sensible suggestion for allocating to trend following. If nothing else, this is a nice toy model for considering the issues we have when optimising, something I've written about at length eg here. So even if you don't care about this problem, you'll find some interesting ways to think about robust portfolio optimisation within. Credit: This post was inspired by this tweet. Some very messy code, with hardcoding galore, is here.

The assets

Let's first consider the assets we have at our disposal. I'm going to make this a very simple setup so we can focus on what is important whilst still learning some interesting lessons. For reasons that will become apparent later, I'm limiting myself to 3 assets. We have to decide how much to allocate to each of the following three assets:

• A 60:40 long only portfolio of bonds and equities, represented by the US 10 year and S&P 500
• A slow/medium speed trend following strategy, trading the US 10 year and S&P 500 future with equal risk allocation, with a 12% equity-like annualised risk target. This is a combination of EWMAC crossovers: 32,128 and 64,256
• A relatively fast trend following strategy, trading the US 10 year and S&P 500 future with equal risk allocation, with a 12% annualised risk target. Again this is a combination of EWMAC crossovers: 8,32 and 16,64

Now there is a lot to argue with here. I've already explained why I want to allocate separately to fast and slow trend following; as it will highlight the effect of secular trends.
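A minimal sketch of the EWMAC crossover rule mentioned above, showing only the raw crossover (the real strategy would also normalise by volatility and scale and cap the forecast; the span values are the ones quoted in the post):

```python
def ewma(series, span):
    """Exponentially weighted moving average, pandas-style span parameter."""
    alpha = 2.0 / (span + 1.0)
    out, prev = [], series[0]
    for x in series:
        prev = alpha * x + (1 - alpha) * prev
        out.append(prev)
    return out

def ewmac(prices, lfast, lslow):
    """Raw EWMAC forecast: fast EWMA of price minus slow EWMA of price."""
    fast = ewma(prices, lfast)
    slow = ewma(prices, lslow)
    return [f - s for f, s in zip(fast, slow)]

# On a steadily rising price series the crossover should go positive,
# since the fast average tracks the trend more closely than the slow one
prices = [100 + 0.5 * t for t in range(300)]
forecast = ewmac(prices, 16, 64)
print(forecast[-1] > 0)
```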
The reason for the relatively low standard deviation target is that I'm going to use a non risk adjusted measure of returns, and if I used a more typical CTA style risk (25%) it would produce results that are harder to interpret. You may also ask why I don't have any commodities in my trend following fund. But what I find especially interesting here is the effect on correlations between these kinds of strategies when we adjust for long term secular trends. These correlations will be dampened if there are other instruments in the pot. The implication of this is that the allocation to a properly diversified trend following fund running futures across multiple asset classes will likely be higher than what is shown here.

Why 60:40?

Rather than 60:40, I could directly try and work out the optimal allocation to a universe of bonds and equities separately. But I'm taking this as exogenous, just to simplify things. Since I'm going to demean equity and bond returns in a similar way, this shouldn't affect their relative weightings. 50:50 risk weights on the mini trend following strategies is more defensible; again I'm using fixed weights here to make things easier and more interpretable. For what it's worth the allocation within trend following for an in sample backtest would be higher for bonds than for equities, and this is especially true for the faster trading strategy. Ultimately three assets makes the problem both tractable and intuitive to solve, whilst giving us plenty of insight.

Characteristics of the underlying data

Note I am going to use futures data even for my 60:40, which means all the returns I'm using are excess returns. Let's start with a nice picture: So the first thing to note is that the vol of the 60:40 is fairly low at around 12%; as you'd expect given it has a chunky allocation to bonds (vol ~6.4%). In particular, check out the beautifully smooth run from 2009 to 2022.
The two trading strategies also come in around the 12% annualised vol mark, by design. In terms of Sharpe Ratio, the relative figures are 0.31 (fast trading strategy), 0.38 (long only) and 0.49 (slow trading strategy). However as I've already noted, the performance of the long only and slow strategies is likely to be flattered by the secular trends in equities and bonds seen since 1982 (when the backtest starts). Correlations matter, so here they are:

          60:40   Fast TF   Slow TF
60:40      1.00     -0.02      0.25
Fast TF   -0.02      1.00      0.68
Slow TF    0.25      0.68      1.00

What about higher moments? The monthly skews are -1.44 (long only), 0.08 (slow) and 0.80 (fast). Finally what about the tails? I have a novel method for measuring these which I discuss in my new book, but all you need to know is that a figure greater than one indicates a non-normal distribution. The lower tail ratios are 1.26 (fast), 1.35 (slow) and 2.04 (long only); whilst the uppers are 1.91 (fast), 1.74 (slow) and 1.53 (long only). In other words, the long only strategy has nastier skew and worse tails than the fast trading strategy, whilst the slow strategy comes somewhere in between. To reiterate, again, the performance of the long only and slow strategies is likely to be flattered by the secular trends in equities and bonds, caused by valuation rerating in equities and falling interest rates in bonds. Let's take equities. The P/E ratio in September 1982 was around 9.0, versus 20.1 now. This equates to 2.0% a year in returns coming from the rerating of equities. Over the same period US 10 year bond yields have fallen from around 10.2% to 4.0% now, equating to around 1.2% a year in returns. I can do a simple demeaning to reduce the returns achieved by the appropriate amounts. Here are the demeaned series with the original backadjusted prices. First S&P: And for US10: What effect does the demeaning have? It doesn't significantly affect standard deviations, skew, or tail ratios.
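The equity rerating figure can be reproduced with simple arithmetic (assuming roughly 40 years between September 1982 and when the post was written):

```python
pe_start, pe_end = 9.0, 20.1
years = 40  # Sept 1982 to roughly when the post was written

# Annualised return contribution from the P/E rerating alone:
# the geometric rate at which the multiple expanded
rerating = (pe_end / pe_start) ** (1 / years) - 1
print(f"{rerating:.1%} a year from equity rerating")  # ~2.0%
```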
But it does affect the Sharpe Ratio:

            Original   Demean   Difference
Long only       0.38     0.24        -0.14
Slow TF         0.49     0.41        -0.08
Fast TF         0.31     0.25        -0.06

This is exactly what we would expect. The demeaning has a larger effect on the long only 60:40, and to a lesser extent the slower trend following. And the correlation is also a little different:

          60:40   Fast TF   Slow TF
60:40      1.00     -0.06      0.18
Fast TF   -0.06      1.00      0.66
Slow TF    0.18      0.66      1.00

Both types of trend have become slightly less correlated with 60:40, which makes sense.

The optimisation

Any optimisation requires (a) a utility or fitness function that we are maximising, and (b) a method for finding the highest value of that function. In terms of (b) we should bear in mind the comments I made earlier about robustness, but let's first think about (a). An important question here is whether we should be targeting a risk adjusted measure like Sharpe Ratio, and hence assuming leverage is freely available, which is what I normally do. But for an exercise like this a more appropriate utility function will target outright return and assume we can't access leverage. Hence our portfolio weights will need to sum to exactly 100% (we could relax this to allow for the possibility of holding cash, though this is unlikely to be optimal). It's more correct to use geometric return, also known as CAGR, rather than arithmetic mean, since that is effectively the same as maximising the (log) final value of your portfolio (Kelly criterion). Using geometric mean also means that negative skew and high kurtosis strategies will be punished, as will excessive standard deviation. By assuming a CAGR maximiser, I don't need to worry about the efficient frontier; I can maximise for a single point. It's for this reason that I've created TF strategies with similar vol to 60:40. I'll deal with uncertainty by using a resampling technique.
Basically, I randomly sample with replacement from the joint distribution of daily returns for the three assets I'm optimising for, to create a new set of account curves (this will preserve correlations, but not autocorrelations. This would be problematic if I was using drawdown statistics, but I'm not). For a given set of instrument weights, I then measure the utility statistic (CAGR) for the resampled returns. I repeat this exercise a few times, and then I end up with a distribution of CAGR for a given set of weights. This allows us to take into account the effect of uncertainty. Finally we have the choice of optimisation technique. Given we have just three weights to play with, and only two degrees of freedom, it doesn't seem too heroic to use a simple grid search. So let's do that.

Some pretty pictures

Because we only have two degrees of freedom, we can plot the results on a 2-d heatmap. Here are the results for the median CAGR, with the original set of returns before demeaning: Sorry for the illegible labels - you might have to click on the plots to see them. The colour shown reflects the CAGR. The x-axis is the weight for the long only 60:40 portfolio, and the y-axis for slow trend following. The weight to fast trend following will be whatever is left over. The top diagonal isn't populated since that would require weights greater than 1; the diagonal line from top left to bottom right is where there is zero weight to fast trend following; top left is 100% slow TF and bottom right is 100% long only. Ignoring uncertainty then, the optimal weight (brightest yellow) is 94% in slow TF and 6% in long only. More than most people have! However note that there is a fairly large range of yellow CAGRs that are quite similar. The 30% quantile estimate for the optimal weights is a CAGR of 4.36, and for the 70% quantile it's 6.61.
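The resampling procedure described above can be sketched with the standard library alone. This is a toy version using made-up daily returns; in practice you would resample the real joint return history and grid-search over the weights:

```python
import random
import statistics

def cagr(daily_returns, days_per_year=256):
    """Geometric annualised return from a list of daily returns."""
    total = 1.0
    for r in daily_returns:
        total *= 1.0 + r
    years = len(daily_returns) / days_per_year
    return total ** (1.0 / years) - 1.0

def portfolio_returns(joint_returns, weights):
    """Daily portfolio returns for fixed weights (ignoring rebalancing costs)."""
    return [sum(w * r for w, r in zip(weights, day)) for day in joint_returns]

def resampled_cagr_dist(joint_returns, weights, n_draws=100):
    """Bootstrap: sample whole days with replacement, which keeps the
    cross-asset correlation structure but not any autocorrelation."""
    n = len(joint_returns)
    dist = []
    for _ in range(n_draws):
        sample = [random.choice(joint_returns) for _ in range(n)]
        dist.append(cagr(portfolio_returns(sample, weights)))
    return dist

random.seed(1)
# Toy joint history: three assets, ten years of small random daily returns
history = [[random.gauss(0.0003, 0.0075) for _ in range(3)] for _ in range(2560)]
dist = resampled_cagr_dist(history, weights=[0.4, 0.5, 0.1])
qs = statistics.quantiles(dist, n=10)
print(f"median CAGR {statistics.median(dist):.2%}, 30%/70% quantiles {qs[2]:.2%}/{qs[6]:.2%}")
```

Comparing the 30% and 70% quantiles of this distribution across weight combinations is exactly the indifference test used for the whitespace plots below.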
Let's say we'd be indifferent between any weights whose median CAGR falls in that range (in practice then, anything whose median CAGR is greater than 4.36). If I replace everything that is statistically indistinguishable from the maximum with white space, and redo the heatmap, I get this: This means that, for example, a weight of 30% in long only, 34% in slow trend following, and 36% in fast trend following is just inside the whitespace and thus is statistically indistinguishable from the optimal set of weights. Perhaps of more interest, the maximum weight we can have to long only and still remain within this region (at the bottom left, just before the diagonal line reappears) is about 80%. Implication: We should have at least 20% in trend following. If I had to choose an optimal weight, I'd go for the centroid of the convex hull of the whitespace. I can't be bothered to code that up, but by eye it's at roughly 40% 60:40, 50% slow TF, 10% fast TF. Now let's repeat this exercise with the secular trends removed from the data. The plot is similar, but notice that the top left has got much better relative to the bottom right; we should have a lower weight to 60:40 than in the past. In fact the optimal is 100% in slow trend following; zilch, nil, zero, nada in both fast TF and 60:40. But let's repeat the whitespace exercise to see how robust this result is: The whitespace region is much smaller than before, and is heavily biased towards the top left. Valid portfolio weights that are indistinguishable from the maximum include 45% in 60:40 and 55% in slow TF (and 45% is the most you should have in 60:40 whilst remaining in this region). We've seen a shift away from long only (which we'd expect), but interestingly no shift towards fast TF, which we might have expected as it is less affected by demeaning. The optimal (centroid, convex hull, yada yada...) is somewhere around 20% 60:40, 75% slow TF and 5% in fast TF.
Summary: practical implications

This has been a highly stylised exercise, deliberately designed to shine a light on some interesting facts and show you some interesting ways to visualise the uncertainty in portfolio optimisation. You've hopefully seen how we need to consider uncertainty in optimisation, and I've shown you a nice intuitive way to produce robust weights. The bottom line then is that a robust set of allocations would be something like 40% 60:40, 50% slow TF, 10% fast TF; but with a maximum allocation to 60:40 of about 80%. If we use data that has had past secular trends removed, we're looking at an even higher allocation to TF, with the maximum 60:40 allocation reducing considerably, to around 45%. Importantly, this has of course been an entirely in sample exercise. Although we've made an effort to make things more realistic by demeaning, much of the result depends on the finding that slow TF has a higher SR than 60:40, an advantage that is increased by demeaning. Correcting for this would result in a higher weight to 60:40, but also to fast TF. Of course if we make this exercise more realistic, it will change these results:

• Improving 60:40 equities - introducing non US assets, and allocating to individual equities
• Improving 60:40 bonds - including more of the term structure, inflation and corporate bonds
• Improving 60:40 by including other non TF alternatives
• Improving the CTA offering - introducing a wider set of instruments across asset classes (there would also be a modest benefit from widening beyond a single type of trading rule)
• Adding fees to the CTA offering

I'd expect the net effect of these changes to result in a higher weight to TF, as the diversification benefit in going from two instruments to say 100 is considerable; and far outweighs the effect of fees and improved diversification in the long only space.
Identify numbered sets of cubes that match a given total up to 5

Select the pictures that show the stated number (1-5) of cubes. Four pictures with sets of cubes are given in each problem. More than one picture may contain the desired number of cubes, so select all that apply.
Real solutions

Written by: Germán Fernández
Category: real solutions

For a component in an ideal or ideally dilute solution, the chemical potential is given by: $$\mu_{i}^{id}=\mu_{i}^{0}+RT\ln x_i$$ Solving for $x_i$. We define the activity of component i in a real solution as: $$a_i=e^{(\mu_i-\mu_{i}^{0})/RT}$$ Activity plays the same role in real solutions as mole fraction does in ideal ones. Therefore, the chemical potential of a component in any solution (ideal or real) is given by:

Convention I. The mole fractions of the components vary over a wide range (we cannot distinguish between solvent and solutes). To define the standard state, we drop the term $RT\ln x_i\gamma_i$ in the equation $\mu_i=\mu_{I,i}^{0}+RT\ln x_i\gamma_{I,i}$. To do this, we choose a state with $\gamma_{I,i}\rightarrow 1$ and $x_i\rightarrow 1$, so that $\ln x_i\gamma_{I,i}\rightarrow 0$. The standard state $\mu_{I,i}^{0}=\mu_{i}^{\ast}$ is defined as component i being pure at the temperature and pressure of the solution. Note that $\mu_i=\left(\frac{\partial G}{\partial n_i}\right)_{T,P,n_{j\neq i}}$ does not depend on the choice of the standard state. However, the activity and the activity coefficient do depend on it.

Excess functions represent the difference between a thermodynamic function of a solution and that function for a hypothetical ideal solution of the same composition. $$G^{E}=G-G^{id}=G-G^{id}+G^{\ast}-G^{\ast}=G-G^{\ast}-(G^{id}-G^{\ast})=\Delta G_{mix}-\Delta G_{mix}^{id}$$ Analogously: $$S^{E}=\Delta S_{mix}-\Delta S_{mix}^{id}$$ $$H^{E}=\Delta H_{mix}-\cancel{\Delta H_{mix}^{id}}$$ $$V^{E}=\Delta V_{mix}-\cancel{\Delta V_{mix}^{id}}$$

The above formalism is useless if we cannot determine the activity coefficients.
Convention I: $$\mu_i=\mu_{I,i}^{0}+RT\ln\gamma_{I,i}x_i\;\;\;\rightarrow\;\;\;\mu_i=\mu_{I,i}^{0}+RT\ln a_{I,i}$$ Raoult's Law, $P_i=x_iP_{i}^{\ast}$, can be generalized to real solutions simply by replacing the mole fraction with the activity, $P_i=a_{I,i}P_{i}^{\ast}$. Solving for the activity, we obtain an equation that allows us to calculate it: $$a_{I,i}=\frac{P_i}{P_{i}^{\ast}}$$ The partial pressure of component i is calculated using Dalton's Law, $P_i=x_{iv}P_T$, and it is necessary to experimentally measure the total pressure, $P_T$, and the composition of the vapor over the solution, $x_{iv}$. $$\gamma_{I,i}=\frac{P_i}{x_{i,l}P_{i}^{\ast}}$$ This last equation allows us to calculate the activity coefficient, once the activity and the composition of the liquid phase are known. Read more: Determination of Activities and Activity Coefficients

Nonvolatile solute activity coefficients can be determined from vapor pressure data using the Gibbs-Duhem equation. We start from the equation that gives us G for a solution as the sum over components of the moles of each component times its chemical potential: $$G=\sum_{i}n_i\mu_i$$ Differentiating this, and writing the Gibbs equation for dG:

The activity coefficients of nonvolatile solutes cannot be determined by measuring the partial pressure of the solute, since it is too small. Instead, the vapor pressure over the solution (the solvent's partial pressure) $P_A$ is measured, and from it the activity coefficient $\gamma_A$ is calculated as a function of the composition of the solution. Using the Gibbs-Duhem equation, the activity coefficient of the solvent is related to that of the solute, $\gamma_B$. We write the Gibbs-Duhem equation and develop it for two components A and B.
$$n_Ad\mu_A+n_Bd\mu_B=0$$ Dividing by the total moles, $n_A+n_B$:

Starting from the chemical potential of a solute according to Convention II: the molality of component i is given by $m_i=\frac{n_i}{n_AM_A}$, where $M_A$ is the molecular weight of the solvent. Since the solvent is very abundant, we can approximate the moles of A by the total moles and suppose that $x_i=\frac{n_i}{n_A}$. Substituting this mole fraction into the chemical potential equation: $$\mu_i=\mu_{II,i}^{0}+RT\ln\left(\gamma_{II,i}m_ix_AM_A\frac{m^{0}}{m^{0}}\right)$$ In this last equation we multiply and divide by $m^0$ in order to split the logarithm into two dimensionless terms. $$\mu_i=\underbrace{\mu_{II,i}^{0}+RT\ln(M_Am^0)}_{\mu_{m,i}^{0}}+RT\ln(\underbrace{x_A\gamma_{II,i}}_{\gamma_{m,i}}m_i/m^0)$$ Read more: Activity coefficients on the molality and molar concentration scales
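As a numeric illustration of Convention I (the pressures and mole fraction below are hypothetical): given a measured partial pressure $P_i$ and the pure-component vapour pressure $P_i^\ast$, the activity and activity coefficient follow directly from the equations above:

```python
# Hypothetical data for component i in a binary solution
P_i = 30.0        # measured partial pressure over the solution (kPa)
P_i_star = 100.0  # vapour pressure of pure i at the same T (kPa)
x_i = 0.25        # mole fraction of i in the liquid phase

a_i = P_i / P_i_star   # activity, from the generalised Raoult's law
gamma_i = a_i / x_i    # Convention I activity coefficient

print(a_i, gamma_i)  # 0.3 1.2 -> gamma > 1: positive deviation from Raoult's law
```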
Using the Preservation of the Inverse Points Property to Find the Image of a Generalized Circle Under an Extended Mobius Transformation

Extended Mobius transformations preserve the inverse points property: if two points are inverse with respect to a generalized circle, then their images are inverse with respect to the image of that circle. We can use this inverse points property to find the equation of the image of a circle with a given equation under an extended Mobius transformation.
A man 23 - math word problem (83872)

A man standing on the deck of a ship, which is 10 m above the water level, observes the angle of elevation of the top of a hill as 60°, and the angle of depression of the base of the hill as 30°. Find the distance of the hill from the ship and the height of the hill.
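A quick numerical check of the standard right-triangle solution (deck 10 m above the water, depression 30° to the hill's base, elevation 60° to its top, both angles measured from the deck):

```python
import math

h_deck = 10.0                        # observer height above water level (m)
dep, ele = math.radians(30), math.radians(60)

# Depression to the base fixes the horizontal distance to the hill
distance = h_deck / math.tan(dep)    # = 10*sqrt(3) ~ 17.32 m

# Elevation to the top gives the height above the deck; add the deck height
height = h_deck + distance * math.tan(ele)

print(f"distance ~ {distance:.2f} m, hill height = {height:.0f} m")
# distance ~ 17.32 m, hill height = 40 m
```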
Topological characterization of fractional quantum hall ground states from microscopic hamiltonians

We show how to numerically calculate several quantities that characterize topological order starting from a microscopic fractional quantum Hall Hamiltonian. To find the set of degenerate ground states, we employ the infinite density matrix renormalization group method based on the matrix-product state representation of fractional quantum Hall states on an infinite cylinder. To study localized quasiparticles of a chosen topological charge, we use pairs of degenerate ground states as boundary conditions for the infinite density matrix renormalization group. We then show that the wave function obtained on the infinite cylinder geometry can be adapted to a torus of arbitrary modular parameter, which allows us to explicitly calculate the non-Abelian Berry connection associated with the modular T transformation. As a result, the quantum dimensions, topological spins, quasiparticle charges, chiral central charge, and Hall viscosity of the phase can be obtained using data contained entirely in the entanglement spectrum of an infinite cylinder.
How do you find two solutions (in degrees and radians) for csc x = (2sqrt3)/3?

Answer 1

Solve $\csc x = \frac{2\sqrt{3}}{3}$.

#csc x = 1/(sin x) = (2sqrt3)/3.#

Find sin x: #sin x = 3/(2sqrt3) = sqrt3/2.#

The trig table of special arcs gives #sin x = sqrt3/2# ---> arc #x = pi/3 (or 60^@)#, and #x = (2pi)/3 (or 120^@)#

Answer 2

To find the solutions for the equation csc(x) = (2√3)/3, you can follow these steps:

1. Recognize that csc(x) = 1/sin(x), so if csc(x) = (2√3)/3, then sin(x) = 3/(2√3) = √3/2.
2. Identify the reference angle θ in the first quadrant: sin(θ) = √3/2 gives θ = 60 degrees or π/3 radians.
3. Since sine is positive in the first and second quadrants, the solutions in one full turn are θ and 180° − θ.
4. Once you find the values of x in radians, convert them to degrees if necessary.

So, the solutions in degrees and radians are:

1. ( x = 60^\circ ) (or ( x = \frac{\pi}{3} ) radians)
2. ( x = 180^\circ - 60^\circ = 120^\circ ) (or ( x = \pi - \frac{\pi}{3} = \frac{2\pi}{3} ) radians)
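The two solutions can be verified numerically, checking that csc x equals 2√3/3 at both 60° and 120°:

```python
import math

target = 2 * math.sqrt(3) / 3        # the given value of csc x

for deg in (60, 120):
    x = math.radians(deg)
    csc = 1.0 / math.sin(x)          # cosecant is the reciprocal of sine
    print(deg, abs(csc - target) < 1e-12)  # True for both solutions
```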
Three Analytical Relations Giving The Speed of Light, The Planck Constant and The Fine-Structure Constant - Journal of Physical Science The fundamental physical constants are at the root of physics theories, but no theoretical framework provides their experimental values. In addition, they are assumed to be independent of each other. Here, we present two valuable dimensionless numbers based on vacuum properties and fundamental constants. The values of these dimensionless numbers provoke questioning, since they are of order 10^1. In particular, they mean that it is possible to build a velocity and a parameter homogeneous to the Planck constant of the same order as the speed of light and the Planck constant respectively, based only on five well-known physical parameters. These formulas are very unlikely to be two coincidences and suggest that the parameters involved depend on each other. They also seem to indicate that light is a material wave and that quantum mechanics is a deterministic theory. A link between these numbers and the fine-structure constant is also established.
Extremely low excess noise InAlAs avalanche photodiodes Tan, C. H. and Goh, Y. L. and Marshall, A. R. J. and Tan, L. J. J. and Ng, J. S. and David, J. P. R. (2007) Extremely low excess noise InAlAs avalanche photodiodes. In: 2007 International Conference on Indium Phosphide and Related Materials, Conference Proceedings. IEEE, Matsue, pp. 81-83. ISBN 978-1-4244-0874-0 Full text not available from this repository. Excess noise factors < 4 at an avalanche gain of 10 were measured on a series of p(+)-i-n(+) InAlAs diodes with avalanche regions ranging from 0.11 µm to 2.53 µm. The extremely low excess noise, corresponding to effective ionization coefficient ratios k of 0.15 < k < 0.25, shows the potential of InAlAs as the multiplication region for avalanche photodiodes. Breakdown voltages obtained from the multiplication characteristics of these diodes showed a linear dependence of breakdown voltage on avalanche width. Using tunnelling parameters derived from current-voltage measurements, together with the ionization coefficients and threshold energies derived from gain and excess noise measurements, our calculations showed that InAlAs avalanche photodiodes have sensitivities of ~28.8 dBm, assuming a rather high pre-amplifier noise of 15 pA Hz^(-1/2). Item Type: Contribution in Book/Report/Proceedings Deposited On: 28 May 2012 10:59 Last Modified: 16 Jul 2024 02:43
The water weakening effect on the progressive slope failure under excavation and rainfall conditions

Slope failure sometimes exhibits progressive characteristics. Initially, a local soil mass will be damaged and form a plastic zone, which will then pull or push the rest to expand the plastic zone, ultimately leading to slope instability (Bastian et al. ; Gong et al. ; Yu et al. ). Landslide development often results from continuous exposure to a single factor, such as rainfall or excavation, or from the sequential impact of multiple inducing factors, including earthquakes and rainfall. Over the past few decades, instances of engineering-slope instability caused by excavation and rainfall have been countless. For example, improper excavation and rainfall-induced softening of weak interlayers led to a gently inclined bedding landslide at a construction site in Bijie City, Guizhou Province, China, resulting in 14 deaths and three injuries on January 3, 2022 (Tao et al. ). Similarly, sustained heavy rainfall triggered slope instability at an excavated cutting slope along the Mawlamyine Highway in Mon State, Myanmar, causing 75 deaths and 27 damaged buildings on August 9, 2019 (Panday et al. ). On September 2, 2018, both rainfall and excavation decreased the effective stress and tensile resistance of soil at the slope toe, damaging the northern slope of Mengdong Town, Yunnan Province, China, leading to 10 deaths and 11 missing persons (Yang et al. ). On September 23, 2017, due to rainfall infiltrating along the cracks formed by excavation and weakening the mechanical parameters of the rock mass, a landslide occurred at the construction site of Sanli Road in Libo County, Guizhou Province, China, resulting in 3 deaths and 6 injuries. The landslide blocked the highway ramp (Zhao et al. ). On August 27, 2014, a high-speed landslide occurred in Fuquan, Guizhou, China; 23 people were killed, 22 were injured, and 77 houses were damaged.
On-site research shows that excavation leads to stress concentration at the middle of the slope, and subsequent rainfall causes an increase in groundwater level, both of which jointly trigger landslides (Lin et al. ). These cases illustrate that slope failure resulting from combined excavation and rainfall has become a significant engineering problem, not only resulting in substantial economic losses but also posing a serious threat to civilian safety. Currently, significant research has been conducted on the mechanism of shallow progressive landslides caused by excavation and rainfall. Scholars have found that excavation unloading can cause stress field adjustment, leading to excessive differences between the major and minor principal stresses of the rock and soil mass, which in turn leads to crack expansion and generation as shear stress exceeds peak strength (Fang et al. ; Feng et al. ; Liang et al. ; Wang et al. ; Xu et al. ; Yang et al. ). Meanwhile, excavation can result in loss of effective mechanical support for the slope, as well as providing an infiltration channel for rainfall (Gong ; Li et al. ; Peng et al. ; Robert ; Shi et al. ). Subsequent rainfall reduces the matric suction and effective stress of unsaturated soils (Fredlund and Lim ; Oh and Lu ; Bishop ), decreasing shear strength parameters while increasing the density of the rock and soil mass (Qi et al. ; He et al. ). Rainfall promotes crack development, drives the sliding surface to expand with the formation of new plastic zones, and rapidly transforms local deformation into a landslide. Infiltration along the excavated slope surface erodes the slope toe, causing failure and tractive landslides. Simultaneously, water accumulating in the slope-crest cracks generates hydrostatic pressure, compressing the rock and soil mass at the rear and in turn resulting in a push-type landslide. However, the mechanism of deep landslides remains a complex topic.
Several studies have shown that the sliding surface in engineering slopes with groundwater often coincides or intersects with the elevated water level replenished by rainfall (Zhang et al. ; Lin et al. ). Additionally, the softening of the sliding zone soil is significantly greater than that of saturated soil, indicating a further reduction in the mechanical parameters of the rock and soil below the water level due to the hydration or water weakening effect (Meng et al. ; Zhu et al. ). Experimental studies on various rock and soil masses under long-term saturation have shown that water causes volumetric expansion of minerals, dissolution of the cement between particles, and expansion of microcracks, leading to a decrease in compressive and tensile strength, elastic modulus, critical strain, and shear strength over time (Zhu et al. ; Zhao et al. ; Chen et al. ; Liu et al. ). Water weakening is now recognized as an important factor inducing disasters in deep soft rock formations, such as large deformation of tunnel surrounding rocks (Yang et al. ; Bian et al. ) and the formation of weak intercalations in slopes (Huang and Gu ; Zhang et al. ). Nevertheless, the study of the impact of hydration on slope stability and the mechanical behavior of engineering rock masses under hydration is still limited. In this study, an engineering slope affected by excavation and rainfall was taken as an example. Firstly, through field investigation and geological survey, shale hydration due to rainfall infiltration was identified as the primary factor contributing to overall slope instability. Secondly, the mechanical parameters of the hydrated rock and soil mass were determined using parameter inversion and formula calculation. Lastly, numerical simulations were conducted to reveal the progressive failure process and mechanism of the slope, as well as the mechanical behavior of the engineering rock and soil mass under hydration.
Overview of the highway slope

Basic characteristics and failure process

The studied highway slope is situated in Guangdong Province, China. The slope is covered with dense vegetation and can be divided into the K158+280~590 and K158+590~700 sections by a gully, as depicted in Fig. (a). The terrain slopes downwards from north to south, with a relative height difference of 90 m, as shown in Fig. (b). The natural slope angle ranges from 16° to 20°, and the inclination is 180°. The area presents a syncline structure; the north and south rock strata dip between 15–53° and 19–61° respectively, both of which are dip strata and not conducive to slope stability.

Slope topography and geomorphic characteristics: (a) UAV image, (b) 3D terrain map

The K158+280~590 section, which was selected as the case slope, experienced progressive failure due to rainfall infiltration on the excavated slope surface. The initial excavation began in November 2015 and ended in April 2016, forming a three-level slope, each level with a slope ratio of 1:1.1 and a height of 10 m. In early May of the same year, a continuous rainfall occurred, lasting 93 days and resulting in an accumulation of 1992 mm. Following the rainfall, the excavated slope collapsed, and 11 underground water springs were revealed from the lower first-level slope to the slope toe, as depicted in Fig.

The landslide characteristics: (a), (d), (f) shear outlets, (b) underground water springs, (c) landslide drumlin, (e), (h) cracks, (g) through cracks, and (i) remote sensing image of landslide area

After the excavated slope failed, a modified construction plan was adopted, which included adjusting each level's slope ratio to 1:1.25, masonry wall protection, and 12 m full-length bonded bolt support. Meanwhile, ditches were also set on the slope shoulder to intercept surface runoff. Unfortunately, the slope slid and significantly deformed just 30 days after construction was completed.
The front toe extruded and uplifted to form a landslide drumlin, as shown in Fig. (c). Shear outlets were found at the lower first-level slope, as depicted in Fig. (a), (d), and (f). At the middle, soil masses from the first and second levels slid, causing the first-level platform to stagger by 0.6 m. Cracks were also observed at the intercepting ditch of the slope shoulder and the second platform, as seen in Fig. (e) and (h), suggesting that the supporting structure was not effective. A large number of through cracks with a width of 50-100 cm were also observed 135–240 m behind the slope shoulder, as shown in Fig. (g). A large-scale landslide from the top to the foot of the slope had formed.

Field investigation and analysis

After the landslide, a geological survey and deformation monitoring were carried out in the slope area. To begin with, based on the locations of the tension cracks and landslide drumlin, the sliding direction is estimated to be 178°. Next, to explore the groundwater level and strata, a total of 22 geological boreholes were drilled along the sliding direction and route, forming three geological sections K158+360, 430, and 500, as well as one cross-section, as shown in Fig. (i). Furthermore, in order to determine the sliding surface, 16 monitoring holes (BPC1~14) were arranged in the geological boreholes and their vicinity, and inclinometers were used to measure the movement of the sliding body, as shown in Fig. (a), (b) and (c).

Field investigation: (a) the geological cross section, (b), (c), (d) K158+360, 430, and 500 sections, (e), (f), (g) cores of argillaceous sandstone, carbonaceous shale and argillaceous limestone, and (h) groundwater seepage at the lower first-level slope

The drilling results show that the strata are gently inclined from north to south and basically parallel to the terrain from east to west. The slope is covered by argillaceous sandstone, carbonaceous shale, and argillaceous limestone from top to bottom.
The sandstone cores are broken and soft, showing no apparent signs of immersion, as depicted in Fig. (e). The shale cores are earthy and wet, displaying obvious slip marks, as illustrated in Fig. (f). The limestone cores are incomplete and hard, without sliding marks on the surface, as shown in Fig. (g). Groundwater levels were observed 32 times, 16 each for the initial and stable water levels, all of which were located in the lower carbonaceous shale. Groundwater was found to be gushing out at the front of the K158+430 section, with a daily inflow of 0.72 tons. The location is close to the water springs, indicating that the groundwater level hardly changed after the local failure, as shown in Figs. (b) and (h). Combined with the preliminary investigation report, it can be speculated that the initial groundwater level was located in the upper argillaceous limestone and rose to the lower carbonaceous shale after being recharged by rainfall in May 2016. To initially determine the position of the sliding surface and monitor the further development of the landslide, a total of 16 deep displacement monitoring points were installed on the slope. Meanwhile, inclinometers were utilized to track the changes in slope displacement. Deep displacement monitoring began on October 6th, 2016, after the landslide, and ended on November 28th. BPCX1-5 monitoring holes are arranged at the K158+360 section. Based on the inflection points of the displacement curves, the sliding surfaces are inferred at 12.2, 22.7, 28.6, 23.8, and 22 m below the surface, with cumulative displacements of 60.1, 57.4, 49.4, 18.3, and 24.2 mm. BPCX7-12 monitoring holes are set up at the K158+430 section; the sliding surface is inferred at 18, 24.9, 26.7, 25.7, 28.3, and 28.9 m deep, with cumulative displacements of 64.4, 53.1, 22.6, 24.1, 37.6, and 41.2 mm.
BPCX14-16 boreholes are arranged at the K158+500 section; the sliding surface is inferred at 31.3, 26, and 30.1 m deep, with cumulative displacements of 43.5, 20.9, and 28 mm. BPCX6 and BPCX13 are arranged at the cross-section; the sliding surface is inferred at 29.7 and 28.2 m deep, with cumulative displacements of 35.7 and 31.5 mm. Based on the displacement inflection points and the core data with sliding traces obtained from drilling exploration, the depth of the sliding surface was completely determined, as shown in Fig. (a)~(d). The landslide type was a bedding landslide. The sliding surface rapidly extended downwards to the carbonaceous shale and cut out from the first-level slope. This result is consistent with the obvious scratches on the carbonaceous shale cores, indicating that the sliding zone is the lower carbonaceous shale. Since the depth and cumulative deformation of the slip surface at the K158+430 section are the largest, and the landslide drumlin and tensile cracks are located nearby, it can be basically determined that this section is the main axis of the landslide. Considering that the lower shale is extremely soft and humid, water should be the main inducing factor. During the initial excavation stage, continuous rainfall caused the groundwater level to rise to the lower part of the carbonaceous shale and remain unchanged. Meanwhile, the modified excavated surface and slope top were covered with mortar rubble and vegetation, and no water flow was found in the gullies. These indicate that the slope is unlikely to have been damaged by consolidation or rainfall infiltration. Thus, it is highly likely that the landslide is due to the water weakening effect, which means that the strength of the sliding zone soil decreased and underwent creep deformation under 30 days of immersion. To verify this hypothesis, it is necessary to analyze the slope stability under hydration.
Parameters and Numerical configuration

To analyze the stability of slopes under hydration, it is imperative to first study the influence of water on the mechanical parameters of shale. Previous studies have suggested that prolonged immersion can result in erosion and dissolution of the cement between shale particles, leading to weakened cementation and decreased cohesive force of the rock mass (Wong et al. ; Ewy ; Jiang et al. ). Moreover, the hydration film on mineral surfaces becomes thicker, which lubricates the contact surfaces between particles and reduces the internal friction angle (Leng et al. ; Zhao et al. ; Mao et al. ). Additionally, hydration can cause particles to expand and fall off, resulting in crack expansion, pore increase, and a reduction in the elastic modulus of shale (Bian et al. ; Li et al. ; Chen et al. ). Therefore, this study first determines the shear strength and elastic modulus at the time the overall landslide occurred, namely after 30 days of hydration, and then reproduces the hydration process in the finite element simulation by setting a parametric weakening function.

Inversion of shear strength parameters

Due to differences in mineral composition, weathering degree, and structure, the shear strength parameters of hydrated shale vary greatly. To eliminate this variability, parameter inversion is conducted using Geo-studio software based on the limit equilibrium method (Nguyen ; Ishii et al. ; Shinoda et al. ). This involves continuously reducing the cohesive force and internal friction angle in a fixed proportion until the slope factor of safety falls within the target. Since landslide characteristics had already formed without significant mass movement, it can be basically determined that the slope slightly breaks the limit equilibrium state (FoS ≤ 1) and has not yet entered the stage of severe sliding (Wang et al. ).
Considering that the result obtained from two-dimensional analysis is conservative due to the soil arching effect (Liu et al. ), the target safety factor for parameter inversion is set to 1, and the inversion is conducted on the sliding surfaces of the K158+360, 430, and 500 sections after the modified excavation. The excavated slope is reinforced by 12 m full-length bonded anchor rods with a design bond strength of 360 kPa, a bearing capacity of 300 kN, and a shear bearing capacity of 240 kN. Referring to experimental research (Yang et al. ; Zhu et al. ; Kang ; Wang ), the cohesion and internal friction angle of the carbonaceous shale below the water level are reduced in a ratio of 2:1. The saturation parameters are selected as the initial values, and the rest of the rock and soil mass still uses natural parameters. When one of the three sections reaches a safety factor of 1 through reverse calculation, the reduced shear strength parameters are taken as the hydrated parameters. It is worth noting that the shear strength parameters obtained through parameter inversion are not only used to calculate the safety factor but are also considered the actual material properties of the hydrated rock and soil mass. Table 1 shows the physical and mechanical parameters, with the parameters in parentheses representing the saturation parameters obtained from geotechnical tests.

Table 1 Physical and mechanical parameters of the geomaterials

Formation              | Elastic modulus (kPa) | Poisson's ratio | Weight (kN/m^3) | Cohesive force C (kPa) | Internal friction angle φ (°)
Shaly sandstone        | 1.5×10^5              | 0.31            | 21              | 29.4                   | 19
Carbonaceous shale     | 6.1×10^5 (5.7×10^5)   | 0.27            | 22 (23.5)       | 37 (31)                | 24 (23)
Argillaceous limestone | 8.3×10^5 (7.6×10^5)   | 0.29            | 23.5 (24)       | 80 (76)                | 31 (29)

The inverted parameters are presented in Table 2.
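The stepped reduction used in the inversion can be sketched as follows (a hypothetical Python helper; the function name and the step sizes of 4 kPa per 2° are read off the trial rows reported later in Table 2, not taken from the authors' code):

```python
def reduction_schedule(c0=31.0, phi0=23.0, dc=4.0, dphi=2.0, c_min=15.0):
    """Yield trial (cohesion kPa, friction angle deg) pairs.

    Cohesion is stepped down 4 kPa for every 2 deg of friction angle
    (the 2:1 reduction ratio), starting from the saturated values,
    until the floor c_min is reached.
    """
    c, phi = c0, phi0
    while c >= c_min:
        yield c, phi
        c, phi = c - dc, phi - dphi

trial_pairs = list(reduction_schedule())
# -> [(31, 23), (27, 21), (23, 19), (19, 17), (15, 15)]
```

Each trial pair would then be fed to the limit-equilibrium solver until the computed factor of safety reaches the target of 1.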
When the cohesive force and internal friction angle of the weakened carbonaceous shale are 15 kPa and 15°, the critical factors of safety for the K158+360, 430 and 500 sections are 1.007, 0.998 and 1, respectively, as shown in Fig. . Therefore, this set of cohesion and internal friction angle values is regarded as the water-weakened parameters of carbonaceous shale after 30 days of hydration.

Table 2 Parameter inversion of water-weakened carbonaceous shale

Reduction ratio between C and φ | C (kPa) | φ (°) | FoS of K158+360 section | FoS of K158+430 section | FoS of K158+500 section
Initial parameters              | 31      | 23    | 1.331                   | 1.265                   | 1.283
2:1                             | 27      | 21    | 1.22                    | 1.113                   | 1.167
                                | 23      | 19    | 1.113                   | 1.054                   | 1.094
                                | 19      | 17    | 1.058                   | 1.025                   | 1.05
                                | 15      | 15    | 1.007                   | 0.998                   | 1

Final sliding surfaces of parameter inversion: (a) K158+360 section, (b) K158+430 section, and (c) K158+500 section

Elastic modulus under hydration

Based on mechanical tests of shale, the reduction rates of peak strength and elastic modulus are relatively similar within 30 days of immersion (Li et al. ). The ratio of the strength and modulus of immersed shale to those of dried shale can be used to indicate the water weakening effect on the mechanical parameters (Chen et al. ):

$$\text{K}=\frac{{\sigma }_{w}}{{\sigma }_{c}}=\frac{{E}_{w}}{{E}_{c}}$$

where \(K\) is the softening coefficient, \({\sigma }_{w}\) and \({E}_{w}\) are the peak strength and elastic modulus of the water-immersed shale, respectively, and \({\sigma }_{c}\) and \({E}_{c}\) are the peak strength and elastic modulus of the dried shale, respectively.
As the peak strength is commonly utilized to represent the shear strength, the elastic modulus of the immersed rock can be determined by multiplying the ratio of the shear strengths of the immersed and dried shales by the elastic modulus of the dried rock:

$${E}_{w}=\frac{{\tau }_{w}}{{\tau }_{c}}\times {E}_{c}$$

where \({\tau }_{w}\) and \({\tau }_{c}\) are the shear strengths of the immersed and dried shale, respectively. The shear strength parameters of the immersed shale are based on the previous inversion results, \({c}_{w}\) = 15 kPa and \({\varphi }_{w}\) = 15°. The geological prospecting data provides the elastic modulus and shear strength parameters of the dried shale, \({E}_{c}\) = 6.1×10^5 kPa, \({c}_{c}\) = 37 kPa and \({\varphi }_{c}\) = 24°. Based on the Mohr–Coulomb strength theory, a soil element in the hydrated carbonaceous shale was considered as the object. It is assumed that the maximum principal stress \({\sigma }_{1}\) on the element is vertical while the minimum principal stress \({\sigma }_{3}\) is horizontal. As the argillaceous sandstone and carbonaceous shale above the underground water level are dry, the pore water can be disregarded under the weight of the thick overlying masses. When shear failure of the element occurs, the shear strength and the maximum and minimum principal stresses can be expressed as follows:

$$\left\{\begin{array}{l}{\sigma }_{1}=\gamma z\\ {\sigma }_{3}={\sigma }_{1}{\text{tan}}^{2}\left({45}^{\circ }-\frac{\varphi }{2}\right)-2c\,\text{tan}\left({45}^{\circ }-\frac{\varphi }{2}\right)\\ \tau =\frac{1}{2}\left({\sigma }_{1}-{\sigma }_{3}\right)\text{sin}\,2\alpha \end{array}\right.$$

where \(\alpha\) is the included angle between the fracture surface and the direction of the maximum principal stress, \(\alpha ={45}^{\circ }+\frac{\varphi }{2}\). The average thickness of the argillaceous sandstone and carbonaceous shale above the water level was first estimated to calculate the maximum principal stress (477.17 kPa) using formula ( ).
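The system above can be evaluated numerically before plugging in the paper's values. The sketch below (Python; the function name is mine, the formulas and inputs come from the text) recovers τ_c ≈ 148.0 kPa, τ_w ≈ 105.9 kPa, and E_w ≈ 4.4×10^5 kPa; the small difference from the reported τ_w = 106.02 kPa appears to be rounding in intermediate values:

```python
import math

def shear_strength(sigma1, c, phi_deg):
    """Mohr-Coulomb shear strength on the failure plane of an element
    whose vertical major principal stress is sigma1 (kPa)."""
    t = math.tan(math.radians(45.0 - phi_deg / 2.0))
    sigma3 = sigma1 * t**2 - 2.0 * c * t          # Rankine relation
    alpha = math.radians(45.0 + phi_deg / 2.0)    # failure-plane angle
    return 0.5 * (sigma1 - sigma3) * math.sin(2.0 * alpha)

sigma1 = 477.17                                        # overburden stress, kPa
tau_c = shear_strength(sigma1, c=37.0, phi_deg=24.0)   # dried shale
tau_w = shear_strength(sigma1, c=15.0, phi_deg=15.0)   # 30-day hydrated shale
E_w = tau_w / tau_c * 6.1e5                            # softened modulus, kPa
```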
Then, the strength parameters of the immersed and dried shale were introduced separately into formula ( ), and the shear strengths \({\tau }_{w}\) and \({\tau }_{c}\) were calculated in combination with the maximum principal stress: \({\tau }_{w}\) = 106.02 kPa and \({\tau }_{c}\) = 147.98 kPa. Finally, the shear strengths and the elastic modulus of the dried shale were inserted into formula ( ) to derive the elastic modulus of the immersed shale, \({E}_{w}\) = 4.4×10^5 kPa. Compared with the elastic modulus of dried carbonaceous shale, the modulus of shale after 30 days of immersion decreased by 28.4%, which is similar to the modulus loss rates of 25.05% and 31.5% measured by uniaxial and triaxial tests, respectively (Bian et al. ; Zhao et al. ). Compared to short-term saturated carbonaceous shale, its elastic modulus decreases by 22.8%.

Numerical configuration

For the analysis of slope failure, this study utilized the stress-seepage-slope construction stage group in the two-dimensional finite element software Midas-GTX (MIDAS Information Technology Co., Ltd.), and the factor of safety was calculated using the strength reduction method. The numerical model was configured according to the landslide's main section (K158+430). The left and right boundaries were supported by vertical sliding bearings, while fixed bearings were applied to the bottom boundary. The model adopts quadrilateral elements with a total of 64,131 elements. The initial excavation and the modified excavation plus anchor rod support were each completed in one construction stage. The rainfall is simulated using a transient seepage module; the flow boundary (21.42 mm/day), obtained by dividing the total rainfall by the duration, is applied to the initial excavation slope and road, and the initial groundwater level is set in the argillaceous limestone.
The parameters of the soil seepage characteristics include the unsaturated permeability coefficient and the soil–water characteristic curve (Fredlund ); the latter was calculated using the Van Genuchten model (Parker et al. ) and combined with the saturated permeability coefficient to obtain the former. The V-G model parameters and saturated permeability coefficients listed in Table 3 were determined based on experimental studies of seepage in carbonaceous shale, argillaceous sandstone (Tong ; Moazeni-Noghondar et al. ), and argillaceous limestone (Xu et al. ; Wang ). As the rock masses below the initial water level were saturated throughout, their permeability characteristics were not considered. The governing equation of the V-G water content model is as follows:

$${\theta }_{w}={\theta }_{r}+\frac{{\theta }_{s}-{\theta }_{r}}{{\left[1+{\left(\frac{\psi }{a}\right)}^{n}\right]}^{m}}$$

where \({\theta }_{w}\) is the volumetric water content, \({\theta }_{r}\) is the residual volumetric water content, \({\theta }_{s}\) is the saturated volumetric water content, \(\psi\) is the negative pore water pressure, and \(a\), \(n\), and \(m\) are curve-fitting parameters.

Table 3 The V-G model parameters and saturated permeability coefficients

Types of rock and soil layers | a (kPa) | n    | m      | θ_r   | θ_s   | k_s (m/s)
Shaly sandstone               | 38.92   | 1.60 | 0.3742 | 0.054 | 0.275 | 7.16×10^-4
Carbonaceous shale            | 24.88   | 1.50 | 0.3316 | 0.12  | 0.326 | 5.48×10^-5
Argillaceous limestone        | 12.82   | 3.39 | 0.7050 | 0.218 | 0.323 | 1.12×10^-5

The simulated conditions were set in the following order: in-situ stress balance → initial excavation → rainfall → modified excavation and support → hydration. At the end of the rainfall stage, the shear strength and elastic modulus of the carbonaceous shale below the simulated saturation line (final groundwater level) were replaced with the saturation parameters as the initial values for the subsequent hydration analysis.
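The V-G water-content function is straightforward to evaluate directly. A minimal sketch (Python; the function name is my own, the parameter values are the carbonaceous shale row of Table 3):

```python
def vg_water_content(psi, a, n, m, theta_r, theta_s):
    """Van Genuchten volumetric water content at matric suction psi (kPa)."""
    return theta_r + (theta_s - theta_r) / (1.0 + (psi / a) ** n) ** m

# carbonaceous shale: a = 24.88 kPa, n = 1.50, m = 0.3316
shale = dict(a=24.88, n=1.50, m=0.3316, theta_r=0.12, theta_s=0.326)
theta_low = vg_water_content(1.0, **shale)     # near theta_s at low suction
theta_high = vg_water_content(500.0, **shale)  # drains toward theta_r
```

As expected for a soil-water characteristic curve, the water content stays between θ_r and θ_s and decreases monotonically with suction.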
By subtracting the parameter values on the 30th day of hydration from their initial values and dividing by the duration, the daily decreases in cohesion, internal friction angle, and elastic modulus during hydration were found to be 0.53 kPa, 0.27°, and 4,333 kPa, respectively. Based on the above data, parameter weakening functions with time (up to 30 days) as the independent variable were constructed, integrated into the saturated carbonaceous shale material, and activated in the final construction stage. Then, the stability of the slope under hydration can be analyzed.

Result and Analyses

The following study employs numerical calculations to analyze and discuss the simulation results. Specifically, the slope stability and deformation characteristics are presented and analyzed in the first and second sections. The third section focuses on analyzing the mechanical behavior of carbonaceous shale under hydration, using curves of plastic strain over time, maximum shear stress versus displacement, maximum shear stress over time, and major principal stress over time. In the fourth part, the failure mechanism of the slope and the existing supports are studied, and an improved support scheme is proposed.

Slope deformation process and stability

Considering the large number of time steps involved in the hydration analysis, representative simulation results for the 10th (initial), 20th (middle), and 30th (late) days are displayed alongside the results of the two excavation conditions and the rainfall condition. Figure presents the effective plastic strain evolution. Over the course of construction, the scope of slope failure shifted from shallow to deep, while the failure type transitioned from local collapse to overall bedding sliding. Following the initial excavation, a plastic zone developed along the excavated slope, with its maximum value located at the slope toe. During the rainfall stage, the plastic strain extended from the toe to the upper excavated slope, indicating a traction landslide pattern.
Additionally, a new plastic zone formed in the middle of the slope, extending towards the slope top and originating from the saturated shale.

Evolution process of slope plastic strain under different analysis conditions: (a) initial excavation, (b) rainfall, (c) modified excavation and support, (d), (e) and (f) represent 10, 20 and 30 days of hydration

After modifying the excavation and implementing slope support, the plastic strain at the middle and rear of the slope disappeared, while a shallow plastic zone appeared from the slope toe to the second-level slope. However, carbonaceous shale hydration caused the slope to deform and slide once again. By the 10th day of hydration, the shallow plastic zone had transformed into a deep-seated sliding surface that circumvented the support structure and cut along the hydration layer. Simultaneously, the plastic zone from the hydration layer in the middle extended towards the slope top. On the 20th day of hydration, both the front and rear plastic zones expanded; the front plastic strain increased significantly, and the rear plastic zone began extending downward along the weakened shale, with a tendency to connect to the front sliding surface. By the 30th day of hydration, the front and rear plastic zones had fully connected, forming a large landslide mass that spanned the slope from top to toe. Figure displays the variation of the slope factor of safety during each modeling stage. During the construction stages, slope stability initially decreased rapidly, followed by a brief increase and a subsequent decrease. After 30 days of hydration, the slope factor of safety reached 0.996, close to the factor of safety of 0.998 obtained from the parameter back analysis. This indicates good consistency between the numerical simulation and the inversion. According to Wang et al.
( ), who summarized the relationship between deformation and factors of safety, cutting slopes can be categorized into the creeping stage (1.05 < FoS < 1.1), the extrusion stage (1.02 < FoS < 1.05), the sliding stage (0.98 < FoS < 1.02), and the sudden slip stage (0.95 < FoS < 0.98). The following section presents an analysis of the slope failure characteristics based on these four stages.

Factor of safety at each stage of the slope

Slope failure characteristics

Figure illustrates that the slope is stable at the initial stage, with a factor of safety of 1.229. Following the initial excavation, the factor of safety decreased to 1.085, indicating the onset of the creeping stage. Deformation concentrated in the surface from the middle first-level to the lower fourth-level slope, as shown in Fig. (a). Additionally, a shallow sliding surface occurred from the top of the excavated slope to the slope toe, as depicted in Fig. (a). In the rainfall stage, the slope skipped the extrusion stage and directly entered the sliding stage, with the factor of safety reduced to 1.015. Rainfall infiltrated along the excavated slope, creating a transient saturation zone in the front and raising the groundwater level in the lower shale. Meanwhile, the rest of the slope remained unsaturated, with suction pressure values increasing from bottom to top, as demonstrated in Fig. (b). This resulted in the loss of effective stress in the front soil mass and a reduction of the shale shear strength parameters, leading to a traction landslide of the excavated slope and deformation of the middle and rear soil mass, as depicted in Fig.

Deformation and seepage characteristics of the slope in the initial excavation and rainfall stages: (a) total displacement after initial excavation, (b) pore water pressure at 93-day of the rainfall, and (c) total displacement after rainfall

Following the modified excavation and support, the slope stability greatly improved, with the factor of safety rising to 1.092, approaching the stable stage.
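The four-stage classification introduced above reduces to a small lookup (a hypothetical Python helper; the thresholds are Wang et al.'s, the function and labels for values outside the four ranges are mine):

```python
def failure_stage(fos):
    """Map a factor of safety to the cutting-slope deformation stage."""
    if fos >= 1.10:
        return "stable"
    if fos > 1.05:
        return "creeping"        # 1.05 < FoS < 1.1
    if fos > 1.02:
        return "extrusion"       # 1.02 < FoS < 1.05
    if fos > 0.98:
        return "sliding"         # 0.98 < FoS < 1.02
    if fos > 0.95:
        return "sudden slip"     # 0.95 < FoS < 0.98
    return "failed"

# the factors of safety reported in this study, in order:
# initial, excavation, rainfall, day 10 / day 20 / day 30 of hydration
stages = [failure_stage(f) for f in (1.229, 1.085, 1.015, 1.072, 1.041, 0.996)]
```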
The plastic strain nephogram showed the appearance of a shallow plastic zone from the middle to the lower part of the excavated slope, located within the anchoring range of the support structure, which did not result in sliding failure. In the hydration stage, slope stability exhibited an accelerated decreasing trend. By the 10th day of hydration, the cohesion, internal friction angle, and elastic modulus of the hydration layer had decreased to 82.9% (25.7 kPa), 88.3% (20.3°), and 92.4% (5.27×10 kPa) of their initial values, respectively. The slope remained in the creeping phase, with a factor of safety of 1.072, only 1.9% lower than in the previous stage. However, the stress and deformation characteristics changed significantly. Positive effective stress concentrated in the middle to rear of the hydration layer, while surface values oscillated from positive to negative, as depicted in Fig. (b), indicating compression and tension occurring in the deep and shallow layers, respectively. The plastic zone of the excavated slope shifted from shallow to deep, with the sliding surface bypassing the anchor rod and cutting along the exposed position of the lower shale, as shown in Fig. (d). This rendered the support structure ineffective and caused deformation of the first-level slope. Furthermore, due to the weakened shale parameters, the middle and rear soil mass exhibited displacement exceeding 6 mm, as shown in Fig.

Stress and deformation characteristics under hydration: (a), (c), (e) total displacement on the 10th, 20th, and 30th day of hydration; (b), (d), (f) mean effective stress on the 10th, 20th, and 30th day of hydration

On the 20th day of hydration, the factor of safety decreased to 1.041, and the slope entered the extrusion phase. Cohesion and internal friction angle reduced to 65.8% (20.4 kPa) and 76.5% (20.4°) of their initial values, respectively. Additionally, the elastic modulus decreased to 84.7% (4.83×10 kPa), significantly exacerbating progressive sliding deformation.
Stress concentration from the middle to rear of the hydration layer rapidly extended downwards, forming a compressive stress band as illustrated in Fig. (d). Concurrently, tensile stress in the shallow layer of the excavated slope slightly increased. The stress variation, combined with the expansion and increase of plastic strain in Fig. (e), indicates traction sliding deformation in the excavated slope and the development of a translational landslide along the hydration layer. Compared to the 10th day of hydration, the deformation range of the excavated slope greatly extended, from the first level to the middle of the third-level slope, with the average displacement increasing from 63.5 mm to 111.8 mm, an increase of 76%. The deformation area of the middle and rear weathered layer extended downward, and displacement near the excavated slope gradually decreased, as depicted in Fig. At the end of the hydration stage, the slope exhibited instability. The factor of safety sharply dropped to 0.996, reaching the sliding failure stage. The cohesion and internal friction angle reached their minimum values, only 48.4% (15 kPa) and 65.2% (15°) of the initial values, disrupting the torque balance of the weathered layer. Both the compressive stress zone and the plastic strain zone connected to the front excavated slope, as shown in Figs. (f) and (f). A large bedding landslide from slope toe to top had completely formed. The first-to-second-level slope displacement exceeded half a meter, consistent with the 0.6 m stagger deformation observed on the second-level platform. Comparing the simulation results with the investigation shows that the simulated sliding range is basically consistent with the sliding surface on the K158+430 section. The rear edges of the two are located 235 m and 239 m behind the slope shoulder, respectively, and the simulated deformation (50–80 mm) also matches the width of the tension crack (50–100 mm).
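The stepwise parameter reductions quoted in the simulation (for example, cohesion retained at 82.9%, 65.8%, and 48.4% of its initial value at 10, 20, and 30 days) are close to linear in time. A small sketch of how such a hydration-weakening schedule could be interpolated; the day-15 query and the back-calculated initial cohesion of 31 kPa (implied by 25.7 kPa being 82.9% of it) are our own illustrations:

```python
def interp_fraction(day, samples):
    """Piecewise-linear interpolation of a retained-parameter
    fraction between reported hydration-day sample points."""
    days = sorted(samples)
    if day <= days[0]:
        return samples[days[0]]
    for lo, hi in zip(days, days[1:]):
        if day <= hi:
            t = (day - lo) / (hi - lo)
            return samples[lo] + t * (samples[hi] - samples[lo])
    return samples[days[-1]]

# Retained fractions reported by the simulation (hydration day: fraction).
cohesion = {0: 1.0, 10: 0.829, 20: 0.658, 30: 0.484}
friction = {0: 1.0, 10: 0.883, 20: 0.765, 30: 0.652}

C0 = 31.0  # kPa; implied by 25.7 kPa being 82.9% of the initial cohesion
print(f"cohesion at day 15: {interp_fraction(15, cohesion) * C0:.1f} kPa")
```

Checking the schedule against the text: day 20 gives 0.658 × 31 ≈ 20.4 kPa and day 30 gives 0.484 × 31 ≈ 15 kPa, matching the values quoted for the extrusion and failure stages.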
In addition, the positions of the shear outlets are also close, located at the slope toe and the lower first-level slope. Considering that both the simulated and actual sliding zones are carbonaceous shale below the water level, it can be confirmed that the overall sliding is induced by hydration, demonstrating that the simulation has reproduced the failure process.

Mechanical behavior of shale under hydration

Based on the simulation results, as hydration time increases, the mechanical parameters of the lower shale decrease linearly, while the sliding deformation and the rate of decline in the factor of safety increase sharply. The relationship between the mechanical parameters of the hydration layer and slope stability therefore appears non-linear. To clarify the slope failure mechanism, it is necessary to analyze the mechanical behavior of the shale. Therefore, four measurement points, one every 50 m along the middle layer of the weakened shale, are selected, named Front, Middle, Middle-rear, and Rear, to extract total displacement, mean effective stress, maximum shear stress, and shear strain. The figure illustrates the curves of plastic strain vs. time, mean effective stress vs. time, maximum shear stress vs. displacement, and maximum shear stress vs. shear strain.

Mechanical characteristics of shale under hydration: (a) plastic strain vs. time, (b) maximum shear stress vs. displacement, (c) maximum shear stress vs. time, (d) mean effective stress vs. time

Panels (a) and (b) illustrate the three-stage deformation of hydrated shale. Unlike the instantaneous creep-creep stability/attenuation-accelerated creep process observed in shear tests and uniaxial compression tests (Fang et al. ; Cai et al. ; Zhu et al. ), the strain–time curve increases gradually. During the initial stage (1–10 days), the deformation rate remains nearly constant at zero or a low level, with no significant increase in plastic strain. In the intermediate stage (10–20 days), the deformation rate slowly increases.
During the later stage (20–30 days), the deformation rate increases sharply until the slope fails. The maximum shear stress exhibits a nonlinear relationship with total displacement; as shear stress increases, displacement initially increases slowly, then accelerates before increasing sharply. This characteristic resembles the variation pattern of slope sliding failure and factor of safety at the three time points (10, 20, and 30 days) under hydration, indicating that water-weakening has a significant impact on slope stability. Considering Fig. (c) and (d) together, it is evident that during the initial stage the shale exhibits anelasticity, with plastic strain lagging behind the stress change. As a result of the shear strength reduction, the shear stress increased significantly, with an average increment of 98 kPa across the measuring points by the 10th day of hydration. Concurrently, the effective stress changed at the front, rear, and middle-rear measuring points: the front recorded tensile stress increasing from -12.5 kPa to -38.1 kPa, while the rear and middle-rear points indicated compressive stress, with values increasing from 18.47 kPa and 24.69 kPa to 120.81 kPa and 228.97 kPa, respectively. However, the plastic strain remained relatively constant. Although the slight reduction of the shale mechanical parameters altered the slope stress state, the residual shear strength enabled the hydrated layer to remain stable, and the high elastic modulus restricted deformation. During the intermediate stage, moderate increases in plastic strain are observed. The growth rate of shear stress at each measuring point slows down, and the effective tensile/compressive stress exhibits obvious growth, suggesting that tensile-shear and compressive-shear deformation occur at the front and the middle-to-rear part, respectively. In the later stage, the shear stress-time curve remains almost constant, while the plastic strain continues to increase.
Figure (b) shows that when the maximum shear stress exceeds 270 kPa, the displacement rises sharply, indicating that the hydrated shale has reached its yield limit and is rapidly damaged. The damaged soil masses at the middle to rear push forward, increasing the compressive stress and causing the front failure mode to transition from tensile-shear failure to compressive-shear failure.

Failure mechanism and reinforcement

Based on the slope failure characteristics and the mechanical behavior of hydrated shale, it can be observed that the combined excavation and rainfall led to the initial excavated slope collapse. During the modified excavation stage, hydration accelerated the reduction of the factor of safety and increased deformation, which caused the slope damage to progress from partial to overall, eventually resulting in the alteration of the slope failure mode from excavated slope collapse to a deep-seated landslide and an overall bedding landslide. Due to the continuous reduction of mechanical parameters, tensile-shear and compressive-shear failures first occurred in the front and rear of the lower shale. The front failure developed from front to rear, forming a deep sliding surface that caused traction sliding deformation of the modified excavated slope. The rear failure developed from rear to front, causing the compressive stress zone and sliding surface to connect with the front, resulting in a bedding landslide of the hydration layer. In the process of progressive failure, the sliding surface extended from front to back before connecting from back to front, thereby exhibiting a composite sliding failure characterized by traction and push landslides. In terms of treatment effect, the modified excavation and full-length bonded anchor rod support improved the stability in the early stages of hydration. Additionally, mortar rubble protection and intercepting and drainage ditches effectively blocked the infiltration of rainwater.
However, these measures only prevent the excavated slope collapse. For deep, complex landslide disasters caused by hydration, the anchor rod cannot cross the sliding surface to produce an anchoring effect. Furthermore, slope protection and drainage facilities can only prevent the rise of groundwater levels and cannot curb the weakening of mechanical parameters caused by hydration. The construction and design units treated the slope based only on the deformation and failure characteristics that had already occurred, without comprehensively analyzing the potential failure mechanisms, including stress, strain, and seepage characteristics. As a result, failure of the support and protection structure was inevitable. Given the gentle inclination and longitudinal length (320 m) of the slope stratum, the thrust generated by the bedding landslide is significant. To address this issue, the author proposes adopting a scheme of prestressed anchor cables and double-row portal anti-slide piles to support the front excavated slope. The pile is poured with C30 concrete, with a length of 27 m, embedded 11.6 m into the bedrock. The cross-sectional size is 2 m × 2 m, with a 6 m spacing and a middle connecting beam width of 2 m. The prestressed anchor cable is made of 7 steel strands with a diameter of 15.2 mm and a single tensile strength of 1860 MPa. The prestress force is 700 kN, and the spacing between the anchor cables is 4 m × 4 m. The anchoring section length is 10 m, mostly embedded in the landslide. By simulating the proposed support scheme under hydration, the slope factor of safety after 30 days of hydration is 1.38, meeting the requirement of China's Code for Design of Highway Subgrades that the factor of safety exceed 1.25. The compressive stress band of the hydration layer appears only at the rear, and the maximum deformation observed is 0.27 mm, indicating that the support structure effectively controls deformation, as shown in Fig. (a) and (b).
Stress and deformation of slope with proposed reinforcement under 30 days of hydration: (a) total displacement, and (b) mean effective stress

(1) To analyze the impact of water weakening on slope stability, an engineering slope disturbed by excavation and rainfall was taken as an example. Through on-site investigation and geological survey, it was evident that the hydration of the shale below the water level induced the overall slope instability. Based on parameter inversion and formula calculations, the elastic modulus and shear strength parameters of the hydrated shale were determined and used to simulate the gradual weakening process. The actual failure process of the slope was numerically reproduced, and the resulting simulation was compared against the on-site landslide characteristics. The slope failure mechanism under hydration was subsequently analyzed. The results indicate that hydration accelerates the reduction of slope stability and increases deformation, thereby causing traction- and push-type landslides along the front and rear of the hydrated shale layer. (2) Unlike laboratory tests that yield instantaneous creep-creep stability/attenuation-accelerated creep processes, the plastic strain of the slope shale layer under hydration reveals a three-stage process over time: initial weak growth, medium-term accelerated growth, and a late sharp increase. Additionally, the maximum shear stress and total displacement exhibit a similar nonlinear relationship. The shale exhibits anelasticity in the early stages of hydration, and its plastic strain remains unchanged as the shear and effective stresses grow. During the middle stage, as the plastic strain gradually increases, tensile-shear and compressive-shear deformation occur in the front and rear of the shale, respectively. In the later stage, the shale reaches its yield limit of maximum shear stress (270 kPa) and ultimately fails.
Meanwhile, the damaged masses at the rear and middle compress the front, resulting in a transition of the failure mode to compressive-shear failure. (3) The failure reasons of the slope protection structure in this case were analyzed. A prestressed anchor cable support scheme for hydration was proposed, and its feasibility was verified through numerical simulation. The results show that when dealing with engineering slopes containing groundwater, the rise of the groundwater level caused by excavation and rainfall infiltration can shift slope damage from local to overall, while the sliding range converts from shallow to deep. Comprehensive measures such as excavation, protection, and surface waterproofing and drainage engineering can hardly prevent and control deep landslides caused by hydration effectively. Additionally, the design approach that considers only the displacement characteristics of the shallow masses while ignoring deep stress, strain, and seepage characteristics has significant shortcomings. The present work was financially supported by the UK Research and Innovation (UKRI) (Grant No. EP/Y02754X/1) and the UK Engineering and Physical Sciences Research Council (EPSRC) New Investigator Award (Grant No. EP/V028723/1). The authors declare that they have no known competing financial interests or personal relationships that could appear to influence the work reported here. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit
Hybrid residual fatigue life prediction approach for gear based on Paris law and particle filter with prior crack growth information

Gear has been widely used in modern industry, and gear reliability is important to the driving system, which makes residual fatigue life prediction for a gear crucial. In order to predict the residual fatigue life of the gear accurately, a hybrid approach based on the Paris law and particle filter is proposed in this paper. The Paris law is usually applied to predict the residual fatigue life, and accurate model parameters allow a more realistic prediction. Therefore, a particle filtering model is utilized to assess both model parameters and gear crack size simultaneously. As a data-driven method, the particle filter describes the dynamical behavior of model parameter updating and gear crack growth, whereas the Paris law, as a model-based method, characterizes the gear's crack growth according to its physical properties. The integration of the Paris law and particle filter is proposed as a hybrid approach, which is suitable for nonlinear and non-Gaussian systems, can update the parameters online, and makes full use of the prior information. Finally, case studies performed on gear tests indicate that the proposed approach is effective in tracking the degradation of the gear and accurately predicts the residual gear fatigue life.

1. Introduction

Gear is one of the key components in a driving system, and it has been widely used in modern industry. Gear reliability is important to system safety, and it is necessary to investigate the allowable crack size present in the gear to avoid a sudden system failure [1]. Therefore, it is important to estimate the remaining gear fatigue life within the shortest time, which is helpful for monitoring the health conditions of machines and estimating whether they can accomplish their routine tasks [2].
Currently, machine prediction approaches can roughly be categorized into model-based and data-driven approaches [3]. The Paris law is a widely used model-based prognostic approach, and it has been proven that gear crack-growth rates follow it [4]. In addition, fatigue life prediction models in most literature sources are based on the traditional Paris law and are usually established based on monitoring and estimation of well-known direct damage indicators such as crack size [5]. Therefore, fatigue life prediction based on the Paris law has received considerable attention in recent years [6-10], and it has been applied to a range of applications, including axial flow compressors [1], girth gear-pinion assemblies [11], ball bearings [2], and interfacial cracked plates [12]. However, it is hard to acquire real-time data of crack lengths and flaw sizes without interrupting machine operation, and model parameters which affect the model behavior are often unknown and need to be identified as part of the prediction process. In order to realize fatigue life prediction based on the Paris law, a variety of improved methods have been proposed. Dong Xu et al. [2] proposed two improved Paris models based on the intrinsic mode function (IMF) involving the fault characteristic frequency, which has a trend consistent with the diameter of flaws. Yuning Qian et al. [13] integrated enhanced phase space warping with a Paris crack growth model to propose a multi-time-scale approach for bearing defect tracking and residual useful life prediction. Ben Abdessalem et al. [14] predicted fatigue crack growth by a Markov process associated with deterministic crack laws, which provides a reliable prediction and can be an efficient tool for safety analysis of structures in a large variety of engineering applications. The Paris model depends on its parameters for accurate prediction of the fatigue life.
Many contributions to the parameter estimation of the Paris law have been presented in the published literature. In most cases, the Paris law parameters can be derived from fracture mechanics tests [15]; however, crack growth in structures depends on many factors, such as the amplitude, stress ratio, or frequency of the load, which are difficult to estimate correctly. The extended Kalman filter (EKF) has been proposed as an estimation method for identifying the parameters of the Paris law used as a fatigue-crack-length growth model under loading cycles [16]. However, it is difficult to estimate the parameters in nonlinear situations. The particle filter is widely used in engineering, and it is especially suitable for processing nonlinear and non-Gaussian systems [5, 17, 18]. Hence, we take the particle filtering technique as an inference method inside a dynamic Bayesian network to assess both model parameters and damage states simultaneously. In this paper, a hybrid approach based on the Paris law and particle filter is proposed to predict the residual gear fatigue life with prior crack growth information, which considers the gear fracture mechanics and makes full use of the prior information to improve the prediction performance. First, the gear degradation process is described by the Paris law. Then, the PF model is proposed as an estimation method for parameter identification. As a result, the integration of the data-driven approach and the model-based approach is used for residual fatigue life prediction, which makes full use of the advantages of the two approaches and makes the prediction more accurate. The rest of this paper is organized as follows. In Section 2, the Paris law and particle filter are introduced separately. In Section 3, the residual fatigue life prediction based on the hybrid approach is proposed, and the particle filter is used to update the unknown parameters of the Paris model.
Then, the implementation steps of the hybrid prediction approach are discussed. In Section 4, a test of gear crack growth is used to validate the proposed method. Finally, the conclusions are drawn in Section 5.

2. Introduction of Paris law and PF model

2.1. Short overview of Paris law

In 1963, Paris et al. [19] proposed the Paris law based on fracture mechanics, which can reflect the failure mechanism of materials and is usually applied as a method to predict fatigue life or residual fatigue life. The stress intensity factor (SIF) plays a key role in the process of fatigue crack propagation, and a number of tests have been carried out to study its influence on crack propagation. It has been found that for the first loading mode the SIF increases with the crack depth, while for the second loading mode the SIF decreases with the crack depth. The measured crack propagation rate $d\alpha/dN$ varied similarly to the SIF amplitude. Based on the above, the Paris law for fatigue crack propagation under constant amplitude loading was proposed:

$\frac{d\alpha}{dN}=C\left(\Delta K\right)^{m}, \qquad \Delta K=\Delta\sigma\sqrt{\pi\alpha},$

where $\alpha$ represents the crack size, $N$ represents the number of stress cycles, $\Delta K$ is the range of the stress intensity factor, $\Delta\sigma$ is the stress range, and $C$ and $m$ are the parameters of the Paris law, which are usually determined by material constants.

2.2. Particle filter model

The particle filter employs Monte Carlo simulation of a state dynamic model and Bayesian estimation to estimate the posterior probability density function (PDF) of the state. Therefore, the method is an effective tool for nonlinear and non-Gaussian systems.
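As a concrete illustration, the Paris law can be integrated forward with an explicit Euler step. The loading range and parameter values below are those quoted later for the gear test ($\Delta\sigma = 78$ MPa, $m = 3.8$, $C = 1.5\times10^{-10}$, crack size in meters), and the 50-cycle increment mirrors the measurement interval; this is a sketch, not the paper's solver:

```python
import math

def paris_step(a, dN, C, m, dsigma):
    """One explicit-Euler step of the Paris law:
    da/dN = C * (dK)**m, with dK = dsigma * sqrt(pi * a)."""
    dK = dsigma * math.sqrt(math.pi * a)
    return a + C * dK ** m * dN

# Values in the spirit of the gear test of Section 4.
a, C, m, dsigma = 0.01, 1.5e-10, 3.8, 78.0
history = [a]
for _ in range(50):              # 50 increments of dN = 50 -> 2500 cycles
    a = paris_step(a, 50, C, m, dsigma)
    history.append(a)
print(f"crack size after 2500 cycles: {a:.4f} m")
```

The growth is slow at first and accelerates as $a$ (and hence $\Delta K$) increases, reproducing the characteristic convex crack-growth curve.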
The state equation and measurement equation of the particle filter are expressed as follows [20]:

$\left\{\begin{array}{l}x_{k}=f\left(x_{k-1},\theta_{k-1},n_{k-1}\right),\\ y_{k}=h\left(x_{k},\omega_{k}\right),\end{array}\right.$

where $x_{k}$ is the damage state to be estimated, $k$ is the time step index, $\theta_{k-1}$ is a vector of model parameters, $y_{k}$ is the measurement data, $n_{k}$ and $\omega_{k}$ are the process and measurement noise, respectively, and $f$ and $h$ represent the known process and observation functions, respectively. The probability density function of the updated unknown parameters can be obtained from Bayes' theorem:

$p\left(\Theta|z\right)\propto L\left(z|\Theta_{prior}\right)p\left(\Theta_{prior}\right),$

where $\Theta$ is the set of unknown parameters, $z$ is a vector of monitoring data, $p\left(\Theta_{prior}\right)$ is the prior PDF of the parameters, $p\left(\Theta|z\right)$ is the posterior PDF of the parameters given the observations, and $L\left(z|\Theta_{prior}\right)$ is the likelihood given the parameters $\Theta_{prior}$. When the initial PDF of the parameters is given, in order to obtain the posterior probability density function $p\left(\theta_{k}|z_{0:k}\right)$ of the unknown parameters, the prediction equation is defined as [21]:

$p\left(\theta_{k}|z_{0:k-1}\right)=\int p\left(\theta_{k}|\theta_{k-1},z_{0:k-1}\right)p\left(\theta_{k-1}|z_{0:k-1}\right)d\theta_{k-1}=\int p\left(\theta_{k}|\theta_{k-1}\right)p\left(\theta_{k-1}|z_{0:k-1}\right)d\theta_{k-1},$

where the notation $0:k-1$ means the set of values from cycle 0 to cycle $k-1$. The new observations $z_{k}$ are collected at time $k$.
According to the Bayesian rule, the posterior probability density of the unknown parameters updates over time, so the posterior probability distribution of the current parameters is obtained [21]:

$p\left(\theta_{k}|z_{0:k}\right)=\frac{p\left(\theta_{k}|z_{0:k-1}\right)p\left(z_{k}|\theta_{k}\right)}{p\left(z_{k}|z_{0:k-1}\right)},$

where $p\left(z_{k}|z_{0:k-1}\right)=\int p\left(\theta_{k}|z_{0:k-1}\right)p\left(z_{k}|\theta_{k}\right)d\theta_{k}$ is a normalizing constant. The above are the prediction and update steps for the parameters of the Paris model. In this paper, we utilize Bayesian filtering and a Monte Carlo algorithm to carry out the parameter updating. Assuming that the observations $z_{0:k}$ are known, the posterior probability density of the parameters $\theta_{0:k}$ can be expressed as follows [21]:

$p\left(\theta_{0:k}|z_{0:k}\right)=\int p\left(\xi_{0:k}|z_{0:k}\right)\delta\left(\xi_{0:k}-\theta_{0:k}\right)d\xi_{0:k},$

where $\delta\left(\cdot\right)$ is the Dirac delta measure, $\xi_{0:k}$ is the prior vector of parameters, $\theta_{0:k}$ is the set of parameter vectors at cycles 0 to $k$, and $p\left(\xi_{0:k}|z_{0:k}\right)$ is the prior probability density function. If we could sample from the true posterior probability density function $p\left(\theta_{0:k}|z_{0:k}\right)$, Eq. (6) could be approximated by Eq. (7) [21]:

$\hat{p}\left(\theta_{0:k}|z_{0:k}\right)=\frac{1}{N_{s}}\sum_{i=1}^{N_{s}}\delta\left(\theta_{0:k}-\theta_{0:k}^{i}\right),$

where $\theta_{0:k}^{i}$, $i=1,2,\ldots,N_{s}$, are independent random samples drawn from $p\left(\theta_{0:k}|z_{0:k}\right)$ and $N_{s}$ is the number of such samples. In practice, however, $p\left(\theta_{0:k}|z_{0:k}\right)$ is non-standard and difficult to sample from directly.
To calculate the posterior probability density function of the parameters of the Paris model, the importance sampling method is adopted in this paper. In this way, we draw samples from an importance distribution $\pi\left(\theta_{0:k}|z_{0:k}\right)$ that has the same support as $p\left(\theta_{0:k}|z_{0:k}\right)$. Therefore, Eq. (7) can be transformed as follows [21]:

$p\left(\theta_{0:k}|z_{0:k}\right)=\int \pi\left(\xi_{0:k}|z_{0:k}\right)\frac{p\left(\xi_{0:k}|z_{0:k}\right)}{\pi\left(\xi_{0:k}|z_{0:k}\right)}\delta\left(\xi_{0:k}-\theta_{0:k}\right)d\xi_{0:k}\approx \frac{1}{N_{s}}\sum_{i=1}^{N_{s}}w_{k}^{*i}\delta\left(\theta_{0:k}-\theta_{0:k}^{i}\right),$

where $w_{k}^{*i}=\frac{p\left(z_{0:k}|\theta_{0:k}^{i}\right)p\left(\theta_{0:k}^{i}\right)}{p\left(z_{0:k}\right)\pi\left(\theta_{0:k}^{i}|z_{0:k}\right)}$ are the importance weights of the parameter samples $\theta_{0:k}^{i}$, $i=1,2,\ldots,N_{s}$, drawn from $\pi\left(\theta_{0:k}^{i}|z_{0:k}\right)$, and $p\left(z_{0:k}|\theta_{0:k}^{i}\right)$ is the likelihood of the observations.
To obtain the weights, we use $p\left(z_{0:k}\right)=\int p\left(z_{0:k}|\theta_{0:k}\right)p\left(\theta_{0:k}\right)d\theta_{0:k}$, and the probability distribution $p\left(\theta_{0:k}|z_{0:k}\right)$ can be expressed as follows:

$\hat{p}\left(\theta_{0:k}|z_{0:k}\right)=\sum_{i=1}^{N_{s}}\tilde{w}_{k}^{*i}\delta\left(\theta_{0:k}-\theta_{0:k}^{i}\right),$

where $\tilde{w}_{k}^{*i}=\frac{w_{k}^{i}}{\sum_{j=1}^{N_{s}}w_{k}^{j}}$ and $w_{k}^{i}=\frac{p\left(z_{0:k}|\theta_{0:k}^{i}\right)p\left(\theta_{0:k}^{i}\right)}{\pi\left(\theta_{0:k}^{i}|z_{0:k}\right)}=w_{k}^{*i}p\left(z_{0:k}\right)$.

3. Hybrid prediction approach for gear

3.1. Improved Paris law based on PF

In order to make full use of the advantages of the data-driven method and the model-based method, the integration of the Paris law and the PF model is proposed in this paper. The approach considers both the gear degradation process and the prior information: it describes the degradation process with a simple physical model and improves the residual fatigue life prediction accuracy using the prior information, namely the measured gear crack sizes shown in Table 1. Based on the introduction of the Paris law and PF model above, the state transition equation for the Paris law is established as follows:

$\alpha_{k}=C_{k}\left(\Delta\sigma\sqrt{\pi\alpha_{k-1}}\right)^{m_{k}}dN+\alpha_{k-1}.$

The model parameters $m_{k}$ and $C_{k}$, as well as the degradation state $\alpha_{k}$, are estimated using the gear crack growth information $z_{k}$ under constant amplitude loading. Once the state transition equation and the crack growth prior information are defined, the parameters $m_{k}$ and $C_{k}$ can be estimated over time with the PF model. Moreover, the parameters can be estimated online as new gear crack growth information is added.
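The normalization step $\tilde{w}_{k}^{i}=w_{k}^{i}/\sum_{j}w_{k}^{j}$ is mechanical; a minimal sketch follows, together with the effective-sample-size diagnostic commonly used to decide when to resample (the ESS criterion is a standard particle-filter heuristic, not something the paper specifies):

```python
def normalize(weights):
    """Normalize raw importance weights so they sum to one."""
    s = sum(weights)
    return [w / s for w in weights]

def effective_sample_size(norm_w):
    """ESS = 1 / sum(w_i**2); values far below the particle count
    signal weight degeneracy and a need to resample."""
    return 1.0 / sum(w * w for w in norm_w)

w = normalize([0.2, 1.0, 0.5, 0.3])
print([round(x, 3) for x in w])           # normalized weights
print(round(effective_sample_size(w), 2)) # ESS out of 4 particles
```

With these toy weights the ESS is below 3 out of 4 particles, i.e. one dominant weight already carries much of the posterior mass.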
In this paper, the parameters of the improved Paris law are obtained by maximum likelihood estimation, and the likelihood function is as follows:

$L\left(z_{k}|\alpha_{k}^{i},m_{k}^{i},C_{k}^{i}\right)=\frac{1}{z_{k}\sqrt{2\pi}\,\zeta_{k}^{i}}\exp\left[-\frac{1}{2}\left(\frac{\ln z_{k}-\lambda_{k}^{i}}{\zeta_{k}^{i}}\right)^{2}\right],\quad i=1,\dots,n,$

where $\zeta_{k}^{i}=\sqrt{\ln\left[1+\left(\frac{\sigma}{\alpha_{k}^{i}\left(m_{k}^{i},C_{k}^{i}\right)}\right)^{2}\right]}$ and $\lambda_{k}^{i}=\ln\left[\alpha_{k}^{i}\left(m_{k}^{i},C_{k}^{i}\right)\right]-\frac{1}{2}\left(\zeta_{k}^{i}\right)^{2}$.

3.2. Implementation steps of hybrid prediction approach

In the prediction process, the integration of the Paris model and particle filter is proposed to predict the residual fatigue life of the gear with prior crack growth information. The implementation steps of the proposed approach are shown in Fig. 1 and described as follows:

Step 1. Decide on the model-based prognostic method according to the object of study and the raw measurement data. We choose the Paris law as the model-based method in this paper.

Step 2. Define the parameters which characterize the damage behavior. The model parameters which affect the model behavior are often unknown and need to be identified in the prediction process.

Step 3. PF is employed as the data-driven model to update the parameters when new observations appear.

Step 4. If there is a new observation, go back to Step 3; otherwise, go to Step 5.

Step 5. The residual fatigue life at the current time is obtained from the updated parameters of the Paris law.

The proposed hybrid approach considers the model-based prognostics and the data-driven model at the same time, which can improve the prediction performance.

Fig. 1. Implementation of hybrid prediction approach

4. Case studies and discussion
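The steps above can be sketched as a compact particle-filter loop in which each particle carries the damage state and the Paris parameters. The natural-log reading of $\log(C_{0})$, the measurement scale $\sigma = 0.001$, the absence of parameter jitter, and multinomial resampling are our own assumptions, so this is an illustration rather than the authors' exact implementation:

```python
import math, random

random.seed(0)
DSIGMA, DN = 78.0, 50          # loading range (MPa) and cycle increment

def transition(a, m, C):
    """State transition of Eq. (10): crack growth over dN cycles."""
    return a + C * (DSIGMA * math.sqrt(math.pi * a)) ** m * DN

def log_likelihood(z, a, sigma=0.001):
    """Lognormal likelihood of measurement z given predicted crack a
    (the zeta/lambda parameterization of Section 3.1)."""
    zeta = math.sqrt(math.log(1.0 + (sigma / a) ** 2))
    lam = math.log(a) - 0.5 * zeta ** 2
    return -math.log(z * zeta * math.sqrt(2 * math.pi)) \
           - 0.5 * ((math.log(z) - lam) / zeta) ** 2

# Steps 1-2: particles carry the damage state and the Paris parameters.
N = 500
particles = [(random.gauss(0.01, 5e-4),        # alpha_0
              random.gauss(4.0, 0.2),          # m_0
              random.gauss(-22.33, 1.12))      # log C_0 (natural log assumed)
             for _ in range(N)]

def update(particles, z):
    """Steps 3-4: propagate, weight against the new observation z,
    and resample (multinomial resampling, an illustrative choice)."""
    prop = [(transition(a, m, math.exp(lc)), m, lc)
            for a, m, lc in particles]
    logw = [log_likelihood(z, a) for a, _, _ in prop]
    mx = max(logw)
    w = [math.exp(l - mx) for l in logw]   # shift for numerical stability
    return random.choices(prop, weights=w, k=N)

for z in [0.0103, 0.0118, 0.0095, 0.0085, 0.0122]:   # first Table 1 points
    particles = update(particles, z)

mean_a = sum(p[0] for p in particles) / N
print(f"posterior mean crack size: {mean_a:.4f} m")
```

After assimilating the first few measurements the posterior mean tracks the observed crack sizes (around 0.01 m), while the surviving $(m, \log C)$ pairs concentrate on parameter combinations consistent with the observed growth.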
Gear crack growth test

To verify the proposed method, a gear crack growth test is taken as an example for predicting the residual fatigue life. As shown in Fig. 2, there is a through-the-thickness crack on the gear. The crack size is measured every 50 cycles under the loading condition $\Delta\sigma=78$ MPa (see Table 1); the critical threshold of the crack size is 0.0463 m, and the actual fatigue life is 2500 cycles. All the original data are taken from Ref. [20]. In the current paper, the model is established directly on the crack size, without considering the relationship between the gear crack and condition monitoring data such as vibration and temperature. First, the true crack size data are generated according to Eq. (10). The measured crack size data are then generated by multiplying lognormally distributed noise with standard deviation 0.001/$a_k$ (m); it has indeed been shown that the distribution of crack size follows a lognormal distribution [22].

Table 1. Monitoring data for crack growth [20]

Time (cycles):   50     100    150    200    250    300    350    400
Crack size (m):  0.0103 0.0118 0.0095 0.0085 0.0122 0.0110 0.0120 0.0113
Time (cycles):   450    500    550    600    650    700    750    800
Crack size (m):  0.0122 0.0110 0.0124 0.0117 0.0138 0.0127 0.0115 0.0135
Time (cycles):   850    900    950    1000   1050   1100   1150   1200
Crack size (m):  0.0124 0.0141 0.0160 0.0157 0.0149 0.0156 0.0153 0.0155

4.2. Degradation process and parameters updating

As shown in Fig. 3, the measured crack size increases gradually with the number of cycles, which indicates the trend of the gear degradation. It is assumed that the standard deviation of the measurement is known. The initial distributions of the parameters and the likelihood function are, respectively, normal and lognormal, as follows: $\alpha_0\sim N(0.01,\,(5\times10^{-4})^2)$, $m_0\sim N(4,\,0.2^2)$, $\log(C_0)\sim N(-22.33,\,1.12^2)$.

Fig.
3. Monitoring crack size

From the prior gear crack size data, the gear degradation process can be predicted with the proposed hybrid approach and the initial parameter distributions. The state transition for gear crack growth is given by Eq. (10). The future gear degradation can thus be predicted; the predicted crack sizes are listed in Table 2. The predicted gear crack growth is shown in Fig. 4: according to the failure threshold, the gear is close to failure at 2500 cycles, which agrees with the actual residual life of the gear.

Fig. 4. Crack growth prediction

Table 2. Predicted data for crack growth

Time (cycles):   1250   1300   1350   1400   1450   1500   1550   1600
Crack size (m):  0.0165 0.0170 0.0175 0.0180 0.0184 0.0190 0.0196 0.0203
Time (cycles):   1650   1700   1750   1800   1850   1900   1950   2000
Crack size (m):  0.0209 0.0220 0.0223 0.0231 0.0241 0.0250 0.0260 0.0271
Time (cycles):   2050   2100   2150   2200   2250   2300   2350   2400
Crack size (m):  0.0283 0.0296 0.0310 0.0352 0.0343 0.0360 0.0381 0.0405

While the gear degradation process is predicted with the PF model, the parameters of the Paris model are updated simultaneously: whenever a new crack size is obtained, the model parameters are updated from it. Taking the parameter estimation at 1200 cycles and 1500 cycles as examples, with the initial values $m=3.8$ and $C=1.5\times10^{-10}$, the historical data are processed with the PF to obtain the best estimates of the gear degradation state, $(\alpha_{1200},\,m_{1200},\,C_{1200})$ and $(\alpha_{1500},\,m_{1500},\,C_{1500})$, shown in Fig. 5 and Fig. 6; the model parameters converge quickly to their optimum values. From the gear crack size and the model parameters, the residual fatigue life at the current time can then be estimated.

Fig. 5. Updated parameters at 1200 cycles
Fig. 6. Updated parameters at 1500 cycles

4.3.
Prediction results and discussion

The process of residual fatigue life (RFL) prediction is shown in Fig. 7. First, the prediction model is defined for the investigated object and its parameters are obtained from the prior information, as done in Section 4.2; however, the model alone yields only limited accuracy. Therefore, new monitoring information is added to the prediction process, and the posterior parameters of the model are obtained, which improves the accuracy and allows the parameters to be updated online. Finally, the residual fatigue life is obtained from the given failure threshold.

Fig. 7. Illustration of RFL prediction

The model parameters are given at 1200 cycles and 1500 cycles, as mentioned above. The resulting residual fatigue life distributions are shown in Fig. 8 and Fig. 9, respectively, using $N_s=5000$ particles in every simulation. From the residual fatigue life distribution and the residual fatigue life estimated at the current time, the probability density function can be obtained, which is better suited for describing the gear residual fatigue life. By repeating the calculation above, the gear residual fatigue life can be obtained over time. The gear residual fatigue life and its 90 % confidence interval at different times are calculated with the model once the parameters and the failure threshold are given, as shown in Table 3. The probability density functions of the gear residual fatigue life are shown in Fig. 10, where the red and green curves represent the predicted mean residual fatigue life and the actual residual fatigue life, respectively. The variance of the probability density function decreases over the cycles as more monitored and predicted crack size data are used in the prediction process, meaning that the prediction results become more and more accurate.

Fig. 8. Residual fatigue life distribution at 1200 cycles
Fig.
9. Residual fatigue life distribution at 1500 cycles

Table 3. Predicted RFL and 90 % confidence interval
Columns: Time (cycles) | Prediction of RFL (cycles) | Actual RFL (cycles) | Confidence interval: 90 % lower limit | 90 % upper limit

Fig. 10. Residual fatigue life probability density function

The predicted residual fatigue life and the actual fatigue life are compared in Fig. 11. The predicted results come closer to the actual residual gear fatigue life as the amount of monitored data increases. The gear degradation process follows the Paris law, and the model parameters are updated by the PF, which makes full use of both the prior and the posterior information on the gear crack. However, only residual fatigue life prediction under constant-amplitude loading is studied here; the case of variable-amplitude loading needs further study. The prediction errors are caused by errors in the monitoring information and by the randomness of the proposed model; nevertheless, the model accuracy is satisfactory for most working conditions. In addition, when the gear is close to failure the error becomes much larger, which is caused by the change in the mechanical properties.

Fig. 11. Comparison of actual and predicted RFL

5. Conclusions

A hybrid residual fatigue life prediction approach based on the PF model and the Paris law is proposed in this paper. First, the Paris law is used to describe the crack growth of the gear. Then, the parameters of the Paris law are updated and the future gear degradation is predicted with the PF model. By comparison with the failure threshold of the gear, the residual fatigue life is obtained. The main findings are as follows.

1) The Paris law can describe the gear crack growth process and is suitable for predicting the residual gear fatigue life.

2) The improved Paris law uses particle filtering to estimate the model parameters and the gear crack size simultaneously, which is especially suitable for nonlinear and non-Gaussian systems.
In this way, the parameters can be updated online and full use can be made of the prior information.

3) The integration of the data-driven and model-based approaches for residual fatigue life prediction can be used to improve the prediction performance, which provides a new way to predict the residual fatigue life.

References

• Ahmed Mutahir, Ullah Himayat, Rauf A. Fracture mechanics based fatigue life estimation of axial compressor blade. 13th International Bhurban Conference on Applied Sciences and Technology, Islamabad, Pakistan, 2016, p. 69-74.
• Dong Xu, Jin'e Huang, Qin Zhu, et al. Residual fatigue life prediction of ball bearings based on Paris law and RMS. Chinese Journal of Mechanical Engineering, Vol. 25, Issue 2, 2012, p. 320-327.
• Jardine A. K. S., Lin D., Banjevic D. A review on machinery diagnostics and prognostics implementing condition-based maintenance. Mechanical Systems and Signal Processing, Vol. 20, 2006.
• Khader Iyas, Rasche Stefan, Lube Tanja, et al. Lifetime prediction of ceramic components – a case study on hybrid rolling contact. Engineering Fracture Mechanics, Vol. 169, 2017, p. 292-308.
• Rabiei Elaheh, Lopez Droguett Enrique, Modarres Mohammad. A prognostics approach based on the evolution of damage precursors using dynamic Bayesian networks. Advances in Mechanical Engineering, Vol. 8, Issue 9, 2016, https://doi.org/10.1177/1687814016666747.
• Wang Yiwei, Binaud Nicolas, Gogu Christian, et al. Determination of Paris' law constants and crack length evolution via extended and unscented Kalman filter: an application to aircraft fuselage panels. Mechanical Systems and Signal Processing, Vol. 80, 2016, p. 262-281.
• Zhang Junhong, Yang Shuo, Liu Jiewei. Fatigue crack growth rate of Ti-6Al-4V considering the effects of fracture toughness and crack closure. Chinese Journal of Mechanical Engineering, Vol. 28, Issue 2, 2015, p. 409-415.
• Alberto Carpinteri, Marco Paggi. Self-similarity and crack growth instability in the correlation between the Paris' constants. Engineering Fracture Mechanics, Vol. 74, 2007, p. 1041-1053.
• Agafonov S. K. Vibration strength of structures based on the theory of cracking and fatigue curves. Journal of Machinery Manufacture and Reliability, Vol. 45, Issue 5, 2016, p. 451-457.
• Loutas Theodoros, Eleftheroglou Nick, Zarouchas Dimitrios. A data-driven probabilistic framework towards the in-situ prognostics of fatigue life of composites based on acoustic emission data. Composite Structures, Vol. 161, 2017, p. 522-529.
• Annamalai K., Sathyanarayanan S., Naiju C. D., et al. Fatigue life prediction of girth gear-pinion assembly used in kilns by finite element analysis. 2nd International Conference on Advanced Materials Design and Mechanics, Kuala Lumpur, Malaysia, 2013, p. 292-296.
• Bhardwaj G., Singh S. K., Singh I. V., et al. Fatigue crack growth analysis of an interfacial crack in heterogeneous materials using homogenized XIGA. Theoretical and Applied Fracture Mechanics, Vol. 85, 2016, p. 294-319.
• Qian Yuning, Yan Ruqiang, Gao Robert X. A multi-time scale approach to remaining useful life prediction in rolling bearing. Mechanical Systems and Signal Processing, Vol. 83, Issue 2, 2017.
• Ben Abdessalem Anis, Azais Romain, Touzet-Cortina Marie, et al. Stochastic modelling and prediction of fatigue crack propagation using piecewise-deterministic Markov processes. Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability, Vol. 240, Issue 4, 2016, p. 405-416.
• Bernasconi A., Jamil A., Moroni F., et al. A study on fatigue crack propagation in thick composite adhesively bonded joints. International Journal of Fatigue, Vol. 50, 2013, p. 18-25.
• Melgar M., Gomez-Jimenez C., Cot L. D., et al. Paris law parameter identification based on the extended Kalman filter.
3rd International Conference on Structural Nonlinear Dynamics and Diagnosis (CSNDD), Marrakech, Morocco, 2016.
• Butler Shane, Ringwood John. Particle filters for remaining useful life estimation of abatement equipment used in semiconductor manufacturing. Conference on Control and Fault Tolerant Systems, Nice, France, 2010, p. 436-441.
• Fan Bin, Hu Lei, Hu Niaoqing. Remaining useful life prediction of rolling bearings by the particle filter method based on degradation rate tracking. Journal of Vibroengineering, Vol. 17, Issue 2, 2015, p. 743-756.
• Paris P. C., Erdogan F. A critical analysis of crack propagation laws. Journal of Basic Engineering, Vol. 85, 1963, p. 528-534.
• Dawn An, Joo Ho Choi, Nam Ho Kim. Prognostics 101: a tutorial for particle filter-based prognostics algorithm using Matlab. Reliability Engineering and System Safety, Vol. 115, 2013, p. 161-169.
• Sun Lei. Research on Methods and Application for Condition Based Equipment Fault Prognosis and Maintenance. Mechanical Engineering College, 2014.
• Wang X., Rabiei M., Hurtado J., et al. A probabilistic-based airframe integrity management model. Reliability Engineering and System Safety, Vol. 94, 2009, p. 932-941.

About this article

Section: Fault diagnosis based on vibration signal analysis
Keywords: Paris law, particle filter, fatigue life prediction, gear crack growth, hybrid approach

The research is supported by the National Natural Science Foundation of China (No. 71401173), and the authors are grateful to all the reviewers and the editor for their valuable comments.

Copyright © 2017 JVE International Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Zener Voltage Calculator, Formula, Zener Calculation | Electrical4u

Zener Voltage Calculator: Enter the values of the resistor voltage drop, V[r(V)], and the supply voltage, V[s(V)], to determine the value of the Zener voltage, V[z(V)].

Zener Voltage Formula:

Zener voltage (Vz) is a critical concept in electrical engineering, particularly in the design and application of Zener diodes for voltage regulation. A Zener diode conducts in the forward direction like an ordinary diode, but it is specifically designed to also conduct in the reverse direction once a particular reverse voltage, known as the Zener voltage, is reached. This characteristic makes it useful for stabilizing voltage in power supplies.

The Zener voltage, V[z(V)] in volts, is calculated by subtracting the resistor voltage drop, V[r(V)] in volts, from the supply voltage, V[s(V)] in volts:

V[z(V)] = V[s(V)] – V[r(V)]

where:
V[z(V)] = Zener voltage in volts, V
V[s(V)] = supply voltage in volts, V
V[r(V)] = resistor voltage drop in volts, V

Zener Voltage Calculation:

1. Calculate the Zener voltage in a circuit with a supply voltage of 12 volts and a resistor voltage drop of 2 volts:
Given: V[s(V)] = 12 V, V[r(V)] = 2 V.
V[z(V)] = V[s(V)] – V[r(V)] = 12 – 2 = 10 V.

2. Suppose a circuit has a supply voltage of 24 volts and the Zener voltage is 18 volts. Calculate the resistor voltage drop:
Given: V[s(V)] = 24 V, V[z(V)] = 18 V.
Rearranging V[z(V)] = V[s(V)] – V[r(V)] gives V[r(V)] = V[s(V)] – V[z(V)] = 24 – 18 = 6 V.

Applications and Considerations:
• Voltage Regulation: Zener diodes are commonly used to maintain a stable voltage across a load.
• Protection Circuits: They protect sensitive components from overvoltage conditions.
• Clipping Circuits: In signal processing, Zener diodes can clip voltage peaks to a desired level.
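The formula and both worked examples above can be captured in two one-line helpers (a sketch; the function names are mine, not from the page):

```python
def zener_voltage(v_supply, v_resistor_drop):
    """Zener voltage from the page's formula: Vz = Vs - Vr."""
    return v_supply - v_resistor_drop

def resistor_drop(v_supply, v_zener):
    """Rearranged form used in the second worked example: Vr = Vs - Vz."""
    return v_supply - v_zener

# Worked example 1: Vs = 12 V, Vr = 2 V -> Vz = 10 V
assert zener_voltage(12, 2) == 10
# Worked example 2: Vs = 24 V, Vz = 18 V -> Vr = 6 V
assert resistor_drop(24, 18) == 6
```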
Comments on The NMRlipids Project: Towards a new version of the manuscript

Comment from 2014-09-01: I think that this might be a useful paper for us. On their webpage they are sharing code which can be used to fit asymmetric potentials using trigonometric functions:
http://www.clas.ufl.edu/users/roitberg/links.html

I think that I can get rid of the tabulated potentials and Gaussian peak addition with this. I will try it as soon as I have time. — Samuli Ollila

Comment from 2014-07-01: I tried to tune the dihedrals between glycerol and acyl chains which I mentioned above. It did not fix the problem, and the order parameters are pretty similar to those above.
I am now sharing the parameters which I have made by trying to directly reproduce the CHARMM dihedral distributions in the Berger model:
https://www.dropbox.com/s/ylgsczvh11qwuj8/share7.tar

The file also contains the dihedral distributions calculated from the CHARMM model.

The structure currently looks like this:
https://www.dropbox.com/s/cavebz57cbwea4t/fftestGLYplot.jpg

It seems, at least, too rigid compared to CHARMM.

I am sharing the parameters now since I will not work on this for a while, and someone might be willing to play with or improve these. — Samuli Ollila

Comment: Matti Javanainen sent me an unpublished trajectory for a POPC bilayer, run with the MacRog model. I used it to make a plot of the structure similar to those shown above for Berger and CHARMM:
https://www.dropbox.com/s/e14hh6xq67d2nju/MacRogGLYplot.jpg

Regarding the glycerol, it is slightly different from CHARMM:
g1 is more "flexible", changing between two configurations;
g3 is more "rigid", sampling only two conformations (three in CHARMM).

With visual inspection it seems that beyond the glycerol there are possibly some larger differences.

Looking at the order parameter results, CHARMM is closer to experiments for g3, and MacRog overestimates the forking. This would indicate that the CHARMM sampling is better for g3. Also, beta is better in CHARMM.

On the other hand, the g2 and g1 order parameters are better in MacRog, indicating that the sampling of this part may be better in MacRog.
However, when I calculated the order parameters for the first carbons in the acyl chains in MacRog, I got the following results:

sn-1:
C2 0.0670417
C2 0.0828269
C3 -0.137857
C3 -0.131714
C4 -0.151883
C4 -0.153766
C5 -0.172636
C5 -0.172115

sn-2:
C2 -0.146994
C2 -0.172383
C3 -0.186539
C3 -0.18673
C4 -0.163145
C4 -0.152277
C5 -0.176759
C5 -0.167419

Especially the sn-1 chain looks weird. In contrast, CHARMM is in reasonable agreement with experiments.

I did not fully understand the MacRog paper (http://dx.doi.org/10.1021/jp5016627) in this respect. In the text they say: "Deuterium magnetic resonance studies demonstrated that the two C–D bonds of carbon 2 (C22, Figure 1) of the sn-2 chain are characterized by different values of SCD, while in the case of the sn-1 chain (C32, Figure 1) the values are the same.(88) In our simulations, differences in the SCD values of the C–D bonds in carbon 2 are observed in both chains (C22 and C32)." However, to my eye the results shown in their Fig. 7 are not what is written in the text, and they also differ from what I got.

I think that I will continue using CHARMM as the basis for the above-discussed procedure (tabulated potentials), at least for now. — Samuli Ollila

Comment from 2014-06-14: I have been trying to make dihedral potentials which would reproduce dihedral distributions similar to those observed in CHARMM, using the approach I mentioned above (tabulated potentials).
The potentials I have now give the following order parameters:
beta -0.118127
beta 0.0921357
alpha 0.217337
alpha 0.0414373
g3 -0.14162
g3 -0.300409
g2 -0.344162
g1 -0.329405
g1 -0.184324

and a structure which looks like this: https://www.dropbox.com/s/kzstx1unjaxbuin/BergerTST6.png

Compared to the structure in CHARMM (shown above), the structure of the glycerol is pretty similar, except that the g2-g3 dihedral seems too rigid. However, the order parameters for the glycerol are clearly too negative in the current model. I now suspect that this is due to the dihedrals between the glycerol and the acyl chains, which look different between the models but which I have not tuned yet. I have been, and will be, out of office quite a lot during the summer, but I will try to test this during the next week. After that it might be useful to try Antti's approach, starting from these potentials, which are quite different from the original ones.

I also suspect that the too-rigid g2-g3 might be reflected in alpha and beta. — Samuli Ollila

Comment from 2014-06-07: A progress report on the Berger modification. I ran a bunch more simulations (64, to be exact), and as I had predicted (somewhere on this blog, after we discovered that the signs of the order parameters were in fact important and wrong in Berger), I have been unable to satisfactorily reproduce the signs of the g1 order parameters. Granted, I sometimes do get the sign itself correct, but this is still a far cry from getting the order up to -0.15.
While the procedure I was using is by no means exhaustive (as in, it does not sample the whole parameter space), it does prompt the question of whether the dihedrals I was modifying can in fact produce the correct behaviour at all. One might try modifying more of the dihedrals, without using the symmetry considerations that I was using, or try to fit them to CHARMM distributions in vacuum. My guess, though, is that even this might not be enough; rather, the Berger dihedrals (or charges, or a combination thereof) are more fundamentally flawed, and one would need to add other dihedrals into the mix to have any chance of succeeding. This is speculation, of course, and I'd be happy to see someone show me wrong. If I have time, I'll later look into doing some of the things that I proposed. — Anonymous

Comment from 2014-05-22: I will think about the metadynamics idea.

I am aware that I am overfitting. However, it is not clear whether it matters in our case, and this can be checked against the response data (dehydration, ions, etc.). — Samuli Ollila

Comment: The computational cost was more of a side note, and the reason I suggested VOTCA/PLUMED was to point out that they use methods which have some theoretical justification and whose side effects have been studied. Metadynamics in its standard form tries to flatten out the histogram (distribution), but I see no reason why you could not in principle use the same methodology to fit to known dihedral distributions. If I have time, I might have a go at this later.
But again, because it is about dropping Gaussians, I am expecting to get results similar to your procedure; metadynamics just drops them at wherever the current angle is, at predetermined timesteps, and is theoretically guaranteed to give a working functional form (for the mean field), whereas you decide these positions by hand.

The side effect that I am talking about is basically overfitting to data, and this should have been the emphasis of my last post. If you take a curve that traverses through all of your data points, it is by definition a perfect fit to the data, but it can easily misstep much worse than a linear fit when you inter/extrapolate from the data. The more parameters you have in your fit (and each of your Gaussian insertions essentially adds parameters/degrees of freedom), the more likely you are to be overfitting. As a practical example, this extrapolation would happen in different salt conditions: your parametrization procedure can easily overfit to the salt-free data and give complete nonsense when salt is added.

There are some related examples in the current literature. Martini uses "more" physically inspired potentials than the SDK model, and only the former qualitatively captures the gel-fluid transition in bilayers (even though the fluid phase is arguably more accurately described in the latter). Another example might be the one I mentioned earlier: iterative Boltzmann inversion will give pair-pair interactions that work for molecule types A and B separately, but should you mix A and B, the A-A and B-B interactions, too, have to be recomputed, which clearly does not make much physical sense. This is why I would prefer a more physically motivated procedure (using predetermined functional forms). Having said this, I think it is worthwhile to test as many different ideas as possible and see which gets you better results.
— Anonymous

Comment from 2014-05-21: I do not think that introducing ~10 tabulated dihedral potentials leads to a significant computational cost. The way I do it is extremely simple and fast. I think that I will do it like this and then, depending on the results, consider more sophisticated approaches.

The nonsymmetric, sharp-peaked functional form is not a problem for the "coarse graining" method. To me it seems to be a problem for GROMACS proper dihedrals, which are given as a series of trigonometric functions (see Eq. 4.61 in the manual). I think that to make a potential with the required shape one needs quite many terms in the sum (correct me if I am wrong).

Adding Gaussians to the potential function is very easy:
cat popcTST4_d1.xvg | awk '{print $1" "$2+0.01*exp(-($1+60)*($1+60)/(2*20*20))}'
Automating the decision of where to add one, i.e. in my case finding the locations of flat potential maxima, is difficult. This is currently the only "arbitrary" part. — Samuli Ollila

Comment from 2014-05-21: The problem with tabulated potentials is that they are slow. I would much prefer using non-tabulated potentials. Of course dihedrals are intramolecular, so this is not a very big deal. Your procedure, however, is somewhat arbitrary. I suggest using iterative Boltzmann inversion (which, as I have understood it, is somewhat close in spirit to what you are doing anyway), or force matching, or some other well-established way of parametrization.
These all have their own problems, especially when it comes to generalization between systems*, which is why I would again prefer using parametrized potentials (instead of tabulated ones) if at all possible. With these methods, though, the arbitrary functional form is not a problem. From what I remember, VOTCA interfaces with GROMACS and is relatively easy to use for these types of computations. To add custom Gaussians you might consider PLUMED. This is a GROMACS extension meant for free energy calculations, but the way you are parametrizing your potential sounds awfully lot like metadynamics (dropping Gaussians), which is what PLUMED does.

*Suppose you manage to parametrize POPC and DPPC with pretty much any generalized function method (especially Boltzmann inversion). In a mixture of DPPC and POPC, the dihedrals would need to be reparametrized for both molecules. Physically this seems nonsensical. — Anonymous

Comment from 2014-05-21: Here is what I have done so far:

I started to work on the dihedral between g1 and g2, which seemed to be somehow too loose in Berger. My goal was to find dihedral potentials which would reproduce the dihedral distributions observed in CHARMM. I think that it would be difficult to use the CHARMM potentials directly, since they involve hydrogens which are not present in Berger (I did not think this through further, though). So I calculated the dihedral angle distributions observed in CHARMM for the atoms which are present in Berger, i.e. C12-C13-C32-O33 and O14-C13-C32-O33 in the Berger notation.
The observed distributions from the CHARMM simulation are here:

https://www.dropbox.com/s/skvy3tinbdl7dxj/g1-g2dih_C12-C13-C32-O33fromCHARMM.xvg
https://www.dropbox.com/s/pft3l48shaa6q2u/g1-g2dih_O14-C13-C32-O33fromCHARMM.xvg

First I thought that I would use GROMACS proper dihedrals to construct a potential with minima and maxima at the locations corresponding to the maxima and minima of the observed distribution. I think that this would be possible, but one would need quite a large number of terms, since the observed distributions are asymmetric and the peaks are quite narrow (I did not think this through further, though).

Then I decided to use tabulated potentials. I constructed tabulated potentials for the C12-C13-C32-O33 and O14-C13-C32-O33 dihedrals in roughly the following four steps:

1. I took a running average to smooth the observed dihedral distribution.
2. I multiplied the distribution by (-1) and added a constant to make the values positive.

I ran simulations directly with this kind of potential. However, this did not work, since at some angles the distributions had values of zero (no observed angles), which led to a potential with a flat maximum having zero derivative through several angles. In the simulation, the system was then happy to sit at these angles, since there was no force even though they were at an energy maximum. To get rid of this problem I added step 3:

3. I added Gaussian functions to the potential at the locations of the flat maxima.

As a consequence, I got the following tabulated potentials for the C12-C13-C32-O33 and O14-C13-C32-O33 dihedrals:

https://www.dropbox.com/s/a7ltepbas2kz6hu/potential_C12-C13-C32-O33.xvg
https://www.dropbox.com/s/3qd3blwuhdw62y8/potential_O14-C13-C32-O33.xvg

Finally,

4.
I added:

[ exclusions ]
 12 33
 14 33
 11 14

When I ran a simulation with these potentials, I got structures which look like these:

https://www.dropbox.com/s/p7mnub6j9wxu9kq/FFtestSNAP.pdf

Now the glycerol structure is closer to CHARMM.

My plan was to automatically run the above procedure for all the dihedrals in the headgroup and glycerol to make new dihedral potentials; these potentials would then be modified, in the same style as Antti previously applied directly to the Berger potentials, to give perfect order parameters.

There are some potential problems, though:

1. I am not sure how I can automate the addition of the Gaussian functions. I will think about this. It can also be done manually; it just takes a bit more time.

2. When Antti modified the Berger potentials, the modification was done on the proper dihedral parameters. Modifying tabulated potentials automatically might be more difficult? What do you think?

In conclusion, I am trying a brutal coarse-graining approach to make united-atom dihedral potentials from all-atom simulations. Doing something in vacuum, as Antti suggested, might also be reasonable, especially if the current approach does not work. — Samuli Ollila

Comment from 2014-05-20: I'm asking for a bit of clarification as to your 3-step process. Just to recap: in the procedure I was using, I modified some of the dihedrals of the Berger model and simulated whole bilayers for ~50 ns.
I then computed the order parameters of said simulations, and tried to parametrize a function f(dihedrals) = order parameters, so that I could then predict a set of dihedrals that would give the experimental order parameters.

We were unsure of whether all atom models work very well at the time, so this seemed like the reasonable thing to do. However, now that we have established that CHARMM seems to reproduce the experimental values well for almost all use cases that we have tried, one could directly parametrize a united atom model to reproduce the dihedral distributions of CHARMM. This is, I think, what you are saying, but I want to explicitly point out that this can, and probably should, be done for single molecules in vacuum (or some other minimally small systems). This means a _lot_ less computational effort.

As for the dihedrals, one can take the Berger definitions and see whether the CHARMM distributions can be replicated by tuning the parameters or not. I had assumed some symmetries in the parameter values, but I think this is probably not necessary if the simulations only contain single molecules.

— Anonymous
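The smooth/invert/patch procedure described in steps 1-3 of the thread can be sketched in plain Python. The function names, the averaging window, and the Gaussian height/width are illustrative choices of mine, not values from the original scripts:

```python
import math

def running_average(values, window=3):
    # Step 1: smooth the observed dihedral distribution with a centered
    # running average (window is truncated at the ends of the table).
    half = window // 2
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - half): i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def invert_to_potential(distribution):
    # Step 2: multiply by -1 and shift so all tabulated values are non-negative.
    flipped = [-p for p in distribution]
    offset = min(flipped)
    return [v - offset for v in flipped]

def patch_flat_maxima(potential, distribution, height=1.0, width=2.0):
    # Step 3: add a Gaussian bump wherever no angles were observed, so the
    # flat zero-force maximum gains a nonzero derivative.
    patched = list(potential)
    for c, p in enumerate(distribution):
        if p == 0.0:
            for i in range(len(patched)):
                patched[i] += height * math.exp(-((i - c) ** 2) / (2 * width ** 2))
    return patched
```

A real table would of course be binned over the dihedral angle and written out in the Gromacs tabulated-potential format; this only shows the shape manipulation itself.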
What is x if log_4(8x) - 2 = log_4(x - 1)? | HIX Tutor

Answer 1

We would like to have an expression like #log_4(a)=log_4(b)#, because if we had it, we could finish easily, observing that the equation would be solved if and only if #a=b#. So, let's do some manipulations. Since #2 = log_4(16)#, the equation rewrites as

#log_4(8x) - log_4(16) = log_4(x-1)#

But we're still not happy, because we have the difference of two logarithms in the left member, and we want a unique one. So we use #log_a(b) - log_a(c) = log_a(b/c)#:

#log_4(8x/16) = log_4(x/2) = log_4(x-1)#

Now we are in the desired form: since the logarithm is injective, if #log_4(a)=log_4(b)#, then necessarily #a=b#. In our case, #log_4(x/2)=log_4(x-1) iff x/2 = x-1#, which is easily solved: #x=2x-2#, which yields #x=2#.

Answer 2

To find x, we first need to combine the logarithmic terms using logarithmic properties. Then, we solve for x:

log_4(8x) - 2 = log_4(x - 1)

Using the fact that 2 = log_4(4^2) and the property log_a(b) - log_a(c) = log_a(b/c):

log_4((8x) / 4^2) = log_4(x - 1)
log_4(8x / 16) = log_4(x - 1)

Simplify the expression inside the logarithm:

log_4(1/2) + log_4(x) = log_4(x - 1)

Now, using the property log_a(b) + log_a(c) = log_a(b * c):

log_4(1/2 * x) = log_4(x - 1)
log_4(x/2) = log_4(x - 1)

Now, we equate the arguments:

x / 2 = x - 1

Solve for x:

x - x / 2 = 1
x / 2 = 1
x = 2
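A quick numeric sanity check of the solution, using base-4 logarithms via change of base (the helper name is mine):

```python
import math

def log4(x):
    # log base 4 via change of base
    return math.log(x, 4)

x = 2
lhs = log4(8 * x) - 2    # log_4(16) - 2 = 2 - 2 = 0
rhs = log4(x - 1)        # log_4(1) = 0
domain_ok = (x - 1 > 0)  # the right-hand side requires x > 1
```

Both sides evaluate to 0, and x = 2 satisfies the domain restriction x > 1 imposed by log_4(x - 1).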
NGUYEN, Dinh Hoa Associate Professor

My main current research area is Applied Mathematics for Energy Systems, which lies at the intersection of control systems, power and energy systems, and optimization, pursued through collaboration with WPI-I2CNER. In particular, I focus on problems in smart grids, multi-agent systems, and distributed optimization and control, where many emergent and challenging issues arise from the integration of stochastic and intermittent renewable energy sources, control approaches for demand response and real-time pricing, detection of and protection against cyber-attacks, etc. These complex problems may be handled with a combination of applied mathematical tools including dynamical systems, nonlinear optimization, graph theory, nonlinear systems, and machine learning. Furthermore, I am also interested in iterative learning control, robust control, and optimal control, which are important tools for addressing many other interesting problems in realistic systems.
Bayesian Networks

A method used to represent joint distributions implicitly using conditional independence and local interactions. A Bayesian network is represented as a directed acyclic graph. Each variable is a node, and a directed edge from node A to node B means that the random variable B depends directly on the random variable A. In other words, a random variable is a function of the random variables in its parent nodes. Every node A stores a table that represents the conditional probability of random variable A given every combination of values its parents could take.

\[P(A | Parents(A))\]

From Conditional to Joint

In order to extract the joint distribution from the local conditionals, we can use the following formula:

\[P(X_1, X_2, \dots , X_n) = \prod_{i=1}^n P(X_i|Parents(X_i))\]

Size of a Bayesian Network

In order to store a table that represents an ordinary joint distribution of N variables where each variable can take d values, one would need $O(d^N)$ entries, which is too large. One important advantage of Bayesian networks is that they reduce the size needed to store a joint distribution by storing small local pieces from which the joint distribution can be reconstructed. Every node contains a table that gives its conditional probability for every combination of values of its parents. If we assume that every node has no more than k parents, each node has a table of size $O(d^{k+1})$ (the $k + 1$ comes from the fact that we encode the parents and the node itself), so for N nodes we get a total of $O(Nd^{k+1})$, a reduction that depends on k, the maximum number of parents.

Does a Bayesian Network Graph imply Causality?

No, not all the time. It is easier to think of the graph as meaning that a certain variable (parent) causes another variable (child), but that is not always the case. Rather, an edge only means that there is a direct conditional dependency between the variables.
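As a minimal illustration of the chain-rule formula, here is a two-node network A → B with invented probability tables; the reconstructed joint is the product of the local conditionals:

```python
# Two-node network A -> B.  Local tables (numbers invented for illustration):
P_A = {True: 0.3, False: 0.7}                  # P(A); A has no parents
P_B_given_A = {True: {True: 0.9, False: 0.1},  # P(B | A=True)
               False: {True: 0.2, False: 0.8}} # P(B | A=False)

def joint(a, b):
    # P(A=a, B=b) = P(a) * P(b | a): the chain-rule product over the network.
    return P_A[a] * P_B_given_A[a][b]

# The reconstructed joint sums to 1 over all assignments.
total = sum(joint(a, b) for a in (True, False) for b in (True, False))
```

Here two tables of sizes 2 and 4 replace the full 4-entry joint; with more variables and few parents per node, the savings grow as described above.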
How to check for Independence in a Bayesian Network

If two variables do not have a direct connection, this does not mean that they are independent; they might indirectly influence each other through other nodes. For example, in a chain A → B → C, A and C are not necessarily independent because A can influence C through B. Nevertheless, we can prove conditional independence in some cases. For example, in that network C is conditionally independent of A given B. The intuition is that since the value of B is given, it already carries the influence from A; another way to think of it is that the influence between A and C is blocked by B.

There are 3 types of simple 3-node graphs which are easy to analyze and will be used to detect properties in Bayesian networks: causal chains, common cause, and common effect.

Causal Chains

From this structure (A → B → C) we can only say that C is conditionally independent of A given B (this can be proven mathematically).

Common Cause

Here (B ← A → C), we can say that B and C are independent given A (this can be proven mathematically).

Common Effect

Here (A → C ← B), A and B are in fact fully independent, but they are not independent given C.

Detecting whether 2 nodes are conditionally independent from the graph structure

We can do this by tracing the undirected paths between the two nodes. For every 3 consecutive nodes on a path, we identify which type the triplet belongs to (causal chain, common cause, common effect) and check whether it is active or inactive. A path is blocked if it contains at least one inactive triplet; it is active only if every triplet on it is active. Conditional independence between the source and target nodes is guaranteed only when every path between them is blocked; if any path is fully active, independence is not guaranteed. Triplets which imply conditional independence are called Inactive Triplets, whereas triplets that let influence flow are called Active Triplets. The following figure summarizes the triplet cases. This approach is called D-Separation.
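A sketch of the per-triplet check in pure Python. The edge-set representation and function name are illustrative, and the common-effect case is simplified (full d-separation also activates a collider when any *descendant* of the middle node is observed):

```python
def triplet_active(edges, x, y, z, observed):
    # Classify the consecutive triplet x - y - z on an undirected path and
    # report whether it is active (lets influence flow) given the set of
    # observed nodes.  `edges` is a set of directed (parent, child) pairs.
    if (x, y) in edges and (y, z) in edges:      # causal chain x -> y -> z
        return y not in observed
    if (z, y) in edges and (y, x) in edges:      # causal chain z -> y -> x
        return y not in observed
    if (y, x) in edges and (y, z) in edges:      # common cause x <- y -> z
        return y not in observed
    if (x, y) in edges and (z, y) in edges:      # common effect x -> y <- z
        return y in observed   # simplified: ignores observed descendants of y
    raise ValueError("x, y, z is not a connected triplet")
```

For the chain A → B → C, the triplet is active when B is unobserved and blocked when B is observed; for the collider A → C ← B it is the other way around, matching the table of cases described above.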
Markov Blanket

For a given node X, we can say that it is conditionally independent of all other nodes given its Markov blanket. A Markov blanket for a node consists of its parents, its children, and its children's parents, as in the image below.

Probabilistic Inference

The act of calculating the probability that some random variables take certain values from a joint distribution.

Inference by Enumeration

This is basically the brute-force way of calculating a probability: we first sum over all the hidden random variables to eliminate them, and then calculate the desired probability of the target and evidence.

Note: Hidden variables are the random variables which are neither in the target set nor in the evidence set. The evidence contains the random variables which form the condition of the probability being calculated.

Enumeration gives an exact answer but is exponential in complexity.

Steps of performing Inference by Enumeration

1. From each probability table, select the entries whose rows are consistent with the evidence. That is, if we have two variables A and B in the evidence, taking values a and b respectively, then select all rows in which A and B have values a and b.

2. For all the hidden variables, sum out their values. This means computing the sum of probabilities over all combinations of hidden-variable values. This operation is called marginalization.

\(P(Q, e_1 \dots e_k) = \sum_{h_1 \dots h_r} P(Q, h_1 \dots h_r, e_1 \dots e_k)\)

3. Normalize the values that are left so that they sum to 1 and form a valid distribution.

\[Z = \sum_q P(q, e_1 \dots e_k) \space \space , \space \space P(Q | e_1 \dots e_k) = \frac{1}{Z} P(Q, e_1 \dots e_k)\]

Variable Elimination

In enumeration, what we did was build the whole joint distribution and then marginalize in order to reduce it to the desired state of target and evidence variables.
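The three enumeration steps can be run on a toy joint distribution (all probabilities here are invented and sum to 1):

```python
# Full joint P(Q, H, E); query P(Q | E=True) with H hidden.
joint = {
    (True,  True,  True):  0.10, (True,  True,  False): 0.05,
    (True,  False, True):  0.20, (True,  False, False): 0.15,
    (False, True,  True):  0.05, (False, True,  False): 0.15,
    (False, False, True):  0.05, (False, False, False): 0.25,
}

# Step 1: keep only entries consistent with the evidence E=True.
# Step 2: marginalize (sum out) the hidden variable H.
unnormalized = {q: sum(p for (q2, h, e), p in joint.items() if q2 == q and e)
                for q in (True, False)}

# Step 3: normalize so the remaining values form a valid distribution over Q.
Z = sum(unnormalized.values())
posterior = {q: v / Z for q, v in unnormalized.items()}
```

The cost is the problem: the dict enumerates all d^N entries of the joint, which is exactly what variable elimination tries to avoid materializing.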
The idea of Variable Elimination is to speed up the calculation by interleaving joining and marginalizing. It basically remains exponential, but in practice it speeds up the calculation compared to enumeration. In order to understand how Variable Elimination works, we need to understand the concept of factors first. Factorization as a mathematical concept means decomposing an object into a product of other objects called factors.

Note: Capital letters in a distribution denote free variables (its dimensions), while lowercase letters denote assigned values.

For Bayesian networks, these are the kinds of factors there can be:

• Joint distribution $P(X,Y)$
□ Contains entries $P(x,y)$ for all $x,y$
□ Sums to 1
• Selected joint $P(x, Y)$
□ A slice of the joint distribution
□ Sums to $P(x)$, because it represents the probability of $x$ for every value of $y$, meaning that $P(x)$ is partitioned over the $y$ values.
• Single conditional $P(Y|x)$
□ Contains entries $P(y|x)$ for all $y$ and fixed $x$
□ Sums to 1, because it is a conditional probability distribution over all values of $y$, conditioned on the single value $x$.
• Family of conditionals $P(Y|X)$
□ Contains multiple conditionals
□ Contains entries $P(y|x)$ for all $x,y$
□ Sums to $|X|$, the number of values the random variable $X$ can take. This is because for every value $x$ we have a conditional distribution for $Y$ given that particular $x$, so we essentially have $|X|$ distributions, each summing to 1.
□ Specified family $P(y|X)$
☆ Contains entries $P(y|x)$ for fixed $y$ but all $x$
☆ Sums to an unknown value

So, back to Variable Elimination: we will use 2 main operations, join and eliminate. The join operation joins two tables in a way similar to database join operations. In simpler terms, if table A has two variables X and Y, and table B has two variables Y and Z, then a join operation results in a 3-dimensional table of X, Y, and Z,
where the probability of an entry in table A is multiplied by the probability of the entry in table B whose value of the common variable Y matches. The eliminate operation simply sums over all the values of a certain variable to remove it from the table.

Here are the steps to follow in order to solve a query using Variable Elimination:

1. From every table that contains an evidence variable, select the rows that match the evidence and delete the rows that do not. If a table does not contain an evidence variable, don't touch it.

2. Since we cannot eliminate a hidden variable unless it exists in only one factor, join the factors which have a common hidden variable.

3. Then, for every factor which contains a hidden variable such that the hidden variable exists only in that factor, eliminate the hidden variable.

4. Repeat steps 2 and 3 until the factor left consists of only target and evidence variables.

5. Finally, normalize the table we get, because the presence of evidence variables might cause a selected joint to appear.

Note: When joining two tables where each table contains some variables on the left side of the condition and some on the right side, the trick is to place all variables that appear on the left side of either table on the left side of the output table, and then place the rest on the right. This saves time in the case where a variable exists on the left side of one table and the right side of another. To clarify what is meant by left and right: in $P(X|Y)$, $X$ is on the left of the condition and $Y$ is on the right.

Conclusion of Variable Elimination: The worst-case complexity of Variable Elimination is not better than enumeration in theory, but in practice it is faster. This difference in speed comes from the fact that eliminating variables in a good order decreases the size of the largest factor generated, which affects the computations significantly.
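The join and eliminate operations can be sketched with dictionary-backed factors. The representation, a list of variable names plus a dict from value tuples to probabilities, is an illustrative choice of mine:

```python
def join(f1, vars1, f2, vars2):
    # Database-style join: multiply entries whose shared variables agree.
    out_vars = vars1 + [v for v in vars2 if v not in vars1]
    out = {}
    for a1, p1 in f1.items():
        asg1 = dict(zip(vars1, a1))
        for a2, p2 in f2.items():
            asg2 = dict(zip(vars2, a2))
            if all(asg1[v] == asg2[v] for v in vars1 if v in vars2):
                merged = {**asg1, **asg2}
                out[tuple(merged[v] for v in out_vars)] = p1 * p2
    return out, out_vars

def eliminate(f, vars_, hidden):
    # Sum out `hidden`, removing it from the factor.
    out_vars = [v for v in vars_ if v != hidden]
    out = {}
    for a, p in f.items():
        asg = dict(zip(vars_, a))
        key = tuple(asg[v] for v in out_vars)
        out[key] = out.get(key, 0.0) + p
    return out, out_vars

# Example: joining P(A) with P(B|A), then eliminating A, yields P(B).
fA = {(True,): 0.3, (False,): 0.7}
fBA = {(True, True): 0.9, (True, False): 0.1,
       (False, True): 0.2, (False, False): 0.8}   # keys are (a, b)
fAB, vars_AB = join(fA, ["A"], fBA, ["A", "B"])
fB, vars_B = eliminate(fAB, vars_AB, "A")
```

P(B=True) comes out as 0.3·0.9 + 0.7·0.2 = 0.41, the same number enumeration over the full joint would give, but only a 2×2 intermediate factor was ever built.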
Approximate Inference

Since it can take too long to calculate exact inferences using the methods discussed earlier, we may sacrifice some exactness in order to speed up the calculation. One way to do this is sampling. The idea behind sampling is to draw $N$ samples from a distribution and then use these samples to compute an approximate posterior probability, which can be shown to converge to the true probability.

How can we compute a posterior probability from samples? Simply divide the number of samples matching the target (and evidence) by the number of samples matching the evidence, a very basic way of estimating a conditional probability.

How to sample from a known distribution

Since the distribution is known, for each value the random variable takes we know its probability. Hence, we do the following:

1. Get a sample $u$ from the uniform distribution over the interval [0, 1) (assume that we have the mechanism to do so).

2. Assign every value of the random variable an interval in the range [0, 1) whose length equals the probability of that value. We can do this because the probabilities sum to 1.

3. Choose the value of the random variable whose interval contains the sample $u$.

4. Repeat this process $n$ times to get $n$ samples.

Four main sampling methods will be discussed:

• Prior Sampling
• Rejection Sampling
• Likelihood Weighting
• Gibbs Sampling (most used)

Prior Sampling

To apply prior sampling on a Bayesian network:

1. Sort the nodes in the graph using topological sorting.

2. Iterate over the variables in the sorted order and randomly sample from each. Sampling a node affects the sampling of its children: if a random variable A gets a value a, then when sampling its child B, we sample from the distribution conditioned on A taking the value a. Topological sorting guarantees that a node is not sampled until all its parents are sampled, which determines which rows of the child's table to sample from.

3.
When the last node is reached, we have obtained a single sample of our Bayesian network. Repeat the steps $n$ times to get $n$ samples.

4. Once we have $n$ samples, we discard the samples which do not match the evidence; the desired probability is then the number of remaining samples which match the target divided by the number of remaining samples.

As $n$ goes to infinity, the probabilities computed using prior sampling converge to the exact probabilities. This is why we call this sampling method consistent.

Rejection Sampling

The idea in rejection sampling is simple: given a query with targets and evidence, we reject every sample that does not match the evidence. That is, if we want to collect $n$ samples, we keep drawing samples until $n$ of them satisfy our evidence, so we might perform many more than $n$ sampling operations. The number of sampling operations can be very high if our evidence is unlikely, because we reject most of the samples we draw.

Likelihood Weighting

Rejection sampling has a serious limitation: many samples are rejected when the evidence is unlikely. Likelihood weighting solves this issue with the following steps:

1. Fix the evidence variables and sample the others, instead of rejecting.

2. Fixing the evidence alone would disrupt the distribution and make the method inconsistent. Therefore, we modify the sampling algorithm to maintain and return a weight, initially set to 1.

3. As we move through the network in topological order, whenever we encounter an evidence variable, we multiply the weight by the probability of its fixed value given its parents, and update the weight to this new value.

4.
The sampling algorithm returns the sample together with its accumulated weight, so every sample is associated with a weight. Likelihood weighting is consistent because the product of the sampling distribution that results from fixing the evidence and the weight of each sample (an accumulated product of the probabilities of the evidence variables given their parents) recovers the original joint distribution.

Note: After generating the samples, instead of counting the number of samples satisfying the target, we sum their weights. This is essential to produce a consistent answer.

Issues of Likelihood Weighting: Fixing the evidence affects the downstream variables (variables reachable from an evidence node in the directed graph, coming after it in the topological order), but it does not affect the upstream variables (variables which can reach the evidence node, coming before it in the topological order). Downstream variables are sampled conditioned on the fixed evidence, so they follow the right distribution; upstream variables, however, are sampled as if the evidence were not there, and fixing the value of an evidence variable regardless of its parents' values creates an inconsistency which the weights only approximately correct. Therefore, the larger the number of upstream variables, the less accurate the approximation. Following this logic, we prefer evidence that sits at the top of the hierarchy rather than at the bottom.

Gibbs Sampling

Gibbs sampling addresses the issue with likelihood weighting, namely that the upstream variables of a fixed evidence node are sampled independently of it. Gibbs sampling attempts to sample all variables taking the evidence into consideration.

Approach to generate a single sample:

1. Start with a random assignment to each variable, except for the evidence variables, which are fixed at their pre-determined values.

2.
Sample each non-evidence variable, one variable at a time (round-robin style), conditioned on all the other variables while keeping the evidence fixed.

3. Repeat this for a long time.

This approach has the property that repeating it infinitely many times results in samples that come from the correct distribution. This way we also guarantee that both upstream and downstream variables are conditioned on the evidence.

One might ask: how efficient is it to sample one variable conditioned on all the rest? Well, the probability of a variable conditioned on all the rest equals the joint probability divided by the probability of all the rest. In this ratio many terms cancel between the numerator and denominator; specifically, all conditional probability tables which do not contain the re-sampled variable cancel out. In other words, to re-sample a variable we simply join the tables that contain it and use them for the calculation, which can be fast if the local interactions in the Bayesian network are limited.

Note: Sampling a variable given its Markov blanket is enough.

As with the previous methods, once we have the samples, probabilities are computed by counting the samples that match the target and dividing by the total number of samples (they all match the evidence by construction).

Note: Gibbs sampling is a special case of a more general family called Markov Chain Monte Carlo (MCMC) methods. Metropolis-Hastings is another famous MCMC method.

Decision Networks

Decision networks are an extension of Bayesian networks in which we assign utilities to outcomes to help make decisions. Two new types of nodes are added:

• Actions (rectangles): cannot have parents, because agents decide on their actions.
• Utility node (diamond): depends on action and chance nodes.

How to select an action in this network:

1. Instantiate all evidence variables.

2. Calculate the posterior of all parents of the utility node.

3.
For every possible action, calculate the expected utility.

4. Choose the action with maximum expected utility. This is called the Maximum Expected Utility (MEU) principle.

Value of Information

Finding the value of discovering new information, or, in the case of a Bayesian network, of acquiring a new evidence value, can help in deciding where to put effort when collecting data. Therefore, we calculate the Value of Information to guide our choices and decisions.

How to calculate the Value of Information:

1. Calculate the Maximum Expected Utility (MEU) before acquiring the new evidence.

2. Calculate the expected MEU after acquiring the new evidence.

3. The Value of Information is the new expected MEU minus the old MEU.

4. If the Value of Information is positive, then it is worthwhile acquiring this evidence.

How to calculate the expected MEU after acquiring new evidence:

1. Assume the evidence is added.

2. For every value of the evidence, calculate the expected utility conditioned on that value.

3. For every value of the evidence, multiply the probability of that value by the expected utility conditioned on it, and sum these products. The result is the expected MEU.

Properties of the Value of Information (VPI):

• Non-negative: we can never get a negative VPI.
• Non-additive:

\[VPI(E_i, E_j \| e) \neq VPI(E_i \| e) + VPI(E_j \| e)\]

• Order-independent: if you have two pieces of evidence to evaluate, it does not matter in which order they are considered.

\[VPI(E_i, E_j \| e) = VPI(E_i \| e) + VPI(E_j \| e, E_i) = VPI(E_j \| e) + VPI(E_i \| e, E_j)\]

Note: $VPI$ stands for Value of Perfect Information.

We can take some shortcuts in calculating VPI through independence properties:

• If $Parents(U)$ ($U$ is the utility node) is independent of a node Z given some evidence, then $VPI(Z | evidence) = 0$.
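The action-selection steps reduce to a small expected-utility computation. The weather/umbrella scenario and all numbers below are invented for illustration; the posterior is assumed to have been computed already (step 2):

```python
# Posterior over the chance variable W given the instantiated evidence,
# and a utility table over (action, W).  All numbers are invented.
posterior_W = {"sun": 0.7, "rain": 0.3}
utility = {("leave", "sun"): 100, ("leave", "rain"): 0,
           ("umbrella", "sun"): 20, ("umbrella", "rain"): 70}

def expected_utility(action):
    # Step 3: expected utility of one action under the posterior.
    return sum(p * utility[(action, w)] for w, p in posterior_W.items())

# Step 4: pick the action with Maximum Expected Utility (MEU).
eu = {a: expected_utility(a) for a in ("leave", "umbrella")}
best_action = max(eu, key=eu.get)
```

With these numbers EU(leave) = 70 and EU(umbrella) = 35, so the MEU action is "leave"; a VPI computation would repeat this for each possible value of a prospective new evidence variable and average the resulting MEUs.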
The Rule of 72

The rule of 72 teaches that money can work for you or against you. Albert Einstein said: “Compound interest is the greatest mathematical discovery of all time. It’s the eighth wonder of the world. He who understands it, earns it – he who doesn’t, pays it.” The rule of 72 states that if you divide 72 by the interest rate, you will get approximately how long it will take for your money to double. When you save or invest, money can work for you. Let’s look at how this works and how your interest rate can make all the difference in what your return will be.

If on the day you were born your parents put $10,000 into a savings account, and that lump sum yielded a 1% fixed interest rate, you’d have around $20,000 waiting for you when you turned 72. If the interest rate was 4%, you’d have $168,423. What do you think the value would be at 8%? Would it surprise you that you’d have $2,549,825? That’s 15 times more money by simply doubling the rate from 4% to 8% over your lifetime. The rule of 72 is the kind of principle that enables you to take advantage of The Wealth Wave.

When you borrow money, it works against you! Credit card debt, mortgages, bank loans for your business, student loans, and car loans are all examples of compound interest working against you and for someone else, like the banks.

Here’s how the rule of 72 works: At a 1% rate of return, it takes 72 years for $1 to turn into $2. At 4%, it takes 18 years for money to double. It’s a simple formula. Now, instead of your money doubling once over your lifetime, you could experience two or three doubles. At double the rate of return – 8% – it takes half the time, 9 years, for money to double. What if your money doubled four or five times in your life? So if you think that a difference of 1% or 2% won’t amount to much, you’re seriously underestimating the power of compounding, and you’ll pay a huge price.
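The rule-of-72 estimate can be checked against the exact doubling time for annual compounding, ln(2)/ln(1+r). This is a small sketch (function names are mine), and the last line reproduces the article's $10,000-at-8%-for-72-years example:

```python
import math

def years_to_double_rule72(rate_percent):
    # Rule of 72 estimate: 72 divided by the interest rate.
    return 72 / rate_percent

def years_to_double_exact(rate_percent):
    # Exact doubling time under annual compounding: ln(2) / ln(1 + r).
    return math.log(2) / math.log(1 + rate_percent / 100)

def future_value(principal, rate_percent, years):
    return principal * (1 + rate_percent / 100) ** years

estimate = years_to_double_rule72(8)    # 9 years
exact = years_to_double_exact(8)        # about 9.01 years
lifetime = future_value(10_000, 8, 72)  # roughly $2.55 million
```

At 8% the rule says 9 years versus an exact 9.01, which is why 72 is the conventional numerator: it is close to 100·ln(2) ≈ 69.3 while dividing evenly by common rates like 4, 6, 8, and 12.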
Do you have any idea how much money in this country is earning less than 1% today? Would you be surprised to hear there is over $11 trillion sitting in savings, money market and cash equivalent accounts as of June 2, 2014… all earning less than 1%? Savings account rates are currently averaging 0.11%, while 1-year CDs average 0.24% and 5-year CDs average 0.79%. That means your money will never double in your lifetime. Sucks, huh?

Let’s look at the rule of 72 this way. At 29 years old, if you had $10,000 earning a 4% rate of return, your money would double in 18 years: you would have $20,000 at age 47. If you earned an 8% rate of return, your money would double in only 9 years: at age 38 you would have $20,000. Let’s double it again: in 9 more years you would have $40,000 at age 47, and after 9 more, $80,000 at age 56. Now you can see the power of the rule of 72 and compounding interest. The higher the rate of return, the faster your money doubles.

If you earned 6% on a $10,000 investment, after 36 years you’ll have $80,000. That’s three doubles in your working lifetime. If you double your return from 6% to 12%, you double every 6 years, and it’s not just double the money, it’s actually EIGHT times the money. Your money could double 7 times in your lifetime: at age 65 you will have $640,000 and at age 71 you will have $1,280,000. That’s a lot of doubles you could get over your working lifetime.

The factors that make the Rule of 72 work for you are: time, the interest rate you earn, and how much money you put into the account. When people don’t have time on their side, they’re faced with doing one of two things: either adding more money or earning a higher interest rate. Aiming for higher returns generally means increasing risk. So if you don’t want to add risk and you don’t have all the time in the world, what do you have to do? You have to save more money, and reduce the risk and taxation. Take a look at the IUL; it might be a perfect fit for you.
Read about whether an IUL is right for you.

The rule of 72 also works against you with inflation. Today we have one of the lowest inflation rates of the last 20 years, but if we take the 20-year average of 3.33% and round it to 3%, then 72 divided by 3 gives 24. So you need to double your income every 24 years just to keep up with inflation. Has your income doubled in the last 24 years? If so, do you think it will double in the next 24 years? If not, you’d better find a way to add income, or you are losing money. That is why I started a business in the financial industry. If you are interested in a part-time or full-time business in the financial industry, watch this video and then call me: http://wealthwaveinfo.com

When I looked into starting my own business as a financial professional, learning about the Rule of 72 was the BIG A-HA! for me. I can still remember how mesmerized I was – like I had just been handed the skeleton key to the halls of wealth. It’s a tremendous mental math shortcut to estimate the effect of any growth rate. The Rule of 72 is so simple and powerful. Once I’d learned these concepts my financial thinking was changed forever. As I shared this new knowledge with others, I found that it had the same effect on them. Most people you know have never heard of these truths. By giving a financial education, you have the opportunity to help unlock the doors of wealth and prosperity for people you know and care about. This is what drew me into this business and why I love what I do more every day.

Chief Inspiration Officer
Vincent St.Louis
Fighting the forces of Mediocrity

If you found this article on The Rule of 72 useful, please comment and share it.
A definition of causal effect for epidemiological research

Continuing professional education

Estimating the causal effect of some exposure on some outcome is the goal of many epidemiological studies. This article reviews a formal definition of causal effect for such studies. For simplicity, the main description is restricted to dichotomous variables and assumes that no random error attributable to sampling variability exists. The appendix provides a discussion of sampling variability and a generalisation of this causal theory. The difference between association and causation is described—the redundant expression “causal effect” is used throughout the article to avoid confusion with a common use of “effect” meaning simply statistical association—and it is shown why, in theory, randomisation allows the estimation of causal effects without further assumptions. The article concludes with a discussion of the limitations of randomised studies. These limitations are the reason why methods for causal inference from observational data are needed.

Zeus is a patient waiting for a heart transplant. On 1 January, he received a new heart. Five days later, he died. Imagine that we can somehow know, perhaps by divine revelation, that had Zeus not received a heart transplant on 1 January (all other things in his life being unchanged) then he would have been alive five days later. Most people equipped with this information would agree that the transplant caused Zeus’ death. The intervention had a causal effect on Zeus’ five day survival. Another patient, Hera, received a heart transplant on 1 January.
Five days later she was alive. Again, imagine we can somehow know that had Hera not received the heart on 1 January (all other things being equal) then she would still have been alive five days later. The transplant did not have a causal effect on Hera’s five day survival. These two vignettes illustrate how human reasoning for causal inference works: we compare (often only mentally) the outcome when action A is present with the outcome when action A is absent, all other things being equal. If the two outcomes differ, we say that the action A has a causal effect, causative or preventive, on the outcome. Otherwise, we say that the action A has no causal effect on the outcome. In epidemiology, A is commonly referred to as exposure or treatment. The next step is to make this causal intuition of ours amenable to mathematical and statistical analysis by introducing some notation. Consider a dichotomous exposure variable A (1: exposed, 0: unexposed) and a dichotomous outcome variable Y (1: death, 0: survival). Table 1 shows the data from a heart transplant observational study with 20 participants. Let Y[a][=1] be the outcome variable that would have been observed under the exposure value a=1, and Y[a][=0] the outcome variable that would have been observed under the exposure value a=0. (Lowercase a represents a particular value of the variable A.) As shown in table 2, Zeus has Y[a][=1]=1 and Y[a][=0]=0 because he died when exposed but would have survived if unexposed. We are now ready to provide a formal definition of causal effect for each person: exposure has a causal effect if Y[a][=0]≠Y[a][=1]. Table 2 is all we need to decide that the exposure has an effect on Zeus’ outcome because Y[a][=0]≠Y[a][=1], but not on Hera’s outcome because Y[a][=0]=Y[a][=1]. When the exposure has no causal effect for any subject—that is, Y[a][=0]=Y[a][= 1] for all subjects—we say that the sharp causal null hypothesis is true. 
The variables Y[a][=1] and Y[a][=0] are known as potential outcomes because one of them describes the subject's outcome value that would have been observed under a potential exposure value that the subject did not actually experience. For example, Y[a][=0] is a potential outcome for exposed Zeus, and Y[a][=1] is a potential outcome for unexposed Hera. Because these outcomes would have been observed in situations that did not actually happen (that is, in counter to the fact situations), they are also known as counterfactual outcomes. For each subject, one of the counterfactual outcomes is actually factual—the one that corresponds to the exposure level or treatment regimen that the subject actually received. For example, if A=1 for Zeus, then Y[a][=1]=Y[a][=A]=Y for him. The fundamental problem of causal inference should now be clear. Individual causal effects are defined as a contrast of the values of counterfactual outcomes, but only one of those values is observed. Table 3 shows the observed data and each subject's observed counterfactual outcome: the one corresponding to the exposure value actually experienced by the subject. All other counterfactual outcomes are missing. The unhappy conclusion is that, in general, individual causal effects cannot be identified because of missing data. We define the probability Pr[Y[a]=1] as the proportion of subjects that would have developed the outcome Y had all subjects in the population of interest received exposure value a. We also refer to Pr[Y[a]=1] as the risk of Y[a]. The exposure has a causal effect in the population if Pr[Y[a][=1]=1]≠Pr[Y[a][=0]=1]. Suppose that our population comprises the subjects in table 2. Then Pr[Y[a][=1]=1]=10/20=0.5, and Pr[Y[a][=0]=1]=10/20=0.5. That is, 50% of the patients would have died had everybody received a heart transplant, and 50% would have died had nobody received a heart transplant. The exposure has no effect on the outcome at the population level.
When the exposure has no causal effect in the population, we say that the causal null hypothesis is true. Unlike individual causal effects, population causal effects can sometimes be computed—or, more rigorously, consistently estimated (see appendix)—as discussed below. Hereafter we refer to the "population causal effect" simply as "causal effect". Some equivalent definitions of causal effect are

(a) Pr[Y[a=1]=1]−Pr[Y[a=0]=1]≠0
(b) Pr[Y[a=1]=1]/Pr[Y[a=0]=1]≠1
(c) (Pr[Y[a=1]=1]/Pr[Y[a=1]=0])/(Pr[Y[a=0]=1]/Pr[Y[a=0]=0])≠1

where the left hand side of inequalities (a), (b), and (c) is the causal risk difference, risk ratio, and odds ratio, respectively. The causal risk difference, risk ratio, and odds ratio (and other causal parameters) can also be used to quantify the strength of the causal effect when it exists. They measure the same causal effect in different scales, and we refer to them as effect measures. To characterise association, we first define the probability Pr[Y=1|A=a] as the proportion of subjects that developed the outcome Y among those subjects in the population of interest that happened to receive exposure value a. We also refer to Pr[Y=1|A=a] as the risk of Y given A=a. Exposure and outcome are associated if Pr[Y=1|A=1]≠Pr[Y=1|A=0]. In our population of table 1, exposure and outcome are associated because Pr[Y=1|A=1]=7/13, and Pr[Y=1|A=0]=3/7. Some equivalent definitions of association are

(a) Pr[Y=1|A=1]−Pr[Y=1|A=0]≠0
(b) Pr[Y=1|A=1]/Pr[Y=1|A=0]≠1
(c) (Pr[Y=1|A=1]/Pr[Y=0|A=1])/(Pr[Y=1|A=0]/Pr[Y=0|A=0])≠1

where the left hand side of the inequalities (a), (b), and (c) is the associational risk difference, risk ratio, and odds ratio, respectively. The associational risk difference, risk ratio, and odds ratio (and other association parameters) can also be used to quantify the strength of the association when it exists. They measure the same association in different scales, and we refer to them as association measures.
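As a concrete illustration, effect measures and association measures can be computed side by side when, hypothetically, both counterfactual outcomes are known for every subject. The following Python sketch uses a made-up 20-person population chosen to be consistent with the proportions quoted in the text (13 exposed, 7 unexposed, Pr[Y=1|A=1]=7/13, Pr[Y=1|A=0]=3/7, and counterfactual risks of 10/20 under each exposure value); it is not the article's actual table 2.

```python
# Each subject: (A, Y_a0, Y_a1) = (actual exposure, counterfactual outcomes).
# Hypothetical data consistent with the proportions quoted in the text.
population = (
    [(1, 1, 1)] * 6 +   # exposed; would die either way
    [(1, 0, 1)] * 1 +   # exposed; harmed by exposure (a "Zeus")
    [(1, 1, 0)] * 1 +   # exposed; helped by exposure
    [(1, 0, 0)] * 5 +   # exposed; would survive either way
    [(0, 1, 1)] * 3 +   # unexposed; would die either way
    [(0, 0, 0)] * 4     # unexposed; would survive either way (a "Hera")
)

def risk(outcomes):
    """Proportion of deaths (outcome = 1) in a list of outcomes."""
    return sum(outcomes) / len(outcomes)

# Counterfactual (causal) risks: everyone's Y_a, regardless of actual A.
pr_ya1 = risk([y1 for _, _, y1 in population])   # Pr[Y_{a=1}=1]
pr_ya0 = risk([y0 for _, y0, _ in population])   # Pr[Y_{a=0}=1]

# Observed (associational) risks: Y equals Y_a1 if A=1, else Y_a0.
pr_y_given_a1 = risk([y1 for a, _, y1 in population if a == 1])  # Pr[Y=1|A=1]
pr_y_given_a0 = risk([y0 for a, y0, _ in population if a == 0])  # Pr[Y=1|A=0]

print("causal risk ratio:       ", pr_ya1 / pr_ya0)               # 1.0: no causal effect
print("associational risk ratio:", pr_y_given_a1 / pr_y_given_a0) # > 1: association exists
```

In this invented population the causal risk ratio is exactly 1 while the associational risk ratio is about 1.26: the two families of measures answer different questions.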
When A and Y are not associated, we say that A does not predict Y, or vice versa. Lack of association is represented by Y⨿A (or, equivalently, A⨿Y), which is read as Y and A are independent. Note that the risk Pr[Y=1|A=a] is computed using the subset of subjects of the population that meet the condition “having actually received exposure a” (that is, it is a conditional probability), whereas the risk Pr[Y[a]=1] is computed using all subjects of the population had they received the counterfactual exposure a (that is, it is an unconditional or marginal probability). Therefore, association is defined by a different risk in two disjoint subsets of the population determined by the subjects’ actual exposure value, whereas causation is defined by a different risk in the same subset (for example, the entire population) under two potential exposure values (fig 1). This radically different definition accounts for the well known adage “association is not causation.” When an association measure differs from the corresponding effect measure, we say that there is bias or confounding. Unlike association measures, effect measures cannot be directly computed because of missing data (see table 3). However, effect measures can be computed—or, more rigorously, consistently estimated (see appendix)—in randomised experiments. Suppose we have a (near-infinite) population and that we flip a coin for each subject in such population. We assign the subject to group 1 if the coin turns tails, and to group 2 if it turns heads. Next we administer the treatment or exposure of interest (A=1) to subjects in group 1 and placebo (A=0) to those in group 2. Five days later, at the end of the study, we compute the mortality risks in each group, Pr[Y=1|A=1] and Pr[Y=1|A=0]. For now, let us assume that this randomised experiment is ideal in all other respects (no loss to follow up, full compliance with assigned treatment, blind assignment). 
We will show that, in such a study, the observed risk Pr[Y=1|A=a] is equal to the counterfactual risk Pr[Y[a]=1], and therefore the associational risk ratio equals the causal risk ratio. First note that, when subjects are randomly assigned to groups 1 and 2, the proportion of deaths among the exposed, Pr[Y=1|A=1], will be the same whether subjects in group 1 receive the exposure and subjects in group 2 receive placebo, or vice versa. Because group membership is randomised, both groups are “comparable”: which particular group got the exposure is irrelevant for the value of Pr [Y=1|A=1]. (The same reasoning applies to Pr[Y=1|A=0].) Formally, we say that both groups are exchangeable. Exchangeability means that the risk of death in group 1 would have been the same as the risk of death in group 2 had subjects in group 1 received the exposure given to those in group 2. That is, the risk under the potential exposure value a among the exposed, Pr[Y[a]=1|A=1], equals the risk under the potential exposure value a among the unexposed, Pr[Y[a]=1|A=0], for a=0 and a=1. An obvious consequence of these (conditional) risks being equal in all subsets defined by exposure status in the population is that they must be equal to the (marginal) risk under exposure value a in the whole population: Pr[Y[a]=1|A=1]=Pr[Y[a]=1|A=0]=Pr[Y[a]=1]. In other words, under exchangeability, the actual exposure does not predict the counterfactual outcome; they are independent, or Y[a]⨿A for all values a. Randomisation produces exchangeability. We are only one step short of showing that the observed risk Pr[Y=1|A=a] equals the counterfactual risk Pr[Y[a]=1] in ideal randomised experiments. By definition, the value of the counterfactual outcome Y[a] for subjects who actually received exposure value a is their observed outcome value Y. Then, among those who actually received exposure value a, the risk under the potential exposure value a is trivially equal to the observed risk. 
That is, Pr[Y[a]=1|A=a]=Pr[Y=1|A=a]. Let us now combine the results from the two previous paragraphs. Under exchangeability, Y[a]⨿A for all a, the conditional risk among those exposed to a is equal to the marginal risk had the whole population been exposed to a: Pr[Y[a]=1|A=1]=Pr[Y[a]=1|A=0]=Pr[Y[a]=1]. And by definition of counterfactual outcome Pr[Y[a]=1|A=a]=Pr[Y=1|A=a]. Therefore, the observed risk Pr[Y=1|A=a] equals the counterfactual risk Pr[Y[a]=1]. In ideal randomised experiments, association is causation. On the other hand, in non-randomised (for example, observational) studies association is not necessarily causation because of potential lack of exchangeability of exposed and unexposed subjects. For example, in our heart transplant study, the risk of death under no treatment is different for the exposed and the unexposed: Pr[Y[a][=0]=1|A=1]=7/13≠Pr[Y[a][=0]=1|A=0]=3/7. We say that the exposed had a worse prognosis, and therefore a greater risk of death, than the unexposed, or that Y[a]⨿A does not hold for a=0. We have so far assumed that the counterfactual outcomes Y[a] exist and are well defined. However, that is not always the case. Suppose women (S=1) have a greater risk of certain disease Y than men (S=0)—that is, Pr[Y=1|S=1]>Pr[Y=1|S=0]. Does sex S have a causal effect on the risk of Y—that is, is Pr[Y[s][=1]=1]>Pr[Y[s][=0]=1]? This question is quite vague because it is unclear what we mean by the risk of Y had everybody been a woman (or a man). Do we mean the risk of Y had everybody "carried a pair of X chromosomes", "been brought up as a woman", "had female genitalia", or "had high levels of oestrogens between adolescence and menopausal age"? Each of these definitions of the exposure "female sex" would lead to a different causal effect. To give an unambiguous meaning to a causal question, we need to be able to describe the interventions that would allow us to compute the causal effect in an ideal randomised experiment.
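The exchangeability argument can be checked with a small simulation: generate counterfactual outcomes for a large population, assign exposure with a fair coin, and compare the observed risk with the counterfactual risk. This Python sketch uses invented outcome risks (0.3 under a=1 and 0.2 under a=0) and an arbitrary sample size; neither is taken from the article.

```python
import random

# Simulation sketch of "in ideal randomised experiments, association is
# causation". Outcome risks 0.3 and 0.2 are invented for illustration.
random.seed(42)
n = 100_000

# Deterministic counterfactual outcomes, drawn once per subject.
y_a1 = [1 if random.random() < 0.3 else 0 for _ in range(n)]
y_a0 = [1 if random.random() < 0.2 else 0 for _ in range(n)]

# Randomisation: a fair coin decides each subject's exposure A.
a = [1 if random.random() < 0.5 else 0 for _ in range(n)]

# The observed outcome is the counterfactual under the exposure received.
y = [y1 if ai == 1 else y0 for ai, y0, y1 in zip(a, y_a0, y_a1)]

def risk(outcomes):
    return sum(outcomes) / len(outcomes)

obs_risk_1 = risk([yi for ai, yi in zip(a, y) if ai == 1])  # Pr[Y=1|A=1]
cf_risk_1 = risk(y_a1)                                      # Pr[Y_{a=1}=1]

# The gap is small: randomisation produced exchangeability.
print(abs(obs_risk_1 - cf_risk_1))
```

With a hundred thousand subjects, the observed risk among the exposed and the counterfactual risk under exposure agree to within a fraction of a percentage point, as the argument in the text predicts.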
For example, “administer 30 μg/day of ethinyl estradiol from age 14 to age 45” compared with “administer placebo.” That some interventions sound technically unfeasible or plainly crazy simply indicates that the formulation of certain causal questions (for example, the effect of sex, high serum LDL-cholesterol, or high HIV viral load on the risk of certain disease) is not always straightforward. A counterfactual approach to causal inference highlights the imprecision of ambiguous causal questions, and the need for a common understanding of the interventions involved. We now review some common methodological problems that may lead to bias in randomised experiments. To fix ideas, suppose we are interested in the causal effect of a heart transplant on one year survival. We start with a (near-infinite) population of potential recipients of a transplant, randomly allocate each subject in the population to either transplant (A=1) or medical treatment (A= 0), and ascertain how many subjects die within the next year (Y=1) in each group. We then try to measure the effect of heart transplant on survival by computing the associational risk ratio Pr[Y= 1|A=1]/Pr[Y=1|A=0], which is theoretically equal to the causal risk ratio Pr[Y[a][=1]=1]/Pr[Y[a][=0]=1]. Consider the following problems: • Loss to follow up. Subjects may be lost to follow up or drop out of the study before their outcome is ascertained. When this happens, the risk Pr[Y=1|A=a] cannot be computed because the value of Y is not available for some people. Instead we can compute Pr[Y=1|A=a, C=0] where C indicates whether the subject was lost (1: yes, 0: no). This restriction to subjects with C=0 is problematic because subjects that were lost (C=1) may not be exchangeable with subjects who remained through the end of the study (C=0). 
For example, if subjects who did not receive a transplant (A=0) and who had a more severe disease decide to leave the study, then the risk Pr[Y=1|A=0, C=0] among those remaining in the study would be lower than the risk Pr[Y=1|A=0] among those originally assigned to medical treatment. Our association measure Pr[Y=1|A=1, C=0]/Pr[Y=1|A=0, C=0] would not generally equal the effect measure Pr[Y[a][=1]=1]/Pr[Y[a][=0]=1]. • Non-compliance. Subjects may not adhere to the assigned treatment. Let A be the exposure to which subjects were randomly assigned, and B the exposure they actually received. Suppose some subjects that had been assigned to medical treatment (A=0) obtained a heart transplant outside of the study (B=1). In an "intention to treat" analysis, we compute Pr[Y=1|A=a], which equals Pr[Y[a]=1]. However, we are not interested in the causal effect of assignment A, a misclassified version of the true exposure B, but in the causal effect of B itself. The alternative "as treated" approach—using Pr[Y=1|B=b] for causal inference—is problematic. For example, if the most severely ill subjects in the A=0 group seek a heart transplant (B=1) outside of the study, then the group B=1 would include a higher proportion of severely ill subjects than the group B=0. The groups B=1 and B=0 would not be exchangeable—that is, Pr[Y=1|B=b]≠Pr[Y[b]=1]. In the presence of non-compliance, an intention to treat analysis guarantees exchangeability of the groups defined by a misclassified exposure (the original assignment), whereas an as treated analysis guarantees a correct classification of exposure but not exchangeability of the groups defined by this exposure. However, the intention to treat analysis is often preferred because, unlike the as treated analysis, it provides an unbiased association measure if the sharp causal null hypothesis holds for the exposure B. • Unblinding.
When the study subjects are aware of the treatment they receive (as in our heart transplant study), they may change their behaviour accordingly. For example, those who received a transplant may change their diet to keep their new heart healthy. The equality Pr[Y=1|A=a]=Pr[Y[a]=1] still holds, but now the causal effect of A combines the effects of the transplant and the dietary change. To avoid this problem, knowledge of the level of exposure assigned to each group is withheld from subjects and their doctors (they are "blinded"), when possible. The goal is to ensure that the whole effect, if any, of the exposure assignment A is solely attributable to the exposure received B (the heart transplant in our example). When this goal is achieved, we say that the exclusion restriction holds—that is, Y[a][=0,b]=Y[a][=1,b] for all subjects and all values b and, specifically, for the value B observed for each subject. In non-blinded studies, or when blinding does not work (for example, the well known side effects of a treatment make apparent who is taking it), the exclusion restriction cannot be guaranteed, and therefore the intention to treat analysis may not yield an unbiased association measure even under the sharp causal null hypothesis for exposure B. In summary, the fact that exchangeability Y[a]⨿A holds in a well designed randomised experiment does not guarantee an unbiased estimate of the causal effect because: i) Y may not be measured for all subjects (loss to follow up), ii) A may be a misclassified version of the true exposure (non-compliance), and iii) A may be a combination of the exposure of interest plus other actions (unblinding). Causal inference from randomised studies in the presence of these problems requires assumptions and analytical methods similar to those used for causal inference from observational studies. Leaving aside these methodological problems, randomised experiments may be unfeasible because of ethical, logistic, or financial reasons.
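The non-compliance discussion can be illustrated with a simulation in which the sharp causal null holds for the received exposure B, yet severely ill subjects assigned to medical treatment cross over to transplant. All numeric parameters in this Python sketch (severity prevalence, crossover probability, outcome risks) are invented for illustration, not taken from the article.

```python
import random

# Under a sharp causal null (the received exposure B has no effect on any
# subject's outcome), an intention to treat contrast by assignment A stays
# near null, while an "as treated" contrast by received exposure B is biased
# because severely ill A=0 subjects cross over to B=1.
random.seed(0)
n = 200_000
itt, as_treated = {0: [], 1: []}, {0: [], 1: []}

for _ in range(n):
    severe = random.random() < 0.3                   # baseline prognosis
    a = random.random() < 0.5                        # randomised assignment
    b = a or (severe and random.random() < 0.5)      # severe A=0 subjects may cross over
    y = random.random() < (0.6 if severe else 0.1)   # outcome ignores b: sharp null
    itt[int(a)].append(y)
    as_treated[int(b)].append(y)

def risk(outcomes):
    return sum(outcomes) / len(outcomes)

itt_diff = risk(itt[1]) - risk(itt[0])                       # approximately 0
as_treated_diff = risk(as_treated[1]) - risk(as_treated[0])  # clearly positive

print(round(itt_diff, 3), round(as_treated_diff, 3))
```

The intention to treat contrast hovers near zero, consistent with the text's point that it is unbiased under the sharp causal null for B, while the as treated contrast is visibly positive because the B=1 and B=0 groups are not exchangeable.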
For example, it is questionable that an ethical committee would have approved our heart transplant study. Hearts are in short supply and society favours assigning them to subjects who are more likely to benefit from the transplant, rather than assigning them randomly among potential recipients. Randomised experiments of harmful exposures (for example, cigarette smoking) are generally unacceptable too. Frequently, the only option is conducting observational studies in which exchangeability is not guaranteed. Hume^1 hinted at a counterfactual theory of causation, but the application of counterfactual theory to the estimation of causal effects via randomised experiments was first formally proposed by Neyman.^2 Rubin^3,^4 extended Neyman's theory to the estimation of the effects of fixed exposures in randomised and observational studies. Fixed exposures are exposures that either are applied at one point in time only or never change over time. Examples of fixed exposures in epidemiology are a surgical intervention, a traffic accident, a one dose immunisation, or a medical treatment that is continuously administered during a given period regardless of its efficacy or side effects. Rubin's counterfactual model has been discussed by Holland and others.^5 Robins^6,^7 proposed a more general counterfactual model that permits the estimation of total and direct effects of fixed and time varying exposures in longitudinal studies, whether randomised or observational. Examples of time varying exposures in epidemiology are a medical treatment, diet, cigarette smoking, or an occupational exposure. For simplicity of presentation, our article was restricted to the effects of fixed exposures. The use of the symbol ⨿ to denote independence was introduced by Dawid.^8 Our descriptions of causal effect and exchangeability have relied on the idea that we somehow collected information from all the subjects in the population of interest.
This simplification has been useful to focus our attention on the conceptual aspects of causal inference, by keeping them separate from aspects related to random statistical variability. We now extend our definitions to more realistic settings in which random variability exists. Many real world studies are based on samples of the population of interest. The first consequence of working with samples is that, even if the counterfactual outcomes of all subjects in the study were known, one cannot obtain the exact proportion of subjects in the population who had the outcome under exposure value a—that is, the probability Pr[Y[a]=1] cannot be directly computed. One can only estimate this probability. Consider the subjects in table 2. We have previously viewed them as forming a 20 person population. Let us now view them as a random sample of a much larger population. In this sample, the proportion of subjects who would have died if unexposed is P̂r[Y[a][=0]=1]=10/20=0.5, which does not have to be exactly equal to the proportion of subjects who would have died if the entire population had been unexposed, Pr[Y[a][=0]=1]. We use the sample proportion P̂r[Y[a]=1] to estimate the population probability Pr[Y[a]=1]. (The "hat" over Pr indicates that P̂r[Y[a]=1] is an estimator.) We say that P̂r[Y[a]=1] is a consistent estimator of Pr[Y[a]=1] because the larger the number of subjects in the sample, the smaller the difference between P̂r[Y[a]=1] and Pr[Y[a]=1] is expected to be. In the long run (that is, if the estimator is applied to infinite samples of the population), the mean difference is expected to become zero. There is a causal effect of A on Y in such a population if Pr[Y[a][=1]=1]≠Pr[Y[a][=0]=1]. This definition, however, cannot be directly applied because the population probabilities Pr[Y[a]=1] cannot be computed, but only consistently estimated by the sample proportions P̂r[Y[a]=1].
Therefore, one cannot conclude with certainty that there is (or there is not) a causal effect. Rather, standard statistical procedures are needed to test the causal null hypothesis Pr[Y[a][=1]=1]=Pr[Y[a][=0]=1] by comparing P̂r[Y[a][=1]=1] and P̂r[Y[a][=0]=1], and to compute confidence intervals for the effect measures. The availability of data from only a sample of subjects in the population, even if the values of all their counterfactual outcomes were known, is the first reason why statistics is necessary in causal inference. The previous discussion assumes that one can have access to the values of both counterfactual outcomes for each subject in the sample (as in table 2), whereas in real world studies one can only access the value of one counterfactual outcome for each subject (as in table 3). Therefore, whether one is working with the whole population or with a sample, neither the probability Pr[Y[a]=1] nor its consistent estimator P̂r[Y[a]=1] can be directly computed for any value a. Instead, one can compute the sample proportion of subjects that develop the outcome among the exposed, P̂r[Y=1|A=1]=7/13, and among the unexposed, P̂r[Y=1|A=0]=3/7. There are two major conceptualisations of this problem: 1. The population of interest is near infinite and we hypothesise that all subjects in the population are randomly assigned to either A=1 or A=0. Exchangeability of the exposed and unexposed would hold in the population—that is, Pr[Y[a]=1]=Pr[Y=1|A=a]. Now we can see our sample as a random sample from this population where exposure is randomly assigned. The problem boils down to standard statistical inference with the sample proportion P̂r[Y=1|A=a] being a consistent estimator of the population probability Pr[Y=1|A=a]. This is the simplest conceptualisation. 2. Only the subjects in our sample, not all subjects in the entire population, are randomly assigned to either A=1 or A=0.
Because of the presence of random sampling variability, we do not expect that exchangeability will exactly hold in our sample. For example, suppose that 100 subjects are randomly assigned to either heart transplant (A=1) or medical treatment (A=0). Each subject can be classified as good or bad prognosis at the time of randomisation. We say that the groups A=0 and A=1 are exchangeable if they include exactly the same proportion of subjects with bad prognosis. By chance, it is possible that 17 of the 50 subjects assigned to A=1 and 13 of the 50 subjects assigned to A=0 had bad prognosis. The two groups are not exactly exchangeable. However, if we could draw many additional 100 person samples from the population and repeat the randomised experiment in each of these samples (or, equivalently, if we could increase the size of our original sample), then the imbalances between the groups A=1 and A=0 would be increasingly attenuated. Under this conceptualisation, the sample proportion P̂r[Y=1|A=a] is a consistent estimator of P̂r[Y[a]=1], and P̂r[Y[a]=1] is a consistent estimator of the population proportion Pr[Y[a]=1] if our sample is a random sample of the population of interest. This is the most realistic conceptualisation. Under either conceptualisation, standard statistical procedures are needed to test the causal null hypothesis Pr[Y[a][=1]=1]=Pr[Y[a][=0]=1] by comparing P̂r[Y=1|A=1] and P̂r[Y=1|A=0], and to compute confidence intervals for the estimated association measures, which are consistent estimators of the effect measures. The availability of the value of only one counterfactual outcome for each subject, regardless of whether all subjects in the population of interest are or are not included in the study (and regardless of which conceptualisation is used), is the second reason why statistics is necessary in causal inference.
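The notion of a consistent estimator can be made concrete with a short simulation: as the sample grows, the sample proportion approaches the population probability. In this Python sketch the true probability 0.5 echoes the Pr[Y[a][=0]=1]=0.5 example above; the sample sizes are arbitrary.

```python
import random

# A consistent estimator in action: the sample proportion of a
# Bernoulli(p) outcome converges to p as the sample size grows.
random.seed(1)

def sample_proportion(n, p=0.5):
    """Draw n Bernoulli(p) outcomes and return the sample proportion."""
    return sum(random.random() < p for _ in range(n)) / n

for n in (20, 2_000, 200_000):
    est = sample_proportion(n)
    print(f"n={n:>6}  estimate={est:.3f}  |error|={abs(est - 0.5):.3f}")
```

For any single run the error at a given sample size is random, but its typical magnitude shrinks like one over the square root of n, which is what "consistently estimated" refers to throughout the article.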
A2.1 Definition of causal effect We defined causal effect of the exposure on the outcome, Pr[Y[a][=1]=1]≠Pr[Y[a][=0]=1], as a difference between the counterfactual risk of the outcome had everybody in the population of interest been exposed and the counterfactual risk of the outcome had everybody in the population been unexposed. In some cases, however, investigators may be more interested in the causal effect of the exposure in a subset of the population of interest (rather than the effect in the entire population). This causal effect is defined as a contrast of counterfactual risks in that subset of the population of interest. A common choice is the subset of the population composed of the subjects that were actually exposed. Thus, we can define the causal effect in the exposed as Pr[Y[a][=1]=1|A=1]≠Pr[Y[a][=0]=1|A=1] or, by definition of counterfactual outcome, Pr[Y=1|A=1]≠Pr[Y[a][=0]=1|A=1]. That is, there is a causal effect in the exposed if the risk of the outcome among the exposed subjects in the population of interest does not equal the counterfactual risk of the outcome had the exposed subjects in the population been unexposed. The causal risk difference in the exposed is Pr[Y=1|A=1]−Pr[Y[a][=0]=1|A=1], the causal risk ratio in the exposed is Pr[Y=1|A=1]/Pr[Y[a][=0]=1|A=1], and the causal odds ratio in the exposed is (Pr[Y=1|A=1]/Pr[Y=0|A=1])/(Pr[Y[a][=0]=1|A=1]/Pr[Y[a][=0]=0|A=1]). The causal effect in the entire population can be computed under the condition that the exposed and the unexposed are exchangeable—that is, Y[a] ⨿ A for a=0 and a=1. On the other hand, the causal effect in the exposed can be computed under the weaker condition that the exposed and the unexposed are exchangeable had they been unexposed—that is, Y[a] ⨿ A for a=0 only. Under this weaker exchangeability condition, the risk of the outcome under no exposure is equal for the exposed and the unexposed: Pr[Y[a][=0]=1|A=1]=Pr[Y[a][=0]=1|A=0]. By definition of a counterfactual outcome Pr[Y[a][=0]=1|A=0]=Pr[Y=1|A=0].
Therefore, when the exposed and unexposed are exchangeable under a=0, Pr[Y[a][=0]=1|A=1]=Pr[Y[a][=0]=1|A=0]=Pr[Y=1|A=0]. We decided to restrict our discussion to the causal effect in the entire population and not to the causal effect in the exposed because the latter cannot be directly generalised to time varying exposures. A2.2 Non-dichotomous outcome and exposure The definition of causal effect can be generalised to non-dichotomous exposure A and outcome Y. Let E[Y[a]] be the mean counterfactual outcome had all subjects in the population received exposure level a. For discrete outcomes, the expected value E[Y[a]] is defined as the weighted sum ∑y yp[Y[a]](y) over all possible values y of the random variable Y[a], where p[Y[a]](·) is the probability mass function of Y[a]—that is, p[Y[a]](y)=Pr[Y[a]=y]. For continuous outcomes, the expected value E[Y[a]] is defined as the integral ∫y f[Y[a]](y)dy over all possible values y of the random variable Y[a], where f[Y[a]](·) is the probability density function of Y[a]. A common representation of the expected value for discrete and continuous outcomes is E[Y[a]]=∫y dF[Y[a]](y), where F[Y[a]](·) is the cumulative distribution function (cdf) of the random variable Y[a]. We say that there is a population average causal effect if E[Y[a]]≠E[Y[a][′]] for any two values a and a′. In ideal randomised experiments, the expected value E[Y[a]] can be consistently estimated by the average of Y among subjects with A=a. For dichotomous outcomes, E[Y[a]]=Pr[Y[a]=1]. The average causal effect is defined by the contrast of E[Y[a]] and E[Y[a′]]. When we talk of "the causal effect of heart transplant (A)" we mean the contrast between "receiving a heart transplant (a=1)" and "not receiving a heart transplant (a=0)." In this case, we may not need to be explicit about the particular contrast because there are only two possible actions, and therefore only one possible contrast.
But for non-dichotomous exposure variables A, the particular contrast of interest needs to be specified. For example, "the causal effect of aspirin" is meaningless unless we specify that the contrast of interest is, say, "taking 150 mg of aspirin daily for five years" compared with "not taking aspirin". Note that this causal effect is well defined even if counterfactual outcomes under interventions other than those involved in the causal contrast of interest are not well defined or even do not exist (for example, "taking 1 kg of aspirin daily for five years"). The average causal effect, defined as a contrast of means of counterfactual outcomes, is the most commonly used causal effect. However, the causal effect may also be defined by a contrast of, say, medians, variances, or cdfs of counterfactual outcomes. In general, the causal effect can be defined as a contrast of any functional of the distributions of counterfactual outcomes under different exposure values. The causal null hypothesis refers to the particular contrast of functionals (means, medians, variances, cdfs, ...) used to define the causal effect. A2.3 Non-deterministic counterfactual outcomes We have defined the counterfactual outcome Y[a] as the subject's outcome had he experienced exposure value a. For example, in our first vignette, Zeus would have died if treated and would have survived if untreated. This definition of counterfactual outcome is deterministic because each subject has a fixed value for each counterfactual outcome, for example, Y[a][=1]=1 and Y[a][=0]=0 for Zeus. However, we could imagine a world in which Zeus has a certain probability of dying, say 0.9, if treated and a certain probability of dying, say 0.1, if untreated. This is a non-deterministic or stochastic definition of counterfactual outcome because the probabilities are not zero or one.
In general, the probabilities vary across subjects (that is, they are random) because not all subjects are equally susceptible to develop the outcome. For discrete outcomes, the expected value E[Y[a]] is then defined as the weighted sum ∑y yp[Y[a]](y) over all possible values y of the random variable Y[a], where the probability mass function p[Y[a]](·) is the average across subjects of the individual probability mass functions. More generally, a non-deterministic definition of counterfactual outcome does not attach some particular value of the random variable Y[a] to each subject, but rather a statistical distribution Θ[Y[a]](·) of Y[a]. The deterministic definition of counterfactual outcome implies that the cdf Θ[Y[a]](y) can only take values 0 or 1 for all y. The use of random distributions of Y[a] (that is, distributions that may vary across subjects) to allow for non-deterministic counterfactual outcomes does not imply any modification in the definition of average causal effect or the methods used to estimate it. To show this, first note that E[Y[a]]=E[E[Y[a]|Θ[Y[a]](·)]]. Therefore, E[Y[a]]=E[∫y dΘ[Y[a]](y)]=∫y dE[Θ[Y[a]](y)]=∫y dF[Y[a]](y) because F[Y[a]](·)=E[Θ[Y[a, i]](·)]. The non-deterministic definition of causal effect is a generalisation of the deterministic definition in which Θ[Y[a]](·) is a general cdf that may take values between 0 and 1. The choice of deterministic compared with non-deterministic counterfactual outcomes has no consequences for the definition of the average causal effect and the point estimation of effect measures based on averages of counterfactual outcomes. However, this choice has implications for the computation of confidence intervals for the effect measures.^9 An implicit assumption in our definition of individual causal effect is that a subject's counterfactual outcome under exposure value a does not depend on other subjects' exposure value.
This assumption was labelled “no interaction between units” by Cox,^10 and “stable-unit-treatment-value assumption (SUTVA)” by Rubin.^11 If this assumption does not hold (for example, in studies dealing with contagious diseases or educational programmes), then individual causal effects cannot be identified by using the hypothetical data in table 2. Most methods for causal inference assume that SUTVA holds.

Some philosophers of science define causal effects using the concept of “possible worlds.” The actual world is the way things actually are. A possible world is a way things might be. Imagine a possible world a where everybody receives exposure value a, and a possible world a′ where everybody receives exposure value a′. The mean of the outcome is E[Y[a]] in the first possible world and E[Y[a′]] in the second one. There is a causal effect if E[Y[a]]≠E[Y[a′]] and the worlds a and a′ are the two worlds closest to the actual world where all subjects receive exposure value a and a′, respectively. We introduced the counterfactual Y[a] as the outcome of a certain subject under a well specified intervention that exposed her to a. Some philosophers prefer to think of the counterfactual Y[a] as the outcome of the subject in the possible world that is closest to our world and where she was exposed to a. Both definitions are equivalent when the only difference between the closest possible world and the actual world is that the intervention of interest took place. The possible worlds’ formulation of counterfactuals replaces the difficult problem of specifying the intervention of interest by the equally difficult problem of describing the closest possible world that is minimally different from the actual world. The two main counterfactual theories based on possible worlds, which differ only in details, have been proposed by Stalnaker^12 and Lewis.^13

The author is deeply indebted to James Robins for his contributions to earlier versions of this manuscript.
• Funding: NIH grant KO8-AI-49392
• Conflicts of interest: none declared.
Symmetry of second derivatives

In mathematics, the symmetry of second derivatives (also called the equality of mixed partials) refers to the possibility under certain conditions (see below) of interchanging the order of taking partial derivatives of a function of n variables. If the partial derivative with respect to x[i] is denoted with a subscript i, then the symmetry is the assertion that the second-order partial derivatives satisfy the identity f[ij] = f[ji], so that they form an n × n symmetric matrix. This is sometimes known as Schwarz's theorem or Young's theorem.^[1]^[2] In the context of partial differential equations it is called the Schwarz integrability condition.

Hessian matrix

This matrix of second-order partial derivatives of f is called the Hessian matrix of f. The entries in it off the main diagonal are the mixed derivatives; that is, successive partial derivatives with respect to different variables. In most "real-life" circumstances the Hessian matrix is symmetric, although there are a great number of functions that do not have this property. Mathematical analysis reveals that symmetry requires a hypothesis on f that goes further than simply stating the existence of the second derivatives at a particular point. Schwarz's theorem gives a sufficient condition on f for this to occur.

Formal expressions of symmetry

In symbols, the symmetry says that, for example, ∂/∂x(∂f/∂y) = ∂/∂y(∂f/∂x). This equality can also be written as ∂²f/∂x∂y = ∂²f/∂y∂x. Alternatively, the symmetry can be written as an algebraic statement involving the differential operator D[i] which takes the partial derivative with respect to x[i]: D[i] · D[j] = D[j] · D[i]. From this relation it follows that the ring of differential operators with constant coefficients, generated by the D[i], is commutative. But one should naturally specify some domain for these operators. It is easy to check the symmetry as applied to monomials, so that one can take polynomials in the x[i] as a domain. In fact smooth functions are another valid domain.
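As a numerical illustration (not part of the original article), the symmetry can be checked for any smooth function by nesting central finite differences; the test function and evaluation point below are arbitrary choices:

```python
import math

def partial(g, x, y, var, h=1e-4):
    # Central finite difference of g with respect to one variable.
    if var == "x":
        return (g(x + h, y) - g(x - h, y)) / (2 * h)
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

def mixed(f, x, y, order):
    # order "xy" means d/dx (df/dy); order "yx" means d/dy (df/dx).
    if order == "xy":
        return partial(lambda u, v: partial(f, u, v, "y"), x, y, "x")
    return partial(lambda u, v: partial(f, u, v, "x"), x, y, "y")

f = lambda x, y: x**3 * y**2 + math.sin(x * y)  # a smooth test function

fxy = mixed(f, 0.7, -1.3, "xy")
fyx = mixed(f, 0.7, -1.3, "yx")
print(fxy, fyx)  # agree up to discretisation error
```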
Schwarz's theorem

In mathematical analysis, Schwarz's theorem (or Clairaut's theorem^[3]), named after Alexis Clairaut and Hermann Schwarz, states that if f: R^n → R has continuous second partial derivatives at any given point in R^n, say (a[1], ..., a[n]), then for all i and j, ∂²f/∂x[i]∂x[j](a[1], ..., a[n]) = ∂²f/∂x[j]∂x[i](a[1], ..., a[n]): the partial derivatives of this function are commutative at that point. One easy way to establish this theorem (in the case where n = 2, i = 1, and j = 2, which readily entails the result in general) is by applying Green's theorem to the gradient of f.

Sufficiency of twice-differentiability

A weaker condition than the continuity of second partial derivatives (which is implied by the latter) which nevertheless suffices to ensure symmetry is that all partial derivatives are themselves differentiable.

Distribution theory formulation

The theory of distributions (generalized functions) eliminates analytic problems with the symmetry. The derivative of an integrable function can always be defined as a distribution, and symmetry of mixed partial derivatives always holds as an equality of distributions. The use of formal integration by parts to define differentiation of distributions puts the symmetry question back onto the test functions, which are smooth and certainly satisfy this symmetry. In more detail (where f is a distribution, written as an operator on test functions, and φ is a test function), ⟨D[i]D[j]f, φ⟩ = ⟨f, D[j]D[i]φ⟩ = ⟨f, D[i]D[j]φ⟩ = ⟨D[j]D[i]f, φ⟩. Another approach, which defines the Fourier transform of a function, is to note that on such transforms partial derivatives become multiplication operators that commute much more obviously.

Requirement of continuity

The symmetry may be broken if the function fails to have differentiable partial derivatives, which is possible if the hypothesis of Clairaut's theorem is not satisfied (the second partial derivatives are not continuous). An example of non-symmetry is the function f(x, y) = xy(x² − y²)/(x² + y²) for (x, y) ≠ (0, 0), with f(0, 0) = 0. This function is everywhere continuous, but its derivatives at (0, 0) cannot be computed algebraically.
Rather, the limit of difference quotients shows that f[x](0, 0) = f[y](0, 0) = 0, so the graph z = f(x, y) has a horizontal tangent plane at (0, 0), and the partial derivatives exist and are everywhere continuous. However, the second partial derivatives are not continuous at (0, 0), and the symmetry fails. In fact, along the x-axis the y-derivative is f[y](x, 0) = x, and so f[yx](0, 0) = 1. Vice versa, along the y-axis the x-derivative is f[x](0, y) = −y, and so f[xy](0, 0) = −1. That is, f[yx](0, 0) ≠ f[xy](0, 0) at (0, 0), although the mixed partial derivatives do exist, and at every other point the symmetry does hold. In general, the interchange of limiting operations need not commute. Given two variables near (0, 0), there are two limiting processes on f(h, k) − f(h, 0) − f(0, k) + f(0, 0), corresponding to making h → 0 first, and to making k → 0 first. It can matter, looking at the first-order terms, which is applied first. This leads to the construction of pathological examples in which second derivatives are non-symmetric. This kind of example belongs to the theory of real analysis where the pointwise value of functions matters. When viewed as a distribution the second partial derivative's values can be changed at an arbitrary set of points as long as this has Lebesgue measure 0. Since in the example the Hessian is symmetric everywhere except (0, 0), there is no contradiction with the fact that the Hessian, viewed as a Schwartz distribution, is symmetric.

In Lie theory

Consider the first-order differential operators D[i] to be infinitesimal operators on Euclidean space. That is, D[i] in a sense generates the one-parameter group of translations parallel to the x[i]-axis. These groups commute with each other, and therefore the infinitesimal generators do also; the Lie bracket [D[i], D[j]] = 0 is this property's reflection. In other words, the Lie derivative of one coordinate with respect to another is zero.

This article is issued from Wikipedia, version of 11/13/2016. The text is available under the Creative Commons Attribution/Share Alike license, but additional terms may apply for the media files.
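Returning to the counterexample: the standard function behind this passage is f(x, y) = xy(x² − y²)/(x² + y²) with f(0, 0) = 0, and its unequal mixed partials at the origin can be estimated numerically. The step sizes below are chosen so the inner difference quotient is taken much closer to its limit than the outer one:

```python
def f(x, y):
    # The classic counterexample: smooth away from the origin, f(0, 0) = 0.
    if x == 0 and y == 0:
        return 0.0
    return x * y * (x * x - y * y) / (x * x + y * y)

inner, outer = 1e-9, 1e-3  # inner step taken much smaller than the outer step

def fx(x, y):  # first partial in x via a central difference
    return (f(x + inner, y) - f(x - inner, y)) / (2 * inner)

def fy(x, y):  # first partial in y via a central difference
    return (f(x, y + inner) - f(x, y - inner)) / (2 * inner)

fyx_00 = (fy(outer, 0) - fy(-outer, 0)) / (2 * outer)  # d/dx of f_y at the origin
fxy_00 = (fx(0, outer) - fx(0, -outer)) / (2 * outer)  # d/dy of f_x at the origin
print(fyx_00, fxy_00)  # approximately +1 and -1: the symmetry fails at (0, 0)
```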
HSPT Math Practice Workbook

100% aligned with the 2022 HSPT test

HSPT Math test-takers' #1 Choice! Recommended by Test Prep Experts!

HSPT Math Practice Workbook, which reflects the 2022 test guidelines, offers extensive exercises, math problems, sample HSPT questions, and quizzes with answers to help you hone your math skills, overcome your exam anxiety, boost your confidence, and perform at your very best to ace the HSPT Math test. 52% Off* Includes HSPT Math Prep Books, Workbooks, and Practice Tests

The Most Comprehensive Math Workbook for the HSPT Test!

The best way to succeed on the HSPT Math Test is with comprehensive practice in every area of math that will be tested, and that is exactly what you will get from the HSPT Math Practice Workbook. Not only will you receive a comprehensive exercise book to review all math concepts that you will need to ace the HSPT Math test, but you will also get two full-length HSPT Math practice tests that reflect the format and question types on the HSPT to help you check your exam-readiness and identify where you need more practice. HSPT Math Practice Workbook contains many exciting and unique features to help you prepare for your test, including:

✓ It’s 100% aligned with the 2022 HSPT test
✓ Written by a top HSPT Math instructor and test prep expert
✓ Complete coverage of all HSPT Math topics on which you will be tested
✓ Abundant math skill-building exercises to help test-takers approach different question types
✓ 2 complete and full-length practice tests featuring new questions, with decisive answers

HSPT Math Practice Workbook, along with other Effortless Math Education books, is used by thousands of test-takers preparing to take the HSPT test each year to help them brush up on math and achieve their very best scores on the HSPT test! This practice workbook is the key to achieving a higher score on the HSPT Math Test. Ideal for self-study and classroom usage!
So if you want to give yourself the best possible chance of success, scroll up, click Add to Cart and get your copy now!
What combination of Australian coins weighs 100 grams?

Simplest solution — use the coin counting machines!!! A dollar coin is 2.00 mm thick. Photographer, cinematographer, web master/coder. James, both $2 and $5 coin bags are still available, though some banks have a preference for one or the other. I just take my coins to the bank and tip them into the coin counting machine and it gets deposited into my account, no fuss, no counting. Thanks for this useful information, Michael. The coin bagging reference is very helpful; where did you find that information out? The Australian one-dollar coin is the second most valuable circulation denomination coin of the Australian dollar after the two-dollar coin; there are also non-circulating legal-tender coins of higher denominations (five-, ten-, two-hundred-dollar coins and the one-million-dollar coin). If it’s a REALLY big container then you may have to do it in batches. You can also count the number of coins, the value of the coins you have, or simply weigh them. Save time and use one of the coin counting machines, found now in many banks, but note that you must take the receipt to a teller to deposit into your account, so no outsiders can use the facility. 94.13 Australian 1966 round 50c coins make up a fine kilogram of silver. Has anyone noticed how filthy your hands get when you are dealing with largish amounts of coins? View all posts by Michael Kubler.
At this point I usually head to the nearest ATM and pull the money out in notes, unless I actually want it in the account. So handy, especially after hours. You may get them through if you go to a bank which counts them manually, but most banks use a coin counter/weigher, so they detect even the slightest difference in weight. About 6.5 x 50 cent coins should make 100 grams. At one time coins were minted in precious metal and the value of the coin depended directly on the weight of precious metal, regardless of the denomination of the coin. I am not sure if it’s the counting of money, the fact you’ve kept so much shrapnel you could buy a new TV with it all, or the fact that you figured out a good way of taking it to the bank….. man, do you have too much spare time? Heaped pile – A bunch of coins where they are spread out over an area, usually only a couple of layers thick. 799 coins in total which were worth $362.35, although I could only bank $320. Has anyone calculated the value per kilo for coins, to work out their value by weight using a commercial scale? I have counted coin by coin, and the bank's counter machine came up with $4 less, for $400. The guy was still in the queue with people gawking at the trolley — honestly I can’t imagine what sort of business extracted the haul. That’s about 8.8 ounces or .55 pounds. You should currently have bunches of sorted and stacked coins; you will now need to weigh them. According to the Australian Mint http://www.ramint.gov.au/designs/ram-designs/ 5c = 2.83g, 10c = 5.65g, 20c = 11.3g, 50c = 15.55g, $1 = 9g and $2 = 6.6g; you may want to update your table as it seems you may be giving the bank a few cents here and there. The people talking about NZ coins in Australia are referring to the old style, from before 2006.
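One comment above asks about value per kilogram; it follows directly from the Mint masses just quoted. A quick sketch for current coins, ignoring wear:

```python
# Royal Australian Mint masses (grams) and face values (dollars) quoted above.
masses = {"5c": 2.83, "10c": 5.65, "20c": 11.3, "50c": 15.55, "$1": 9.0, "$2": 6.6}
values = {"5c": 0.05, "10c": 0.10, "20c": 0.20, "50c": 0.50, "$1": 1.0, "$2": 2.0}

per_kg = {coin: values[coin] * 1000 / masses[coin] for coin in masses}
for coin, dollars in per_kg.items():
    print(f"{coin:>3}: ${dollars:7.2f} per kilogram")
# $2 coins are by far the densest in value (about $303/kg), while 5c, 10c and
# 20c all carry roughly the same $17.70/kg, which is why a bag of small change
# weighs so much for so little money.
```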
Up next we will weigh the world's most popular silver bullion coin by sales volume, the 1 oz American Silver Eagle coin… The palladium value is computed based on the total weight … For instance, a nickel alone weighs 5 grams. I have a heap of coins to count tonight and forgot to get bank bags for the job. Weigh Gram Scale Digital Pocket Scale, 100g by 0.01g (TOP-100), $11.99. I agree. I determined the average weight of each coin by weighing 20 of them, and dividing the total weight by 20. Too easy, so thank you. These machines definitely reject foreign coins – including New Zealand coins. 100 grams equals... (1/5 pound or 3.5 ounces) 1 stick of butter, or a little less than half a cup; half a medium sized apple; two fried eggs; one medium sized banana; 3/4 cup of all-purpose flour; 1/2 cup … The two-dollar coin, also replacing a banknote, was introduced in 1988. All banks appear to use a standard bag which takes $5 of 5c, at least that is what I was given when I went to ANZ, Commonwealth and NAB. Put them back into your original container for the next time you count your coins. I have found myself with about $300 of NZ coins, fortunately not the old ones which are the same sizes as Aussie, otherwise it would be a very heavy bag to carry onto the plane! According to the US Mint, a nickel weighs exactly 5.000 grams. Thus, all Australian coins in use currently are composed of more than half copper. Try not to think about how many other people have touched the coins, maybe after they’ve just blown their nose or gone to the toilet, haha! The diameter of a dollar coin is 1.043 inches (26.49 mm). Usually the bank can provide you with these. Some branches of NAB just empty your bag into a drawer after weighing… ANZ was happy to take $4.90 of 5c (they added 2 of their own into my bag after accepting it..).
In 2016, to celebrate 50 years of decimal currency, a commemorative design for the obverse of the coins was released. The one- and two-cent coins were discontinued in 1991 due to the metal exceeding face value and were withdrawn from circulation. Mintages reported for these coins vary from around 500,000 to around 50 million. Thus, all Australian coins in use currently are composed of more than half copper. I had about 6.2 kg worth of loose change. This is when they are likely to charge you. I could tell when the coins I had were a bit heavy because the outcome wouldn’t be a nice number. Ask the bank! The idea is that you can combine however many nickels are needed to reach the scale’s capacity (i.e. for a 100 gram scale, you would need 20 nickels). There are a couple of ways of counting and sorting the coins. Produced by the Royal Australian Mint, all current coins portrayed four versions of the effigy of Her Majesty Elizabeth II, Queen of Australia, on the obverse, the first effigy designed by Arnold Machin, the second effigy designed by Raphael Maklouf, the third effigy designed by Ian Rank-Broadley and the fourth effigy designed by artist Jody Clark. How do I calibrate a digital pocket scale with an Australian coin? Best results can be obtained for masses between about 10 and 100 grams… New Zealand bags are $100 for dollars, and $10 for cents. The Royal Australian Mint regularly releases collectable coins, one of the most famous of which is the 1980-1994 gold two-hundred-dollar coin series. Coins weigh far more than dollar bills. However, you don’t need to fill a bag full anyway. 35 Australian 5 cent coins weigh nearly 100 grams. I determined the average weight of each coin by weighing 20 of them, and dividing the total weight by 20. I guess it depends on what you are doing with the coins.
There was also about 177 grams worth of foreign currency (I’m not sure why there was so much, I think I’d actually collected some and accidentally put them into the change jar). How thick is a dollar coin? To ensure security and guarantee … As you are probably carrying multiple kilograms worth of change, you’ll want to make sure that whatever container you try using can actually carry the weight of all the bags. This helped so much; now all I have to do is sit at the bank transferring coins from one bag to the other. I worked the information out myself by simply weighing 10 coins and dividing the total by 10. In 1992 the Mint commenced production of commemorative issues which were not for circulation. No Australian general circulation coin weighs anywhere near 100 grams. Over time, coins wear down, reducing their weight. It’s not a great amount, but it would reduce the value per weight ratio. The first $1 coin commemorative issue was in 1986, the first 20c commemorative issue in 1995, and the first $2 commemorative issue in 2012. A Lincoln-faced penny made after 1982 weighs 2.5 grams. The weight of each denomination is well known in the industry and can be found on many websites, including that of the Australian Mint. The approximate weight of a bill, regardless of denomination, is 1 gram. Thanks for this, the weight per bag is great. Effigy of Elizabeth II by Arnold Machin displayed on coins minted in 1966. 9 Australian 20 cent coins weigh a little over 100 grams. You should now have a bunch of plastic coin bags… FILLED WITH COINS!
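The single-denomination weight claims scattered through the comments (35 x 5c, 9 x 20c, about 6.5 x 50c) can be checked against the Mint masses quoted earlier:

```python
# Masses in grams per coin, per the Royal Australian Mint figures cited above.
masses = {"5c": 2.83, "10c": 5.65, "20c": 11.3, "50c": 15.55, "$1": 9.0, "$2": 6.6}

nearest = {}
for coin, g in masses.items():
    n = round(100 / g)  # count of this denomination landing nearest 100 grams
    nearest[coin] = n
    print(f"{n:2d} x {coin:>3} = {n * g:6.2f} g")
# 35 x 5c is about 99.05 g and 9 x 20c about 101.70 g, matching the comments;
# no whole-number count of 50c coins gets very close (6 x 50c is about 93.3 g).
```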
It was changed to a 12-sided shape for 1969 and all following years, but the 12-sided issue was minted as a specimen piece in 1966–67 to test the design. Thanks Jason. 3 Australian 50 cent coins weigh … The bank will basically do what you just did, except with a slightly more advanced set of scales, which can actually detect when you haven’t put in the correct number of coins, or even if you counted some foreign coin as a local one (in which case it’ll error). Hahaha… talking about coin delivery and banks [yes I found your site easily because I wanted to know how the banks sorted denominations into bags — excellent], OK so I rocked up to a local bank to put cash into my account and the queue was fairly long so I was in for a wait. But note that, if you are “purchasing” change, some banks also use $4 lots in their bags or wrapped packs. Also by design, British sixpences (phased out in 1971), shillings/old style 5p and florins/old style 10p (phased out in 1992) are exactly the same size as the Australian 5, 10 and 20c. In reality, the true weight … (2) The coin enjoys a worldwide audience because premiums are low and it is a recognizable bullion coin … Hi, this is an old article, but which bank did you use that takes 5c in bags of $2? The dollar coin is exactly 9 grams, 50-cent coins are 15.55 grams, 10-cent coins are 5.65 grams, and 5-cent coins are 2.83 grams… Take the money to the bank. Money bag/coin money bag – A plastic bag with small holes in it that utilises a ziplock seal and is designed for holding coins of the correct amount for when you deposit them at the bank. Instead of picking up each of the coins, I usually find it’s faster to simply slide them into the different heaps, then pull some of the coins off the table and into your other hand, which should now be half full, so you can make them into stacks.
Systems 05: Strange Solutions - Math For All

Welcome to MathforAll! In this video, we are going to learn about solving systems of equations.

1. We explain the three things that can happen when we solve systems of equations: the answer is intersecting lines at a point; they are parallel lines, where there is no solution; or the two lines are the same, where the answer is infinite solutions.
2. We explain how to tell which of the three options we have when we’re solving algebraically.
3. We solve two examples, one that results in parallel lines and one that is the same line.

For fill in the blank notes, worksheets and other related topics and resources, visit us at mathforall.net
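The algebraic test described in the video can be sketched for a two-equation system a1·x + b1·y = c1, a2·x + b2·y = c2. This code is an illustration, not taken from the video:

```python
def classify(a1, b1, c1, a2, b2, c2):
    det = a1 * b2 - a2 * b1
    if det != 0:
        # Lines intersect at exactly one point (Cramer's rule).
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        return ("one solution", (x, y))
    # det == 0: the lines have the same slope. Same line if the equations are
    # proportional (infinite solutions); otherwise parallel (no solution).
    if a1 * c2 - a2 * c1 == 0 and b1 * c2 - b2 * c1 == 0:
        return ("infinite solutions", None)
    return ("no solution", None)

print(classify(1, 1, 3, 1, -1, 1))   # intersecting lines: one point
print(classify(1, 1, 3, 2, 2, 10))   # parallel lines: no solution
print(classify(1, 1, 3, 2, 2, 6))    # the same line: infinite solutions
```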
Radiative ablation of melting solids
Radiative ablation occurring in melting solids, when a large temperature difference exists between the solid and the environment from which the solid receives heat, is regarded as a phase change problem. Biot's variational method is used to obtain closed-form solutions for melting distance and surface temperature when ablation occurs in the melting solid as a result of radiative heating. A numerical solution is also obtained using Simpson's rule. It is found that for any value of the dimensionless temperature (beta) of the surroundings, both the surface temperature and the melting distance increase with an increase in time, and that they decrease at any time with an increase in beta.
AIAA Journal
Pub Date: October 1976
Keywords: Ablation; Biot Method; Melting; Radiative Heat Transfer; Aerodynamic Heating; Calculus of Variations; Numerical Integration; Surface Temperature; Fluid Mechanics and Heat Transfer
Python projects
My Python Projects
• Sudoku solver
I created a simple sudoku solver in Python 3, using the Pygame library for the graphical side. I implemented the recursive backtracking algorithm on my own; I tried to write the code as quickly as I could, and it took me about an hour to create the whole solution, from the algorithm design to the finished working product. The algorithm works in three parts: first it finds the next cell to fill, then it generates the possible values, and then it checks those values. If no values fit, it backtracks and changes the previous values.
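The three-part algorithm described above can be sketched as follows. This is a minimal console version without the Pygame interface, and the function names are illustrative, not the project's actual code:

```python
def find_empty(board):
    # Part 1: return coordinates of the next empty cell (0 marks empty), or None.
    for r in range(9):
        for c in range(9):
            if board[r][c] == 0:
                return r, c
    return None

def valid(board, r, c, v):
    # Part 3: check the row, column, and 3x3 box for a conflicting value.
    if v in board[r]:
        return False
    if any(board[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(board[br + i][bc + j] != v for i in range(3) for j in range(3))

def solve(board):
    # Recursive backtracking: fill the next empty cell, try 1-9 (part 2),
    # and undo the assignment when a branch fails.
    cell = find_empty(board)
    if cell is None:
        return True
    r, c = cell
    for v in range(1, 10):
        if valid(board, r, c, v):
            board[r][c] = v
            if solve(board):
                return True
            board[r][c] = 0  # backtrack
    return False
```

Calling `solve` on a 9x9 grid of lists mutates it in place and returns True once a complete valid grid is found.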
Risk Management & Trading | Parcl Docs
Key Risk Management Settings
Skew Scale
Skew scale is used to normalize market skew in risk management calculations. Each market has its own skew scale.
$premium/discount = \frac {skew}{skewScale}$
Skew scale determines how sensitive premiums/discounts are to marginal changes in skew.
Max Funding Velocity
Max funding velocity is the maximum rate at which a market's funding rate can increase per day. Each market has its own max funding velocity.
Max Side Size
Max side size is a market's maximum open interest, measured in base asset units, that can be opened per side of the market. Each market has its own max side size. For example, if max side size is 100 sqft, then the max long open interest available is 100 sqft and the max short open interest available is 100 sqft.
Funding
Funding keeps markets balanced by taxing the majority side of a market and paying that tax to the minority side of the market. Funding flows through the LP pool, so it can be a short-term risk to LPs.
Funding Rate
The funding rate uses a velocity model: the premium/discount, clamped to 1, is multiplied by the market's max funding velocity and the time elapsed in days. This means the effective rate increase for markets that are not maximally skewed is a fraction of max velocity multiplied by time elapsed.
Funding Per Unit
Funding per unit is the collateral-denominated accumulation of the funding rate over time. It is the average funding rate over the elapsed time period multiplied by the index price and the time elapsed in days.
Funding PnL
Funding per unit is used to determine a position's accrued funding. Accrued funding is the position's size in base asset units multiplied by the position's net funding per unit. Net funding per unit is the exit funding per unit less the position's last funding per unit.
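The velocity model and per-unit accounting described above can be put into a small sketch. All function and parameter names here are assumptions for illustration, not the protocol's actual identifiers, and the symmetric clamp to [-1, 1] is an assumption about how "clamped to 1" is applied:

```python
def clamp(x, lo=-1.0, hi=1.0):
    return max(lo, min(hi, x))

def funding_rate(prev_rate, skew, skew_scale, max_velocity, dt_days):
    # Velocity model: the rate drifts at max_velocity scaled by the
    # premium/discount (skew / skewScale), clamped to [-1, 1].
    velocity = clamp(skew / skew_scale) * max_velocity
    return prev_rate + velocity * dt_days

def funding_per_unit(prev_fpu, avg_rate, index_price, dt_days):
    # Accumulate the average funding rate over the interval,
    # denominated in collateral via the index price.
    return prev_fpu + avg_rate * index_price * dt_days

def accrued_funding(size, exit_fpu, last_fpu):
    # Funding PnL: position size times net funding per unit.
    return size * (exit_fpu - last_fpu)
```

A market at half of max skew therefore drifts at half its max funding velocity, and a position's funding PnL is just the difference of two accumulator readings times its size.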
Since funding per unit is an accumulation with respect to time, it is precise to subtract the accumulation at position interval start from the accumulation at position interval end. Each trade on a position is effectively a close and resets the position's last funding per unit and last fill price. Margin System The protocol uses a cross margin system. Margin accounts can have a maximum of 12 positions. Position Margins The protocol uses a dynamic initial margin ratio, a constant minimum initial margin ratio, and a maintenance margin proportion of the computed initial ratio for a new position. The dynamic initial margin ratio scales linearly with position size normalized by skew scale. The total computed initial margin ratio sums the dynamic ratio and the minimum initial ratio. The minimum initial ratio is used to set the lowest possible requirement. The maintenance ratio is the product of the total initial ratio and the market's maintenance proportion. The initial margin requirement and the maintenance margin requirement are the position notional using the index price multiplied by the respective ratio. A fixed minimum position margin requirement is added to both margin requirements. Minimum position margin is configurable by market. The liquidation margin acts as a buffer to ensure accounts can cover the liquidation keeper fee. Notional position values use the current index price. This means accounts can be liquidated due to changes in the index price. Account Margins An account's margins are the sum of the account's positions' margins using each position's market's index price as the exit fill price in PnL calculations. Available margin is the account's total net PnL plus its deposited margin collateral. Total required margin is the account's maintenance requirement plus a liquidation fee margin. The liquidation fee margin is the maximum of the exchange's minimum liquidation fee and the account's total liquidation fee margin. 
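The position margin calculation described above (dynamic ratio plus minimum initial ratio, maintenance as a proportion, and a fixed minimum added to both) can be sketched as follows. Parameter names are illustrative; the real protocol configuration may differ:

```python
def margin_requirements(size, index_price, skew_scale,
                        min_initial_ratio, maintenance_proportion,
                        min_position_margin):
    # Dynamic initial margin ratio scales linearly with position size
    # normalized by skew scale; the total adds the constant minimum ratio.
    notional = abs(size) * index_price
    dynamic_ratio = abs(size) / skew_scale
    initial_ratio = dynamic_ratio + min_initial_ratio
    maintenance_ratio = initial_ratio * maintenance_proportion
    # Both requirements are notional times the ratio, plus a fixed minimum.
    initial = notional * initial_ratio + min_position_margin
    maintenance = notional * maintenance_ratio + min_position_margin
    return initial, maintenance
```

Because the dynamic ratio grows with size, large positions face proportionally higher margin requirements, which discourages outsized exposure relative to the market's skew scale.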
Since accounts are cross margined, positions with positive PnL may offset positions that are below their maintenance requirement. In such a case, the account's requirements can be met even though the account has one or more underwater positions. If an account falls below its total required margin, then it is fully liquidated and all collateral posted as margin to the account is sent to the LP pool. Parcl v3 margin differs from other margin products in that Parcl v3 traders do not borrow an asset on margin, pay interest, and then freely risk the asset in the market. Parcl v3 traders explicitly borrow potential PnL at the expense of the exchange's LPs. In the worst case scenario where there is significant skew in markets, LPs pay positive trader PnL with no offset from other traders in the respective markets. However, even in perfectly balanced markets there are short term liquidity risks since PnL flows through the LP pool. For example, winning traders may realize PnL while the losing traders stay in the market hoping for the market to change. Nonetheless, the LPs paid out the positive PnL and are waiting to receive the negative PnL offset. Short term solvency and the nature of trading against the LPs are the main reasons why liquidations fully close out accounts that are unable to meet requirements. Liquidation fees are earned based on the notional open interest closed per position per account in a liquidate instruction invocation. The liquidation fee rate is set by governance.
MEV Protection
Although a liquidation begins by transferring all margin collateral from the liquidated account to the exchange, the account's positions may be partially closed out over multiple transactions based on criteria with respect to time. This design is meant to protect markets from actors taking advantage of short term or even atomic large swings in a market's premium/discount that could be profitable to order transactions against.
Mechanically, each market has a liquidation epoch length, measured in seconds, in which there is a maximum capacity of open interest in base asset units that can be liquidated. A position liquidation may still occur after capacity per epoch has been reached if the market's current premium/discount is below the max liquidation premium/discount set by governance.
Authorized Liquidator
Each market has an authorized liquidator that is set by governance. This role can optionally bypass epoch capacity and perform full liquidations. Additionally, the authorized liquidator does not collect liquidation fees from the exchange.
Price Impact
Each trade's fill price is adjusted linearly by the trade's impact on the market's premium/discount. This produces a symmetrical high frequency rebalancing opportunity, since any trade that reduces skew will receive a discounted price and any trade that widens skew will receive a price premium. The purpose of the dynamic fill price is to disincentivize volume that increases skew and promote volume that contracts skew by directly adjusting entry prices, similar to how a perps market maker might widen or tighten spreads on a perps CLOB in response to price drift or net exposure to funding.
Price PnL
Price PnL is the change in price multiplied by the position's size in base asset units. Price change is the current (exit) fill price less the last fill price. Each trade on a position is effectively a close and resets the position's last fill price and last funding per unit.
Trading Fees
The trade fee is determined by the trade's notional size multiplied by the calculated trade fee rate. It is a risk management feature in the sense that the trade fee rate is a blend of the market's maker fee rate and the market's taker fee rate. "Maker" is the v3 parlance for traders who decrease skew and "taker" is the v3 parlance for traders who widen skew. Trades that only increase skew use the taker fee rate.
Trades that only decrease skew use the lower maker fee rate. Trades that flip skew are the only trades that receive a blended fee rate between the maker and taker rates weighted by the proportion of their trade that increased and decreased skew.
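The maker/taker blending rule can be sketched as follows. This is an illustration of the weighting idea, not the protocol's exact math; sizes and skew are treated as signed base-asset amounts, and all names are assumptions:

```python
def trade_fee(size, skew, maker_rate, taker_rate, fill_price):
    # size and skew are signed base-asset quantities.
    new_skew = skew + size
    if abs(new_skew) >= abs(skew) and skew * size >= 0:
        rate = taker_rate              # trade only widens skew
    elif abs(new_skew) <= abs(skew):
        rate = maker_rate              # trade only reduces skew
    else:
        # Trade flips skew: blend the rates, weighted by the portion of
        # the trade that reduced skew vs. the portion that widened it.
        reducing = abs(skew)
        widening = abs(new_skew)
        rate = (maker_rate * reducing + taker_rate * widening) / (reducing + widening)
    return abs(size) * fill_price * rate
```

For instance, a trade that takes skew from +10 to -20 closes 10 units of skew at the maker rate and opens 20 at the taker rate, so two thirds of its notional pays the taker rate.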
A semismooth Newton method for SOCCPs based on a one-parametric class of SOC complementarity functions
DOI 10.1007/s10589-008-9166-9
Shaohua Pan · Jein-Shan Chen
Received: 29 March 2007 / Revised: 29 October 2007 / Published online: 7 February 2008
© Springer Science+Business Media, LLC 2008
Abstract In this paper, we present a detailed investigation of the properties of a one-parametric class of SOC complementarity functions, which include the globally Lipschitz continuity, strong semismoothness, and the characterization of their B-subdifferential. Moreover, for the merit functions induced by them for the second-order cone complementarity problem (SOCCP), we provide a condition for each stationary point to be a solution of the SOCCP and establish the boundedness of their level sets, by exploiting Cartesian P-properties. We also propose a semismooth Newton type method based on the reformulation of the nonsmooth system of equations involving the class of SOC complementarity functions. The global and superlinear convergence results are obtained, and among others, the superlinear convergence is established under strict complementarity. Preliminary numerical results are reported for DIMACS second-order cone programs, which confirm the favorable theoretical properties of the method.
Keywords Second-order cone · Complementarity · B-subdifferential · Semismooth · Newton's method
S. Pan's work is partially supported by the Doctoral Starting-up Foundation (B13B6050640) of GuangDong Province. J.-S. Chen is a member of the Mathematics Division, National Center for Theoretical Sciences, Taipei Office. The author's work is partially supported by the National Science Council of Taiwan.
S. Pan
School of Mathematical Sciences, South China University of Technology, Guangzhou 510640, People's Republic of China
e-mail: shhpan@scut.edu.cn
J.-S.
Chen (✉)
Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan
e-mail: jschen@math.ntnu.edu.tw
1 Introduction
We consider the following conic complementarity problem of finding $\zeta \in \mathbb{R}^n$ such that
$$F(\zeta) \in \mathcal{K}, \qquad G(\zeta) \in \mathcal{K}, \qquad \langle F(\zeta), G(\zeta) \rangle = 0, \qquad (1)$$
where $\langle \cdot, \cdot \rangle$ represents the Euclidean inner product, $F$ and $G$ are mappings from $\mathbb{R}^n$ to $\mathbb{R}^n$ which are assumed to be continuously differentiable, and $\mathcal{K}$ is the Cartesian product of second-order cones (SOCs), also called Lorentz cones [10]. In other words,
$$\mathcal{K} = \mathcal{K}^{n_1} \times \mathcal{K}^{n_2} \times \cdots \times \mathcal{K}^{n_m}, \qquad (2)$$
where $m, n_1, \ldots, n_m \ge 1$, $n_1 + n_2 + \cdots + n_m = n$, and
$$\mathcal{K}^{n_i} := \{(x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n_i - 1} \mid x_1 \ge \|x_2\|\},$$
with $\|\cdot\|$ denoting the Euclidean norm and $\mathcal{K}^1$ denoting the set of nonnegative reals $\mathbb{R}_+$. We refer to (1)–(2) as the second-order cone complementarity problem (SOCCP). In the sequel, corresponding to the Cartesian structure of $\mathcal{K}$, we write $x = (x_1, \ldots, x_m)$ with $x_i \in \mathbb{R}^{n_i}$ for any $x \in \mathbb{R}^n$, and $F = (F_1, \ldots, F_m)$ and $G = (G_1, \ldots, G_m)$ with $F_i, G_i : \mathbb{R}^n \to \mathbb{R}^{n_i}$.
An important special case of the SOCCP corresponds to $G(\zeta) = \zeta$ for all $\zeta \in \mathbb{R}^n$. Then (1) reduces to
$$F(\zeta) \in \mathcal{K}, \qquad \zeta \in \mathcal{K}, \qquad \langle F(\zeta), \zeta \rangle = 0, \qquad (3)$$
which is a natural extension of the nonlinear complementarity problem (NCP) where $\mathcal{K} = \mathcal{K}^1 \times \cdots \times \mathcal{K}^1$. Another important special case corresponds to the Karush-Kuhn-Tucker (KKT) conditions of the convex second-order cone program (SOCP):
$$\min\ g(x) \quad \text{s.t.} \quad Ax = b, \ x \in \mathcal{K}, \qquad (4)$$
where $A \in \mathbb{R}^{m \times n}$ has full row rank, $b \in \mathbb{R}^m$ and $g : \mathbb{R}^n \to \mathbb{R}$ is a convex twice continuously differentiable function. From [6], the KKT conditions for (4), which are sufficient but not necessary for optimality, can be written in the form of (1) and (2) with
$$F(\zeta) := d + (I - A^T(AA^T)^{-1}A)\zeta, \qquad G(\zeta) := \nabla g(F(\zeta)) - A^T(AA^T)^{-1}A\zeta, \qquad (5)$$
where $d \in \mathbb{R}^n$ is any vector satisfying $Ad = b$. For large problems with a sparse $A$, (5) has the advantage that the main cost of evaluating the Jacobians $\nabla F$ and $\nabla G$ lies in inverting $AA^T$, which can be done efficiently via sparse Cholesky factorization.
There have been various methods proposed for solving SOCPs and SOCCPs, which include interior-point methods [1–3, 18, 19, 23, 26], non-interior smoothing Newton methods [7, 13], smoothing-regularization methods [15], merit function methods [6] and semismooth Newton methods [16]. Among others, the last four kinds of methods are all based on an SOC complementarity function or a smooth merit function induced by it. Given a mapping $\phi : \mathbb{R}^l \times \mathbb{R}^l \to \mathbb{R}^l$, we call $\phi$ an SOC complementarity function associated with the cone $\mathcal{K}^l$ if for any $(x, y) \in \mathbb{R}^l \times \mathbb{R}^l$,
$$\phi(x, y) = 0 \iff x \in \mathcal{K}^l, \ y \in \mathcal{K}^l, \ \langle x, y \rangle = 0. \qquad (6)$$
Clearly, when $l = 1$, an SOC complementarity function reduces to an NCP function, which plays an important role in the solution of NCPs; see [24] and references therein. A popular choice of $\phi$ is the Fischer-Burmeister (FB) function [11, 12], defined by
$$\phi_{FB}(x, y) := (x^2 + y^2)^{1/2} - (x + y). \qquad (7)$$
More specifically, for any $x = (x_1, x_2)$, $y = (y_1, y_2) \in \mathbb{R} \times \mathbb{R}^{l-1}$, we define their Jordan product associated with $\mathcal{K}^l$ as
$$x \circ y := (\langle x, y \rangle, \ y_1 x_2 + x_1 y_2). \qquad (8)$$
The Jordan product "$\circ$", unlike scalar or matrix multiplication, is not associative, which is the main source of complication in the analysis of SOCCPs. The identity element under this product is $e := (1, 0, \ldots, 0)^T \in \mathbb{R}^l$. We write $x^2$ to mean $x \circ x$ and write $x + y$ to mean the usual componentwise addition of vectors. It is known that $x^2 \in \mathcal{K}^l$ for all $x \in \mathbb{R}^l$. Moreover, if $x \in \mathcal{K}^l$, then there exists a unique vector in $\mathcal{K}^l$, denoted by $x^{1/2}$, such that $(x^{1/2})^2 = x^{1/2} \circ x^{1/2} = x$. Thus, $\phi_{FB}$ in (7) is well-defined for all $(x, y) \in \mathbb{R}^l \times \mathbb{R}^l$ and maps $\mathbb{R}^l \times \mathbb{R}^l$ to $\mathbb{R}^l$. The function $\phi_{FB}$ was proved in [13] to satisfy the equivalence (6), and therefore its squared norm,
$$\psi_{FB}(x, y) := \frac{1}{2}\|\phi_{FB}(x, y)\|^2,$$
is a merit function for the SOCCP. This merit function was shown to be continuously differentiable by Chen and Tseng [6], who proposed a merit function approach based on it.
Another popular choice of $\phi$ is the natural residual function $\phi_{NR} : \mathbb{R}^l \times \mathbb{R}^l \to \mathbb{R}^l$ given by
$$\phi_{NR}(x, y) := x - [x - y]_+,$$
where $[\cdot]_+$ means the minimum Euclidean distance projection onto $\mathcal{K}^l$. The function was studied in [13, 15], where it is involved in smoothing methods for the SOCCP; recently it was used to develop a semismooth Newton method for nonlinear SOCPs by Kanzow and Fukushima [16]. We note that $\phi_{NR}$ induces a natural residual merit function $\psi_{NR}(x, y) := \frac{1}{2}\|\phi_{NR}(x, y)\|^2$, but, compared to $\psi_{FB}$, it has a remarkable drawback, i.e. non-differentiability.
In this paper, we consider a one-parametric class of vector-valued functions
$$\phi_\tau(x, y) := [(x - y)^2 + \tau(x \circ y)]^{1/2} - (x + y) \qquad (9)$$
with $\tau$ being any fixed parameter in $(0, 4)$. This class of functions is a natural extension of the family of NCP functions proposed by Kanzow and Kleinmichel [17], and has been shown in [4] to satisfy the characterization (6). It is not hard to see that for $\tau = 2$, $\phi_\tau$ reduces to the FB function $\phi_{FB}$ in (7), while it becomes a multiple of the natural residual function $\phi_{NR}$ as $\tau \to 0^+$. With this class of SOC complementarity functions, clearly, the SOCCP can be reformulated as a nonsmooth system of equations
$$\Phi_\tau(\zeta) := \begin{pmatrix} \phi_\tau(F_1(\zeta), G_1(\zeta)) \\ \vdots \\ \phi_\tau(F_i(\zeta), G_i(\zeta)) \\ \vdots \\ \phi_\tau(F_m(\zeta), G_m(\zeta)) \end{pmatrix} = 0, \qquad (10)$$
which induces a natural merit function $\Psi_\tau : \mathbb{R}^n \to \mathbb{R}_+$ given by
$$\Psi_\tau(\zeta) = \frac{1}{2}\|\Phi_\tau(\zeta)\|^2 = \sum_{i=1}^m \psi_\tau(F_i(\zeta), G_i(\zeta)), \qquad (11)$$
with $\psi_\tau$ being the natural merit function associated with $\phi_\tau$, i.e.,
$$\psi_\tau(x, y) = \frac{1}{2}\|\phi_\tau(x, y)\|^2. \qquad (12)$$
In [4], we studied the continuous differentiability of $\psi_\tau$ and showed that each stationary point of $\Psi_\tau$ is a solution of the SOCCP if $\nabla F$ and $-\nabla G$ are column monotone. In this paper, we concentrate on the properties of $\phi_\tau$, including the globally Lipschitz continuity, the strong semismoothness, and the characterization of the B-subdifferential.
Particularly, we provide a weaker condition than [4] for each stationary point of $\Psi_\tau$ to be a solution of the SOCCP and establish the boundedness of the level sets of $\Psi_\tau$, by using Cartesian P-properties. We also propose a semismooth Newton method based on the system (10), and obtain the corresponding global and superlinear convergence results. Among others, the superlinear convergence is established under strict complementarity.
Throughout this paper, $I$ represents an identity matrix of suitable dimension, and $\mathbb{R}^{n_1} \times \cdots \times \mathbb{R}^{n_m}$ is identified with $\mathbb{R}^{n_1 + \cdots + n_m}$. For a differentiable mapping $F : \mathbb{R}^n \to \mathbb{R}^m$, $\nabla F(x)$ denotes the transpose of the Jacobian $F'(x)$. For a symmetric matrix $A \in \mathbb{R}^{n \times n}$, we write $A \succeq O$ (respectively, $A \succ O$) to mean $A$ is positive semidefinite (respectively, positive definite). Given a finite number of square matrices $Q_1, \ldots, Q_n$, we denote the block diagonal matrix with these matrices as block diagonals by $\mathrm{diag}(Q_1, \ldots, Q_n)$ or by $\mathrm{diag}(Q_i, i = 1, \ldots, n)$. If $\mathcal{J}$ and $\mathcal{B}$ are index sets such that $\mathcal{J}, \mathcal{B} \subseteq \{1, 2, \ldots, m\}$, we denote by $P_{\mathcal{J}\mathcal{B}}$ the block matrix consisting of the sub-matrices $P_{jk} \in \mathbb{R}^{n_j \times n_k}$ of $P$ with $j \in \mathcal{J}$, $k \in \mathcal{B}$, and by $x_{\mathcal{B}}$ a vector consisting of sub-vectors $x_i \in \mathbb{R}^{n_i}$ with $i \in \mathcal{B}$.
2 Preliminaries
In this section, we recall some background materials and preliminary results that will be used in the subsequent sections. We begin with the interior and the boundary of $\mathcal{K}^l$. It is known that $\mathcal{K}^l$ is a closed convex self-dual cone with nonempty interior given by
$$\mathrm{int}(\mathcal{K}^l) := \{x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{l-1} \mid x_1 > \|x_2\|\}$$
and boundary given by
$$\mathrm{bd}(\mathcal{K}^l) := \{x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{l-1} \mid x_1 = \|x_2\|\}.$$
For each $x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{l-1}$, the determinant and the trace of $x$ are defined by
$$\det(x) := x_1^2 - \|x_2\|^2, \qquad \mathrm{tr}(x) := 2x_1.$$
In general, $\det(x \circ y) \ne \det(x)\det(y)$ unless $x_2 = \alpha y_2$ for some $\alpha \in \mathbb{R}$. A vector $x \in \mathbb{R}^l$ is said to be invertible if $\det(x) \ne 0$, and its inverse is denoted by $x^{-1}$.
Given a vector $x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{l-1}$, we often use the symmetric matrix
$$L_x := \begin{pmatrix} x_1 & x_2^T \\ x_2 & x_1 I \end{pmatrix}, \qquad (13)$$
which can be viewed as a linear mapping from $\mathbb{R}^l$ to $\mathbb{R}^l$. It is easy to verify that $L_x y = x \circ y$ and $L_{x+y} = L_x + L_y$ for any $x, y \in \mathbb{R}^l$. Furthermore, $x \in \mathcal{K}^l$ if and only if $L_x \succeq O$, and $x \in \mathrm{int}(\mathcal{K}^l)$ if and only if $L_x \succ O$. Then $L_x$ is invertible with
$$L_x^{-1} = \frac{1}{\det(x)} \begin{pmatrix} x_1 & -x_2^T \\ -x_2 & \frac{\det(x)}{x_1} I + \frac{1}{x_1} x_2 x_2^T \end{pmatrix}. \qquad (14)$$
We recall from [13] that each $x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{l-1}$ admits a spectral factorization, associated with $\mathcal{K}^l$, of the form
$$x = \lambda_1(x) \cdot u_x^{(1)} + \lambda_2(x) \cdot u_x^{(2)},$$
where $\lambda_i(x)$ and $u_x^{(i)}$ for $i = 1, 2$ are the spectral values and the associated spectral vectors of $x$, respectively, given by
$$\lambda_i(x) = x_1 + (-1)^i \|x_2\|, \qquad u_x^{(i)} = \frac{1}{2}\left(1, (-1)^i \bar{x}_2\right) \qquad (15)$$
with $\bar{x}_2 = x_2/\|x_2\|$ if $x_2 \ne 0$, and otherwise $\bar{x}_2$ being any vector in $\mathbb{R}^{l-1}$ satisfying $\|\bar{x}_2\| = 1$. If $x_2 \ne 0$, the factorization is unique. The spectral decompositions of $x$, $x^2$ and $x^{1/2}$ have some basic properties as below, whose proofs can be found in [13].
Property 2.1 For any $x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{l-1}$ with the spectral values $\lambda_1(x), \lambda_2(x)$ and spectral vectors $u_x^{(1)}, u_x^{(2)}$ given as above, we have:
(a) $x \in \mathcal{K}^l$ if and only if $\lambda_1(x) \ge 0$, and $x \in \mathrm{int}(\mathcal{K}^l)$ if and only if $\lambda_1(x) > 0$.
(b) $x^2 = \lambda_1^2(x) u_x^{(1)} + \lambda_2^2(x) u_x^{(2)} \in \mathcal{K}^l$.
(c) $x^{1/2} = \sqrt{\lambda_1(x)}\, u_x^{(1)} + \sqrt{\lambda_2(x)}\, u_x^{(2)} \in \mathcal{K}^l$ if $x \in \mathcal{K}^l$.
(d) $\det(x) = \lambda_1(x)\lambda_2(x)$, $\mathrm{tr}(x) = \lambda_1(x) + \lambda_2(x)$ and $\|x\|^2 = [\lambda_1^2(x) + \lambda_2^2(x)]/2$.
For the sake of notation, throughout the rest of this paper, we always let
$$w = (w_1, w_2) = w(x, y) := (x - y)^2 + \tau(x \circ y), \qquad z = (z_1, z_2) = z(x, y) := [(x - y)^2 + \tau(x \circ y)]^{1/2} \qquad (16)$$
for any $x = (x_1, x_2)$, $y = (y_1, y_2) \in \mathbb{R} \times \mathbb{R}^{l-1}$. It is easy to compute
$$w_1 = \|x\|^2 + \|y\|^2 + (\tau - 2)x^T y, \qquad w_2 = 2(x_1 x_2 + y_1 y_2) + (\tau - 2)(x_1 y_2 + y_1 x_2).$$
Moreover, $w \in \mathcal{K}^l$ and $z \in \mathcal{K}^l$ hold by considering that
$$w = x^2 + y^2 + (\tau - 2)(x \circ y) = \left(x + \frac{\tau - 2}{2} y\right)^2 + \frac{\tau(4 - \tau)}{4}\, y^2 = \left(y + \frac{\tau - 2}{2} x\right)^2 + \frac{\tau(4 - \tau)}{4}\, x^2. \qquad (17)$$
In what follows, we present several important technical lemmas.
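The spectral factorization and Property 2.1 are easy to verify numerically. The following sketch (not part of the paper; names are illustrative) checks that the factorization reconstructs $x$ and that $(x^{1/2}) \circ (x^{1/2}) = x$ under the Jordan product:

```python
import math

def spectral(x):
    # Spectral factorization of x = (x1, x2) w.r.t. the second-order cone:
    # x = lambda1*u1 + lambda2*u2 with lambda_i = x1 + (-1)^i * ||x2||.
    x1, x2 = x[0], x[1:]
    n2 = math.sqrt(sum(v * v for v in x2))
    xbar = [v / n2 for v in x2] if n2 > 0 else [1.0] + [0.0] * (len(x2) - 1)
    lams = (x1 - n2, x1 + n2)
    u1 = [0.5] + [-0.5 * v for v in xbar]
    u2 = [0.5] + [0.5 * v for v in xbar]
    return lams, (u1, u2)

def jordan(x, y):
    # Jordan product: x o y = (<x, y>, y1*x2 + x1*y2).
    inner = sum(a * b for a, b in zip(x, y))
    return [inner] + [y[0] * a + x[0] * b for a, b in zip(x[1:], y[1:])]

def soc_sqrt(x):
    # x^{1/2} for x in the cone: square roots of the spectral values
    # (Property 2.1(c)).
    (l1, l2), (u1, u2) = spectral(x)
    return [math.sqrt(l1) * a + math.sqrt(l2) * b for a, b in zip(u1, u2)]
```

For $x = (3, 1, 2)$ we have $\lambda_1 = 3 - \sqrt{5} > 0$, so $x \in \mathrm{int}(\mathcal{K}^3)$ and its square root exists.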
Since their proofs can be found in [4], we omit them here for simplicity.
Lemma 2.1 [4, Lemma 3.4] For any $x = (x_1, x_2)$, $y = (y_1, y_2) \in \mathbb{R} \times \mathbb{R}^{l-1}$ and $\tau \in (0, 4)$, let $w = (w_1, w_2)$ be defined as in (16). If $\|w_2\| \ne 0$, then for $i = 1, 2$,
$$\left| x_1 + \frac{\tau - 2}{2} y_1 + (-1)^i \left(x_2 + \frac{\tau - 2}{2} y_2\right)^T \frac{w_2}{\|w_2\|} \right| \le \sqrt{\lambda_i(w)}, \qquad \left\| x_2 + \frac{\tau - 2}{2} y_2 + (-1)^i \left(x_1 + \frac{\tau - 2}{2} y_1\right) \frac{w_2}{\|w_2\|} \right\| \le \sqrt{\lambda_i(w)}.$$
Furthermore, these relations also hold when interchanging $x$ and $y$.
Lemma 2.2 [4, Lemma 3.2] For any $x = (x_1, x_2)$, $y = (y_1, y_2) \in \mathbb{R} \times \mathbb{R}^{l-1}$ and $\tau \in (0, 4)$, let $w = (w_1, w_2)$ be given as in (16). If $w \notin \mathrm{int}(\mathcal{K}^l)$, then
$$x_1^2 = \|x_2\|^2, \quad y_1^2 = \|y_2\|^2, \quad x_1 y_1 = x_2^T y_2, \quad x_1 y_2 = y_1 x_2; \qquad (18)$$
$$x_1^2 + y_1^2 + (\tau - 2)x_1 y_1 = \|x_1 x_2 + y_1 y_2 + (\tau - 2)x_1 y_2\| = \|x_2\|^2 + \|y_2\|^2 + (\tau - 2)x_2^T y_2. \qquad (19)$$
If, in addition, $(x, y) \ne (0, 0)$, then $\|w_2\| \ne 0$, and moreover,
$$\frac{x_2^T w_2}{\|w_2\|} = x_1, \quad x_1 \frac{w_2}{\|w_2\|} = x_2, \quad \frac{y_2^T w_2}{\|w_2\|} = y_1, \quad y_1 \frac{w_2}{\|w_2\|} = y_2. \qquad (20)$$
Lemma 2.3 [4, Proposition 3.2] For any $x = (x_1, x_2)$, $y = (y_1, y_2) \in \mathbb{R} \times \mathbb{R}^{l-1}$, let $z(x, y)$ be defined by (16). Then $z(x, y)$ is continuously differentiable at a point $(x, y)$ if and only if $(x - y)^2 + \tau(x \circ y) \in \mathrm{int}(\mathcal{K}^l)$, and furthermore,
$$\nabla_x z(x, y) = L_{x + \frac{\tau - 2}{2} y} L_z^{-1}, \qquad \nabla_y z(x, y) = L_{y + \frac{\tau - 2}{2} x} L_z^{-1},$$
where
$$L_z^{-1} = \begin{pmatrix} b & c\, \bar{w}_2^T \\ c\, \bar{w}_2 & aI + (b - a)\bar{w}_2 \bar{w}_2^T \end{pmatrix} \ \text{if } w_2 \ne 0; \qquad L_z^{-1} = \frac{1}{\sqrt{w_1}}\, I \ \text{if } w_2 = 0,$$
with $\bar{w}_2 = w_2/\|w_2\|$ and
$$a = \frac{2}{\sqrt{\lambda_2(w)} + \sqrt{\lambda_1(w)}}, \qquad b = \frac{1}{2}\left(\frac{1}{\sqrt{\lambda_2(w)}} + \frac{1}{\sqrt{\lambda_1(w)}}\right), \qquad c = \frac{1}{2}\left(\frac{1}{\sqrt{\lambda_2(w)}} - \frac{1}{\sqrt{\lambda_1(w)}}\right).$$
To close this section, we recall some definitions that will be used in the subsequent sections. Given a mapping $H : \mathbb{R}^n \to \mathbb{R}^m$, if $H$ is locally Lipschitz continuous, the set
$$\partial_B H(z) := \{V \in \mathbb{R}^{m \times n} \mid \exists \{z^k\} \subseteq D_H : z^k \to z, \ H'(z^k) \to V\}$$
is nonempty and is called the B-subdifferential of $H$ at $z$, where $D_H \subseteq \mathbb{R}^n$ denotes the set of points at which $H$ is differentiable. The convex hull $\partial H(z) := \mathrm{conv}\, \partial_B H(z)$ is the generalized Jacobian of $H$ at $z$ in the sense of Clarke [8]. For the concepts of (strongly) semismooth functions, please refer to [21, 22] for details. We next present definitions of Cartesian P-properties for a matrix $M \in \mathbb{R}^{n \times n}$, which are in fact special cases of those introduced by Chen and Qi [5] for a linear transformation.
Definition 2.1 A matrix $M \in \mathbb{R}^{n \times n}$ is said to have
(a) the Cartesian P-property if for any $0 \ne x = (x_1, \ldots, x_m) \in \mathbb{R}^n$ with $x_i \in \mathbb{R}^{n_i}$, there exists an index $\nu \in \{1, 2, \ldots, m\}$ such that $\langle x_\nu, (Mx)_\nu \rangle > 0$;
(b) the Cartesian P$_0$-property if for any $0 \ne x = (x_1, \ldots, x_m) \in \mathbb{R}^n$ with $x_i \in \mathbb{R}^{n_i}$, there exists an index $\nu \in \{1, 2, \ldots, m\}$ such that $x_\nu \ne 0$ and $\langle x_\nu, (Mx)_\nu \rangle \ge 0$.
Some nonlinear generalizations of these concepts in the setting of $\mathcal{K}$ are defined as follows.
Definition 2.2 Given a mapping $F = (F_1, \ldots, F_m)$ with $F_i : \mathbb{R}^n \to \mathbb{R}^{n_i}$, $F$ is said to
(a) have the uniform Cartesian P-property if for any $x = (x_1, \ldots, x_m)$, $y = (y_1, \ldots, y_m) \in \mathbb{R}^n$, there exist an index $\nu \in \{1, 2, \ldots, m\}$ and a positive constant $\rho > 0$ such that $\langle x_\nu - y_\nu, F_\nu(x) - F_\nu(y) \rangle \ge \rho \|x - y\|^2$;
(b) have the Cartesian P$_0$-property if for any $x = (x_1, \ldots, x_m)$, $y = (y_1, \ldots, y_m) \in \mathbb{R}^n$, there exists an index $\nu \in \{1, 2, \ldots, m\}$ such that $x_\nu \ne y_\nu$ and $\langle x_\nu - y_\nu, F_\nu(x) - F_\nu(y) \rangle \ge 0$.
If a continuously differentiable mapping $F$ has the Cartesian P-properties, then the matrix $\nabla F(x)$ at any $x \in \mathbb{R}^n$ enjoys the corresponding Cartesian P-properties.
3 Properties of the functions $\phi_\tau$ and $\Phi_\tau$
This section is devoted to investigating the favorable properties of $\phi_\tau$, which include the globally Lipschitz continuity, the strong semismoothness and the characterization of the B-subdifferential at any point. Based on these results, we also present some properties of the operator $\Phi_\tau$ related to the generalized Newton method. From the definitions of $\phi_\tau$ and $z(x, y)$ given in (9) and (16), respectively, we have
$$\phi_\tau(x, y) = z(x, y) - (x + y) = z - (x + y) \qquad (23)$$
for any $x = (x_1, x_2)$, $y = (y_1, y_2) \in \mathbb{R} \times \mathbb{R}^{l-1}$. Recall that the vectors $w = (w_1, w_2)$ and $z = (z_1, z_2)$ in (16) satisfy $w, z \in \mathcal{K}^l$, and hence, from Property 2.1(b) and (c),
$$z = \left( \frac{\sqrt{\lambda_2(w)} + \sqrt{\lambda_1(w)}}{2}, \ \frac{\sqrt{\lambda_2(w)} - \sqrt{\lambda_1(w)}}{2}\, \bar{w}_2 \right), \qquad (24)$$
where $\bar{w}_2 = w_2/\|w_2\|$ if $w_2 \ne 0$, and otherwise $\bar{w}_2$ is any vector in $\mathbb{R}^{l-1}$ satisfying $\|\bar{w}_2\| = 1$.
The following proposition states some favorable properties possessed by $\phi_\tau$.
Proposition 3.1 The function $\phi_\tau$ defined as in (9) has the following properties.
(a) $\phi_\tau$ is continuously differentiable at a point $(x, y) \in \mathbb{R}^l \times \mathbb{R}^l$ if and only if $(x - y)^2 + \tau(x \circ y) \in \mathrm{int}(\mathcal{K}^l)$. Moreover,
$$\nabla_x \phi_\tau(x, y) = L_{x + \frac{\tau - 2}{2} y} L_z^{-1} - I, \qquad \nabla_y \phi_\tau(x, y) = L_{y + \frac{\tau - 2}{2} x} L_z^{-1} - I.$$
(b) $\phi_\tau$ is globally Lipschitz continuous with the Lipschitz constant independent of $\tau$.
(c) $\phi_\tau$ is strongly semismooth at any $(x, y) \in \mathbb{R}^l \times \mathbb{R}^l$.
(d) $\psi_\tau$ defined by (12) is continuously differentiable everywhere.
Proof (a) The proof follows directly from Lemma 2.3 and (23).
(b) By (23), it suffices to prove that $z(x, y)$ is globally Lipschitz continuous. Let
$$\hat{z} = (\hat{z}_1, \hat{z}_2) = \hat{z}(x, y, \epsilon) := [(x - y)^2 + \tau(x \circ y) + \epsilon e]^{1/2} \qquad (25)$$
for any $\epsilon > 0$ and $x = (x_1, x_2)$, $y = (y_1, y_2) \in \mathbb{R} \times \mathbb{R}^{l-1}$. Then, applying Lemma A.1 in the Appendix and the Mean-Value Theorem, we have
$$\|z(x, y) - z(a, b)\| = \left\| \lim_{\epsilon \to 0^+} \hat{z}(x, y, \epsilon) - \lim_{\epsilon \to 0^+} \hat{z}(a, b, \epsilon) \right\| \le \lim_{\epsilon \to 0^+} \left( \|\hat{z}(x, y, \epsilon) - \hat{z}(a, y, \epsilon)\| + \|\hat{z}(a, y, \epsilon) - \hat{z}(a, b, \epsilon)\| \right) \le \lim_{\epsilon \to 0^+} \int_0^1 \|\nabla_x \hat{z}(a + t(x - a), y, \epsilon)(x - a)\|\, dt + \lim_{\epsilon \to 0^+} \int_0^1 \|\nabla_y \hat{z}(a, b + t(y - b), \epsilon)(y - b)\|\, dt \le 2C\, \|(x, y) - (a, b)\|$$
for any $(x, y), (a, b) \in \mathbb{R}^l \times \mathbb{R}^l$, where $C > 0$ is a constant independent of $\tau$.
(c) From the definitions of $\phi_\tau$ and $\phi_{FB}$, it is not hard to check that
$$\phi_\tau(x, y) = \phi_{FB}\left( x + \frac{\tau - 2}{2} y, \ \frac{\sqrt{\tau(4 - \tau)}}{2}\, y \right) + \frac{1}{2}\left(\tau - 4 + \sqrt{\tau(4 - \tau)}\right) y.$$
Note that $\phi_{FB}$ is strongly semismooth everywhere by Corollary 3.3 of [25], and the functions $x + \frac{\tau - 2}{2} y$, $\frac{1}{2}\sqrt{\tau(4 - \tau)}\, y$ and $\frac{1}{2}(\tau - 4 + \sqrt{\tau(4 - \tau)})\, y$ are also strongly semismooth at any $(x, y) \in \mathbb{R}^l \times \mathbb{R}^l$. Therefore, $\phi_\tau$ is a strongly semismooth function, since by [12, Theorem 19] the composition of strongly semismooth functions is strongly semismooth.
(d) The proof can be found in Proposition 3.3 of [4].
Proposition 3.1(c) indicates that, when a smoothing or nonsmooth Newton method is employed to solve the system (10), a fast convergence rate (at least superlinear) can be expected.
To develop a semismooth Newton method for the SOCCP, we need to characterize the B-subdifferential $\partial_B \phi_\tau(x, y)$ at a general point $(x, y)$. The discussion of the B-subdifferential for $\phi_{FB}$ was given in [20]. Here, we generalize it to $\phi_\tau$ for any $\tau \in (0, 4)$. The detailed derivation is included in the Appendix for completeness.
Proposition 3.2 Given a general point $x = (x_1, x_2)$, $y = (y_1, y_2) \in \mathbb{R} \times \mathbb{R}^{l-1}$, each element in $\partial_B \phi_\tau(x, y)$ is of the form $V = [V_x - I \ \ V_y - I]$ with $V_x$ and $V_y$ having the following representation:
(a) If $(x - y)^2 + \tau(x \circ y) \in \mathrm{int}(\mathcal{K}^l)$, then $V_x = L_z^{-1} L_{x + \frac{\tau - 2}{2} y}$ and $V_y = L_z^{-1} L_{y + \frac{\tau - 2}{2} x}$.
(b) If $(x - y)^2 + \tau(x \circ y) \in \mathrm{bd}(\mathcal{K}^l)$ and $(x, y) \ne (0, 0)$, then
$$V_x \in \left\{ \frac{1}{2\sqrt{2w_1}} \begin{pmatrix} 1 & \bar{w}_2^T \\ \bar{w}_2 & 4I - 3\bar{w}_2 \bar{w}_2^T \end{pmatrix} \left( L_x + \frac{\tau - 2}{2} L_y \right) + \frac{1}{2}\, u\, (1, \ -\bar{w}_2^T) \right\}, \qquad (26)$$
$$V_y \in \left\{ \frac{1}{2\sqrt{2w_1}} \begin{pmatrix} 1 & \bar{w}_2^T \\ \bar{w}_2 & 4I - 3\bar{w}_2 \bar{w}_2^T \end{pmatrix} \left( L_y + \frac{\tau - 2}{2} L_x \right) + \frac{1}{2}\, v\, (1, \ -\bar{w}_2^T) \right\}$$
for some $u = (u_1, u_2)$, $v = (v_1, v_2) \in \mathbb{R} \times \mathbb{R}^{l-1}$ satisfying $|u_1| \le \|u_2\| \le 1$ and $|v_1| \le \|v_2\| \le 1$, where $\bar{w}_2 = w_2/\|w_2\|$.
(c) If $(x, y) = (0, 0)$, then $V_x \in \{L_{\hat{u}}\}$ and $V_y \in \{L_{\hat{v}}\}$ for some $\hat{u} = (\hat{u}_1, \hat{u}_2)$, $\hat{v} = (\hat{v}_1, \hat{v}_2) \in \mathbb{R} \times \mathbb{R}^{l-1}$ satisfying $\|\hat{u}\|, \|\hat{v}\| \le 1$ and $\hat{u}_1 \hat{v}_2 + \hat{v}_1 \hat{u}_2 = 0$, or
$$V_x \in \left\{ \frac{1}{2}\, \xi\, (1, \ \bar{w}_2^T) + \frac{1}{2}\, u\, (1, \ -\bar{w}_2^T) + \begin{pmatrix} 0 & s_2^T (I - \bar{w}_2 \bar{w}_2^T) \\ 0 & s_1 (I - \bar{w}_2 \bar{w}_2^T) \end{pmatrix} \right\},$$
$$V_y \in \left\{ \frac{1}{2}\, \eta\, (1, \ \bar{w}_2^T) + \frac{1}{2}\, v\, (1, \ -\bar{w}_2^T) + \begin{pmatrix} 0 & \omega_2^T (I - \bar{w}_2 \bar{w}_2^T) \\ 0 & \omega_1 (I - \bar{w}_2 \bar{w}_2^T) \end{pmatrix} \right\}$$
for some $\bar{w}_2$ with $\|\bar{w}_2\| = 1$, $u = (u_1, u_2)$, $v = (v_1, v_2)$, $\xi = (\xi_1, \xi_2)$, $\eta = (\eta_1, \eta_2) \in \mathbb{R} \times \mathbb{R}^{l-1}$ satisfying $|u_1| \le \|u_2\| \le 1$, $|v_1| \le \|v_2\| \le 1$, $|\xi_1| \le \|\xi_2\| \le 1$ and $|\eta_1| \le \|\eta_2\| \le 1$, and $s = (s_1, s_2)$, $\omega = (\omega_1, \omega_2) \in \mathbb{R} \times \mathbb{R}^{l-1}$ such that $\|s\|^2 + \|\omega\|^2 \le 1$.
In what follows, we focus on the properties of the operator $\Phi_\tau$ defined in (10). We start with the semismoothness of $\Phi_\tau$. Since $\Phi_\tau$ is (strongly) semismooth if and only if all component functions are (strongly) semismooth, and since the composite of (strongly) semismooth functions is (strongly) semismooth by [12, Theorem 19], we obtain the following conclusion as an immediate consequence of Proposition 3.1(c).
Proposition 3.3 The operator $\Phi_\tau : \mathbb{R}^n \to \mathbb{R}^n$ defined as in (10) is semismooth. Moreover, it is strongly semismooth if $F'$ and $G'$ are locally Lipschitz continuous.
To characterize the B-subdifferential of $\Phi_\tau$, in the rest of this paper we let
$$F_i(\zeta) = (F_{i1}(\zeta), F_{i2}(\zeta)), \qquad G_i(\zeta) = (G_{i1}(\zeta), G_{i2}(\zeta)) \in \mathbb{R} \times \mathbb{R}^{n_i - 1},$$
and let $w_i : \mathbb{R}^n \to \mathbb{R}^{n_i}$ and $z_i : \mathbb{R}^n \to \mathbb{R}^{n_i}$ for $i = 1, 2, \ldots, m$ be given by
$$w_i = (w_{i1}(\zeta), w_{i2}(\zeta)) = w(F_i(\zeta), G_i(\zeta)), \qquad z_i = (z_{i1}(\zeta), z_{i2}(\zeta)) = z(F_i(\zeta), G_i(\zeta)). \qquad (27)$$
Proposition 3.4 Let $\Phi_\tau : \mathbb{R}^n \to \mathbb{R}^n$ be defined as in (10). Then, for any $\zeta \in \mathbb{R}^n$,
$$\partial_B \Phi_\tau(\zeta)^T \subseteq \nabla F(\zeta)(\mathcal{A}(\zeta) - I) + \nabla G(\zeta)(\mathcal{B}(\zeta) - I), \qquad (28)$$
where $\mathcal{A}(\zeta)$ and $\mathcal{B}(\zeta)$ are possibly multivalued $n \times n$ block diagonal matrices whose $i$th blocks $\mathcal{A}_i(\zeta)$ and $\mathcal{B}_i(\zeta)$ for $i = 1, 2, \ldots, m$ have the following representation:
(a) If $(F_i(\zeta) - G_i(\zeta))^2 + \tau(F_i(\zeta) \circ G_i(\zeta)) \in \mathrm{int}(\mathcal{K}^{n_i})$, then
$$\mathcal{A}_i(\zeta) = L_{F_i + \frac{\tau - 2}{2} G_i} L_{z_i}^{-1} \qquad \text{and} \qquad \mathcal{B}_i(\zeta) = L_{G_i + \frac{\tau - 2}{2} F_i} L_{z_i}^{-1}.$$
(b) If $(F_i(\zeta) - G_i(\zeta))^2 + \tau(F_i(\zeta) \circ G_i(\zeta)) \in \mathrm{bd}(\mathcal{K}^{n_i})$ and $(F_i(\zeta), G_i(\zeta)) \ne (0, 0)$, then
$$\mathcal{A}_i(\zeta) \in \left\{ \left( L_{F_i} + \frac{\tau - 2}{2} L_{G_i} \right) \frac{1}{2\sqrt{2w_{i1}}} \begin{pmatrix} 1 & \bar{w}_{i2}^T \\ \bar{w}_{i2} & 4I - 3\bar{w}_{i2} \bar{w}_{i2}^T \end{pmatrix} + \frac{1}{2}\, u_i\, (1, \ -\bar{w}_{i2}^T) \right\},$$
$$\mathcal{B}_i(\zeta) \in \left\{ \left( L_{G_i} + \frac{\tau - 2}{2} L_{F_i} \right) \frac{1}{2\sqrt{2w_{i1}}} \begin{pmatrix} 1 & \bar{w}_{i2}^T \\ \bar{w}_{i2} & 4I - 3\bar{w}_{i2} \bar{w}_{i2}^T \end{pmatrix} + \frac{1}{2}\, v_i\, (1, \ -\bar{w}_{i2}^T) \right\}$$
for some $u_i = (u_{i1}, u_{i2})$, $v_i = (v_{i1}, v_{i2}) \in \mathbb{R} \times \mathbb{R}^{n_i - 1}$ satisfying $|u_{i1}| \le \|u_{i2}\| \le 1$ and $|v_{i1}| \le \|v_{i2}\| \le 1$, where $\bar{w}_{i2} = w_{i2}/\|w_{i2}\|$.
(c) If $(F_i(\zeta), G_i(\zeta)) = (0, 0)$, then
$$\mathcal{A}_i(\zeta) \in \{L_{\hat{u}_i}\} \cup \left\{ \frac{1}{2}\, \xi_i\, (1, \ \bar{w}_{i2}^T) + \frac{1}{2}\, u_i\, (1, \ -\bar{w}_{i2}^T) + \begin{pmatrix} 0 & s_{i2}^T (I - \bar{w}_{i2} \bar{w}_{i2}^T) \\ 0 & s_{i1} (I - \bar{w}_{i2} \bar{w}_{i2}^T) \end{pmatrix} \right\},$$
$$\mathcal{B}_i(\zeta) \in \{L_{\hat{v}_i}\} \cup \left\{ \frac{1}{2}\, \eta_i\, (1, \ \bar{w}_{i2}^T) + \frac{1}{2}\, v_i\, (1, \ -\bar{w}_{i2}^T) + \begin{pmatrix} 0 & \omega_{i2}^T (I - \bar{w}_{i2} \bar{w}_{i2}^T) \\ 0 & \omega_{i1} (I - \bar{w}_{i2} \bar{w}_{i2}^T) \end{pmatrix} \right\}$$
for some $\hat{u}_i = (\hat{u}_{i1}, \hat{u}_{i2})$, $\hat{v}_i = (\hat{v}_{i1}, \hat{v}_{i2}) \in \mathbb{R} \times \mathbb{R}^{n_i - 1}$ satisfying $\|\hat{u}_i\|, \|\hat{v}_i\| \le 1$ and $\hat{u}_{i1}\hat{v}_{i2} + \hat{v}_{i1}\hat{u}_{i2} = 0$, some $u_i = (u_{i1}, u_{i2})$, $v_i = (v_{i1}, v_{i2})$, $\xi_i = (\xi_{i1}, \xi_{i2})$, $\eta_i = (\eta_{i1}, \eta_{i2}) \in \mathbb{R} \times \mathbb{R}^{n_i - 1}$ with $|u_{i1}| \le \|u_{i2}\| \le 1$, $|v_{i1}| \le \|v_{i2}\| \le 1$, $|\xi_{i1}| \le \|\xi_{i2}\| \le 1$ and $|\eta_{i1}| \le \|\eta_{i2}\| \le 1$, some $\bar{w}_{i2} \in \mathbb{R}^{n_i - 1}$ satisfying $\|\bar{w}_{i2}\| = 1$, and $s_i = (s_{i1}, s_{i2})$, $\omega_i = (\omega_{i1}, \omega_{i2}) \in \mathbb{R} \times \mathbb{R}^{n_i - 1}$ such that $\|s_i\|^2 + \|\omega_i\|^2 \le 1$.
Proof Let $\Phi_{\tau,i}(\zeta)$ denote the $i$th subvector of $\Phi_\tau$, i.e. $\Phi_{\tau,i}(\zeta) = \phi_\tau(F_i(\zeta), G_i(\zeta))$ for all $i = 1, 2, \ldots, m$.
From Proposition 2.6.2 of [8], it follows that
$$ \partial_B \Phi_\tau(\zeta)^T \subseteq \partial_B \Phi_{\tau,1}(\zeta)^T \times \partial_B \Phi_{\tau,2}(\zeta)^T \times \cdots \times \partial_B \Phi_{\tau,m}(\zeta)^T, \tag{29} $$
where the right-hand side denotes the set of all matrices whose $(n_{i-1}+1)$th to $n_i$th columns, with $n_0 = 0$, belong to $\partial_B \Phi_{\tau,i}(\zeta)^T$. Using the definition of the B-subdifferential and the continuous differentiability of $F$ and $G$, it is not difficult to verify that
$$ \partial_B \Phi_{\tau,i}(\zeta)^T = [\nabla F_i(\zeta)\ \ \nabla G_i(\zeta)]\, \partial_B \phi_\tau(F_i(\zeta), G_i(\zeta))^T, \qquad i = 1, 2, \dots, m. \tag{30} $$
Using Proposition 3.2 and the last two equations, we get the desired result.

Proposition 3.5 For any $\zeta \in \mathbb{R}^n$, let $A(\zeta)$ and $B(\zeta)$ be the multivalued block-diagonal matrices given as in Proposition 3.4. Then, for any $i \in \{1, 2, \dots, m\}$,
$$ \langle (A_i(\zeta) - I)\Phi_{\tau,i}(\zeta),\ (B_i(\zeta) - I)\Phi_{\tau,i}(\zeta) \rangle \ge 0, $$
with equality holding if and only if $\Phi_{\tau,i}(\zeta) = 0$. In particular, for any index $i$ such that $(F_i(\zeta) - G_i(\zeta))^2 + \tau(F_i(\zeta) \circ G_i(\zeta)) \in \mathrm{int}(K^{n_i})$, we have
$$ \langle (A_i(\zeta) - I)\upsilon_i,\ (B_i(\zeta) - I)\upsilon_i \rangle \ge 0 \qquad \text{for any } \upsilon_i \in \mathbb{R}^{n_i}. $$

Proof From Theorem 2.6.6 of [8] and Proposition 3.1(d), we have that $\nabla \psi_\tau(x, y) = \partial_B \phi_\tau(x, y)^T \phi_\tau(x, y)$. Consequently, for any $i = 1, 2, \dots, m$,
$$ \nabla \psi_\tau(F_i(\zeta), G_i(\zeta)) = \partial_B \phi_\tau(F_i(\zeta), G_i(\zeta))^T \phi_\tau(F_i(\zeta), G_i(\zeta)). $$
In addition, from Propositions 3.2 and 3.4, it is not hard to see that
$$ [A_i(\zeta)^T - I\ \ B_i(\zeta)^T - I] \in \partial_B \phi_\tau(F_i(\zeta), G_i(\zeta)). $$
Combining the last two equations yields that for any $i = 1, 2, \dots, m$,
$$ \nabla_x \psi_\tau(F_i(\zeta), G_i(\zeta)) = (A_i(\zeta) - I)\Phi_{\tau,i}(\zeta), \qquad \nabla_y \psi_\tau(F_i(\zeta), G_i(\zeta)) = (B_i(\zeta) - I)\Phi_{\tau,i}(\zeta). $$
Consequently, the first part of the conclusion is a direct consequence of Proposition 4.1 of [4]. Notice that for any $i \in O(\zeta)$ and $\upsilon_i \in \mathbb{R}^{n_i}$,
$$ \begin{aligned} \langle (A_i(\zeta) - I)\upsilon_i,\ (B_i(\zeta) - I)\upsilon_i \rangle &= \left\langle \left(L_{F_i + \frac{\tau-2}{2} G_i} - L_{z_i}\right) L_{z_i}^{-1} \upsilon_i,\ \left(L_{G_i + \frac{\tau-2}{2} F_i} - L_{z_i}\right) L_{z_i}^{-1} \upsilon_i \right\rangle \\ &= \left\langle \left(L_{G_i + \frac{\tau-2}{2} F_i} - L_{z_i}\right)\left(L_{F_i + \frac{\tau-2}{2} G_i} - L_{z_i}\right) L_{z_i}^{-1} \upsilon_i,\ L_{z_i}^{-1} \upsilon_i \right\rangle. \end{aligned} $$
(32) Using the same argument as Case (2) of [4, Proposition 4.1] then yields the second part.

4 Nonsingularity conditions

In this section, we show that all elements of the B-subdifferential $\partial_B \Phi_\tau(\zeta)$ at a solution $\zeta^*$ of the SOCCP are nonsingular if $\zeta^*$ satisfies strict complementarity, i.e.,
$$ F_i(\zeta^*) + G_i(\zeta^*) \in \mathrm{int}(K^{n_i}) \qquad \text{for all } i = 1, 2, \dots, m. \tag{33} $$
First, we give a technical lemma which states that the multivalued matrix $(A_i(\zeta^*) - I) + (B_i(\zeta^*) - I)$ is nonsingular if the $i$th block component satisfies strict complementarity.

Lemma 4.1 Let $\zeta^*$ be a solution of the SOCCP, and let $A(\zeta^*)$ and $B(\zeta^*)$ be the multivalued block-diagonal matrices characterized by Proposition 3.4. Then, for any $i \in \{1, 2, \dots, m\}$ such that $F_i(\zeta^*) + G_i(\zeta^*) \in \mathrm{int}(K^{n_i})$, we have that $\Phi_{\tau,i}(\zeta)$ is continuously differentiable at $\zeta^*$ and $(A_i(\zeta^*) - I) + (B_i(\zeta^*) - I)$ is nonsingular.

Proof Since $\zeta^*$ is a solution of the SOCCP, we have for all $i = 1, 2, \dots, m$
$$ F_i(\zeta^*) \in K^{n_i}, \qquad G_i(\zeta^*) \in K^{n_i}, \qquad \langle F_i(\zeta^*), G_i(\zeta^*) \rangle = 0. $$
It is not hard to verify that $F_i(\zeta^*) + G_i(\zeta^*) \in \mathrm{int}(K^{n_i})$ if and only if one of the three cases below holds.

Case (1): $F_i(\zeta^*) \in \mathrm{int}(K^{n_i})$ and $G_i(\zeta^*) = 0$. In this case,
$$ w_i(\zeta^*) = (F_i(\zeta^*) - G_i(\zeta^*))^2 + \tau(F_i(\zeta^*) \circ G_i(\zeta^*)) = F_i(\zeta^*)^2 \in \mathrm{int}(K^{n_i}). $$
By Proposition 3.1(a), $\Phi_{\tau,i}(\zeta)$ is continuously differentiable at $\zeta^*$. Since $z_i(\zeta^*) = w_i(\zeta^*)^{1/2} = F_i(\zeta^*)$, from Proposition 3.4(a) it follows that $A_i(\zeta^*) = I$ and $B_i(\zeta^*) = \frac{\tau - 2}{2} I$, so $(A_i(\zeta^*) - I) + (B_i(\zeta^*) - I) = \frac{\tau - 4}{2} I$ is nonsingular since $0 < \tau < 4$.

Case (2): $F_i(\zeta^*) = 0$ and $G_i(\zeta^*) \in \mathrm{int}(K^{n_i})$. Now $w_i(\zeta^*) = G_i(\zeta^*)^2 \in \mathrm{int}(K^{n_i})$, so $\Phi_{\tau,i}(\zeta)$ is continuously differentiable at $\zeta^*$ by Proposition 3.1(a). Since $z_i(\zeta^*) = w_i(\zeta^*)^{1/2} = G_i(\zeta^*)$, applying Proposition 3.4(a) yields $A_i(\zeta^*) = \frac{\tau - 2}{2} I$ and $B_i(\zeta^*) = I$, which again implies that $(A_i(\zeta^*) - I) + (B_i(\zeta^*) - I) = \frac{\tau - 4}{2} I$ is nonsingular.

Case (3): $F_i(\zeta^*) \in \mathrm{bd}^+(K^{n_i})$ and $G_i(\zeta^*) \in \mathrm{bd}^+(K^{n_i})$, where $\mathrm{bd}^+(K^{n_i}) := \mathrm{bd}(K^{n_i}) \setminus \{0\}$.
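Case (1) of the lemma is easy to verify numerically. The sketch below is our own illustration (the vector $F$ is invented): with $F \in \mathrm{int}(K^3)$ and $G = 0$ we have $z = (F^2)^{1/2} = F$, so the blocks from Proposition 3.4(a) reduce to $A = I$ and $B = \frac{\tau-2}{2} I$, and $(A - I) + (B - I) = \frac{\tau-4}{2} I$ is invertible for $0 < \tau < 4$.

```python
import numpy as np

def arrow(x):
    # arrow matrix L_x = [[x1, x2^T], [x2, x1*I]] associated with the SOC
    n = len(x)
    L = x[0] * np.eye(n)
    L[0, 1:] = x[1:]
    L[1:, 0] = x[1:]
    return L

tau = 1.5
F = np.array([2.0, 1.0, 0.0])   # F ∈ int(K^3): 2 > ||(1, 0)||
G = np.zeros(3)
z = F                           # z = (F∘F)^{1/2} = F when G = 0

A = arrow(F + (tau - 2) / 2 * G) @ np.linalg.inv(arrow(z))
B = arrow(G + (tau - 2) / 2 * F) @ np.linalg.inv(arrow(z))

print(np.allclose(A, np.eye(3)))                     # True
print(np.allclose(B, (tau - 2) / 2 * np.eye(3)))     # True

M = (A - np.eye(3)) + (B - np.eye(3))
print(np.allclose(M, (tau - 4) / 2 * np.eye(3)))     # True, hence nonsingular
```

Case (2) is symmetric, with the roles of $F$ and $G$ exchanged.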
By Proposition 3.1(a), it suffices to prove $w_i(\zeta^*) \in \mathrm{int}(K^{n_i})$. Suppose instead that $w_i(\zeta^*) \in \mathrm{bd}(K^{n_i})$. Then, from (18) in Lemma 2.2, it follows that $F_{i1}(\zeta^*) G_{i1}(\zeta^*) = F_{i2}(\zeta^*)^T G_{i2}(\zeta^*)$. Since $F_{i1}(\zeta^*) = \|F_{i2}(\zeta^*)\| \neq 0$ and $G_{i1}(\zeta^*) = \|G_{i2}(\zeta^*)\| \neq 0$, we have $\|F_{i2}(\zeta^*)\| \cdot \|G_{i2}(\zeta^*)\| = F_{i2}(\zeta^*)^T G_{i2}(\zeta^*)$, which implies that $F_{i2}(\zeta^*) = \alpha G_{i2}(\zeta^*)$ for some constant $\alpha > 0$ (equality in the Cauchy-Schwarz inequality). Consequently, $F_i(\zeta^*) = \alpha G_i(\zeta^*)$. Noting that $\langle F_i(\zeta^*), G_i(\zeta^*) \rangle = 0$, we then get $F_i(\zeta^*) = G_i(\zeta^*) = 0$. This clearly contradicts the assumptions that $F_i(\zeta^*) \neq 0$ and $G_i(\zeta^*) \neq 0$. So $w_i(\zeta^*) \in \mathrm{int}(K^{n_i})$. From the expressions for $A_i(\zeta)$ and $B_i(\zeta)$ given by Proposition 3.4(a),
$$ (A_i(\zeta^*) - I) + (B_i(\zeta^*) - I) = -L_{2 z_i(\zeta^*) - \frac{\tau}{2}(F_i(\zeta^*) + G_i(\zeta^*))}\, L_{z_i(\zeta^*)}^{-1} $$
100 payday loans Archives

So, okay, I can kind of figure out what the answer to that next question is

Ted: It is nearly impossible to maintain. When over fifty percent of your income is going to servicing debt, then unless your income is ridiculously high and your living costs are low, it is not sustainable.

Doug: … So, okay, I can kind of figure out what the answer to that next question is. Read More »
Cos2x - Formula, Identity, Examples, Proof | Cos^2x Formula

The cosine function is among the most significant mathematical functions, with a wide array of applications in fields such as physics, engineering, and mathematics. It is a periodic function that oscillates between -1 and 1 and is defined as the ratio of the adjacent side to the hypotenuse of a right triangle. The cosine function is a fundamental component of trigonometry, the study of the relationships between the sides and angles of triangles.

One of the most fundamental identities involving the cosine function is the cos2x formula, also known as the cosine double angle formula. The cos2x formula lets us simplify trigonometric expressions involving cosines, sines, and other trigonometric functions. It is a powerful tool in mathematics and has many real-life uses.

In this blog post, we will examine the cos2x formula and its properties in depth. We will define the formula, state its identity, and show how it can be used to simplify trigonometric expressions. We will also give a proof of the cos2x formula to help you better understand its derivation. Furthermore, we will offer examples of how the cos2x formula is used in different fields, including physics, engineering, and mathematics.

Whether you're a student who finds trigonometry hard to grasp or a professional wanting to apply the cos2x formula to real-life problems, this article provides a thorough overview of this important concept. By the end of this article, you will have a better understanding of the cos2x formula and its uses, and how it can be applied to solve complex problems.

Significance of the Cos2x Formula

The cos2x formula is an important tool in trigonometry and has many applications in various fields. Understanding the cos2x formula and its properties can help you solve complex trigonometric problems efficiently.
In mathematics, the cos2x formula is often used to simplify trigonometric expressions involving cosines, sines, and other trigonometric functions. By applying the cos2x formula, we can reduce complex expressions to simpler forms, making them easier to understand and manipulate. In addition, the cos2x formula can be used to derive other trigonometric identities, making it a crucial tool for expanding our knowledge of trigonometry.

In physics, the cos2x formula is used to model physical phenomena that involve periodic motion, such as vibrations and waves. It is used to work out the frequencies and amplitudes of oscillating systems, for example electromagnetic waves and sound waves.

In engineering, the cos2x formula is used in the design and analysis of various structures, including buildings, bridges, and mechanical systems. Engineers use trigonometric functions and their identities, such as the cos2x formula, to calculate stresses and forces on structures and to design components that can withstand those forces.

In short, the cos2x formula is an essential concept in trigonometry with many real-life uses. It is a powerful tool that can be applied to simplify complex trigonometric expressions, derive other trigonometric identities, and model physical phenomena. Understanding the cos2x formula and its properties is important for anyone working in fields such as mathematics, physics, or engineering, or in any other domain that involves periodic motion or structural design.

Cos2x Formula

The cos2x formula states that:

cos(2x) = cos^2(x) - sin^2(x)

where x is an angle in radians. This formula can be used to simplify trigonometric expressions involving cosines, sines, and other trigonometric functions. It can also be used to derive other trigonometric identities.

Cos^2x Formula

The cos^2x formula is obtained from the cos2x formula by solving for cos^2(x).
The result is:

cos^2(x) = (1 + cos(2x)) / 2

This formula can likewise be used to simplify trigonometric expressions involving cosines and sines.

Identity of the Cos2x Formula

The cos2x formula can be derived using the following trigonometric identities:

cos(2x) = cos^2(x) - sin^2(x)
sin(2x) = 2sin(x)cos(x)

By the Pythagorean identity:

sin^2(x) + cos^2(x) = 1

we can solve for sin^2(x) in terms of cos^2(x):

sin^2(x) = 1 - cos^2(x)

Substituting this expression for the sin^2(x) term in the cos(2x) formula, we get:

cos(2x) = cos^2(x) - (1 - cos^2(x))

Simplifying:

cos(2x) = cos^2(x) - 1 + cos^2(x) = 2cos^2(x) - 1

This is the cos2x formula.

Examples of the Cos2x Formula

Here are a few practical examples of the cos2x formula:

Example 1: Evaluating the Cosine of an Angle

Suppose we want to find the value of cos(2π/3). We can apply the cos2x formula to simplify this expression:

cos(2π/3) = cos^2(π/3) - sin^2(π/3)

We know that sin(π/3) = √3/2 and cos(π/3) = 1/2, so we can substitute these values into the formula:

cos(2π/3) = (1/2)^2 - (√3/2)^2 = 1/4 - 3/4 = -1/2

So cos(2π/3) = -1/2.

Example 2: Relating Other Trigonometric Identities

We can use the Pythagorean identity underlying the cos^2x formula to rewrite the double angle formula for the sine function:

sin(2x) = 2sin(x)cos(x)

Since sin^2(x) + cos^2(x) = 1, we have cos(x) = √(1 - sin^2(x)) whenever cos(x) ≥ 0. Substituting this in gives:

sin(2x) = 2sin(x)√(1 - sin^2(x)) = 2sin(x)√(cos^2(x)) = 2sin(x)cos(x)  (for cos(x) ≥ 0),

which expresses sin(2x) entirely in terms of sin(x) on that range.

In summary, the cos2x formula and its identity are important ideas in trigonometry with several real-world applications.
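The identities above are easy to sanity-check numerically; here is a minimal Python sketch (the sample angles are arbitrary):

```python
import math

def cos2x_via_identity(x):
    # cos(2x) = 2*cos^2(x) - 1
    return 2 * math.cos(x) ** 2 - 1

for x in (0.3, 2 * math.pi / 3, -1.7):
    direct = math.cos(2 * x)
    # double angle identity: cos(2x) = 2cos^2(x) - 1
    print(abs(direct - cos2x_via_identity(x)) < 1e-12)       # True
    # cos^2x formula: cos^2(x) = (1 + cos(2x)) / 2
    print(abs(math.cos(x) ** 2 - (1 + direct) / 2) < 1e-12)  # True

# worked example from the text: cos(2π/3) = -1/2
print(round(cos2x_via_identity(math.pi / 3), 10))  # -0.5
```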
Understanding the formula and its uses can benefit you in various domains, including engineering, physics, and mathematics. In this blog article, we explored the cos2x formula and its identity, gave a proof of the formula, and showed examples of how it can be applied. Whether you're a student or a professional, a good grasp of the cos2x formula can help you better understand trigonometric functions and work through problems more efficiently.

If you want help understanding the cos2x formula or other trigonometric concepts, consider reaching out to Grade Potential Tutoring. Our expert instructors are available in person or online to provide customized and effective tutoring services to help you succeed. Connect with us today to schedule a tutoring session and take your math skills to another level.
Colloquia — Fall 2014 Monday, November 24, 2014 Title: Random Matrix Models, Non-intersecting random paths, and the Riemann-Hilbert Analysis Speaker: Andrei Martínez-Finkelshtein, Universidad de Almería, Almería, SPAIN Time: 3:00pm–4:00pm Place: NES 103 E. A. Rakhmanov Random matrix theory (RMT) is a very active area of research and a great source of exciting and challenging problems for specialists in many branches of analysis, spectral theory, probability and mathematical physics. The analysis of the eigenvalue distribution of many random matrix ensembles leads naturally to the concepts of determinantal point processes and to their particular case, biorthogonal ensembles, when the main object to study, the correlation kernel, can be written explicitly in terms of two sequences of mutually orthogonal functions. Another source of determinantal point processes is a class of stochastic models of particles following non-intersecting paths. In fact, the connection of these models with the RMT is very tight: the eigenvalues of the so-called Gaussian Unitary Ensemble (GUE) and the distribution of random particles performing a Brownian motion, departing and ending at the origin under the condition that their paths never collide are, roughly speaking, statistically identical. A great challenge is the description of the detailed asymptotics of these processes when the size of the matrices (or the number of particles) grows infinitely large. This is needed, for instance, for verification of different forms of “universality” in the behavior of these models. One of the rapidly developing tools, based on the matrix Riemann-Hilbert characterization of the correlation kernel, is the associated non-commutative steepest descent analysis of Deift and Zhou. Without going into technical details, some ideas behind this technique will be illustrated in the case of a model of squared Bessel nonintersecting paths.
Friday, November 21, 2014 Title: Random Matrices, Integrable Wave Equations, Determinantal Point Processes: a Swiss-Army Knife Approach Speaker: Marco Bertola, Concordia University Time: 3:00pm–4:00pm Place: CMC 130 Seung-Yeop Lee Random matrix models, nonlinear integrable waves, Painlevé transcendents, and determinantal random point processes seem very unrelated topics. They have, however, a common point in that they can be formulated as, or related to, a Riemann-Hilbert problem, which then enters prominently as a very versatile tool. Its importance is not only in providing a common framework, but also in that it opens the way to rigorous asymptotic analysis using the nonlinear steepest descent method. I will briefly sketch and review some results in the above-mentioned areas. Friday, November 14, 2014 Title: Virtual Endomorphisms of Groups Speaker: Said Sidki, Universidade de Brasilia Time: 3:00pm–4:00pm Place: CMC 130 Dmytro Savchuk A virtual endomorphism of a group \(G\) is a homomorphism \(f:H\rightarrow G\) where \(H\) is a subgroup of \(G\) of finite index \(m\). A recursive construction using \(f\) produces a so-called state-closed (or self-similar, in dynamical terms) representation of \(G\) on a \(1\)-rooted regular \(m\)-ary tree. The kernel of this representation is the \(f\)-\(\mathrm{core}(H)\); i.e., the maximal subgroup \(K\) of \(H\) which is both normal in \(G\) and is \(f\)-invariant, in the sense that \(K^{f}\leq K\). Examples of state-closed groups are the Grigorchuk \(2\)-group and the Gupta-Sidki \(p\)-groups in their natural representations on rooted trees. The affine group \(\mathbb{Z}^{n}\rtimes\mathrm{GL}(n,\mathbb{Z})\) as well as the free group \(F_{3}\) in three generators admit faithful state-closed representations.
Yet another example is the free nilpotent group \(G=F(c,d)\) of class \(c\), freely generated by \(x_{i}\) \((1\leq i\leq d)\): let \(H=\left\langle x_{i}^{n}(1\leq i\leq d)\right\rangle\) where \(n\) is a fixed integer greater than \(1\) and \(f\) be the extension of the map \(x_{i}^{n}\rightarrow x_{i}\) \((1\leq i\leq d)\). We will discuss state-closed representations of general abelian groups and of finitely generated torsion-free nilpotent groups. Friday, November 7, 2014 Title: The Interactions of Solitons in the Novikov-Veselov Equation Speaker: Jen-Hsu Chang, UC Riverside and National Defense Univ., Taiwan Time: 3:00pm–4:00pm Place: CMC 130 Wen-Xiu Ma Using the reality condition of the solutions, one constructs the real Pfaffian N-soliton solutions of the Novikov-Veselov (NV) equation using the tau function and the Schur identity. By the minor-summation formula of the Pfaffian, we can study the interactions of solitons in the Novikov-Veselov equation from the Kadomtsev-Petviashvili (KP) equation's point of view, that is, the totally non-negative Grassmannian. In particular, the Y-type resonance, and the O-type and P-type interactions of X-shape are investigated; furthermore, the Mach-type soliton is studied to describe the resonance of the incident wave and the reflected wave. Also, the maximum amplitude of the intersection of these line solitons and the critical angle are computed, and a comparison is made with the KP(II) equation. Friday, October 17, 2014 Title: Geometric curve flows and integrable systems Speaker: Stephen Anco, Department of Mathematics and Statistics, Brock University, Ontario, CANADA Time: 3:00pm–4:00pm Place: CMC 130 Wen-Xiu Ma The modern theory of integrable soliton equations displays many deep links to differential geometry, particularly in the study of geometric curve flows by moving-frame methods.
I will first review an elegant geometrical derivation of the integrability structure for two important examples of soliton equations: the nonlinear Schrödinger (NLS) equation and the modified Korteweg-de Vries (mKdV) equation. This derivation is based on a moving-frame formulation of geometric curve flows which are mathematical models of vortex filaments and vortex-patch boundaries arising in ideal fluid flow in two and three dimensions. Key mathematical tools are the Cartan structure equations of Frenet frames and the Hasimoto transformation relating invariants of a curve to soliton variables, as well as the theory of Poisson brackets for Hamiltonian PDEs. I will then describe a broad generalization of these results to geometric curve flows in semi-simple Klein geometries \(M=G/H\), giving a geometrical derivation of group-invariant (multi-component) versions of mKdV and NLS soliton equations along with their full integrability structure. Friday, October 3, 2014 Title: Ordering free groups and free products Speaker: Zoran Šunić, Texas A&M University Time: 3:00pm–4:00pm Place: CMC 130 Milé Krajčevski We utilize a criterion for the existence of a free subgroup acting freely on at least one of its orbits to construct such actions of the free group on the circle and on the line, leading to orders on free groups that are particularly easy to state and work with. We then switch to a restatement of the orders in terms of certain quasi-characters of free groups, from which properties of the defined orders may be deduced (some have positive cones that are context-free, some have word reversible cones, some of the orders extend the usual lexicographic order, and so on). Finally, we construct total orders on the vertex set of an oriented tree. The orders are based only on up-down counts at the interior vertices and the edges along the unique geodesic from a given vertex to another.
As an application, we provide a short proof of Vinogradov's result that the free product of left-orderable groups is left-orderable. Friday, September 26, 2014 Title: Recent developments in Quantum invariants of knots Speaker: Mustafa Hajij, Louisiana State University Time: 3:00pm–4:00pm Place: CMC 130 Mohamed Elhamdadi Quantum knot invariants deeply connect many domains such as Lie algebras, quantum groups, number theory and knot theory. I will talk about a particular quantum invariant called the colored Jones polynomial and some of the recent work that has been done to understand it. This invariant takes the form of a sequence of Laurent polynomials. I will explain how the coefficients of this sequence stabilize for a certain class of knots called alternating knots. Furthermore, I will show that this leads naturally to interesting connections with number theory. Friday, September 12, 2014 Title: The valence of polynomial harmonic mappings Speaker: Erik Lundberg, Florida Atlantic University Time: 3:00pm–4:00pm Place: CMC 130 Dmitry Khavinson While working to extend the Fundamental Theorem of Algebra, A. S. Wilmshurst used Bezout's theorem to give an upper bound for the number of zeros of a (complex-valued) harmonic polynomial. Although the bound is sharp in general, Wilmshurst conjectured that Bezout's bound can be refined dramatically. Using holomorphic dynamics, the conjecture was confirmed by D. Khavinson and G. Swiatek in the special case when the anti-analytic part is linear. We will discuss recent counterexamples to other cases as well as an alternative probabilistic approach to the problem.
Work Time: Temperature at the Park

The temperature went up and down on the day the Lee family went to the park. They arrived at the park at 8:00 in the morning.

• Can you use the information in the table to determine what the temperature was at 12 noon?
• Does the table represent a proportional relationship? Explain why or why not.

Hint: Calculate each ratio in the table. Are they the same?
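The check the second question asks for, computing each ratio and comparing, can be sketched in code. The table values below are made up for illustration (the lesson's actual table is not reproduced here): a relationship is proportional exactly when every ratio value/input is the same constant.

```python
def is_proportional(pairs, tol=1e-9):
    # proportional ⇔ y/x equals the same constant k for every pair (x, y)
    ratios = [y / x for x, y in pairs if x != 0]
    return all(abs(r - ratios[0]) < tol for r in ratios)

# hypothetical (hours since arrival, temperature) readings
temps = [(1, 62), (2, 64), (3, 68), (4, 70)]
print(is_proportional(temps))                       # False: 62, 32, 22.7, 17.5 differ

# a table that IS proportional: y = 3x throughout
print(is_proportional([(1, 3), (2, 6), (4, 12)]))   # True
```

This mirrors the hint: a temperature that merely "went up and down" over a day will almost never give equal ratios, so the relationship is not proportional.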
Integration calculator step by step absolute value — Algebra Tutorials! Saturday 2nd of November

Related topics: mcdougal littell math course 2 answers | learn algebra easy | fun ways to teach slope in algebra | matlab + example + exponent | quadratic simultaneous equations | why is pre algebra important | solve the system with fractions | free answers to saxon algebra 1 | trigonometric ratio | ninth grade math study books | factorize quadratic equations + online calculator

Flodasringvinjer (Registered: 16.10.2006, From: Maryland) posted Friday 29th of Dec 09:51:
Heya peeps! Does anyone here know about integration calculator step by step absolute value? I have this set of questions about it that I can't understand. Our class was asked to answer them and explain how we came up with the answers. Our algebra professor will select random people to solve the problem as well as show the solution to the class, so I need a detailed explanation regarding integration calculator step by step absolute value. I tried answering some of the questions but I guess I got them completely wrong. Please assist me, because it's a bit urgent, the deadline is quite near, and I haven't yet understood how to solve this.

ameich (Registered: 21.03.2005, From: Prague, Czech Republic) posted Saturday 30th of Dec 16:09:
Don't be so disheartened. I know exactly what you are going through right now. When I was a student, we didn't have much hope in such a situation, but these days, thanks to Algebrator, my son is doing wonderfully well in his math classes. He used to have problems with topics such as integration calculator step by step absolute value and Cramer's rule, but all his questions were answered by this one easy-to-use tool known as Algebrator. Try it and I'm sure you'll do well tomorrow.

sxAoc (Registered: 16.01.2002, From: Australia) posted Sunday 31st of Dec 15:10:
Algebrator really helps you out with integration calculator step by step absolute value. I have looked at every math software on the net. It is very logical. You just enter your problem and it will create a complete step-by-step report of the solution. This helped me a lot with subtracting exponents, adding exponents and converting decimals. It helps you understand math better. I was tired of paying huge sums to math tutors who could not give me sufficient time and attention. It is a cheap tool which could change your entire attitude towards math. Using Algebrator would be fun. Take it.

TheWxaliChebang (Registered: 12.12.2004, From: Hobart, Tasmania) posted Tuesday 02nd of Jan 07:31:
Thank you very much for your help! Could you please tell me how to get hold of this software? I don't have much time on hand since I have to finish this in a few days.

erx (Registered: 26.10.2001, From: PL/DE/ES/GB/HU) posted Tuesday 02nd of Jan 16:45:
I am a regular user of Algebrator. It not only helps me finish my assignments faster, the detailed explanations offered make understanding the concepts easier. I advise using it to help improve problem-solving skills.

Koem (Registered: 22.10.2001, From: Sweden) posted Wednesday 03rd of Jan 09:09:
Yes, I'm sure. This is tried and tested. Here: https://gre-test-prep.com/solving-quadratic-equations-using-the-quadratic-formula.html. Try to make use of it. You'll be improving your solving abilities way quicker than just by reading tutorials.
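Setting the advertised software aside, the thread's underlying math task, integrating an absolute value step by step, is mechanical: |x| has the antiderivative F(x) = x·|x|/2 on all of R (check: F'(x) = x = |x| for x > 0 and F'(x) = -x = |x| for x < 0), so a definite integral is just F(b) - F(a). A small sketch (the limits are arbitrary examples):

```python
def integral_abs_x(a, b):
    # ∫_a^b |x| dx via the antiderivative F(x) = x*|x|/2,
    # which is differentiable everywhere with F'(x) = |x|
    F = lambda t: t * abs(t) / 2
    return F(b) - F(a)

print(integral_abs_x(-1, 2))   # 2.5  (= 1/2 + 2, splitting at x = 0)
print(integral_abs_x(-3, 0))   # 4.5
```

For a general |g(x)|, the step-by-step recipe is the same idea: split the interval at the roots of g and integrate ±g on each piece.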
integration calculator step by step absolute value

Related topics: mcdougal littell math course 2 answers | learn algebra easy | fun ways to teach slope in algebra | matlab + example + exponent | quadratic simultanous equations#= | why is pre algebra important | solve the system with fractions | free answers to saxon algebra 1 | trigometric ratio | ninth grade math study books | factorize quadratic equations+online calculator

Author: Flodasringvinjer (Registered: 16.10.2006, From: Maryland)
Posted: Friday 29th of Dec 09:51
Heya peeps! Does anyone here know about integration calculator step by step absolute value? I have a set of questions about it that I can't understand. Our class was asked to answer them and explain how we came up with the answers. Our algebra professor will pick random people to solve the problems and show the solutions to the class, so I need a detailed explanation of integration calculator step by step absolute value. I tried answering some of the questions, but I guess I got them completely wrong. Please help me, because it's urgent: the deadline is quite near and I haven't yet understood how to solve this.

Author: ameich (Registered: 21.03.2005, From: Prague, Czech Republic)
Posted: Saturday 30th of Dec 16:09
Don't be so disheartened. I know exactly what you are going through right now. When I was a student we didn't have much hope in such a situation, but these days, thanks to Algebrator, my son is doing wonderfully well in his math classes. He used to have problems with topics such as integration calculator step by step absolute value and Cramer's rule, but all his questions were answered by this one easy-to-use tool called Algebrator. Try it and I'm sure you'll do well tomorrow.

Author: sxAoc (Registered: 16.01.2002, From: Australia)
Posted: Sunday 31st of Dec 15:10
Algebrator really helps you out with integration calculator step by step absolute value. I have looked at every math program on the net. It is very logical. You just enter your problem and it creates a complete step-by-step report of the solution. It helped me a lot with subtracting exponents, adding exponents and converting decimals, and it helps you understand math better. I was tired of paying huge sums to math tutors who could not give me enough time and attention. It is a cheap tool which could change your entire attitude towards math. Using Algebrator would be fun. Take it.

Author: TheWxaliChebang (Registered: 12.12.2004, From: Hobart, Tasmania)
Posted: Tuesday 02nd of Jan 07:31
Thank you very much for your help! Could you please tell me how to get hold of this software? I don't have much time on hand, since I have to finish this in a few days.

Author: erx (Registered: 26.10.2001, From: PL/DE/ES/GB/HU)
Posted: Tuesday 02nd of Jan 16:45
I am a regular user of Algebrator. It not only helps me finish my assignments faster, the detailed explanations it offers also make the concepts easier to understand. I advise using it to help improve problem-solving skills.

Author: Koem (Registered: 22.10.2001, From: Sweden)
Posted: Wednesday 03rd of Jan 09:09
Yes, I'm sure. This is tried and tested. Here: https://gre-test-prep.com/solving-quadratic-equations-using-the-quadratic-formula.html. Try to make use of it. You'll improve your solving abilities much quicker than just by reading tutorials.
Time bounds for selection

In this article, we discuss the paper "Time bounds for selection" by Manuel Blum, Robert W. Floyd, Vaughan Pratt, Ronald L. Rivest, and Robert E. Tarjan. The paper proposes an algorithm called PICK for the following problem: given an array of n integers, pick the ith smallest number. The algorithm is shown to require no more than 5.4305n comparisons.

The problem can be traced back to the world of sports and the design of tournaments to select the first and second best players. The first best player is obviously the one who wins the entire tournament. The earlier practice of declaring the loser of the final match the second best player is flawed, however, since the second best player may have been eliminated by the eventual winner in an earlier round. Around 1930, this problem entered the realm of algorithmic complexity: it was shown that no more than n + ⌈log2(n)⌉ - 2 matches are required, by holding a second tournament among the at most ⌈log2(n)⌉ players who lost directly to the winner.

There are certain notations that we need to understand here.
1. ith of S: S is a set of numbers, and "ith of S" denotes the ith smallest element of S.
2. rank of x: the rank of an element x of S is one more than the number of elements of S that are smaller than x. Equivalently, if y is the rank of x, then the yth of S is x.

See the below figure for better understanding.

The minimum worst-case cost, that is, the number of binary comparisons required, to select the ith of S with |S| = n will be denoted by f(i, n). A second notation, F(α), is introduced to measure the relative difficulty of computing percentile levels.

PICK operates by successively removing subsets of S whose elements are known to be too small or too large to be the ith of S, until only the ith of S remains. Each subset removed contains at least one quarter of the remaining elements.
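The two notations can be made concrete in a few lines of code. The following sketch (Python, ours for illustration; the paper itself works with abstract comparisons) implements rank and the naive sort-based selection that PICK improves on:

```python
def rank(x, S):
    """Rank of x in S: one more than the number of elements smaller
    than x, so that x is the (rank of x)th smallest element of S."""
    return 1 + sum(1 for y in S if y < x)

def ith_of(i, S):
    """The ith smallest element of S (1-indexed), found by sorting.
    This naive baseline costs O(n log n) comparisons."""
    return sorted(S)[i - 1]

S = [7, 1, 9, 4, 12]
# rank(9, S) is 4, and indeed ith_of(4, S) returns 9.
```

The point of the paper is that the sort in `ith_of` is wasteful: the ith element can be located with a number of comparisons that is linear in n.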
PICK is described in terms of three auxiliary functions b(i,n), c(i,n), and d(i,n). In order to avoid confusion, we omit the argument lists of these functions. The algorithm proceeds as follows.

1. Select an element m of S:
a) Arrange S into n/c columns of length c and sort each column.
b) Let T be the set of the dth smallest elements of the columns, so that |T| = n/c, and select m as the bth of T, using PICK recursively if |T| > 1.
2. Compute the rank of m: compare m to every element x of S for which it is not yet known whether m > x or m < x.
3. If the rank of m is i, we have our answer, since m is the ith of S. If the rank of m is greater than i, we discard D = the set of elements greater than m, and n becomes n - |D|. If the rank of m is less than i, we discard D = the set of elements less than m, n becomes n - |D|, and i becomes i - |D|. Go back to step 1.

Pseudocode for the above algorithm would look like:

PICK(a[], n, i)
1. if n <= c, sort a[] and return its ith smallest element
2. arrange a[] into k = n/c columns of length c and sort each column
3. form the set T of the dth smallest element of each column
4. m = PICK(T, k, b)       (the recursive call selects the bth of T)
5. r = rank of m, computed by comparing m to the other elements
6. if r = i, halt: m is the answer
   else if r > i, discard D = {x : x > m} and set n = n - |D|
   else discard D = {x : x < m}, set n = n - |D| and i = i - |D|
7. repeat from step 1 on the remaining elements

Proving that f(i,n) = O(n)

The functions b(), c() and d() determine the time complexity, and we need to choose them wisely, since an additional sorting step is performed. Let h(c) denote the cost of sorting c elements using Ford and Johnson's algorithm, also known as merge-insertion sort: a comparison sorting algorithm that involves fewer comparisons in the worst case than insertion sort or merge sort. The algorithm involves three steps; let n be the size of the array to be sorted.

1. Split the array into n/2 pairs of two elements and order them pairwise; if n is odd, the last element is not paired with any element.
2. The pairs are recursively sorted by their highest value.
Again, if n is odd, the unpaired last element is not considered a highest value and is left at the end of the collection. The highest elements now form a sorted list, call it the main list (a1, a2, ..., an/2), and the remaining ones form the pend list (b1, b2, ..., bn/2); for any given i, bi <= ai.
3. Insert the pend elements into the main list. We know that b1 <= a1, hence the list starts as {b1, a1, a2, ..., an/2}. The remaining pend elements are inserted by binary insertion, in groups chosen so that each insertion is made into a list of at most 2^k - 1 elements and therefore costs at most k comparisons. This process is repeated until all the elements are inserted.

The worst-case cost of sorting n elements this way is h(n) = sum over k from 1 to n of ⌈log2(3k/4)⌉ comparisons.

Now the cost of step 1a described previously is (n/c)·h(c). Hence it is only wise to make c a constant in order for the algorithm to run in linear time. Let P(n) be the maximum cost of PICK for any i. We can bound the cost of step 1b by P(n/c). After step 1b a partial order is formed, which can be seen in the figure below. Since the recursive call to PICK in step 1b determines which elements of T are < m and which are > m, we can separate the columns: every element of box G is clearly greater than m, while every element of box L is less. Therefore, only the elements in quadrants A and B need to be compared to m in step 2.

We can easily show that no elements are removed incorrectly in step 3: if the rank of m is greater than i, then m is too large and all elements greater than m may be removed, and vice versa. Also, note that at least all of L or all of G is removed on each pass.

P(n) <= (n/c)·h(c) + P(n/c) + n + P(n - min(|L|, |G|))

The values of b, c, d determine much of the computation cost. To minimise the above bound we can assume some values and proceed. Let c = 21 and b = |T|/2 = n/(2c) = n/42. Substituting these values into the recurrence, with h(21) = 66:

P(n) <= 66n/21 + P(n/21) + n + P(31n/42)

which solves to P(n) <= 58n/3, i.e. roughly 19.33n.

The basis for the induction is that, since h(n) < 19n for n < 10^5, any small case can be handled by sorting.
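The scheme just analyzed can be written out as running code. The sketch below (Python, ours for illustration) uses column length c = 5 rather than 21; any constant c >= 5 gives linear time, and 5 is the smallest such value. The pivot m is the median of the column medians (d is the middle of each column, b = |T|/2), and one side is discarded exactly as in step 3:

```python
def pick(S, i, c=5):
    """Return the ith smallest element of S (1-indexed): a sketch of
    the PICK / median-of-medians scheme, worst-case O(n) comparisons."""
    if len(S) <= c:
        return sorted(S)[i - 1]
    # Step 1a: arrange S into columns of length at most c, sort each.
    columns = [sorted(S[j:j + c]) for j in range(0, len(S), c)]
    # Step 1b: T holds the dth smallest (middle) element of each
    # column; m is selected recursively as the bth = |T|/2 th of T.
    T = [col[(len(col) - 1) // 2] for col in columns]
    m = pick(T, (len(T) + 1) // 2, c)
    # Step 2: determine the rank of m by partitioning around it.
    L = [x for x in S if x < m]
    E = [x for x in S if x == m]
    G = [x for x in S if x > m]
    # Step 3: either m is the answer, or a guaranteed fraction of S
    # is discarded and we recurse on the surviving side.
    if len(L) < i <= len(L) + len(E):
        return m
    if i <= len(L):
        return pick(L, i, c)
    return pick(G, i - len(L) - len(E), c)
```

Unlike quickselect with a random pivot, which is linear only in expectation, this pivot choice makes the linear bound hold in the worst case.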
PICK runs in linear time because a significant fraction of S is discarded on each pass, at a cost proportional to the number of elements discarded. Note that we must have c >= 5 for PICK to run in linear time.

Values of b, c, d
1. A reasonable choice of these functions helps the algorithm achieve linear time. The calculations above show that c must be a constant, since the cost of sorting depends on it, and that it must be greater than or equal to 5.
2. We have defined P(n) as the maximum cost of PICK for any i; we therefore choose c and d to be constants and b to be a function of c, since b determines the value of m (recall step 1b), so it makes sense to choose b in terms of c.

Improvements to PICK
Two modifications of PICK are described:
1. PICK1, which yields the best overall bound for F(α);
2. PICK2, which is more efficient for i in the ranges i < ßn or i > (1-ß)n, for ß = 0.203688.

PICK1 differs from PICK in the following respects:
1. The elements of S are sorted into columns only once, after which the columns broken by the discard operation are restored to full length by a (new) merge step at the end of each pass.
2. The partitioning step is modified so that the number of comparisons used is a linear function of the number of elements eventually discarded.
3. The discard operation breaks no more than half the columns on each pass, allowing the other modifications to work well.
4. The sorting step implicit in the recursive call that selects m is partially replaced by a merge step on the second and subsequent passes, since (3) implies that half of the set T operated on at pass j was already present in the recursive call at pass j - 1.

The procedure of PICK1 is relatively lengthy, and the optimized algorithm is full of red tape; in principle, for any particular n, it could be expanded into a decision tree without red-tape computation.
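The requirement c >= 5 noted above can be checked by a small computation (ours, not from the paper; it ignores floors and assumes odd c with n divisible by c). When m is the median of the medians of columns of length c, half of the columns have their median <= m, and each such column contributes (c+1)/2 elements <= m, so at least (c+1)/(4c) of S is guaranteed to fall on one side of m and can be discarded. Linearity then requires that the two recursive calls, on n/c elements and on the surviving part, together shrink the problem:

```python
from fractions import Fraction

def guaranteed_discard(c):
    """Fraction of S guaranteed to lie on one side of the pivot m
    when m is the median of medians of columns of odd length c:
    half the columns have median <= m, each contributing (c+1)/2
    elements <= m (floors and the last partial column ignored)."""
    return Fraction(c + 1, 4 * c)

def is_linear(c):
    """The recurrence P(n) <= O(n) + P(n/c) + P((1 - f)n) solves to
    O(n) exactly when 1/c + (1 - f) < 1, f the discard fraction."""
    return Fraction(1, c) + (1 - guaranteed_discard(c)) < 1

# c = 3 fails (1/3 + 2/3 = 1), c = 5 is the smallest odd value that
# works (1/5 + 7/10 = 9/10), and c = 21 discards at least 11/42,
# which exceeds the "one quarter" promised in the overview.
```

This is why column length 3 is useless for PICK even though it sorts cheaply: nothing forces the total recursive work below n.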
PICK2 is essentially the same as PICK with the functions b(i, n), c(i, n), and d(i, n) chosen to be i, 2, and 1, respectively, and with the partitioning step eliminated.

Time complexity
The main result is the time complexity of PICK: f(i,n) = O(n). This has, however, been tuned to provide tighter results. Recall the definition of F(α) as a measure of the relative difficulty of computing percentile levels. The analysis of the modified algorithms yields three inequalities, (1) to (3), bounding F(α). There is no evidence to suggest that any of the inequalities (1)-(3) is the best possible; in fact, the authors surmise that they can be improved considerably.

The most important result of this paper is that selection can be performed in linear time in the worst case. No more than 5.4305n comparisons are required to select the ith smallest of n numbers, for any i, 1 <= i <= n. This bound can be improved when i is near the ends of its range, i.e. when i is close to 1 or to n.

With this article at OpenGenus, you must have a strong sense of the time complexity analysis of algorithmic problems and understand why selecting the element at a specific rank takes only linear time.
The TIGERSearch query language

III. The TIGERSearch query language

1. Introduction
In this chapter, the TIGER language will be introduced: the query language for the TIGERSearch query engine. Actually, the TIGER language is not only a query language, but a general description language for syntax graphs, i.e. restricted directed acyclic graphs. Syntax graphs are close relatives of (syntax) trees, of feature structures and of dependency graphs. Syntax graphs are syntax trees with two additions: edges may have labels, and crossing edges are permitted. On the other hand, syntax graphs differ from feature structures in two respects: first, the leaf nodes are ordered (as in syntax trees); second, two edges must not join in a common node. This means that the structure-sharing mechanism of feature structures is ruled out. However, structure sharing can be expressed on the additional level of so-called secondary edges.

We call the TIGER language a 'description language' to emphasize that, in principle, the language serves to represent corpus annotations as well as corpus queries. In the TIGER system, only a proper sublanguage of the TIGER description language may be used for corpus annotation; this sublanguage does not include any possibility for underspecification. Note that, for reasons of technical convenience, the corpus annotation has to be encoded in the XML counterpart of the corpus annotation sublanguage (TIGER-XML, cf. chapter V). The TIGER language accommodates a wide range of common treebank formats.

Acknowledgements and references
Syntax graphs are a formalization of the so-called Negra data structure, which has been used to encode the first major treebank for German, the Negra corpus (cf. [SkutEtAl1997]). The design and the formal definition of the TIGER language have been influenced by other work on formal languages. We want to thank our colleagues in the TIGER and DEREKO projects (cf.
TIGER project homepage and DEREKO project homepage) for their creative input and their patience in discussing the details of earlier versions of this chapter.

2. Language overview
Like other formal languages, the TIGER language is defined recursively, i.e. by nested layers of formal constructs. The individual nodes in syntax graphs can be described with feature constraints, i.e. Boolean expressions over feature-value pairs. For the sake of computational simplicity, we do not admit nested feature structures. This means that feature value descriptions must denote constants or, more precisely, strings. For this reason, we talk about feature records rather than feature structures. Here are some sample queries for the TIGERSampler corpus, which is part of the TIGERSearch distribution (cf. [Smith2002] for an introduction to the corpus annotation).

[word="Abend" & pos="NN"]
[word=/Ma.*/ & pos= ("NN"|"NE")]

There are two elementary node relations, precedence (.) and labelled dominance (>L), and a range of derived node relations such as unlabelled direct dominance (>) and general dominance (>*). Example:

[cat="NP"] > [pos="ART"]

Graph descriptions are built from (restricted) Boolean expressions over node relations. Feature values, feature constraints, and nodes can be referred to by logical variables (e.g. #n), which are bound existentially at the outermost formula level. Example:

(#n:[cat="NP"] > [pos="ART"]) & (#n >* [pos="ADJA"])

Queries can include template calls and type names. Template definitions help to modularize lengthy queries. Types are a means of structuring the universe of feature-value pairs. Type definitions include the declaration of features with domain and range types and the definition of type hierarchies.

In the subsequent sections, we introduce the TIGER language informally, i.e. by way of examples. All sample queries should work on the TIGERSampler corpus, which is distributed with the TIGERSearch software.
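As an illustration of how flat feature records pair with such queries, here is a sketch (Python, purely ours; TIGERSearch itself evaluates queries against a precompiled corpus index, not like this) that checks a conjunction of feature-value pairs against a feature record. Values written /.../ are treated as regular expressions anchored at both ends, mirroring the /x/ = /^x$/ convention described later in section 3.5:

```python
import re

def matches(node, constraint):
    """Check a flat feature record (feature -> string) against a
    conjunction of feature-value pairs. Values of the form /.../
    are regular expressions matched against the whole value; all
    other values are string literals."""
    for feature, value in constraint.items():
        actual = node.get(feature)
        if actual is None:
            return False
        if value.startswith("/") and value.endswith("/") and len(value) > 1:
            if re.fullmatch(value[1:-1], actual) is None:
                return False
        elif actual != value:
            return False
    return True

# [word="Abend" & pos="NN"] matched against one terminal node:
matches({"word": "Abend", "pos": "NN"}, {"word": "Abend", "pos": "NN"})
# [word=/Ma.*/] matched against another:
matches({"word": "Mann", "pos": "NN"}, {"word": "/Ma.*/"})
```

The function name and the dictionary encoding are our own invention for exposition; only the query syntax being mimicked comes from the manual.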
A formal definition of the TIGER language is given in a separate document (cf. [KoenigLezius2001]). In section 12, the reader will find a quick reference of all language elements.

3. Feature value descriptions

3.1 Strings
Strings are to be marked by quotation marks, e.g. "NN". To function as an actual TIGERSearch query, the string must be used as the value in a feature-value pair, surrounded by brackets:

[pos="NN"]

Characters which are TIGER reserved symbols must be preceded by the \-symbol. TIGER reserved symbols are listed in section 12.

3.2 Types
Constant-denoting type symbols can serve as descriptions of feature values (cf. section 8). If a feature value symbol comes without quotes, it is interpreted as a type name.

3.3 Boolean expressions
Descriptions of feature values may be complex in the sense that type symbols and strings can be combined in Boolean expressions. The operators are ! (negation), & (conjunction), and | (disjunction). Here are uses of Boolean expressions as feature value descriptions:

[pos = ("NN" | "NE")]
[pos = ! ("NN" | "NE")]

By the way, the latter query can also be written as:

[pos != ("NN" | "NE")]

Please note: Boolean expressions for feature values which involve binary operators (conjunction and disjunction) must always be put into parentheses. See the example above, where the outer parentheses cannot be omitted!

The operator precedence is defined as follows: !, &, |. This definition is illustrated by the following examples:

│ Example               │ Interpretation           │
│ ! "NN" & "NE"         │ (!"NN") & ("NE")         │
│ "NN" & "NE" | "PREL"  │ ("NN" & "NE") | ("PREL") │

3.4 Variables
A Boolean feature value description can be referred to by a variable (in the example: #c):

[pos= #c:("NN"|"NE")]

A variable name has to start with a #-symbol. See subsection 7.2 for more meaningful applications of variables.

3.5 Regular expressions
One can also use regular expressions as feature value descriptions. Regular expressions are marked by enclosing slashes /.
The syntax of regular expressions in TIGER is compatible with the syntax of regular expressions in Perl 5.003 (cf. [WallEtAl1996]). In our implementation, the following expression types are available:

Single characters
│ a │ the character a │
│ . │ any character   │

Character classes
│ [ace]  │ any of the characters a, c, e │
│ [a-z]  │ any lower-case letter         │
│ [^a-f] │ any character except a to f   │

Special characters
│ \s │ whitespace (space, tab, return) │
│ \d │ digit (0-9)                     │

Alternation and repetition
│ (abc|de)  │ the string abc, or de                                │
│ (ab)*     │ no or any number of ab (empty string, ab, abab, ...) │
│ (ab)+     │ at least one ab (ab, abab, ...)                      │
│ (ab)?     │ no or exactly one ab                                 │
│ (ab){m,n} │ from m to n occurrences of ab                        │
│ ab+       │ a followed by at least one b (ab, abb, ...)          │

Please note: In our notation /x/ means /^x$/ in the Perl notation.

The following example means 'find words which start with spiel':

[word = /spiel.*/]

With the following query, one can locate the words das and der, irrespective of the capitalization of the first letter:

[lemma = /[dD](as|er)/]

The following query finds words which contain at least one upper-case letter or digit at a non-initial position, i.e. hyphenated compounds and potential abbreviations and product names:

[word = /.+([0-9A-Z])+.*/]

Please note: There is a difference between . and \. in the context of regular expressions. The first of the following queries denotes all strings starting with the prefix sagt, whereas the second means all strings consisting of the prefix sagt followed by an arbitrary, possibly empty number of full stops:

[word = /sagt.*/]
[word = /sagt\.*/]

Please note: The TIGER language compiler performs only a rough check of the syntax of regular expressions. The fine-grained syntax check for regular expressions is carried out when a query is evaluated. It may therefore take more effort to discover the syntax errors you have made in a regular expression.

3.6 Boolean expressions vs.
types vs. regular expressions
Regular expressions for feature values should be reserved for 'open-ended' feature values such as word forms. For features with a restricted range of values, such as syntactic categories, the use of types and Boolean expressions is suggested in order to increase readability and processing efficiency. If possible, types should be used instead of Boolean expressions.

4. Feature constraints

4.1 Feature-value pairs
Simple feature constraints, i.e. feature-value pairs such as pos="NN", have already been introduced in section 3.

4.2 Boolean expressions
Complex feature constraints are Boolean expressions over feature-value pairs:

[word="das" & pos="ART"]
[word = /sp.*/ | pos = "VVFIN"]
[word="das" & !(pos="ART")]

Basically, the last query is equivalent to the following two queries. Note that this kind of equivalence is not generally valid in a typed system (cf. section 8)!

[word="das" & pos != "ART"]
[word="das" & pos = (!"ART")]

The operator precedence is defined as follows: !, &, |. This definition is illustrated by the following examples:

│ Example                             │ Interpretation                          │
│ [! pos="NN" & word="der" ]          │ [(!pos="NN") & (word="der")]            │
│ [pos="NN" & word="Haus" | pos="NE"] │ [(pos="NN" & word="Haus") | (pos="NE")] │

4.3 Variables
A feature constraint can be prefixed by a variable. For further discussion see subsection 7.2.

[#f: (word="das" & pos="ART")]

4.4 Unspecific feature constraint
The unspecific feature constraint is written as []. It denotes the whole universe of nodes.

5. Node descriptions
A node description, or simply a node, is a pair of a node identifier and a feature constraint. In queries, node identifiers must be node variables (in the example: #n1):

#n1:[word="das" & pos="ART"]

Please note: The use of constant node identifiers is reserved for corpus definitions only.
It may not be used in corpus queries:

(*) "id1":[word="das" & pos="ART"]

In queries, the node variable can be completely omitted from a node description unless it is needed for coreference with some other node description. This means that a plain feature constraint is interpreted as a node description with an unspecified node identifier.

[word="das" & pos="ART"]

On the other hand, a node variable by itself, #n1, is interpreted as an (unspecified) node description, i.e. as #n1:[ ].

6. Node relations

6.1 Elementary node relations

Since syntax graphs are two-dimensional objects, we need two operators to express the spatial relations. We simply take the nomenclature from linguistics, and call the vertical dimension the dominance relation and the horizontal dimension the precedence relation.

Labelled direct dominance

The symbol > means direct dominance. It is further specified by an edge label, for example HD. This means that labelled direct dominance is expressed by e.g. >HD. The constraint that an edge which is labelled HD leads from node #n1 to node #n3 (or that #n3 is a direct HD-successor of #n1) is written as follows:

#n1 >HD #n3

As a more comprehensive example, the following labelled dominance constraints encode the vertical dimension of the tree in the graph presented below. Note that labelled dominance is a relation among nodes, not a function, as is the case for feature structures. This means that there may be more than one edge with the same label leading out of one mother node, cf. the NK-edges in the presented figure:

#n1 >SB #n2
#n1 >HD #n3
#n2 >NK #n4
#n2 >NK #n5

On the basis of the directed edges, which are defined by the dominance relation, the nodes of a syntax graph in the corpus annotation are classified in the following manner: Nodes with outgoing edges, i.e. nodes with children, are called inner nodes or nonterminal nodes. In the presented figure only the nodes #n1 and #n2 are inner nodes.
Nodes without successors are named leaf nodes or terminal nodes. For example, the nodes #n4, #n5, and #n3 are the leaves of the tree in the presented figure.

Direct precedence of leaf nodes

A syntax graph is not only defined by constraining the vertical arrangement of its nodes, but also by the horizontal order of its leaf nodes, i.e. by the precedence relation among the leaves. We use the . symbol to represent direct precedence, since it serves as the concatenation operator in some programming languages. Now, the horizontal dimension of the tree in the presented figure is determined by the two precedence constraints:

#n4 . #n5
#n5 . #n3

6.2 Derived node relations

In queries, one wants to refer to indeterminate portions of a graph. Therefore, generalized notions of dominance and precedence ('wildcards') are necessary. Furthermore, certain convenient abbreviations should be introduced, like a sibling operator. Dominance in general means that there is a path from one node to another one via a connected series of direct dominance relations. The distance of a dominance relation is the length of the path between the two nodes, i.e. the number of direct dominance relations to be traversed. The minimum distance is 1, i.e. in TIGERSearch, a node does not dominate itself.
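Before turning to the derived relations, the elementary relations over the example tree of subsection 6.1 can be sketched programmatically. The following Python fragment is only an illustration of the semantics, not TIGERSearch code; it encodes the four labelled edges and the ordered leaf list, with node names taken from the #n variables of the text.

```python
# The example tree from subsection 6.1: labelled direct dominance edges
# and the horizontal order of the terminals (illustrative model only).
EDGES = {("n1", "n2"): "SB", ("n1", "n3"): "HD",
         ("n2", "n4"): "NK", ("n2", "n5"): "NK"}
LEAVES = ["n4", "n5", "n3"]

def dominates(mother, daughter, label=None):
    """#mother >LABEL #daughter; an unlabelled > matches any edge."""
    got = EDGES.get((mother, daughter))
    return got is not None and (label is None or got == label)

def directly_precedes(x, y):
    """#x . #y for leaf nodes."""
    return (x in LEAVES and y in LEAVES
            and LEAVES.index(y) == LEAVES.index(x) + 1)

assert dominates("n1", "n2", "SB")      # #n1 >SB #n2
assert dominates("n2", "n4")            # unlabelled: any edge matches
assert not dominates("n1", "n4")        # direct dominance only
assert directly_precedes("n4", "n5") and directly_precedes("n5", "n3")
```

Note how the unlabelled variant simply leaves the label unspecified, exactly as described for the > operator.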
We introduce the following generalizations of the labelled direct dominance relation >L:

│ > │ unlabelled direct dominance │
│ >* │ dominance (minimum distance 1) │
│ >n │ dominance with distance n (n>0) │
│ >m,n │ dominance with distance between m and n (0<m<n) │
│ >@l │ leftmost terminal successor ('left corner') │
│ >@r │ rightmost terminal successor ('right corner') │
│ !>L │ negated labelled direct dominance │
│ !> │ negated unlabelled direct dominance │
│ !>* │ negated dominance │
│ !>n │ negated dominance, distance n │
│ !>m,n │ negated dominance, distance m...n (0<m<n) │
│ !>@l │ negated left corner │
│ !>@r │ negated right corner │

For unlabelled edges, the label is considered to be unspecified, i.e. an unlabelled edge can match any edge. The left corner and the right corner relations are reflexive, i.e. the leftmost (or rightmost) successor of a leaf node is the leaf node itself. For the example tree in subsection 6.1, the following relations hold:

#n1 >* #n4
#n1 >2 #n4
#n1 >@l #n4
#n4 >@l #n4

Concerning the negated node relations, remember that variables are bound existentially at the outermost formula level. For example, the following query has to be read as 'find a syntax graph with an NP node and an NE node which are not connected by an NK-edge':

[cat="NP"] !>NK [pos="NE"]

So far, we have only talked about the precedence of leaf nodes. Whereas, for syntax trees, the precedence relation among inner nodes is obvious, the situation is not clear at all for syntax graphs, which admit crossing edges. For example, the following two figures are different visualizations of the same syntax graph wrt. the arrangement of the nodes #n2 and #n3. There are several alternatives for defining the precedence of inner nodes. One could adopt the strict precedence definition of proper trees: A node #n1 precedes a node #n2 if all nodes dominated by #n1 precede all nodes dominated by #n2 (cf. [SteinerKallmeyer2002]), i.e.
if the right corner of #n1 precedes the left corner of #n2. Based on this strict definition, there would be no precedence relation at all between the nodes #n2 and #n3 in the first syntax graph shown above. We decided on a somewhat weaker definition, which admits precedence statements even for overlapping graphs: A node #n1 is said to precede another node #n2 if the left corner of #n1 precedes the left corner of #n2 (left corner precedence). According to the left corner based definition, node #n2 precedes node #n3 in the first syntax graph shown above. The user may define his own version of precedence as a template (cf. section 9), e.g. with the help of the right corner operator >@r.

The distance n between two non-identical leaf nodes is the number of leaf nodes between these two nodes, increased by 1. For example, the distance between two directly neighbouring leaves is 1. The distance between two inner nodes #x and #y is the distance between their respective left corners. Two nodes which share the same left corner do not precede each other. In particular, a node does not precede itself. For the precedence relation, there are the derived operators listed below:

│ .* │ precedence (minimum distance 1) │
│ .n │ precedence with distance n (n>0) │
│ .m,n │ precedence with distance between m and n (0<m<n) │
│ !.* │ negated precedence │
│ !.n │ negated precedence, distance n │
│ !.m,n │ negated precedence, distance m...n │

Sibling relation

Two nodes #n1 and #n2 are siblings if they are directly dominated by the same node #n0, i.e. if they have the same mother node #n0. We use the following writing conventions:

│ $ │ siblings │
│ $.* │ siblings with precedence │
│ !$ │ negated siblings │

We did not include the negated $.* relation since it seems rather counter-intuitive.
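The derived relations — dominance distance, the reflexive left corner, and left-corner precedence — can likewise be sketched over the example tree of subsection 6.1. Again, this is an illustrative Python model of the definitions above, not the TIGERSearch implementation.

```python
# Derived node relations over the example tree from subsection 6.1.
CHILDREN = {"n1": ["n2", "n3"], "n2": ["n4", "n5"]}
LEAVES = ["n4", "n5", "n3"]

def dominance_distance(x, y):
    """Length of the dominance path from x to y, or None.
    Minimum distance is 1: a node does not dominate itself."""
    if y in CHILDREN.get(x, []):
        return 1
    for c in CHILDREN.get(x, []):
        d = dominance_distance(c, y)
        if d is not None:
            return d + 1
    return None

def left_corner(x):
    """>@l is reflexive: the left corner of a leaf is the leaf itself."""
    return x if x not in CHILDREN else left_corner(CHILDREN[x][0])

def precedes(x, y):
    """Left-corner precedence: #x .* #y."""
    return LEAVES.index(left_corner(x)) < LEAVES.index(left_corner(y))

assert dominance_distance("n1", "n4") == 2   # #n1 >2 #n4
assert left_corner("n1") == "n4"             # #n1 >@l #n4
assert left_corner("n4") == "n4"             # reflexivity
assert precedes("n2", "n3")                  # inner-node precedence
assert not precedes("n1", "n2")              # same left corner #n4
```

The last assertion illustrates the rule that two nodes sharing the same left corner do not precede each other.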
6.3 Secondary edges

Although, in the kernel of the TIGER language, a node must not be dominated by more than one node, we have added so-called secondary edges as an extra layer in order to allow for structure sharing in syntax graphs. A secondary edge defines an additional parent node for a given node.

│ >~L │ labelled secondary edge │
│ >~ │ secondary edge │
│ !>~L │ negated labelled secondary edge │
│ !>~ │ negated secondary edge │

7. Graph descriptions

7.1 Boolean expressions

Graph descriptions or graph constraints are (restricted) Boolean expressions over node relations and node descriptions. Currently, conjunction & and disjunction | are available as logical connectives. For example, with the help of the &-operator, the following node relations can be joined into a graph constraint which retrieves the tree shown below.

(#n1 >SB #n2) & (#n1 >HD #n3) & (#n2 >NK #n4) & (#n2 >NK #n5)

Parentheses can be omitted in the usual fashion:

#n1 >SB #n2 & #n1 >HD #n3 & #n2 >NK #n4 & #n2 >NK #n5

The operator precedence is defined as follows: Relation, &, |. This definition is illustrated by the following examples:

│ Example │ Interpretation │
│ #v > #w & #x │ (#v > #w) & #x │
│ #v & #w | #x │ (#v & #w) | #x │

7.2 Use of variables

Variables for feature values

Variables for feature values are typically used to express agreement constraints. The following query looks for two adjacent nodes which are labelled with NN or NE.

[pos = #noun] . [pos = #noun:("NN" | "NE")]

Variables for feature constraints

With variables for feature constraints, we can search e.g. for sentences which contain the same preposition (the same word form!) twice:

[#f:(pos="APPR")] .* [#f]

Please note: There is a subtle difference if we used a feature value variable instead. If we only require the identity of the feature value, i.e.
of the part-of-speech tag, we get all sentences which contain at least two prepositions (not necessarily the same word form!):

[pos = #v:"APPR"] .* [pos=#v]

Node variables

Node variables are necessary to express multiple node relations with respect to one node, e.g. to list the children of a node like in the example in subsection 7.1:

#np:[cat="NP"] & #np > [pos="ADJA"] & #np > [pos="NN"]

Node (in)equality

Two node variables #n1 and #n2 may match the same node in the corpus. If this causes problems, the inequality of two node variables can be enforced e.g. by adding the following subformula, which requires the variables #n1 and #n2 to match distinct nodes (due to the irreflexivity of the precedence relation):

((#n1 .* #n2) | (#n2 .* #n1))

If your corpus contains unary transitions (nonterminal nodes with one single nonterminal daughter), you should use a weaker constraint for node inequality:

((#n1 .* #n2) | (#n2 .* #n1)) | ((#n1 >* #n2) | (#n2 >* #n1))

7.3 Graph predicates

In principle, we have now introduced all the operators needed to describe syntax graphs. For reasons of convenience, and to a certain extent for reasons of completeness, we have added so-called graph predicates, e.g. to designate the root of a graph.

Root predicate

The root of a graph (for a whole sentence) can be identified by the predicate root.

Arity predicates

The following graph description describes all graphs which contain a certain node #n1 with at least two children #n2 and #n3:

(#n1 > #n2) & (#n1 > #n3)

However, one would like to state that there must be exactly two children. For this reason, we introduce a two-place operator arity in order to be able to restrict the number of children of a node #n1, e.g. to two children:

(#n1 > #n2) & (#n1 > #n3) & arity(#n1,2)

The arity predicate can also come with three arguments in order to indicate an interval of number of children, e.g.
from two to four children:

(#n1 > #n2) & (#n1 > #n3) & arity(#n1,2,4)

Similarly, there is a tokenarity operator to constrain the number of leaves which are dominated by this node. For example, the following query means that node #n1 must dominate exactly 5 terminal nodes:

tokenarity(#n1,5)

And the subsequent query states that node #n1 must have between 5 and 7 leaves:

tokenarity(#n1,5,7)

Continuity predicates

It may be useful to state that the leaves which are dominated by a node must form a continuous string, or that they must not. For this purpose, the two unary operators continuous and discontinuous have been introduced.

8. Type definitions

If the user has to declare the symbols which will be used in a corpus and in queries, inconsistencies in the corpus annotation and in the corpus query can be detected much more easily. TIGER allows for the declaration of type hierarchies (cf. subsection 8.2) and features (cf. subsection 8.4). Type hierarchies have to be linked to a corpus. The linking of type hierarchies is described in subsection 4.3, chapter VI.

8.1 Built-in types

There are the following built-in types in the TIGER language:

String for feature values which cannot be enumerated, such as the values of the features word or lemma
UserDefConstant for user-defined ('listable') feature values
Constant comprises both String and UserDefConstant
NT for feature constraints of nonterminal nodes
T for feature constraints of terminal nodes
FREC stands for all feature records (feature constraints), i.e. NT and T
Node stands for node descriptions
Top means anything (in the world of syntax graphs)

The hierarchy of built-in types is visualized in the following figure. Defining a type hierarchy is introduced in the following subsection. The built-in types Top, Node and Graph are only required on the conceptual level. In the current implementation, they cannot be referred to in the description language.

8.2 Definition of a type hierarchy

In the current implementation, the user can only add type definitions for feature values, i.e. for the type UserDefConstant.
Let us start with a sample type hierarchy for the pos feature:

<typedeclaration base="pos" version="1.0">
<!-- base type: pos -->
<type name="pos">
<subtype nameref="openclass"/>
<subtype nameref="closedclass"/>
<subtype nameref="punctuation"/>
<subtype nameref="misc"/>

Type declarations are encoded in an XML-based format: The type declarations for a feature constitute the contents of a typedeclaration root element. The base attribute defines the base type (or root type) of the type system. Type definition rules are encoded by type elements. The value of the name attribute is the type t which is being defined. The (direct) subtypes are given by the nameref attribute values of the individual subtype child elements. An occurrence of a type t' in a subtype element is called a use of t'. In this way, type hierarchies can be defined. The 'terminal nodes' of a type hierarchy (of constant denoting types) are constants:

<!-- open word classes -->
<type name="openclass">
<subtype nameref="noun"/>
<subtype nameref="verb"/>
<subtype nameref="adjective"/>
<!-- adverb -->
<constant value="ADV" comment="schon, bald, doch"/>

<!-- noun -->
<type name="noun">
<!-- common noun -->
<constant value="NN" comment="Tisch, Herr, [das] Reisen"/>
<!-- proper noun -->
<constant value="NE" comment="Hans, Hamburg, HSV"/>

On the basis of these type definitions, some disjunctions of feature values can be written in a more concise manner, e.g. the following query can now be replaced by the subsequent query:

[pos = ("NE"|"NN")]
[pos = noun]

Restrictions for type definitions

The first restriction means that neither recursion nor cross-classification (alternative definitions of the same type symbol) can be expressed. If you think you need cross-classification, template definitions (section 9) might be a way out. The second restriction enforces that every type must be hooked up in the type hierarchy. In total this means that type definitions define a tree-shaped type hierarchy.
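The effect of such a tree-shaped hierarchy — a type abbreviating the disjunction of its constants — can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the TIGERSearch implementation; the verb and adjective branches are left empty here because their constants appear only in the full listing of the next subsection.

```python
# The sample pos hierarchy above, reduced to direct subtypes and
# directly attached constants (illustrative sketch only).
SUBTYPES = {"pos": ["openclass", "closedclass", "punctuation", "misc"],
            "openclass": ["noun", "verb", "adjective"]}
CONSTANTS = {"openclass": ["ADV"],
             "noun": ["NN", "NE"]}

def expand(t):
    """All constants subsumed by type t; [pos = noun] thus abbreviates
    the disjunction [pos = ("NN"|"NE")]."""
    out = list(CONSTANTS.get(t, []))
    for s in SUBTYPES.get(t, []):
        out.extend(expand(s))
    return out

assert set(expand("noun")) == {"NN", "NE"}
assert set(expand("openclass")) == {"ADV", "NN", "NE"}  # verb/adjective elided
```

Because the hierarchy is a tree, this expansion terminates and yields each constant exactly once.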
Undefined types may only occur as the leaves of a type hierarchy.

8.3 Type definition example

In this subsection, the part-of-speech type hierarchy used for the TIGER Corpus Sampler (based on a modified version of the STTS tag set) is listed as an example. The file is also placed in the doc/examples/ subdirectory of your TIGERSearch installation.

<typedeclaration base="pos" version="1.0">
<!-- base type: pos -->
<type name="pos">
<subtype nameref="openclass"/>
<subtype nameref="closedclass"/>
<subtype nameref="punctuation"/>
<subtype nameref="misc"/>

<!-- open word classes -->
<type name="openclass">
<subtype nameref="noun"/>
<subtype nameref="verb"/>
<subtype nameref="adjective"/>
<!-- adverb -->
<constant value="ADV" comment="schon, bald, doch"/>

<!-- closed word classes -->
<type name="closedclass">
<!-- definite or indefinite article -->
<constant value="ART" comment="der, die, das, ein, eine"/>
<subtype nameref="proform"/>
<!-- cardinal number -->
<constant value="CARD" comment="zwei [Männer], [im Jahre] 1994"/>
<subtype nameref="conjunction"/>
<subtype nameref="adposition"/>
<!-- interjection -->
<constant value="ITJ" comment="mhm, ach, tja"/>
<subtype nameref="particle"/>

<!-- noun -->
<type name="noun">
<!-- common noun -->
<constant value="NN" comment="Tisch, Herr, [das] Reisen"/>
<!-- proper noun -->
<constant value="NE" comment="Hans, Hamburg, HSV"/>

<!-- verb -->
<type name="verb">
<subtype nameref="finite"/>
<subtype nameref="nonfinite"/>

<!-- finite verbform -->
<type name="finite">
<!-- finite full verb -->
<constant value="VVFIN" comment="[du] gehst, [wir] kommen [an]"/>
<!-- finite auxiliary verb -->
<constant value="VAFIN" comment="[du] bist, [wir] werden"/>
<!-- finite modal verb -->
<constant value="VMFIN" comment="dürfen"/>

<!-- non-finite verbform -->
<type name="nonfinite">
<subtype nameref="infinitive"/>
<subtype nameref="participle"/>
<subtype nameref="imperative"/>
<!-- infinitive with zu, full verb -->
<constant value="VVIZU"
comment="anzukommen, loszulassen"/>

<!-- infinitive verbform -->
<type name="infinitive">
<!-- infinitive, full verb -->
<constant value="VVINF" comment="gehen, ankommen"/>
<!-- infinitive, auxiliary verb -->
<constant value="VAINF" comment="werden, sein"/>
<!-- infinitive, modal verb -->
<constant value="VMINF" comment="wollen"/>

<!-- past participle -->
<type name="participle">
<!-- past participle, full verb -->
<constant value="VVPP" comment="gegangen, angekommen"/>
<!-- past participle, auxiliary verb -->
<constant value="VAPP" comment="gewesen"/>
<!-- past participle, modal verb -->
<constant value="VMPP" comment="gekonnt, [er hat gehen] können"/>

<!-- imperative -->
<type name="imperative">
<!-- imperative, full verb -->
<constant value="VVIMP" comment="komm [!]"/>
<!-- imperative, auxiliary verb -->
<constant value="VAIMP" comment="sei [ruhig !]"/>

<!-- adjective -->
<type name="adjective">
<!-- attributive adjective -->
<constant value="ADJA" comment="[das] große [Haus]"/>
<!-- adverbal or predicative adjective -->
<constant value="ADJD" comment="[er fährt] schnell, [er ist] schnell"/>

<!-- proform -->
<type name="proform">
<subtype nameref="prodemon"/>
<subtype nameref="proindef"/>
<!-- irreflexive personal pronoun -->
<constant value="PPER" comment="ich, er, ihm, mich, dir"/>
<subtype nameref="propos"/>
<subtype nameref="prorel"/>
<!-- reflexive pronoun -->
<constant value="PRF" comment="sich, einander, dich, mir"/>
<subtype nameref="prointer"/>
<!-- pronominal adverb, bug, should be "PAV" -->
<constant value="PROAV" comment="dafür, dabei, deswegen, trotzdem"/>

<!-- demonstrative pronoun -->
<type name="prodemon">
<!--
substitutive demonstrative pronoun -->
<constant value="PDS" comment="dieser, jener"/>
<!-- attributive demonstrative pronoun -->
<constant value="PDAT" comment="jener [Mensch]"/>

<!-- indefinite pronoun -->
<type name="proindef">
<!-- substitutive indefinite pronoun -->
<constant value="PIS" comment="keiner, viele, man, niemand"/>
<!-- attributive indefinite pronoun -->
<constant value="PIAT" comment="kein [Mensch], irgendein [Glas]"/>

<!-- possessive pronoun -->
<type name="propos">
<!-- substitutive possessive pronoun -->
<constant value="PPOSS" comment="meins, deiner"/>
<!-- attributive possessive pronoun -->
<constant value="PPOSAT" comment="mein [Buch], deine [Mutter]"/>

<!-- relative pronoun -->
<type name="prorel">
<!-- substitutive relative pronoun -->
<constant value="PRELS" comment="[der Hund,] der"/>
<!-- attributive relative pronoun -->
<constant value="PRELAT" comment="[der Mann,] dessen [Hund]"/>

<!-- interrogative pronoun -->
<type name="prointer">
<!-- substitutive interrogative pronoun -->
<constant value="PWS" comment="wer, was"/>
<!-- attributive interrogative pronoun -->
<constant value="PWAT" comment="welche [Farbe], wessen [Hut]"/>
<!-- interrogative adverb or adverbial relative pronoun -->
<constant value="PWAV" comment="warum, wo, wann, worüber, wobei"/>

<!-- conjunction -->
<type name="conjunction">
<subtype nameref="conjsub"/>
<!-- coordinating conjunction -->
<constant value="KON" comment="und, oder, aber"/>
<!-- comparative conjunction -->
<constant value="KOKOM" comment="als, wie"/>

<!-- subordinating conjunction -->
<type name="conjsub">
<!-- subordinating conjunction with zu-infinitive -->
<constant value="KOUI" comment="um [zu leben], anstatt [zu fragen]"/>
<!-- subordinating conjunction with sentence -->
<constant value="KOUS" comment="weil, daß, damit, wenn, ob"/>

<!-- adposition -->
<type name="adposition">
<!-- preposition -->
<constant value="APPR" comment="in [der Stadt], ohne [mich], von [jetzt an]"/>
<!-- preposition +
article -->
<constant value="APPRART" comment="im [Haus], zur [Sache]"/>
<!-- postposition -->
<constant value="APPO" comment="[ihm] zufolge, [der Sache] wegen"/>
<!-- circumposition, right part -->
<constant value="APZR" comment="[von jetzt] an"/>

<!-- particle -->
<type name="particle">
<!-- "zu" before infinitive -->
<constant value="PTKZU" comment="zu [gehen]"/>
<!-- negating particle -->
<constant value="PTKNEG" comment="nicht"/>
<!-- separated verb particle -->
<constant value="PTKVZ" comment="[er kommt] an, [er fährt] rad"/>
<!-- answer particle -->
<constant value="PTKANT" comment="ja, nein, danke, bitte"/>
<!-- particle with adjective or adverb -->
<constant value="PTKA" comment="am [schönsten], zu [schnell]"/>

<type name="punctuation">
<!-- comma -->
<constant value="$," comment=","/>
<!-- final punctuation -->
<constant value="$." comment=". ? ! ; :"/>
<!-- other punctuation marks -->
<constant value="$(" comment="- [,]()"/>

<type name="misc">
<!-- foreign material -->
<constant value="FM" comment="[Er hat das mit ``] A big fish ['' übersetzt]"/>
<!-- nonword, with special characters -->
<constant value="XY" comment="3:7, H2O, D2XW3"/>
<!-- truncated element -->
<constant value="TRUNC" comment="An- [und Abreise]"/>
<!-- untagged -->
<constant value="--"/>
<!-- tagging of the token unknown -->
<constant value="UNKNOWN"/>

8.4 Feature declarations

Feature declarations are part of the corpus definition (cf. section 11). A feature declaration states the following information: the domain of the feature, i.e. it states for which type the feature may be used. In the current implementation, features can only be declared for the built-in types NT and T. Furthermore, one cannot use a type to restrict the range of a feature; instead, the possible values for a feature have to be enumerated. One reason is that we want to keep the number of dependencies between corpus definition and type definitions as small as possible.
The other reason is that such simple feature declarations can also be constructed automatically - for those corpora which do not come with feature declarations. If the value enumeration is omitted from a feature declaration, the default range of that feature is String. For example, for the type T of feature constraints for terminal nodes, the features word, lemma, and pos may be defined as follows:

<feature name="word" domain="T"/>
<feature name="lemma" domain="T"/>
<feature name="pos" domain="T">
<value name="VAFIN">...comment...</value>
<value name="VAIMP"/>
<value name="VAINF"/>
<value name="VAPP"/>
<value name="VMFIN"/>
<value name="VMINF"/>

Please note: Each feature must be declared exactly once. The exclusion of multiple declarations for the same feature means that polymorphic overloading of a feature symbol is not permitted.

Please note: Since the TIGER description language is a typed language, the following two queries are not equivalent!

[word="das" & !(pos="ART")]
[word="das" & pos != "ART"]

The reason is that !(pos="ART") equals !(T & pos="ART") due to the corresponding feature declaration. The latter formula again is equivalent to !(T) | (pos != "ART"), i.e. either the feature pos is not defined on a type or it is defined and its value is not equal to "ART".

9. Template definitions

When working with a syntactic representation formalism such as the TIGER language, certain pieces of code will be used over and over again. Furthermore, it is good programming and grammar writing style to define generalizations. Therefore, there is an urgent need for a means of defining abbreviations, templates (cf. [Shieber1986]), or macros. We chose TIGER templates to be non-recursive logical relations. This means that, although template calls may be embedded into template definitions, there must be neither direct nor indirect self-reference in template definitions.
Hence, templates realize the 'database programming' part of a logic programming language such as Prolog (cf. [SterlingShapiro1986]). A simple scheme PrepPhrase of a prepositional phrase which consists only of a preposition (APPR) and a proper noun (NE) can be defined as follows:

PrepPhrase(#n0) <-
  #n0:[cat="PP"] > #n1:[pos="APPR"] &
  #n0 > #n2:[pos="NE"] &
  #n1.#n2 &
  arity(#n0,2) ;

The arrow <- marks a defining clause of a template definition. The <- operator is a two-place operator which takes a template head on its left-hand side and a template body on its right-hand side. The template head consists of the template name (e.g. PrepPhrase) and a list of variables, the argument parameters of the template clause. The list of argument parameters must have at least one element, because otherwise no information flow between the template body and the calling environment would be possible, i.e. the template definition would be useless. An argument parameter #x must come with enough information (in the template body or in the query context) so that one can decide which of the built-in types Constant, FREC, Node, or Graph the variable #x belongs to. The template body consists of a graph description. The end of a defining clause is marked by the ; symbol. A template definition can consist of several defining clauses (not yet implemented).

The PrepPhrase template can now be used or called in a query:

#n0 & PrepPhrase(#n0) ;

This query will return all graphs from the given corpus which contain a subtree (rooted in #n0) with the desired shape.

Please note: The '#n0 &' part is important. TIGERSearch assumes that variables which occur only in the template call but not elsewhere do not make sense. If you omit the '#n0 &', you will get an error message.

The template PrepPhrase can also be used in the body of some other template definition, e.g. in the definition of a pattern for VerbPhrase like 'geht nach Stuttgart'.
VerbPhrase(#n0) <-
  #n0:[cat="VP"] > #n1:[pos="VVFIN"] &
  #n0 > #n2 &
  #n1.#n2 &
  arity(#n0,2) &
  PrepPhrase(#n2) ;

Please note: The scope of a variable is the defining clause it occurs in. E.g. the variable #n2 in the definition of PrepPhrase is distinct from the variable #n2 in the definition of VerbPhrase.

The TIGER language interpreter will resolve the call to PrepPhrase in the following manner. It will:

1. look up the defining clause of PrepPhrase.
2. replace all the variable names by new ones (in order to avoid unintended confusion of variables from the 'calling' clause and the 'called' clause).
3. identify the node id variable #n2 in VerbPhrase with the argument #n0 of PrepPhrase's defining clause. Identification of variables means that the constraints for both variables are joined into a single constraint by a logical conjunction.

After the resolution step, the VerbPhrase clause has the following shape:

VerbPhrase(#n0) <-
  #n0:[cat="VP"] > #n1:[pos="VVFIN"] &
  #n0 > #n2 &
  #n1.#n2 &
  #n2:[cat="PP"] > #n21:[pos="APPR"] &
  #n2 > #n22:[pos="NE"] &
  #n21.#n22 &
  arity(#n0,2) &
  arity(#n2,2) ;

If we want to find out which verbs go with which prepositions, this can be done by adding the appropriate arguments or parameters.

PrepPhrase(#n0,#prep) <-
  #n0:[cat="PP"] > #n1:[word=#prep & pos="APPR"] &
  #n0 > #n2:[pos="NE"] &
  #n1.#n2 &
  arity(#n0,2) ;

VerbPhrase(#n0,#verblemma,#prep) <-
  #n0:[cat="VP"] > #n1:[pos="VVFIN" & lemma=#verblemma] &
  #n0 > #n2 &
  #n1.#n2 &
  arity(#n0,2) &
  PrepPhrase(#n2,#prep) ;

After a while, we might get interested in a more complex notion of prepositional phrases.
For that reason, a template definition may consist of several, alternative defining clauses (not yet implemented):

PrepPhrase(#n0,#prep) <-
  #n0:[cat="PP"] > #n1:[word=#prep & pos="APPR"] &
  #n0 > #n2:[pos="NE"] &
  #n1.#n2 &
  arity(#n0,2) ;

PrepPhrase(#n0,#prep) <-
  #n0:[cat="PP"] > #n1:[word=#prep & pos="APPR"] &
  #n0 > #n2:[pos="ART"] &
  #n0 > #n3:[pos="NN"] &
  #n1.#n2 & #n2.#n3 &
  arity(#n0,3) ;

The above definition is equivalent to a disjunction of the two defining clauses:

PrepPhrase(#n0,#prep) <-
  ( #n0:[cat="PP"] > #n1:[word=#prep & pos="APPR"] &
    #n0 > #n2:[pos="NE"] &
    #n1.#n2 &
    arity(#n0,2) )
  |
  ( #n0:[cat="PP"] > #n1:[word=#prep & pos="APPR"] &
    #n0 > #n2:[pos="ART"] &
    #n0 > #n3:[pos="NN"] &
    #n1.#n2 & #n2.#n3 &
    arity(#n0,3) ) ;

or, in a more packed manner:

// packed definition of PrepPhrase
PrepPhrase(#n0,#prep) <-
  #n1:[word=#prep & pos="APPR"] &
  #n1.#n2 &
  ( ( #n0 > #n1 &
      #n0 > #n2:[pos="NE"] &
      arity(#n0,2) )
    |
    ( #n0 > #n1 &
      #n0 > #n2:[pos="ART"] &
      #n0 > #n3:[pos="NN"] &
      #n2.#n3 &
      arity(#n0,3) ) ) ;

The //-symbol marks the remainder of a line as a comment.

Types vs. templates

One might think of abbreviating a disjunction of feature values by a template name:

gen-dat(#case) <- [ case = #case: ( "gen" | "dat" ) ] ;

But we recommend using a type definition instead, if possible, since this does not introduce an explicit disjunction into the structure.

gen-dat := "gen", "dat" ;

However, TIGERSearch admits only a single type hierarchy. Therefore, views which are orthogonal to this primary type hierarchy must be expressed by templates.

10. Possible extensions of the TIGER language

In this section, we discuss some possible extensions of the TIGER language and the reasons why we have not adopted them (yet).

10.1 Variables for edge labels

There are already implicit ('don't care') variables for edge labels, i.e. the unlabelled dominance operator > and the dominance wildcard >* etc. If explicit variables for edge labels were available, the user could require co-reference of edge labels.
To us it is unclear what kind of computational complexity would be introduced by this additional expressivity. Furthermore, there is a work-around: since the number of edge labels (grammatical functions) tends to be rather small, it does not seem too inconvenient to define appropriate templates which enumerate pairs of edges with the same labels.

10.2 Negated graph descriptions

Negation on the level of graph descriptions seems somewhat difficult to grasp conceptually. Since it would have to be pushed down to the level of node relations and feature values anyway, and negation is already available on these lower levels, the existing operators might provide enough expressivity.

10.3 Universal quantifier

You might be tempted to state the following expression to find trees which are rooted in a VP, but which do not contain any NP:

[cat="VP"] >* [cat=(!"NP")]

But in the TIGER language, this query has the interpretation: 'Find a (sub-)graph rooted in a VP with at least one inner node #n which is not labelled NP':

exists #child: ([cat="VP"] >* #child:[cat=(!"NP")])

The desired constraint of finding a graph which does not contain any NP requires universal quantification and the implication operator (i.e. negation of graph descriptions):

exists #node: forall #child: ( (#node:[cat="VP"] >* #child) => #child:[cat=(!"NP")] )

The use of the universal quantifier causes computational overhead, since universal quantification usually means that a possibly large number of copies of logical expressions have to be produced. For the sake of computational simplicity and tractability, the universal quantifier is (currently) not part of the TIGER language.

11. Appendix: Corpus definition

Actually, corpora for TIGERSearch have to be defined in TIGER-XML. However, TIGER-XML is a direct translation of the corpus definition sublanguage of the TIGER description language.
The restrictions for corpora are as follows:

Declaration of required features

A corpus definition must include the feature declarations for the type NT of nonterminal constraints and the type T of terminal constraints.

Please note: If no explicit feature declarations are given, in most cases they can be derived automatically from the corpus.

Single Root Node, Connectedness, No Structure Sharing

Every node in a graph except for one distinguished node (root node) has to be directly dominated by exactly one other node.

Please note: A multi-rooted graph (unconnected subgraphs) can be turned automatically into a graph with a unique root node by adding an 'artificial' root node plus the edges which point to the individual subgraphs.

Please note: A structure sharing mechanism (multi-dominance) is provided by the additional layer of 'secondary edges'. No node may (indirectly) dominate itself.

Full Disambiguation

The annotation in a corpus must be fully disambiguated: the only node relations admitted in a corpus definition are labelled direct dominance (>L) and direct precedence (.). Furthermore, a node may only use those features which have been declared for NT or for those which have been defined for T.

12. Appendix: Query Language Quick Reference

Reserved symbols

All the operators and built-in types which are listed below are reserved symbols. In particular, this means that the use of the following characters

! " # $ & ( ) * + , - . / : ; = > ? @ | { }

in edge labels, strings, constants, predicate or type names may cause problems. In these cases, a preceding escape character \ will help. Example:

[word = "\,"]

Built-in types

The sample queries for the built-in types are meant to illustrate the context where a built-in type may occur. However, these queries are not really meaningful, since they are too general.
│ Symbol │ Meaning │ Sample query │
│ Constant │ constants │ [word = Constant] │
│ String │ strings │ [word = String] │
│ UserDefConstant │ user defined constants │ [pos=UserDefConstant] │
│ FREC │ feature records │ [FREC] │
│ NT │ feature records for nonterminals │ [NT] │
│ T │ feature records for terminals │ [T] │
│ " │ constant mark │ [word = "Geld"] │

Regular expressions for constants
│ / │ regular expression mark │ [word = /Geld/] │
│ . │ unspecified character │ [word = /sag./] │
│ * │ unrestricted repetition │ [lemma = /spiel.*/] │
│ + │ repetition with minimum 1 │ [word = /.+[0-9A-Z]+.*/] │
│ ? │ optionality │ [word = /(Leben)s?/] │
│ [ ] │ character set │ [word = /.+[0-9A-Z]+.*/] │
│ ^ │ negated character sets │ [word = /[^0-9A-Z].*/] │
│ ( ) │ grouping │ [word = /([lmnp][aeiou])+/] │
│ | │ disjunction │ [word = /[dD](as|er)/] │
│ \ │ escape for reserved characters │ [word = /.*\-.*/] │

Feature-value pairs
│ = │ feature-value pair │ [pos = "NN"] │
│ != │ negated feature-value pair │ [pos != "NN"] │

Graph predicates
│ root( ) │ root of a graph │ root(#n) │
│ arity( , ) │ arity of a node │ arity(#n,2) │
│ arity( , , ) │ │ arity(#n,2,4) │
│ tokenarity( , ) │ number of dominated leaves │ tokenarity(#n,5) │
│ tokenarity( , , ) │ │ tokenarity(#n,5,8) │
│ continuous( ) │ continuous leaves │ continuous(#n) │
│ discontinuous( ) │ discontinuous leaves │ discontinuous(#n) │

Dominance relation
│ >L │ labelled direct dominance │ [cat="NP"] >NK [cat="NP"] │
│ │ │ [cat="NP"] >OA\-MOD [cat="NP"] │
│ > │ direct dominance │ [cat="NP"] > [pos="NE"] │
│ >* │ dominance │ [cat="NP"] >* [pos="NE"] │
│ >n │ dominance, distance n │ [cat="NP"] >2 [pos="NE"] │
│ >m,n │ dominance, distance m..n │ [cat="NP"] >2,3 [pos="NE"] │
│ >@l │ left corner │ [cat="NP"] >@l [word="die"] │
│ >@r │ right corner │ [cat="NP"] >@r [word="Jahr"] │
│ $ │ siblings │ [word="die"] $ [cat="NP"] │
│ $.* │ siblings with precedence │ [word="etwas"] $.* [cat="NP"] │
│ !>L │ neg. labelled direct dominance │ [cat="NP"] !>GR [cat="NP"] │
│ !> │ neg. direct dominance │ [cat="NP"] !> [pos="NE"] │
│ !>* │ neg. dominance │ [cat="NP"] !>* [pos="NE"] │
│ !>n │ neg. dominance, distance n │ [cat="NP"] !>2 [pos="NE"] │
│ !>m,n │ neg. dominance, distance m..n │ [cat="NP"] !>2,3 [pos="NE"] │
│ !>@l │ neg. left corner │ [cat="NP"] !>@l [word="etwas"] │
│ !>@r │ neg. right corner │ [cat="NP"] !>@r [word="etwas"] │
│ !$ │ neg. siblings │ [word="etwas"] !$ [cat="NP"] │
│ >~L │ labelled secondary edge │ [cat="VP"] >~HD [cat="NP"] │
│ >~ │ secondary edge │ [cat="VP"] >~ [cat="NP"] │
│ !>~L │ neg. labelled secondary edge │ [cat="VP"] !>~HD [cat="NP"] │
│ !>~ │ neg. secondary edge │ [cat="VP"] !>~ [cat="NP"] │

Precedence relation
│ . │ direct precedence │ [word="die"] . [pos=noun] │
│ .* │ precedence │ [word="die"] .* [pos="NN"] │
│ .n │ precedence, distance n │ [word="die"] .2 [pos="NN"] │
│ .m,n │ precedence, distance m..n │ [word="die"] .2,4 [pos="NN"] │
│ !. │ neg. direct precedence │ [word="etwas"] !. [pos="NN"] │
│ !.* │ neg. precedence │ [word="etwas"] !.* [pos="NN"] │
│ !.n │ neg. precedence, distance n │ [word="etwas"] !.2 [pos="NN"] │
│ !.m,n │ neg. precedence, distance m..n │ [word="etwas"] !.2,4 [pos="NN"] │

Variables
│ [f=#v] │ variable for a feature value │ [pos=#x:("NN"|"NE")] . [pos=#x] │
│ [#x] │ variable for a feature description │ [#x:(pos="APPR")] .* [#x] │
│ #n │ variable for a node identifier │ #n:[cat="NP"] & #n > [pos="ADJA"] & #n > [pos="NN"] │

Boolean expressions
│ ( ) │ bracketing │ [pos=(!("NN" | "ART"))] │
│ ! │ negation (feature values) │ [pos=(!"NN")] │
│ │ negation (feature constraints) │ [!(pos="NN")] │
│ & │ conjunction (feature values) │ [pos=(!"NN" & !"NE")] │
│ │ conjunction (feature constraints) │ [pos="NE" & word="Bonn"] │
│ │ conjunction (graph descriptions) │ #n1>#n2 & #n3.#n2 │
│ | │ disjunction (feature values) │ [pos=("NN" | "NE")] │
│ │ disjunction (feature constraints) │ [pos="NE" | word="es"] │
│ │ disjunction (graph descriptions) │ #n1>#n2 | #n1>#n3 │

Please note: Negation (!) must not have scope over variables.
Please note: A Boolean expression for a feature value must always be put into parentheses (e.g. pos=("NN"|"NE")).

Comments
│ // │ line comment │ [pos="NE"].[pos="NE"] // 2-word proper nouns │
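The universal-quantifier discussion in section 10.3 can also be illustrated outside the query language. The following Python sketch is hypothetical code, not part of TIGERSearch; all names and the toy tree are mine. It models why the existential reading of `[cat="VP"] >* [cat=(!"NP")]` differs from the intended "VP dominating no NP":

```python
# Toy model of the two readings from section 10.3.
# tree: node -> children (direct dominance); cat: node -> category label.

def descendants(tree, node):
    """All nodes (transitively) dominated by `node`."""
    out = []
    for child in tree.get(node, []):
        out.append(child)
        out.extend(descendants(tree, child))
    return out

tree = {"s": ["vp"], "vp": ["np", "pp"], "np": [], "pp": []}
cat = {"s": "S", "vp": "VP", "np": "NP", "pp": "PP"}

def exists_reading(tree, cat, node):
    # 'VP dominating at least one non-NP node' -- what TIGER actually matches.
    return cat[node] == "VP" and any(cat[d] != "NP" for d in descendants(tree, node))

def forall_reading(tree, cat, node):
    # 'VP dominating no NP' -- would need universal quantification.
    return cat[node] == "VP" and all(cat[d] != "NP" for d in descendants(tree, node))

print(exists_reading(tree, cat, "vp"))  # True: the PP satisfies the existential query
print(forall_reading(tree, cat, "vp"))  # False: an NP is dominated
```

The VP here dominates both an NP and a PP, so the existential query matches (the PP is a non-NP descendant) even though the intended universal constraint fails.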
Math Mama Writes... I've written a chapter on gender for my book, Playing With Math: Stories from Math Circles, Homeschoolers, and the Internet . It doesn't quite fit with the rest of the book. When Maria was visiting last month she asked what it had to do with play. The best response I could give her was: You’ve got to want to play the game. It’s not my game if there are no role models for me. I want to ask readers of my blog a few questions: 1. Imagine an alternate universe that's somehow matrifocal (women at the center). [Yes, yes, that's quite a stretch, and my son would complain - that's not fair! But let's just imagine this for a few minutes...] What would math look like in your version of this universe? Go wild... 2. Yes Magazine does not shy away from the hard issues, but it does come at them from as positive a perspective as possible, addressing how people are working to improve things. I'd like my chapter to do the same. I would love to hear your ideas in that regard - How can we heal? How can we support girls? 3. What do social justice issues have to do with play, anyway? Note: Here's what I've said before on this topic. Starting a Math Circle A little over a year ago, someone on Living Math Forum asked for advice. She wanted to start a math circle. I was in a hurry, and gave her some quick thoughts. Here are her questions and my replies: Rodi wrote: Some homeschool families in our community are hoping to start a math club, or even a math circle. I was hoping that some of you with experience could help with some start-up advice, such as: 1. What's the widest age spread a math club or circle can handle? (We have kids from 6-13 right now who might be interested.) Your format will determine how wide an age and skill-level spread you can handle. 2. Can someone lead a math circle without formal training and have it still be great? (More than anything, we don't want an unsuccessful attempt to turn kids off to math) Definitely. 
Collect some good math materials and let kids browse, if you want to see what they find enticing. (I recommend Polydrons, Set, Blink, Blokus, pentominoes, tangrams, graph paper, Math Without Words, by James Tanton, ...) 3. Are there any math clubs or circles we could visit within reasonable driving distance from Philadelphia? Penn State seems to have one. (I found it here.) 4. What training do you recommend (formal or informal), and how would we access that? Second week of July, Math Circle Institute. It's fabulous. I went 3 years in a row. It's at Notre Dame. 5. What kind of support is out there in terms of finding topics to cover with the kids? Lots. I can point you to bunches this weekend if you don't get enough pointers here. Start with joining the NaturalMath google group that Maria Droujkova runs, to watch what she does in her NC group. Follow my blog, Math Mama Writes, for ideas, although I blog about other math stuff too. Follow Math for Love - Dan's doing great work in the Seattle area. There's also a free book online called Circle in a Box. You are welcome to email me (suevanhattum on hotmail) for more ideas, and I'd be happy to talk on the phone with you. We did end up talking on the phone, and Rodi decided to attend the Math Circle Institute. I didn't find out until recently that one of her cohorts at Talking Stick Learning Center had already urged her to attend it, having just heard Bob & Ellen Kaplan being interviewed on NPR - talking about their math circles and their philosophy. Rodi knew exactly what she wanted. She figured she could learn the math anywhere. What she wanted was to figure out how Bob & Ellen did their magic. So she took copious notes whenever Bob was presenting. (She didn't get as many chances to see Ellen in action.) She and Bob have graciously allowed me to share those here. Think of this as ways to lead without giving away too much. 
Bits of Bob (collected during the math circles held at the 2011 Math Circle Institute)

Imagine that an "accessible mystery" has been posed - an intriguing math problem that will have the participants scratching their heads for the next hour. Bob is up front, but eager to disappear. Here are a few things he said last summer in service to that goal:

• By the way, "obviously" means "I don't know what the heck I'm talking about."
• I'm going to put something on the board. Raise your hand but don't say anything if you recognize the pattern.
• When we said "the pattern" we made a mistake; we should have said "a pattern."
• Most of math is unknown – like a big piece of cheese and we're a little mouse nibbling at it.
• What a good way to put it!
• I don't know, I'm just the secretary.
• Math is freedom. ... I don't know.
• Let's play function machines. What sound do you want it to make?
• (If kids want to do their own function machine) Do you have a rule in mind? Can you handle all the numbers they might give you? ("I guess the machine needs oiling" if something doesn't work out.)
• (If kids don't get it, give hints, rearrange inputs, if necessary, deform the machine.) What do you think is going on with the machine?
• Do you see….
• Why is/are….
• What an interesting idea. Why?
• That's great, but are you really sure about that? Is 19 really less than 18?
• Sounds good, sounds right, it could work with this, but how could you convince a martian or a skeptic….
• Always simplify to something your intuition can glom onto.
• Math is an art – the art of choosing the best… (i.e. circle of inversion in inversive geometry)
• Can we make this simpler?
• This may not work, but it might!
• ("I'm confused" said someone) I sympathize.
• I have a terrible memory for these things so I'm going to put them on the board.
• Take a wild guess: 17? 3 1/2?
• That's a good point.
• That's a good thing you're doing.
• Ah!
• Hey, that's terrific!
• I'm bothered that this is an odd number.
• This is great thinking, by the way.
• What would a harder problem be?
• Are they the same? Anyone think no?
• Take a risk. You can guess or take a risk and be wrong. Sometimes it's fun to be wrong.
• I'm getting confused – we have too many examples up here.
• S, what's your guess, same or different? (to girl not participating)
• That's a good clarification – thanks. (to a question)
• I'm in complete doubt – let's do it out.
• Guesses about the answer? I'll guess 204.
• That's an interesting discovery: you can't have….
• Yes! Terrific!
• I'm convinced we've done everything we can with….
• I'm not convinced….
• Wait, can I just check?
• Wait, you're going too fast for me.
• What do you think?
• What's a way to be systematic in exploring this?
• That is great work since it just got so much harder.
• Exactly. Give us the argument again. Why?
• I have a feeling that gamblers know this kind of thing. Why would they?
• I've got a weird question. What if you had?
• You've found an economical way of thinking about it?
• Oh, nice idea.
• Why? I'm sure you're right, I just don't see it.
• 12 and 24 are both in the same family, so they're both good guesses.
• I'm not sure I understand why….. I get it.
• (time's up) You've done an incredible amount. I think leaving it with thinking about…. Email us with what you get.
• This is puzzling to me. What's the area under here? Figure it out and email me. (to student after class, wanting more challenge)

Sometimes we all struggle with the student who knows it all, and wants to play math with us, not noticing the rest of the group. Bob addressed that sort of thing with these comments:

• That's not the game we're playing here.
• You may be right, but that isn't really interesting. What's interesting is that we're working together. The problem is what's important.
• Intellectual activity isn't a competitive sport.
• That's probably what many others are thinking, but it's not important.
• Math is an art, not a sport.
• I want to hear a new voice this time.
• Wait wait wait, first what do YOU think. ('wait's to boy dominating, rest to the others)

Talking Stick Learning Center

I found out about this wonderful collection of quotes when I asked Rodi if I could interview her about her experience. She went from asking for generic math circle advice to running an amazing math circle and blogging about it, all within well under a year. I love her reports about her math circle, and how she integrated her "mindfulness practices" into her math circle.

Rodi's concept of how a math circle begins, as she learned about them from Bob, Ellen, and Amanda:

• Ask an interesting question.
• Throw out the history behind it.
• Bring in other aspects of life that are related.

She may have learned that from them, but to me her approach has a new feel to it. One aspect I love is how she brings in mindfulness. This comes from a post last October:

Some kids were getting distractingly physical with some of the math manipulatives on the table, so we engaged in an attention-focusing activity: the Bobble-Head doll. The Bobble-Head doll (who is "a distant relative of the man who owns the zoo") sat in the middle of the table. I tapped his head, which is on a spring, and told the kids that they had to watch until it stopped moving, then put their own heads down. The doll never stopped; with every fidget (and possibly truck passing outside) the bobbling/vibrations increased. At this point, attentions were sharpened, and we decided to put him away and try him on the floor next time. I told the kids that the doll is somehow related to math, and with that, we were ready to return to our story.

And this, from two weeks later:

"Something is in the air today," said Talking Stick co-director Angie. The kids came in brimming with energy, and most came early. As we waited for the last child to arrive (still early), four of the kids were at the table writing newspaper articles.
Soon I asked them to put their papers on the windowsill. They complied a bit reluctantly, and I pulled out a small musical instrument in the shape of a triangle. I asked, “Who knows the name of this instrument?” “A wind chime!” guessed J. No one knew for sure so I gave the hint that its name is a shape name. “Triangle” called the group in unison. I instructed, “When I strike this and you think the sound has ended, it will not have and you’ll be wrong. Listen harder. Then put your head down when it’s really done.” I struck it, heads went partially down, back up, and then down again. M asked whether eyes should be open or closed, and I said “whatever you think – you could even try both ways.” O said “You mean like this?” and closed one eye. N said “I can’t do that,” so I suggested covering an eye with a hand like a pirate’s eye patch. We focused our attention with three triangle chimes before I asked them to recall what was happening in our zoo story last week. Enjoy more of her math circle reports at her Talking Stick Learning Center math circle blog. (Talking Stick is a learning center for homeschoolers, offering a number of different classes. Rodi's math circles are only one part of their offerings.) After less than a year of leading her math circle, Rodi was invited to present at the Circle on the Road conference hosted by MSRI (Mathematical Sciences Research Institute). She shared these bits of her presentation with me: Eight Things I Try to Remember 1. Practice detachment. (Don’t try to hold on to your agenda. Let it go if the kids are moving in another direction that is still math.) 2. Approach things with a “Beginner’s Mind.” (Don’t always know the answer. It’s okay not to be a mathematician. Pick topics that interest you and that you don’t know a lot about. ) 3. Listen more. (When in doubt about what to say, shut up.) 4. Include relevant history, arts, philosophy, and fictional narratives. 5. Let kids move. 
(If J wants to stand on her head while pondering an interesting or difficult mathematical question, it's okay as long as she's not interfering with anyone else's productivity.) 6. Remind kids that math is not equivalent to arithmetic. Remind them repeatedly. 7. Appreciate and encourage different avenues of engagement: graphic/geometric, numeric, algebraic, logical, etc. Different kids will approach the same problem with different styles. 8. Have fun. Enjoy seeing things from a child's perspective. (At my April 10 Math Circle, 12-year-old M was quite surprised to learn that Sonya Kovalevsky was not paid in hot dogs. Turns out francs and franks are two different things.) I want to get better at #4, I think. If you'd like to meet Rodi and me, Bob & Ellen Kaplan, Amanda Serenevy, and more math enthusiasts than you thought possible, come to the Math Circle Institute, July 8-14, at Notre Dame. If you haven't seen my previous posts about how fabulous it is, read this one and this one. While I was exploring all the delightful blogs I heard about in response to my list of women math bloggers, I came across something wonderful. Some high school math teachers decided they could make their own professional development conference. It's Math Camp 2012, happening in St. Louis, July 19 to 22. Guerrilla Professional Development, organized by and for (high school?) math teachers. Sam Shah and Cheesemonkey are involved in this, so I'm convinced it's gonna be good. I have a feeling the other organizers, Shelli Temple and Lisa Henry, will be familiar names by next year. I'm going to the Math Circle Institute in early July. If I had more money and more time, I'd go to this Math Camp too. Dan Meyer is doing a cool project called 101 Questions. It's not something that grabs me personally, but I think it will help lots of students connect to math. Elizabeth wondered why so many more men have posted videos and photos to the 101 Questions site (101qs.com) than women.
Dan posted her question and asked for our thoughts - we're up to 48 responses. (One fascinating comment compared the site to video arcade games, and mentioned research that documents their greater appeal to boys.) Someone hypothesized that there are lots more male math bloggers. Although that may be true, I know of many great math blogs by women, and offered to post a list. What I've compiled below comes from my Google Reader. If you know of any others (including your own!), please add them in the comments. I'll edit the list to include any that I like. As I perused my list, I noticed that the blog lists people include on their blogs often reflect their own gender. (There are substantially more women authors than men in my soon-to-be-published book, Playing with Math: Stories from Math Circles, Homeschoolers, and Passionate Teachers, so women definitely speak to me and my concerns.) [Note: I've edited this a bunch since posting it. Good thing school is over, so I can explore new blogs.] High School & College Level Math for Little Ones Also, if you're interested in math for young kids, check out two email groups: Living Math Forum and Natural Math. On Hiatus (great content in the archives) ... and we had to evacuate. So we did our final outdoors. (This was an early final, so they'd have two chances. They had the option to just leave and take the official final on Monday. Everyone wanted their two chances, and worked hard on the hillside.) This looks interesting... A 'teacher' has to sign up for a group of at least 4 students. Homeschoolers may want to join together to sign up. The Education Arcade at the Massachusetts Institute of Technology (MIT) has announced the Lure of the Labyrinth Challenge, a free online math challenge for grades 6-8. While playing Lure of the Labyrinth, students use mathematical thinking skills to progress through a compelling graphic-novel story. There is no cost involved to participate in the challenge, which runs through June 15. 
Since the game is web-based, students can play at home or at school, in the classroom, computer lab, library, or after-school program. Visit http://lureofthelabyrinth.net to sign-up for the Challenge! If you're frustrated with all the teacher bashing and standardized testing, here's an alternative idea: National Board Certification helps teachers deepen their professional practice. A group of 20 teachers at a low-income, urban elementary school, Mitchell School in Phoenix, Arizona, went through this process together. You can get Mitchell 20, the movie made about their journey, for $5 this week. I've bought mine, but may not be able to watch it until the weekend. I hear it's quite inspiring.
Math Goggles #11 – Just Listen

All the Math Goggles challenges so far had to do with noticing math with your eyes. But for this week's challenge, let's try to just listen. Let's listen to the math in what our children talk about. I don't mean like when we ask them what they did today in preschool, kindergarten or school. And I don't mean like when we quiz them on how many teddy bears are in the room or what shape is the kitchen table. Let's listen to the math children bring up on their own.

Our contributor, Malke Rosenfeld of Math in Your Feet, frequently describes such math chats on her blog. Here's an example from a recent post:

Seven-year-old is pushing cart around the store, narrating as she goes: "Go forward, now one quarter turn to the right, now go forward, parallel park. Okay, now turn half way around, go straight, one quarter turn…"

Here's my six-year-old who is waiting impatiently for his first baby tooth to fall out, but it seems it won't ever happen:

Mama, I have a tiny hope, and it's quickly approaching zero, that this tooth will fall out soon.

Or David Wees's "Decomposing Fractions" post, in which he retells a conversation with his son:

Daddy, I'm full. I had 1 and a half…no, one and a quarter slices of pizza which is the same as five quarters of pizza," said my son at dinner tonight…

By the way, David's whole project, Math Thinking, is about children sharing their mathematical thoughts.

So this week, let's just listen. You might be surprised at how your child looks at things, at math ideas she explores on her own, and at mathematical reasoning behind what she says. You may also share your observations here on the blog.
Poly wants a cracker

Some updates to the zeptractor, probably done for now until I figure out if this was just a humongous waste of time or not. Well, at least I got to learn something about SVG drawing.

1. Added assorted famous regular polygons: triangle, square, pentagon, hexagon, heptagon, octagon, dodecagon.
2. Resized inner circle to divide the radius in the golden ratio. Also marked where the circumference would be divided in the golden ratio (both directions, measuring from zero).
3. Added radian marks (up to 6, only in the normal direction).
4. Added tau divisors … they're more logical than the pi divisors. Arial has a sucky tau symbol.
5. Put a phi symbol on the bare radius to avoid confusion.

Here's a simpler version without the regular polygons.
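For readers curious how items like these can be generated, here is a small Python sketch of the underlying geometry: regular-polygon vertices for an SVG `<polygon>` element, plus a golden-ratio division of the radius. This is my own illustrative code, not the zeptractor's source; all function names are assumptions.

```python
import math

PHI = (1 + 5 ** 0.5) / 2  # golden ratio, ~1.618

def polygon_points(n, r, cx=0.0, cy=0.0):
    """Vertices of a regular n-gon of radius r, first vertex at the top."""
    pts = []
    for k in range(n):
        a = 2 * math.pi * k / n - math.pi / 2  # start at 12 o'clock
        pts.append((cx + r * math.cos(a), cy + r * math.sin(a)))
    return pts

def svg_polygon(n, r):
    """Render the n-gon as an SVG <polygon> element string."""
    pts = " ".join(f"{x:.2f},{y:.2f}" for x, y in polygon_points(n, r))
    return f'<polygon points="{pts}" fill="none" stroke="black"/>'

# Dividing a radius of 100 units in the golden ratio:
inner_radius = 100 / PHI  # ~61.8
print(svg_polygon(5, 100))
```

The same `polygon_points` helper covers every polygon in the list above just by changing `n` (3 for the triangle through 12 for the dodecagon).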
Gamma and Linear Spaces

Gamma correction for 3D graphics

Gamma correction is a characteristic of virtually all modern digital displays, and it has some important implications for the way we work with images. The term gamma is often used in a confusing way, as it gets mapped to different concepts, so in this article I will always put a second word with it to provide more context; the word gamma itself comes from the exponent of the encode/decode function used by the non-linear operation known as gamma correction.

\(y=x^\gamma\)

The reason why we have gamma correction in the first place goes back to CRT displays. The technology used by these monitors to light the pixels introduced a non-linear conversion from the energy in input to the energy in output. This means that if we send a value of 128 to the monitor it wouldn't appear at 50% of the monitor's maximum brightness, but would be noticeably darker. To actually get a linear response, so that our 128 looks like a mid gray, we need to apply the inverse of the monitor's transformation to the input (intuitively, if 128 is too dark we send in a bigger number to compensate). The transformation function is the non-linear operation \(y=x^\gamma\), where \(x\) is our input value, \(y\) is what we see on the monitor and \(\gamma\) is an exponent that depends on the physical technology used in the CRT.

Why are we still interested in this, then? We don't use CRT monitors anymore! Well, by a fortunate coincidence, using that operation also improves the visual quality of the signal, as we happen to allocate more information in the areas where the human eye is more sensitive (darker shades).

The non-linear operation that the display applies to the signal is called gamma decode or gamma expansion. The operation that we apply to compensate for this is called gamma encode or gamma correction.
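The two operations can be written as a couple of one-line functions. This is a minimal numeric sketch in Python (the function names `gamma_encode`/`gamma_decode` are mine, chosen to match the terminology above, not an API from any graphics library):

```python
GAMMA = 2.2  # typical display exponent

def gamma_decode(x, gamma=GAMMA):
    """What the display does to its input (gamma expansion)."""
    return x ** gamma

def gamma_encode(x, gamma=GAMMA):
    """The compensating correction we apply before output."""
    return x ** (1.0 / gamma)

# Round trip: encoding then decoding recovers the linear value,
# because (x^(1/g))^g = x.
linear = 0.5
shown = gamma_decode(gamma_encode(linear))
print(round(shown, 6))  # 0.5
```

Note that encoding brightens a value (0.5 becomes about 0.73) precisely so that the display's subsequent decode darkens it back to 0.5.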
The function used for these operations is always the same, \(y=x^\gamma\), and the only thing that changes is the gamma parameter; usually for decoding it is set to 2.2, and for encoding to \(\frac{1}{2.2}\).

Figure: Plot of the two operations with gamma values of 2.2 and \(\frac{1}{2.2}\)

The reason why all this is important for computer graphics is that we work with different elements that expect the signals to be in a certain space. The display expects the signal to be gamma encoded, since it will apply a gamma decode to it, and because of this the graphics tools we use store images after applying the gamma correction. This means that virtually all the images we have on the hard disk are stored after gamma correction, and if we could open them on a linear display we would see them brighter than we would expect.

Figure: Same asset shaded with and without the gamma correction applied

First Step

Let's look at one problem at a time. The first issue is feeding the monitor the right input. We know that the monitor is going to apply a gamma expansion to anything we send to it. If we don't encode the signal and send a linear input to the monitor, this will get gamma decoded without ever being gamma encoded, producing a different result from what we actually want. This means that if we have a sphere lit by a point light and shaded with a simple Lambert equation we would get the following visual output:

Monitor Decode: \(monitor\_output = shader\_output ^ {2.2} = dot(N,L) ^ {2.2} \)

Which is not what we want! To actually get the dot product to the monitor without the extra exponent operation we need to apply the gamma encode first, which gives us:

\(shader\_output=dot(N,L) ^{ \frac{1}{2.2} }\)

Monitor Decode: \(monitor\_output = shader\_output ^ {2.2} = (dot(N,L)^{\frac{1}{2.2}}) ^ {2.2} = dot(N,L) \)

Figure: Left, Lambert shading without gamma encoding; right, with gamma encoding

Now the monitor is showing the desired output.
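The "First Step" above can be checked with plain numbers. A scalar stand-in for \(dot(N,L)\) of 0.5 shows how much darker the unencoded output appears (an illustrative sketch; variable names are mine):

```python
GAMMA = 2.2
n_dot_l = 0.5  # scalar stand-in for the Lambert term dot(N, L)

# Without encoding: the monitor's decode darkens the image.
seen_wrong = n_dot_l ** GAMMA  # ~0.218, much darker than the intended 0.5

# With encoding: the monitor's decode cancels the correction.
seen_right = (n_dot_l ** (1 / GAMMA)) ** GAMMA  # ~0.5, as intended

print(round(seen_wrong, 3), round(seen_right, 3))  # 0.218 0.5
```

A mid-intensity Lambert term displayed at roughly a fifth of full brightness is exactly the "quite a lot of black in the diffuse falloff" effect discussed later in the article.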
To get the right result from the shader we have to pay for an extra pow function, which is not ideal, but fortunately there is another way to obtain the same result by notifying the hardware of what is going on. In DirectX 11 we can flag the back buffer with a format that ends with _SRGB to tell the driver to do the encode for us when we write to it.

Second Step

So now we know how to provide the right input to the monitor, but there is another problem that needs addressing: texture reads. As we said, the graphics tools we use tend to save their data after gamma encoding it. This description is a bit imprecise, though: what really happens is that, depending on your tool's settings, you will see the image with or without the encoding applied. If no encoding is applied, any colours you see will be darker, so to compensate you will end up picking brighter colours than normal, which means that the image saved on disk will be brighter. Camera devices do a similar thing by gamma encoding the picture before saving it to the memory card.

Now imagine what would happen if you were to read a texture as it comes from the disk and then render it to the screen. Let's see it in pseudo-code with the gamma correction applied for the monitor:

\(shader\_output = tex2D(diffuse\_sampler, UV) = original\_pixel ^ \frac{1}{2.2} \)

\(monitor\_output = (shader\_output^\frac{1}{2.2}) ^ {2.2} = ((original\_pixel ^ \frac{1}{2.2})^\frac{1}{2.2}) ^ {2.2} = original\_pixel ^ \frac{1}{2.2} \)

Original pixel is the value we painted into the image, while \(original\_pixel ^ \frac{1}{2.2}\) is what the software has saved to the disk and therefore the value we read back. As you can see, the output is wrong: it is the original pixel as saved by the software, i.e. with the gamma encode applied.
This means that every texture we read that doesn't contain generated data needs to be gamma decoded (normal maps, height maps and so on are usually not affected, since they are typically generated by software and saved without the gamma correction). In pseudo-code:

\(shader\_output = pow( tex2D(diffuse\_sampler, UV), 2.2) = (original\_pixel ^ \frac{1}{2.2}) ^ {2.2} = original\_pixel \)

Also in this case the hardware allows us to read the texture while gamma decoding it. In DirectX 11 we can flag the texture we read from with a format that ends with _SRGB to tell the driver to do the decode for us when we read from it.

To try to clarify what happens if you get the encode and decode sequence wrong, I have put together a simple test that combines all the possible combinations you can get with a single texture and the backbuffer in DirectX 11. The black and white stripes pattern can be used to figure out what gamma you are seeing. Assuming that your browser hasn't resized the image, if you step back enough the stripes will become indistinguishable and you will see just a solid colour, which supposedly is your mid gray (you may want to open the image in a tool that doesn't resize it to better appreciate the effect). Because gamma decoding pure black gives pure black, and decoding pure white gives pure white, the colour we see is effectively your actual mid gray.

The first case is the classic case that we had been running until all this gamma correction sequence was properly understood, and as you can see it's mostly fine. What is worth pointing out is that the diffuse falloff has quite a lot of black in it and that the specular is "burned" and is not pure white (even though we are adding the same value to all channels!). This is due to the gamma exponent applied at the end by the monitor to our math. The left bit where no math is applied shows correctly.
It's important to notice how the striped pattern matches 187 rather than 128; this is because 187, once sent to the monitor, gets gamma decoded, which translates to \(0.7333^{2.2} \approx 0.5\) (0.7333 is 187 normalized to 255).

The third case looks darker, and this is because we decode more times than we encode. When we read the texture data we correctly decode it, but then when we send it to the monitor we don't gamma encode it as we should. The monitor is not aware of this and applies its gamma decode anyway, which means we gamma decode a signal that was never encoded. Notice how in this case 187 is no longer mid gray, because the value erroneously sent to the monitor is 0.5, which is then converted to roughly 0.22.

The second case is very similar to the third one but looks bright rather than dark, and this is because we encode more times than we decode.

Finally, the fourth case is doing what we want: it encodes and decodes the right number of times, and the math in the shader is correctly converted where needed while using the texture data in the correct space.
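The correct pipeline (the "fourth case") can be summarized end to end with the same 187 mid gray used in the test image. This is an illustrative numeric sketch; the helper names stand in for the _SRGB read/write paths and are mine, not the article's:

```python
GAMMA = 2.2

def srgb_to_linear(v8):
    """Texture read with decode (the _SRGB read path), 8-bit to [0,1]."""
    return (v8 / 255.0) ** GAMMA

def linear_to_srgb(v):
    """Backbuffer write with encode (the _SRGB write path), [0,1] to 8-bit."""
    return round(255.0 * v ** (1.0 / GAMMA))

texel = 187                     # mid gray as stored on disk (gamma encoded)
linear = srgb_to_linear(texel)  # ~0.5: physically mid gray in linear space
shaded = linear * 1.0           # lighting math happens in linear space (fully lit)
out = linear_to_srgb(shaded)    # ~187 goes to the display

display = (out / 255.0) ** GAMMA  # the monitor's own decode
print(out, round(display, 2))     # 187 0.51 -- mid gray reaches the viewer
```

Dropping either the decode on read or the encode on write reproduces the brightened second case or the darkened third case described above.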
Semantics of Programming Languages
Professor: Prof. Tobias Nipkow
Lecture: Monday 8:30-10:00 and Thursday 12:15-13:45 (pre-recorded)
Tutorial: Monday 10:15-11:45 (remote)
First Lecture: November 2, 2020
Language: English
TUMonline: IN2055
Moodle: https://www.moodle.tum.de/course/view.php?id=57941
Submission System: https://competition.isabelle.systems
Piazza: piazza.com/tum.de/winter2021/in2055
Tutorial Sessions: https://bbb.rbg.tum.de/fab-k3t-f7g
Tutorial Organizers: Fabian Huch
• Given the current situation, the tutorial will take place as a remote session.
• Links to the tutorial BBB recordings are on Moodle. If you need access to the Moodle course, write an email to {huch} AT [in.tum.de].
• The final exam will be held online. It will consist of two parts: 1. an Isabelle part, using the submission system, and 2. a theoretical part, which can be submitted either as a comment within the theory file or as a scan or (readable) photo of your worksheet.
• The exam will take place in the BBB room. An online exam test run will be held on Monday, 15.2., in the usual tutorial time slot.
• Book: Concrete Semantics, including slides and demo theories.
• Lecture videos: we will reuse many of the recordings from 2019 together with some new recordings from 2020 (→ IN2055: Semantics of Programming Languages).
• Old exam, solution. Of course this year's exam will be a bit different due to the new format.
Exercise 1: 2.11.2020 - 8.11.2020, exercise sheet
Exercise 2: 9.11.2020 - 15.11.2020, exercise sheet, thy file
Exercise 3: 16.11.2020 - 22.11.2020, exercise sheet, thy file
Exercise 4: 23.11.2020 - 29.11.2020, exercise sheet, thy file
Exercise 5: 30.11.2020 - 6.12.2020, exercise sheet, thy file
Exercise 6: 7.12.2020 - 13.12.2020, exercise sheet, thy file
Exercise 7: 14.12.2020 - 20.12.2020, exercise sheet, thy file
Exercise 8: 21.12.2020 - 10.1.2021, exercise sheet, thy file
Exercise 9: 11.1.2021 - 17.1.2021, exercise sheet, thy file
Exercise 10: 18.1.2021 - 24.1.2021, exercise sheet, thy file
Exercise 11: 25.1.2021 - 31.1.2021, exercise sheet, thy file
Exercise 12: 1.2.2021 - 7.2.2021, exercise sheet, thy file
Homework is the heart and soul of this course.
• Solved homework should be uploaded to the submission system, according to the instructions on the first exercise sheet. Make sure that your submission gets a “Passed” status in the system. We will not grade it otherwise!
• The latest submission date is given on each exercise sheet. Late submissions will not be graded! If you have a good excuse (such as being very sick), you should contact the tutors before the deadline.
• Each homework will get 0 to 10 points, depending on the correctness and quality of the solution.
• Discussing ideas and problems with others is encouraged. When working on homework problems, however, you need to solve and write up the actual solutions alone. If you misuse the opportunity for collaboration, we will consider this cheating.
• Plagiarizing somebody else’s homework results in failing the course immediately. This applies to both parties, that is, the one who plagiarized and the one who provided his/her solution.
• Important: all homework is graded and contributes 50% towards the final grade.
The aim of this course is to introduce the structural, operational approach to programming language semantics.
It will show how this formalism is used to specify the meaning of some simple programming language constructs and to reason formally about semantic properties of programs and of tools like program analyzers and compilers. For the reasoning part the theorem prover Isabelle will be used. At the end of the course students should:
• be familiar with rule-based presentations of the operational semantics of some simple imperative program constructs,
• be able to prove properties of an operational semantics using various forms of induction, and
• be able to write precise formal proofs with the theorem prover Isabelle.
Important Notice
• You must be familiar with the basics of some functional programming language like Haskell, Objective Caml, Standard ML or F# (as taught, for example, in Introduction to Informatics 2 (IN0003)). For motivated students who do not have the necessary background yet: there are many introductions to functional programming available online, for example the first 6 chapters of Introduction to Objective Caml.
• You must have taken some basic course in discrete mathematics where you learned about sets, relations and proof principles like induction (as taught, for example, in Discrete Structures).
• You need not be familiar with formal logic, but you must be motivated to learn how to write precise and detailed mathematical proofs that are checked for correctness by a machine, the theorem prover Isabelle.
• At the end of the course there will be a written, oral, or remote examination, depending on the number of students. Throughout the course there will be homework assignments. They will involve the use of Isabelle and will be graded. The final grade will be a combination of the examination and the homework grades.
How to Calculate Exponential Values Properly In PHP?
To calculate exponential values properly in PHP, you can use the pow() function. The pow() function takes two arguments: the base number and the exponent.

$result = pow($base, $exponent);

In the above code, $base represents the base number and $exponent represents the exponent. The pow() function will raise the base number to the power of the exponent and store the result in the $result variable. Here's an example of calculating an exponential value:

$base = 2;
$exponent = 3;

$result = pow($base, $exponent);

echo $result; // Output: 8

In this example, the base number is 2, and the exponent is 3. The pow() function calculates 2 raised to the power of 3, which equals 8. The result is then echoed out. You can use this method to calculate exponential values using any base number and exponent in PHP.
What is the precision of exponential values when using the bcmath extension in PHP?
The bcmath extension in PHP provides arbitrary precision decimal arithmetic. It allows working with numbers of very large magnitude and precision. The precision of exponential values in the bcmath extension depends on the scale set using the bcscale() function. The bcscale() function determines the number of digits after the decimal point in the result of any division operation. By default, the scale is set to 0, which means only integer values can be represented accurately. To work with exponential values, you need to set a higher scale using bcscale(). For example:

bcscale(5); // Set scale to 5 decimal places

$a = bcpow('10', '100'); // 10 raised to the power of 100
echo $a;

In this example, the bcpow() function calculates 10^100, and the scale of 5 ensures that the result is accurate up to 5 decimal places. Keep in mind that the precision of calculations in bcmath is limited by available memory and may slow down performance for very large numbers.
How to calculate compound interest using exponential functions in PHP?
You can calculate compound interest using exponential functions in PHP by using the formula: A = P * (1 + r/n)^(n*t) • A is the final amount including interest • P is the principal amount (initial investment) • r is the annual interest rate (as a decimal) • n is the number of times that interest is compounded per year • t is the number of years the money is invested for Here's an example PHP function that calculates compound interest using exponential functions: This example will output: The final amount including interest is $1283.63 Note: You can adjust the values of $principal, $rate, $compounding, and $time to suit your specific calculation. What precautions should be taken when calculating large exponential values in PHP to avoid overflow? When calculating large exponential values in PHP to avoid overflow, the following precautions can be taken: 1. Use appropriate data types: PHP provides different data types for handling various sizes of numbers. For large exponential calculations, it is recommended to use the 'bcmath' extension, which offers arbitrary precision mathematics. This extension can handle extremely large values without overflow issues. 2. Adjust PHP settings: In the 'php.ini' file, increase the values of 'memory_limit' and 'precision' to allow PHP to handle large calculations. For example, setting 'memory_limit' to a higher value like '1G' (1 gigabyte) and 'precision' to a higher number like '100' can help with large exponential calculations. 3. Use appropriate functions: PHP offers a variety of functions for performing mathematical calculations. When dealing with large exponential values, make use of functions specifically designed for handling arbitrary precision calculations, such as 'bcpow' for exponentiation. 4. Check the limits: Before performing any calculations, check the limits of the data types being used. Use the 'PHP_INT_MAX' constant to determine the maximum value that can be represented by the integer data type. 
If the calculated exponential result exceeds this limit, switch to 'bcmath' or other arbitrary precision libraries. 5. Divide and conquer: If calculating extremely large exponential values, it may be helpful to break down the calculation into smaller, more manageable steps. For example, if calculating V = A^B where B is a very large value, divide B into smaller increments and perform incremental calculations to minimize the chances of overflow. 6. Error handling: When utilizing arbitrary precision libraries, always check for errors or exceptions that may occur during calculations. For example, the 'bcpow' function can return false if the calculation fails due to insufficient memory or other issues. By taking these precautions, you can perform large exponential calculations in PHP without facing overflow problems.
How to calculate exponential values with decimal exponents in PHP?
In PHP, you can use the pow() function to calculate exponential values with decimal exponents. The pow() function takes two arguments: the base number and the exponent. Here's an example of how to use the pow() function to calculate exponential values with decimal exponents:

$base = 2.5;
$exponent = 1.5;

$result = pow($base, $exponent);

echo "The exponential value is: " . $result;

In this example, the pow() function is used to calculate 2.5 raised to the power of 1.5. The result will be printed out as "The exponential value is: {result}".
What is the largest value that can be accurately calculated using exponential functions in PHP?
PHP uses floating-point arithmetic to perform calculations, and the largest value that can be accurately represented using floating-point numbers depends on the specific implementation and platform. In PHP, the maximum finite value that can be represented using a double-precision floating-point number (64-bit IEEE 754 format) is typically around 1.8 × 10^308.
However, it's important to note that the accuracy and precision of floating-point arithmetic decreases as the value gets larger. Hence, calculations involving very large exponential values may incur rounding errors or inaccuracies. If you require arbitrary precision calculations, you can use the BCMath or GMP extension libraries, which provide support for high-precision arithmetic in PHP.
How to calculate exponential values inside a loop in PHP?
To calculate exponential values inside a loop in PHP, you can use the pow() function which takes two parameters: the base number and the exponent. You can use a loop to iterate through a range of numbers and calculate their exponential values. Here is an example of calculating exponential values inside a loop in PHP:

<?php
// Range of numbers
$start = 1;
$end = 5;

// Calculate exponential values for numbers in the range
for ($i = $start; $i <= $end; $i++) {
    $exponentialValue = pow($i, 2); // calculate i^2
    echo "Exponential value of $i: $exponentialValue" . PHP_EOL;
}
?>

In this example, the loop iterates from 1 to 5, and for each iteration, the pow() function is used to calculate the exponential value of $i raised to the power of 2 (i.e., squared). The echo statement just prints out the result for each number. You can modify the code as per your specific requirements and choose a different exponent to calculate exponential values.
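The compound-interest formula A = P * (1 + r/n)^(n*t) discussed earlier can be sketched numerically. This is an illustrative re-implementation in Python (not the article's PHP listing), and the parameter values below are assumptions chosen for the example:

```python
# Hedged sketch of compound interest: A = P * (1 + r/n)^(n*t).
# Parameter values are illustrative assumptions, not the article's.

def compound_interest(principal, annual_rate, compoundings_per_year, years):
    """Return the final amount A, including interest."""
    return principal * (1 + annual_rate / compoundings_per_year) ** (
        compoundings_per_year * years
    )

# $1000 at 5% APR, compounded monthly (n = 12) for 5 years.
amount = compound_interest(1000, 0.05, 12, 5)
print(f"The final amount including interest is ${amount:.2f}")  # ~$1283.36
```

The same arithmetic in PHP would use pow() (or bcpow() for arbitrary precision) in place of Python's `**` operator.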
Study of a manuscript by Ludwig Schläfli - "Euklid. 2 Hefte" In this thesis we investigate a number of problems related to 2-level polytopes, in particular from the point of view of the combinatorial structure and the extension complexity. 2-level polytopes were introduced as a generalization of stable set polytopes ...
Learn How Multiple Linear Regression Works In Minutes Welcome to this comprehensive guide on multiple linear regression, a versatile and powerful technique for analyzing relationships between multiple variables. As one of the foundational methods in data analysis and machine learning, multiple linear regression enables you to gain valuable insights from your data, make informed decisions, and drive success in various industries. Whether you are a beginner or an experienced data enthusiast, this blog post aims to provide you with an in-depth understanding of multiple linear regression and the tools needed to apply it effectively. We will start by exploring the basics of multiple linear regression and the assumptions underlying the model. We will then delve into the process of collecting and preparing data, building the model, and validating and optimizing its performance. Comprehensive Guide on Multiple Linear Regression Along the way, we will examine real-world applications across different industries and provide practical examples to illustrate the concepts discussed. Whether you're a student, a data enthusiast, or a professional looking to enhance your analytics skills, this post will serve as a valuable resource for mastering multiple linear regression quickly. By the end of this post, you will be well-equipped to tackle your own multiple linear regression projects with confidence and skill. So, let's dive in and start your journey towards mastering multiple linear regression! Introduction to Multiple Linear Regression If you're new to the machine learning field, you might have heard about various algorithms and techniques that help uncover patterns and make predictions from data. One such technique is multiple linear regression, a powerful and widely-used method in data analysis.
Multiple linear regression is an extension of simple linear regression, a statistical method used to model the relationship between a dependent variable (the outcome we want to predict) and one or more independent variables (the predictors). In simple linear regression, we only have one independent variable; in multiple linear regression, we can have two or more independent variables. The goal is to find the best-fitting line or hyperplane that can describe the relationship between the dependent and independent variables, allowing us to make predictions based on the given data. The multiple linear regression equation looks like this: Y = β₀ + β₁X₁ + β₂X₂ + ... + βnXn + ε
• Y is the dependent variable,
• X₁, X₂, ..., Xn are the independent variables,
• β₀ is the intercept,
• β₁, β₂, ..., βn are the coefficients of the independent variables,
• ε is the random error term.
The coefficients represent the strength and direction of the relationship between each independent variable and the dependent variable. Importance of Multiple Linear Regression In Data Analysis Multiple linear regression is a widely-used technique in data analysis for several reasons: 1. Simplicity: The method is easy to understand and implement, making it accessible to beginners in machine learning and data analysis. 2. Interpretability: The coefficients in the multiple linear regression model provide insights into the relationships between the dependent and independent variables. This information can be useful for decision-making and understanding the factors influencing the outcome. 3. Predictive Power: Multiple linear regression can help make accurate predictions in various fields, such as finance, marketing, healthcare, and sports. It allows us to understand the impact of different factors on the outcome and use this knowledge to make data-driven decisions. 4. Flexibility: The technique can handle continuous and categorical independent variables, making it suitable for various applications.
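To make the equation concrete, here is a small numeric sketch of Y = β₀ + β₁X₁ + β₂X₂ (ignoring the error term ε). The coefficient values are made up for illustration, not estimated from data:

```python
# Tiny numeric illustration of the multiple linear regression equation
# Y = β0 + β1*X1 + β2*X2. Coefficients are made-up example values.
import numpy as np

beta0 = 1.0                      # intercept β0
betas = np.array([2.0, 3.0])     # coefficients β1, β2

def predict(x):
    """Predicted Y for one observation x = (X1, X2), ignoring ε."""
    return beta0 + betas @ np.asarray(x, dtype=float)

print(predict([4.0, 5.0]))  # 1 + 2*4 + 3*5 = 24.0
```

In practice the coefficients are not chosen by hand like this; they are estimated from data, as shown later with Scikit-Learn.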
Understanding Multiple Linear Regression With Technical Terms Before diving into implementing multiple linear regression, it's essential to grasp the fundamental concepts and assumptions underpinning this technique. In this section, we'll explain the basics of multiple linear regression and the key assumptions that must be met for the technique to be effective. Simple Linear Regression vs. Multiple Linear Regression Simple linear regression and multiple linear regression are both techniques used to model the relationship between a dependent variable (the outcome we want to predict) and one or more independent variables (the predictors). The primary difference between the two lies in the number of independent variables they can handle. Simple linear regression models the relationship between a single independent variable and the dependent variable, whereas multiple linear regression can accommodate two or more independent variables. This added complexity allows multiple linear regression to capture more intricate relationships and interactions among the predictors, resulting in a more accurate and comprehensive model. Key Terminology In Multiple Linear Regression
• Dependent variable (Y): The outcome we want to predict or explain.
• Independent variable (X): The predictors used to explain the dependent variable's variation.
• Coefficients (β): The parameters that determine the relationship between the dependent variable and the independent variables.
• Intercept (β₀): The point at which the regression line or hyperplane intersects the Y-axis when all independent variables are equal to zero.
• Error term (ε): The random variation in the dependent variable that is not explained by the independent variables.
Multiple Linear Regression Assumptions For multiple linear regression to provide accurate and reliable results, certain assumptions must be met: 1. Linearity: The relationship between the dependent variable and each independent variable should be linear.
This means that any increase or decrease in the independent variable's value should result in a proportional change in the dependent variable's value. 2. Independence Of Errors: The error terms should be independent of each other, meaning that the error associated with one observation should not influence the error of any other observation. This assumption helps ensure that the model's predictions are unbiased and accurate. 3. Multivariate Normality: The error terms should follow a multivariate normal distribution, meaning the errors are normally distributed around the regression line or hyperplane. This assumption allows for the generation of accurate confidence intervals and hypothesis tests. 4. Homoscedasticity: The error terms should have constant variance across all levels of the independent variables. This means that the spread of the errors should be consistent regardless of the values of the predictors. If this assumption is not met, it could lead to unreliable confidence intervals and hypothesis tests. 5. No Multicollinearity: The independent variables should not be highly correlated with one another. High correlation among independent variables can make it difficult to determine the individual effects of each predictor on the dependent variable, leading to unreliable coefficient estimates and reduced model interpretability. Now Let’s discuss how to check and address violations of these assumptions when building a multiple linear regression model. Collecting and Preparing Data Collecting and preparing the data is crucial to building a robust multiple linear regression model. In this section, we'll walk you through the process of identifying variables, collecting data, and cleaning and preprocessing the data to ensure that it's ready for analysis. Identifying the Variables Dependent variable (target): The dependent variable, also known as the target or response variable, is the outcome you want to predict or explain using the independent variables. 
You'll need to select a single dependent variable in multiple linear regression. Examples include house prices, customer churn rates, and sales revenue. Independent variables (predictors): The independent variables, also called predictors or features, are used to explain the variations in the dependent variable. In multiple linear regression, you can use two or more independent variables. When selecting independent variables, consider factors that are likely to influence the dependent variable and have a theoretical basis for inclusion in the model. Data Collection Methods Collecting data for multiple linear regression can be done using various methods, depending on your research question and the domain you're working in. Common data collection methods include: 1. Surveys and Questionnaires: Collecting responses from individuals or organizations through structured questions. 2. Observational Studies: Gathering data by observing subjects or events without any intervention. 3. Experiments: Conducting controlled experiments to gather data under specific conditions. 4. Existing Databases and Datasets: Using pre-existing data from sources such as government agencies, research institutions, or online repositories. Data Cleaning and Preprocessing Once you've collected the data, the next step is to clean and preprocess it to ensure it's suitable for analysis. This process includes addressing issues such as missing values, outliers, and inconsistent data formats. 1. Missing Values: Missing values can occur when data points are not recorded or are incomplete. Depending on the nature and extent of the missing data, you can choose to impute the missing values using methods such as mean, median, or mode imputation, or remove the observations with missing values altogether. 2. Outliers: Outliers are data points significantly different from most of the data.
Outliers can considerably impact the multiple linear regression model, so it's essential to identify and handle them appropriately. You can use visualization techniques, such as box plots or scatter plots, and statistical methods, such as the Z-score or IQR method, to detect outliers. Depending on the context, you can either remove the outliers or transform the data to reduce their impact. 3. Feature Scaling: Feature scaling is the process of standardizing or normalizing the independent variables so that they have the same scale. This step is crucial when working with multiple independent variables with different units or ranges, as it ensures that each variable contributes equally to the model. Common scaling techniques include min-max normalization and standardization (Z-score scaling). 4. Encoding Categorical Variables: Multiple linear regression requires that all independent variables be numerical. If your dataset includes categorical variables (e.g., gender, color, or region), you must convert them into numerical values. One common method for encoding categorical variables is one-hot encoding, which creates binary (0 or 1) features for each category of the variable. After completing these preprocessing steps, your data should be ready for building a multiple linear regression model. Now let's discuss the process of model building, validation, and optimization. Building the Multiple Linear Regression Model With Scikit-Learn Now that you have a clean and preprocessed dataset, it's time to build the multiple linear regression model. In this section, we'll guide you through selecting the right predictors, implementing the model in Python, and interpreting the results. Selecting the Right Predictors For Multiple Linear Regression Model Choosing the most relevant and significant predictors is essential for building an accurate and interpretable multiple linear regression model. Here are three popular techniques for predictor selection: 1.
Forward Selection: This method starts with an empty model and iteratively adds predictors one at a time based on their contribution to the model's performance. The process continues until no significant improvement in model performance is observed. 2. Backward Elimination: This method starts with a model that includes all potential predictors and iteratively removes the least significant predictor one at a time. The process continues until removing any more predictors results in a significant decrease in model performance. 3. Stepwise Regression: This method combines both forward selection and backward elimination. It starts with an empty model, adds predictors one at a time, and evaluates the model at each step. If a previously added predictor no longer improves the model, it may be removed. The process continues until no more predictors can be added or removed without significantly affecting model performance. Implementing Multiple Linear Regression In Python Using the Scikit-Learn Library Python's Scikit-Learn library is popular for implementing machine learning algorithms, including multiple linear regression. The library provides user-friendly functions for model building, evaluation, and optimization. Code walkthrough Here's a simple example of how to implement multiple linear regression using Scikit-Learn: Multiple Linear Regression Model Diagnostics & Interpretation After building the multiple linear regression model, evaluating its performance and interpreting the results is essential. 1. R-squared and adjusted R-squared: R-squared measures how well the model explains the variation in the dependent variable. It ranges from 0 to 1, with higher values indicating better model performance. Adjusted R-squared is a modified version of R-squared that takes into account the number of predictors in the model. It is useful for comparing models with different numbers of predictors. 2.
Interpretation of Coefficients: The coefficients in a multiple linear regression model represent the average change in the dependent variable for a one-unit increase in the corresponding independent variable, holding all other predictors constant. Positive coefficients indicate a positive relationship, while negative coefficients indicate an inverse relationship. The magnitude of the coefficients can be used to understand the strength of the relationship between the predictors and the dependent variable. 3. Significance of Predictors: The significance of the predictors can be assessed using hypothesis tests, such as t-tests or F-tests. A predictor is considered statistically significant if its p-value is below a predetermined threshold, usually 0.05. Multiple Linear Regression Model Validation and Optimization After building the multiple linear regression model, validating and optimizing its performance is essential. This section will discuss cross-validation techniques, performance metrics, and identifying and addressing multicollinearity. Cross-Validation Techniques Cross-validation is a technique used to assess the performance of a model on unseen data. It involves dividing the dataset into multiple subsets, training the model on some of these subsets, and testing the model on the remaining subsets. Common cross-validation techniques include: 1. K-fold Cross-Validation: In k-fold cross-validation, the dataset is divided into k equal-sized folds. The model is trained on k-1 folds and tested on the remaining fold. This process is repeated k times, with each fold serving as the test set once. The performance of the model is assessed based on the average performance across all k iterations. 2. Leave-One-Out Cross-Validation: This method is a special case of k-fold cross-validation where k equals the number of observations in the dataset. In leave-one-out cross-validation, the model is trained on all observations except one, which serves as the test set.
This process is repeated for each observation in the dataset. The performance of the model is assessed based on the average performance across all iterations. Performance Metrics Several metrics can be used to evaluate the performance of a multiple linear regression model. These metrics quantify the difference between the predicted values and the actual values of the dependent variable. Common performance metrics include: 1. Mean Squared Error (MSE): The MSE is the average of the squared differences between the predicted and actual values. It emphasizes larger errors and is sensitive to outliers. 2. Mean Absolute Error (MAE): The MAE is the average of the absolute differences between the predicted and actual values. It is less sensitive to outliers than the MSE. 3. Root Mean Squared Error (RMSE): The RMSE is the square root of the MSE. It is expressed in the same units as the dependent variable, making it easier to interpret. Identifying and Addressing Multicollinearity Multicollinearity occurs when independent variables in a multiple linear regression model are highly correlated. It can lead to unstable coefficient estimates and reduced interpretability. To detect and address multicollinearity, consider the following steps: 1. Variance inflation factor (VIF): The VIF measures how much a coefficient's variance is inflated due to multicollinearity. A VIF value greater than 10 is often considered indicative of multicollinearity. To calculate the VIF for each predictor, use statistical software or Python libraries such as Statsmodels or Scikit-Learn. 2. Remedial measures: If multicollinearity is detected, consider the following remedial measures: 1. Remove one of the correlated predictors: If two or more predictors are highly correlated, consider removing one of them to reduce multicollinearity. 2. 
Combine correlated predictors: If correlated predictors represent similar information, consider combining them into a single predictor using techniques such as principal component analysis (PCA) or creating interaction terms. 3. Regularization techniques: Regularization methods, such as ridge regression or Lasso regression, can help address multicollinearity by adding a penalty term to the regression equation, which shrinks the coefficients of correlated predictors. By validating and optimizing your multiple linear regression model, you can ensure that it generalizes well to new data and provides accurate and reliable predictions. Real-World Applications of Multiple Linear Regression Multiple linear regression is widely used across various industries due to its versatility and ability to model relationships between multiple variables. Examples of Using Multiple Linear Regression In various industries 1. Finance: In the finance industry, multiple linear regression is used to predict stock prices, assess investment risks, and estimate the impact of various factors, such as interest rates, inflation, and economic indicators, on financial assets. 2. Healthcare: Multiple linear regression is employed in healthcare to identify risk factors for diseases, predict patient outcomes, and evaluate the effectiveness of treatments. For example, it can be used to model the relationship between a patient's age, weight, blood pressure, and the likelihood of developing a specific medical condition. 3. Marketing: In marketing, multiple linear regression is used to analyze customer behaviour and predict sales. It can help businesses understand the impact of different marketing strategies on sales revenue, such as advertising, pricing, and promotions. 4. Sports: Multiple linear regression is also used in sports analytics to predict player performance, evaluate team strategies, and determine game outcome factors. 
For example, it can be employed to predict a basketball player's points scored based on their shooting percentage, minutes played, and other relevant statistics.

Case Studies Where Multiple Linear Regression Is Used

1. Housing Price Prediction: A real estate company might use multiple linear regression to predict housing prices based on features such as square footage, the number of bedrooms and bathrooms, the age of the house, and location. This information can help buyers and sellers make informed decisions and assist the company in setting competitive prices for their listings.
2. Customer Churn Prediction: A telecommunications company can use multiple linear regression to predict customer churn based on factors such as customer demographics, usage patterns, and customer service interactions. By identifying customers at risk of leaving, the company can take proactive measures to retain them, such as offering targeted promotions or improving customer support.
3. Demand Forecasting: A retail company can use multiple linear regression to forecast product demand based on factors like seasonality, economic conditions, and promotional activities. Accurate demand forecasting helps businesses manage inventory levels, optimize supply chain operations, and plan marketing campaigns effectively.
4. Predicting Academic Performance: Educational institutions can use multiple linear regression to predict students' academic performance based on factors such as previous grades, attendance, and socio-economic background. This information can help educators identify students who may need additional support and develop targeted interventions to improve academic outcomes.

These examples and case studies demonstrate the broad applicability of multiple linear regression in various industries. Mastering this technique can help you unlock valuable insights and make data-driven decisions across various domains.
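The core workflow described earlier — fit a multiple linear regression, then score it with MSE, MAE, and RMSE — can be sketched in a few lines. This is an illustrative example on made-up data, using NumPy's least-squares solver so the arithmetic is visible; the post itself recommends Scikit-Learn for real projects:

```python
import numpy as np

# Made-up dataset (illustrative only): two predictors and a known
# linear relationship plus a little noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = 3.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=50)

# Fit y = b0 + b1*x1 + b2*x2 by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])      # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Score the fit with the three metrics discussed above.
pred = A @ coef
mse = np.mean((y - pred) ** 2)
mae = np.mean(np.abs(y - pred))
rmse = np.sqrt(mse)

print(coef.round(2))   # roughly [3.0, 2.0, -1.5], the true coefficients
print(mse, mae, rmse)
```

Because squaring weights large errors more heavily, the RMSE is never smaller than the MAE on the same residuals, which is one practical way to sanity-check a metrics implementation.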
As we reach the end of this blog post, let's recap the key points and emphasize the importance of continuous learning and skill development. We encourage you to apply multiple linear regression in real-life projects and harness its full potential.

Recap of key points

1. Multiple linear regression is an extension of simple linear regression, allowing for the analysis of relationships between one dependent variable and multiple independent variables.
2. Assumptions of multiple linear regression include linearity, independence of errors, multivariate normality, homoscedasticity, and no multicollinearity.
3. Data collection, cleaning, and preprocessing are crucial steps in preparing for multiple linear regression analysis.
4. Building the model involves selecting the right predictors, implementing the model in Python using libraries like Scikit-Learn, and interpreting the results.
5. Model validation and optimization include cross-validation techniques, performance metrics, and addressing multicollinearity.
6. Multiple linear regression has diverse real-world applications across various industries, such as finance, healthcare, marketing, and sports.

As you gain experience with multiple linear regression, consider exploring more advanced topics, such as regularization techniques, nonlinear regression, and other machine learning algorithms. Engaging with the data science community, attending workshops, and participating in online courses can help you further develop your skills and stay ahead in the field.

Now that you have a solid understanding of multiple linear regression, we encourage you to apply this powerful technique to real-life projects. Working with real-world data and solving practical problems will give you invaluable hands-on experience and deepen your understanding of multiple linear regression.
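The cross-validation idea from the validation section can be illustrated with a minimal k-fold split written by hand; Scikit-Learn's `KFold` and `cross_val_score` do the same bookkeeping for you. The data and the train-set-mean "model" here are made up purely for the sketch:

```python
import numpy as np

# Minimal k-fold cross-validation sketch (illustrative; not Scikit-Learn's API).
def kfold_indices(n_samples, k):
    indices = np.arange(n_samples)
    for test_idx in np.array_split(indices, k):
        train_idx = np.setdiff1d(indices, test_idx)   # everything not in the test fold
        yield train_idx, test_idx

y = np.arange(20, dtype=float)        # made-up target values
mses = []
for train_idx, test_idx in kfold_indices(len(y), 5):
    baseline = y[train_idx].mean()    # stand-in for a fitted regression model
    mses.append(np.mean((y[test_idx] - baseline) ** 2))

print(len(mses))                           # 5 -- one MSE per fold
print(round(float(np.mean(mses)), 2))      # 51.25 -- average performance across folds
```

Replacing the `baseline` line with an actual model fit on `train_idx` gives the full procedure: every observation is used for testing exactly once, and the averaged score estimates how the model generalizes.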
Additionally, incorporating this skill into your projects can lead to valuable insights and data-driven decision-making, ultimately enhancing your professional and personal endeavours. I hope you like this post. If you have any questions, or would like me to write an article on a specific topic, feel free to comment below.
The sum of two prime numbers is 85. What is the product of these two prime numbers?

The sum of the two prime numbers is 85, which is an odd number. The sum of two odd numbers is always even, so for the sum to be odd, one of the two primes must be even — and the only even prime number is 2.

Suppose \( x \) and \( y \) are both prime numbers with \( x + y = 85 \). Taking \( x = 2 \):

\( 2 + y = 85 \)

\( y = 85 - 2 = 83 \)

The product of the two prime numbers is \( x \times y \). Putting in the values of x and y:

\( 2 \times 83 = 166 \)
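The argument above can be double-checked by brute force. This short Python snippet (not part of the original solution) searches every prime pair summing to 85 and confirms that (2, 83) is the only one:

```python
# Search all prime pairs (p, 85 - p) to confirm (2, 83) is the unique pair.
def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

# p <= 42 avoids counting each pair twice.
pairs = [(p, 85 - p) for p in range(2, 43) if is_prime(p) and is_prime(85 - p)]
print(pairs)                       # [(2, 83)]
print(pairs[0][0] * pairs[0][1])   # 166
```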
AdditionFn - Sigma Knowledge Base Browser (Sigma KEE)

The browser page lists the SUO-KIF axioms that reference AdditionFn, each with its source file and line range. The recoverable axioms include:

- People.kif 272-293: defines the average of a list. A running list of partial sums is built alongside the original list, each element being the AdditionFn of the previous partial sum and the next list element; the last partial sum, together with the list length, gives the average.
- Merge.kif 5980-6007: the weight of a graph path (PathWeightFn) is the AdditionFn of the weights of its arcs, stated both for a two-arc path and recursively as the weight of a subpath plus the weight of the remaining arc.
- Merge.kif 5105-5116: relates RemainderFn to DivisionFn when the divisor is not 0.
- Merge.kif 3259-3269: ListSumFn is defined recursively — the sum of a list is its first element plus the ListSumFn of the sublist from position 2 to the end.
- Merge.kif 3084-3103: element ordering in a concatenated list (ListConcatenateFn), expressed with ListOrderFn and ListLengthFn.
- Media.kif 3050-3071: string reversal (ReverseFn) relates character positions in a string and its reverse via AdditionFn and SubtractionFn on substring indices.
- Merge.kif 3191-3203: SubListFn — a sublist from position ?S to ?E consists of the element at ?S followed by the sublist from (AdditionFn 1 ?S) to ?E.
- Weather.kif 1437-1449: VarianceAverageFn is defined recursively over a list, in the same style as ListSumFn.
- Food.kif 1248-1262: for a Mixture with a mixtureRatio, the total measure of the mixture is the AdditionFn of the measures of its two parts.
- Biography.kif 69-85 and 99-115: a BarMitzvah (or BatMitzvah) for a person born in year ?Y takes place in year (AdditionFn ?Y 13).
- Geography.kif 555-560: the totalArea of a region is the AdditionFn of its landAreaOnly and waterAreaOnly.
- Merge.kif 17253-17295: time-zone conversions relative to UTC hour ?H1 — the PacificTimeZone hour is (AdditionFn ?H1 8), MountainTimeZone (AdditionFn ?H1 7), CentralTimeZone (AdditionFn ?H1 6), and EasternTimeZone (AdditionFn ?H1 5).
- Transportation.kif 503-517: the total highway length of an area is the AdditionFn of its paved and unpaved highway lengths.
- Mid-level-ontology.kif 6495-6503: for conjugate compounds, the protonNumber of one equals the AdditionFn of the other's protonNumber and 1.
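The time-zone axioms on this page (Merge.kif 17253-17295) all follow one pattern: the local-zone hour equals the UTC hour plus a fixed offset. A minimal Python sketch of that pattern (illustrative only — not part of Sigma or SUMO; the offsets are those stated in the axioms):

```python
# Offsets taken from the Merge.kif axioms: local hour = (AdditionFn ?H1 <offset>).
UTC_OFFSET_HOURS = {
    "PacificTimeZone": 8,    # Merge.kif 17253-17259
    "MountainTimeZone": 7,   # Merge.kif 17265-17271
    "CentralTimeZone": 6,    # Merge.kif 17277-17283
    "EasternTimeZone": 5,    # Merge.kif 17289-17295
}

def relative_hour(utc_hour, zone):
    """Hour in `zone` corresponding to `utc_hour`, per the axiom pattern."""
    return (utc_hour + UTC_OFFSET_HOURS[zone]) % 24

print(relative_hour(12, "EasternTimeZone"))  # 17
print(relative_hour(20, "PacificTimeZone"))  # 4
```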
The rate of the chemical reaction doubles for an increase of 10 K in absolute temperature from 298 K. Calculate Ea.

Initial temperature, T1 = 298 K
Final temperature, T2 = 298 K + 10 K = 308 K

Knowing that the rate constant of a chemical reaction normally increases with increase in temperature, we assume that:
Initial value of rate constant, k1 = k
Final value of rate constant, k2 = 2k

Using the Arrhenius equation,

log(k2/k1) = (Ea / 2.303 R) × ((T2 − T1) / (T1 T2))   → Equation 1

where R = 8.314 J K^-1 mol^-1 (gas constant).

Substituting all the values in Equation 1, we get:

log 2 = (Ea / (2.303 × 8.314 J K^-1 mol^-1)) × (10 K / (298 K × 308 K))

Ea = (2.303 × 8.314 × 0.3010 × 298 × 308 / 10) J mol^-1

Ea ≈ 52897 J mol^-1

Ea ≈ 52.897 kJ mol^-1

The energy of activation, Ea, is approximately 52.897 kJ mol^-1.
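The arithmetic can be checked numerically. This illustrative snippet (not part of the original solution) evaluates the same Arrhenius relation with the natural logarithm; the small difference from 52.897 kJ mol^-1 comes from the rounded value of log 2 used in the worked solution:

```python
import math

# Arrhenius check: k2/k1 = 2 between T1 = 298 K and T2 = 308 K.
R = 8.314            # gas constant, J K^-1 mol^-1
T1, T2 = 298.0, 308.0
ratio = 2.0          # k2 / k1

# ln(k2/k1) = (Ea / R) * (T2 - T1) / (T1 * T2)  =>  solve for Ea
Ea = R * math.log(ratio) * T1 * T2 / (T2 - T1)   # J mol^-1

print(round(Ea / 1000, 2))  # 52.89 kJ mol^-1, matching the worked answer
```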
Statistics: A Gentle Introduction, Fourth Edition
January 2020 | 536 pages | SAGE Publications, Inc

The Fourth Edition of Statistics: A Gentle Introduction shows students that an introductory statistics class doesn't need to be difficult or dull. This text minimizes students' anxieties about math by explaining the concepts of statistics in plain language first, before addressing the math. Each formula within the text has a step-by-step example to demonstrate the calculation so students can follow along. Only those formulas that are important for final calculations are included in the text so students can focus on the concepts, not the numbers. A wealth of real-world examples and applications gives a context for statistics in the real world and how it helps us solve problems and make informed choices.

New to the Fourth Edition are sections on working with big data, new coverage of alternative non-parametric tests, beta coefficients, and the "nocebo effect," discussions of p values in the context of research, an expanded discussion of confidence intervals, and more exercises and homework options under the new feature "Test Yourself."

Included with this title: the password-protected Instructor Resource Site (formerly known as Sage Edge) offers access to all text-specific resources, including a test bank and editable, chapter-specific PowerPoint® slides.

Chapter 1: A Gentle Introduction
How Much Math Do I Need to Do Statistics?
The General Purpose of Statistics: Understanding the World
Liberal and Conservative Statisticians
Descriptive and Inferential Statistics
Experiments Are Designed to Test Theories and Hypotheses
Eight Essential Questions of Any Survey or Study
On Making Samples Representative of the Population
Experimental Design and Statistical Analysis as Controls
The Language of Statistics
On Conducting Scientific Experiments
The Dependent Variable and Measurement
Measurement Scales: The Difference Between Continuous and Discrete Variables
Types of Measurement Scales
Rounding Numbers and Rounding Error
History Trivia: Achenwall to Nightingale
Chapter 1 Practice Problems
Chapter 1 Test Yourself Questions

Chapter 2: Descriptive Statistics: Understanding Distributions of Numbers
The Purpose of Graphs and Tables: Making Arguments and Decisions
A Summary of the Purpose of Graphs and Tables
Shapes of Frequency Distributions
Grouping Data Into Intervals
Advice on Grouping Data Into Intervals
The Cumulative Frequency Distribution
Cumulative Percentages, Percentiles, and Quartiles
Non-normal Frequency Distributions
On the Importance of the Shapes of Distributions
Additional Thoughts About Good Graphs Versus Bad Graphs
History Trivia: De Moivre to Tukey
Chapter 2 Practice Problems
Chapter 2 Test Yourself Questions

Chapter 3: Statistical Parameters: Measures of Central Tendency and Variation
Measures of Central Tendency
Choosing Among Measures of Central Tendency
Uncertain or Equivocal Results
Correcting for Bias in the Sample Standard Deviation
How the Square Root of x² Is Almost Equivalent to Taking the Absolute Value of x
The Computational Formula for Standard Deviation
The Sampling Distribution of Means, the Central Limit Theorem, and the Standard Error of the Mean
The Use of the Standard Deviation for Prediction
Practical Uses of the Empirical Rule: As a Definition of an Outlier
Practical Uses of the Empirical Rule: Prediction and IQ Tests
History Trivia: Fisher to Eels
Chapter 3 Practice Problems
Chapter 3 Test Yourself Questions

Chapter 4: Standard Scores, the z Distribution, and Hypothesis Testing
The Classic Standard Score: The z Score and the z Distribution
More Practice on Converting Raw Data Into z Scores
Converting z Scores to Other Types of Standard Scores
Interpreting Negative z Scores
Testing the Predictions of the Empirical Rule With the z Distribution
Why Is the z Distribution So Important?
How We Use the z Distribution to Test Experimental Hypotheses
More Practice With the z Distribution and T Scores
Summarizing Scores Through Percentiles
History Trivia: Karl Pearson to Egon Pearson
Chapter 4 Practice Problems
Chapter 4 Test Yourself Questions

Chapter 5: Inferential Statistics: The Controlled Experiment, Hypothesis Testing, and the z Distribution
Hypothesis Testing in the Controlled Experiment
Hypothesis Testing: The Big Decision
How the Big Decision Is Made: Back to the z Distribution
The Parameter of Major Interest in Hypothesis Testing: The Mean
Nondirectional and Directional Alternative Hypotheses
A Debate: Retain the Null Hypothesis or Fail to Reject the Null Hypothesis
The Null Hypothesis as a Nonconservative Beginning
The Four Possible Outcomes in Hypothesis Testing
Significant and Nonsignificant Findings
Trends, and Does God Really Love the .05 Level of Significance More Than the .06 Level?
Directional or Nondirectional Alternative Hypotheses: Advantages and Disadvantages
Did Nuclear Fusion Occur?
Conclusions About Science and Pseudoscience
The Most Critical Elements in the Detection of Baloney in Suspicious Studies and Fraudulent Claims
Can Statistics Solve Every Problem?
History Trivia: Egon Pearson to Karl Pearson
Chapter 5 Practice Problems
Chapter 5 Test Yourself Questions

Chapter 6: An Introduction to Correlation and Regression
Correlation: Use and Abuse
A Warning: Correlation Does Not Imply Causation
Another Warning: Chance Is Lumpy
Correlation and Prediction
The Four Common Types of Correlation
The Pearson Product–Moment Correlation Coefficient
Testing for the Significance of a Correlation Coefficient
Obtaining the Critical Values of the t Distribution
If the Null Hypothesis Is Rejected
Representing the Pearson Correlation Graphically: The Scatterplot
Fitting the Points With a Straight Line: The Assumption of a Linear Relationship
Interpretation of the Slope of the Best-Fitting Line
The Assumption of Homoscedasticity
The Coefficient of Determination: How Much One Variable Accounts for Variation in Another Variable—The Interpretation of r²
Quirks in the Interpretation of Significant and Nonsignificant Correlation Coefficients
Reading the Regression Line
Final Thoughts About Multiple Regression Analyses: A Warning About the Interpretation of the Significant Beta Coefficients
Significance Test for Spearman's r
Point-Biserial Correlation
Testing for the Significance of the Point-Biserial Correlation Coefficient
Testing for the Significance of Phi
History Trivia: Galton to Fisher
Chapter 6 Practice Problems
Chapter 6 Test Yourself Questions

Chapter 7: The t Test for Independent Groups
The Statistical Analysis of the Controlled Experiment
One t Test but Two Designs
Assumptions of the Independent t Test
The Formula for the Independent t Test
You Must Remember This! An Overview of Hypothesis Testing With the t Test
What Does the t Test Do?
Components of the t Test Formula
What If the Two Variances Are Radically Different From One Another?
The Power of a Statistical Test
The Correlation Coefficient of Effect Size
Another Measure of Effect Size: Cohen's d
Estimating the Standard Error
History Trivia: Gosset and Guinness Brewery
Chapter 7 Practice Problems
Chapter 7 Test Yourself Questions

Chapter 8: The t Test for Dependent Groups
Variations on the Controlled Experiment
Assumptions of the Dependent t Test
Why the Dependent t Test May Be More Powerful Than the Independent t Test
How to Increase the Power of a t Test
Drawbacks of the Dependent t Test Designs
One-Tailed or Two-Tailed Tests of Significance
Hypothesis Testing and the Dependent t Test: Design 1
Design 1 (Same Participants or Repeated Measures): A Computational Example
Design 2 (Matched Pairs): A Computational Example
Design 3 (Same Participants and Balanced Presentation): A Computational Example
History Trivia: Fisher to Pearson
Chapter 8 Practice Problems
Chapter 8 Test Yourself Questions

Chapter 9: Analysis of Variance (ANOVA): One-Factor Completely Randomized Design
A Limitation of Multiple t Tests and a Solution
The Equally Unacceptable Bonferroni Solution
The Acceptable Solution: An Analysis of Variance
The Null and Alternative Hypotheses in ANOVA
The Beauty and Elegance of the F Test Statistic
How Can There Be Two Different Estimates of Within-Groups Variance?
What a Significant ANOVA Indicates
Degrees of Freedom for the Numerator
Degrees of Freedom for the Denominator
Determining Effect Size in ANOVA: Omega Squared (ω²)
Another Measure of Effect Size: Eta (η)
History Trivia: Gosset to Fisher
Chapter 9 Practice Problems
Chapter 9 Test Yourself Questions

Chapter 10: After a Significant ANOVA: Multiple Comparison Tests
Conceptual Overview of Tukey's Test
Computation of Tukey's HSD Test
What to Do If the Number of Error Degrees of Freedom Is Not Listed in the Table of Tukey's q Values
Determining What It All Means
On the Importance of Nonsignificant Mean Differences
Chapter 10 Practice Problems
Chapter 10 Test Yourself Questions

Chapter 11: Analysis of Variance (ANOVA): One-Factor Repeated-Measures Design
The Repeated-Measures ANOVA
Assumptions of the One-Factor Repeated-Measures ANOVA
Determining Effect Size in ANOVA
Chapter 11 Practice Problems
Chapter 11 Test Yourself Questions

Chapter 12: Factorial ANOVA: Two-Factor Completely Randomized Design
The Most Important Feature of a Factorial Design: The Interaction
Fixed and Random Effects and In Situ Designs
The Null Hypotheses in a Two-Factor ANOVA
Assumptions and Unequal Numbers of Participants
Chapter 12 Practice Problems
Chapter 12 Test Yourself Problems

Chapter 13: Post Hoc Analysis of Factorial ANOVA
Main Effect Interpretation: Gender
Why a Multiple Comparison Test Is Unnecessary for a Two-Level Main Effect, and When Is a Multiple Comparison Test Necessary?
Multiple Comparison Test for the Main Effect for Age
Warning: Limit Your Main Effect Conclusions When the Interaction Is Significant
Multiple Comparison Tests
Interpretation of the Interaction Effect
Writing Up the Results Journal Style
Exploring the Possible Outcomes in a Two-Factor ANOVA
Determining Effect Size in a Two-Factor ANOVA
History Trivia: Fisher and Smoking
Chapter 13 Practice Problems
Chapter 13 Test Yourself Questions

Chapter 14: Factorial ANOVA: Additional Designs
Overview of the Split-Plot ANOVA
Two-Factor ANOVA: Repeated Measures on Both Factors Design
Overview of the Repeated-Measures ANOVA
Key Terms and Definitions
Chapter 14 Practice Problems
Chapter 14 Test Yourself Questions

Chapter 15: Nonparametric Statistics: The Chi-Square Test and Other Nonparametric Tests
Overview of the Purpose of Chi-Square
Overview of Chi-Square Designs
Chi-Square Test: Two-Cell Design (Equal Probabilities Type)
The Chi-Square Distribution
Assumptions of the Chi-Square Test
Chi-Square Test: Two-Cell Design (Different Probabilities Type)
Interpreting a Significant Chi-Square Test for a Newspaper
Chi-Square Test: Three-Cell Experiment (Equal Probabilities Type)
Chi-Square Test: Two-by-Two Design
What to Do After a Chi-Square Test Is Significant
When Cell Frequencies Are Less Than 5 Revisited
Other Nonparametric Tests
History Trivia: Pearson and Biometrika
Chapter 15 Practice Problems
Chapter 15 Test Yourself Questions

Chapter 16: Other Statistical Topics, Parameters, and Tests
Health Science Statistics
Additional Statistical Analyses and Multivariate Statistics
A Summary of Multivariate Statistics
Chapter 16 Practice Problems
Chapter 16 Test Yourself Questions

Appendix A: z Distribution
Appendix B: t Distribution
Appendix C: Spearman's Correlation
Appendix D: Chi-Square (χ²) Distribution
Appendix E: F Distribution
Appendix F: Tukey's Table
Appendix G: Mann–Whitney U Critical Values
Appendix H: Wilcoxon Signed-Rank Test Critical Values
Appendix I: Answers to Odd-Numbered Test Yourself Questions

Student Study Site: edge.sagepub.com/coolidge4e

The open-access Student Study Site makes it easy for students to maximize their study time, anywhere, anytime. It offers flashcards that strengthen understanding of key terms and concepts, as well as learning objectives that reinforce the most important material. For additional information, custom options, or to request a personalized walkthrough of these resources, please contact your sales representative.

Instructor Teaching Site: edge.sagepub.com/coolidge4e

Statistics is generally not a dynamic topic. But Coolidge is able to break it down in a way that is manageable. His discussion of each type of analysis is easily accessed by the table of contents and accurately depicted in the index. This is especially important for this generation of learners who want easy access to the specific information that is necessary without wading through extraneous concepts. Coolidge also describes contemporary and specific examples of how misuse of data can have an impact in real-world circumstances. This is beneficial because it makes a true connection with the power that a statistical researcher holds. It is the only book on the market that covers important advanced techniques such as repeated measures ANOVA and multiple regressions, using SPSS.
Westminster College, Fulton, Missouri

The book is written to address a broad range of student ability. It is helpful to students without a strong background in mathematics.
Department of Psychology and Sociology, Tuskegee University

Good introductory book on statistics. Perfect for first-time statistics students, since concepts are presented simply but clearly. As an instructor, I would want a hard copy of this book.
Education, Carolina University (September 10, 2021)

I don't think I ever received this book.
Criminology/Criminal Justice Dept, University of Memphis (September 28, 2021)

Adopted as a recommended text for students interested in diving more deeply into some of the concepts we cover (all too briefly) in a refresher course for incoming Masters of Public Policy students.
Political Science Dept, University of Utah (February 10, 2020)
Understanding Congruence | Curious Toons

Introduction to Congruence

Definition of Congruence

Congruence in geometry refers to the quality of two shapes being identical in terms of size and shape. When two geometric figures are congruent, it means they can be perfectly overlapped when one is placed over the other, without needing any resizing or reshaping. This concept applies to various shapes, including triangles, circles, and polygons. For example, two triangles are congruent if their corresponding sides and angles are equal. We often use the symbol "≅" to denote congruence; for instance, if triangle ABC is congruent to triangle DEF, we write it as ΔABC ≅ ΔDEF.

Understanding congruence goes beyond just recognizing that two shapes look the same—it's about understanding the properties that make them equivalent. These properties help us in decomposing complex shapes into simpler ones, proving theorems, and solving geometric problems. Additionally, congruence helps us establish relationships between different figures, allowing us to make comparisons and predictions in both theoretical and practical contexts.

Importance in Geometry

The importance of congruence in geometry cannot be overstated. Congruence serves as a foundational principle for various geometric concepts and theorems. It allows us to classify shapes, proving that certain figures, regardless of their position or orientation, can retain their properties. Understanding congruence is essential not just for solving geometry problems but also for real-world applications, such as in engineering, architecture, and art, where precise measurements and similarities matter.

Furthermore, congruence is crucial for proving larger geometric statements through smaller, manageable assertions. For instance, congruence helps in establishing criteria for triangle similarity and congruence, allowing us to apply these principles to various complex scenarios.
This makes it easier to derive relationships and formulate strategies for constructing shapes, calculating areas, and developing geometric proofs. Ultimately, understanding congruence enriches your mathematical reasoning skills, enabling you to tackle both theoretical challenges and practical applications with confidence.

Congruent Figures

Properties of Congruent Figures

Congruent figures are shapes that are exactly the same in size and shape. When we say two figures are congruent, we mean that one can be transformed into the other through movements such as translations (sliding), rotations (turning), or reflections (flipping), without altering their size or shape.

One key property of congruent figures is that all corresponding sides are equal in length, and all corresponding angles are equal in measure. For example, if two triangles are congruent, that means each side of one triangle matches exactly in length to a side of the other triangle, and each angle aligns perfectly as well.

Another property is that congruence is a reflexive property, meaning any shape is congruent to itself. It also has an essential symmetry: if Figure A is congruent to Figure B, then Figure B is also congruent to Figure A. This relationship is called the symmetric property. Understanding these properties helps us analyze and solve problems involving congruence, making it foundational for further studies in geometry, such as triangle congruence criteria (SSS, SAS, ASA, etc.).

Examples of Congruent Shapes

Let's explore some concrete examples of congruent shapes to deepen your understanding. One of the simplest instances is two squares of the same size. If we have a square measuring 4 cm by 4 cm, and another square with the same dimensions, these squares are congruent because they share identical angles (each being 90 degrees) and their sides are all equal (4 cm each). Another example is in triangles.
If we have two triangles, both with sides measuring 5 cm, 6 cm, and 7 cm, they are congruent because all corresponding sides are equal. We can arrange these triangles in different orientations through rotation or flipping, yet they remain unchanged in their overall dimensions. Circles also provide a good example; two circles with the same radius of 3 cm are congruent, regardless of their positions in the plane. By recognizing these examples, you can visualize congruence in real-life situations, such as tiling patterns, where identical tiles must align perfectly, reinforcing the concept of congruence in both mathematical theory and practical application.

Congruence Transformations

Types of Transformations (Translation, Rotation, Reflection)

In our study of congruence, we first need to understand the different types of transformations: translation, rotation, and reflection.

Translation is a movement where every point of a shape moves the same distance in the same direction. Imagine sliding a book across a table; every part of the book stays exactly the same distance from each other during the slide. This means that the shape and size do not change, preserving congruence.

Rotation involves turning a shape around a fixed point, known as the center of rotation. Think of spinning a pizza around its center. The pizza maintains its size and shape, proving that the image remains congruent to the original. The amount of rotation is measured in degrees.

Reflection is like flipping a shape over a line, known as the line of reflection. Picture flipping a pancake; it looks the same on both sides and has not altered in size or shape. The reflected image will be congruent because distances and angles remain unchanged.

Together, these transformations illustrate how shapes can be manipulated while retaining their congruence, which is crucial in understanding the properties of geometric figures.
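The claim that translation, rotation, and reflection preserve congruence can be checked numerically: apply each transformation to a triangle's vertices and confirm that the side lengths are unchanged. This is an illustrative sketch, not from the article; the triangle coordinates are made up:

```python
import math

def translate(p, dx, dy):
    return (p[0] + dx, p[1] + dy)

def rotate(p, angle):
    # Rotate about the origin; angle in radians.
    c, s = math.cos(angle), math.sin(angle)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

def reflect_x(p):
    # Reflect across the x-axis.
    return (p[0], -p[1])

def side_lengths(tri):
    a, b, c = tri
    d = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    return sorted([d(a, b), d(b, c), d(c, a)])

triangle = [(0, 0), (4, 0), (1, 3)]
for transform in (lambda p: translate(p, 2, -5),
                  lambda p: rotate(p, math.pi / 3),
                  reflect_x):
    image = [transform(p) for p in triangle]
    assert all(math.isclose(x, y)
               for x, y in zip(side_lengths(triangle), side_lengths(image)))
print("all three transformations preserve side lengths")
```

Because side lengths (and, by the law of cosines, angles) are unchanged, the image can always be overlaid exactly on the original, which is the definition of congruence given above.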
Effects of Transformations on Congruence

When we apply transformations to shapes, a central idea is whether the transformed shape remains congruent to the original. Congruence means that two figures have the same shape and size, which is key in geometry. Through transformations such as translation, rotation, and reflection, we can verify that these operations do not alter the basic properties of shapes. For instance, when we translate a triangle, all sides remain the same length, and all angles stay unchanged. Similarly, rotating the triangle around a point also preserves these qualities. Reflection flips the triangle but does not modify its dimensions. This means that after any of these transformations, we find that the original and transformed shapes can be perfectly aligned on top of each other. This understanding of congruence transformations indicates that we can manipulate figures in space effectively without altering their fundamental properties. Whether in proving theorems or solving geometric problems, recognizing that transformations maintain congruence is vital for our success in geometry!

Criteria for Congruence

Side-Side-Side (SSS) Criterion

The Side-Side-Side (SSS) Criterion is a fundamental rule we use to determine if two triangles are congruent. According to this criterion, if the three sides of one triangle are equal in length to the three sides of another triangle, then the two triangles are considered congruent. This means they have the same shape and size, even if their orientation or position in space is different. For example, if Triangle ABC has sides measuring 5 cm, 7 cm, and 9 cm, and Triangle DEF also has sides of 5 cm, 7 cm, and 9 cm, we can confidently say that Triangle ABC is congruent to Triangle DEF (denoted as ΔABC ≅ ΔDEF). The SSS Criterion is useful in various geometric proofs and real-world applications, such as engineering and architecture, where maintaining exact measurements is crucial.
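The SSS comparison just described can be mechanized: since congruent triangles may appear in any orientation, it is enough to compare the multisets of side lengths. A small sketch (the helper name is ours, not from the text):

```python
def congruent_sss(sides_a, sides_b):
    """SSS criterion: two triangles are congruent if their three side
    lengths match, regardless of the order the sides are listed in."""
    return sorted(sides_a) == sorted(sides_b)

# Triangle ABC with sides 5, 7, 9 and triangle DEF with the same sides.
print(congruent_sss([5, 7, 9], [9, 5, 7]))   # True
print(congruent_sss([5, 7, 9], [5, 7, 10]))  # False
```

Sorting makes the check orientation-independent, mirroring the idea that a rotated or flipped copy is still congruent.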
Remember, while the SSS method relies only on side lengths, it does not require knowledge about angles, making it a straightforward and effective way to establish congruence.

Angle-Side-Angle (ASA) Criterion

The Angle-Side-Angle (ASA) Criterion is another key method for proving triangle congruence. According to this criterion, if two angles and the included side of one triangle are equal to two angles and the included side of another triangle, then the two triangles are congruent. The "included side" means that the side must be between the two angles you're comparing. For instance, if Triangle ABC has angles measuring 40° and 70°, and the side between them measures 10 cm, and Triangle DEF has angles of 40° and 70° with the same side of 10 cm, then we can claim ΔABC ≅ ΔDEF. The ASA Criterion is particularly useful because it shows how two triangles can be congruent even if we don't know the lengths of all their sides, provided we have information about two angles and the included side. This criterion is critical in proofs and applications involving angles, such as in construction and design. Understanding these congruence criteria helps us build a strong foundation in geometry!

Applications of Congruence

Real-world Applications

Understanding congruence is not just a theoretical exercise; it has practical applications in various aspects of our daily lives. For instance, in architecture and engineering, ensuring that structures are congruent is crucial for stability and aesthetic balance. When designing buildings, engineers use congruent shapes to maintain uniformity, which helps distribute weight evenly and keeps the structure safe. In the field of art, congruence plays a vital role too. Artists often create patterns that rely on congruent shapes to achieve symmetry and harmony in their work. Additionally, congruence is significant in computer graphics and design, where it aids in modeling and replicating objects in a three-dimensional space.
Video games and simulations utilize congruent shapes to create realistic environments and characters. In everyday tasks, such as cutting fabric for clothing or creating furniture, we apply the concept of congruence to ensure pieces fit together perfectly. Thus, recognizing congruence isn't just an academic skill; it's a valuable tool that aids people across various industries and activities, emphasizing its importance in real-world settings.

Congruence in Proofs and Theorems

In mathematics, congruence is foundational in the realm of proofs and theorems, particularly in geometry. Understanding how and why shapes and figures are congruent allows us to establish relationships between them, leading to more significant conclusions. One classic example is the Side-Angle-Side (SAS) theorem, which states that if two sides and the included angle in one triangle are congruent to two sides and the included angle in another triangle, the two triangles are congruent. This forms the basis for proving numerous geometric relationships. Congruence also helps in establishing properties such as the Pythagorean theorem, where congruent triangles can be used to prove that in a right triangle, the square of the hypotenuse is equal to the sum of the squares of the other two sides. By leveraging congruence in logical arguments and geometric proofs, mathematicians can draw conclusions that hold true universally, reinforcing the idea that congruence is not merely a concept but a critical component in the logical structure many theorems are built upon. Through these practices, students learn to appreciate the power of congruence in proving mathematical truths. As we conclude our exploration of congruence, let's take a moment to reflect on its significance beyond the confines of geometry. Congruence, at its core, represents an equality of shape and size, a harmonious relationship that mirrors certain aspects of our lives.
Just as we’ve discovered that two triangles can be considered congruent through a series of transformations—translations, rotations, and reflections—we too can reflect on the transformations that shape our understanding of the world around us. In a broader sense, think about how congruence relates to our perspectives. Can we find congruent viewpoints with others, recognizing how different transformations of thought can lead to the same essential truth? Additionally, consider the role of congruence in problem-solving. Just as we analyze geometric figures for congruence to simplify complex problems, we can apply similar strategies in real life to seek clarity in complicated situations. As you move forward, remember that congruence is not solely about geometry; it’s a lens through which we can examine equality, relationships, and patterns. Embrace the congruencies in your life, and let them guide your journey through math and beyond. The beauty of math lies not just in its formulas, but in its connections to the world we share.
Monotonic Sequence Calculator | Sequencecalculators.com

Monotonic Sequence Calculator

The online monotonic sequence calculator is a tool that helps you check whether a sequence is monotone in a fraction of a second. All you need to do is give the inputs in the input fields and click on the calculate button to get the answer instantly.

How to Check Whether a Sequence is Monotone

A monotonic sequence is a sequence of numbers that is always either increasing or decreasing:

a[n] <= a[n+1] (increasing monotonic sequence)
a[n] >= a[n+1] (decreasing monotonic sequence)

The steps to check whether a sequence is monotonic are:
• First, write down the values given in the problem.
• After that, simplify the expression for each term.
• Then compare successive terms to see whether the sequence is monotonic.

Question: Find the first four terms of a[n] = n / (n+1) and check whether the sequence is monotonic.

Given a[n] = n / (n+1), where n = 1, 2, 3, 4:
a[1] = 1 / (1+1) = 1/2
a[2] = 2 / (2+1) = 2/3
a[3] = 3 / (3+1) = 3/4
a[4] = 4 / (4+1) = 4/5

Therefore the first four terms are 1/2, 2/3, 3/4, 4/5; each term is larger than the one before it, so the sequence is increasing and hence monotonic.

Sequencecalculators.com is a huge collection of online calculators for plenty of concepts in sequences. Explore it and make your calculations easy.

FAQs on Monotonic Sequence Calculators

1. What is a monotonic sequence?
A monotonic sequence is a sequence of numbers that is either always increasing or always decreasing.

2. How do I use this monotonic sequence calculator?
Step 1: Give the inputs in the input field.
Step 2: Click on the calculate button.
Step 3: You will get the answer immediately.

3. Which type of sequence is monotonic and which is non-monotonic?
If a sequence is always increasing or always decreasing, it is monotonic. If it goes both up and down, it is non-monotonic.

4. Which tool is the best to calculate the monotonic function?
Monotonic Sequence Calculator is the best online tool to calculate the monotonic function.
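The worked example above can be verified with a short script that generates the terms a[n] = n/(n+1) and checks the increasing condition a[n] <= a[n+1]:

```python
from fractions import Fraction

def a(n):
    # n-th term of the example sequence a[n] = n / (n + 1)
    return Fraction(n, n + 1)

terms = [a(n) for n in range(1, 5)]
print([str(t) for t in terms])  # ['1/2', '2/3', '3/4', '4/5']

# A sequence is monotonically increasing if every term is <= its successor.
is_increasing = all(x <= y for x, y in zip(terms, terms[1:]))
print(is_increasing)  # True
```

Using exact fractions avoids any floating-point ambiguity in the comparisons.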
EViews Help: Background

While the ordinary least squares (OLS) estimator is notably unbiased, it can exhibit large variance, for example when the data contain a large number of regressors relative to the number of observations. In these and other settings, it may be advantageous to perform regularization by specifying constraints on the flexibility of the OLS estimator, resulting in a simpler or more parsimonious model. By restricting the ability of the ordinary least squares estimator to respond to data, we reduce the sensitivity of our estimates to random errors, lowering the overall variability at the expense of added bias. Thus, regularization trades variance reduction for bias with the goal of obtaining a more desirable mean-square error. One popular approach to regularization is elastic net penalized regression.

Elastic Net Regression

In elastic net penalized regression, the conventional sum-of-squared residuals (SSR) objective is augmented by a penalty that depends on the absolute and squared magnitudes of the coefficients. These coefficient penalties act to shrink the coefficient magnitudes toward zero, trading off increases in the sum-of-squares against reductions in the penalties. We may write the elastic net cost or objective function as

(1/(2N)) Σ_i (y_i − X_i′β)² + λ Σ_j ω_j ( α |β_j| + ((1 − α)/2) β_j² )    (37.1)

• The first term in this expression is the familiar sum-of-squared residuals (SSR), divided by two times the number of observations N.
• The second term is the penalty portion of the objective. Typically, the coefficients are penalized equally, so that the individual penalty weights ω_j are all equal to one. In general, the problem of minimizing this objective with respect to the coefficients β does not have a closed-form solution and must be solved numerically.

The Penalty Parameter

Elastic net specifications rely centrally on the penalty parameter λ. One can compute elastic net estimates for a single, specific penalty value, but the recommended approach (Friedman, Hastie, and Tibshirani, 2010) is to compute estimates for each element of a set of values and to examine the behavior of the results as λ varies. When estimates are obtained for multiple penalty values, we may use model selection techniques to choose a "best" λ.

Automatic Lambda Grid

Researchers will usually benefit from practical guidance in specifying the set of λ values. Friedman, Hastie, and Tibshirani (2010) describe a recipe for obtaining a path of penalty values:
• First, specify a (maximum) number of penalty values to consider.
• Next, employ the data to compute the smallest λ for which all of the penalized coefficients are zero; this serves as the largest value on the path. For ridge regression specifications, coefficients are not easily pushed to zero, so we arbitrarily use an elastic net model with a small positive mixing parameter when computing this value.
• Take exponents of equally spaced values between the logs of the largest and smallest penalties, yielding the lambda path.

Path Estimation

Friedman, Hastie, and Tibshirani (2010) describe a path estimation procedure in which one estimates models beginning with the largest penalty value and proceeding toward smaller values, using each set of estimates as a warm start for the next. They propose terminating path estimation if the sequential changes in model fit or changes in coefficient values cross specific thresholds. We may, for example, choose to end path estimation at a given λ once further decreases produce negligible changes in fit.

Model Selection

We may use cross-validation model selection techniques to identify a preferred value of λ. There are a number of different ways to perform cross-validation. See "Cross-validation Settings" and "Cross-validation Options" for discussion and additional detail.
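The automatic lambda grid described above can be sketched directly. This is a generic illustration of the Friedman-Hastie-Tibshirani recipe with made-up data, not EViews code; it assumes standardized, demeaned data and takes the largest penalty as max_j |x_j′y| / (N·α), below which all penalized coefficients are zero:

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 100, 5
X = rng.standard_normal((N, k))
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize regressors
y = X @ np.array([1.0, 0.5, 0.0, 0.0, -0.3]) + rng.standard_normal(N)
y = y - y.mean()

alpha = 0.5        # elastic net mixing parameter
n_lambda = 100     # number of penalty values on the path
eps = 1e-3         # ratio of smallest to largest lambda

# Smallest lambda at which every penalized coefficient is zero.
lam_max = np.max(np.abs(X.T @ y)) / (N * alpha)

# Equally spaced in logs, then exponentiated: a descending lambda path.
path = np.exp(np.linspace(np.log(lam_max), np.log(eps * lam_max), n_lambda))

assert np.isclose(path[0], lam_max)
assert np.all(np.diff(path) < 0)   # strictly decreasing along the path
```

Path estimation would then proceed from `path[0]` downward, warm-starting each fit from the previous solution.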
If a cross-validation method defines only a single training and test set for a given λ, the selection measure is simply the fit computed on that test set. If the cross-validation method produces multiple training and test sets for each λ, the fit measures are averaged across the sets.

The Penalty Function

Note that the penalty function is comprised of several components: the overall penalty parameter λ, the mixing parameter α, and the individual penalty weights ω_j. There are many detailed discussions of the varying properties of these penalty approaches (see, for example, Friedman, Hastie, and Tibshirani, 2010). For our purposes it suffices to note that the absolute-value component of the penalty is not differentiable at zero, which is what allows it to push coefficients exactly to zero, while the squared component shrinks coefficients smoothly without eliminating them.

The Mixing Parameter

There are many discussions of the properties of the various penalty mixes. See, for example, Hastie, Tibshirani, and Friedman (2010) for discussion of the implications of different choices for α. For our purposes, we note only that models with penalties that emphasize the absolute-value term tend to produce sparse solutions, while those emphasizing the squared term shrink correlated coefficients toward one another.

The limiting cases for the mixing parameter, α = 1 and α = 0, are of special interest. Setting α = 1 produces the Lasso (Least-Absolute Shrinkage and Selection Operator) model that contains only the absolute-value penalty term. Setting α = 0 yields a ridge regression specification with only the squared penalty term.

The ridge regression estimator for this objective has an analytic solution. Writing the objective in matrix form, differentiating with respect to β, setting the result equal to zero, and collecting terms so that an analytic closed-form estimator exists, we obtain

β̂ = (X′X + Nλ I)⁻¹ X′y

which is recognizable as the traditional ridge regression estimator with ridge parameter Nλ. Thus, while an elastic net model with α = 0 may be estimated along a path like any other elastic net specification, its solution is also available in closed form.

Bear in mind that in EViews, the coefficient on the intercept term "C" is not penalized in either ridge or elastic net specifications. Comparison to results obtained elsewhere should ensure that intercept coefficient penalties are specified comparably.
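The closed-form ridge solution can be illustrated numerically. With the objective SSR/(2N) + (λ/2)‖β‖², the first-order condition gives β̂ = (X′X + NλI)⁻¹X′y, which reduces to OLS at λ = 0. This is a generic sketch with simulated data, not EViews code, and unlike EViews it penalizes every coefficient, so no intercept is included:

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 200, 3
X = rng.standard_normal((N, k))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(N)

def ridge(X, y, lam):
    """Closed-form minimizer of SSR/(2N) + (lam/2)*||beta||^2."""
    N, k = X.shape
    return np.linalg.solve(X.T @ X + N * lam * np.eye(k), X.T @ y)

# lam = 0 recovers the OLS estimates.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
assert np.allclose(ridge(X, y, 0.0), beta_ols)

# Shrinkage: the coefficient norm falls as the penalty grows.
norms = [np.linalg.norm(ridge(X, y, lam)) for lam in (0.0, 0.1, 1.0, 10.0)]
assert all(n0 > n1 for n0, n1 in zip(norms, norms[1:]))
```

The `N * lam` scaling matches an objective whose SSR is divided by 2N, which is why the effective ridge parameter is Nλ rather than λ.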
Individual Penalty Weights

The notation in Equation (37.1) explicitly accounts for the fact that we do not wish to penalize the intercept coefficient. We may generalize this type of coefficient exclusion using the individual penalty weights ω_j, which scale the penalty applied to each coefficient. Notably, setting ω_j = 0 removes the penalty on the corresponding coefficient entirely, just as for the intercept in Equation (37.1). For comparability with the baseline case where the weights are 1 for all but the intercept, we normalize the weights so that they sum to the number of penalized coefficients.

Data scaling

Scaling of the dependent variable and regressors is sometimes carried out to improve the numeric stability of ordinary least squares regression estimates. It is well-known that scaling the dependent variable or regressors prior to estimation does not have substantive effects on the OLS estimates:
• Dividing the dependent variable by a scale factor simply divides each of the estimated coefficients by the same factor.
• Dividing each of the regressors by an individual scale multiplies the corresponding coefficient by that scale.

Following estimation, the results may easily be returned to the original scale:
• The effect of dependent variable scaling is undone by multiplying the estimated scaled coefficients by the dependent variable scale.
• The effect of regressor scaling is undone by dividing the estimated coefficients by the corresponding regressor scales.

These simple equivalences do not hold in a penalized setting. You should pay close attention to the effect of your scaling choices, especially when comparing results across approaches.

Dependent Variable Scaling

In some settings, scaling the dependent variable only changes the scale of the estimated coefficients, while in others, it will change both the scale and the relative magnitudes of the estimates. To understand the effects of dependent variable scaling, define the objective function using the scaled dependent variable in place of the original, so that the SSR term is computed from the scaled data while the penalty retains its form. Inspection of this objective yields three results of note:
• In the ridge case (α = 0), the scaled objective is proportional to the original objective evaluated at rescaled coefficients, so for a fixed λ the estimates are simply rescaled.
• In the Lasso case (α = 1), the scaled problem is equivalent to the original problem with an adjusted value of λ, so the estimates are simply rescaled only if λ is adjusted accordingly.
• In the general elastic net case, the absolute and squared penalty terms scale differently, so no single adjustment of λ restores equivalence, and both the scale and the relative magnitudes of the estimates may change.
Regressor Scaling

The practice of standardizing the regressors prior to estimation is common in elastic net estimation. As before, we can see the effect by looking at the effect of scaling on the objective function. We define a new objective using the scaled regressors in place of the originals. Since the coefficients in the penalty portion are individually scaled while those in the SSR residual portion are not, there is no general way to write this objective as a scaled version of the original objective. The coefficient estimates obtained from scaled regressors therefore generally differ from the original estimates in relative magnitude, not merely in scale.

To summarize, when comparing results between different choices for scaling, you should remember that results may differ substantively depending on your choices:
• Scaling the dependent variable in a ridge specification merely rescales the estimates for a given λ.
• Scaling the dependent variable in a Lasso or general elastic net specification can change the relative magnitudes of the estimates unless λ is adjusted.
• Scaling the regressors changes the relative penalization of the coefficients and generally produces substantively different estimates.
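The contrast between dependent-variable and regressor scaling is easy to see in the ridge case, using the closed-form estimator β̂ = (X′X + NλI)⁻¹X′y. Because that estimator is linear in y, rescaling y rescales every coefficient by the same factor, while rescaling a regressor column changes relative magnitudes. This is a generic illustration with simulated data, not EViews code:

```python
import numpy as np

rng = np.random.default_rng(2)
N, k = 150, 4
X = rng.standard_normal((N, k))
y = X @ rng.standard_normal(k) + rng.standard_normal(N)

def ridge(X, y, lam):
    # Closed-form minimizer of SSR/(2N) + (lam/2)*||beta||^2.
    return np.linalg.solve(X.T @ X + X.shape[0] * lam * np.eye(X.shape[1]), X.T @ y)

lam = 0.5
b = ridge(X, y, lam)

# Dependent-variable scaling: pure rescaling of the estimates.
b_scaled = ridge(X, 10.0 * y, lam)
assert np.allclose(b_scaled, 10.0 * b)

# Regressor scaling: shrinking one column does NOT just rescale its
# coefficient, because the penalty now bears on a rescaled coefficient.
X2 = X.copy()
X2[:, 0] /= 10.0
b_cols = ridge(X2, y, lam)
assert not np.allclose(b_cols[0], 10.0 * b[0])
```

In the OLS limit (λ = 0) the second assertion would fail, since without a penalty the coefficient would rescale exactly.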
Python NumPy Array Indexing

Python NumPy array indexing is used to access values in 1-dimensional and multi-dimensional arrays. Indexing is an operation that uses this feature to get a selected set of values from a NumPy array. Note that indexes in an ndarray start from zero, hence there is a difference between a value and where that value is located in an array. In this article, I will explain how to get values from an array by using their indexes.

1. Quick Examples of NumPy Array Indexing

If you are in a hurry, below are some quick examples of Python NumPy array indexing.

# Quick examples of numpy array indexing
import numpy as np

# Example 1: Use NumPy.array()
# To get first value
arr = np.array([2, 4, 6, 8])
arr2 = arr[0]

# Example 2: Use NumPy.array()
# To get third and fourth elements
arr = np.array([2, 4, 6, 8])
arr2 = arr[2] + arr[3]

# Example 3: Use array indexing
# To get the values of two-dimensional arrays
arr = np.array([[0,3,5,7,9], [11,13,15,17,19]])
arr2 = arr[0, 2]

# Example 4: Use array indexing
# To get the values of 2-D arrays
arr = np.array([[0,3,5,7,9], [11,13,15,17,19]])
arr2 = arr[1, 3]

# Example 5: Access values of 3-dimensional arrays using index
arr = np.array([[[0,3,5], [7,9,11]], [[13, 15, 17], [19, 21, 23]]])
arr2 = arr[0, 1, 2]

# Example 6: Use negative array index
arr = np.array([[0,3,5,7,9],[11,13,15,17,19]])
arr2 = arr[0, -1]

# Example 7: Access the last element of the last row
# Using negative indexing
arr = np.array([[0,3,5,7,9],[11,13,15,17,19]])
arr2 = arr[1, -1]

2. Get the 1-Dimensional NumPy Array Values Using Indexing

An ndarray is indexed using the standard Python x[obj] syntax, where x is the array and obj is the selection. You can access an array value by referring to its index number. Indexes in NumPy arrays start with 0, meaning that the first element has index 0, the second has index 1, etc. In one-dimensional arrays, values are stored individually, so we can access those values by using their indices.
Every value in the array has its own index. Another way to access elements from a NumPy array is slicing. If you have a 1-dimensional NumPy array and you want to access its values using indexing, the code below demonstrates the creation of a NumPy array, prints the original array, and then extracts and prints the first value of the array. In this case, the original array is [2, 4, 6, 8], and the first value (at index 0) is 2.

# Import numpy module
import numpy as np

# Create input array
arr = np.array([2, 4, 6, 8])
print("Original array:", arr)

# Use indexing to get the first value
arr2 = arr[0]
print("Getting first value:", arr2)

Yields below output. From the above, arr contains four values: 2, 4, 6, and 8. Each of these values has its own index.

Alternatively, you can use NumPy array indexing to get the third and fourth elements of the array arr and then add them together. In this code, you access the elements at indices 2 and 3 using indexing (arr[2] and arr[3]), add the two elements together, store the result in the variable arr2, and finally print the result.

# Create input array
arr = np.array([2, 4, 6, 8])
print("Original array:", arr)

# Get the sum of the third and fourth elements
arr2 = arr[2] + arr[3]
print("Sum of third and fourth elements:", arr2)

Yields below output.

3. Get the 2-Dimensional Array Values Using Indexing

To access 2-dimensional array values, use comma-separated integers: the first index selects the row and the second selects the column, much like a table. For example, you can get the value at the 3rd position (index 2) of the 1st row (index 0) of a 2-dimensional array using indexing (arr[0, 2]).
# Create 2D input array
arr = np.array([[0,3,5,7,9], [11,13,15,17,19]])

# Use array indexing
# to get values from a two-dimensional array
arr2 = arr[0, 2]
print("3rd value on 1st row:", arr2)
# Output:
# 3rd value on 1st row: 5

Similarly, you can use NumPy array indexing to get the value at the 4th position (index 3) of the 2nd row (index 1) of a 2-dimensional NumPy array, using indexing (arr[1, 3]).

# Use array indexing
# to get values from a 2-D array
arr2 = arr[1, 3]
print("4th element on 2nd row:", arr2)
# Output:
# 4th element on 2nd row: 17

4. Get the 3-Dimensional Array Values Using Array Indexing

To access 3-dimensional array values, use comma-separated integers, one index per dimension. If you have a 3-dimensional NumPy array, you can access individual elements using three indices, one for each dimension. In this case, you access the value at position (0, 1, 2). Follow the example below to get the third element of the second array of the first array.

# Create 3D input array
arr = np.array([[[0,3,5], [7,9,11]], [[13, 15, 17], [19, 21, 23]]])

# Access values of a 3-dimensional array using indexes
arr2 = arr[0, 1, 2]
print("Value at position (0, 1, 2):", arr2)
# Output:
# Value at position (0, 1, 2): 11

5. Use Negative Array Indexing

You can use negative indexing to access values of an array from the end. Follow the examples below to get the last element of the first and second rows of a 2-dimensional array.
# Create 2D input array
arr = np.array([[0,3,5,7,9],[11,13,15,17,19]])

# Use a negative array index
arr2 = arr[0, -1]
print('Last element of 1st row:', arr2)
# Output:
# Last element of 1st row: 9

# Access the last element of the last row
# using negative indexing
arr2 = arr[1, -1]
print('Last element of 2nd row:', arr2)
# Output:
# Last element of 2nd row: 19

Frequently Asked Questions

What is NumPy array indexing?
NumPy array indexing is a way to access and manipulate the elements of a NumPy array. It allows you to select specific elements, slices, or subarrays based on their position or certain conditions.

How does basic indexing work in NumPy?
Basic indexing in NumPy involves accessing individual elements of an array using integer indices. Indices in NumPy are 0-based, meaning the first element is at index 0.

How does integer array indexing work?
Integer array indexing allows you to access elements from an array using another array of integers as indices. For example, arr[[1, 3, 5]] will return the elements at indices 1, 3, and 5.

What is negative indexing in NumPy?
Negative indexing in NumPy allows you to access elements from the end of an array by using negative integers as indices. The index -1 corresponds to the last element, -2 to the second-to-last element, and so on.

Can I combine multiple indexing techniques?
You can combine different indexing techniques. For instance, you can use slicing along with integer or boolean indexing to create more complex selections.

Can I use multiple indices for multidimensional arrays?
You can use multiple indices to access elements in multidimensional arrays in NumPy. The number of indices you use should match the number of dimensions in your array.

In this article, I have explained how to access values of a NumPy array in different ways by using array indexing with examples. Happy Learning!!
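The FAQ above mentions integer-array and boolean indexing without showing them in action; here is a quick illustration (the sample arrays are ours):

```python
import numpy as np

arr = np.array([10, 20, 30, 40, 50, 60])

# Integer-array indexing: pick the elements at positions 1, 3, and 5.
print(arr[[1, 3, 5]])        # [20 40 60]

# Boolean indexing: keep only the elements satisfying a condition.
print(arr[arr > 30])         # [40 50 60]

# Combining with negative indices: last element of each selected row.
mat = np.array([[0, 3, 5], [7, 9, 11]])
print(mat[[0, 1], -1])       # [ 5 11]
```

Both forms return a new array rather than a view, unlike basic slicing.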
Monads and Programming Languages

One of the questions that a ton of people sent me when I said I was going to write about category theory was "Oh, good, can you please explain what the heck a monad is?" The short version is: a monad is a category with a functor to itself. The way that this works in a programming language is that you can view many things in programming languages in terms of monads. In particular, you can take things that involve mutable state, and magically hide the state. How? Well – the state (the set of bindings of variables to values) is an object in a category, State. The monad is a functor from State → State. Since the functor is a functor from a category to itself, the value of the state is implicit – the states are the objects at the start and end points of the functor. From the viewpoint of code outside of the monad functor, the states are indistinguishable – they're just something in the category. For the functor itself, the value of the state is accessible.

So, in a language like Haskell with a State monad, you can write functions inside the State monad, and they are strictly functions from State to State; or you can write functions outside the state monad, in which case the value inside the state is completely inaccessible. Let's take a quick look at an example of this in Haskell. (This example came from an excellent online tutorial which, sadly, is no longer available.) Here's a quick declaration of a State monad in Haskell:

class MonadState m s | m -> s where
  get :: m s
  put :: s -> m ()

instance MonadState (State s) s where
  get = State $ \s -> (s, s)
  put s = State $ \_ -> ((), s)

This is Haskell syntax saying we're defining a state as an object which stores one value. It has two functions: get, which retrieves the value from a state; and put, which updates the value hidden inside the state. Now, remember that Haskell has no actual assignment statement: it's a pure functional language.
So what "put" actually does is create a new state with the new value in it. How can we use it? We can only access the state from a function that's inside the monad. In the example, they use it for a random number generator; the state stores the last random value generated, which will be used as a seed for the next. Here we go:

getAny :: (Random a) => State StdGen a
getAny = do
  g <- get
  (x, g') <- return $ random g
  put g'
  return x

Now – remember that the only functions that exist *inside* the monad are "get" and "put". "do" is syntactic sugar for inserting a sequence of statements into a monad. What actually happens inside of a do is that *each expression* in the sequence is a functor from a State to State; each expression takes as an input parameter the output from the previous. "getAny" takes a state monad as an input; and then it implicitly passes the state from expression to expression. "return" is the only way *out* of the monad; it basically says "evaluate this expression outside of the monad". So, "return $ random g" is saying, roughly, "evaluate random g outside of the monad, then apply the monad constructor to the result". The return is necessary there because the full expression on the line *must* take and return an instance of the monad; if we just said "(x,g') <- random g", we'd get an error, because we're inside of a monad construct: the monad object is going to be inserted as an implicit parameter, unless we prevent it using "return". But the resulting value has to be injected back into the monad – thus the "$", Haskell's low-precedence function application operator, which applies the constructor to the result. Finally, "return x" is saying "evaluate x outside of the monad" – without the "return", it would treat "x" as a functor on the monad.
The really important thing here is to recognize that each line inside of the "do" is a functor from State → State; and since the start and end points of the functor are implicit in the structure of the functor itself, you don't need to write them. So the state is passed down the sequence of instructions – each of which maps State back to State.

Let's get to the formal part of what a monad is. There's a bit of funny notation we need to define for it. (You can't do anything in category theory without that never-ending stream of definitions!)

1. Given a category C, 1[C] is the *identity functor* from C to C.
2. For a category C, if T is a functor C → C, then T² is TºT. (And so on for other powers.)
3. For a given functor T, the identity natural transformation T → T is written 1[T].

Suppose we have a category, C. A *monad on C* is a triple (T,η,μ), where T is a functor from C → C, and η and μ are natural transformations; η: 1[C] → T, and μ: (TºT) → T. (1[C] is the identity functor for C in the category of categories.) These must have the following properties:

First, μ º Tμ = μ º μT.

Second, μ º Tη = μ º ηT = 1[T].

Basically, what these really come down to is an associativity property ensuring that T behaves properly over composition, and an identity transformation that behaves as we would expect. These two properties together mean that any order of applications of T will behave properly, preserving the structure of the category underlying the monad.

0 thoughts on "Monads and Programming Languages"

1. Ithika
It looks like your link didn't work there Mark. Forget to put "[monad-tut]: …" at the end of the article? 😉

2. Mark C. Chu-Carroll
Thanks, it's fixed now.

3. j h woodyatt
I found I really didn't "get" monads until I understood how to compose them, and I didn't really understand how to compose them until I read Systematic Design of Monads by John Hughes and Magnus

4. John Beattie
I cannot make this code work.
I am failing with the haskell syntax for multiparameter classes, I think. It looks to me as if State $ \s -> (s,s) means that (State s) is a data item where the argument is a function. Here 'State' is a constructor. But in the declaration of the class MonadState, m is applicative, in the sense that get :: m s. So (State s) must take a type variable. And here 'State' is a type. Could you give an example declaration for the State type, such that (State s) [constructor] and also (State s) [type] makes sense? I know you gave the example of StdGen but StdGen only takes one argument, not two. Thanks in advance,
The function f(x) = (x sin x)/(x² + 2) is ... | Filo

Solution: x sin x and x² + 2, being built from polynomial and trigonometric functions, are continuous, and x² + 2 ≠ 0 for every real x, so the quotient is continuous. Hence the function is continuous for all x.

Topic: Continuity and Differentiability
Subject: Mathematics
Class: Class 12
Answer Type: Text solution
{"url":"https://askfilo.com/math-question-answers/the-function-mathrmfmathrmxfracmathrmx-sin-mathrmxleftmathrmx22right-isa","timestamp":"2024-11-08T22:01:30Z","content_type":"text/html","content_length":"364870","record_id":"<urn:uuid:426f0cb4-1cd2-4c67-9491-998497b6ea82>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00484.warc.gz"}
Excel Formula Error - Too Many Arguments - Comma or Parenthesis Error? | Microsoft Community Hub
I am trying to create the following formula, but I keep getting a "too many arguments" error. I've tried my best to adjust the commas and parentheses in a logical way to fix it, but cannot figure it out! If I remove the final portion of the formula, it returns the correct information and reads the formula correctly. However, I need that final piece, IF(F2=P5,1%), added to make the formula usable for my workbook. Please let me know if anyone knows what I'm missing or sees a resolution. Thank you.
• HansVogelaar
This works! Thank you so much! And now makes much more sense. Have a wonderful day.
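The thread never shows the full formula, so the following is a purely hypothetical illustration of this class of error, not the poster's actual formula. Excel's IF takes at most three arguments (logical_test, value_if_true, value_if_false), so an extra branch appended with a comma raises the "too many arguments" error, while nesting it inside the value_if_false slot works:

```
Hypothetical example only (the original formula is not shown in the thread):

=IF(F2=P4,1.5%, 2%, IF(F2=P5,1%))     <- error: IF accepts at most 3 arguments
=IF(F2=P4,1.5%, IF(F2=P5,1%, 2%))     <- valid: extra test nested as value_if_false
```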
{"url":"https://techcommunity.microsoft.com/discussions/excelgeneral/excel-formula-error---too-many-arguments---comma-or-parenthesis-error/4001502","timestamp":"2024-11-10T16:08:53Z","content_type":"text/html","content_length":"235536","record_id":"<urn:uuid:e156c46f-d906-4f79-8cec-45c890931718>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00623.warc.gz"}
Binomial Distribution Calculator
The binomial distribution calculator given here helps you estimate the binomial distribution from the number of events and the probability of success. The binomial distribution gives the probability of a number of successes in a sequence of n independent experiments, each with success probability p. It is expressed as Binomial Distribution[n, p]. The probability of success must be between 0 and 1.
Probability Distribution Calculator
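The definition above translates directly into code; a minimal sketch (the function name binomial_pmf is mine, not the site's API):

```python
from math import comb

def binomial_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k successes in n independent trials,
    each succeeding with probability p (0 <= p <= 1)."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("p must be between 0 and 1")
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Example: probability of exactly 3 heads in 5 fair coin flips.
print(binomial_pmf(3, 5, 0.5))  # 0.3125
```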
{"url":"https://www.calculators.live/binomial-distribution","timestamp":"2024-11-08T13:59:27Z","content_type":"text/html","content_length":"8995","record_id":"<urn:uuid:798f48b9-485b-463b-89c9-00b273e528fa>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00619.warc.gz"}
Machine Learning in Finance: Matthew F. Dixon, Igor Halperin, Paul Bilokon - PDFCOFFEE.COM (2024)
Matthew F. Dixon · Igor Halperin · Paul Bilokon
Machine Learning in Finance: From Theory to Practice
Matthew F. Dixon, Department of Applied Mathematics, Illinois Institute of Technology, Chicago, IL, USA
Igor Halperin, Tandon School of Engineering, New York University, Brooklyn, NY, USA
Paul Bilokon, Department of Mathematics, Imperial College London, London, UK
Additional material to this book can be downloaded from http://mypages.iit.edu/~mdixon7/book/ML_Finance_Codes-Book.zip
ISBN 978-3-030-41067-4; ISBN 978-3-030-41068-1 (eBook); https://doi.org/10.1007/978-3-030-41068-1
© Springer Nature Switzerland AG 2020
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made.
The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth. —Arthur Conan Doyle

Machine learning in finance sits at the intersection of a number of emergent and established disciplines including pattern recognition, financial econometrics, statistical computing, probabilistic programming, and dynamic programming. With the trend towards increasing computational resources and larger datasets, machine learning has grown into a central computational engineering field, with an emphasis placed on plug-and-play algorithms made available through open-source machine learning toolkits. Algorithm-focused areas of finance, such as algorithmic trading, have been the primary adopters of this technology. But outside of engineering-based research groups and business activities, much of the field remains a mystery. A key barrier to understanding machine learning for non-engineering students and practitioners is the absence of the well-established theories and concepts that financial time series analysis equips us with. These serve as the basis for the development of financial modeling intuition and scientific reasoning. Moreover, machine learning is heavily entrenched in engineering ontology, which makes developments in the field somewhat intellectually inaccessible for students, academics, and finance practitioners from the quantitative disciplines such as mathematics, statistics, physics, and economics. Consequently, there is a great deal of misconception and limited understanding of the capacity of this field. While machine learning techniques are often effective, they remain poorly understood and are often mathematically indefensible.
How do we place key concepts in the field of machine learning in the context of more foundational theory in time series analysis, econometrics, and mathematical statistics? Under which simplifying conditions are advanced machine learning techniques such as deep neural networks mathematically equivalent to well-known statistical models such as linear regression? How should we reason about the perceived benefits of using advanced machine learning methods over more traditional econometrics methods, for different financial applications? What theory supports the application of machine learning to problems in financial modeling? How does reinforcement learning provide a model-free approach to the Black–Scholes–Merton model for derivative pricing? How does Q-learning generalize discrete-time stochastic control problems in finance? This book is written for advanced graduate students and academics in financial econometrics, management science, and applied statistics, in addition to quants and data scientists in the field of quantitative finance. We present machine learning as a non-linear extension of various topics in quantitative economics such as financial econometrics and dynamic programming, with an emphasis on novel algorithmic representations of data, regularization, and techniques for controlling the bias-variance tradeoff leading to improved out-of-sample forecasting. The book is presented in three parts, each part covering theory and applications. The first part presents supervised learning for cross-sectional data from both a Bayesian and frequentist perspective. The more advanced material places a firm emphasis on neural networks, including deep learning, as well as Gaussian processes, with examples in investment management and derivatives. The second part covers supervised learning for time series data, arguably the most common data type used in finance with examples in trading, stochastic volatility, and fixed income modeling. 
Finally, the third part covers reinforcement learning and its applications in trading, investment, and wealth management. We provide Python code examples to support the readers’ understanding of the methodologies and applications. As a bridge to research in this emergent field, we present the frontiers of machine learning in finance from a researcher’s perspective, highlighting how many wellknown concepts in statistical physics are likely to emerge as research topics for machine learning in finance. Prerequisites This book is targeted at graduate students in data science, mathematical finance, financial engineering, and operations research seeking a career in quantitative finance, data science, analytics, and fintech. Students are expected to have completed upper section undergraduate courses in linear algebra, multivariate calculus, advanced probability theory and stochastic processes, statistics for time series (econometrics), and gained some basic introduction to numerical optimization and computational mathematics. Students shall find the later chapters of this book, on reinforcement learning, more accessible with some background in investment science. Students should also have prior experience with Python programming and, ideally, taken a course in computational finance and introductory machine learning. The material in this book is more mathematical and less engineering focused than most courses on machine learning, and for this reason we recommend reviewing the recent book, Linear Algebra and Learning from Data by Gilbert Strang as background reading. Advantages of the Book Readers will find this book useful as a bridge from well-established foundational topics in financial econometrics to applications of machine learning in finance. 
Statistical machine learning is presented as a non-parametric extension of financial econometrics and quantitative finance, with an emphasis on novel algorithmic representations of data, regularization, and model averaging to improve out-of-sample forecasting. The key distinguishing feature from classical financial econometrics and dynamic programming is the absence of an assumption on the data generation process. This has important implications for modeling and performance assessment which are emphasized with examples throughout the book. Some of the main contributions of the book are as follows:
• The textbook market is saturated with excellent books on machine learning. However, few present the topic from the perspective of financial econometrics and cast fundamental concepts in machine learning into canonical modeling and decision frameworks already well established in finance such as financial time series analysis, investment science, and financial risk management. Only through the integration of these disciplines can we develop an intuition into how machine learning theory informs the practice of financial modeling.
• Machine learning is entrenched in engineering ontology, which makes developments in the field somewhat intellectually inaccessible for students, academics, and finance practitioners from quantitative disciplines such as mathematics, statistics, physics, and economics. Moreover, financial econometrics has not kept pace with this transformative field, and there is a need to reconcile various modeling concepts between these disciplines. This textbook is built around powerful mathematical ideas that shall serve as the basis for a graduate course for students with prior training in probability and advanced statistics, linear algebra, time series analysis, and Python programming.
• This book provides financial market motivated and compact theoretical treatment of financial modeling with machine learning for the benefit of regulators, wealth managers, federal research agencies, and professionals in other heavily regulated business functions in finance who seek a more theoretical exposition to allay concerns about the “black-box” nature of machine learning.
• Reinforcement learning is presented as a model-free framework for stochastic control problems in finance, covering portfolio optimization, derivative pricing, and wealth management applications without assuming a data generation process. We also provide a model-free approach to problems in market microstructure, such as optimal execution, with Q-learning. Furthermore, our book is the first to present methods of inverse reinforcement learning.
• Multiple-choice questions, numerical examples, and more than 80 end-of-chapter exercises are used throughout the book to reinforce key technical concepts.
• This book provides Python codes demonstrating the application of machine learning to algorithmic trading and financial modeling in risk management and equity research. These codes make use of powerful open-source software toolkits such as Google’s TensorFlow and Pandas, a data processing environment for Python.

Overview of the Book

Chapter 1
Chapter 1 provides the industry context for machine learning in finance, discussing the critical events that have shaped the finance industry’s need for machine learning and the unique barriers to adoption. The finance industry has adopted machine learning to varying degrees of sophistication. How it has been adopted is heavily fragmented by the academic disciplines underpinning the applications. We view some key mathematical examples that demonstrate the nature of machine learning and how it is used in practice, with the focus on building intuition for more technical expositions in later chapters.
In particular, we begin to address many finance practitioners’ concerns that neural networks are a “black-box” by showing how they are related to existing well-established techniques such as linear regression, logistic regression, and autoregressive time series models. Such arguments are developed further in later chapters.

Chapter 2
Chapter 2 introduces probabilistic modeling and reviews foundational concepts in Bayesian econometrics such as Bayesian inference, model selection, online learning, and Bayesian model averaging. We develop more versatile representations of complex data with probabilistic graphical models such as mixture models.

Chapter 3
Chapter 3 introduces Bayesian regression and shows how it extends many of the concepts in the previous chapter. We develop kernel-based machine learning methods—specifically Gaussian process regression, an important class of Bayesian machine learning methods—and demonstrate their application to “surrogate” models of derivative prices. This chapter also provides a natural point from which to develop intuition for the role and functional form of regularization in a frequentist setting—the subject of subsequent chapters.

Chapter 4
Chapter 4 provides a more in-depth description of supervised learning, deep learning, and neural networks—presenting the foundational mathematical and statistical learning concepts and explaining how they relate to real-world examples in trading, risk management, and investment management. These applications present challenges for forecasting and model design and are presented as a recurring theme throughout the book. This chapter moves towards a more engineering-style exposition of neural networks, applying concepts in the previous chapters to elucidate various model design choices.

Chapter 5
Chapter 5 presents a method for interpreting neural networks which imposes minimal restrictions on the neural network design.
The chapter demonstrates techniques for interpreting a feedforward network, including how to rank the importance of the features. In particular, an example demonstrating how to apply interpretability analysis to deep learning models for factor modeling is also presented.

Chapter 6
Chapter 6 provides an overview of the most important modeling concepts in financial econometrics. Such methods form the conceptual basis and performance baseline for more advanced neural network architectures presented in the next chapter. In fact, each type of architecture is a generalization of many of the models presented here. This chapter is especially useful for students from an engineering or science background, with little exposure to econometrics and time series analysis.

Chapter 7
Chapter 7 presents a powerful class of probabilistic models for financial data. Many of these models overcome some of the severe stationarity limitations of the frequentist models in the previous chapters. The fitting procedure demonstrated is also different—the use of Kalman filtering algorithms for state-space models rather than maximum likelihood estimation or Bayesian inference. Simple examples of hidden Markov models and particle filters in finance and various algorithms are presented.

Chapter 8
Chapter 8 presents various neural network models for financial time series analysis, providing examples of how they relate to well-known techniques in financial econometrics. Recurrent neural networks (RNNs) are presented as non-linear time series models and generalize classical linear time series models such as AR(p). They provide a powerful approach for prediction in financial time series and generalize to non-stationary data. The chapter also presents convolutional neural networks for filtering time series data and exploiting different scales in the data. Finally, this chapter demonstrates how autoencoders are used to compress information and generalize principal component analysis.
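Chapter 8's framing of RNNs as generalizations of AR(p) models is easiest to see against the linear baseline. A generic sketch of simulating an AR(1) process and recovering its coefficient by least squares (illustrative only; this is not code from the book's notebooks, and the function names are mine):

```python
import random

def simulate_ar1(phi: float, n: int, sigma: float = 1.0, seed: int = 42):
    """Simulate x_t = phi * x_{t-1} + eps_t with Gaussian noise eps_t."""
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n - 1):
        x.append(phi * x[-1] + rng.gauss(0.0, sigma))
    return x

def fit_ar1(x):
    """OLS estimate of phi: regress x_t on x_{t-1} (no intercept)."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

x = simulate_ar1(phi=0.8, n=5000)
print(round(fit_ar1(x), 2))  # close to the true phi = 0.8
```

An RNN replaces the single linear map x_{t-1} -> x_t with a learned non-linear recurrence, which is the sense in which it generalizes this model.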
Chapter 9
Chapter 9 introduces Markov decision processes and the classical methods of dynamic programming, before building familiarity with the ideas of reinforcement learning and other approximate methods for solving MDPs. After describing Bellman optimality and iterative value and policy updates, the chapter quickly advances towards a more engineering-style exposition of the topic, covering key computational concepts such as greediness, batch learning, and Q-learning. Through a number of mini-case studies, the chapter provides insight into how RL is applied to optimization problems in asset management and trading. These examples are each supported with Python notebooks.

Chapter 10
Chapter 10 considers real-world applications of reinforcement learning in finance, as well as further advances the theory presented in the previous chapter. We start with one of the most common problems of quantitative finance, which is the problem of optimal portfolio trading in discrete time. Many practical problems of trading or risk management amount to different forms of dynamic portfolio optimization, with different optimization criteria, portfolio composition, and constraints. The chapter introduces a reinforcement learning approach to option pricing that generalizes the classical Black–Scholes model to a data-driven approach using Q-learning. It then presents a probabilistic extension of Q-learning called G-learning and shows how it can be used for dynamic portfolio optimization. For certain specifications of reward functions, G-learning is semi-analytically tractable and amounts to a probabilistic version of linear quadratic regulators (LQRs). Detailed analyses of such cases are presented and we show their solutions with examples from problems of dynamic portfolio optimization and wealth management.

Chapter 11
Chapter 11 provides an overview of the most popular methods of inverse reinforcement learning (IRL) and imitation learning (IL).
These methods solve the problem of optimal control in a data-driven way, similarly to reinforcement learning, but with the critical difference that now rewards are not observed. The problem is rather to learn the reward function from the observed behavior of an agent. As behavioral data without rewards are widely available, the problem of learning from such data is certainly very interesting. The chapter provides a moderate-level description of the most promising IRL methods, equips the reader with sufficient knowledge to understand and follow the current literature on IRL, and presents examples that use simple simulated environments to see how these methods perform when we know the “ground truth” rewards. We then present use cases for IRL in quantitative finance that include applications to trading strategy identification, sentiment-based trading, option pricing, inference of portfolio investors, and market modeling.

Chapter 12
Chapter 12 takes us forward to emerging research topics in quantitative finance and machine learning. Among many interesting emerging topics, we focus here on two broad themes. The first one deals with unification of supervised learning and reinforcement learning as two tasks of perception-action cycles of agents. We outline some recent research ideas in the literature including in particular information theory-based versions of reinforcement learning and discuss their relevance for financial applications. We explain why these ideas might have interesting practical implications for RL financial models, where feature selection could be done within the general task of optimization of a long-term objective, rather than outside of it, as is usually performed in “alpha-research.” The second topic presented in this chapter deals with using methods of reinforcement learning to construct models of market dynamics. We also introduce some advanced physics-based approaches for computations for such RL-inspired market models.
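To make the tabular Q-learning introduced in Chapter 9 concrete, here is a generic sketch on a toy chain MDP. This is illustrative only; the chain environment and hyperparameters are my own, not an example from the book:

```python
import random

def q_learning_chain(n_states=5, episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a toy chain MDP: states 0..n_states-1,
    actions 0 (left) and 1 (right); reward 1 on reaching the last state."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection (ties broken toward "right")
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 1 if Q[s][1] >= Q[s][0] else 0
            s_next = s + 1 if a == 1 else max(0, s - 1)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # Q-learning update: bootstrap from the greedy value of s_next
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q

Q = q_learning_chain()
# Greedy values grow as states get closer to the reward (terminal stays 0).
print([round(max(q), 3) for q in Q])
```

The learned greedy policy moves right in every non-terminal state, since each state's "right" value discounts the terminal reward by one fewer step than its "left" value.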
Source Code

Many of the chapters are accompanied by Python notebooks to illustrate some of the main concepts and demonstrate application of machine learning methods. Each notebook is lightly annotated. Many of these notebooks use TensorFlow. We recommend loading these notebooks, together with any accompanying Python source files and data, in Google Colab. Please see the appendices of each chapter accompanied by notebooks, and the README.md in the subfolder of each chapter, for further instructions and details.

Scope

We recognize that the field of machine learning is developing rapidly and to keep abreast of the research in this field is a challenging pursuit. Machine learning is an umbrella term for a number of methodology classes, including supervised learning, unsupervised learning, and reinforcement learning. This book focuses on supervised learning and reinforcement learning because these are the areas with the most overlap with econometrics, predictive modeling, and optimal control in finance. Supervised machine learning can be categorized as generative and discriminative. Our focus is on discriminative learners which attempt to partition the input space, either directly, through affine transformations or through projections onto a manifold. Neural networks have been shown to provide a universal approximation to a wide class of functions. Moreover, they can be shown to reduce to other well-known statistical techniques and are adaptable to time series data. Extending time series models, a number of chapters in this book are devoted to an introduction to reinforcement learning (RL) and inverse reinforcement learning (IRL) that deal with problems of optimal control of such time series and show how many classical financial problems such as portfolio optimization, option pricing, and wealth management can naturally be posed as problems for RL and IRL.
We present simple RL methods that can be applied for these problems, as well as explain how neural networks can be used in these applications. There are already several excellent textbooks covering other classical machine learning methods, and we instead choose to focus on how to cast machine learning into various financial modeling and decision frameworks. We emphasize that much of this material is not unique to neural networks, but comparisons of alternative supervised learning approaches, such as random forests, are beyond the scope of this book.

Multiple-Choice Questions

Multiple-choice questions are included after introducing a key concept. The correct answers to all questions are provided at the end of each chapter with selected, partial, explanations to some of the more challenging material.

Exercises

The exercises that appear at the end of every chapter form an important component of the book. Each exercise has been chosen to reinforce concepts explained in the text, to stimulate the application of machine learning in finance, and to gently bridge material in other chapters. Each is graded according to difficulty, ranging from (*), which denotes a simple exercise which might take a few minutes to complete, through to (***), which denotes a significantly more complex exercise. Unless specified otherwise, all equations referenced in each exercise correspond to those in the corresponding chapter.

Instructor Materials

The book is supplemented by a separate Instructor’s Manual which provides worked solutions to the end of chapter questions. Full explanations for the solutions to the multiple-choice questions are also provided. The manual provides additional notes and example code solutions for some of the programming exercises in the later chapters.

Acknowledgements

This book is dedicated to the late Mark Davis (Imperial College) who was an inspiration in the field of mathematical finance and engineering, and formative in our careers.
Peter Carr, Chair of the Department of Financial Engineering at NYU Tandon, has been instrumental in supporting the growth of the field of machine learning in finance. Through providing speaker engagements and machine learning instructorship positions in the MS in Algorithmic Finance Program, the authors have been able to write research papers and identify the key areas required by a text book. Miquel Alonso (AIFI), Agostino Capponi (Columbia), Rama Cont (Oxford), Kay Giesecke (Stanford), Ali Hirsa (Columbia), Sebastian Jaimungal (University of Toronto), Gary Kazantsev (Bloomberg), Morton Lane (UIUC), Jörg Osterrieder (ZHAW) have established various academic and joint academic-industry workshops and community meetings to proliferate the field and serve as input for this book. At the same time, there has been growing support for the development of a book in London, where several SIAM/LMS workshops and practitioner special interest groups, such as the Thalesians, have identified a number of compelling financial applications. The material has grown from courses and invited lectures at NYU, UIUC, Illinois Tech, Imperial College and the 2019 Bootcamp on Machine Learning in Finance at the Fields Institute, Toronto. Along the way, we have been fortunate to receive the support of Tomasz Bielecki (Illinois Tech), Igor Cialenco (Illinois Tech), Ali Hirsa (Columbia University), and Brian Peterson (DV Trading). Special thanks to research collaborators and colleagues Kay Giesecke (Stanford University), Diego Klabjan (NWU), Nick Polson (Chicago Booth), and Harvey Stein (Bloomberg), all of whom have shaped our understanding of the emerging field of machine learning in finance and the many practical challenges. We are indebted to Sri Krishnamurthy (QuantUniversity), Saeed Amen (Cuemacro), Tyler Ward (Google), and Nicole Königstein for their valuable input on this book. 
We acknowledge the support of a number of Illinois Tech graduate students who have contributed to the source code examples and exercises: Xiwen Jing, Bo Wang, and Siliang Xong. Special thanks to Swaminathan Sethuraman for his support of the code development, to Volod Chernat and George Gvishiani who provided support and code development for the course taught at NYU and Coursera. Finally, we would like to thank the students and especially the organisers of the MSc Finance and Mathematics course at Imperial College, where many of the ideas presented in this book have been tested: Damiano Brigo, Antoine (Jack) Jacquier, Mikko Pakkanen, and Rula Murtada. We would also like to thank Blanka Horvath for many useful suggestions. Chicago, IL, USA Brooklyn, NY, USA London, UK December 2019 Matthew F. Dixon Igor Halperin Paul Bilokon Part I Machine Learning with Cross-Sectional Data 1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.1 Big Data—Big Compute in Finance . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2 Fintech . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 Machine Learning and Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1 Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 Statistical Modeling vs. Machine Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1 Modeling Paradigms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2 Financial Econometrics and Machine Learning . . . . 
. . . . . . . . . . . 3.3 Over-fitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 Reinforcement Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 Examples of Supervised Machine Learning in Practice . . . . . . . . . . . . . . 5.1 Algorithmic Trading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.2 High-Frequency Trade Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.3 Mortgage Modeling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Probabilistic Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 Bayesian vs. Frequentist Estimation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 Frequentist Inference from Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 Assessing the Quality of Our Estimator: Bias and Variance . . . . . . . . . 5 The Bias–Variance Tradeoff (Dilemma) for Estimators . . . . . . . . . . . . . . 6 Bayesian Inference from Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.1 A More Informative Prior: The Beta Distribution . . . . . . . . . . . . . 6.2 Sequential Bayesian updates . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . Practical Implications of Choosing a Classical or Bayesian Estimation Framework. . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 Model Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.1 Bayesian Inference. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.2 Model Selection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.3 Model Selection When There Are Many Models . . . . . . . . . . . . . 7.4 Occam’s Razor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.5 Model Averaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 Probabilistic Graphical Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.1 Mixture Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 Bayesian Regression and Gaussian Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 Bayesian Inference with Linear Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1 Maximum Likelihood Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 Bayesian Prediction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
    2.3 Schur Identity
  3 Gaussian Process Regression
    3.1 Gaussian Processes in Finance
    3.2 Gaussian Processes Regression and Prediction
    3.3 Hyperparameter Tuning
    3.4 Computational Properties
  4 Massively Scalable Gaussian Processes
    4.1 Structured Kernel Interpolation (SKI)
    4.2 Kernel Approximations
  5 Example: Pricing and Greeking with Single-GPs
    5.1 Greeking
    5.2 Mesh-Free GPs
    5.3 Massively Scalable GPs
  6 Multi-response Gaussian Processes
    6.1 Multi-Output Gaussian Process Regression and Prediction
  7 Summary
  8 Exercises
    8.1 Programming Related Questions*
  References
4 Feedforward Neural Networks
  1 Introduction
  2 Feedforward Architectures
    2.1 Preliminaries
    2.2 Geometric Interpretation of Feedforward Networks
    2.3 Probabilistic Reasoning
    2.4 Function Approximation with Deep Learning*
    2.5 VC Dimension
    2.6 When Is a Neural Network a Spline?*
    2.7 Why Deep Networks?
  3 Convexity and Inequality Constraints
    3.1 Similarity of MLPs with Other Supervised Learners
  4 Training, Validation, and Testing
  5 Stochastic Gradient Descent (SGD)
    5.1 Back-Propagation
    5.2 Momentum
  6 Bayesian Neural Networks*
  7 Summary
  8 Exercises
    8.1 Programming Related Questions*
  References
5 Interpretability
  1 Introduction
  2 Background on Interpretability
    2.1 Sensitivities
  3 Explanatory Power of Neural Networks
    3.1 Multiple Hidden Layers
    3.2 Example: Step Test
  4 Interaction Effects
    4.1 Example: Friedman Data
  5 Bounds on the Variance of the Jacobian
    5.1 Chernoff Bounds
    5.2 Simulated Example
  6 Factor Modeling
    6.1 Non-linear Factor Models
    6.2 Fundamental Factor Modeling
  7 Summary
  8 Exercises
    8.1 Programming Related Questions*
  References

Part II Sequential Learning

6 Sequence Modeling
  1 Introduction
  2 Autoregressive Modeling
    2.1 Preliminaries
    2.2 Autoregressive Processes
    2.3 Stability
    2.4 Stationarity
    2.5 Partial Autocorrelations
    2.6 Maximum Likelihood Estimation
    2.7 Heteroscedasticity
    2.8 Moving Average Processes
    2.9 GARCH
    2.10 Exponential Smoothing
  3 Fitting Time Series Models: The Box–Jenkins Approach
    3.1 Stationarity
    3.2 Transformation to Ensure Stationarity
    3.3 Identification
    3.4 Model Diagnostics
  4 Prediction
    4.1 Predicting Events
    4.2 Time Series Cross-Validation
  5 Principal Component Analysis
    5.1 Principal Component Projection
    5.2 Dimensionality Reduction
  6 Summary
  7 Exercises
  Reference
7 Probabilistic Sequence Modeling
  1 Introduction
  2 Hidden Markov Modeling
    2.1 The Viterbi Algorithm
    2.2 State-Space Models
  3 Particle Filtering
    3.1 Sequential Importance Resampling (SIR)
    3.2 Multinomial Resampling
    3.3 Application: Stochastic Volatility Models
  4 Point Calibration of Stochastic Filters
  5 Bayesian Calibration of Stochastic Filters
  6 Summary
  7 Exercises
  References
8 Advanced Neural Networks
  1 Introduction
  2 Recurrent Neural Networks
    2.1 RNN Memory: Partial Autocovariance
    2.2 Stability
    2.3 Stationarity
    2.4 Generalized Recurrent Neural Networks (GRNNs)
  3 Gated Recurrent Units
    3.1 α-RNNs
    3.2 Neural Network Exponential Smoothing
    3.3 Long Short-Term Memory (LSTM)
  4 Python Notebook Examples
    4.1 Bitcoin Prediction
    4.2 Predicting from the Limit Order Book
  5 Convolutional Neural Networks
    5.1 Weighted Moving Average Smoothers
    5.2 2D Convolution
    5.3 Pooling
    5.4 Dilated Convolution
    5.5 Python Notebooks
  6 Autoencoders
    6.1 Linear Autoencoders
    6.2 Equivalence of Linear Autoencoders and PCA
    6.3 Deep Autoencoders
  7 Summary
  8 Exercises
    8.1 Programming Related Questions*
  References

Part III Sequential Data with Decision-Making

9 Introduction to Reinforcement Learning
  1 Introduction
  2 Elements of Reinforcement Learning
    2.1 Rewards
    2.2 Value and Policy Functions
    2.3 Observable Versus Partially Observable Environments
  3 Markov Decision Processes
    3.1 Decision Policies
    3.2 Value Functions and Bellman Equations
    3.3 Optimal Policy and Bellman Optimality
  4 Dynamic Programming Methods
    4.1 Policy Evaluation
    4.2 Policy Iteration
    4.3 Value Iteration
  5 Reinforcement Learning Methods
    5.1 Monte Carlo Methods
    5.2 Policy-Based Learning
    5.3 Temporal Difference Learning
    5.4 SARSA and Q-Learning
    5.5 Stochastic Approximations and Batch-Mode Q-learning
    5.6 Q-learning in a Continuous Space: Function Approximation
    5.7 Batch-Mode Q-Learning
    5.8 Least Squares Policy Iteration
    5.9 Deep Reinforcement Learning
  6 Summary
  7 Exercises
  References
10 Applications of Reinforcement Learning
  1 Introduction
  2 The QLBS Model for Option Pricing
  3 Discrete-Time Black–Scholes–Merton Model
    3.1 Hedge Portfolio Evaluation
    3.2 Optimal Hedging Strategy
    3.3 Option Pricing in Discrete Time
    3.4 Hedging and Pricing in the BS Limit
  4 The QLBS Model
    4.1 State Variables
    4.2 Bellman Equations
    4.3 Optimal Policy
    4.4 DP Solution: Monte Carlo Implementation
    4.5 RL Solution for QLBS: Fitted Q Iteration
    4.6 Examples
    4.7 Option Portfolios
    4.8 Possible Extensions
  5 G-Learning for Stock Portfolios
    5.1 Introduction
    5.2 Investment Portfolio
    5.3 Terminal Condition
    5.4 Asset Returns Model
    5.5 Signal Dynamics and State Space
    5.6 One-Period Rewards
    5.7 Multi-period Portfolio Optimization
    5.8 Stochastic Policy
    5.9 Reference Policy
    5.10 Bellman Optimality Equation
    5.11 Entropy-Regularized Bellman Optimality Equation
    5.12 G-Function: An Entropy-Regularized Q-Function
    5.13 G-Learning and F-Learning
    5.14 Portfolio Dynamics with Market Impact
    5.15 Zero Friction Limit: LQR with Entropy Regularization
    5.16 Non-zero Market Impact: Non-linear Dynamics
  6 RL for Wealth Management
    6.1 The Merton Consumption Problem
    6.2 Portfolio Optimization for a Defined Contribution Retirement Plan
    6.3 G-Learning for Retirement Plan Optimization
    6.4 Discussion
  7 Summary
  8 Exercises
  References
11 Inverse Reinforcement Learning and Imitation Learning
  1 Introduction
  2 Inverse Reinforcement Learning
    2.1 RL Versus IRL
    2.2 What Are the Criteria for Success in IRL?
    2.3 Can a Truly Portable Reward Function Be Learned with IRL?
  3 Maximum Entropy Inverse Reinforcement Learning
    3.1 Maximum Entropy Principle
    3.2 Maximum Causal Entropy
    3.3 G-Learning and Soft Q-Learning
    3.4 Maximum Entropy IRL
    3.5 Estimating the Partition Function
  4 Example: MaxEnt IRL for Inference of Customer Preferences
    4.1 IRL and the Problem of Customer Choice
    4.2 Customer Utility Function
    4.3 Maximum Entropy IRL for Customer Utility
    4.4 How Much Data Is Needed? IRL and Observational Noise
    4.5 Counterfactual Simulations
    4.6 Finite-Sample Properties of MLE Estimators
    4.7 Discussion
  5 Adversarial Imitation Learning and IRL
    5.1 Imitation Learning
    5.2 GAIL: Generative Adversarial Imitation Learning
    5.3 GAIL as an Art of Bypassing RL in IRL
    5.4 Practical Regularization in GAIL
    5.5 Adversarial Training in GAIL
    5.6 Other Adversarial Approaches*
    5.7 f-Divergence Training*
    5.8 Wasserstein GAN*
    5.9 Least Squares GAN*
  6 Beyond GAIL: AIRL, f-MAX, FAIRL, RS-GAIL, etc.*
    6.1 AIRL: Adversarial Inverse Reinforcement Learning
    6.2 Forward KL or Backward KL?
    6.3 f-MAX
    6.4 Forward KL: FAIRL
    6.5 Risk-Sensitive GAIL (RS-GAIL)
    6.6 Summary
  7 Gaussian Process Inverse Reinforcement Learning
    7.1 Bayesian IRL
    7.2 Gaussian Process IRL
  8 Can IRL Surpass the Teacher?
    8.1 IRL from Failure
    8.2 Learning Preferences
    8.3 T-REX: Trajectory-Ranked Reward EXtrapolation
    8.4 D-REX: Disturbance-Based Reward EXtrapolation
  9 Let Us Try It Out: IRL for Financial Cliff Walking
    9.1 Max-Causal Entropy IRL
    9.2 IRL from Failure
    9.3 T-REX
    9.4 Summary
  10 Financial Applications of IRL
    10.1 Algorithmic Trading Strategy Identification
    10.2 Inverse Reinforcement Learning for Option Pricing
    10.3 IRL of a Portfolio Investor with G-Learning
    10.4 IRL and Reward Learning for Sentiment-Based Trading Strategies
    10.5 IRL and the “Invisible Hand” Inference
  11 Summary
  12 Exercises
  References
12 Frontiers of Machine Learning and Finance
  1 Introduction
  2 Market Dynamics, IRL, and Physics
    2.1 “Quantum Equilibrium–Disequilibrium” (QED) Model
    2.2 The Langevin Equation
    2.3 The GBM Model as the Langevin Equation
    2.4 The QED Model as the Langevin Equation
    2.5 Insights for Financial Modeling
    2.6 Insights for Machine Learning
  3 Physics and Machine Learning
    3.1 Hierarchical Representations in Deep Learning and Physics
    3.2 Tensor Networks
    3.3 Bounded-Rational Agents in a Non-equilibrium Environment
  4 A “Grand Unification” of Machine Learning?
    4.1 Perception-Action Cycles
    4.2 Information Theory Meets Reinforcement Learning
    4.3 Reinforcement Learning Meets Supervised Learning: Predictron, MuZero, and Other New Ideas
  References

Index

About the Authors

Matthew F. Dixon is an Assistant Professor of Applied Math at the Illinois Institute of Technology. His research in computational methods for finance is funded by Intel. Matthew began his career in structured credit trading at Lehman Brothers in London before pursuing academia and consulting for financial institutions in quantitative trading and risk modeling. He holds a Ph.D. in Applied Mathematics from Imperial College (2007) and has held postdoctoral and visiting professor appointments at Stanford University and UC Davis, respectively. He has published over 20 peer-reviewed publications on machine learning and financial modeling, has been cited in Bloomberg Markets and the Financial Times as an AI in fintech expert, and is a frequently invited speaker in Silicon Valley and on Wall Street. He has published R packages, served as a Google Summer of Code mentor, and is the co-founder of Thalesians Ltd.

Igor Halperin is a Research Professor in Financial Engineering at NYU and an AI Research Associate at Fidelity Investments.
He was previously an Executive Director of Quantitative Research at JPMorgan for nearly 15 years. Igor holds a Ph.D. in Theoretical Physics from Tel Aviv University (1994). Prior to joining the financial industry, he held postdoctoral positions in theoretical physics at the Technion and the University of British Columbia.

Paul Bilokon is CEO and Founder of Thalesians Ltd. and an expert in electronic and algorithmic trading across multiple asset classes, having helped build such businesses at Deutsche Bank and Citigroup. Before focusing on electronic trading, Paul worked on derivatives and has served in quantitative roles at Nomura, Lehman Brothers, and Morgan Stanley. Paul was educated at Christ Church College, Oxford, and Imperial College. Apart from mathematical and computational finance, his academic interests include machine learning and mathematical logic.

Part I Machine Learning with Cross-Sectional Data

Chapter 1 Introduction

This chapter introduces the industry context for machine learning in finance, discussing the critical events that have shaped the finance industry’s need for machine learning and the unique barriers to adoption. The finance industry has adopted machine learning to varying degrees of sophistication, and how it has been adopted is heavily fragmented by the academic disciplines underpinning the applications. We review some key mathematical examples that demonstrate the nature of machine learning and how it is used in practice, with a focus on building intuition for the more technical expositions in later chapters. In particular, we begin to address many finance practitioners’ concerns that neural networks are a “black-box” by showing how they are related to existing well-established techniques such as linear regression, logistic regression, and autoregressive time series models. Such arguments are developed further in later chapters.
This chapter also introduces reinforcement learning for finance and is followed by more in-depth case studies highlighting the design concepts and practical challenges of applying machine learning in practice.

1 Background

In 1955, John McCarthy, then a young Assistant Professor of Mathematics at Dartmouth College in Hanover, New Hampshire, submitted a proposal with Marvin Minsky, Nathaniel Rochester, and Claude Shannon for the Dartmouth Summer Research Project on Artificial Intelligence (McCarthy et al. 1955). These organizers were joined in the summer of 1956 by Trenchard More, Oliver Selfridge, Herbert Simon, Ray Solomonoff, among others. The stated goal was ambitious: “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”

© Springer Nature Switzerland AG 2020, M. F. Dixon et al., Machine Learning in Finance

Thus the field of artificial intelligence, or AI, was born. Since this time, AI has perpetually strived to outperform humans on various judgment tasks (Pinar Saygin et al. 2000). The most fundamental metric for this success is the Turing test—a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human (Turing 1995). In recent years, a pattern of success in AI has emerged—one in which machines outperform in the presence of a large number of decision variables, usually with the best solution being found through evaluating an exponential number of candidates in a constrained high-dimensional space. Deep learning models, in particular, have proven remarkably successful in a wide field of applications (DeepMind 2016; Kubota 2017; Esteva et al.
2017), including image processing (Simonyan and Zisserman 2014), learning in games (DeepMind 2017), neuroscience (Poggio 2016), energy conservation (DeepMind 2016), and skin cancer diagnostics (Kubota 2017; Esteva et al. 2017). One popular account of this reasoning points to humans’ perceived inability to process large amounts of information and make decisions beyond a few key variables. But this view, even if fractionally representative of the field, does no justice to AI or human learning. Humans are not being replaced any time soon. The median estimate for human intelligence in terms of gigaflops is about 10^4 times more than the machine that ran AlphaGo. Of course, this figure is caveated on the important question of whether the human mind is even a Turing machine.

1.1 Big Data—Big Compute in Finance

The growth of machine-readable data to record and communicate activities throughout the financial system combined with persistent growth in computing power and storage capacity has significant implications for every corner of financial modeling. Since the financial crises of 2007–2008, regulatory supervisors have reoriented towards “data-driven” regulation, a prominent example of which is the collection and analysis of detailed contractual terms for the bank loan and trading book stress-testing programs in the USA and Europe, instigated by the crisis (Flood et al. 2016).

“Alternative data”—which refers to data and information outside of the usual scope of securities pricing, company fundamentals, or macroeconomic indicators—is playing an increasingly important role for asset managers, traders, and decision makers. Social media is now ranked as one of the top categories of alternative data currently used by hedge funds. Trading firms are hiring experts in machine learning with the ability to apply natural language processing (NLP) to financial news and other unstructured documents such as earnings announcement reports and SEC 10-K reports.
Data vendors such as Bloomberg, Thomson Reuters, and RavenPack are providing processed news sentiment data tailored for systematic trading models.

In de Prado (2019), some of the properties of these new, alternative datasets are explored: (a) many of these datasets are unstructured, non-numerical, and/or non-categorical, like news articles, voice recordings, or satellite images; (b) they tend to be high-dimensional (e.g., credit card transactions) and the number of variables may greatly exceed the number of observations; (c) such datasets are often sparse, containing NaNs (not-a-numbers); (d) they may implicitly contain information about networks of agents.

Furthermore, de Prado (2019) explains why classical econometric methods fail on such datasets. These methods are often based on linear algebra, which fails when the number of variables exceeds the number of observations. Geometric objects, such as covariance matrices, fail to recognize the topological relationships that characterize networks. On the other hand, machine learning techniques offer the numerical power and functional flexibility needed to identify complex patterns in a high-dimensional space, offering a significant improvement over econometric methods.

The “black-box” view of ML is dismissed in de Prado (2019) as a misconception. Recent advances in ML make it applicable to the evaluation of plausibility of scientific theories; determination of the relative informational content of variables (usually referred to as features in ML) for explanatory and/or predictive purposes; causal inference; and visualization of large, high-dimensional, complex datasets. Advances in ML remedy the shortcomings of econometric methods in goal setting, outlier detection, feature extraction, regression, and classification when it comes to modern, complex alternative datasets. For example, in the presence of p features there may be up to 2^p − p − 1 multiplicative interaction effects.
For two features there is only one such interaction effect, x_1 x_2. For three features, there are four: x_1 x_2, x_1 x_3, x_2 x_3, and x_1 x_2 x_3. For as few as ten features, there are 1,013 multiplicative interaction effects. Unlike ML algorithms, econometric models do not “learn” the structure of the data. The model specification may easily miss some of the interaction effects. The consequences of missing an interaction effect, e.g. fitting y_t = x_{1,t} + x_{2,t} + ε_t instead of y_t = x_{1,t} + x_{2,t} + x_{1,t} x_{2,t} + ε_t, can be dramatic. A machine learning algorithm, such as a decision tree, will recursively partition a dataset with complex patterns into subsets with simple patterns, which can then be fit independently with simple linear specifications. Unlike the classical linear regression, this algorithm “learns” about the existence of the x_{1,t} x_{2,t} effect, yielding much better out-of-sample results.

There is a draw towards more empirically driven modeling in asset pricing research—using ever richer sets of firm characteristics and “factors” to describe and understand differences in expected returns across assets and model the dynamics of the aggregate market equity risk premium (Gu et al. 2018). For example, Harvey et al. (2016) study 316 “factors,” which include firm characteristics and common factors, for describing stock return behavior. Measurement of an asset’s risk premium is fundamentally a problem of prediction—the risk premium is the conditional expectation of a future realized excess return. Methodologies that can reliably attribute excess returns to tradable anomalies are highly prized. Machine learning provides a non-linear empirical approach for modeling realized security returns from firm characteristics.
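The interaction-effect count quoted above, 2^p − p − 1, can be checked by direct enumeration. The sketch below (helper name is ours, not from the text) lists every multiplicative interaction of order two or higher:

```python
from itertools import combinations

def interaction_effects(features):
    """All multiplicative interaction effects: products of 2 or more features."""
    effects = []
    for order in range(2, len(features) + 1):
        effects.extend(combinations(features, order))
    return effects

# Three features give x1*x2, x1*x3, x2*x3, x1*x2*x3 -- four effects in total.
print(len(interaction_effects(["x1", "x2", "x3"])))               # 4 = 2**3 - 3 - 1
print(len(interaction_effects([f"x{i}" for i in range(1, 11)])))  # 1013 = 2**10 - 10 - 1
```

The count grows exponentially in p, which is why a hand-written model specification can easily miss the one interaction that matters.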
Dixon and Polson (2019) review the formulation of asset pricing models for measuring asset risk premia and cast neural networks in canonical asset pricing

1.2 Fintech

The rise of data and machine learning has led to a “fintech” industry, covering digital innovations and technology-enabled business model innovations in the financial sector (Philippon 2016). Examples of innovations that are central to fintech today include cryptocurrencies and the blockchain, new digital advisory and trading systems, peer-to-peer lending, equity crowdfunding, and mobile payment systems. Behavioral prediction is often a critical aspect of product design and risk management needed for consumer-facing business models; consumers or economic agents are presented with well-defined choices but have unknown economic needs and limitations, and in many cases do not behave in a strictly economically rational fashion. Therefore it is necessary to treat parts of the system as a black-box that operates under rules that cannot be known in advance.

Robo-advisors are financial advisors that provide financial advice or portfolio management services with minimal human intervention. The focus has been on portfolio management rather than on estate and retirement planning, although there are exceptions, such as Blooom. Some limit investors to the ETFs selected by the service, others are more flexible. Examples include Betterment, Wealthfront, WiseBanyan, FutureAdvisor (working with Fidelity and TD Ameritrade), Blooom, Motif Investing, and Personal Capital. The degree of sophistication and the utilization of machine learning are on the rise among robo-advisors.

Fraud Detection

In 2011 fraud cost the financial industry approximately $80 billion annually (Consumer Reports, June 2011). According to PwC’s Global Economic Crime Survey 2016, 46% of respondents in the Financial Services industry reported being victims of economic crime in the last 24 months—a small increase from 45% reported in 2014.
16% of those that reported experiencing economic crime had suffered more than 100 incidents, with 6% suffering more than 1,000. According to the survey, the top 5 types of economic crime are asset misappropriation (60%, down from 67% in 2014), cybercrime (49%, up from 39% in 2014), bribery and corruption (18%, down from 20% in 2014), money laundering (24%, as in 2014), and accounting fraud (18%, down from 21% in 2014). Detecting economic crimes is one of the oldest successful applications of machine learning in the financial services industry. See Gottlieb et al. (2006) for a straightforward overview of some of the classical methods: logistic regression, naïve Bayes, and support vector machines. The rise of electronic trading has led to new kinds of financial fraud and market manipulation. Some exchanges are investigating the use of deep learning to counter spoofing.

Blockchain technology, first implemented by Satoshi Nakamoto in 2009 as a core component of Bitcoin, is a distributed public ledger recording transactions. Its usage allows secure peer-to-peer communication by linking blocks containing hash pointers to a previous block, a timestamp, and transaction data. Bitcoin is a decentralized digital currency (cryptocurrency) which leverages the blockchain to store transactions in a distributed manner in order to mitigate flaws in the financial industry. In contrast to existing financial networks, blockchain-based cryptocurrencies expose the entire transaction graph to the public. This openness allows, for example, the most significant agents to be immediately located (pseudonymously) on the network. By processing all financial interactions, we can model the network with a high-fidelity graph, as illustrated in Fig. 1.1, so that it is possible to characterize how the flow of information in the network evolves over time.
This novel data representation permits a new form of financial econometrics—with the emphasis on the topological network structures in the microstructure rather than solely the covariance of historical time series of prices. The role of users, entities, and their interactions in the formation and dynamics of cryptocurrency risk investment, financial predictive analytics and, more generally, in re-shaping the modern financial world is a novel area of research (Dyhrberg 2016; Gomber et al. 2017; Sovbetov 2018).

Fig. 1.1 A transaction–address graph representation of the Bitcoin network. Addresses are represented by circles, transactions with rectangles, and edges indicate a transfer of coins. Blocks order transactions in time, whereas each transaction with its input and output nodes represents an immutable decision that is encoded as a subgraph on the Bitcoin network. Source: Akcora et al. (2018)

2 Machine Learning and Prediction

With each passing year, finance becomes increasingly reliant on computational methods. At the same time, the growth of machine-readable data to monitor, record, and communicate activities throughout the financial system has significant implications for how we approach the topic of modeling. The success of AI and the set of computer algorithms for learning, referred to as “machine learning,” is a result of a number of factors beyond computer hardware and software advances. Machines are able to model complex and high-dimensional data generation processes, sweep through millions of model configurations, and then robustly evaluate and correct the models in response to new information (Dhar 2013). By continuously updating and hosting a number of competing models, they prevent any one model leading us into a data gathering silo effective only for that market view.
Structurally, the adoption of ML has even shifted our behavior—the way we reason, experiment, and shape our perspectives from data using ML has led to empirically driven trading and investment decision processes.

Machine learning is a broad area, covering various classes of algorithms for pattern recognition and decision-making. In supervised learning, we are given labeled data, i.e. pairs (x_1, y_1), ..., (x_n, y_n), with x_1, ..., x_n ∈ X and y_1, ..., y_n ∈ Y, and the goal is to learn the relationship between X and Y. Each observation x_i is referred to as a feature vector and y_i is the label or response. In unsupervised learning, we are given unlabeled data, x_1, x_2, ..., x_n, and our goal is to retrieve exploratory information about the data, perhaps grouping similar observations or capturing some hidden patterns. Unsupervised learning includes cluster analysis algorithms such as hierarchical clustering, k-means clustering, self-organizing maps, Gaussian mixture, and hidden Markov models and is commonly referred to as data mining. In both instances, the data could be financial time series, news documents, SEC documents, and textual information on important events.

The third type of machine learning paradigm is reinforcement learning, an algorithmic approach for enforcing Bellman optimality of a Markov Decision Process—defining a set of states and actions in response to a changing regime so as to maximize some notion of cumulative reward. In contrast to supervised learning, which just considers a single action at each point in time, reinforcement learning is concerned with the optimal sequence of actions. It is therefore a form of dynamic programming that is used for decisions leading to optimal trade execution, portfolio allocation, and liquidation over a given horizon.

Supervised learning addresses a fundamental prediction problem: construct a non-linear predictor, Ŷ(X), of an output, Y, given a high-dimensional input matrix X = (X_1, ...
, X_P) of P variables. Machine learning can be simply viewed as the study and construction of an input–output map of the form Y = F(X) where X = (X_1, ..., X_P). F(X) is sometimes referred to as the “data-feature” map. The output variable, Y, can be continuous, discrete, or mixed. For example, in a classification problem, F : X → G, where G ∈ K := {0, ..., K − 1}, K is the number of categories, and Ĝ is the predictor.

Supervised machine learning uses a parameterized¹ model g(X|θ) over independent variables X, to predict the continuous or categorical output Y or G. The model is parameterized by one or more free parameters θ which are fitted to data. Prediction of categorical variables is referred to as classification and is common in pattern recognition. The most common approach to predicting categorical variables is to encode the response G as one or more binary values, then treat the model prediction as continuous.

? Multiple Choice Question 1

Select all the following correct statements:

1. Supervised learning involves learning the relationship between input and output variables.
2. Supervised learning requires a human supervisor to prepare labeled training data.
3. Unsupervised learning does not require a human supervisor and is therefore superior to supervised learning.
4. Reinforcement learning can be viewed as a generalization of supervised learning to Markov Decision Processes.

There are two different classes of supervised learning models, discriminative and generative. A discriminative model learns the decision boundary between the classes and implicitly learns the distribution of the output conditional on the input. A generative model explicitly learns the joint distribution of the input and output. Examples of the former are neural networks and decision trees; a restricted Boltzmann machine (RBM) is an example of the latter.
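The encoding of a categorical response as binary values mentioned above can be sketched in a few lines. This is an illustration, not code from the text; the helper name is ours:

```python
def one_hot(k, K):
    """Encode category k as a K-vector of zeros with 1 at the kth position."""
    return [1.0 if i == k else 0.0 for i in range(K)]

# A categorical response G in {0, 1, 2} becomes a binary vector that the model
# can treat as a continuous prediction target.
print(one_hot(2, 3))  # [0.0, 0.0, 1.0]
```

A model's output can then be compared component-wise against this vector, which is exactly how the probabilistic outputs in the next paragraphs are interpreted.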
Learning the joint distribution has the advantage that, by Bayes’ rule, it can also give the conditional distribution of the output given the input, and it can additionally be used for other purposes such as selecting features based on the joint probability. Generative models are typically more difficult to build. This book will mostly focus on discriminative models, but the distinction should be made clear.

A discriminative model predicts the probability of an output given an input. For example, if we are predicting the probability of a label G = k, k ∈ K, then g(x|θ) is a map g : R^p → [0, 1]^K and the outputs represent a discrete probability distribution over G referred to as a “one-hot” encoding—a K-vector of zeros with 1 at the kth position:

Ĝ_k := P(G = k | X = x, θ) = g_k(x|θ)

and hence we have that Σ_k g_k(x|θ) = 1. In particular, when G is dichotomous (K = 2), the second component of the model output is the conditional expected value of G

Ĝ := Ĝ_1 = g_1(x|θ) = 0 · P(G = 0 | X = x, θ) + 1 · P(G = 1 | X = x, θ) = E[G | X = x, θ]. (1.3)

The conditional variance of G is given by

σ² := E[(G − Ĝ)² | X = x, θ] = g_1(x|θ) − (g_1(x|θ))²,

which is an inverted parabola with a maximum at g_1(x|θ) = 0.5.

The following example illustrates a simple discriminative model which, here, is just based on a set of fixed rules for partitioning the input space.

Example 1.1 Model Selection

Suppose G ∈ {A, B, C} and the input X ∈ {0, 1}² are binary 2-vectors given in Table 1.1.

Table 1.1 Sample model data

G    x
A    (0, 1)
B    (1, 1)
C    (1, 0)
C    (0, 0)

To match the input and output in this case, one could define a parameter-free step function g(x) over {0, 1}² so that

g(x) = {1, 0, 0}  if x = (0, 1)
       {0, 1, 0}  if x = (1, 1)
       {0, 0, 1}  if x = (1, 0)
       {0, 0, 1}  if x = (0, 0).

¹ The model is referred to as non-parametric if the parameter space is infinite dimensional and parametric if the parameter space is finite dimensional.

The discriminative model g(x), defined in Eq.
1.5, specifies a set of fixed rules which predict the outcome of this experiment with 100% accuracy. Intuitively, it seems clear that such a model is flawed if the actual relation between inputs and outputs is non-deterministic. Clearly, a skilled analyst would typically not build such a model. Yet, hard-wired rules such as this are ubiquitous in the finance industry, such as rule-based technical analysis and heuristics used for scoring such as credit ratings. If the model is allowed to be general, there is no reason why this particular function should be excluded. Therefore automated systems analyzing datasets such as this may be prone to construct functions like those given in Eq. 1.5 unless measures are taken to prevent it. It is therefore incumbent on the model designer to understand what makes the rules in Eq. 1.5 objectionable, with the goal of using a theoretically sound process to generalize the input–output map to other data.

Example 1.2 Model Selection (Continued)

Consider an alternate model for Table 1.1

h(x) = {0.9, 0.05, 0.05}  if x = (0, 1)
       {0.05, 0.9, 0.05}  if x = (1, 1)
       {0.05, 0.05, 0.9}  if x = (1, 0)
       {0.05, 0.05, 0.9}  if x = (0, 0).

If this model were sampled, it would produce the data in Table 1.1 with probability (0.9)⁴ = 0.6561. We can hardly exclude this model from consideration on the basis of the results in Table 1.1, so which one do we choose? Informally, the heart of the model selection problem is that model g has excessively high confidence about the data, when that confidence is often not warranted. Many other functions, such as h, could have easily generated Table 1.1. Though there is only one model that can produce Table 1.1 with probability 1.0, there is a whole family of models that can produce the table with probability at least 0.66. Many of these plausible models do not assign overwhelming confidence to the results.
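The likelihoods quoted in the two examples are easy to reproduce. The sketch below (variable names are ours, not the book's) samples nothing; it simply multiplies the per-observation probabilities each model assigns to the labels in Table 1.1:

```python
# Per-input probability vectors over the labels (A, B, C), from Examples 1.1 and 1.2.
g = {(0, 1): [1.0, 0.0, 0.0], (1, 1): [0.0, 1.0, 0.0],
     (1, 0): [0.0, 0.0, 1.0], (0, 0): [0.0, 0.0, 1.0]}
h = {(0, 1): [0.9, 0.05, 0.05], (1, 1): [0.05, 0.9, 0.05],
     (1, 0): [0.05, 0.05, 0.9], (0, 0): [0.05, 0.05, 0.9]}
table = [((0, 1), 0), ((1, 1), 1), ((1, 0), 2), ((0, 0), 2)]  # (x, index of true label)

def table_probability(model):
    """Probability of generating exactly the labels in Table 1.1 under i.i.d. sampling."""
    prob = 1.0
    for x, label in table:
        prob *= model[x][label]
    return prob

print(table_probability(g))  # 1.0    -- the overconfident rule-based model
print(table_probability(h))  # 0.6561 -- 0.9**4, the hedged probabilistic model
```

Any model assigning at least 0.9 to each observed label reproduces the table with probability at least 0.66, which is the family of plausible models the text refers to.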
To determine which model is best on average, we need to introduce another key concept.

2.1 Entropy

Model selection in machine learning is based on a quantity known as entropy. Entropy represents the amount of information associated with each event. To illustrate the concept of entropy, let us consider a non-fair coin toss. There are two outcomes, Ω = {H, T}. Let Y be a Bernoulli random variable representing the coin flip with density f(Y = 1) = P(H) = p and f(Y = 0) = P(T) = 1 − p. The (binary) entropy of Y under f is

H(f) = −p log₂ p − (1 − p) log₂(1 − p) ≤ 1 bit.

Fig. 1.2 (Left) This figure shows the binary entropy of a biased coin. If the coin is fully biased, then each flip provides no new information as the outcome is already known and hence the entropy is zero. (Right) The concept of entropy was introduced by Claude Shannon in 1948² and was originally intended to represent an upper limit on the average length of a lossless compression encoding. Shannon’s entropy is foundational to the mathematical discipline of information theory

The reason why base 2 is chosen is so that the upper bound represents the number of bits needed to represent the outcome of the random variable, i.e. {0, 1} and hence 1 bit. The binary entropy for a biased coin is shown in Fig. 1.2. If the coin is fully biased, then each flip provides no new information as the outcome is already known. The maximum amount of information that can be revealed by a coin flip is when the coin is unbiased.

Let us now reintroduce our parameterized model in the setting of the biased coin. Let us consider an i.i.d. discrete random variable Y : Ω → Y ⊂ R and let g(y|θ) = P(ω ∈ Ω; Y(ω) = y) denote a parameterized probability mass function for Y. We can measure how different g(y|θ) is from the true density f(y) using the cross-entropy

H(f, g) := −E_f[log₂ g] = −Σ_y f(y) log₂ g(y|θ) ≥ H(f),

² C.
Shannon, A Mathematical Theory of Communication, The Bell System Technical Journal, Vol. 27, pp. 379–423, 623–656, July, October, 1948.

so that H(f, f) = H(f), where H(f) is the entropy of f:

H(f) := −E_f[log₂ f] = −Σ_y f(y) log₂ f(y).

Fig. 1.3 A comparison of the true distribution, f, of a biased coin with a parameterized model g of the coin

If g(y|θ) is a model of the non-fair coin with g(Y = 1|θ) = p_θ and g(Y = 0|θ) = 1 − p_θ, the cross-entropy is

H(f, g) = −p log₂ p_θ − (1 − p) log₂(1 − p_θ) ≥ −p log₂ p − (1 − p) log₂(1 − p). (1.9)

Let us suppose that p = 0.7 and p_θ = 0.68, as illustrated in Fig. 1.3; then the cross-entropy is

H(f, g) = −0.3 log₂(0.32) − 0.7 log₂(0.68) = 0.8826322.

Returning to our experiment in Table 1.1, let us consider the cross-entropy of these models which, as you will recall, depends on inputs too. Model g completely characterizes the data in Table 1.1 and we interpret it here as the truth. Model h, however, only summarizes some salient aspects of the data, and there is a large family of tables that would be consistent with model h. In the presence of noise, or absent strong evidence indicating that Table 1.1 was the only possible outcome, we should interpret models like h as a more plausible explanation of the actual underlying phenomenon. Evaluating the cross-entropy between model h and model g, we get −log₂(0.9) for each observation in the table, which gives the negative log-likelihood when summed over all samples. The cross-entropy is at its minimum when h = g, where we get −log₂(1.0) = 0. If g were a parameterized model, then clearly minimizing cross-entropy, or equivalently maximizing log-likelihood, gives the maximum likelihood estimate of the parameter. We shall revisit the topic of parameter estimation in Chap. 2.

? Multiple Choice Question 2

Select all of the following statements that are correct:

1.
Neural network classifiers are discriminative models which output probabilistic weightings for each category, given an input feature vector.
2. If the data is independent and identically distributed (i.i.d.), then the output of a dichotomous classifier is a conditional probability of a Bernoulli random variable.
3. A θ-parameterized discriminative model for a biased coin dependent on the environment X can be written as {g_i(X|θ)}_{i=0}^{1}.
4. A model of two biased coins, both dependent on the environment X, can be equivalently modeled with either the pair {g_i^{(1)}(X|θ)}_{i=0}^{1} and {g_i^{(2)}(X|θ)}_{i=0}^{1}, or the multi-classifier {g_i(X|θ)}_{i=0}^{3}.

2.2 Neural Networks

Neural networks represent the non-linear map F(X) over a high-dimensional input space using hierarchical layers of abstractions. An example of a neural network is a feedforward network—a sequence of L layers⁴ formed via composition:

> Deep Feedforward Networks

A deep feedforward network is a function of the form

Ŷ(X) := F_{W,b}(X) = f^{(L)}_{W^{(L)},b^{(L)}} ∘ ... ∘ f^{(1)}_{W^{(1)},b^{(1)}}(X),

where

• f^{(l)}_{W^{(l)},b^{(l)}}(X) := σ^{(l)}(W^{(l)}X + b^{(l)}) is a semi-affine function, where σ^{(l)} is a univariate and continuous non-linear activation function such as max(·, 0) or tanh(·).
• W = (W^{(1)}, ..., W^{(L)}) and b = (b^{(1)}, ..., b^{(L)}) are weight matrices and offsets (a.k.a. biases), respectively.

⁴ Note that we do not treat the input as a layer. So there are L − 1 hidden layers and an output layer.

Fig. 1.4 Examples of neural network architectures discussed in this book. Source: Van Veen, F. & Leijnen, S. (2019), “The Neural Network Zoo,” Retrieved from https://www.asimovinstitute.org/neural-network-zoo. The input nodes are shown in yellow and represent the input variables, the green nodes are the hidden neurons and represent hidden latent variables, the red nodes are the outputs or responses. Blue nodes denote hidden nodes with recurrence or memory. (a) Feedforward. (b) Recurrent.
(c) Long short-term memory

An example of a feedforward network architecture is given in Fig. 1.4a. The input nodes are shown in yellow and represent the input variables, the green nodes are the hidden neurons and represent hidden latent variables, the red nodes are the outputs or responses. The activation functions are essential for the network to approximate non-linear functions. For example, if there is one hidden layer and σ^{(1)} is the identity function, then

Ŷ(X) = W^{(2)}(W^{(1)}X + b^{(1)}) + b^{(2)} = W^{(2)}W^{(1)}X + W^{(2)}b^{(1)} + b^{(2)} = WX + b (1.10)

is just linear regression, i.e. an affine transformation.⁵ Clearly, if there are no hidden layers, the architecture recovers standard linear regression Y = WX + b and logistic regression φ(WX + b), where φ is a sigmoid or softmax function, when the response is continuous or categorical, respectively. Some of the terminology used here and the details of this model will be described in Chap. 4.

The theoretical roots of feedforward neural networks are given by the Kolmogorov–Arnold representation theorem (Arnold 1957; Kolmogorov 1957) of multivariate functions. Remarkably, Hornik et al. (1989) showed how neural networks, with one hidden layer, are universal approximators to non-linear functions. Clearly there are a number of issues in any architecture design and inference of the model parameters (W, b). How many layers? How many neurons N_l in each hidden layer? How to perform “variable selection”? How to avoid over-fitting? The details and considerations given to these important questions will be addressed in Chap. 4.

⁵ While the functional form of the map is the same as linear regression, neural networks do not assume a data generation process and hence inference is not identical to ordinary least squares regression.

3 Statistical Modeling vs.
Machine Learning

Supervised machine learning is often an algorithmic form of statistical model estimation in which the data generation process is treated as an unknown (Breiman 2001). Model selection and inference is automated, with an emphasis on processing large amounts of data to develop robust models. It can be viewed as a highly efficient data compression technique designed to provide predictors in complex settings where relations between input and output variables are non-linear and the input space is often high-dimensional. Machine learners balance filtering data with the goal of making accurate and robust decisions, often discrete and expressed as a categorical function of input data. This fundamentally differs from maximum likelihood estimators used in standard statistical models, which assume that the data was generated by the model and typically have difficulty with over-fitting, especially when applied to high-dimensional datasets.

Given the complexity of modern datasets, whether they are limit order books or high-dimensional financial time series, it is increasingly questionable whether we can posit inference on the basis of a known data generation process. It is a reasonable assertion, even if an economic interpretation of the data generation process can be given, that the exact form cannot be known all the time. The paradigm that machine learning provides for data analysis is therefore very different from the traditional statistical modeling and testing framework. Traditional fit metrics, such as R², t-values, p-values, and the notion of statistical significance, are replaced by out-of-sample forecasting and understanding the bias–variance tradeoff. Machine learning is data-driven and focuses on finding structure in large datasets. The main tools for variable or predictor selection are regularization and dropout, which are discussed in detail in Chap. 4. Table 1.2 contrasts maximum likelihood estimation-based inference with supervised machine learning.
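The collapse of a linear-activation network to a single affine map, noted in Eq. 1.10 above, can be verified numerically. The following sketch uses illustrative shapes and names of our choosing, not anything prescribed by the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def feedforward(x, layers, sigma):
    """Apply f^(L) o ... o f^(1), where each layer computes sigma(W x + b)."""
    for W, b in layers:
        x = sigma(W @ x + b)
    return x

identity = lambda z: z
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)  # hidden layer: 2 -> 3
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)  # output layer: 3 -> 1
x = rng.normal(size=2)

# With identity activations, the two layers collapse to W X + b as in Eq. 1.10.
y_net = feedforward(x, [(W1, b1), (W2, b2)], identity)
y_affine = (W2 @ W1) @ x + W2 @ b1 + b2
```

Replacing `identity` with `np.tanh` breaks this equivalence, which is precisely why non-linear activations are essential for the network to represent anything beyond linear regression.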
The comparison is somewhat exaggerated for ease of explanation. Rather, the two approaches should be viewed as opposite ends of a continuum of methods. Linear regression techniques such as LASSO and ridge regression, or hybrids such as Elastic Net, fall somewhere in the middle, providing some combination of the explanatory power of maximum likelihood estimation while retaining out-of-sample predictive performance on high-dimensional datasets.

Table 1.2 This table contrasts maximum likelihood estimation-based inference with supervised machine learning. The comparison is somewhat exaggerated for ease of explanation; however, the two should be viewed as opposite ends of a continuum of methods. Regularized linear regression techniques such as LASSO and ridge regression, or hybrids such as Elastic Net, provide some combination of the explanatory power of maximum likelihood estimation while retaining out-of-sample predictive performance on high-dimensional datasets

Property         Statistical inference                  Supervised machine learning
Goal             Causal models with explanatory power   Prediction performance, often with limited explanatory power
Data             The data is generated by a model       The data generation process is unknown
Framework        Probabilistic                          Algorithmic and probabilistic
Expressibility   Typically linear                       Non-linear
Model selection  Based on information criteria          Numerical optimization
Scalability      Limited to lower-dimensional data      Scales to high-dimensional input data
Robustness       Prone to over-fitting                  Designed for out-of-sample performance
Diagnostics      Extensive                              Limited

3.1 Modeling Paradigms

Machine learning and statistical methods can be further characterized by whether they are parametric or non-parametric. Parametric models assume some finite set of parameters and attempt to model the response as a function of the input variables and the parameters. Due to the finiteness of the parameter space, they have limited flexibility and cannot capture complex patterns in big data. As a general rule, examples of parametric models include ordinary least squares linear regression, polynomial regression, mixture models, neural networks, and hidden Markov models. Non-parametric models treat the parameter space as infinite dimensional—this is equivalent to introducing a hidden or latent function. The model structure is, for the most part, not specified a priori and they can grow in complexity with more data. Examples of non-parametric models include kernel methods such as support vector machines and Gaussian processes; the latter will be the focus of Chap. 3. Note that there is a gray area in whether neural networks are parametric or non-parametric and it strictly depends on how they are fitted. For example, it is possible to treat the parameter space in a neural network as infinite dimensional and hence characterize neural networks as non-parametric (see, for example, Philipp and Carbonell (2017)). However, this is an exception rather than the norm.

While on the topic of modeling paradigms, it is helpful to further distinguish between probabilistic models, the subject of the next two chapters, and deterministic models, the subject of Chaps. 4, 5, and 8.
The former treats the parameters as random, while the latter assumes that the parameters are given. Within probabilistic modeling, a particular niche is occupied by so-called state-space models. In these models one assumes the existence of a certain unobserved, latent process whose evolution drives a certain observable process. The evolution of the latent process and the dependence of the observable process on the latent process may be given in stochastic, probabilistic terms, which places state-space models within the realm of probabilistic modeling. Note, somewhat counter to the terminology, that a deterministic model may produce a probabilistic output; for example, a logistic regression gives the probability that the response is positive given the input variables. The choice of whether to use a probabilistic or deterministic model is discussed further in the next chapter and falls under the more general and divisive topic of "Bayesian versus frequentist modeling."

3.2 Financial Econometrics and Machine Learning

Machine learning generalizes parametric methods in financial econometrics. A taxonomy of machine learning in econometrics is shown in Fig. 1.5, together with the section references to the material in the first two parts of this book. When the data is a time series, neural networks can be configured with recurrence to build memory into the model. By relaxing the modeling assumptions needed for econometrics techniques, such as ARIMA (Box et al. 1994) and GARCH models (Bollerslev 1986), recurrent neural networks provide a semi-parametric or even non-parametric extension of classical time series methods. That use, however, comes with much caution.
Whereas financial econometrics is built on rigorous experimental design methods such as the estimation framework of Box and Jenkins (1976), recurrent neural networks have grown from the computational engineering literature, and many engineering studies overlook essential diagnostics such as Dickey–Fuller tests for verifying stationarity of the time series, a critical aspect of financial time series modeling. We take an integrative approach, showing how to cast recurrent neural networks into financial econometrics frameworks such as Box–Jenkins.

Fig. 1.5 Overview of how machine learning generalizes parametric econometrics, together with the section references to the material in the first two parts of this book.

More formally, if the input–output pairs $\mathcal{D} = \{X_t, Y_t\}_{t=1}^{N}$ are autocorrelated observations of $X$ and $Y$ at times $t = 1, \dots, N$, then the fundamental prediction problem can be expressed as a sequence prediction problem: construct a non-linear time series predictor, $\hat{Y}(\mathbf{X}_t)$, of an output, $Y$, using a high-dimensional input matrix of $T$-length sub-sequences $\mathbf{X}_t$:

$$ \hat{Y}_t = F(\mathbf{X}_t), \quad \text{where } \mathbf{X}_t := \text{seq}_{T,0}(X_t) := (X_{t-T+1}, \dots, X_t), $$

where $X_{t-j}$ is the $j$th lagged observation of $X_t$, $X_{t-j} = L^j[X_t]$, for $j = 1, \dots, T-1$. Sequence learning, then, is just a composition of a non-linear map and a vectorization of the lagged input variables. If the data is i.i.d., then no sequence is needed (i.e., $T = 1$), and we recover the standard cross-sectional prediction problem, which can be approximated with a feedforward neural network model. Recurrent neural networks (RNNs), shown in Fig. 1.4b, are time series methods or sequence learners which have achieved much success in applications such as natural language understanding, language generation, video processing, and many other tasks (Graves 2012). There are many types of RNNs—we will just concentrate on simple RNN models for brevity of notation.
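Before turning to RNNs, the windowing operator $\text{seq}_{T,0}$ can be sketched directly (a minimal illustration with hypothetical data; the function name is ours):

```python
# seq_{T,0}: map a time series to the length-T window (X_{t-T+1}, ..., X_t)
# of its most recent T observations ending at time t.

def seq(series, T, t):
    """Return the length-T sub-sequence of `series` ending at index t (0-based)."""
    return series[t - T + 1 : t + 1]

X = [10, 11, 12, 13, 14, 15]
window = seq(X, T=3, t=4)   # the window (X_2, X_3, X_4)
```

Stacking such windows for $t = T-1, \dots, N-1$ produces the high-dimensional input matrix used by the sequence predictor; with $T = 1$ each window is a single observation and the problem reduces to the cross-sectional case.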
Like multivariate structural autoregressive models, RNNs apply an autoregressive function $f^{(1)}_{W^{(1)},b^{(1)}}(\mathbf{X}_t)$ to each input sequence $\mathbf{X}_t$, where $T$ denotes the look-back period at each time step—the maximum number of lags. However, rather than directly imposing a linear autocovariance structure, a RNN provides a flexible functional form to directly model the predictor, $\hat{Y}$. A simple RNN can be understood as an unfolding of a single hidden layer neural network (a.k.a. an Elman network (Elman 1991)) over all time steps in the sequence, $j = 0, \dots, T$. For each time step $j$, the function $f^{(1)}_{W^{(1)},b^{(1)}}(\mathbf{X}_{t,j})$ generates a hidden state $Z_{t-j}$ from the current input $X_{t-j}$ and the previous hidden state $Z_{t-j-1}$, where $\mathbf{X}_{t,j} = \text{seq}_{T,j}(X_t) \subset \mathbf{X}_t$. In general form:

$$ \begin{eqnarray} \text{response:} \quad \hat{Y}_t &=& f^{(2)}_{W^{(2)},b^{(2)}}(Z_t) := \sigma^{(2)}(W^{(2)} Z_t + b^{(2)}), \\ \text{hidden states:} \quad Z_{t-j} &=& f^{(1)}_{W^{(1)},b^{(1)}}(\mathbf{X}_{t,j}) := \sigma^{(1)}(W_z^{(1)} Z_{t-j-1} + W_x^{(1)} X_{t-j} + b^{(1)}), \quad j \in \{T, \dots, 0\}, \end{eqnarray} $$

where $\sigma^{(1)}$ is an activation function such as $\tanh(x)$ and $\sigma^{(2)}$ is either a softmax function, sigmoid function, or identity function depending on whether the response is categorical, binary, or continuous, respectively. The connections between the external inputs $X_t$ and the $H$ hidden units are weighted by the time-invariant matrix $W_x^{(1)} \in \mathbb{R}^{H \times P}$. The recurrent connections between the $H$ hidden units are weighted by the time-invariant matrix $W_z^{(1)} \in \mathbb{R}^{H \times H}$. Without such a matrix, the architecture is simply a single-layered feedforward network without memory—each independent observation $X_t$ is mapped to an output $\hat{Y}_t$ using the same hidden layer. It is important to note that a plain RNN is, de facto, not a deep network. The recurrent layer has the deceptive appearance of being a deep network when "unfolded," i.e. viewed as being repeatedly applied to each new input, $X_{t-j}$, so that $Z_{t-j} = \sigma^{(1)}(W_z^{(1)} Z_{t-j-1} + W_x^{(1)} X_{t-j})$.
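A minimal numerical sketch of this unfolding (our own illustration, not the book's code): with a single hidden unit, identity activations, no biases, and both the recurrent and input weights set to the same scalar $\phi$ (one way of realizing the one-neuron special case discussed next), repeatedly applying the recurrence reproduces an autoregression with geometric lag weights $\phi_i = \phi^i$.

```python
# One linear recurrent unit, unfolded over the last p observations,
# with W_z = W_x = phi (our choice of parameter tying) and no activation.

def rnn_predict(x_lags, phi):
    """x_lags = (X_{t-p}, ..., X_{t-1}); returns the unfolded RNN output."""
    z = 0.0
    for x in x_lags:          # unfold: same weight phi reused at every step
        z = phi * (z + x)     # Z_s = phi * Z_{s-1} + phi * X_s
    return z

def ar_predict(x_lags, phi):
    """AR(p) with geometric weights: sum over i of phi**i * X_{t-i}."""
    p = len(x_lags)
    return sum(phi ** i * x_lags[p - i] for i in range(1, p + 1))

lags = [1.0, 2.0, -0.5, 3.0]
delta = rnn_predict(lags, 0.4) - ar_predict(lags, 0.4)   # agrees to rounding
```

The key point the unfolding makes visible in code: there is only one weight, applied at every step, so the "depth" of the unrolled network adds memory but no new parameters.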
However, the same recurrent weights remain fixed over all repetitions—there is only one recurrent layer with weights $W_z^{(1)}$. The amount of memory in the model is equal to the sequence length $T$. This means that the maximum lagged input that affects the output, $\hat{Y}_t$, is $X_{t-T}$. We shall see later in Chap. 8 that RNNs are simply non-linear autoregressive models with exogenous variables (NARX). In the special case of univariate time series prediction $\hat{X}_t = F(\mathbf{X}_{t-1})$, using $T = p$ previous observations $\{X_{t-i}\}_{i=1}^{p}$, only one neuron in the recurrent layer with weight $\phi$, and no activation function, a RNN is an AR(p) model with zero drift and geometric weights:

$$ \hat{X}_t = (\phi_1 L + \phi_2 L^2 + \cdots + \phi_p L^p)[X_t], \quad \phi_i := \phi^i, $$

with $|\phi| < 1$ to ensure that the model is stationary. The order $p$ can be found through autocorrelation tests of the residual if we make the additional assumption that the error $X_t - \hat{X}_t$ is Gaussian. Example tests include the Ljung–Box and Lagrange multiplier tests. However, such parametric diagnostic tests should be used with caution, since their underlying conditions may not be satisfied on complex time series data. Because the weights are time independent, plain RNNs are static time series models and are not suited to non-covariance-stationary time series data. Additional layers can be added to create deep RNNs by stacking recurrent layers on top of each other, using the hidden state of one layer as the input to the next layer. However, RNNs have difficulty in learning long-term dynamics, due in part to the vanishing and exploding gradients that can result from propagating the gradients down through the many unfolded layers of the network. Moreover, RNNs, like most methods in supervised machine learning, are inherently designed for stationary data. Oftentimes, financial time series data is non-stationary. In Chap.
8, we shall introduce gated recurrent units (GRUs) and long short-term memory (LSTM) networks; the latter is shown in Fig. 1.4c as a particular form of recurrent network which provides a solution to this problem by incorporating memory units. In the language of time series modeling, we shall construct dynamic RNNs which are suitable for non-stationary data. More precisely, we shall see that these architectures learn when to forget previous hidden states and when to update hidden states given new information. This ability to model hidden states is of central importance in financial time series modeling and applications in trading. Mixture models and hidden Markov models have historically been the primary probabilistic methods used in quantitative finance and econometrics to model regimes and are reviewed in Chap. 2 and Chap. 7, respectively. Readers are encouraged to review Chap. 2 before reading Chap. 7.

? Multiple Choice Question 3
Select all the following correct statements:
1. A linear recurrent neural network with a memory of p lags is an autoregressive model AR(p) with non-parametric error.
2. Recurrent neural networks, as time series models, are guaranteed to be stationary, for any choice of weights.
3. The amount of memory in a shallow recurrent network corresponds to the number of times a single perceptron layer is unfolded.
4. The amount of memory in a deep recurrent network corresponds to the number of perceptron layers.

3.3 Over-fitting

Undoubtedly the pivotal concern with machine learning, and especially deep learning, is the propensity for over-fitting given the number of parameters in the model. This is why skill is needed to fit deep neural networks. In frequentist statistics, over-fitting is addressed by penalizing the likelihood function with a penalty term. A common approach is to select models based on Akaike's information criterion (Akaike 1973), which assumes that the model error is Gaussian.
The penalty term is in fact a sample bias correction term to the Kullback–Leibler divergence (the relative entropy) and is applied post hoc to the unpenalized maximized likelihood. Machine learning methods such as the least absolute shrinkage and selection operator (LASSO) and ridge regression more conveniently optimize a loss function with a penalty term directly. Moreover, the approach is not restricted by distributional assumptions on the modeling error. LASSO, or L1 regularization, favors sparser parameterizations, whereas ridge regression, or L2 regularization, reduces the magnitude of the parameters. Regularization is arguably the most important reason why machine learning methods have been so successful in finance and other disciplines; conversely, its absence is why neural networks fell out of favor in the finance industry in the 1990s. Regularization and information criteria are closely related, a key observation which enables us to express model selection in terms of information entropy and hence root our discourse in the works of Shannon (1948), Wiener (1964), and Kullback and Leibler (1951). How to choose weights, the concept of regularization for model selection, and cross-validation are discussed in Chap. 4. It turns out that the choice of priors in Bayesian modeling provides a probabilistic analog to LASSO and ridge regression: L2 regularization is equivalent to a Gaussian prior and L1 to a Laplacian prior. Another important feature of Bayesian models is that they have a natural, built-in mechanism for the prevention of over-fitting. Introductory Bayesian modeling is covered extensively in Chap. 2.

4 Reinforcement Learning

Recall that supervised learning is essentially a paradigm for inferring the parameters of a map between input data and an output through minimizing an error over training samples. Performance generalization is achieved through estimating regularization parameters on cross-validation data.
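As a minimal sketch of this procedure (synthetic data; the split sizes, penalty grid, and function names are arbitrary choices of ours), one can estimate a ridge penalty by directly minimizing the error on a held-out validation set:

```python
import numpy as np

def ridge_fit(X, y, alpha):
    """Closed-form ridge estimate: (X'X + alpha*I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ y)

# Synthetic sparse-signal data (our own example).
rng = np.random.default_rng(1)
X = rng.standard_normal((60, 10))
beta = np.zeros(10)
beta[:2] = [1.0, -0.5]
y = X @ beta + 0.5 * rng.standard_normal(60)

X_tr, y_tr = X[:40], y[:40]   # training set: fit the weights
X_va, y_va = X[40:], y[40:]   # validation set: choose the penalty

def val_error(alpha):
    b = ridge_fit(X_tr, y_tr, alpha)
    return float(np.mean((X_va @ b - y_va) ** 2))

alphas = [0.01, 0.1, 1.0, 10.0, 100.0]
best = min(alphas, key=val_error)   # penalty with the lowest held-out error
```

The penalty is chosen by out-of-sample error rather than by an in-sample information criterion, which is exactly the shift in model-selection practice described above.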
Once the weights of a network are learned, they are not updated in response to new data. For this reason, supervised learning can be considered an "offline" form of learning, i.e. the model is fitted offline. Note that we avoid referring to the model as static, since it is possible, under certain types of architectures, to create a dynamical model in which the map between input and output changes over time. For example, as we shall see in Chap. 8, a LSTM maintains a set of hidden state variables which result in a different form of the map over time. In supervised learning, a "teacher" provides an exact right output for each data point in a training set. This can be viewed as "feedback" from the teacher, which for supervised learning amounts to informing the agent of the correct label each time the agent classifies a new data point in the training dataset. This is the opposite of unsupervised learning, a setting with no teacher to provide correct answers to a ML algorithm and, respectively, no feedback from a teacher. An alternative learning paradigm, referred to as "reinforcement learning," models a sequence of decisions over a state space. The key difference of this setting from supervised learning is that the feedback from the teacher sits somewhere between the two extremes of unsupervised learning (no feedback at all) and supervised learning (feedback in the form of the right labels). Instead, partial feedback is provided by "rewards" which encourage a desired behavior, but without explicitly instructing the agent what exactly it should do, as in supervised learning. The simplest way to reason about reinforcement learning is to consider machine learning tasks as a problem of an agent interacting with an environment, as illustrated in Fig. 1.6.

Fig. 1.6 This figure shows a reinforcement learning agent which performs actions at times $t_0, \dots, t_n$.
The agent perceives the environment through the state variable $S_t$. In order to perform better on its task, feedback on an action $a_t$ is provided to the agent at the next time step in the form of a reward $R_t$.

The agent learns about the environment in order to perform better on its task, which can be formulated as the problem of performing an optimal action. If the action performed by an agent is always the same and does not impact the environment, then we simply have a perception task, because learning about the environment only serves to improve performance on this task. For example, you might have a model for the prediction of mortgage defaults where the action is to compute the default probability for a given mortgage. The agent, in this case, is just a predictive model that produces a number, and there is no measurement of how the model impacts the environment. For example, if a model at a large mortgage broker predicted that all borrowers would default, it is very likely that this would have an impact on the mortgage market, and consequently on future predictions. However, this feedback is ignored: the agent just performs perception tasks, ideally suited for supervised learning. Another example is in trading: once an action is taken by the strategy, there is feedback from the market, which is referred to as "market impact." A reinforcement learner is configured to maximize a long-run utility function under some assumptions about the environment. One simple assumption is to treat the environment as fully observable and evolving as a first-order Markov process. A Markov Decision Process (MDP) is then the simplest modeling framework that allows us to formalize the problem of reinforcement learning. A task solved by MDPs is the problem of optimal control: the problem of choosing action variables over some period of time in order to maximize some objective function that depends both on the future states and the actions taken.
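The interaction loop of Fig. 1.6 can be sketched schematically (a toy example of our own: an agent walks along a line toward a goal state, receiving a reward of 1 whenever it occupies the goal):

```python
# Schematic agent-environment loop: observe state S_t, act a_t, receive R_{t+1}.

def env_step(state, action):
    """Environment: returns (next_state, reward). States are 0..4; 4 is the goal."""
    next_state = max(0, min(4, state + action))
    reward = 1.0 if next_state == 4 else 0.0
    return next_state, reward

def policy(state):
    """A fixed policy u(s) = 'step right'; a learner would update this from rewards."""
    return +1

state, total_reward = 0, 0.0
for t in range(6):                               # times t_0, ..., t_5
    action = policy(state)                       # agent acts on the observed state
    state, reward = env_step(state, action)      # environment transitions
    total_reward += reward                       # delayed feedback on the action
```

Here the policy is hard-coded, so this is pure perception of rewards; reinforcement learning replaces `policy` with a mapping that is itself updated from the stream of `(state, action, reward)` feedback.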
In a discrete-time setting, the state of the environment $S_t \in \mathcal{S}$ is used by the learner (a.k.a. agent) to decide which action $a_t \in \mathcal{A}(S_t)$ to take at each time step. This decision is made dynamic by updating the probabilities of selecting each action conditioned on $S_t$. These conditional probabilities $\pi_t(a|s)$ are referred to as the agent's policy. The mechanism for updating the policy as a result of its learning is as follows: one time step later, and as a consequence of its action, the learner receives a reward defined by a reward function, i.e. an immediate reward given the current state $S_t$ and the action taken $a_t$. As a result of the dynamic environment and the action of the agent, we transition to a new state $S_{t+1}$. A reinforcement learning method specifies how to change the policy so as to maximize the total amount of reward received over the long run. The constructs for reinforcement learning will be formalized in Chap. 9, but we shall informally discuss some of the challenges of reinforcement learning in finance here. Most of the impressive progress reported recently with reinforcement learning by researchers and companies such as Google's DeepMind or OpenAI, such as playing video games, walking robots, and self-driving cars, assumes complete observability, using Markovian dynamics. The much more challenging problem, which is a better setting for finance, is how to formulate reinforcement learning for partially observable environments, where one or more variables are hidden. Another, more modest, challenge is how to choose the optimal policy when the environment is fully observable but the dynamic process for how the states evolve over time is unknown. It may be possible, for simple problems, to reason about how the states evolve, perhaps adding constraints on the state-action space. However, the problem is especially acute in high-dimensional discrete state spaces, arising from, say, discretizing continuous state spaces.
Here, it is typically intractable to enumerate all combinations of states and actions, and it is hence not possible to solve the optimal control problem exactly. Chapter 9 will present approaches for approximating the optimal control problem. In particular, we will turn to neural networks to approximate an action function known as a "Q-function." Such an approach is referred to as "Q-learning" and, more recently, with the use of deep learning to approximate the Q-function, as "deep Q-learning." To fix ideas, we consider a number of examples to illustrate different aspects of the problem formulation and the challenges in applying reinforcement learning. We start with arguably the most famous toy problem used to study stochastic optimal control theory, the "multi-armed bandit problem." This problem is especially helpful in developing our intuition of how an agent must balance the competing goals of exploring different actions versus exploiting known outcomes.

Example 1.3 Multi-armed Bandit Problem
Suppose there is a fixed and finite set of $n$ actions, a.k.a. arms, denoted $\mathcal{A}$. Learning proceeds in rounds, indexed by $t = 1, \dots, T$. The number of rounds $T$, a.k.a. the time horizon, is fixed and known in advance. In each round, the agent picks an arm $a_t$ and observes the reward $R_t(a_t)$ for the chosen arm only. For the avoidance of doubt, the agent does not observe rewards for the other actions that could have been selected. If the goal is to maximize the total reward over all rounds, how should the agent choose an arm? Suppose the rewards $R_t$ are independent and identically distributed random variables with distribution $\nu \in [0, 1]^n$ and mean vector $\mu$. The best action is then the arm with the maximum mean $\mu^*$. The difference between the maximum reward the player could have obtained, had she known all the parameters, and the player's accumulated reward (a.k.a. the "cumulative regret") is

$$ \bar{R}_T = T\mu^* - \mathbb{E}\left[\sum_{t \in [T]} R_t\right]. $$

Intuitively, an agent should pick arms that performed well in the past, yet the agent needs to ensure that no good option has been missed.

The theoretical origins of reinforcement learning are in stochastic dynamic programming. In this setting, an agent must make a sequence of decisions under uncertainty about the reward. If we can characterize this uncertainty with probability distributions, then the problem is typically much easier to solve. We shall assume that the reader has some familiarity with dynamic programming—the extension to stochastic dynamic programming is a relatively minor conceptual development. Note that Chap. 9 will review pertinent aspects of dynamic programming, including Bellman optimality. The following optimal payoff example will likely serve as a simple review exercise in dynamic programming, albeit with uncertainty introduced into the problem. As we follow the mechanics of solving the problem, the example exposes the inherent difficulty of relaxing our assumptions about the distribution of the uncertainty.

Example 1.4 Uncertain Payoffs
A strategy seeks to allocate $600 across 3 markets, each equally profitable once the position is held, returning 1% of the size of the position over a short trading horizon $[t, t+1]$. However, the markets vary in liquidity, and there is a lower probability that larger orders will be filled over the horizon. The amount allocated to each market must be one of $K = \{100, 200, 300\}$. The fill probabilities and resulting expected returns $R_i$ for each market $M_i$ are:

Market  Allocation  Fill probability  Return
M1      100         0.8               0.8
M1      200         0.7               1.4
M1      300         0.6               1.8
M2      100         0.75              0.75
M2      200         0.7               1.4
M2      300         0.65              1.95
M3      100         0.75              0.75
M3      200         0.75              1.5
M3      300         0.6               1.8

The optimal allocation problem under uncertainty is a stochastic dynamic programming problem. We can define value functions $v_i(x)$ for the total allocation amount $x$ at each stage of the problem, corresponding to the markets.
We then find the optimal allocation using the backward recursive formulae:

$$ \begin{eqnarray} v_3(x) &=& R_3(x), \quad \forall x \in K, \\ v_2(x) &=& \max_{k \in K}\{R_2(k) + v_3(x - k)\}, \quad \forall x \in K + 200, \\ v_1(x) &=& \max_{k \in K}\{R_1(k) + v_2(x - k)\}, \quad x = 600. \end{eqnarray} $$

The first table below tabulates the values of $R_2 + v_3$ corresponding to the second stage of the backward induction for each pair $(M_2, M_3)$:

R2 + v3    M3 = 100   M3 = 200   M3 = 300
M2 = 100   1.5        2.25       2.55
M2 = 200   2.15       2.9        3.2
M2 = 300   2.7        3.45       3.75

The second table tabulates the values of $R_1 + v_2$ corresponding to the third and final stage of the backward induction for each tuple $(M_1, M_2^*, M_3^*)$:

M1    (M2*, M3*)   v2     R1    R1 + v2
100   (300, 200)   3.45   0.8   4.25
200   (200, 200)   2.9    1.4   4.3
300   (100, 200)   2.25   1.8   4.05

In the above example, we can see that the allocation $\{200, 200, 200\}$ maximizes the reward to give $v_1(600) = 4.3$. While this exercise is a straightforward application of a Bellman optimality recurrence relation, it provides a glimpse of the types of stochastic dynamic programming problems that can be solved with reinforcement learning. In particular, if the fill probabilities are unknown but must be learned over time by observing the outcome over each period $[t_i, t_{i+1})$, then the problem above cannot be solved by just using backward recursion. Instead we move to the framework of reinforcement learning and attempt to learn the best actions from the data. Clearly, in practice, the example is much too simple to be representative of real-world problems in finance—the profits will be unknown and the state space significantly larger, compounding the need for reinforcement learning. However, it is often very useful to benchmark reinforcement learning on simple stochastic dynamic programming problems with closed-form solutions. In the previous example, we assumed that the problem was static—the variables in the problem did not change over time.
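The backward recursion of Example 1.4 can be reproduced in a few lines (a sketch; the dictionary layout and variable names are ours, and the expected returns $R_i(k) = 0.01 \cdot k \cdot$ fill probability are taken from the table):

```python
# Backward induction for the three-market allocation problem of Example 1.4.

K = [100, 200, 300]
R = {
    1: {100: 0.80, 200: 1.40, 300: 1.80},   # market M1
    2: {100: 0.75, 200: 1.40, 300: 1.95},   # market M2
    3: {100: 0.75, 200: 1.50, 300: 1.80},   # market M3
}

# Stage 3: v3(x) = R3(x) for x in K.
v3 = {k: R[3][k] for k in K}
# Stage 2: v2(x) = max_k {R2(k) + v3(x - k)} for x in K + 200 = {300, 400, 500}.
v2 = {x: max(R[2][k] + v3[x - k] for k in K if x - k in v3)
      for x in (300, 400, 500)}
# Stage 1: v1(600) = max_k {R1(k) + v2(600 - k)}.
v1 = max(R[1][k] + v2[600 - k] for k in K)
best_k1 = max(K, key=lambda k: R[1][k] + v2[600 - k])   # optimal M1 allocation
```

Running the recursion recovers the value $v_1(600) = 4.3$ and the equal split $\{200, 200, 200\}$; note that it only works because the return tables $R_i$ are known, which is exactly the assumption reinforcement learning relaxes.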
This is the so-called static allocation problem and is somewhat idealized. Our next example will provide a glimpse of the types of problems that typically arise in optimal portfolio investment, where the random variables are dynamic. The example is also seated in more classical finance theory, that of a "Markowitz portfolio," in which the investor seeks to maximize a risk-adjusted long-term return and the wealth process is self-financing.⁶

Example 1.5 Optimal Investment in an Index Portfolio
Let $S_t$ be the time-$t$ price of a risky asset such as a sector exchange-traded fund (ETF). We assume that our setting is discrete time, and we denote different time steps by integer-valued indices $t = 0, \dots, T$, so there are $T + 1$ values on our discrete-time grid. The discrete-time random evolution of the risky asset $S_t$ is

$$ S_{t+1} = S_t(1 + \phi_t), $$

where $\phi_t$ is a random variable whose probability distribution may depend on the current asset value $S_t$. To ensure non-negativity of prices, we assume that $\phi_t$ is bounded from below, $\phi_t \geq -1$. Consider the wealth process $W_t$ of an investor who starts with an initial wealth $W_0 = 1$ at time $t = 0$ and, at each period $t = 0, \dots, T-1$, allocates a fraction $u_t = u_t(S_t)$ of the total portfolio value to the risky asset, while the remaining fraction $1 - u_t$ is invested in a risk-free bank account that pays a risk-free interest rate $r_f = 0$. We will refer to the set of decision variables for all time steps as a policy $\pi := \{u_t\}_{t=0}^{T-1}$. The wealth process is self-financing, and so the wealth at time $t+1$ is given by

$$ W_{t+1} = (1 - u_t)W_t + u_t W_t(1 + \phi_t). $$

This produces the one-step return

$$ r_t = \frac{W_{t+1} - W_t}{W_t} = u_t\phi_t. $$

Note this is a random function of the asset price $S_t$. We define one-step rewards $R_t$ for $t = 0, \dots, T-1$ as risk-adjusted returns

$$ R_t = r_t - \lambda \text{Var}[r_t|S_t] = u_t\phi_t - \lambda u_t^2 \text{Var}[\phi_t|S_t], $$

where $\lambda$ is a risk-aversion parameter.ᵃ We now consider the problem of maximization of the following concave function of the control variable $u_t$:

$$ V(s) = \max_{u_t} \mathbb{E}\left[\sum_{t=0}^{T-1} R_t \,\Big|\, S_t = s\right] = \max_{u_t} \mathbb{E}\left[\sum_{t=0}^{T-1} \left(u_t\phi_t - \lambda u_t^2 \text{Var}[\phi_t|S_t]\right) \,\Big|\, S_t = s\right]. \qquad (1.16) $$

Equation 1.16 defines an optimal investment problem for $T - 1$ steps faced by an investor whose objective is to optimize risk-adjusted returns over each period. This optimization problem is equivalent to maximizing the long-run returns over the period $[0, T]$. For each $t = T-1, T-2, \dots, 0$, the optimality condition for the action $u_t$ is obtained by maximization of $V^\pi(s)$ with respect to $u_t$:

$$ u_t^* = \frac{\mathbb{E}[\phi_t | S_t]}{2\lambda \text{Var}[\phi_t|S_t]}, \qquad (1.17) $$

where we allow for short selling in the ETF (i.e., $u_t < 0$) and borrowing of cash ($u_t > 1$). This is an example of a stochastic optimal control problem for a portfolio that maximizes its cumulative risk-adjusted return by repeatedly rebalancing between cash and a risky asset. Such problems can be solved by means of dynamic programming or reinforcement learning. In our problem, the dynamic programming solution is given by the analytical expression (1.17). Chapter 9 will present more complex settings, including reinforcement learning approaches to optimal control problems, as well as demonstrate how expressions like the optimal action of Eq. 1.17 can be computed in practice.

⁶ A wealth process is self-financing if, at each time step, any purchase of an additional quantity of the risky asset is funded from the bank account. Vice versa, any proceeds from a sale of some quantity of the asset go to the bank account.

ᵃ Note, for the avoidance of doubt, that the risk-aversion parameter must be scaled by a factor of 1/2 to ensure consistency with the finance literature.

? Multiple Choice Question 4
Select all the following correct statements:
1.
The name "Markov processes" first historically appeared as a result of a misspelled name "Mark-Off processes" that was previously used for random processes that describe learning in certain types of video games, but has become a standard terminology since then.
2. The goal of (risk-neutral) reinforcement learning is to maximize the expected total reward by choosing an optimal policy.
3. The goal of (risk-neutral) reinforcement learning is to neutralize risk, i.e. make the variance of the total reward equal zero.
4. The goal of risk-sensitive reinforcement learning is to teach a RL agent to pick action policies that are most prone to risk of failure. Risk-sensitive RL is used, e.g. by venture capitalists and other sponsors of RL research, as a tool to assess the feasibility of new RL projects.

5 Examples of Supervised Machine Learning in Practice

The practice of machine learning in finance has grown somewhat commensurately with both theoretical and computational developments in machine learning. Early adopters have been the quantitative hedge funds, including Bridgewater Associates, Renaissance Technologies, WorldQuant, D.E. Shaw, and Two Sigma, who have embraced novel machine learning techniques, although there are mixed degrees of adoption and a healthy skepticism exists that machine learning is a panacea for quantitative trading. In 2015, Bridgewater Associates announced a new artificial intelligence unit, having hired people from IBM Watson with expertise in deep learning. Anthony Ledford, chief scientist at MAN AHL: "It's at an early stage. We have set aside a pot of money for test trading.
With deep learning, if all goes well, it will go into test trading, as other machine learning approaches have." Winton Capital Management's CEO David Harding: "People started saying, 'There's an amazing new computing technique that's going to blow away everything that's gone before.' There was also a fashion for genetic algorithms. Well, I can tell you none of those companies exist today—not a sausage of them." Some qualifications are needed to more accurately assess the extent of adoption. For instance, there is a false line of reasoning that ordinary least squares regression and logistic regression, as well as Bayesian methods, are machine learning techniques. Only if the modeling approach is algorithmic, without positing a data generation process, can the approach be correctly categorized as machine learning. So regularized regression without parametric assumptions on the error distribution is an example of machine learning, whereas unregularized regression with, say, Gaussian error is not a machine learning technique. The functional form of the input–output map is the same in both cases, which is why we emphasize that the functional form of the map is not a sufficient condition for distinguishing ML from statistical methods. With that caveat, we shall view some examples that not only illustrate some of the important practical applications of machine learning prediction in algorithmic trading, high-frequency market making, and mortgage modeling, but also provide a brief introduction to applications that will be covered in more detail in later chapters.

5.1 Algorithmic Trading

Algorithmic trading is a natural playground for machine learning. The idea behind algorithmic trading is that trading decisions should be based on data, not intuition. Therefore, it should be viable to automate this decision-making process using an algorithm, either specified or learned.
The advantages of algorithmic trading include complex market pattern recognition, reduced human error, the ability to test on historical data, etc. In recent times, as more and more information is being digitized, the feasibility and capacity of algorithmic trading have been expanding drastically. The number of hedge funds, for example, that apply machine learning for algorithmic trading is steadily increasing. Here we provide a simple example of how machine learning techniques can be used not only to improve traditional algorithmic trading methods, but also to provide new trading strategy suggestions. The example here is not intended to be the “best” approach, but rather indicative of the more out-of-the-box strategies that machine learning facilitates, with the emphasis on minimizing out-of-sample error by pattern matching through efficient compression across high-dimensional datasets.

Momentum strategies are among the most well-known algo-trading strategies; in general, strategies that predict prices from historic price data are categorized as momentum strategies. Traditionally, momentum strategies are based on certain regression-based econometric models, such as ARIMA or VAR (see Chap. 6). A drawback of these models is that they impose strong linearity, which is not consistently plausible for time series of prices. Another caveat is that these models are parametric and thus have strong bias, which often causes underfitting. Many machine learning algorithms are both non-linear and semi-/non-parametric, and therefore prove complementary to existing econometric models. In this example we build a simple momentum portfolio strategy with a feedforward neural network. We focus on the S&P 500 stock universe, and assume we have daily close prices for all stocks over a ten-year period.7

Problem Formulation

The most complex practical aspect of machine learning is how to choose the input (“features”) and output.
The type of desired output will determine whether a regressor or classifier is needed, but the general rule is that it must be actionable (i.e., tradable). Suppose our goal is to invest in an equally weighted, long-only stock portfolio only if it beats the S&P 500 index benchmark (which is a reasonable objective for a portfolio manager). We can therefore label the portfolio at every observation t based on the mean directional excess return of the portfolio:

$$
G_t = \begin{cases} 1, & \frac{1}{N}\sum_{i=1}^{N}\left(r^i_{t+h,t} - \tilde{r}_{t+h,t}\right) \ge \epsilon,\\[4pt] 0, & \frac{1}{N}\sum_{i=1}^{N}\left(r^i_{t+h,t} - \tilde{r}_{t+h,t}\right) < \epsilon, \end{cases} \qquad (1.18)
$$

where $r^i_{t+h,t}$ is the return of stock $i$ between times $t$ and $t+h$, $\tilde{r}_{t+h,t}$ is the return of the S&P 500 index in the same period, and $\epsilon$ is some target next-period excess portfolio return. Without loss of generality, we could invest in the whole universe (N = 500), although this is likely to have adverse practical implications such as excessive transaction costs. We could easily have restricted the number of stocks to a subset, such as the top decile of performing stocks in the last period. Framed this way, the machine learner is thus informing us when our stock selection strategy will outperform the market. It is largely agnostic to how the stocks are selected, provided the procedure is systematic and based solely on the historic data provided to the classifier. It is further worth noting that the decision to hold the customized portfolio has a non-linear relationship with the past returns of the universe. To make the problem more concrete, let us set h = 5 days. The algorithmic strategy here is therefore automating the decision to invest in the customized

7 The question of how much data is needed to train a neural network is a central one, with the immediate concern being insufficient data to avoid over-fitting. The amount of data needed is complex to assess; however, it is partly dependent on the number of edges in the network and can be assessed through bias–variance analysis, as described in Chap. 4.
portfolio or the S&P 500 index every week based on the previous 5-day realized returns of all stocks. To apply machine learning to this decision, the problem translates into finding the weights in the network between past returns and the decision to invest in the equally weighted portfolio. For the avoidance of doubt, we emphasize that the interpretation of the optimal weights differs substantially from Markowitz’s mean–variance portfolios, which simply find the portfolio weights that optimize expected returns for a given risk tolerance. Here, we either invest equal amounts in all stocks of the portfolio or invest the same amount in the S&P 500 index, and the weights in the network signify the relevance of past stock returns to the expected excess portfolio return outperforming the market.

Data

Feature engineering is always important in building models and requires careful consideration. Since the original price data does not meet several machine learning requirements, such as stationarity and i.i.d. distributional properties, one needs to engineer input features to prevent potential “garbage-in-garbage-out” phenomena. In this example, we take a simple approach by using only the 5-day realized returns of all S&P 500 stocks.8 Returns are scale-free and no further standardization is needed. So for each time t, the input features are

$$
X_t = \left(r^1_{t,t-5}, \ldots, r^{500}_{t,t-5}\right). \qquad (1.19)
$$

Now we can aggregate the features and labels into a panel indexed by date. Each column is an entry in Eq. 1.19, except for the last column, which is the assigned label from Eq. 1.18, based on the realized excess stock returns of the portfolio.

Table 1.3 Training samples for a classification problem

Date          X1       X2      ...   X500     G
2017-01-03    0.051   −0.035   ...   0.072    0
2017-01-04   −0.092    0.125   ...  −0.032    0
2017-01-05    0.021    0.063   ...  −0.058    1
...
2017-12-29    0.093   −0.023   ...   0.045    …
2017-12-30    0.020    0.019   ...   0.022    …
2017-12-31   −0.109    0.025   ...  −0.092    …
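As a concrete illustration, the labeling rule of Eq. 1.18 and the feature construction of Eq. 1.19 can be sketched in a few lines. The universe size, the number of dates, and the simulated random returns below are illustrative assumptions standing in for the real S&P 500 panel, and the target excess return ε is set to zero:

```python
# Sketch of the label/feature construction (Eqs. 1.18-1.19) on synthetic data.
# N, T, and the simulated returns stand in for the real S&P 500 universe.
import numpy as np

rng = np.random.default_rng(42)
N, T = 500, 520                    # universe size, number of weekly dates
eps = 0.0                          # target excess portfolio return (epsilon)

stock_rets = rng.normal(0.0, 0.02, size=(T, N))   # r^i_{t+h,t}, h = 5 days
index_rets = rng.normal(0.0, 0.01, size=T)        # index return r~_{t+h,t}

X = stock_rets[:-1]                # features: last period's 5-day returns
# Label G_t = 1 when the equally weighted portfolio beats the index next period
excess = stock_rets[1:].mean(axis=1) - index_rets[1:]
G = (excess >= eps).astype(int)

print(X.shape, G.shape)            # (519, 500) (519,)
```

Each row of `X` paired with the corresponding entry of `G` is one training sample of the panel in Table 1.3.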
An example of the labeled input data (X, G) is shown in Table 1.3. The process by which we train the classifier and evaluate its performance will be described in Chap. 4, but this example illustrates how algo-trading strategies can be crafted around supervised machine learning. Our model problem could be tailored for specific risk-reward and performance reporting metrics such as, for example, Sharpe or information ratios meeting or exceeding a threshold. ε is typically chosen to be a small value so that the labels are not too imbalanced. As the value is increased, the problem becomes an “outlier prediction problem”—a highly imbalanced classification problem which requires more advanced sampling and interpolation techniques beyond an off-the-shelf classifier.

8 Note that the composition of the S&P 500 changes over time and so we should interpret a feature as a fixed symbol.

In the next example, we shall turn to another important aspect of machine learning in algorithmic trading, namely execution. How the trades are placed is a significant aspect of algorithmic trading strategy performance, not only to minimize the price impact of market-taking strategies but also for market making. Here we shall look to transactional data to perfect the execution, an engineering challenge by itself just to process market feeds of tick-by-tick exchange transactions. The example considers a market making application but could be adapted for price impact and other execution considerations in algorithmic trading by moving to a reinforcement learning framework. A common mistake is to assume that building a predictive model will result in a profitable trading strategy. Clearly, the consideration given to reliably evaluating machine learning in the context of trading strategy performance is a critical component of its assessment.

5.2 High-Frequency Trade Execution

Modern financial exchanges facilitate the electronic trading of instruments through an instantaneous double auction.
At each point in time, the market demand and supply can be represented by an electronic limit order book, a cross-section of orders to execute at various price levels away from the market price, as illustrated in Table 1.4. Electronic market makers will quote on both sides of the market in an attempt to capture the bid–ask spread. Sometimes a large market order, or a succession of smaller market orders, will consume an entire price level. This is why the market price fluctuates in liquid markets—an effect often referred to by practitioners as a “price-flip.” A market maker can take a loss if only one side of the order is filled as a result of an adverse price movement.

Figure 1.7 (left) illustrates a typical mechanism resulting in an adverse price movement. A snapshot of the limit order book at time t, before the arrival of a market order, and after, at time t + 1, is shown in the left and right panels, respectively. The resting orders placed by the market maker are denoted with the “+” symbol—red denotes a bid and blue denotes an ask quote. A buy market order subsequently arrives and matches the entire resting quantity of best ask quotes. Then at event time t + 1 the limit order book is updated—the market maker’s ask has been filled (blue minus symbol) and the bid now rests away from the inside market. The market maker may systematically be forced to cancel the bid and buy back at a higher price, thus taking a loss.

Table 1.4 This table shows a snapshot of the limit order book of S&P 500 e-mini futures (ES). The top half (“sell side”) shows the ask volumes and prices and the lower half (“buy side”) shows the bid volumes and prices. The quote levels are ranked by the most competitive at the center (the “inside market”), outward to the least competitive prices at the top and bottom of the limit order book.
Note that only five bid or ask levels are shown in this example, but the actual book is much deeper.

Price       Bid     Ask
2170.25            1284
2170.00            1642
2169.75            1401
2169.50            1266
2169.25             290
2169.00      …
2168.75      …
2168.50      …
2168.25      …
2168.00      …

Fig. 1.7 (Top) A snapshot of the limit order book is taken at time t. Limit orders placed by the market maker are denoted with the “+” symbol—red denotes a bid and blue denotes an ask. A buy market order subsequently arrives and matches the entire resting quantity of best ask quotes. Then at event time t + 1 the limit order book is updated. The market maker’s ask has been filled (blue minus symbol) and the bid rests away from the inside market. (Bottom) A pre-emptive strategy for avoiding adverse price selection is illustrated. The ask is requoted at a higher ask price. In this case, the bid is not replaced and the market maker may capture a tick more than the spread if both orders are filled.

Machine learning can be used to predict these price movements (Kearns and Nevmyvaka 2013; Kercheval and Zhang 2015; Sirignano 2016; Dixon et al. 2018; Dixon 2018b,a) and thus to potentially avoid adverse selection. Following Cont and de Larrard (2013), we can treat the queue sizes at each price level as input variables. We can additionally include properties of market orders, albeit in a form which our machines deem most relevant to predicting the direction of price movements (a.k.a. feature engineering). In contrast to stochastic modeling, we do not impose conditional distributional assumptions on the independent variables (a.k.a. features), nor assume that price movements are Markovian. Chapter 8 presents an RNN for mid-price prediction from the limit order book history, which is the starting point for the more in-depth study of Dixon (2018b), which includes market orders and demonstrates the superiority of RNNs compared to other time series methods such as Kalman filters.
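In the spirit of Cont and de Larrard (2013), a minimal version of such features can be computed directly from a book snapshot: the queue sizes at the top levels of each side, normalized so that the vector is scale-free. The helper name and the example bid sizes below are illustrative assumptions, not data from Table 1.4:

```python
# Minimal sketch: queue sizes at the top book levels as model features,
# normalized by total visible depth so the vector is scale-free.
# book_features and the example bid sizes are illustrative assumptions.
import numpy as np

def book_features(bid_sizes, ask_sizes, depth=5):
    """Stack the top `depth` bid and ask queue sizes into one feature vector."""
    b = np.asarray(bid_sizes[:depth], dtype=float)
    a = np.asarray(ask_sizes[:depth], dtype=float)
    x = np.concatenate([b, a])
    return x / x.sum()

# Ask sizes resemble Table 1.4 (best ask first); bid sizes are made up here
feat = book_features([900, 1100, 1300, 1500, 1400],
                     [290, 1266, 1401, 1642, 1284])
print(feat.shape)   # (10,)
```

A classifier for the direction of the next mid-price move would then be trained on a time series of such vectors, optionally augmented with recent market-order flow.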
We reiterate that the ability to accurately predict does not imply profitability of the strategy. Complex issues concerning queue position, exchange matching rules, latency, position constraints, and price impact are central considerations for practitioners. The design of profitable strategies goes beyond the scope of this book, but the reader is referred to de Prado (2018) for the pitfalls of backtesting and designing algorithms for trading. Dixon (2018a) presents a framework for evaluating the performance of supervised machine learning algorithms which accounts for latency, position constraints, and queue position. However, supervised learning is ultimately not the best machine learning approach as it cannot capture the effect of market impact and is too inflexible to incorporate more complex strategies. Chapter 9 presents examples of reinforcement learning which demonstrate how to capture market impact and also how to flexibly formulate market making strategies.

5.3 Mortgage Modeling

Beyond the data-rich environment of algorithmic trading, does machine learning have a place in finance? One perspective is that there simply is not sufficient data for some “low-frequency” application areas in finance, especially where traditional models have failed catastrophically. The purpose of this section is to serve as a sobering reminder that long-term forecasting goes far beyond merely selecting the best choice of machine learning algorithm, and why there is no substitute for strong domain knowledge and an understanding of the limitations of data. In the USA, a mortgage is a loan collateralized by real estate. Mortgages are securitized into financial instruments such as mortgage-backed securities and collateralized mortgage obligations. The analysis of such securities is complex and has changed significantly over the last decade in response to the 2007–2008 financial crisis (Stein 2012).
Unless otherwise specified, a mortgage will be taken to mean a “residential mortgage,” which is a loan with payments due monthly that is collateralized by a single-family home. Commercial mortgages do exist, covering office towers, rental apartment buildings, and industrial facilities, but they are different enough to be considered separate classes of financial instruments. Borrowing money to buy a house is one of the most common, and largest balance, loans that an individual borrower is ever likely to commit to. Within the USA alone, mortgages comprise a staggering $15 trillion in debt. This is approximately the same balance as the total federal debt outstanding (Fig. 1.8).

Within the USA, mortgages may be repaid (typically without penalty) at will by the borrower. Usually, borrowers use this feature to refinance their loans in favorable interest rate regimes, or to liquidate the loan when selling the underlying house. This has the effect of moving a great deal of financial risk off of individual borrowers, and into the financial system.

Fig. 1.8 Total mortgage debt in the USA compared to total federal debt, millions of dollars, unadjusted for inflation. Source: https://fred.stlouisfed.org/series/MDOAH, https://fred.stlouisfed.org/series/GFDEBTN

Table 1.5 At any time, any US style residential mortgage is in one of several possible states

Symbol   Name                       Definition
P        Paid                       All balances paid, loan is dissolved
C        Current                    All payments due have been paid
3        30-days delinquent         Mortgage is delinquent by one payment
6        60-days delinquent         Delinquent by 2 payments
9        90+ delinquent             Delinquent by 3 or more payments
F        Foreclosure                Foreclosure has been initiated by the lender
R        Real-Estate-Owned (REO)    The lender has possession of the property
D        Default liquidation        Loan is involuntarily liquidated for nonpayment
It also drives a lively and well-developed industry around modeling the behavior of these loans. The mortgage model description here will generally follow the comprehensive work in Sirignano et al. (2016), with only a few minor deviations.

Any US style residential mortgage, in each month, can be in one of the several possible states listed in Table 1.5. Consider this set of $K$ available states to be $\mathcal{K} = \{P, C, 3, 6, 9, F, R, D\}$. Following the problem formulation in Sirignano et al. (2016), we will refer to the status of loan $n$ at time $t$ as $U_t^n \in \mathcal{K}$, and this will be represented as a probability vector using a standard one-hot encoding. If $X = (X_1, \ldots, X_P)$ is the input matrix of $P$ explanatory variables, then we define a probability transition density function $g : \mathbb{R}^P \to [0,1]^{K \times K}$, parameterized by $\theta$, so that

$$
P(U_{t+1}^n = i \mid U_t^n = j, X_t^n) = g_{i,j}(X_t^n \mid \theta), \quad \forall i, j \in \mathcal{K}.
$$

Note that $g(X_t^n \mid \theta)$ is a time-inhomogeneous $K \times K$ Markov transition matrix. Also, not all transitions are even conceptually possible—there are non-commutative states. For instance, a transition from C to 6 is not possible, since a borrower cannot miss two payments in a single month. For ease of notation, here we will write $p_{(i,j)}$ for the probability of a transition from state $i$ to state $j$, and because of the non-commutative state transitions where $p_{(i,j)} = 0$, the Markov matrix takes the form:

$$
g(X_t^n \mid \theta) = \begin{bmatrix}
1 & p_{(c,p)} & p_{(3,p)} & 0 & 0 & 0 & 0 & 0\\
0 & p_{(c,c)} & p_{(3,c)} & p_{(6,c)} & p_{(9,c)} & p_{(f,c)} & 0 & 0\\
0 & p_{(c,3)} & p_{(3,3)} & p_{(6,3)} & p_{(9,3)} & p_{(f,3)} & 0 & 0\\
0 & 0 & p_{(3,6)} & p_{(6,6)} & p_{(9,6)} & p_{(f,6)} & 0 & 0\\
0 & 0 & 0 & p_{(6,9)} & p_{(9,9)} & p_{(f,9)} & 0 & 0\\
0 & 0 & 0 & p_{(6,f)} & p_{(9,f)} & p_{(f,f)} & 0 & 0\\
0 & 0 & 0 & p_{(6,r)} & p_{(9,r)} & p_{(f,r)} & p_{(r,r)} & 0\\
0 & 0 & 0 & p_{(6,d)} & p_{(9,d)} & p_{(f,d)} & p_{(r,d)} & 1
\end{bmatrix}.
$$

Our classifier $g_{i,j}(X_t^n \mid \theta)$ can thus be constructed so that only the probabilities of transitions between the commutative states are outputs, and we can apply softmax functions to a subset of the outputs to ensure that $\sum_{i \in \mathcal{K}} g_{i,j}(X_t^n \mid \theta) = 1$, so that the transition probabilities out of each state sum to one.
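The zero pattern of this matrix can be enforced in code by applying a softmax only over each state's admissible destinations. The sketch below uses hand-picked raw scores (not fitted parameters) purely to show the masking mechanics; in a model along the lines of Sirignano et al. (2016), these scores would be the outputs of a network evaluated at $X_t^n$:

```python
# Sketch of the masked-softmax construction of the mortgage transition matrix.
# Raw scores are illustrative assumptions, not fitted parameters.
import numpy as np

STATES = ["P", "C", "3", "6", "9", "F", "R", "D"]
ALLOWED = {                      # current state -> reachable next states
    "P": ["P"],
    "C": ["P", "C", "3"],
    "3": ["P", "C", "3", "6"],
    "6": ["C", "3", "6", "9", "F", "R", "D"],
    "9": ["C", "3", "6", "9", "F", "R", "D"],
    "F": ["C", "3", "6", "9", "F", "R", "D"],
    "R": ["R", "D"],
    "D": ["D"],
}

def transition_matrix(scores):
    """scores: dict (current, next) -> raw score; returns g with
    g[i, j] = P(next = STATES[i] | current = STATES[j])."""
    g = np.zeros((8, 8))
    for j, cur in enumerate(STATES):
        raw = np.array([scores.get((cur, nxt), 0.0) for nxt in ALLOWED[cur]])
        probs = np.exp(raw) / np.exp(raw).sum()   # softmax over allowed moves
        for nxt, p in zip(ALLOWED[cur], probs):
            g[STATES.index(nxt), j] = p
    return g

g = transition_matrix({("C", "C"): 3.0})   # make staying current most likely
assert np.allclose(g.sum(axis=0), 1.0)     # probabilities out of each state
assert g[STATES.index("6"), STATES.index("C")] == 0.0   # C -> 6 impossible
```

Impossible transitions such as C → 6 receive exactly zero probability by construction rather than having to be learned as near-zero from data.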
For the purposes of financial modeling, it is important to realize that both states P and D are loan liquidation terminal states. However, state P is considered to be voluntary loan liquidation (e.g., prepayment due to refinance), whereas state D is considered to be involuntary liquidation (e.g., liquidation via foreclosure and auction). These states are not distinguishable in the mortgage data itself; rather, the driving force behind liquidation must be inferred from the events leading up to the liquidation.

One contributor to mortgage model misprediction in the run-up to the 2008 financial crisis was that some (but not all) modeling groups considered loans liquidating from deep delinquency (e.g., status 9) to be the transition 9 → P if no losses were incurred. However, behaviorally, these were typically defaults due to financial hardship, and they would have had losses in a more difficult house price regime. They were really 9 → D transitions that just happened to be lossless due to strong house price gains over the life of the mortgage. Considering them to be voluntary prepayments (status P) resulted in systematic over-prediction of prepayments in the aftermath of major house price drops. The matrix above therefore explicitly excludes this possibility and forces delinquent loan liquidations to always be considered involuntary. The reverse of this problem does not typically exist. In most states it is illegal to force a borrower into liquidation until at least 2 payments have been missed. Therefore, liquidation from C or 3 is always voluntary, and hence C → P and 3 → P. Except in cases of fraud or severe malfeasance, it is almost never economically advantageous for a lender to force liquidation from status 6, but it is not illegal. Therefore the transition 3 → D is typically a data error, but 6 → D is merely very rare.
Example 1.6 Parameterized Probability Transitions

If loan $n$ is current in time period $t$, then $P(U_t^n) = (0, 1, 0, 0, 0, 0, 0, 0)^T$. If we have $p_{(c,p)} = 0.05$, $p_{(c,c)} = 0.9$, and $p_{(c,3)} = 0.05$, then

$$
P(U_{t+1}^n \mid X_t^n) = g(X_t^n \mid \theta) \cdot P(U_t^n) = (0.05, 0.9, 0.05, 0, 0, 0, 0, 0)^T. \qquad (1.22)
$$

Common mortgage models sometimes use additional states, often ones that are (without additional information) indistinguishable from the states listed above. Table 1.6 describes a few of these. The reason for including these is the same as the reason for breaking out states like REO, status R. It is known on theoretical grounds that some model regressors from $X_t^n$ should not be relevant for R. For instance, since the property is now owned by the lender, and the loan itself no longer exists, the interest rate (and rate incentive) of the original loan should no longer have a bearing on the outcome. To avoid over-fitting due to highly collinear variables, these known-useless variables are then excluded from transition models starting in status R. This is the same reason status T is sometimes broken out, especially for logistic regressions. Without an extra status listed in this way, strong rate disincentives could drive prepayments in the model to (almost) zero, but we know that people die and divorce in all rate regimes, so at least some minimal level of premature loan liquidations must still occur based on demographic factors, not financial ones.

Model Stability

Unlike many other models, mortgage models are designed to accurately predict events a decade or more in the future. Generally, this requires that they be built on regressors that themselves can be accurately predicted, or at least hedged. Therefore, it is common to see regressors like FICO at origination, loan age in months, rate incentive, and loan-to-value (LTV) ratio. Often LTV would be called MTMLTV if it is marked-to-market against projected or realized housing price moves.
Of these regressors, original FICO is static over the life of the loan, age is deterministic, rates can be hedged, and MTMLTV is rapidly driven down by loan amortization and inflation, thus eliminating the need to predict it accurately far into the future.

Table 1.6 A brief description of mortgage states

Symbol   Name          Overlaps with   Definition
T        Turnover      P               Loan prepaid due to non-financial life event
U        Curtailment   C               Borrower overpaid to reduce loan principal

Consider the Freddie Mac loan-level dataset of 30-year fixed rate mortgages originated through 2014. This includes each monthly observation from each loan present in the dataset. Table 1.7 shows the loan count by year for this dataset.

Table 1.7 Loan originations by year (Freddie Mac, FRM30)

Year   Loans originated
1999   976,159
2000   733,567
2001   1,542,025
2002   1,403,515
2003   2,063,488
2004   1,133,015
2005   1,618,748
2006   1,300,559
2007   1,238,814
2008   1,237,823
2009   1,879,477
2010   1,250,484
2011   1,008,731
2012   1,249,486
2013   1,375,423
2014   942,208

When a model is fit on 1 million observations from loans originated in 2001 and observed until the end of 2006, its C → P probability charted against age is shown in Fig. 1.9. In Fig. 1.9 the observed curve is the actual prepayment probability of the observations with the given age in the test dataset, “Model” is the model prediction, and “Theoretical” is the response to age of a theoretical loan with all other regressors from $X_t^n$ held constant.

Fig. 1.9 Sample mortgage model predicting C → P, fit on loans originated in 2001 and observed until 2006, by loan age (in months). The prepayment probabilities are shown on the y-axis

Fig. 1.10 Sample mortgage model predicting C → P, fit on loans originated in 2006 and observed until 2015, by loan age (in months). The prepayment probabilities are shown on the y-axis

Two observations are worth noting: 1.
The marginal response to age closely matches the model predictions; and 2. The model predictions match actual behavior almost perfectly. This is a regime where prepayment behavior is largely driven by age.

When that same model is run on observations from loans originated in 2006 (the peak of housing prices before the crisis), and observed until 2015, Fig. 1.10 is produced. Three observations are warranted from this figure: 1. The observed distribution is significantly different from Fig. 1.9; 2. The model predicted a decline of 25%, but the actual decline was approximately 56%; and 3. Prepayment probabilities are largely indifferent to age.

The regime shown here is clearly not driven by age. In order to provide even this level of accuracy, the model had to extrapolate far from any of the available data and “imagine” a regime where loan age is almost irrelevant to prepayment. This model meets with mixed success. This particular model was fit on only 8 regressors; a more complicated model might have done better, but the actual driver of this inaccuracy was a general tightening of lending standards. Moreover, there was no good data series available before the crisis to represent lending standards. This model was reasonably accurate even though almost 15 years separated the start of the fitting data from the end of the projection period, and a lot happened in that time.

Mortgage models in particular place a high premium on model stability, and the ability to provide as much accuracy as possible even though the underlying distribution may have changed dramatically from the one that generated the fitting data. Notice also that cross-validation would not help here, as we cannot draw testing data from the distribution we care about, since that distribution comes from the future. Most importantly, this model shows that the low-dimensional projections of this (moderately) high-dimensional problem are extremely deceptive.
No modeler would have chosen a shape like the model prediction from Fig. 1.9 as a function of age. That prediction arises due to the interaction of several variables, interactions that are not interpretable from one-dimensional plots such as this. As we will see in subsequent chapters, such complexities in data are well suited to machine learning, but not without a cost. That cost is understanding the “bias–variance tradeoff” and understanding machine learning with sufficient rigor for its decisions to be defensible.

6 Summary

In this chapter, we have identified some of the key elements of supervised machine learning. Supervised machine learning

1. is an algorithmic approach to statistical inference which, crucially, does not depend on a data generation process;
2. estimates a parameterized map between inputs and outputs, with the functional form defined by the methodology, such as a neural network or a random forest;
3. automates model selection, using regularization and ensemble averaging techniques to iterate through possible models and arrive at the model with the best out-of-sample performance; and
4. is often well suited to large sample sizes of high-dimensional non-linear covariates.

The emphasis on out-of-sample performance, automated model selection, and the absence of a pre-determined parametric data generation process is really the key to machine learning being a more robust approach than many parametric, financial econometrics techniques in use today. The key to the adoption of machine learning in finance is the ability to run machine learners alongside their parametric counterparts, observing over time the differences and limitations of parametric modeling based on in-sample fitting metrics. Statistical tests must be used to characterize the data and guide the choice of algorithm, such as, for example, tests for stationarity.
See Dixon and Halperin (2019) for a checklist and brief but rounded discussion of some of the challenges in adopting machine learning in the finance industry. The capacity to readily exploit a wide variety of data is their other advantage, but only if that data is of sufficiently high quality and adds a new source of information. We close this chapter with a reminder of the failings of forecasting models during the financial crisis of 2008 and emphasize the importance of avoiding siloed data extraction. The application of machine learning requires strong scientific reasoning skills and is not a panacea for commoditized and automated decision-making.

7 Exercises

Exercise 1.1**: Market Game

Suppose that two players enter into a market game. The rules of the game are as follows: Player 1 is the market maker, and Player 2 is the market taker. In each round, Player 1 is provided with information $x$, and must choose and declare a value $\alpha \in (0, 1)$ that determines how much it will pay out if a binary event $G$ occurs in the round. $G \sim \mathrm{Bernoulli}(p)$, where $p = g(x \mid \theta)$ for some unknown parameter $\theta$. Player 2 then enters the game with a $1 payment and chooses one of the following payoffs:

$$
V_1(G, p) = \begin{cases} \frac{1}{\alpha} & \text{with probability } p\\ 0 & \text{with probability } (1-p) \end{cases}
\quad \text{or} \quad
V_2(G, p) = \begin{cases} 0 & \text{with probability } p\\ \frac{1}{1-\alpha} & \text{with probability } (1-p) \end{cases}
$$

1. Given that $\alpha$ is known to Player 2, state the strategy9 that will give Player 2 an expected payoff, over multiple games, of $1 without knowing $p$.
2. Suppose now that $p$ is known to both players. In a given round, what is the optimal choice of $\alpha$ for Player 1?
3. Suppose Player 2 knows with complete certainty that $G$ will be 1 for a particular round; what will be the payoff for Player 2?
4. Suppose Player 2 has complete knowledge in rounds $\{1, \ldots, i\}$ and can reinvest payoffs from earlier rounds into later rounds. Further suppose without loss of generality that $G = 1$ for each of these rounds. What will be the payoff for Player 2 after $i$ rounds?
You may assume that each game can be played with fractional dollar costs, so that, for example, if Player 2 pays Player 1 $1.5 to enter the game, then the payoff will be $1.5 V_1$.

9 The strategy refers to the choice of weight if Player 2 is to choose a payoff $V = wV_1 + (1 - w)V_2$, i.e. a weighted combination of payoffs $V_1$ and $V_2$.

Exercise 1.2**: Model Comparison

Recall Example 1.2. Suppose additional information was added such that it is no longer possible to predict the outcome with 100% probability. Consider Table 1.8 as the results of some experiment.

Table 1.8 Sample model data

G   x
A   (0, 1)
B   (1, 1)
B   (1, 0)
C   (1, 0)
C   (0, 0)

Now if we are presented with x = (1, 0), the result could be B or C. Consider three different models applied to this value of x, each of which encodes the value A, B, or C:

f((1, 0)) = (0, 1, 0), which predicts B with 100% certainty.
g((1, 0)) = (0, 0, 1), which predicts C with 100% certainty.
h((1, 0)) = (0, 0.5, 0.5), which predicts B or C with 50% certainty.

1. Show that each model has the same total absolute error over the samples where x = (1, 0).
2. Show that all three models assign the same average probability to the values from Table 1.8 when x = (1, 0).
3. Suppose that the market game in Exercise 1 is now played with models f or g. B or C each triggers two separate payoffs, $V_1$ and $V_2$, respectively. Show that the losses to Player 1 are unbounded when x = (1, 0) and $\alpha = 1 - p$.
4. Show also that if the market game in Exercise 1 is now played with model h, the losses to Player 1 are bounded.

Exercise 1.3**: Model Comparison

Example 1.1 and the associated discussion alluded to the notion that some types of models are more common than others. This exercise will explore that concept briefly. Recall Table 1.1 from Example 1.1:

G   x
A   (0, 1)
B   (1, 1)
C   (1, 0)
C   (0, 0)

For this exercise, consider two models “similar” if they produce the same projections for G when applied to the values of x from Table 1.1 with probability strictly greater than 0.95.
In the following subsections, the goal will be to produce sets of mutually dissimilar models that all produce Table 1.1 with a given likelihood.

1. How many similar models produce Table 1.1 with likelihood 1.0?
2. Produce at least 4 dissimilar models that produce Table 1.1 with likelihood 0.9.
3. How many dissimilar models can produce Table 1.1 with likelihood exactly 0.95?

Exercise 1.4*: Likelihood Estimation

When the data is i.i.d., the negative of the log-likelihood function (the “error function”) for a binary classifier is the cross-entropy

$$
E(\theta) = -\sum_i \left[ G_i \ln\left(g_1(x_i \mid \theta)\right) + (1 - G_i)\ln\left(g_0(x_i \mid \theta)\right) \right].
$$

Suppose now that there is a probability $\pi_i$ that the class label on a training data point $x_i$ has been correctly set. Write down the error function corresponding to the negative log-likelihood. Verify that the error function in the above equation is obtained when $\pi_i = 1$. Note that this error function renders the model robust to incorrectly labeled data, in contrast to the usual least squares error function.

Exercise 1.5**: Optimal Action

Derive Eq. 1.17 by setting the derivative of Eq. 1.16 with respect to the time-$t$ action $u_t$ to zero. Note that Eq. 1.17 gives a non-parametric expression for the optimal action $u_t$ in terms of a ratio of two conditional expectations. To be useful in practice, the approach might need some further modification, as you will see in the next exercise.

Exercise 1.6***: Basis Functions

Instead of the non-parametric specification of an optimal action in Eq. 1.17, we can develop a parametric model of the optimal action. To this end, assume we have a set of basis functions $\psi_k(S)$ with $k = 1, \ldots, K$. Here $K$ is the total number of basis functions—the same as the dimension of your model space. We now define the optimal action $u_t = u_t(S_t)$ in terms of coefficients $\theta_k(t)$ of an expansion over the basis functions $\psi_k$ (for example, we could use spline basis functions, Fourier bases, etc.):

$$
u_t = u_t(S_t) = \sum_{k=1}^{K} \theta_k(t)\, \psi_k(S_t).
$$
Compute the optimal coefficients θk(t) by substituting the above equation for ut into Eq. 1.16 and maximizing it with respect to the set of weights θk(t) for the t-th time step.

Appendix: Answers to Multiple Choice Questions

Question 1

Answer: 1, 2. Answer 3 is incorrect. While it is true that unsupervised learning does not require a human supervisor to train the model, it is false to presume that the approach is superior. Answer 4 is incorrect. Reinforcement learning cannot be viewed as a generalization of supervised learning to Markov Decision Processes. The reason is that reinforcement learning uses rewards to reinforce decisions, rather than labels to define the correct decision. For this reason, reinforcement learning uses a weaker form of supervision.

Question 2

Answer: 1, 2, 3. Answer 4 is incorrect. Two separate binary models $\{g_i^{(1)}(X\mid\theta_1)\}_{i=0}^{1}$ and $\{g_i^{(2)}(X\mid\theta_2)\}_{i=0}^{1}$ will, in general, not produce the same output as a single, multi-class, model $\{g_i(X\mid\theta)\}_{i=0}^{3}$. Consider, as a counterexample, the logistic models

$$g_0^{(1)}(X\mid\theta_1) = \frac{\exp\{-X^T\theta_1\}}{1+\exp\{-X^T\theta_1\}} \quad\text{and}\quad g_0^{(2)}(X\mid\theta_2) = \frac{\exp\{-X^T\theta_2\}}{1+\exp\{-X^T\theta_2\}},$$

with the multi-class model

$$g_i(X\mid\theta) = \mathrm{softmax}(X^T\theta)_i = \frac{\exp\{(X^T\theta)_i\}}{\sum_{k=0}^{K}\exp\{(X^T\theta)_k\}}.$$

If we set $\theta_1 = \theta^0 - \theta^1$ and $\theta^2 = \theta^3 = 0$, then the multi-class model is equivalent to Model 1. Similarly, if we set $\theta_2 = \theta^2 - \theta^3$ and $\theta^0 = \theta^1 = 0$, then the multi-class model is equivalent to Model 2. However, we cannot simultaneously match the outputs of Model 1 and Model 2 with the multi-class model.

Question 3

Answer: 1, 2, 3. Answer 4 is incorrect. The layers in a deep recurrent network provide more expressibility between each lagged input and the hidden state variable, but are unrelated to the amount of memory in the network. The hidden layers in any multilayered perceptron are not the hidden state variables in our time series model. It is the degree of unfolding, i.e. the number of hidden state vectors, which determines the amount of memory in any recurrent network.

Question 4

Answer: 2.
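The logistic/softmax relationship used in the answer to Question 2 can be illustrated numerically. The following is a sketch of our own (not from the text): a two-class softmax with logits (0, z) reproduces the logistic sigmoid of z, which is the special case underlying the parameter-matching argument above.

```python
import math

# Check that a two-class softmax over logits (0, z) equals sigmoid(z).
def softmax(logits):
    m = max(logits)                          # stabilize the exponentials
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

z = 0.7
p = softmax([0.0, z])
print(p[1], sigmoid(z))  # the two values agree
```

The same cancellation of a common factor in numerator and denominator is what allows two of the four softmax parameter vectors to be set to zero in the answer above.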
Chapter 2 Probabilistic Modeling

This chapter introduces probabilistic modeling and reviews foundational concepts in Bayesian econometrics such as Bayesian inference, model selection, online learning, and Bayesian model averaging. We then develop more versatile representations of complex data with probabilistic graphical models such as mixture models.

1 Introduction

Not only is statistical inference from data intrinsically uncertain, but the type of data and the relationships in the data that we seek to model are growing ever more complex. In this chapter, we turn to probabilistic modeling, a class of statistical models which are broadly designed to characterize uncertainty and allow the expression of causality between variables. Probabilistic modeling is a meta-class of models, including generative modeling, a class of statistical inference models which maximizes the joint distribution p(X, Y), and Bayesian modeling, employing either maximum likelihood estimation or "fully Bayesian" inference. Probabilistic graphical models put the emphasis on causal modeling to simplify statistical inference of parameters from data. This chapter shall focus on the constructs of probabilistic modeling, as they relate to the application of both unsupervised and supervised machine learning in financial modeling. While it seems natural to extend the previous chapters directly to a probabilistic neural network counterpart, it turns out that this does not develop the type of intuitive explanation of complex data that is needed in finance. It also turns out that neural networks are not a natural fit for probabilistic modeling.
In other words, neural networks are well suited to pointwise estimation but lead to many difficulties in a probabilistic setting. In particular, they tend to be very data intensive, offsetting one of the major advantages of Bayesian modeling.

© Springer Nature Switzerland AG 2020
M. F. Dixon et al., Machine Learning in Finance, https://doi.org/10.1007/978-3-030-41068-1_2

We will explore probabilistic modeling through the introduction of probabilistic graphical models, a data structure which is convenient for understanding the relationship between a multitude of different classes of models, both discriminative and generative. This representation will lead us neatly to Bayesian kernel learning, the subject of Chap. 3. We begin by introducing the reader to elementary topics in Bayesian modeling, which is a well-established approach for characterizing uncertainty in, for example, trading and risk modeling. Starting with simple probabilistic models of the data, we review some of the main constructs necessary to apply Bayesian methods in practice. The application of probabilistic models to time series modeling, using filtering and hidden variables to dynamically represent the data, is presented in Chap. 7.

Chapter Objectives

The key learning points of this chapter are:

• Apply Bayesian inference to data using simple probabilistic models;
• Understand how linear regression with probabilistic weights can be viewed as a simple probabilistic graphical model; and
• Develop more versatile representations of complex data with probabilistic graphical models such as mixture models and hidden Markov models.

Note that section headers ending with * are more mathematically advanced, often requiring some background in analysis and probability theory, and can be skipped by the less mathematically inclined.

2 Bayesian vs. Frequentist Estimation

Bayesian data analysis is distinctly different from classical (or "frequentist") analysis in its treatment of probabilities, and in its resulting treatment of model parameters when compared to classical parametric analysis.[1] Bayesian analysts formulate probabilistic statements about uncertain events before collecting any additional evidence (i.e., "data"). These ex-ante probabilities (or, more generally, probability distributions plus underlying parameters) are called priors. This notion of subjective probabilities is absent in classical estimation. In the classical world, all estimation and inference is based solely on observed data.

Footnote 1: Throughout the first part of this chapter we will largely remain within the realm of parametric analysis. However, we shall later see examples of Bayesian methods for non- and semi-parametric modeling.

Both Bayesian and classical econometricians aim to learn more about a set of parameters, say θ. In the classical mindset, θ contains fixed but unknown elements, usually associated with an underlying population of interest (e.g., the mean and variance for credit card debt among US college students). Bayesians share with classicals the interest in θ and the definition of the population of interest. However, they assign ex ante a prior probability to θ, labeled p(θ), which usually takes the form of a probability distribution with "known" moments. For example, Bayesians might state that the aforementioned debt amount has a normal distribution with mean $3000 and standard deviation of $1500. This prior may be based on previous research, related findings in the published literature, or it may be completely arbitrary. In any case, it is an inherently subjective construct. Both schools then develop a theoretical framework that relates θ to observed data, say a "dependent variable" y, and a matrix of explanatory variables X.
This relationship is formalized via a likelihood function, say p(y | θ, X) to stay with Bayesian notation. To stress, this likelihood function takes the exact same analytical form for both schools. The classical analyst then collects a sample of observations from the underlying population of interest and, combining these data with the formulated statistical model, produces an estimate of θ, say θ̂. Any and all uncertainty surrounding the accuracy of this estimate is solely related to the notion that results are based on a sample, not data for the entire population. A different sample (of equal size) may produce slightly different estimates. Classicals express this uncertainty via "standard errors" assigned to each element of θ̂. They also have a strong focus on the behavior of θ̂ as the sample size increases. The behavior of estimators under increasing sample size falls under the heading of "asymptotic theory." The properties of most estimators in the classical world can only be assessed "asymptotically," i.e. are only understood for the hypothetical case of an infinitely large sample. Also, virtually all specification tests used by frequentists hinge on asymptotic theory. This is a major limitation when the data size is finite.

Bayesians, in turn, combine prior and likelihood via Bayes' rule to derive the posterior distribution of θ as

$$p(\theta \mid y, X) = \frac{p(\theta, y \mid X)}{p(y \mid X)} = \frac{p(\theta)\,p(y \mid \theta, X)}{p(y \mid X)} \propto p(\theta)\,p(y \mid \theta, X). \qquad (2.1)$$

> Bayesian Modeling

Bayesian modeling is not about point estimation of a parameter value, θ, but rather updating and sharpening our subjective beliefs (our "prior") about θ from the sample data. Thus, the sample data should contribute to "learning" about θ.

Bayesian Learning

Simply put, the posterior distribution is just an updated version of the prior. More specifically, the posterior is proportional to the prior multiplied by the likelihood.
The likelihood carries all the current information about the parameters and the data. If the data has high informational content (i.e., allows for substantial learning about θ), the posterior will generally look very different from the prior. In most cases, it is much "tighter" (i.e., has a much smaller variance) than the prior. There is no room in Bayesian analysis for the classical notions of "sampling uncertainty," and less a priori focus on the "asymptotic behavior" of estimators.[2]

Taking the Bayesian paradigm to its logical extreme, Duembgen and Rogers (2014) suggest to "estimate nothing." They propose the replacement of the industry-standard estimation-based paradigm of calibration with an approach based on Bayesian techniques, wherein a posterior is iteratively obtained from a prior, namely stochastic filtering and MCMC. Calibration attempts to estimate, and then uses the estimates as if they were known true values, ignoring the estimation error. On the contrary, an approach based on a systematic application of the Bayesian principle is consistent: "There is never any doubt about what we should be doing to hedge or to mark-to-market a portfolio of derivatives, and whatever we do today will be consistent with what we did before, and with what we will do in the future." Moreover, Bayesian model comparison methods enable one to easily compare models of very different types.

Marginal Likelihood

The term in the denominator of Eq. 2.1 is called the "marginal likelihood"; it is not a function of θ, and can usually be ignored for most components of Bayesian analysis. Thus, we usually work only with the numerator (i.e., prior times likelihood) for inference about θ. From Eq. 2.1 we know that this expression is proportional ("∝") to the actual posterior. However, the marginal likelihood is crucial for model comparison, so we will learn a few methods to derive it as a by-product of, or following, the actual posterior analysis.
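The roles of the prior, likelihood, marginal likelihood, and posterior in Eq. 2.1 can be made concrete on a small discrete parameter grid. The following sketch (our own illustration; the three candidate parameter values and the data are hypothetical) shows that the marginal likelihood is simply the normalizer of prior times likelihood:

```python
# Discrete illustration of Eq. 2.1: three candidate Bernoulli parameters
# with a flat prior; the marginal likelihood normalizes prior * likelihood.
thetas = [0.25, 0.5, 0.75]
prior = [1 / 3, 1 / 3, 1 / 3]
# Likelihood of observing y = (1, 1, 0) under a Bernoulli(theta) model:
lik = [t * t * (1 - t) for t in thetas]

joint = [p * l for p, l in zip(prior, lik)]
marginal = sum(joint)                       # p(y): not a function of theta
posterior = [j / marginal for j in joint]
print(posterior, sum(posterior))            # posterior sums to 1
```

Dropping `marginal` and working only with `joint`, as the text suggests, leaves the posterior known up to scale, which is enough for inference about θ but not for model comparison.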
For some choices of prior and likelihood there exist analytical solutions for this term. In summary, frequentists start with a "blank mind" regarding θ. They collect data to produce an estimate θ̂. They formalize the characteristics and uncertainty of θ̂ for a finite sample context (if possible) and a hypothetical large sample (asymptotic) case. Bayesians collect data to update a prior, i.e. a pre-conceived probabilistic notion regarding θ.

Footnote 2: However, at times Bayesian analysis does rest on asymptotic results. Naturally, the general notion that a larger sample, i.e. more empirical information, is better than a small one also holds for Bayesian analysis.

3 Frequentist Inference from Data

Let us begin this section with a simple example which illustrates frequentist inference.

Example 2.1 Bernoulli Trials Example

Consider an experiment consisting of a single coin flip. We set the random variable Y to 0 if tails comes up and 1 if heads comes up. Then the probability density of Y is given by

$$p(y \mid \theta) = \theta^{y}(1-\theta)^{1-y},$$

where θ ∈ [0, 1] is the probability of heads showing up. You will recognize Y as a Bernoulli random variable. We view p as a function of y, but parameterized by the given parameter θ, hence the notation p(y | θ). More generally, suppose that we perform n such independent experiments (tosses) on the same coin. Denote these n realizations of Y as

$$\mathbf{y} = (y_1, y_2, \ldots, y_n)^T \in \{0, 1\}^n,$$

where, for 1 ≤ i ≤ n, yi is the result of the ith toss. What is the probability density of y? Since the coin tosses are independent, the probability density of y, i.e. the joint probability density of y1, y2, . . . , yn, is given by the product rule

$$p(\mathbf{y} \mid \theta) = p(y_1, y_2, \ldots, y_n \mid \theta) = \prod_{i=1}^{n}\theta^{y_i}(1-\theta)^{1-y_i} = \theta^{\sum_i y_i}(1-\theta)^{n-\sum_i y_i}.$$

Suppose that we have tossed the coin n = 50 times (performed n = 50 Bernoulli trials) and recorded the results of the trials as

0 0 1 0 0 1 0 0 0 0
0 1 0 1 1 1 0 0 0 0
1 0 1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 1
1 0 0 0 0 1 0 1 0 0

How can we estimate θ given these data? Both the frequentists and Bayesians regard the density p(y | θ) as a likelihood. Bayesians maintain this notation, whereas frequentists reinterpret p(y | θ), which is a function of y (given the parameters θ: in our case, there is a single parameter, so θ is univariate, but this does not have to be the case), as a function of θ (given the specific sample y), and write

$$L(\theta) := L(\theta \mid \mathbf{y}) := p(\mathbf{y} \mid \theta).$$

Notice that we have merely reinterpreted this probability density, whereas its functional form remains the same, in our case:

$$L(\theta) = \theta^{\sum_i y_i}(1-\theta)^{n-\sum_i y_i}.$$

> Likelihood

Likelihood is one of the seminal ideas of the frequentist school. It was introduced by one of its founding fathers, Sir Ronald Aylmer Fisher: "What has now appeared is that the mathematical concept of probability is . . . inadequate to express our mental confidence or [lack of confidence] in making . . . inferences, and that the mathematical quantity which usually appears to be appropriate for measuring our order of preference among different possible populations does not in fact obey the laws of probability. To distinguish it from probability, I have used the term 'likelihood' to designate this quantity. . . " (R.A. Fisher, Statistical Methods for Research Workers).

It is generally more convenient to work with the log of the likelihood, the log-likelihood. Since ln is a monotonically increasing function of its argument, the same values of θ maximize the log-likelihood as the ones that maximize the likelihood:

$$\ln L(\theta) = \ln\left[\theta^{\sum_i y_i}(1-\theta)^{n-\sum_i y_i}\right] = \sum_i y_i \ln\theta + \left(n - \sum_i y_i\right)\ln(1-\theta).$$

In order to find the value of θ that maximizes this expression, we differentiate with respect to θ and solve for the value of θ that sets the (partial) derivative to zero:

$$\frac{\partial}{\partial\theta}\ln L(\theta) = \frac{\sum_i y_i}{\theta} + \frac{n - \sum_i y_i}{\theta - 1}.$$

Equating this to zero and solving for θ, we obtain the maximum likelihood estimate for θ:

$$\hat\theta_{ML} = \frac{\sum_i y_i}{n}.$$

To confirm that this value does indeed maximize the log-likelihood, we take the second derivative with respect to θ:

$$\frac{\partial^2}{\partial\theta^2}\ln L(\theta) = -\frac{\sum_i y_i}{\theta^2} - \frac{n - \sum_i y_i}{(\theta - 1)^2} < 0.$$

Since this quantity is strictly negative for all 0 < θ < 1, it is negative at θ̂ML, and we do indeed have a maximum.

Example 2.2 Bernoulli Trials Example (continued)

Note that θ̂ML depends only on the sum of the yi's; we can answer our question: if in a sequence of 50 coin tosses exactly twelve heads come up, then

$$\hat\theta_{ML} = \frac{\sum_i y_i}{n} = \frac{12}{50} = 0.24.$$

A frequentist approach arrives at a single value (a single "point") as our estimate, 0.24; in this sense we are performing point estimation. When we apply a Bayesian approach to the same problem, we shall see that the Bayesian estimate is a probability distribution, rather than a single point. Despite some mathematical formalism, the answer is intuitively obvious. If we toss a coin fifty times, and out of those twelve times it lands with heads up, it is natural to estimate the probability of getting heads as 12/50. It is encouraging that the result of our calculation agrees with our intuition and common sense.

4 Assessing the Quality of Our Estimator: Bias and Variance

When we obtained our maximum likelihood estimate, we plugged in a specific number for Σi yi, namely 12. In this sense the estimator is an ordinary function. However, we could also view it as a function of the random sample,

$$\hat\theta_{ML} = \frac{\sum_i Y_i}{n},$$

each Yi being a random variable. A function of a random variable is itself a random variable, so we can compute its expectation and variance.
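This expectation and variance can be checked numerically by summing over the distribution of Σi Yi, which is binomial. The sketch below (our own check; the point estimate 0.24 stands in for the "true" θ purely for illustration) anticipates the results derived next: zero bias and variance θ(1 − θ)/n.

```python
import math

# Treat theta_hat = sum(Y_i)/n as a random variable and compute its mean
# and variance exactly by summing over the binomial distribution of
# sum(Y_i), with theta = 0.24 standing in as the "true" parameter.
theta, n = 0.24, 50

def pmf(k):
    # P(sum Y_i = k) for n i.i.d. Bernoulli(theta) trials
    return math.comb(n, k) * theta**k * (1 - theta)**(n - k)

mean = sum((k / n) * pmf(k) for k in range(n + 1))
var = sum((k / n - mean) ** 2 * pmf(k) for k in range(n + 1))
print(mean, var)  # mean equals theta (unbiased); var equals theta*(1-theta)/n
```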
In particular, the expectation of the error e = θ̂ − θ is known as the bias,

$$\mathrm{bias}(\hat\theta, \theta) = E(e) = E\big[\hat\theta - \theta\big] = E\big[\hat\theta\big] - E[\theta].$$

As frequentists, we view the true value of θ as a single, deterministic, fixed point, so we take it outside of the expectation:

$$\mathrm{bias}(\hat\theta, \theta) = E\big[\hat\theta\big] - \theta.$$

In our case it is

$$E[\hat\theta_{ML} - \theta] = E[\hat\theta_{ML}] - \theta = E\left[\frac{\sum_i Y_i}{n}\right] - \theta = \frac{\sum_i E[Y_i]}{n} - \theta = \frac{1}{n}\cdot n\big(\theta\cdot 1 + (1-\theta)\cdot 0\big) - \theta = 0;$$

we see that the bias is zero, so this particular maximum likelihood estimator is unbiased (otherwise it would be biased). What about the variance of this estimator?

$$\mathrm{Var}[\hat\theta_{ML}] = \mathrm{Var}\left[\frac{\sum_i Y_i}{n}\right] \stackrel{\text{independence}}{=} \frac{1}{n^2}\sum_i \mathrm{Var}[Y_i] = \frac{1}{n^2}\cdot n\cdot\theta(1-\theta) = \frac{1}{n}\theta(1-\theta),$$

and we see that the variance of the estimator depends on the true value of θ. For multivariate θ, it is useful to examine the error covariance matrix given by

$$P = E[ee^T] = E\big[(\hat\theta - \theta)(\hat\theta - \theta)^T\big].$$

When estimating θ, our goal is to minimize the estimation error. This error can be expressed using loss functions. Supposing our parameter vector θ takes values on some space Θ, a loss function L(θ̂, θ) is a mapping from Θ × Θ into R which quantifies the "loss" incurred by estimating θ with θ̂. We have already seen loss functions in earlier chapters, but we shall restate the definitions here for completeness. One frequently used loss function is the absolute error,

$$L_1(\hat\theta, \theta) := \|\hat\theta - \theta\|_2 = \sqrt{(\hat\theta - \theta)^T(\hat\theta - \theta)},$$

where ‖·‖2 is the Euclidean norm (it coincides with the absolute value when Θ ⊆ R). One advantage of the absolute error is that it has the same units as θ. We use the squared error perhaps even more frequently than the absolute error:

$$L_2(\hat\theta, \theta) := \|\hat\theta - \theta\|_2^2 = (\hat\theta - \theta)^T(\hat\theta - \theta).$$

While the squared error has the disadvantage, compared to the absolute error, of being expressed in quadratic units of θ rather than the units of θ, it does not contain the cumbersome √· and is therefore easier to deal with mathematically.
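Anticipating the expected squared error (the MSE, defined in the next section) and its decomposition into variance plus squared bias, here is a toy numeric check of our own, using a hypothetical discrete sampling distribution for a scalar estimator:

```python
# Toy check of MSE = Var + bias^2 for a scalar estimator whose sampling
# distribution takes four hypothetical values with equal probability.
theta = 0.5                         # true parameter value
est_values = [0.2, 0.4, 0.6, 0.7]   # hypothetical estimator outcomes

mean = sum(est_values) / len(est_values)
var = sum((v - mean) ** 2 for v in est_values) / len(est_values)
bias = mean - theta
mse = sum((v - theta) ** 2 for v in est_values) / len(est_values)
print(mse, var + bias**2)  # the two quantities coincide
```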
The expected value of a loss function is known as the statistical risk of the estimator. The statistical risks corresponding to the above loss functions are, respectively, the mean absolute error,

$$\mathrm{MAE}(\hat\theta, \theta) := R_1(\hat\theta, \theta) := E\big[L_1(\hat\theta, \theta)\big] = E\left[\|\hat\theta - \theta\|_2\right] = E\left[\sqrt{(\hat\theta - \theta)^T(\hat\theta - \theta)}\right],$$

and, by far the most commonly used, the mean squared error (MSE),

$$\mathrm{MSE}(\hat\theta, \theta) := R_2(\hat\theta, \theta) := E\big[L_2(\hat\theta, \theta)\big] = E\left[\|\hat\theta - \theta\|_2^2\right] = E\left[(\hat\theta - \theta)^T(\hat\theta - \theta)\right].$$

The square root of the mean squared error is called the root mean squared error (RMSE). The minimum mean squared error (MMSE) estimator is the estimator that minimizes the mean squared error.

5 The Bias–Variance Tradeoff (Dilemma) for Estimators

It can easily be shown that the mean squared error separates into a variance and a bias term:

$$\mathrm{MSE}(\hat\theta, \theta) = \mathrm{tr}\,\mathrm{Var}\big[\hat\theta\big] + \|\mathrm{bias}(\hat\theta, \theta)\|_2^2,$$

where tr(·) is the trace operator. In the case of a scalar θ, this expression simplifies to

$$\mathrm{MSE}(\hat\theta, \theta) = \mathrm{Var}\big[\hat\theta\big] + \mathrm{bias}(\hat\theta, \theta)^2.$$

In other words, the MSE is equal to the sum of the variance of the estimator and the squared bias. The bias–variance tradeoff or bias–variance dilemma consists in the need to minimize these two sources of error, the variance and the bias of an estimator, in order to minimize the mean squared error. Sometimes there is a tradeoff between minimizing bias and minimizing variance to achieve the least possible MSE. The concept of a bias–variance tradeoff in machine learning will be revisited in Chap. 4, within the context of statistical learning theory.

6 Bayesian Inference from Data

As before, let θ be the parameter of some statistical model and let y = (y1, . . . , yn) be n i.i.d. observations of some random variable Y. We capture our subjective assumptions about the model parameter θ, before observing the data, in the form of a prior probability distribution p(θ).
Bayes' rule converts a prior probability into a posterior probability by incorporating the evidence provided by the observed data:

$$p(\theta \mid \mathbf{y}) = \frac{p(\mathbf{y} \mid \theta)\,p(\theta)}{p(\mathbf{y})}$$

allows us to evaluate the uncertainty in θ after we have observed y. This uncertainty is characterized by the posterior probability p(θ | y). The effect of the observed data is expressed through p(y | θ), a function of θ referred to as the likelihood function. It expresses how likely the observed dataset was generated by a model with parameter θ. Let us summarize some of the notation that will be important:

• The prior is p(θ);
• The likelihood is $p(\mathbf{y} \mid \theta) = \prod_{i=1}^{n} p(y_i \mid \theta)$, since the data is i.i.d.;
• The marginal likelihood $p(\mathbf{y}) = \int p(\mathbf{y} \mid \theta)\,p(\theta)\,d\theta$ is the likelihood with the dependency on θ marginalized out; and
• The posterior is p(θ | y).

> Bayesian Inference

Informally, Bayesian inference involves the following steps:

1. Formulate your statistical model as a collection of probability distributions conditional on different values for a parameter θ, about which you wish to learn;
2. Organize your beliefs about θ into a (prior) probability distribution;
3. Collect the data and insert them into the family of distributions given in Step 1;
4. Use Bayes' rule to calculate your new beliefs about θ; and
5. Criticize your model and revise your modeling assumptions.

The following example shall illustrate the application of Bayesian inference for the Bernoulli parameter θ.

Example 2.3 Bernoulli Trials Example (continued)

θ is a probability, so it is bounded and must belong to the interval [0, 1]. We could assume that all values of θ in [0, 1] are equally likely. Thus our prior could be that θ is uniformly distributed on [0, 1], i.e. θ ∼ U(a = 0, b = 1).
This assumption would constitute an application of Laplace's principle of indifference, also known as the principle of insufficient reason: when faced with multiple possibilities, whose probabilities are unknown, assume that the probabilities of all possibilities are equal. In the context of Bayesian estimation, applying Laplace's principle of indifference constitutes what is known as an uninformative prior. Our goal is, however, not to rely too much on the prior, but to use the likelihood to proceed to a posterior based on new information. The pdf of the uniform distribution, U(a, b), is given by

$$p(\theta) = \frac{1}{b-a} \quad \text{if } \theta \in [a, b],$$

and zero elsewhere. In our case, a = 0, b = 1, and so our uninformative uniform prior is given by p(θ) = 1, ∀θ ∈ [0, 1]. Let us derive the posterior based on this prior assumption. Bayes' theorem tells us that

posterior ∝ likelihood · prior,

where ∝ stands for "proportional to," so the left- and right-hand sides are equal up to a normalizing constant which depends on the data but not on θ. The posterior is

$$p(\theta \mid x_{1:n}) \propto p(x_{1:n} \mid \theta)\,p(\theta) = \theta^{\sum_i x_i}(1-\theta)^{n-\sum_i x_i} \cdot 1.$$

If the prior is uniform, i.e. p(θ) = 1, then after n = 5 trials we see from the data that

$$p(\theta \mid x_{1:n}) \propto \theta(1-\theta)^4.$$

After 10 trials we have

$$p(\theta \mid x_{1:n}) \propto \theta(1-\theta)^4 \times \theta(1-\theta)^4 = \theta^2(1-\theta)^8.$$

From the shape of the resulting pdf, we recognize it as the pdf of the Beta distribution[a]

$$\mathrm{Beta}\Big(\theta \,\Big|\, \sum_i x_i,\; n - \sum_i x_i\Big),$$

and we immediately know that the missing normalizing constant factor is

$$\frac{1}{B\big(\sum_i x_i,\, n - \sum_i x_i\big)} = \frac{\Gamma(n)}{\Gamma\big(\sum_i x_i\big)\,\Gamma\big(n - \sum_i x_i\big)}.$$

Let us now assume that we have tossed the coin fifty times and, out of those fifty coin tosses, we get heads on twelve. Then our posterior distribution becomes

$$\theta \mid x_{1:n} \sim \mathrm{Beta}(\theta \mid 12, 38).$$
Then, from the properties of this distribution,

$$E[\theta \mid x_{1:n}] = \frac{\sum_i x_i}{\sum_i x_i + \big(n - \sum_i x_i\big)} = \frac{\sum_i x_i}{n} = \frac{12}{12 + 38} = 0.24, \qquad (2.4)$$

$$\mathrm{Var}[\theta \mid x_{1:n}] = \frac{\big(\sum_i x_i\big)\big(n - \sum_i x_i\big)}{\big(\sum_i x_i + n - \sum_i x_i\big)^2\big(\sum_i x_i + n - \sum_i x_i + 1\big)} = \frac{12 \cdot 38}{(12 + 38)^2 (12 + 38 + 1)} = \frac{456}{127500} \approx 0.00357647. \qquad (2.5)$$

The standard deviation is, in units of probability, $\sqrt{456/127500} \approx 0.0598$. Notice that the mean of the posterior, 0.24, matches the frequentist maximum likelihood estimate of θ, θ̂ML, and our intuition. Again, it is not unreasonable to assume that the probability of getting heads is 0.24 if we observe heads on twelve out of fifty coin tosses.

Footnote a: The function's argument is now θ, not xi, so it is not the pdf of a Bernoulli distribution.

[Fig. 2.1 The posterior distribution of θ against successive numbers of trials (panels: first 5, 10, 40, and 50 trials). The x-axis shows the values of θ. The shape of the distribution tightens as more data are observed.]

Note that we did not need to evaluate the marginal likelihood in the example above; only the θ-dependent terms were evaluated for the purpose of the plot. Thus each plot in Fig. 2.1 is only representative of the posterior up to a scaling.

! The Principle of Indifference

In practice, the principle of indifference should be used with great care, as we are assuming a property of the data strictly greater than we know. Saying "the probabilities of the outcomes are equally likely" contains strictly more information than "I don't know what the probabilities of the outcomes are." If someone tosses a coin and then covers it with her hand, asking you, "heads or tails?", it is probably relatively sensible to assume that the two possibilities are equally likely, effectively assuming that the coin is unbiased.
If an investor asks you, "Will the stock price of XYZ increase?", you should think twice before applying Laplace's principle of indifference and replying, "Well, there is a 50% chance that XYZ will grow; you can either long or short XYZ." Clearly there are other important considerations, such as the amount by which the stock could increase versus decrease, limits on portfolio exposure to market risk factors, and anticipation of other market events such as earnings announcements. In other words, the implications of going long or short will not necessarily be equal.

6.1 A More Informative Prior: The Beta Distribution

Continuing with the above example, let us question our prior. Is it somewhat too uninformative? After all, most coins in the world are probably close to being unbiased. We could use a $\text{Beta}(\alpha, \beta)$ prior instead of the uniform prior. Picking $\alpha = \beta = 2$, for example, will give a distribution on $[0, 1]$ centered on $\frac{1}{2}$, incorporating a prior assumption that the coin is unbiased. The pdf of this prior is given by

$$p(\theta) = \frac{1}{B(\alpha, \beta)}\theta^{\alpha-1}(1-\theta)^{\beta-1}, \quad \forall \theta \in [0, 1],$$

and so the posterior becomes

$$p(\theta \mid x_{1:n}) \propto p(x_{1:n} \mid \theta)\,p(\theta) = \theta^{\sum x_i}(1-\theta)^{n-\sum x_i}\frac{1}{B(\alpha, \beta)}\theta^{\alpha-1}(1-\theta)^{\beta-1} \propto \theta^{(\alpha+\sum x_i)-1}(1-\theta)^{(\beta+n-\sum x_i)-1},$$

which we recognize as the pdf of the distribution $\text{Beta}\left(\theta \mid \alpha + \sum x_i,\, \beta + n - \sum x_i\right)$.

Why did we pick this prior distribution? One reason is that its pdf is defined over the compact interval $[0, 1]$, unlike, for example, the normal distribution, which has tails extending to $-\infty$ and $+\infty$. Another reason is that we are able to choose parameters which center the pdf at $\theta = \frac{1}{2}$, incorporating the prior assumption that the coin is unbiased. If we initially assume a $\text{Beta}(\theta \mid \alpha = 2, \beta = 2)$ prior, then the posterior expectation is

$$E[\theta \mid x_{1:n}] = \frac{\alpha + \sum x_i}{\alpha + \sum x_i + \beta + n - \sum x_i} = \frac{\alpha + \sum x_i}{\alpha + \beta + n} = \frac{2 + 12}{2 + 2 + 50} = \frac{7}{27} \approx 0.259.$$

Notice that both the prior and posterior belong to the same probability distribution family.
In Bayesian estimation theory we refer to such a prior and posterior as conjugate distributions (with respect to this particular likelihood function). Unsurprisingly, since our prior assumption is now that the coin is unbiased, $\frac{12}{50} < E[\theta \mid x_{1:n}] < \frac{1}{2}$. Perhaps surprisingly, we are also somewhat more certain about the posterior (its variance is smaller) than when we assumed the uniform prior. Notice that the results of Bayesian estimation are sensitive, to a varying degree in each specific case, to the choice of prior distribution:

$$p(\theta \mid \alpha, \beta) = \frac{(\alpha+\beta-1)!}{(\alpha-1)!(\beta-1)!}\theta^{\alpha-1}(1-\theta)^{\beta-1} = \frac{1}{B(\alpha, \beta)}\theta^{\alpha-1}(1-\theta)^{\beta-1}. \quad (2.6)$$

So, for the above example, this density would be evaluated with $\alpha = 13$ and $\beta = 39$, since there are 12 observed 1s and 38 observed 0s.

6.2 Sequential Bayesian Updates

In the previous section we saw that, starting with the prior $\text{Beta}(\theta \mid \alpha, \beta)$, we arrived at the Beta-distributed posterior $\text{Beta}\left(\theta \mid \alpha + \sum x_i,\, \beta + n - \sum x_i\right)$. What would happen if, instead of observing all the coin tosses at once, we (i) considered each coin toss in turn; (ii) obtained our posterior; and (iii) used that posterior as a prior for an update based on the information from the next coin toss? The above two formulae give the answer to this question. We start with our initial prior, $\text{Beta}(\theta \mid \alpha, \beta)$; then, substituting $n = 1$ into the second formula, we get $\text{Beta}(\theta \mid \alpha + x_1, \beta + 1 - x_1)$. Using this posterior as a prior before the second coin toss, we obtain the next posterior as $\text{Beta}(\theta \mid \alpha + x_1 + x_2, \beta + 2 - x_1 - x_2)$. Proceeding along these lines, after all $n$ coin tosses we end up with $\text{Beta}\left(\theta \mid \alpha + \sum x_i,\, \beta + n - \sum x_i\right)$, the same result that we would have attained if we had processed all the coin tosses as a single "batch," as we did in the previous section.
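The equivalence of sequential and batch updating is easy to verify numerically. A minimal sketch in plain Python, with a hypothetical sequence of tosses:

```python
# Beta-Bernoulli conjugate updating: batch vs. one-toss-at-a-time.
# `tosses` is a hypothetical observed sequence (1 = heads, 0 = tails).
tosses = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
alpha, beta = 2, 2  # Beta(2, 2) prior: coin assumed close to unbiased

# Sequential updates: yesterday's posterior is today's prior.
a, b = alpha, beta
for x in tosses:
    a, b = a + x, b + (1 - x)

# Batch update: process all tosses at once.
a_batch = alpha + sum(tosses)
b_batch = beta + len(tosses) - sum(tosses)

assert (a, b) == (a_batch, b_batch)  # identical posteriors: Beta(5, 9) here

# The posterior mean doubles as the predictive probability of heads on the next toss
print(a / (a + b))
```

The final line illustrates the conjugate structure: prediction under the updated posterior requires no integration, only the current Beta parameters.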
This insight forms the basis for a sequential, or iterative, application of Bayes' theorem: sequential Bayesian updates, the foundation of real-time Bayesian filtering. In machine learning, this mechanism for updating our beliefs in response to new data is referred to as "online learning."

! Online Learning
An important aspect of Bayesian learning is the capacity to update the posterior in response to the arrival of new data, $y'$. The posterior over $y$ now becomes the prior, and the new posterior is updated to

$$p(\theta \mid y', y) = \frac{p(y' \mid \theta)\,p(\theta \mid y)}{\int_{\theta \in \Theta} p(y' \mid \theta)\,p(\theta \mid y)\,d\theta}.$$

For auto-correlated data, often encountered in financial econometrics, it is common to use Bayesian models for prediction. We can write the density of a new predicted value $y'$ given the previous data $y$ as the expected value of the likelihood of the new data under the posterior density $p(\theta \mid y)$:

$$p(y' \mid y) = E_{\theta \mid y}[p(y' \mid y, \theta)] = \int p(y' \mid y, \theta)\,p(\theta \mid y)\,d\theta.$$

? Multiple Choice Question 1
Which of the following statements are true:
1. A frequentist performs statistical inference by finding the best-fit parameters. The Bayesian finds the distribution of the parameters assuming a prior.
2. Frequentist inference can be regarded as a special case of Bayesian inference in which the prior is a Dirac delta-function.
3. Bayesian inference is well suited to online learning, an experimental design under which the model is continuously updated as new data arrives.
4. Prediction, under Bayesian inference, is the conditional expectation of the predicted variable under the posterior distribution of the parameter.

6.3 Practical Implications of Choosing a Classical or Bayesian Estimation Framework

If the sample size is large and the likelihood function "well-behaved" (which usually means a simple function with a clear maximum, plus a small dimension for $\theta$), classical and Bayesian analysis are essentially on the same footing and will produce virtually identical results.
This is because the likelihood function and the empirical data will dominate any prior assumptions in the Bayesian approach.

If the sample size is large but the dimensionality of $\theta$ is high and the likelihood function is less tractable (which usually means highly non-linear, with local maxima, flat spots, etc.), a Bayesian approach may be preferable purely from a computational standpoint. It can be very difficult to attain reliable estimates via maximum likelihood estimation (MLE) techniques, but it is usually straightforward to derive a posterior distribution for the parameters of interest using Bayesian estimation approaches, which often operate via sequential draws from known distributions.

If the sample size is small, Bayesian analysis can have substantial advantages over a classical approach. First, Bayesian results do not depend on asymptotic theory holding for their interpretability. Second, the Bayesian approach combines the sparse data with subjective priors. Well-informed priors can increase the accuracy and efficiency of the model. Conversely, of course, poorly chosen priors³ can produce misleading posterior inference in this case. Thus, under small-sample conditions, the choice between Bayesian and classical estimation often distills to a choice between trusting the asymptotic properties of estimators and trusting one's priors.

7 Model Selection

Beyond the inference challenges described above, there are a number of problems with the classical approach to model selection which Bayesian statistics solves. For example, it has been shown by Breiman (2001) that the following three linear regression models have residual sums of squares (RSS) which are all within 1% of one another:

$$\text{Model 1:}\quad \hat{Y} = 2.1 + 3.8X_3 - 0.6X_8 + 83.2X_{13} - 2.1X_{17} + 3.2X_{27},$$
$$\text{Model 2:}\quad \hat{Y} = -8.9 + 4.6X_5 + 0.01X_6 + 12.0X_{15} + 17.5X_{21} + 0.2X_{22}, \quad (2.10)$$
$$\text{Model 3:}\quad \hat{Y} = -76.7 + 9.3X_2 + 22.0X_7 - 13.2X_8 + 3.4X_{11} + 7.2X_{28}. \quad (2.11)$$
³ For example, priors that place substantial probability mass on practically infeasible ranges of $\theta$; this often happens inadvertently when parameter transformations are involved in the analysis.

You could, for example, think of each model being used to find the fair price of an asset $Y$, where each $X_i$ is a contemporaneous (i.e., measured at the same time) firm characteristic.

• Which model is better?
• How would your interpretation of which variables are the most important change between models?
• Would you arrive at different conclusions about the market signals if you picked, say, Model 1 versus Model 2?
• How would you eliminate some of the ambiguity resulting from this outcome of statistical inference?

Of course, one direction is simply to analyze the F-scores of each independent variable and select the model which has the most statistically significant fitted coefficients. But this is unlikely to reliably discern between the models when the fitted coefficients are comparable in statistical significance. It is well known that goodness-of-fit measures, such as RSSs and F-scores, do not scale well to more complex datasets where there are several independent variables. This leads to modelers drawing different conclusions about the same data, and is famously known as the "Rashomon effect." Yet many studies and models in finance are still built this way and make use of information criteria and regularization techniques such as Akaike's information criterion (AIC).

A limitation for more robust frequentist model comparison is the requirement that the models being compared are "nested." That is, one model should be a subset of the other model being compared, e.g.

$$\text{Model 1:}\quad \hat{Y} = \beta_0 + \beta_1 X_1 + \beta_2 X_2,$$
$$\text{Model 2:}\quad \hat{Y} = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_{11} X_1^2.$$
Model 1 is nested in Model 2, and we refer to the model selection as a "nested model selection." In contrast to classical model selection, Bayesian model selection need not be restricted to nested models.

7.1 Bayesian Inference

We now consider the more general setting: selection and updating of several candidate models in response to a dataset $y$. The "model" can be any data model, not just a regression, and the notation used here reflects that. In Bayesian inference, a model is a family of probability distributions, each of which can explain the observed data. More precisely, a model $M$ is the set of likelihoods $p(x^n \mid \theta)$ over all possible parameter values $\theta \in \Theta$.

For example, consider the case of flipping a coin $n$ times with an unknown bias $\theta \in \Theta \equiv [0, 1]$. The data $x^n = \{x_i\}_{i=1}^n$ is now i.i.d. Bernoulli, and if we observe the number of heads $X = x$, the model is the family of binomial distributions

$$M := \left\{P[X = x \mid n, \theta] = \binom{n}{x}\theta^x(1-\theta)^{n-x}\right\}_{\theta \in \Theta}.$$

Each one of these distributions is a potential explanation of the observed head count $x$. In the Bayesian method, we maintain a belief over which elements in the model are considered plausible by reasoning about $p(\theta \mid x^n)$. See Example 1.1 for further details of this experiment.

We start by rewriting the Bayesian inference formula with explicit inclusion of model indices. You will see that we have dropped $X$, since the exact composition of explanatory data is implicitly covered by the model index $M_i$:

$$p(\theta_i \mid x^n, M_i) = \frac{p(\theta_i \mid M_i)\,p(x^n \mid \theta_i, M_i)}{p(x^n \mid M_i)}, \quad i = 1, 2.$$

This expression shows that differences across models can occur due to differing priors for $\theta$ and/or differences in the likelihood function. The marginal likelihood in the denominator will usually also differ across models.

7.2 Model Selection

So far, we have just considered parameter inference when the model has already been selected.
The Bayesian setting offers a very flexible framework for the comparison of competing models; this is formally referred to as "model selection." The models do not have to be nested; all that is required is that the competing specifications share the same $x^n$. Suppose there are two models, denoted $M_1$ and $M_2$, each associated with a respective set of parameters $\theta_1$ and $\theta_2$. We seek the most "probable" model given the observed data $x^n$. We first apply Bayes' rule to derive an expression for the posterior model probability

$$p(M_i \mid x^n) = \frac{p(M_i)\,p(x^n \mid M_i)}{\sum_j p(x^n \mid M_j)\,p(M_j)}, \quad i = 1, 2.$$

Here $p(M_i)$ is a prior distribution over models that we have selected; a common practice is to set this to a uniform distribution over the models. The value $p(x^n \mid M_i)$ is a marginal likelihood function, a likelihood function over the space of models in which the parameters have been marginalized out:

$$p(x^n \mid M_i) = \int_{\theta_i \in \Theta_i} p(x^n \mid \theta_i, M_i)\,p(\theta_i \mid M_i)\,d\theta_i.$$

From a sampling perspective, this marginal likelihood can be interpreted as the probability that the model could have generated the observed data, under the chosen prior belief over its parameters. More precisely, the marginal likelihood can be viewed as the probability of generating $x^n$ from a model $M_i$ whose parameters $\theta_i \in \Theta_i$ are sampled at random from the prior $p(\theta_i \mid M_i)$. For this reason, it is often referred to as the model evidence, and it plays an important role in model selection, as we will see later.

We can now construct the posterior odds ratio for the two models as

$$\frac{p(M_1 \mid x^n)}{p(M_2 \mid x^n)} = \frac{p(M_1)}{p(M_2)}\,\frac{p(x^n \mid M_1)}{p(x^n \mid M_2)},$$

which is simply the prior odds multiplied by the ratio of the evidence for each model. Under equal model priors (i.e., $p(M_1) = p(M_2)$), this reduces to the Bayes' factor for Model 1 vs. Model 2, i.e.

$$B_{1,2} = \frac{p(x^n \mid M_1)}{p(x^n \mid M_2)},$$

which is simply the ratio of marginal likelihoods for the two models.
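The sampling interpretation above suggests a simple (if inefficient) Monte Carlo estimate of the evidence: draw parameters from the prior and average the likelihood. A sketch for the coin model with a uniform prior, using the 12-heads-in-50-tosses data from the earlier example:

```python
import random
from math import comb

# Monte Carlo estimate of the model evidence p(x^n | M): draw theta from the
# prior, average the likelihood of the observed data under each draw.
n, x = 50, 12            # data from the running coin example
rng = random.Random(1)
draws = 100_000

def likelihood(theta):
    return comb(n, x) * theta ** x * (1 - theta) ** (n - x)

evidence_mc = sum(likelihood(rng.random()) for _ in range(draws)) / draws

# Exact value for comparison: the binomial likelihood integrates to 1 / (n + 1)
print(evidence_mc, 1 / (n + 1))
```

The estimate converges to the exact evidence $1/51 \approx 0.0196$; in higher dimensions this naive prior-sampling estimator becomes very inefficient, which is one reason the book later develops better techniques for computing marginal likelihoods.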
Since Bayes' factors can become quite large, we usually prefer to work with the logged version:

$$\log B_{1,2} = \log p(x^n \mid M_1) - \log p(x^n \mid M_2).$$

The derivation of BFs, and thus model comparison, is straightforward if expressions for the marginal likelihoods are analytically known or can easily be derived. However, often this can be quite tricky, and we will learn a few techniques to compute marginal likelihoods in this book.

7.3 Model Selection When There Are Many Models

Suppose now that a set of models $\{M_i\}$ may be used to explain the data $x^n$, where $\theta_i$ represents the parameters of model $M_i$. Which model is "best"? We answer this question by estimating the posterior distribution over models:

$$p(M_i \mid x^n) = \frac{\left[\int_{\theta_i \in \Theta_i} p(x^n \mid \theta_i, M_i)\,p(\theta_i \mid M_i)\,d\theta_i\right]p(M_i)}{\sum_j p(x^n \mid M_j)\,p(M_j)}.$$

Table 2.1 Jeffreys' scale is used to assess the comparative strength of evidence in favor of one model over another.

As before, we can compare any two models via the posterior odds or, if we assume equal priors, by the BFs. Model selection is always relative rather than absolute. We must always pick a reference model $M_2$ and decide whether model $M_1$ has more strength. We use Jeffreys' scale to assess the strength of evidence, as shown in Table 2.1.

Example 2.4 Model Selection
You compare two models for explaining the behavior of a coin. The first model, $M_1$, assumes that the probability of a head is fixed to 0.5. Notice that this model does not have any parameters. The second model, $M_2$, assumes the probability of a head is set to an unknown $\theta \in \Theta = (0, 1)$ with a uniform prior on $\theta$: $p(\theta \mid M_2) = 1$. For simplicity, we additionally choose a uniform model prior, $p(M_1) = p(M_2)$. Suppose we flip the coin $n = 200$ times and observe $X = 115$ heads. Which model should we prefer in light of this data? We compute the model evidence for each model. The model evidence for $M_1$ is

$$p(x \mid M_1) = \binom{n}{x}\frac{1}{2^{200}} \approx 0.005956. \quad (2.22)$$

The model evidence of $M_2$ requires integrating over $\theta$:

$$p(x \mid M_2) = \int_0^1 p(x \mid \theta, M_2)\,p(\theta \mid M_2)\,d\theta \quad (2.23)$$
$$= \int_0^1 \binom{n}{x}\theta^{115}(1-\theta)^{200-115}\,d\theta \quad (2.24)$$
$$= \frac{1}{201} \approx 0.004975. \quad (2.25)$$

Example 2.4 (continued)
Note that we have used the definition of the Beta density function

$$p(\theta \mid \alpha, \beta) = \frac{(\alpha+\beta-1)!}{(\alpha-1)!(\beta-1)!}\theta^{\alpha-1}(1-\theta)^{\beta-1}$$

to evaluate the integral in the marginal density function above. The Bayes' factor in favor of $M_1$ is approximately 1.2, and thus $|\ln B| = 0.182$; there is no evidence in favor of $M_1$.

! Frequentist Approach
An interesting aside here is that a frequentist hypothesis test would reject the null hypothesis $\theta = 0.5$ at the $\alpha = 0.05$ level. The probability of generating at least 115 heads under model $M_1$ is approximately 0.02. The probability of generating at least 115 tails is also 0.02. So a two-sided test would give a p-value of approximately 4%.

! Hyperparameters
We note in passing that the prior distribution in the example above does not involve any parameterization. If the prior is a parameterized distribution, then the parameters of the prior are referred to as hyperparameters. The distributions of the hyperparameters are known as hyperpriors. "Bayesian hierarchical modeling" is a statistical model written in multiple levels (hierarchical form) that estimates the parameters of the posterior distribution using the Bayesian method.

7.4 Occam's Razor

The model evidence performs a vital role in the prevention of model over-fitting. Models that are too simple are unlikely to generate the dataset. On the other hand, models that are too complex can generate many possible datasets, but they are unlikely to generate any particular dataset at random. Bayesian inference therefore automates the determination of model complexity using the training data $x^n$ alone and does not need special "fixes" (a.k.a. regularization and information criteria) to prevent over-fitting.
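The numbers in Example 2.4 can be reproduced exactly with integer arithmetic; a sketch in plain Python, where `math.comb` supplies the binomial coefficients:

```python
from math import comb, log

n, x = 200, 115

# Evidence for M1 (theta fixed at 0.5): the binomial pmf at x
ev_m1 = comb(n, x) / 2 ** n

# Evidence for M2 (uniform prior on theta): the integral collapses to 1 / (n + 1)
ev_m2 = 1 / (n + 1)

bayes_factor = ev_m1 / ev_m2     # ~1.2: "no evidence" on Jeffreys' scale
log_bf = log(bayes_factor)       # ~0.18

# Frequentist aside: exact two-sided binomial test of theta = 0.5
p_one_sided = sum(comb(n, k) for k in range(x, n + 1)) / 2 ** n
p_two_sided = 2 * p_one_sided    # ~0.04: rejects at the 5% level

print(ev_m1, ev_m2, bayes_factor, p_two_sided)
```

The contrast in the example's aside is visible directly: the Bayes' factor is essentially uninformative while the exact two-sided p-value falls below 0.05.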
The underlying philosophical principle of selecting the simplest adequate model, if given a choice, is known as "Occam's razor" (Fig. 2.2). We maintain a belief over which parameters in the model we consider plausible by reasoning with the posterior

$$p(\theta_i \mid x^n, M_i) = \frac{p(x^n \mid \theta_i, M_i)\,p(\theta_i \mid M_i)}{p(x^n \mid M_i)},$$

and we may choose the parameter value which maximizes the posterior distribution (the MAP estimate).

7.5 Model Averaging

Marginal likelihoods can also be used to derive model weights in Bayesian model averaging (BMA). Informally, the intuition behind BMA is that we are never fully convinced that a single model is the correct one for the analysis at hand. There are usually several (and often millions of) competing specifications. To explicitly incorporate this notion of "model uncertainty," one can estimate every model separately, compute relative probability weights for each model, and then generate model-averaged posterior distributions for the parameters (and predictions) of interest.

Fig. 2.2 The model evidence $p(D \mid m)$ performs a vital role in the prevention of model over-fitting. Models that are too simple are unlikely to generate the dataset. Models that are too complex can generate many possible datasets, but they are unlikely to generate any particular dataset at random. Source: Rasmussen and Ghahramani (2001)

We often choose BMA when there is not strong enough evidence for any particular model. Prediction of a new point $y_*$ under BMA is given over $m$ models as the weighted average

$$p(y_* \mid y) = \sum_{i=1}^{m} p(y_* \mid y, M_i)\,p(M_i \mid y).$$

Note that model-averaged prediction would be cumbersome to accomplish in a classical framework, and thus constitutes another advantage of employing a Bayesian estimation approach.

? Multiple Choice Question 2
Which of the following statements are true:
1. Bayesian inference is ideally suited to model selection because the model evidence effectively penalizes over-parameterized models.
2.
The principle of Occam's razor is to simply choose the model with the least bias.
3. Bayesian model averaging uses the uncertainty over the models to weight the output from each model.
4. Bayesian model averaging is a method of consensus voting between models: the best candidate model is selected for each new observation.
5. Hierarchical Bayesian modeling involves nesting Bayesian models through parameterizations of prior distributions and their distributions.

8 Probabilistic Graphical Models

Graphical models (a.k.a. Bayesian networks) are a method for representing relationships between random variables in a probabilistic model. They provide a useful tool for big data, providing graphical representations of complex datasets. To see how graphical models arise, we can revisit the familiar perceptron model from the previous chapter in a probabilistic framework, i.e. the network weights are now assumed to be probabilistic. As a starting point, consider a logistic regression classifier with probabilistic output:

$$P[G \mid X] = \sigma(U) = \frac{1}{1 + e^{-U}}, \quad U = wX + b, \quad G \in \{0, 1\},\ X \in \mathbb{R}^p.$$

By Bayes' law, we know that the posterior probabilities must be given by the likelihood, prior, and evidence:

$$P[G \mid X] = \frac{P[X \mid G]\,P[G]}{P[X]} = \frac{1}{1 + \exp\left(-\log\frac{P[X \mid G]}{P[X \mid G^c]} - \log\frac{P[G]}{P[G^c]}\right)},$$

where $G^c$ is the complement of $G$. So the outputs are only posterior probabilities when the weights and bias are, respectively, log-likelihood ratios and the log-prior odds:

$$w_j = \log\frac{P[X_j \mid G]}{P[X_j \mid G^c]}, \quad \forall j \in \{1, \ldots, p\}, \qquad b = \log\frac{P[G]}{P[G^c]}.$$

In particular, the $X_j$s must be conditionally independent given $G$; otherwise, the outputs from the logistic regression are not the true posterior probabilities. Put differently, the outputs are true posteriors only when the input is mutually independent given the class $G$.
In this case, the logistic regression is a naive Bayes' classifier: a type of generative model which models the joint distribution as a product of marginals,

$$P[X, G] = P[G]\prod_{j=1}^{p} P[X_j \mid G].$$

Hence, under this data assumption, logistic regression is the discriminative counterpart to naive Bayes. Figure 2.3b shows an example of equivalent logistic regression and naive Bayes' binary classifiers for the case when the inputs are binary. Furthermore, if the conditional density functions of the inputs, $P[X_j \mid G]$, are Gaussian (but not necessarily identical), then we can establish equivalence between logistic regression and a Gaussian naive Bayes' classifier. See Exercise 2.7 for establishing the equivalence when the inputs are binary. The graphical model captures the causal process (Pearl, 1988) by which the observed data was generated. For this reason, such models are often called generative models.

Fig. 2.3 Logistic regression $f_w(X) = \sigma(w^T X)$ and an equivalent naive Bayes' classifier. (a) Logistic regression weights $w = (2.23, -0.20)$ with bias $-1.16$, and the resulting predictions $(0.70, 0.74, 0.20, 0.24)$. (b) A naive Bayes' classifier with the same probabilistic output, with $P[X_1 = 1 \mid G = 1] = 0.8$, $P[X_1 = 1 \mid G = 0] = 0.3$, $P[X_2 = 1 \mid G = 1] = 0.45$, $P[X_2 = 1 \mid G = 0] = 0.5$.

Naive Bayes' classification is the simplest form of a probabilistic graphical model (PGM) with a directed graph $G = (X, E)$, where the edges, $E$, represent the conditional dependence between the random variables $X = (X, Y)$. For example, Fig. 2.3b shows the dependence of the response $G$ on $X$ in the naive Bayes' classifier. Such graphs, provided they are directed, are often referred to as "Bayesian networks." Such a graphical model captures the causal process by which the observed data was generated. If the graph is undirected (an undirected graphical model (UGM), as in restricted Boltzmann machines (RBMs)), then the network is referred to as a "Markov network" or Markov random field.
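This equivalence is easy to check numerically for binary inputs. The sketch below uses the class-conditional probabilities from Fig. 2.3b together with an assumed uniform class prior; the recovered weights match those quoted in Fig. 2.3a, and the two classifiers return identical posteriors:

```python
import math
from itertools import product

# Class-conditional probabilities as in Fig. 2.3b, plus an assumed prior P[G=1] = 0.5
p_g = 0.5
p_x_given_g = [0.80, 0.45]   # P[X_j = 1 | G = 1]
p_x_given_gc = [0.30, 0.50]  # P[X_j = 1 | G = 0]

def nb_posterior(x):
    """P[G = 1 | X = x] from the generative naive Bayes' model."""
    lik_g = math.prod(p if xj else 1 - p for xj, p in zip(x, p_x_given_g)) * p_g
    lik_gc = math.prod(q if xj else 1 - q for xj, q in zip(x, p_x_given_gc)) * (1 - p_g)
    return lik_g / (lik_g + lik_gc)

# Equivalent logistic regression for binary inputs: per-feature log-odds-ratio
# weights, with the leftover constants absorbed into the bias
w = [math.log(p * (1 - q) / (q * (1 - p)))
     for p, q in zip(p_x_given_g, p_x_given_gc)]
b = math.log(p_g / (1 - p_g)) + sum(
    math.log((1 - p) / (1 - q)) for p, q in zip(p_x_given_g, p_x_given_gc))

def lr_posterior(x):
    u = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1 / (1 + math.exp(-u))

print([round(v, 2) for v in w], round(b, 2))  # ~[2.23, -0.2] and ~-1.16
for x in product([0, 1], repeat=2):
    assert abs(nb_posterior(x) - lr_posterior(x)) < 1e-12
```

Note the design point the text makes: the per-feature factorization of the weights is only valid because the inputs are assumed conditionally independent given $G$.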
RBMs have the specific restriction that there are no observed-to-observed and no hidden-to-hidden node connections. RBMs are an example of continuous latent variable models, which find the probability of an observed variable by marginalizing over the continuous latent variable, $Z$:

$$p(x) = \int p(x \mid z)\,p(z)\,dz. \quad (2.32)$$

This type of graphical model corresponds to that of factor analysis. Other related types of graphical models include mixture models.

8.1 Mixture Models

A standard mixture probability density of a continuous and independently (but not identically) distributed random variable $X$, whose value is denoted by $x$, is defined as

$$p(x; \upsilon) = \sum_{k=1}^{K} \pi_k\,p(x; \theta_k).$$

The mixture density has $K$ components (or states) and is defined by the parameter set $\upsilon = \{\theta, \pi\}$, where $\pi = \{\pi_1, \cdots, \pi_K\}$ is the set of weights given to each component and $\theta = \{\theta_1, \cdots, \theta_K\}$ is the set of parameters describing each component distribution. A well-known mixture model is the Gaussian mixture model (GMM):

$$p(x) = \sum_{k=1}^{K} \pi_k\,\mathcal{N}(x; \mu_k, \sigma_k^2),$$

where each component parameter vector $\theta_k$ consists of the mean and variance parameters, $\mu_k$ and $\sigma_k^2$. When $X$ is discrete, the graphical model is referred to as a "discrete mixture model."

Examples of GMMs are common in finance. Risk managers, for example, speak in terms of "correlation breakdowns," "market contagion," and "volatility shocks." Finger (1997) presents a two-component GMM for modeling risk under normal and stressed market scenarios which has become standard methodology for stressed Value-at-Risk and Economic Capital modeling in the investment banking sector. Mixture models can also be used to cluster data, and they have a non-probabilistic analog called the K-means algorithm, a well-known unsupervised learning method used in finance and other fields. Before such a model can be fitted, it is necessary to introduce an additional variable which represents the current state of the data, i.e.
which of the mixture component distributions the current observation is drawn from.

Hidden Indicator Variable Representation of Mixture Models

Let us first suppose that the independent random variable, $X$, has been observed over $N$ data points, $x^N = \{x_1, \cdots, x_N\}$. The set is assumed to be generated by a $K$-component mixture model. To indicate the mixture component from which a sample was drawn, we introduce an independent hidden (a.k.a. latent) discrete random variable, $S \in \{1, \ldots, K\}$. For each observation $x_i$, the value of $S$ is denoted as $s_i$, and is encoded as a binary vector of length $K$. We set the vector's $k$-th component, $(s_i)_k = 1$, to indicate that the $k$-th mixture component is selected, while all other states are set to 0. As a consequence,

$$1 = \sum_{k=1}^{K} (s_i)_k.$$

We can now specify the joint probability distribution of $X$ and $S$ in terms of a marginal density $p(s_i; \pi)$ and a conditional density $p(x_i \mid s_i; \theta)$ as

$$p(x^n, s^n; \upsilon) = \prod_{i=1}^{N} p(x_i \mid s_i; \theta)\,p(s_i; \pi),$$

where the marginal densities $p(s_i; \pi)$ are drawn from a multinomial distribution that is parameterized by the mixing weights $\pi = \{\pi_1, \cdots, \pi_K\}$:

$$p(s_i; \pi) = \prod_{k=1}^{K} \pi_k^{(s_i)_k},$$

or, more simply, $P[(s_i)_k = 1] = \pi_k$. Naturally, the mixing weights $\pi_k \in [0, 1]$ must satisfy

$$1 = \sum_{k=1}^{K} \pi_k.$$

Maximum Likelihood Estimation

The maximum likelihood method of estimating mixture models is known as the expectation–maximization (EM) algorithm. The goal of EM is to maximize the likelihood of the data given the model, i.e. maximize

$$\mathcal{L}(\upsilon) = \log \prod_{i=1}^{N} p(x_i, s_i; \upsilon) = \sum_{i=1}^{N}\sum_{k=1}^{K} (s_i)_k \log\{\pi_k\,p(x_i; \theta_k)\}. \quad (2.40)$$

If the sequence of states $s^n$ were known, then the estimation of the model parameters $\pi, \theta$ would be straightforward; conditioned on the state variables and the observations, Eq. 2.40 could be maximized with respect to the model parameters. However, the value of the state variable is unknown.
This suggests an alternative two-stage iterated optimization algorithm. If we knew the expected value of $S$, we could use this expectation in the first step to perform a weighted maximum likelihood estimation of Eq. 2.40 with respect to the model parameters. These estimates will be incorrect insofar as the expectation of $S$ is inaccurate. So, in the second step, we update the expected value of $S$, pretending that the model parameters $\upsilon := (\pi, \theta)$ are known and held fixed at their values from the past iteration. This is precisely the strategy of the expectation–maximization (EM) algorithm: a statistically self-consistent, iterative algorithm for maximum likelihood estimation. In the context of mixture models, the EM algorithm is outlined as follows:

• E-step: In this step, the parameters $\upsilon$ are held fixed at the old values, $\upsilon^{old}$, obtained from the previous iteration (or at their initial settings during the algorithm's initialization). Conditioned on the observations, the E-step then computes the probability density of the state variables $S_i, \forall i$, given the current model parameters and observation data, i.e.

$$p(s_i \mid x_i, \upsilon^{old}) \propto p(x_i \mid s_i; \theta^{old})\,p(s_i; \pi^{old}).$$

In particular, we compute

$$P((s_i)_k = 1 \mid x_i, \upsilon^{old}) = \frac{p(x_i \mid (s_i)_k = 1; \theta_k)\,\pi_k}{\sum_{l=1}^{K} p(x_i \mid (s_i)_l = 1; \theta_l)\,\pi_l}.$$

The likelihood terms $p(x_i \mid (s_i)_k = 1; \theta_k)$ are evaluated using the observation densities defined for each of the states.

• M-step: In this step, the hidden state probabilities are considered given, and maximization is performed with respect to the parameters:

$$\upsilon^{new} = \arg\max_{\upsilon} \mathcal{L}(\upsilon).$$

This results in the following update equations for the parameters of the probability distributions:

$$\pi_k = \frac{1}{N}\sum_{i=1}^{N}(\gamma_i)_k, \qquad \mu_k = \frac{\sum_{i=1}^{N}(\gamma_i)_k\,x_i}{\sum_{i=1}^{N}(\gamma_i)_k}, \qquad \sigma_k^2 = \frac{\sum_{i=1}^{N}(\gamma_i)_k\,(x_i - \mu_k)^2}{\sum_{i=1}^{N}(\gamma_i)_k}, \quad \forall k \in \{1, \ldots, K\},$$

where $(\gamma_i)_k := E[(s_i)_k \mid x_i]$ are the responsibilities: conditional expectations which measure how strongly a data point, $x_i$, "belongs" to each component, $k$, of the mixture model.
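The E- and M-steps above can be sketched for a one-dimensional, two-component GMM in a few dozen lines. The following is a minimal illustration on synthetic data (two well-separated clusters with hypothetical means of -2 and 3), not production code:

```python
import math
import random

def em_gmm_1d(xs, iters=50):
    """EM for a 1-D, two-component Gaussian mixture: alternate responsibilities
    (E-step) and weighted ML updates of pi_k, mu_k, sigma_k^2 (M-step)."""
    n, k = len(xs), 2
    mean_all = sum(xs) / n
    pi = [1.0 / k] * k
    mu = [min(xs), max(xs)]                          # crude but deterministic init
    var = [sum((x - mean_all) ** 2 for x in xs) / n] * k

    def normal_pdf(x, m, v):
        return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

    for _ in range(iters):
        # E-step: responsibilities (gamma_i)_k = P((s_i)_k = 1 | x_i, v_old)
        gamma = []
        for x in xs:
            w = [pi[j] * normal_pdf(x, mu[j], var[j]) for j in range(k)]
            z = sum(w)
            gamma.append([wj / z for wj in w])
        # M-step: weighted maximum likelihood updates
        for j in range(k):
            nk = sum(g[j] for g in gamma)
            pi[j] = nk / n
            mu[j] = sum(g[j] * x for g, x in zip(gamma, xs)) / nk
            var[j] = sum(g[j] * (x - mu[j]) ** 2 for g, x in zip(gamma, xs)) / nk
    return pi, mu, var

# Synthetic data: two well-separated Gaussian clusters
rng = random.Random(42)
xs = [rng.gauss(-2.0, 0.5) for _ in range(300)] + \
     [rng.gauss(3.0, 1.0) for _ in range(300)]
pi, mu, var = em_gmm_1d(xs)
print(sorted(round(m, 2) for m in mu))  # fitted means land near -2 and 3
```

The deterministic min/max initialization is chosen for reproducibility of the sketch; in practice EM is sensitive to initialization and is usually restarted from several random (or K-means-based) starting points.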
The number of components needed to model the data depends on the data and can be determined by a Kolmogorov–Smirnov test or based on entropy criteria. Heavy-tailed data requires at least two light-tailed components to compensate. More components require larger sample sizes to ensure adequate fitting; in the extreme case, there may be insufficient data available to calibrate a given mixture model to a desired degree of accuracy. In summary, while GMMs are flexible, they may not always be the most appropriate model. If more is known about the data distribution, such as its behavior in the tails, incorporating this knowledge can only help improve the model.

? Multiple Choice Question 3
Which of the following statements are true:
1. Mixture models assume that the data is multi-modal and drawn from a linear combination of uni-modal distributions.
2. The expectation–maximization (EM) algorithm is a type of iterative unsupervised learning algorithm which alternates between updating the probability density of the state variables based on the model parameters (E-step) and updating the parameters by maximum likelihood estimation (M-step).
3. The EM algorithm automatically determines the modality of the distribution and hence the number of components.
4. A mixture model is only appropriate for use in finance if the modeler specifies which component is the most relevant for each observation.

9 Summary

Probabilistic modeling is an important class of models for financial data, which is often noisy and incomplete. Additionally, much of finance rests on being able to make financial decisions under uncertainty, a task perfectly suited to probabilistic modeling. In this chapter we have identified and demonstrated how probabilistic modeling is used for financial modeling.
In particular, we have:
– Applied Bayesian inference to data using simple probabilistic models;
– Shown how linear regression with probabilistic weights can be viewed as a simple probabilistic graphical model; and
– Developed more versatile representations of complex data with probabilistic graphical models such as mixture models and hidden Markov models.

10 Exercises

Exercise 2.1: Applied Bayes' Theorem
An accountant is 95 percent effective in detecting fraudulent accounting when it is, in fact, present. However, the audit also yields a "false positive" result for one percent of the non-fraudulent companies audited. If 0.1 percent of the companies are actually fraudulent, what is the probability that a company is fraudulent given that the audit revealed fraudulent accounting?

Exercise 2.2*: FX and Equity
A currency strategist has estimated that JPY will strengthen against USD with probability 60% if the S&P 500 continues to rise, and that JPY will strengthen against USD with probability 95% if the S&P 500 falls or stays flat. We are in an upward-trending market at the moment, and we believe that the probability that the S&P 500 will rise is 70%. We then learn that JPY has actually strengthened against USD. Taking this new information into account, what is the probability that the S&P 500 will rise? Hint: Recall Bayes' rule: $P(A \mid B) = \frac{P(B \mid A)}{P(B)}P(A)$.

Exercise 2.3**: Bayesian Inference in Trading
Suppose there are $n$ conditionally independent, but not identical, Bernoulli trials $G_1, \ldots, G_n$ generated from the map $P(G_i = 1 \mid X = x_i) = g_1(x_i \mid \theta)$ with $\theta \in [0, 1]$. Show that the likelihood of $G \mid X$ is given by

$$p(G \mid X, \theta) = \prod_{i=1}^{n} \left(g_1(x_i \mid \theta)\right)^{G_i}\left(g_0(x_i \mid \theta)\right)^{1-G_i}$$

and the log-likelihood of $G \mid X$ is given by

$$\ln p(G \mid X, \theta) = \sum_{i=1}^{n} G_i \ln(g_1(x_i \mid \theta)) + (1 - G_i)\ln(g_0(x_i \mid \theta)).$$

Using Bayes' rule, write the conditional probability density function of $\theta$ (the "posterior") given the data $(X, G)$ in terms of the above likelihood function.
From the previous example, suppose that G = 1 corresponds to JPY strengthening against the dollar and X are the S&P 500 daily returns, and now

$$ g_1(x \mid \theta) = \theta\, 1_{x > 0} + \left(\theta + 3/5\right) 1_{x \le 0}. $$

Starting with a neutral view on the parameter θ (i.e., a uniform prior on θ ∈ [0, 1]), learn the distribution of the parameter θ given that JPY strengthens against the dollar for two of the three days and S&P 500 is observed to rise for 3 consecutive days. Hint: You can use the Beta density function with scaling constant B(α, β),

$$ p(\theta \mid \alpha, \beta) = \frac{(\alpha + \beta - 1)!}{(\alpha - 1)!(\beta - 1)!}\, \theta^{\alpha - 1}(1 - \theta)^{\beta - 1} = B(\alpha, \beta)\, \theta^{\alpha - 1}(1 - \theta)^{\beta - 1} \quad (2.49) $$

to evaluate the integral in the marginal density function. If θ represents the currency analyst's opinion of JPY strengthening against the dollar, what is the probability that the model overestimates the analyst's estimate?

Exercise 2.4*: Bayesian Inference in Trading

Suppose that you observe the following daily sequence of directional changes in the JPY/USD exchange rate (U (up), D (down or stays flat)):

U, D, U, U, D

and the corresponding daily sequence of S&P 500 returns is

-0.05, 0.01, -0.01, -0.02, 0.03

You propose the following probability model to explain the behavior of JPY against USD given the directional changes in S&P 500 returns: Let G denote a Bernoulli R.V., where G = 1 corresponds to JPY strengthening against the dollar and r are the S&P 500 daily returns. All observations of G are conditionally independent (but *not* identical) so that the likelihood is

$$ p(G \mid r, \theta) = \prod_{i=1}^n p(G = G_i \mid r = r_i, \theta) $$

where

$$ p(G_i = 1 \mid r = r_i, \theta) = \begin{cases} \theta_u, & r_i > 0 \\ \theta_d, & r_i \le 0. \end{cases} $$

Compute the full expression for the likelihood that the data was generated by this model.
Exercise 2.5: Model Comparison

Suppose you observe the following daily sequence of direction changes in the stock market (U (up), D (down)):

U, D, U, U, D, D, D, D, U, U, U, U, U, U, U, D, U, D, U, D, U, D, D, D, D, U, U, D, U, D, U, U, U, D, U, D, D, D, U, U, D, D, D, U, D, U, D, U, D, D

You compare two models for explaining its behavior. The first model, M1, assumes that the probability of an upward movement is fixed at 0.5 and the data is i.i.d. The second model, M2, also assumes the data is i.i.d., but that the probability of an upward movement is an unknown θ ∈ Θ = (0, 1) with a uniform prior on θ: p(θ | M2) = 1. For simplicity, we additionally choose a uniform model prior p(M1) = p(M2). Compute the model evidence for each model. Compute the Bayes factor and indicate which model we should prefer in light of this data.

Exercise 2.6: Bayesian Prediction and Updating

Using Bayesian prediction, predict the probability of an upward movement given the best model and data in Exercise 2.5. Suppose now that you observe the following new daily sequence of direction changes in the stock market (U (up), D (down)):

D, U, D, D, D, D, U, D, U, D, U, D, D, D, U, U, D, U, D, D, D, U, U, D, D, D, U, D, U, D, U, D, D, D, U, D, U, D, U, D, D, D, D, U, U, D, U, D, U, U

Using the best model from Exercise 2.5, compute the new posterior distribution function based on the new data and the data in the previous question, and predict the probability of an upward price movement given all data. State all modeling assumptions clearly.

Exercise 2.7: Logistic Regression Is Naive Bayes

Suppose that G and X ∈ {0, 1}^p are Bernoulli random variables and the $X_i$s are mutually independent given G—that is, $P[X \mid G] = \prod_{i=1}^p P[X_i \mid G]$.
Given a naive Bayes classifier P[G | X], show that the following logistic regression model produces equivalent output if the weights are

$$ w_0 = \log\frac{P[G]}{P[G^c]} + \sum_{i=1}^p \log\frac{P[X_i = 0 \mid G]}{P[X_i = 0 \mid G^c]}, $$

$$ w_i = \log\left(\frac{P[X_i = 1 \mid G]}{P[X_i = 1 \mid G^c]} \cdot \frac{P[X_i = 0 \mid G^c]}{P[X_i = 0 \mid G]}\right), \quad i = 1, \dots, p. $$

Exercise 2.8**: Restricted Boltzmann Machines

Consider a probabilistic model with two types of binary variables: visible binary stochastic units v ∈ {0, 1}^D and hidden binary stochastic units h ∈ {0, 1}^F, where D and F are the number of visible and hidden units, respectively. The joint probability of observing their values is given by the exponential distribution

$$ p(v, h) = \frac{1}{Z}\exp\left(-E(v, h)\right), \qquad Z = \sum_{v, h} \exp\left(-E(v, h)\right), $$

where the energy E(v, h) of the state {v, h} is

$$ E(v, h) = -v^T W h - b^T v - a^T h = -\sum_{i=1}^D \sum_{j=1}^F W_{ij} v_i h_j - \sum_{i=1}^D b_i v_i - \sum_{j=1}^F a_j h_j, $$

with model parameters a, b, W. This probabilistic model is called the restricted Boltzmann machine. Show that the conditional probabilities for visible and hidden nodes are given by the sigmoid function σ(x) = 1/(1 + e^{−x}):

$$ P[v_i = 1 \mid h] = \sigma\Big(\sum_{j=1}^F W_{ij} h_j + b_i\Big), \qquad P[h_j = 1 \mid v] = \sigma\Big(\sum_{i=1}^D W_{ij} v_i + a_j\Big). $$

Appendix

Answers to Multiple Choice Questions

Question 1 Answer: 1,3,4.
Question 2 Answer: 1,3,5.
Question 3 Answer: 1,2. Mixture models assume that the data is multi-modal—the data is drawn from a linear combination of uni-modal distributions. The expectation–maximization (EM) algorithm is a type of iterative, self-consistent, unsupervised learning algorithm which alternates between updating the probability density of the state variables, based on model parameters (E-step), and updating the parameters by maximum likelihood estimation (M-step). The EM algorithm does not automatically determine the modality of the data distribution, although there are statistical tests to determine this. A mixture model assigns a probabilistic weight for every component that each observation might belong to.
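To make the E-step/M-step alternation concrete, here is a minimal numpy sketch (not from the text; the two-component 1-D setup, initial values, and data are illustrative assumptions) of EM for a Gaussian mixture:

```python
import numpy as np

def em_gmm_1d(x, n_iter=200):
    """Fit a two-component 1-D Gaussian mixture by EM (E-step / M-step)."""
    # Simple deterministic initialization: one mean at each extreme of the data
    mu = np.array([x.min(), x.max()])
    sigma = np.full(2, x.std())
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = np.stack([
            pi[k] * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                  / (np.sqrt(2 * np.pi) * sigma[k])
            for k in range(2)
        ])
        resp = dens / dens.sum(axis=0)
        # M-step: maximum likelihood updates given the responsibilities
        nk = resp.sum(axis=1)
        pi = nk / len(x)
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
    return pi, mu, sigma

# Bimodal sample: two well-separated Gaussians
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 0.5, 500), rng.normal(3.0, 1.0, 500)])
pi, mu, sigma = em_gmm_1d(x)
print(np.sort(mu))  # component means recovered near -2 and 3
```

Note that, consistent with Answer 3 above, nothing in the algorithm chooses the number of components: the two-component structure is fixed up front by the modeler.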
The component with the highest weight is chosen.

References

Breiman, L. (2001). Statistical modeling: The two cultures (with comments and a rejoinder by the author). Statistical Science, 16(3), 199–231.
Duembgen, M., & Rogers, L. C. G. (2014). Estimate nothing. https://arxiv.org/abs/1401.5666.
Finger, C. (1997). A methodology to stress correlations. RiskMetrics Monitor, Fourth Quarter.
Rasmussen, C. E., & Ghahramani, Z. (2001). Occam's razor. In Advances in Neural Information Processing Systems 13 (pp. 294–300). MIT Press.

Chapter 3
Bayesian Regression and Gaussian Processes

This chapter introduces Bayesian regression and shows how it extends many of the concepts in the previous chapter. We develop kernel-based machine learning methods—specifically Gaussian process regression, an important class of Bayesian machine learning methods—and demonstrate their application to "surrogate" models of derivative prices. This chapter also provides a natural starting point from which to develop intuition for the role and functional form of regularization in a frequentist setting—the subject of subsequent chapters.

1 Introduction

In general, it is difficult to develop intuition about how the distribution of weights in a parametric regression model represents the data. Rather than induce distributions over variables, as we have seen in the previous chapter, we could instead induce distributions over functions. Specifically, we can express those intuitions using a "covariance kernel." We start by exploring Bayesian regression in a more general setup that enables us to easily move from a toy regression model to a more complex non-parametric Bayesian regression model, such as Gaussian process regression. By introducing Bayesian regression in more depth, we show how it extends many of the concepts in the previous chapter.
We develop kernel-based machine learning methods (specifically Gaussian process regression), and demonstrate their application to "surrogate" models of derivative prices.¹

¹ Surrogate models learn the output of an existing mathematical or statistical model as a function of input data.

© Springer Nature Switzerland AG 2020 M. F. Dixon et al., Machine Learning in Finance, https://

Chapter Objectives

The key learning points of this chapter are:

– Formulate a Bayesian linear regression model;
– Derive the posterior distribution and the predictive distribution;
– Describe the role of the prior as an equivalent form of regularization in maximum likelihood estimation; and
– Formulate and implement Gaussian processes for kernel-based probabilistic modeling, with programming examples involving derivative modeling.

2 Bayesian Inference with Linear Regression

Consider the following linear regression model which is affine in x ∈ ℝ:

$$ y = f(x) = \theta_0 + \theta_1 x, \qquad \theta_0, \theta_1 \sim N(0, 1), \; x \in \mathbb{R}, $$

and suppose that we observe the value of the function over the inputs x := [x₁, …, xₙ]. The random parameter vector θ := [θ₀, θ₁] is unknown. This setup is referred to as "noise-free," since we assume that y is strictly given by the function f(x) without noise. The graphical model representation of this model is given in Fig. 3.1 and clearly specifies that the ith model output only depends on xᵢ. Note that the graphical model also holds in the case when there is noise. In the noise-free setting, the expectation of the function under known data is

$$ \mathbb{E}_\theta[f(x_i) \mid x_i] = \mathbb{E}_\theta[\theta_0] + \mathbb{E}_\theta[\theta_1] x_i = 0, \quad \forall i, $$

where the expectation operator is w.r.t. the prior density of θ, $\mathbb{E}_\theta[\cdot] = \int (\cdot)\, p(\theta)\, d\theta$.

Fig. 3.1 This graphical model represents Bayesian linear regression. The features x := {xᵢ}ⁿᵢ₌₁ and responses y := {yᵢ}ⁿᵢ₌₁ are known and the random parameter vector θ is unknown. The ith model output only depends on xᵢ.
2 Bayesian Inference with Linear Regression Then the covariance of the function values between any two points, xi and xj is Eθ [f (xi )f (xj )|xi , xj ] = Eθ [θ02 + θ0 θ1 (xi + xj ) + θ12 xi xj ] = Eθ [θ02 ] + Eθ [θ02 ]xi xj + Eθ [θ0 θ1 ](xi + xj ), (3.5) = 1 + xi xj , where the last term is zero because of the independence of θ0 and θ1 . Then any collection of function values [f (x1 ), . . . , f (xn )] with given data has a joint Gaussian distribution with covariance matrix Kij := Eθ [f (xi )f (xj )|xi , xj ] = 1 + xi xj . Such a probabilistic model is the simplest example of a more general, non-linear, Bayesian kernel learning method referred to as “Gaussian Process Regression” or simply “Gaussian Processes” (GPs) and is the subject of the later material in this chapter. Noisy Data The above example is in a noise-free setting where the function values [f (x1 ), . . . , f (xn )] are observed. In practice, we do not observe these function values, but rather some target values y = [y1 , . . . , yn ] which depend on x by the function, f (x), and some zero-mean Gaussian i.i.d. additive noise with known variance σn2 : yi = f (xi ) + i , i ∼ N(0, σn2 ). Hence the observed i.i.d. data is D := (x, y). Following Rasmussen and Williams (2006), under this noise assumption and the linear model we can write down the likelihood function of the data: p(y|x, θ ) = p(yi |xi , θ ) = √ 1 2π σn exp{−(yi − xi θ1 − θ0 )2 /(2σn2 )} and hence y|x, θ ∼ N(θ0 + θ1 x, σn2 I ). Bayesian inference of the parameters in this linear regression model is based on the posterior distribution over the weights: p(θi |y, x) = p(y|x, θi )p(θi ) , i ∈ {0, 1}, p(y|x) where the marginal likelihood in the denominator is given by integrating over the parameters as 3 Bayesian Regression and Gaussian Processes ( p(y|x) = p(y|x, θ )p(θ)dθ . 
If we define the matrix X, where [X]ᵢ := [1, xᵢ], then under the more general conjugate prior we have

$$ \theta \sim N(\mu, \Sigma), \qquad y \mid X, \theta \sim N(\theta^T X, \sigma_n^2 I), $$

and, since the product of Gaussian densities is also Gaussian, we can simply use standard results on moments of affine transformations to give

$$ \mathbb{E}[y \mid X] = \mathbb{E}[\theta^T X + \epsilon] = \mathbb{E}[\theta^T] X = \mu^T X. $$

The conditional covariance is

$$ \mathrm{Cov}(y \mid X) = \mathrm{Cov}(\theta^T X) + \sigma_n^2 I = X\,\mathrm{Cov}(\theta)\, X^T + \sigma_n^2 I = X \Sigma X^T + \sigma_n^2 I. $$

To derive the posterior of θ, it is convenient to transform the prior density function from a moment parameterization to a natural parameterization by completing the square. This is useful for multiplying normal density functions such as normalized likelihoods and conjugate priors. The quadratic form of the prior transforms to

$$ p(\theta) \propto \exp\{-\tfrac{1}{2}(\theta - \mu)^T \Sigma^{-1} (\theta - \mu)\} \quad (3.12) $$
$$ \propto \exp\{\mu^T \Sigma^{-1} \theta - \tfrac{1}{2}\theta^T \Sigma^{-1} \theta\}, \quad (3.13) $$

where the ½μᵀΣ⁻¹μ term is absorbed into the normalizing term as it is independent of θ. Using this transformation, the posterior p(θ | D) is proportional to:

$$ p(y \mid X, \theta)\, p(\theta) \propto \exp\{-\frac{1}{2\sigma_n^2}(y - \theta^T X)^T(y - \theta^T X)\}\exp\{\mu^T \Sigma^{-1}\theta - \tfrac{1}{2}\theta^T \Sigma^{-1}\theta\} \quad (3.14) $$
$$ \propto \exp\{-\frac{1}{2\sigma_n^2}(-2y\theta^T X + \theta^T X X^T \theta)\}\exp\{\mu^T \Sigma^{-1}\theta - \tfrac{1}{2}\theta^T \Sigma^{-1}\theta\} \quad (3.15) $$
$$ = \exp\{\left(\Sigma^{-1}\mu + \frac{1}{\sigma_n^2} y^T X\right)^T \theta - \tfrac{1}{2}\theta^T\left(\Sigma^{-1} + \frac{1}{\sigma_n^2} X X^T\right)\theta\} \quad (3.16) $$
$$ = \exp\{a^T\theta - \tfrac{1}{2}\theta^T A \theta\}. $$

The posterior follows the distribution θ | D ∼ N(μ′, Σ′), where the moments of the posterior are

$$ \mu' = \Sigma' a = \left(\Sigma^{-1} + \frac{1}{\sigma_n^2} X X^T\right)^{-1}\left(\Sigma^{-1}\mu + \frac{1}{\sigma_n^2} y^T X\right) \quad (3.19) $$
$$ \Sigma' = A^{-1} = \left(\Sigma^{-1} + \frac{1}{\sigma_n^2} X X^T\right)^{-1} $$

and we use the inverse of the transformation above, from natural back to moment parameterization, to write

$$ p(\theta \mid D) \propto \exp\{-\tfrac{1}{2}(\theta - \mu')^T (\Sigma')^{-1} (\theta - \mu')\}. $$

Σ⁻¹, the inverse of a covariance matrix, is referred to as the precision matrix. The mean of this distribution is the maximum a posteriori (MAP) estimate of the weights—it is the mode of the posterior distribution. We will show shortly that it corresponds to the penalized maximum likelihood estimate of the weights, with an L2 (ridge) penalty term given by the log prior.
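The closed-form posterior update is straightforward to implement. A minimal numpy sketch (illustrative; the synthetic dataset is an assumption, and the design matrix is taken here as n × 2 with rows [1, xᵢ], so in this orientation the update reads Σ′ = (Σ⁻¹ + σₙ⁻²XᵀX)⁻¹ and μ′ = Σ′(Σ⁻¹μ + σₙ⁻²Xᵀy)):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from f(x) = 0.3 + 0.5 x plus noise
n, sigma_n = 200, 0.2
x = rng.uniform(-1, 1, n)
y = 0.3 + 0.5 * x + sigma_n * rng.standard_normal(n)

X = np.column_stack([np.ones(n), x])      # rows [1, x_i]
mu, Sigma = np.zeros(2), np.eye(2)        # prior theta ~ N(0, I)

# Posterior moments of theta | D
A = np.linalg.inv(Sigma) + X.T @ X / sigma_n**2
Sigma_post = np.linalg.inv(A)
mu_post = Sigma_post @ (np.linalg.inv(Sigma) @ mu + X.T @ y / sigma_n**2)
print(mu_post)  # close to the true weights [0.3, 0.5]
```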
Figure 3.2 demonstrates Bayesian learning of the posterior distribution of the weights. A bi-variate Gaussian prior is initially chosen, and there are an infinite number of possible lines that could be drawn in the data space [−1, 1] × [−1, 1]. The data is generated under the model f(x) = 0.3 + 0.5x with a small amount of additive i.i.d. Gaussian noise. As the number of points that the likelihood function is evaluated over increases, the posterior distribution sharpens and eventually contracts to a point. See the Bayesian Linear Regression Python notebook for details of the implementation.

Multiple Choice Question 1

Which of the following statements are true:

1. Bayesian regression treats the regression weights as random variables.
2. In Bayesian regression the data function f(x) is assumed to always be observed.
3. The posterior distribution of the parameters is always Gaussian if the prior is Gaussian.
4. The posterior distribution of the regression weights will typically contract with increasing data.
5. The mean of the posterior distribution depends on both the mean and covariance of the prior if it is Gaussian.

Fig. 3.2 This figure demonstrates Bayesian inference for the linear model. The data has been generated from the function f(x) = 0.3 + 0.5x with a small amount of additive white noise. Source: Bishop

2.1 Maximum Likelihood Estimation

Let us briefly revisit parameter estimation in a frequentist setting to solidify our understanding of Bayesian inference. Assuming that σₙ² is a known parameter, we can easily derive the maximum likelihood estimate of the parameter vector, θ̂. The gradient of the negative log-likelihood function (a.k.a. loss function) w.r.t. θ is

$$ \frac{d}{d\theta}L(\theta) := -\frac{d}{d\theta}\left[\sum_{i=1}^n \log p(y_i \mid x_i, \theta)\right] = \frac{d}{d\theta}\left[\frac{1}{2\sigma_n^2}||y - \theta^T X||_2^2 + c\right] = \frac{1}{\sigma_n^2}(-y^T X + \theta^T X^T X), $$

where the constant c := (n/2)(log(2π) + log(σₙ²)).
Setting this gradient to zero gives the orthogonal projection of y onto the subspace spanned by X:

$$ \hat\theta = (X^T X)^{-1} X^T y, $$

where θ̂ is the vector in the subspace spanned by X which is closest to y. This result states that the maximum likelihood estimate of an unpenalized loss function (i.e., without including the prior) is the OLS estimate when the noise variance is known. If the noise variance is unknown, then the loss function is

$$ L(\theta, \sigma_n^2) = \frac{n}{2}\log(\sigma_n^2) + \frac{1}{2\sigma_n^2}||y - \theta^T X||_2^2 + c, $$

where now c = (n/2)log(2π). Taking the partial derivative

$$ \frac{\partial L(\theta, \sigma_n^2)}{\partial \sigma_n^2} = \frac{n}{2\sigma_n^2} - \frac{1}{2\sigma_n^4}||y - \theta^T X||_2^2, $$

and setting it to zero gives² σ̂ₙ² = (1/n)||y − θᵀX||₂².

Maximum likelihood estimation is prone to overfitting and therefore should be avoided. We instead maximize the posterior distribution to arrive at the MAP estimate, θ̂_MAP. Returning to the above computation under known noise:

$$ \frac{d}{d\theta}L(\theta) := -\frac{d}{d\theta}\left[\sum_{i=1}^n \log p(y_i \mid x_i, \theta) + \log p(\theta)\right] $$
$$ = \frac{d}{d\theta}\left[\frac{1}{2\sigma_n^2}||y - \theta^T X||_2^2 + \frac{1}{2}(\theta - \mu)^T \Sigma^{-1} (\theta - \mu) + c\right] $$
$$ = \frac{1}{\sigma_n^2}(-y^T X + \theta^T X X^T) + (\theta - \mu)^T \Sigma^{-1}. $$

Setting this derivative to zero gives

$$ \frac{1}{\sigma_n^2}(y^T X - \theta^T X X^T) = (\theta - \mu)^T \Sigma^{-1}, $$

and after some rearrangement we obtain

$$ \hat\theta_{MAP} = (X X^T + \sigma_n^2 \Sigma^{-1})^{-1}(\sigma_n^2 \Sigma^{-1}\mu + X^T y) = A^{-1}(\Sigma^{-1}\mu + \sigma_n^{-2} X^T y), \quad (3.26) $$

which is equal to the mean of the posterior derived in Eq. 3.19. Of course, this is to be expected since the mean of a Gaussian distribution is also its mode. The difference between θ̂_MAP and θ̂ is the σₙ²Σ⁻¹ term. This term has the effect of reducing the condition number of XᵀX. Ignoring the mean of the prior, the linear system (XᵀX)θ = Xᵀy becomes the regularized linear system: Aθ = σₙ⁻²Xᵀy.

² Note that the factor of 2 in the denominator of the second term does not cancel out because the derivative is w.r.t. σₙ² and not σₙ.
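The conditioning claim can be checked numerically. A small sketch (illustrative, not from the text) with nearly collinear features, taking the n × 2 design-matrix orientation so that the regularized normal-equations matrix is XᵀX + σₙ²Σ⁻¹:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma_n = 100, 0.1

# Two nearly collinear columns -> ill-conditioned X^T X
x1 = rng.standard_normal(n)
x2 = x1 + 1e-4 * rng.standard_normal(n)
X = np.column_stack([x1, x2])

Sigma_inv = np.eye(2)  # prior theta ~ N(0, I)
gram = X.T @ X
regularized = gram + sigma_n**2 * Sigma_inv

print(np.linalg.cond(gram))         # very large
print(np.linalg.cond(regularized))  # far smaller
```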
Note that choosing the isotropic Gaussian prior p(θ) = N(0, (1/(2λ))I) gives the ridge penalty term in the loss function, λ||θ||₂²; i.e., the negative log Gaussian prior matches the ridge penalty term up to a constant. The limit λ → 0 recovers maximum likelihood estimation—this corresponds to using the uninformative prior. Of course, in Bayesian inference we do not perform point estimation of the parameters; however, it was a useful exercise to confirm that the mean of the posterior in Eq. 3.19 did indeed match the MAP estimate. Furthermore, we have made explicit the interpretation of the prior as a regularization term used in ridge regression.

2.2 Bayesian Prediction

Recall from Chap. 2 that Bayesian prediction requires evaluating the density of f∗ := f(x∗) w.r.t. a new data point x∗ and the training data D. In general, we predict the model output at a new point, f∗, by averaging the model output over all possible weights, with the weight density function given by the posterior. That is, we seek the marginal density

$$ p(f_* \mid x_*, D) = \mathbb{E}_{\theta \mid D}[p(f_* \mid x_*, \theta)], $$

where the dependency on θ has been integrated out. This conditional density is Gaussian, f∗ | x∗, D ∼ N(μ∗, Σ∗), with moments

$$ \mu_* = \mathbb{E}_{\theta \mid D}[f_* \mid x_*, D] = x_*^T \mathbb{E}_{\theta \mid D}[\theta \mid x_*, D] = x_*^T \mathbb{E}_{\theta \mid D}[\theta \mid D] = x_*^T \mu' $$
$$ \Sigma_* = \mathbb{E}_{\theta \mid D}[(f_* - \mu_*)(f_* - \mu_*)^T \mid x_*, D] = x_*^T \mathbb{E}_{\theta \mid D}[(\theta - \mu')(\theta - \mu')^T \mid D]\, x_* = x_*^T \Sigma' x_*, $$

where we have avoided taking the expectation of the entire density function p(f∗ | x∗, θ), but rather just the moments, because we know that f∗ is Gaussian.

Multiple Choice Question 2

Which of the following statements are true:

1. Prediction under a Bayesian linear model requires first estimating the posterior distribution of the parameters;
2. The predictive distribution is Gaussian only if the posterior and likelihood distributions are Gaussian;
3. The predictive distribution depends on the weights in the models;
4.
The variance of the predictive distribution typically contracts with increasing training data.

2.3 Schur Identity

There is another approach to deriving the predictive distribution from the conditional distribution of the model output, which relies on properties of inverse matrices. We can write the joint density between Gaussian random variables X and Y in terms of the partitioned covariance matrix:

$$ \begin{pmatrix} X \\ Y \end{pmatrix} \sim N\left(\begin{pmatrix} \mu_x \\ \mu_y \end{pmatrix}, \begin{pmatrix} \Sigma_{xx} & \Sigma_{xy} \\ \Sigma_{yx} & \Sigma_{yy} \end{pmatrix}\right), $$

where Σₓₓ = V(X), Σₓᵧ = Cov(X, Y) and Σᵧᵧ = V(Y). How can we find the conditional density p(y | x)? In order to express the moments in terms of the partitioned covariance matrix we shall use the following Schur identity:

$$ \begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} M & -M B D^{-1} \\ -D^{-1} C M & D^{-1} + D^{-1} C M B D^{-1} \end{pmatrix}, $$

where the Schur complement w.r.t. the submatrix D is M := (A − BD⁻¹C)⁻¹. Applying the Schur identity to the partitioned precision matrix A gives

$$ \begin{pmatrix} \Sigma_{yy} & \Sigma_{yx} \\ \Sigma_{xy} & \Sigma_{xx} \end{pmatrix}^{-1} = \begin{pmatrix} A_{yy} & A_{yx} \\ A_{xy} & A_{xx} \end{pmatrix}, $$

where

$$ A_{yy} = (\Sigma_{yy} - \Sigma_{yx}\Sigma_{xx}^{-1}\Sigma_{xy})^{-1}, \qquad A_{yx} = -(\Sigma_{yy} - \Sigma_{yx}\Sigma_{xx}^{-1}\Sigma_{xy})^{-1}\Sigma_{yx}\Sigma_{xx}^{-1}, $$

and thus the moments of the Gaussian distribution p(y | x) are

$$ \mu_{y|x} = \mu_y + \Sigma_{yx}\Sigma_{xx}^{-1}(x - \mu_x), \qquad \Sigma_{y|x} = \Sigma_{yy} - \Sigma_{yx}\Sigma_{xx}^{-1}\Sigma_{xy}. $$

Hence the density of the conditional distribution Y | X can alternatively be derived by using the Schur identity. In the special case when the joint density p(x, y) is bi-Gaussian, the expressions for the moments simplify to

$$ \mu_{y|x} = \mu_y + \frac{\sigma_{yx}}{\sigma_x^2}(x - \mu_x), \quad (3.33) $$
$$ \sigma^2_{y|x} = \sigma_y^2 - \frac{\sigma_{yx}^2}{\sigma_x^2}, \quad (3.34) $$

where σₓᵧ is the covariance between X and Y. Now returning to the predictive distribution, the joint density between y and f∗ is

$$ \begin{pmatrix} y \\ f_* \end{pmatrix} \sim N\left(\begin{pmatrix} \mu_y \\ \mu_{f_*} \end{pmatrix}, \begin{pmatrix} \Sigma_{yy} & \Sigma_{yf_*} \\ \Sigma_{f_* y} & \Sigma_{f_* f_*} \end{pmatrix}\right). \quad (3.35) $$

We can immediately write down the moments of the conditional distribution

$$ \mu_{f_*|X,y,x_*} = \mu_{f_*} + \Sigma_{f_* y}\Sigma_{yy}^{-1}(y - \mu_y), \qquad \Sigma_{f_*|X,y,x_*} = \Sigma_{f_* f_*} - \Sigma_{f_* y}\Sigma_{yy}^{-1}\Sigma_{y f_*}. $$

Since we know the form of the function f(x), we can simplify this expression by writing Σᵧᵧ = K_{X,X} + σₙ²I, where K_{X,X} is the covariance of f(X), which for linear regression takes the form K_{X,X} = E[θ₁²(X − μₓ)²], Σ_{f∗f∗} = K_{x∗,x∗} and Σ_{yf∗} = K_{X,x∗}.
Now we can write the moments of the predictive distribution as

$$ \mu_{f_*|X,y,x_*} = \mu_{f_*} + K_{x_*,X} K_{X,X}^{-1}(y - \mu_y), $$
$$ K_{f_*|X,y,x_*} = K_{x_*,x_*} - K_{x_*,X} K_{X,X}^{-1} K_{X,x_*}. $$

Discussion

Note that we have assumed that the functional form of the map f(x) is known and parameterized. Here we assumed that the map is linear in the parameters and affine in the features. Hence our approximation of the map is in the data space and, for prediction, we can subsequently forget about the map and work with its moments. The moments of the prior on the weights also no longer need to be specified. If we do not know the form of the map but want to specify structure on the covariance of the map (i.e., the kernel), then we are said to be approximating in the kernel space rather than in the data space. If the kernels are given by continuous functions of X, then such an approximation corresponds to learning a posterior distribution over an infinite-dimensional function space rather than a finite-dimensional vector space. Put differently, we perform non-parametric regression rather than parametric regression. This is the remaining topic of this chapter and is precisely how Gaussian process regression models data.

3 Gaussian Process Regression

Whereas statistical inference involves learning a latent function Y = f(X) from the training data, (X, Y) := {(xᵢ, yᵢ) | i = 1, …, n}, the idea of GPs is to, without parameterizing³ f(X), place a probabilistic prior directly on the space of functions (MacKay 1998). Restated, the GP is hence a Bayesian non-parametric model that generalizes the Gaussian distribution from finite-dimensional vector spaces to infinite-dimensional function spaces. GPs do not provide some parameterized map, Ŷ = f_θ(X), but rather the posterior distribution of the latent function given the training data.
The basic theory of prediction with GPs dates back at least as far as the time series work of Kolmogorov and Wiener in the 1940s (see Whittle and Sargent (1983)). GPs are an example of a more general class of supervised machine learning techniques referred to as "kernel learning," which model the covariance matrix from a set of parameterized kernels over the input. GPs extend, and put in a Bayesian framework, spline and kernel interpolators, and Tikhonov regularization (see Rasmussen and Williams (2006) and Alvarez et al. (2012)). On the other hand, Neal (1996) observed that certain neural networks with one hidden layer converge to a Gaussian process in the limit of an infinite number of hidden units. We refer the reader to Rasmussen and Williams (2006) for an excellent introduction to GPs. In addition to a number of favorable statistical and mathematical properties, such as universality (Micchelli et al. 2006), the implementation support infrastructure is mature—provided by GPyTorch, scikit-learn, Edward, STAN, and other open-source machine learning packages. In this section we restrict ourselves to the simpler case of single-output GPs, where f is real-valued. Multi-output GPs are considered in the next section.

³ This is in contrast to non-linear regressions commonly used in finance, which attempt to parameterize a non-linear function with a set of weights.

3.1 Gaussian Processes in Finance

The adoption of GPs in financial derivative modeling is more recent, sometimes under the name of "kriging" (see, e.g., Cousin et al. (2016) or Ludkovski (2018)). Examples of applying GPs to financial time series prediction are presented in Roberts et al. (2013). These authors helpfully note that AR(p) processes are discrete-time equivalents of GP models with a certain class of covariance functions, known as Matérn covariance functions.
Hence, GPs can be viewed as a Bayesian non-parametric generalization of well-known econometrics techniques. da Barrosa et al. (2016) present a GP method for optimizing financial asset portfolios. Other examples of GPs include metamodeling for expected shortfall computations (Liu and Staum 2010), where GPs are used to infer portfolio values in a scenario based on inner-level simulation of nearby scenarios, and Crépey and Dixon (2020), where multiple GPs infer derivative prices in a portfolio for market and credit risk modeling. The approach of Liu and Staum (2010) significantly reduces the required computational effort by avoiding inner-level simulation in every scenario and naturally accounts for the variance that arises from inner-level simulation. The caveat is that the portfolio remains fixed. The approach of Crépey and Dixon (2020), on the other hand, allows the composition of the portfolio to be changed, which is especially useful for portfolio sensitivity analysis, risk attribution, and stress testing.

Derivative Pricing, Greeking, and Hedging

In the general context of derivative pricing, Spiegeleer et al. (2018) noted that many of the calculations required for pricing a wide array of complex instruments are often similar. The market conditions affecting OTC derivatives may often only slightly vary between observations in a few variables, such as interest rates. Accordingly, for fast derivative pricing, greeking, and hedging, Spiegeleer et al. (2018) propose learning the pricing function offline through Gaussian process regression. Specifically, the authors configure the training set over a grid and then use the GP to interpolate at the test points. We emphasize that such GP estimates depend on option pricing models, rather than just market data—somewhat counter to the motivation for adopting machine learning, but also the case in other computational finance applications such as Hernandez (2017), Weinan et al. (2017), or Bühler et al. (2018).
Spiegeleer et al. (2018) demonstrate the speed-up of GPs relative to Monte Carlo methods, with tolerable accuracy loss, applied to pricing and Greek estimation under a Heston model, in addition to approximating the implied volatility surface. The increased expressibility of GPs compared to cubic spline interpolation, a popular numerical approximation technique useful for fast point estimation, is also demonstrated. However, the applications shown in Spiegeleer et al. (2018) are limited to single-instrument pricing and do not consider risk modeling aspects. In particular, their study is limited to single-output GPs, without consideration of multi-output GPs (respectively referred to as single- vs. multi-GPs for brevity hereafter). By contrast, multi-GPs directly model the uncertainty in the prediction of a vector of derivative prices (responses) with spatial covariance matrices specified by kernel functions. Thus the amount of error in a portfolio value prediction, at any point in space and time, can only be adequately modeled using multi-GPs (which, however, do not provide any methodological improvement in estimation of the mean with respect to single-GPs). See Crépey and Dixon (2020) for further details of how multi-GPs can be applied to estimate market and credit risk. The need for uncertainty quantification in the prediction is certainly a practical motivation for using GPs, as opposed to frequentist machine learning techniques such as neural networks, which only provide point estimates. A high uncertainty in a prediction might result in a GP model estimate being rejected in favor of either retraining the model or even using full derivative model repricing. Another motivation for using GPs, as we will see, is the availability of a scalable training method for the model hyperparameters.
3.2 Gaussian Process Regression and Prediction

We say that a random function f: ℝᵖ → ℝ is drawn from a GP with a mean function μ and a covariance function, called a kernel, k, i.e. f ∼ GP(μ, k), if for any input points x₁, x₂, …, xₙ in ℝᵖ the corresponding vector of function values is Gaussian:

$$ [f(x_1), f(x_2), \dots, f(x_n)] \sim N(\boldsymbol{\mu}, K_{X,X}), $$

for some mean vector μ, such that μᵢ = μ(xᵢ), and covariance matrix K_{X,X} that satisfies (K_{X,X})ᵢⱼ = k(xᵢ, xⱼ). We follow the convention⁴ in the literature of assuming μ = 0. Kernels k can be any symmetric positive semidefinite function, which is the infinite-dimensional analogue of the notion of a symmetric positive semidefinite (i.e., covariance) matrix, i.e. such that

$$ \sum_{i,j=1}^n k(x_i, x_j)\,\xi_i \xi_j \ge 0, \quad \text{for any points } x_k \in \mathbb{R}^p \text{ and reals } \xi_k. $$

Radial basis functions (RBF) are kernels that only depend on ||x − x′||, such as the squared exponential (SE) kernel

$$ k(x, x') = \exp\left\{-\frac{1}{2\ell^2}||x - x'||^2\right\}, \quad (3.41) $$

where the length-scale parameter ℓ can be interpreted as "how far you need to move in input space for the function values to become uncorrelated," or the Matérn (MA) kernel

$$ k(x, x') = \sigma^2 \frac{2^{1-\nu}}{\Gamma(\nu)}\left(\sqrt{2\nu}\,\frac{||x - x'||}{\ell}\right)^{\nu} K_\nu\left(\sqrt{2\nu}\,\frac{||x - x'||}{\ell}\right) \quad (3.42) $$

(which converges to (3.41) in the limit where ν goes to infinity), where Γ is the gamma function, K_ν is the modified Bessel function of the second kind, and ℓ and ν are non-negative parameters. GPs can be seen as distributions over the reproducing kernel Hilbert space (RKHS) of functions which is uniquely defined by the kernel function k (Scholkopf and Smola 2001). GPs with RBF kernels are known to be universal approximators with prior support to within an arbitrarily small epsilon band of any continuous function (Micchelli et al. 2006).

⁴ This choice is not a real limitation in practice (since it is for the prior) and does not prevent the mean of the predictor from being nonzero.
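The SE kernel in (3.41) is simple to implement. The following numpy sketch (illustrative; the helper name `se_kernel` is our own) builds the Gram matrix K_{X,X} and checks that it is symmetric positive semidefinite, as the quadratic-form condition above requires:

```python
import numpy as np

def se_kernel(X, Xp, length_scale=1.0):
    """Squared exponential kernel matrix: k(x, x') = exp(-||x - x'||^2 / (2 l^2))."""
    # Pairwise squared distances between rows of X and rows of Xp
    d2 = ((X[:, None, :] - Xp[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))  # 50 points in R^3

K = se_kernel(X, X)
print(np.allclose(K, K.T))                 # symmetric
print(np.linalg.eigvalsh(K).min() > -1e-8)  # positive semidefinite up to round-off
```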
Assuming additive Gaussian noise, y | x ∼ N(f(x), σₙ²), and a GP prior on f(x), given training inputs x ∈ X and training targets y ∈ Y, the predictive distribution of the GP evaluated at an arbitrary test point x∗ ∈ X∗ is:

$$ f_* \mid X, Y, x_* \sim N(\mathbb{E}[f_* \mid X, Y, x_*], \mathrm{var}[f_* \mid X, Y, x_*]), $$

where the moments of the posterior over X∗ are

$$ \mathbb{E}[f_* \mid X, Y, X_*] = \mu_{X_*} + K_{X_*,X}[K_{X,X} + \sigma_n^2 I]^{-1} Y, $$
$$ \mathrm{var}[f_* \mid X, Y, X_*] = K_{X_*,X_*} - K_{X_*,X}[K_{X,X} + \sigma_n^2 I]^{-1} K_{X,X_*}. $$

Here, K_{X∗,X}, K_{X,X∗}, K_{X,X}, and K_{X∗,X∗} are matrices that consist of the kernel, k: ℝᵖ × ℝᵖ → ℝ, evaluated at the corresponding points, X and X∗, and μ_{X∗} is the mean function evaluated on the test inputs X∗. One key advantage of GPs over interpolation methods is their expressibility. In particular, one can combine the kernels, using convolution, to generalize the base kernels (cf. "multi-kernel" GPs (Melkumyan and Ramos 2011)).

3.3 Hyperparameter Tuning

GPs are fit to the data by optimizing the evidence—the marginal probability of the data given the model—with respect to the learned kernel hyperparameters. The evidence has the form (see, e.g., Murphy (2012, Section 15.2.4, p. 523)):

$$ \log p(Y \mid X, \lambda) = -\frac{1}{2}\left[Y^T(K_{X,X} + \sigma_n^2 I)^{-1} Y + \log\det(K_{X,X} + \sigma_n^2 I)\right] - \frac{n}{2}\log 2\pi, \quad (3.45) $$

where K_{X,X} implicitly depends on the kernel hyperparameters λ (e.g., [ℓ, σ], assuming an SE kernel as per (3.41), or an MA kernel for some exogenously fixed value of ν in (3.42)). The first and second terms in the [···] in (3.45) can be interpreted as a model fit term and a complexity penalty term (see Rasmussen and Williams (2006, Section 5.4.1)). Maximizing the evidence with respect to the kernel hyperparameters, i.e. computing λ∗ = argmax_λ log p(y | x, λ), results in an automatic Occam's razor (see Alvarez et al. (2012, Section 2.3) and Rasmussen and Ghahramani (2001)), through which we effectively learn the structure of the space of functional relationships between the inputs and the targets.
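The predictive moment formulas translate directly into code. A minimal numpy sketch (illustrative; a Cholesky solve replaces the explicit matrix inverse for numerical stability, and the 1-D dataset is a made-up example):

```python
import numpy as np

def se_kernel(A, B, ell=0.5):
    """SE kernel matrix between 1-D point sets A and B."""
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ell**2)

rng = np.random.default_rng(0)
sigma_n = 0.05

# Noisy observations of a smooth latent function
X = np.linspace(-3, 3, 40)
Y = np.sin(X) + sigma_n * rng.standard_normal(X.size)
X_star = np.array([0.0, 1.5])

# Posterior mean and variance at the test points
K = se_kernel(X, X) + sigma_n**2 * np.eye(X.size)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y))   # (K + s^2 I)^-1 Y
K_star = se_kernel(X_star, X)
mean = K_star @ alpha
v = np.linalg.solve(L, K_star.T)
var = np.diag(se_kernel(X_star, X_star)) - (v**2).sum(axis=0)

print(mean)  # close to sin([0.0, 1.5])
print(var)   # small and non-negative
```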
In practice, the negative evidence is minimized by stochastic gradient descent (SGD). The gradient of the evidence is given analytically by

$$\partial_\lambda \log p(Y \mid X, \lambda) = \frac{1}{2}\,\mathrm{tr}\!\left(\left(\alpha\alpha^\top - (K + \sigma_n^2 I)^{-1}\right)\partial_\lambda (K + \sigma_n^2 I)\right), \qquad (3.46)$$

where $\alpha := (K + \sigma_n^2 I)^{-1} Y$ and, for the SE kernel (3.41),

$$\partial_\ell (K + \sigma_n^2 I) = \partial_\ell K, \qquad (3.47)$$
$$\partial_{\sigma_n} (K + \sigma_n^2 I) = 2\sigma_n I, \qquad (3.48)$$

with $\partial_\ell k(x, x') = \ell^{-3}\|x - x'\|^2\, k(x, x')$.

Multiple Choice Question 3

Which of the following statements are true:

1. Gaussian Processes are a Bayesian modeling approach which assumes that the data is Gaussian distributed.
2. Gaussian Processes place a probabilistic prior directly on the space of functions.
3. Gaussian Processes model the posterior of the predictor using a parameterized kernel representation of the covariance matrix.
4. Gaussian Processes can be fitted to data by maximizing the evidence for the kernel parameters.
5. During evidence maximization, different kernels are evaluated, and the optimal kernel is chosen.

3.4 Computational Properties

If uniform grids are used (as opposed to a mesh-free GP as described in Sect. 5.2), we have $n = \prod_{k=1}^{p} n_k$, where $n_k$ is the number of grid points per variable. However, although each kernel matrix $K_{X,X}$ is $n \times n$, we only store the $n$-vector $\alpha$ in (3.46), which brings reduced memory requirements.

Training time, required for maximizing (3.45) numerically, scales poorly with the number of observations $n$. This complexity stems from the need to solve linear systems and compute log determinants involving an $n \times n$ symmetric positive definite covariance matrix $K$. This task is commonly performed by computing the Cholesky decomposition of $K$, incurring $O(n^3)$ complexity. Prediction, however, is faster and can be performed in $O(n^2)$ with a matrix–vector multiplication for each test point; hence a primary motivation for using GPs is real-time risk estimation performance.
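The gradient expressions above can be verified against finite differences; a sketch (data and hyperparameter values are illustrative) for the length-scale gradient of the SE kernel:

```python
import numpy as np

def se_kernel(X, ell):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def log_evidence(X, Y, ell, sigma_n):
    n = len(Y)
    A = se_kernel(X, ell) + sigma_n**2 * np.eye(n)
    _, logdet = np.linalg.slogdet(A)
    return -0.5 * (Y @ np.linalg.solve(A, Y) + logdet) - 0.5 * n * np.log(2 * np.pi)

def grad_ell(X, Y, ell, sigma_n):
    n = len(Y)
    K = se_kernel(X, ell)
    A_inv = np.linalg.inv(K + sigma_n**2 * np.eye(n))
    alpha = A_inv @ Y
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    dK = d2 / ell**3 * K            # d k / d ell = ell^{-3} ||x - x'||^2 k
    return 0.5 * np.trace((np.outer(alpha, alpha) - A_inv) @ dK)

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(15, 1))
Y = np.cos(3 * X[:, 0])
g = grad_ell(X, Y, ell=0.5, sigma_n=0.1)
eps = 1e-6
g_fd = (log_evidence(X, Y, 0.5 + eps, 0.1)
        - log_evidence(X, Y, 0.5 - eps, 0.1)) / (2 * eps)
```

An SGD loop would repeatedly evaluate such gradients (typically of the negative evidence, on the log of the hyperparameters to keep them positive).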
Online Learning If the option pricing model is recalibrated intra-day, then the corresponding GP model should be retrained. Online learning techniques permit performing this incrementally (Pillonetto et al. 2010). To enable online learning, the training data should be augmented with the constant model parameters. Each time the parameters are updated, a new observation $(x', y')$ is generated from the option model prices under the new parameterization. The posterior at test point $x_*$ is then updated with the new training point following

$$p(f_* \mid X, Y, x', y', x_*) = \frac{p(x', y' \mid f_*)\, p(f_* \mid X, Y, x_*)}{\int p(x', y' \mid z)\, p(z \mid X, Y, x_*)\, dz},$$

where the previous posterior $p(f_* \mid X, Y, x_*)$ becomes the prior in the update. Hence the GP learns over time as model parameters (which are an input to the GP) are updated through pricing model recalibration.

4 Massively Scalable Gaussian Processes

Massively scalable Gaussian processes (MSGP) are a significant extension of the basic kernel interpolation framework described above. The core idea of the framework, which is detailed in (Gardner et al. 2018), is to improve scalability by combining GPs with "inducing point methods." The basic setup is as follows: using structured kernel interpolation (SKI), a small set of $m$ inducing points is carefully selected from the original training points. The covariance matrix has a Kronecker and Toeplitz structure, which is exploited by the Fast Fourier Transform (FFT). Finally, output over the original input points is interpolated from the output at the inducing points. The interpolation complexity scales linearly with the dimensionality $p$ of the input data by expressing the kernel interpolation as a product of 1D kernels. Overall, SKI gives $O(pn + pm\log m)$ training complexity and $O(1)$ prediction time per test point.
4.1 Structured Kernel Interpolation (SKI)

Given a set of $m$ inducing points, the $n \times m$ cross-covariance matrix, $K_{X,U}$, between the training inputs, $X$, and the inducing points, $U$, can be approximated as $\tilde{K}_{X,U} = W_X K_{U,U}$ using a (potentially sparse) $n \times m$ matrix of interpolation weights, $W_X$. This allows us to approximate $K_{X,Z}$ for an arbitrary set of inputs $Z$ as $K_{X,Z} \approx \tilde{K}_{X,U} W_Z^\top$. For any given kernel function, $k$, and a set of inducing points, $U$, the structured kernel interpolation (SKI) procedure (Gardner et al. 2018) gives rise to the following approximate kernel:

$$K_{SKI}(x, z) = W_X K_{U,U} W_Z^\top,$$

which allows us to approximate $K_{X,X} \approx W_X K_{U,U} W_X^\top$. Gardner et al. (2018) note that standard inducing point approaches, such as subset of regression (SoR) or fully independent training conditional (FITC), can be reinterpreted from the SKI perspective. Importantly, the efficiency of SKI-based MSGP methods comes, first, from a clever choice of a set of inducing points which exploits the algebraic structure of $K_{U,U}$, and second, from using very sparse local interpolation matrices. In practice, local cubic interpolation is used.

4.2 Kernel Approximations

If the inducing points, $U$, form a regularly spaced $P$-dimensional grid, and we use a stationary product kernel (e.g., the RBF kernel), then $K_{U,U}$ decomposes as a Kronecker product of Toeplitz matrices:

$$K_{U,U} = T_1 \otimes T_2 \otimes \cdots \otimes T_P.$$

The Kronecker structure allows one to compute the eigendecomposition of $K_{U,U}$ by separately decomposing $T_1, \ldots, T_P$, each of which is much smaller than $K_{U,U}$. Further, a Toeplitz matrix can be approximated by a circulant matrix (Gardner et al. (2018) explored five different approximation methods known in the numerical analysis literature), which eigendecomposes by simply applying a discrete Fourier transform (DFT) to its first column. Therefore, an approximate eigendecomposition of each $T_1, \ldots, T_P$ is computed via the FFT in only $O(m \log m)$ time.
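The Toeplitz structure on a regular grid, and the FFT route to circulant eigenvalues, can be illustrated in a few lines (the grid size and length-scale are arbitrary choices, not from the text):

```python
import numpy as np

# Inducing points on a regular 1D grid: a stationary kernel gives a Toeplitz K_UU
m = 64
u = np.linspace(0.0, 1.0, m)
K = np.exp(-0.5 * (u[:, None] - u[None, :])**2 / 0.1**2)
toeplitz_ok = np.allclose(K[0, :-1], K[1, 1:])   # entries depend only on |i - j|

# Symmetric circulant embedding of the first column; a circulant matrix
# eigendecomposes via the DFT of its first column.
c = np.concatenate([K[:, 0], K[1:-1, 0][::-1]])
eig_fft = np.fft.fft(c).real                      # eigenvalues via FFT, O(m log m)

# Cross-check against a dense eigendecomposition of the explicit circulant
C = np.array([np.roll(c, i) for i in range(len(c))]).T
eig_dense = np.linalg.eigvalsh((C + C.T) / 2)
```

The circulant used here is one simple embedding; Gardner et al. (2018) compare several such approximation schemes.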
Structure Exploiting Inference To perform inference, we need to solve $(K_{SKI} + \sigma_n^2 I)^{-1} y$; kernel learning requires evaluating $\log\det(K_{SKI} + \sigma_n^2 I)$. The first task can be accomplished by using an iterative scheme (linear conjugate gradients) which depends only on matrix–vector multiplications with $(K_{SKI} + \sigma_n^2 I)$. The second is performed by exploiting the Kronecker and Toeplitz structure of $K_{U,U}$ for computing an approximate eigendecomposition, as described above.

In this chapter, we primarily use the basic interpolation approach for simplicity. However, for completeness, Sect. 5.3 shows the scaling of the time taken to train and predict with MSGPs.

5 Example: Pricing and Greeking with Single-GPs

In the following example, the portfolio holds a long position in both a European call and a put option struck on the same underlying, with $K = 100$. We assume that the underlying follows Heston dynamics:

$$\frac{dS_t}{S_t} = \mu\, dt + \sqrt{V_t}\, dW_t^1, \qquad (3.53)$$
$$dV_t = \kappa(\theta - V_t)\, dt + \sigma \sqrt{V_t}\, dW_t^2, \qquad (3.54)$$
$$d\langle W^1, W^2 \rangle_t = \rho\, dt, \qquad (3.55)$$

where the notation and fixed parameter values used for the experiments are given in Table 3.1 under $\mu = r_0$. We use a Fourier Cosine method (Fang and Oosterlee 2008) to generate the European Heston option price training and testing data for the GP. We also use this method to compare the GP Greeks, obtained by differentiating the kernel function. Table 3.1 lists the values of the parameters for the Heston dynamics and the terms of the European call and put option contracts used in our numerical experiments. Table 3.2 shows the values for the Euler time stepper used for simulating the Heston dynamics and the credit risk model. For each pricing time $t_i$, we simultaneously fit a multi-GP to both gridded call and put prices over stock price $S$ and volatility $\sqrt{V}$, keeping time to maturity fixed. Figure 3.3 shows the gridded call (top) and put (bottom) price surfaces at various times to maturity, together with the GP estimate.
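The linear-conjugate-gradients idea can be sketched in a few lines: the solver below touches the matrix only through a matrix-vector product, which is what makes structured (e.g., SKI) covariances fast. The dense SE kernel matrix here is only a stand-in for illustration:

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive definite A, accessing A
    only through the matrix-vector product v -> A v."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        step = rs / (p @ Ap)
        x += step * p
        r -= step * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, size=(40, 1))
K = np.exp(-0.5 * (X - X.T)**2 / 0.2**2)      # SE kernel matrix (dense stand-in)
A = K + 0.1**2 * np.eye(40)                   # K + sigma_n^2 I
y = rng.normal(size=40)
x_cg = conjugate_gradient(lambda v: A @ v, y)
```

With a structured covariance, the `matvec` callback would exploit Kronecker and Toeplitz structure instead of forming `A` densely.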
Within each column in the figure, the same GP model has been simultaneously fitted to both the call and put price surfaces over a $30 \times 30$ grid $\Omega_h \subset \Omega := [0, 1] \times [0, 1]$ of prices and volatilities, fixing the time to maturity. (The scaling to the unit domain is not essential; however, we observed superior numerical stability when scaling. Note that the plots use the original coordinates and not the re-scaled coordinates.)

Table 3.1 This table shows the values of the parameters for the Heston dynamics and the terms of the European call and put option contracts

Parameter description      Symbol   Value
Mean reversion rate        κ        0.1
Mean reversion level       θ        0.15
Vol. of vol.               σ        0.1
Risk-free rate             r0       0.002
Strike                     K        100
Maturity                   T        2.0
Correlation                ρ        −0.9

Table 3.2 This table shows the values for the Euler time stepper used for market risk factor simulation

Parameter description      Symbol   Value
Number of simulations      M        1000
Number of time steps       ns       100
Initial stock price        S0       100
Initial variance           V0       0.1

[Fig. 3.3 panels: (a) Call: T − t = 1.0, (b) Call: T − t = 0.5, (c) Call: T − t = 0.1; (a) Put: T − t = 1.0, (b) Put: T − t = 0.5, (c) Put: T − t = 0.1]

Fig. 3.3 This figure compares the gridded Heston model call (top) and put (bottom) price surfaces at various times to maturity with the GP estimate. The GP estimate is observed to be practically identical (slightly below in the first five panels and slightly above in the last one). Within each column in the figure, the same GP model has been simultaneously fitted to both the Heston model call and put price surfaces over a 30 × 30 grid of prices and volatilities, fixing the time to maturity. Across each column, corresponding to different times to maturity, a different GP model has been fitted. The GP is then evaluated out-of-sample over a 40 × 40 grid, so that many of the test samples are new to the model. This is repeated over various times to maturity.
Across each column, corresponding to different times to maturity, a different GP model has been fitted. The GP is then evaluated out-of-sample over a $40 \times 40$ grid $\Omega_h \subset \Omega$, so that many of the test samples are new to the model. This is repeated over various times to maturity.

Extrapolation One instance where kernel combination is useful in derivative modeling is extrapolation: an appropriate mixture or combination of kernels can be chosen so that the GP is able to predict outside the domain of the training set. Noting that the payoff is linear when a call or put option is respectively deeply in or out of the money, we can configure a GP as a combination of a linear kernel and, say, an SE kernel. The linear kernel is included to ensure that prediction outside the domain preserves the linear property, whereas the SE kernel captures the non-linearity. Figure 3.4 shows the results of using this combination of kernels to extrapolate the prices of a call struck at 110 and a put struck at 90. The linear property of the payoff function is preserved by the GP prediction, and the uncertainty increases as the test point moves further from the training set.

Fig. 3.4 This figure assesses the GP option price prediction in the setup of a Black–Scholes model. The GP with a mixture of a linear and SE kernel is trained on $n = 50$ $(X, Y)$ pairs, where $X \in \Omega_h \subset (0, 300]$ is the gridded underlying of the option prices and $Y$ is a vector of call or put prices. These training points are shown by the black "+" symbols. The exact result using the Black–Scholes pricing formula is given by the black line. The predicted mean (blue solid line) and variance of the posterior are estimated from Eq. 3.44 over $m = 100$ gridded test points, $X_* \in \Omega_{h^*} \subset [300, 400]$, for the (left) call option struck at 110 and (center) put option struck at 90. The shaded envelope represents the 95% confidence interval about the mean of the posterior.
This confidence interval is observed to increase the further the test point is from the training set. The time to maturity of the options is fixed to two years. (a) Call price. (b) Put price.

Such maturities might correspond to exposure evaluation times in CVA simulation, as in Crépey and Dixon (2020). The option model and GP model are observed to produce very similar values.

5.1 Greeking

The GP provides analytic derivatives with respect to the input variables:

$$\partial_{X_*} \mathbb{E}[f_*|X, Y, X_*] = \partial_{X_*}\mu_{X_*} + (\partial_{X_*} K_{X_*,X})\,\alpha, \qquad (3.56)$$

where, for the SE kernel, $\partial_{X_*} K_{X_*,X} = \frac{1}{\ell^2}(X - X_*)\, K_{X_*,X}$ (elementwise), and we recall from (3.46) that $\alpha = [K_{X,X} + \sigma_n^2 I]^{-1} y$ (and in the numerical experiments we set $\mu = 0$). Second-order sensitivities are obtained by differentiating once more with respect to $X_*$. Note that $\alpha$ is already calculated at training time (for pricing) by Cholesky matrix factorization of $[K_{X,X} + \sigma_n^2 I]$ with $O(n^3)$ complexity, so there is no significant computational overhead from Greeking.

Once the GP has learned the derivative prices, Eq. 3.56 is used to evaluate the first-order MtM Greeks with respect to the input variables over the test set. Example source code illustrating the implementation of this calculation is presented in the notebook Example-2-GP-BS-Derivatives.ipynb. Figure 3.5 shows (left) the GP estimate of a call option's delta $\Delta := \partial C/\partial S$ and (right) vega $\nu := \partial C/\partial \sigma$, having trained on the underlying, respectively the implied volatility, and on the BS option model prices. For the avoidance of doubt, the model is not trained on the BS Greeks. For comparison, the BS delta and vega are also shown in the figure. In each case, the two graphs are practically indistinguishable, with one graph superimposed over the other.

5.2 Mesh-Free GPs

The above numerical examples have trained and tested GPs on uniform grids. This approach suffers from the curse of dimensionality, as the number of training points grows exponentially with the dimensionality of the data.
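A sketch of this analytic differentiation for a 1D SE-kernel GP (the toy payoff targets and hyperparameters are illustrative, not the book's notebook code), verified against a finite difference of the posterior mean:

```python
import numpy as np

def se_kernel(a, b, ell):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

S = np.linspace(80.0, 120.0, 40)              # training inputs (spot prices)
V = np.maximum(S - 100.0, 0.0)                # toy payoff-like targets
ell, sigma_n = 5.0, 1e-3

A = se_kernel(S, S, ell) + sigma_n**2 * np.eye(len(S))
alpha = np.linalg.solve(A, V)                 # alpha = [K + sigma_n^2 I]^{-1} y

def gp_mean(s):
    return se_kernel(np.atleast_1d(s), S, ell) @ alpha

def gp_delta(s):
    s = np.atleast_1d(s)
    Ksx = se_kernel(s, S, ell)
    dK = (S[None, :] - s[:, None]) / ell**2 * Ksx   # d/ds* k(s*, s) for SE kernel
    return dK @ alpha

s0 = 105.0
eps = 1e-5
delta_fd = (gp_mean(s0 + eps) - gp_mean(s0 - eps)) / (2 * eps)
```

Because `alpha` is reused from pricing, evaluating `gp_delta` costs only one extra kernel evaluation per test point, which is the "no significant overhead" point made above.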
This is why, in order to estimate the MtM cube, we advocate divide-and-conquer, i.e. the use of numerous low-input-dimension ($p$) GPs run in parallel on specific asset classes. However, the use of fixed grids is by no means necessary. We show here how GPs can exhibit favorable approximation properties with relatively few simulated reference points (cf. also (Gramacy and Apley 2015)). Figure 3.6 shows the predicted Heston call prices using (left) 50 and (right) 100 simulated training points, indicated by "+"s, drawn from a uniform random distribution. The Heston call option is struck at $K = 100$ with a maturity of $T = 2$ years. Figure 3.7 (left) shows the convergence of the GP MSE of the prediction, based on the number of Heston simulated training points. Fixing the number of simulated points to 100, but increasing the input space dimensionality, $p$, of each observation point (to include varying Heston parameters), Fig. 3.7 (right) shows the wall-clock time for training a GP with SKI (see Sect. 3.4). Note that the number of SGD iterations has been fixed to 1000.

Fig. 3.5 This figure shows (left) the GP estimate of the call option's delta $\Delta := \partial C/\partial S$ and (right) vega $\nu := \partial C/\partial \sigma$, having trained on the underlying, respectively the implied volatility, and on the BS option model prices.

Fig. 3.6 Predicted Heston call prices using (left) 50 and (right) 100 simulated training points, indicated by "+"s, drawn from a uniform random distribution.

Fig. 3.7 (Left) The convergence of the GP MSE of the prediction is shown based on the number of simulated Heston training points. (Right) Fixing the number of simulated points to 100, but increasing the dimensionality $p$ of each observation point (to include varying Heston parameters), the figure shows the wall-clock time for training a GP with SKI.
Fig. 3.8 (Left) The elapsed wall-clock time is shown for training against the number of training points generated by a Black–Scholes model. (Right) The elapsed wall-clock time for prediction of a single point is shown against the number of testing points. The reason that the prediction time increases (whereas the theory reviewed in Sect. 3.4 says it should be constant) is memory latency in our implementation: each point prediction involves loading a new test point into memory.

5.3 Massively Scalable GPs

Figure 3.8 shows the increase of MSGP training time and prediction time against the number of training points $n$ from a Black–Scholes model. Fixing the number of inducing points to $m = 30$ (see Sect. 3.4), we increase the number of observations, $n$, in the $p = 1$ dimensional training set. Setting the number of SGD iterations to 1000, we observe an approximately 1.4x increase in training time for a 10x increase in the training sample, and an approximately 2x increase in prediction time for a 10x increase in the training sample. The reason that the prediction time does not scale independently of $n$ is memory latency in our implementation: each point prediction involves loading a new test point into memory. Fast caching approaches can be used to reduce this memory latency, but are beyond the scope of this section. Note that training and testing times could be improved with CUDA on a GPU, but are not evaluated here.

6 Multi-response Gaussian Processes

A multi-output Gaussian process is a collection of random vectors, any finite number of which have a matrix-variate Gaussian distribution. We borrow from Chen et al. (2017) the following formulation of a separable multi-output kernel specification as per (Alvarez et al. 2012, Eq.
(9)):

Definition (MGP) $\mathbf{f}$ is a $d$-variate Gaussian process on $\mathbb{R}^p$ with vector-valued mean function $\boldsymbol{\mu} : \mathbb{R}^p \to \mathbb{R}^d$, kernel $k : \mathbb{R}^p \times \mathbb{R}^p \to \mathbb{R}$, and positive semi-definite parameter covariance matrix $\Omega \in \mathbb{R}^{d \times d}$, if the vectorization of any finite collection of vectors $\mathbf{f}(x_1), \ldots, \mathbf{f}(x_n)$ has a joint multivariate Gaussian distribution,

$$\mathrm{vec}([\mathbf{f}(x_1), \ldots, \mathbf{f}(x_n)]) \sim \mathcal{N}(\mathrm{vec}(M), \Sigma \otimes \Omega),$$

where $\mathbf{f}(x_i) \in \mathbb{R}^d$ is a column vector whose components are the functions $\{f_l(x_i)\}_{l=1}^{d}$, $M$ is a matrix in $\mathbb{R}^{d \times n}$ with $M_{li} = \mu_l(x_i)$, $\Sigma$ is a matrix in $\mathbb{R}^{n \times n}$ with $\Sigma_{ij} = k(x_i, x_j)$, and $\otimes$ is the Kronecker product

$$\Sigma \otimes \Omega = \begin{pmatrix} \Sigma_{11}\Omega & \cdots & \Sigma_{1n}\Omega \\ \vdots & \ddots & \vdots \\ \Sigma_{n1}\Omega & \cdots & \Sigma_{nn}\Omega \end{pmatrix}.$$

Sometimes $\Sigma$ is called the column covariance matrix while $\Omega$ is the row (or task) covariance matrix. We denote $\mathbf{f} \sim \mathcal{MGP}(\boldsymbol{\mu}, k, \Omega)$. As explained after Eq. (10) in (Alvarez et al. 2012), the matrices $\Sigma$ and $\Omega$ encode dependencies among the inputs, respectively the outputs.

6.1 Multi-Output Gaussian Process Regression and Prediction

Given $n$ pairs of observations $\{(x_i, y_i)\}_{i=1}^{n}$, $x_i \in \mathbb{R}^p$, $y_i \in \mathbb{R}^d$, we assume the model $y_i = \mathbf{f}(x_i)$, $i \in \{1, \ldots, n\}$, where $\mathbf{f} \sim \mathcal{MGP}(\boldsymbol{\mu}, k', \Omega)$ with $k'(x_i, x_j) = k(x_i, x_j) + \delta_{ij}\sigma_n^2$, in which $\sigma_n^2$ is the variance of the additive Gaussian noise. That is, the vectorization of the collection of functions $[\mathbf{f}(x_1), \ldots, \mathbf{f}(x_n)]$ follows a multivariate Gaussian distribution

$$\mathrm{vec}([\mathbf{f}(x_1), \ldots, \mathbf{f}(x_n)]) \sim \mathcal{N}(0, K' \otimes \Omega),$$

where $K'$ is the $n \times n$ covariance matrix of which the $(i, j)$-th element is $[K']_{ij} = k'(x_i, x_j)$. To predict a new variable $\mathbf{f}_* = [\mathbf{f}_{*1}, \ldots, \mathbf{f}_{*m}]$ at the test locations $X_* = [x_{n+1}, \ldots, x_{n+m}]$, the joint distribution of the training observations $Y = [y_1, \ldots
, $y_n$] and the predictive targets $\mathbf{f}_*$ is given by

$$\begin{pmatrix} Y \\ \mathbf{f}_* \end{pmatrix} \sim \mathcal{MN}\left(0,\; \begin{pmatrix} K'(X, X) & K'(X_*, X)^\top \\ K'(X_*, X) & K'(X_*, X_*) \end{pmatrix},\; \Omega \right),$$

where $K'(X, X)$ is an $n \times n$ matrix of which the $(i, j)$-th element is $[K'(X, X)]_{ij} = k'(x_i, x_j)$, $K'(X_*, X)$ is an $m \times n$ matrix of which the $(i, j)$-th element is $[K'(X_*, X)]_{ij} = k'(x_{n+i}, x_j)$, and $K'(X_*, X_*)$ is an $m \times m$ matrix with the $(i, j)$-th element $[K'(X_*, X_*)]_{ij} = k'(x_{n+i}, x_{n+j})$. Thus, taking advantage of the conditional distribution of the multivariate Gaussian, the predictive distribution is:

$$p(\mathrm{vec}(\mathbf{f}_*) \mid X, Y, X_*) = \mathcal{N}(\mathrm{vec}(\hat{M}), \hat{\Sigma} \otimes \hat{\Omega}),$$

where

$$\hat{M} = K'(X_*, X)\, K'(X, X)^{-1}\, Y,$$
$$\hat{\Sigma} = K'(X_*, X_*) - K'(X_*, X)\, K'(X, X)^{-1}\, K'(X_*, X)^\top, \qquad (3.60)$$
$$\hat{\Omega} = \Omega. \qquad (3.61)$$

The hyperparameters and the elements of the covariance matrix $\Omega$ are found by minimizing the negative log marginal likelihood of the observations:

$$\mathcal{L}(Y \mid X, \lambda, \Omega) = \frac{nd}{2}\ln(2\pi) + \frac{d}{2}\ln|K'| + \frac{n}{2}\ln|\Omega| + \frac{1}{2}\mathrm{tr}\left((K')^{-1}\, Y\, \Omega^{-1}\, Y^\top\right). \qquad (3.62)$$

Further details of the multi-GP are given in (Bonilla et al. 2007; Alvarez et al. 2012; Chen et al. 2017). The computational remarks made in Sect. 3.4 also apply here, with the additional comment that the training and prediction times also scale linearly (proportionally) with the number of output dimensions $d$. Note that the task covariance matrix is estimated via a $d$-vector factor $b$ by $\Omega = bb^\top + \sigma_\Omega^2 I$ (where the $\sigma_\Omega^2$ component corresponds to a standard white noise term). An alternative computational approach, which exploits the separability of the kernel, is the one described in Section 6.1 of (Alvarez et al. 2012), with complexity $O(d^3 + n^3)$.

7 Summary

In this chapter we have introduced Bayesian regression and shown how it extends many of the concepts in the previous chapter. We developed kernel-based machine learning methods, known as Gaussian processes, and demonstrated their application to surrogate models of derivative prices.
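A sketch (with arbitrary toy data and a hypothetical task covariance of the form $bb^\top + \sigma_\Omega^2 I$) showing that, for a separable kernel, the Kronecker-structured predictive mean factorizes so that $\Omega$ drops out of $\hat{M}$; noise is replaced here by a small jitter for simplicity:

```python
import numpy as np

def se_kernel(a, b, ell=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

rng = np.random.default_rng(4)
n, m, d = 25, 10, 2
X = np.sort(rng.uniform(0, 1, n))
Xs = np.linspace(0, 1, m)
b = np.array([1.0, 0.5])
Omega = np.outer(b, b) + 0.1 * np.eye(d)      # task covariance b b^T + sigma^2 I

Kxx = se_kernel(X, X) + 1e-6 * np.eye(n)      # jitter in lieu of noise
Ksx = se_kernel(Xs, X)

Y = np.column_stack([np.sin(2 * np.pi * X), np.cos(2 * np.pi * X)])  # n x d targets
M_hat = Ksx @ np.linalg.solve(Kxx, Y)         # m x d predictive mean
Sigma_hat = se_kernel(Xs, Xs) - Ksx @ np.linalg.solve(Kxx, Ksx.T)

# Vectorized (Kronecker) formulation gives the same mean: Omega cancels,
# since (Ksx x Omega)(Kxx x Omega)^{-1} = (Ksx Kxx^{-1}) x I
mean_vec = np.kron(Ksx, Omega) @ np.linalg.solve(np.kron(Kxx, Omega), Y.reshape(-1))
```

This cancellation is why only the input covariance enters the predictive mean, while $\Omega$ reappears in the predictive covariance via $\hat{\Sigma} \otimes \hat{\Omega}$.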
The key learning points of this chapter are:
– Introduced Bayesian linear regression;
– Derived the posterior distribution and the predictive distribution;
– Described the role of the prior as an equivalent form of regularization in maximum likelihood estimation; and
– Developed Gaussian Processes for kernel-based probabilistic modeling, with programming examples in derivative modeling.

8 Exercises

Exercise 3.1: Posterior Distribution of Bayesian Linear Regression

Consider the Bayesian linear regression model

$$y_i = \theta^\top x_i + \epsilon, \quad \epsilon \sim \mathcal{N}(0, \sigma_n^2), \quad \theta \sim \mathcal{N}(\mu, \Sigma).$$

Show that the posterior over the data $\mathcal{D}$ is given by the distribution $\theta \mid \mathcal{D} \sim \mathcal{N}(\mu^*, \Sigma^*)$, with moments:

$$\mu^* = a = \left(\Sigma^{-1} + \frac{1}{\sigma_n^2} X X^\top\right)^{-1}\left(\Sigma^{-1}\mu + \frac{1}{\sigma_n^2} X y\right),$$
$$\Sigma^* = A^{-1} = \left(\Sigma^{-1} + \frac{1}{\sigma_n^2} X X^\top\right)^{-1}.$$

Exercise 3.2: Normal Conjugate Distributions

Suppose that the prior is $p(\theta) = \phi(\theta; \mu_0, \sigma_0^2)$ and the likelihood is given by $p(x_{1:n} \mid \theta) = \prod_{i=1}^{n}\phi(x_i; \theta, \sigma^2)$, where $\sigma^2$ is assumed to be known. Show that the posterior is also normal, $p(\theta \mid x_{1:n}) = \phi(\theta; \mu_{post}, \sigma_{post}^2)$, where

$$\mu_{post} = \frac{\sigma_0^2}{\frac{\sigma^2}{n} + \sigma_0^2}\,\bar{x} + \frac{\frac{\sigma^2}{n}}{\frac{\sigma^2}{n} + \sigma_0^2}\,\mu_0, \qquad \sigma_{post}^2 = \frac{1}{\frac{1}{\sigma_0^2} + \frac{n}{\sigma^2}},$$

where $\bar{x} := \frac{1}{n}\sum_{i=1}^{n} x_i$.

Exercise 3.3: Prediction with GPs

Show that the predictive distribution for a Gaussian Process, with model output over a test point, $f_*$, and assumed Gaussian noise with variance $\sigma_n^2$, is given by

$$f_* \mid \mathcal{D}, x_* \sim \mathcal{N}(\mathbb{E}[f_*|\mathcal{D}, x_*], \mathrm{var}[f_*|\mathcal{D}, x_*]),$$

where the moments of the posterior over $X_*$ are

$$\mathbb{E}[f_*|\mathcal{D}, X_*] = \mu_{X_*} + K_{X_*,X}[K_{X,X} + \sigma_n^2 I]^{-1} Y,$$
$$\mathrm{var}[f_*|\mathcal{D}, X_*] = K_{X_*,X_*} - K_{X_*,X}[K_{X,X} + \sigma_n^2 I]^{-1} K_{X,X_*}.$$

8.1 Programming Related Questions*

Exercise 3.4: Derivative Modeling with GPs

Using the notebook Example-1-GP-BS-Pricing.ipynb, investigate the effectiveness of a Gaussian process with RBF kernels for learning the shape of a European derivative (call) pricing function $V_t = f_t(S_t)$, where $S_t$ is the underlying stock's spot price.
The risk-free rate is $r = 0.001$, the strike of the call is $K_C = 130$, the volatility of the underlying is $\sigma = 0.1$, and the time to maturity is $\tau = 1.0$. Your answer should plot the variance of the predictive distribution against the stock price, $S_t = s$, over a dataset consisting of $n \in \{10, 50, 100, 200\}$ gridded values of the stock price $s \in \Omega_h := \{i\Delta s \mid i \in \{0, \ldots, n-1\}\},\ \Delta s = 200/(n-1)$, so that $\Omega_h \subseteq [0, 200]$, and the corresponding gridded derivative prices $V(s)$. Each observation of the dataset, $(s_i, v_i = f_t(s_i))$, is a gridded (stock, call price) pair at time $t$.

Appendix

Answers to Multiple Choice Questions

Question 1 Answer: 1, 4, 5. Parametric Bayesian regression always treats the regression weights as random variables. In Bayesian regression the data function $f(x)$ is only observed if the data is assumed to be noise-free; otherwise, the function is not directly observed. The posterior distribution of the parameters will only be Gaussian if both the prior and the likelihood function are Gaussian; the distribution of the likelihood function depends on the assumed error distribution. The posterior distribution of the regression weights will typically contract with increasing data: the precision matrix grows with increasing data and hence the variance of the posterior shrinks. There are exceptions if, for example, there are outliers in the data. The mean of the posterior distribution depends on both the mean and covariance of the prior if it is Gaussian. We can see this from Eq. 3.19.

Question 2 Answer: 1, 2, 4. Prediction under a Bayesian linear model requires first estimating the moments of the posterior distribution of the parameters. This is because the prediction is the expected likelihood of the new data under the posterior distribution. The predictive distribution is Gaussian only if the posterior and likelihood distributions are Gaussian. The product of Gaussian density functions is also Gaussian.
The predictive distribution does not depend on the weights in the model; they are marginalized out under the expectation w.r.t. the posterior distribution. The variance of the predictive distribution typically contracts with increasing training data because the variances of the posterior and the likelihood typically decrease with increasing training data.

Question 3 Answer: 2, 3, 4. Gaussian Process regression is a Bayesian modeling approach, but it does not assume that the data is Gaussian distributed, nor does it make such an assumption about the error. Gaussian Processes place a probabilistic prior directly on the space of functions and model the posterior of the predictor using a parameterized kernel representation of the covariance matrix. Gaussian Processes are fitted to data by maximizing the evidence for the kernel parameters. However, it is not the case that the choice of kernel itself is effectively a hyperparameter that can be optimized: while this could be attempted in an ad hoc way, there are other considerations which dictate the choice of kernel, concerning smoothness and the ability to extrapolate.

Python Notebooks

A number of notebooks are provided in the accompanying source code repository, beyond the two described in this chapter. These notebooks demonstrate the use of multi-GPs and their application to CVA modeling (see Crépey and Dixon (2020) for details of these models). Further details of the notebooks are included in the README.md file.

References

Alvarez, M., Rosasco, L., & Lawrence, N. (2012). Kernels for vector-valued functions: A review. Foundations and Trends in Machine Learning, 4(3), 195–266. Bishop, C. M. (2006). Pattern recognition and machine learning (information science and statistics). Berlin, Heidelberg: Springer-Verlag. Bonilla, E. V., Chai, K. M. A., & Williams, C. K. I. (2007). Multi-task Gaussian process prediction. In Proceedings of the 20th International Conference on Neural Information Processing Systems, NIPS'07, USA (pp.
153–160). Curran Associates Inc. Chen, Z., Wang, B., & Gorban, A. N. (2017, March). Multivariate Gaussian and student−t process regression for multi-output prediction. ArXiv e-prints. Cousin, A., Maatouk, H., & Rullière, D. (2016). Kriging of financial term structures. European Journal of Operational Research, 255, 631–648. Crépey, S., & M. Dixon (2020). Gaussian process regression for derivative portfolio modeling and application to CVA computations. Computational Finance. da Barrosa, M. R., Salles, A. V., & de Oliveira Ribeiro, C. (2016). Portfolio optimization through kriging methods. Applied Economics, 48(50), 4894–4905. Fang, F., & Oosterlee, C. W. (2008). A novel pricing method for European options based on Fourier-cosine series expansions. SIAM J. SCI. COMPUT. Gardner, J., Pleiss, G., Wu, R., Weinberger, K., & Wilson, A. (2018). Product kernel interpolation for scalable Gaussian processes. In International Conference on Artificial Intelligence and Statistics (pp. 1407–1416). Gramacy, R., & D. Apley (2015). Local Gaussian process approximation for large computer experiments. Journal of Computational and Graphical Statistics, 24(2), 561–578. Hans Bühler, H., Gonon, L., Teichmann, J., & Wood, B. (2018). Deep hedging. Quantitative Finance. Forthcoming (preprint version available as arXiv:1802.03042). Hernandez, A. (2017). Model calibration with neural networks. Risk Magazine (June 1–5). Preprint version available at SSRN.2812140, code available at https://github.com/ Andres-Hernandez/ CalibrationNN. Liu, M., & Staum, J. (2010). Stochastic kriging for efficient nested simulation of expected shortfall. Journal of Risk, 12(3), 3–27. Ludkovski, M. (2018). Kriging metamodels and experimental design for Bermudan option pricing. Journal of Computational Finance, 22(1), 37–77. MacKay, D. J. (1998). Introduction to Gaussian processes. In C. M. Bishop (Ed.), Neural networks and machine learning. Springer-Verlag. Melkumyan, A., & Ramos, F. (2011). 
Multi-kernel Gaussian processes. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence - Volume Two, IJCAI’11 (pp. 1408–1413). AAAI Press. Micchelli, C. A., Xu, Y., & Zhang, H. (2006, December). Universal kernels. J. Mach. Learn. Res., 7, 2651–2667. Murphy, K. (2012). Machine learning: a probabilistic perspective. The MIT Press. Neal, R. M. (1996). Bayesian learning for neural networks, Volume 118 of Lecture Notes in Statistics. Springer. Pillonetto, G., Dinuzzo, F., & Nicolao, G. D. (2010, Feb). Bayesian online multitask learning of Gaussian processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(2), 193–205. Rasmussen, C. E., & Ghahramani, Z. (2001). Occam’s razor. In In Advances in Neural Information Processing Systems 13 (pp. 294–300). MIT Press. Rasmussen, C. E., & Williams, C. K. I. (2006). Gaussian processes for machine learning. MIT Press. Roberts, S., Osborne, M., Ebden, M., Reece, S., Gibson, N., & Aigrain, S. (2013). Gaussian processes for time-series modelling. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 371(1984). Scholkopf, B., & Smola, A. J. (2001). Learning with kernels: support vector machines, regularization, optimization, and beyond. Cambridge, MA, USA: MIT Press. Spiegeleer, J. D., Madan, D. B., Reyners, S., & Schoutens, W. (2018). Machine learning for quantitative finance: fast derivative pricing, hedging and fitting. Quantitative Finance, 0(0), 1–9. Weinan, E, Han, J., & Jentzen, A. (2017). Deep learning-based numerical methods for highdimensional parabolic partial differential equations and backward stochastic differential equations. arXiv:1706.04702. Whittle, P., & Sargent, T. J. (1983). Prediction and regulation by linear least-square methods (NED - New edition ed.). 
University of Minnesota Press.

Chapter 4 Feedforward Neural Networks

This chapter provides a more in-depth description of supervised learning, deep learning, and neural networks, presenting the foundational mathematical and statistical learning concepts and explaining how they relate to real-world examples in trading, risk management, and investment management. These applications present challenges for forecasting and model design and are presented as a recurring theme throughout the book. This chapter moves towards a more engineering-style exposition of neural networks, applying concepts in the previous chapters to elucidate various model design choices.

1 Introduction

Artificial neural networks have a long history in financial and economic statistics. Building on seminal work (Gallant and White 1988; Andrews 1989; Hornik et al. 1989; Kuan and White 1994; Lo 1994; Hutchinson et al. 1994; Swanson and White 1995; Racine 2001; Baillie and Kapetanios 2007), various studies have developed applications in the finance, economics, and business literature. Most recently, the literature has been extended to include deep neural networks (Sirignano et al. 2016; Dixon et al. 2016; Feng et al. 2018; Heaton et al. 2017).

In this chapter we shall introduce some of the theory of function approximation and out-of-sample estimation with neural networks when the observation points are independent and typically also identically distributed. Such a case is not suitable for time series data and shall be the subject of later chapters. We shall restrict our attention to feedforward neural networks in order to explore some of the theoretical arguments which help us reason scientifically about architecture design and approximation error. Understanding these networks from a statistical, mathematical, and information-theoretic perspective is key to being able to successfully apply them in practice.
© Springer Nature Switzerland AG 2020
M. F. Dixon et al., Machine Learning in Finance, https://doi.org/10.1007/978-3-030-41068-1_4

While this chapter does present some simple financial examples to highlight problematic conceptual issues, we defer the realistic financial applications to later chapters. Also, note that the emphasis of this chapter is how to build statistical models suitable for financial modeling; thus our emphasis is less on engineering considerations and more on how theory can guide the design of useful machine learning methods.

Chapter Objectives

By the end of this chapter, the reader should expect to accomplish the following:

– Develop mathematical reasoning skills to guide the design of neural networks;
– Gain familiarity with the main theory supporting statistical inference with neural networks;
– Relate feedforward neural networks with other types of machine learning methods;
– Perform model selection with ridge and LASSO neural network regression;
– Learn how to train and test a neural network; and
– Gain familiarity with Bayesian neural networks.

Note that section headers ending with * are more mathematically advanced, often requiring some background in analysis and probability theory, and can be skipped by the less mathematically advanced reader.

2 Feedforward Architectures

2.1 Preliminaries

Feedforward neural networks are a form of supervised machine learning that use hierarchical layers of abstraction to represent high-dimensional non-linear predictors. The paradigm that deep learning provides for data analysis is very different from the traditional statistical modeling and testing framework. Traditional fit metrics, such as $R^2$, t-values, p-values, and the notion of statistical significance, have been replaced in the machine learning literature by out-of-sample forecasting and understanding the bias–variance tradeoff; that is, the tradeoff between a more complex model versus over-fitting.
Deep learning is data-driven and focuses on finding structure in large datasets. The main tools for variable or predictor selection are regularization and dropout. There are a number of issues in any architecture design. How many layers? How many neurons $N_\ell$ in each hidden layer? How to perform "variable selection"? Many of these problems can be solved by a stochastic search technique called dropout (Srivastava et al. 2014), which we discuss in Sect. 5.2.2.

Recall from Chap. 1 that a feedforward neural network model takes the general form of a parameterized map $Y = F_{W,b}(X) + \epsilon$, where $F_{W,b}$ is a deep neural network with $L$ layers (Fig. 4.1) and $\epsilon$ is i.i.d. error. The deep neural network takes the form of a composition of simpler functions:

$$\hat{Y}(X) := F_{W,b}(X) = f^{(L)}_{W^{(L)},b^{(L)}} \circ \cdots \circ f^{(1)}_{W^{(1)},b^{(1)}}(X),$$

where $W = (W^{(1)}, \ldots, W^{(L)})$ and $b = (b^{(1)}, \ldots, b^{(L)})$ are weight matrices and bias vectors. Any weight matrix $W^{(\ell)} \in \mathbb{R}^{m \times n}$ can be expressed as $n$ column $m$-vectors $W^{(\ell)} = [w^{(\ell)}_{\cdot,1}, \ldots, w^{(\ell)}_{\cdot,n}]$. We denote each weight as $w^{(\ell)}_{ij} := W^{(\ell)}_{ij}$.

More formally and under additional restrictions, we can form this parameterized map in the class of compositions of semi-affine functions.

> Semi-Affine Functions
> Let $\sigma^{(\ell)} : \mathbb{R} \to B \subset \mathbb{R}$ denote a continuous, monotonically increasing function whose codomain is a bounded subset of the real line. A function $f^{(\ell)}_{W^{(\ell)},b^{(\ell)}} :$ (continued)

Fig. 4.1 An illustrative example of a feedforward neural network with two hidden layers, six features, and two outputs. Deep learning network classifiers typically have many more layers, use a large number of features and several outputs or classes.
The goal of learning is to find the weight on every edge and the bias for every neuron (not illustrated) that minimizes the out-of-sample error.

> $\mathbb{R}^n \to \mathbb{R}^m$, given by
>
> $$f^{(\ell)}(v) = W^{(\ell)} \sigma^{(\ell-1)}(v) + b^{(\ell)}, \quad W^{(\ell)} \in \mathbb{R}^{m \times n} \text{ and } b^{(\ell)} \in \mathbb{R}^m,$$
>
> is a semi-affine function in $v$, e.g. $f(v) = w \tanh(v) + b$. The $\sigma^{(\ell)}(\cdot)$ are the activation functions of the output from the previous layer.

If all the activation functions are linear, $F_{W,b}$ is just linear regression, regardless of the number of layers $L$, and the hidden layers are redundant. For any such network we can always find an equivalent network without hidden units. This follows from the fact that the composition of successive linear transformations is itself a linear transformation.¹ For example, if there is one hidden layer and $\sigma^{(1)}$ is the identity function, then

$$\hat{Y}(X) = W^{(2)}(W^{(1)} X + b^{(1)}) + b^{(2)} = W^{(2)} W^{(1)} X + W^{(2)} b^{(1)} + b^{(2)} = \tilde{W} X + \tilde{b}. \quad (4.3)$$

Informally, the main effect of activation is to introduce non-linearity into the model and, in particular, interaction terms between the inputs. The geometric interpretation of the activation units will be discussed in the next section. We can view the special case when the network has one hidden layer and will see that the activation function introduces interaction terms $X_i X_j$. Consider the partial derivative

$$\partial_{X_j} \hat{Y} = \sum_i w^{(2)}_{\cdot,i} \, \sigma'(I^{(1)}_i) \, w^{(1)}_{ij}, \quad (4.4)$$

where $w^{(2)}_{\cdot,i}$ is the $i$th column vector of $W^{(2)}$ and $I^{(\ell)}(X) := W^{(\ell)} X + b^{(\ell)}$, and differentiate again with respect to $X_k$, $k \neq j$, to give

$$\partial^2_{X_j, X_k} \hat{Y} = \sum_i w^{(2)}_{\cdot,i} \, \sigma''(I^{(1)}_i) \, w^{(1)}_{ij} w^{(1)}_{ik},$$

which, for $\sigma = \tanh$, equals $-2 \sum_i w^{(2)}_{\cdot,i} \, \sigma(I^{(1)}_i) \, \sigma'(I^{(1)}_i) \, w^{(1)}_{ij} w^{(1)}_{ik}$, and is not in general zero unless $\sigma$ is the identity map.

2.2 Geometric Interpretation of Feedforward Networks

We begin by considering a simple feedforward binary classifier with only two features, as illustrated in Fig. 4.2. The simplest configuration we shall consider has just two inputs and one output unit—this is a multivariate regression model.
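As a quick numerical check of this algebra, the sketch below (hypothetical layer sizes and random weights; the helper name `forward` is ours) runs the composition $F_{W,b}$ with identity activations and verifies that it agrees with the single affine map $\tilde{W}X + \tilde{b}$ of Eq. 4.3, while a non-linear activation breaks the collapse:

```python
import numpy as np

def forward(X, weights, biases, activations):
    """Forward pass: the composition f^(L) o ... o f^(1) applied to X."""
    Z = X
    for W, b, sigma in zip(weights, biases, activations):
        Z = sigma(W @ Z + b)
    return Z

rng = np.random.default_rng(0)
identity = lambda z: z

# Hypothetical sizes: 3 inputs, 4 hidden units, 2 outputs.
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

X = rng.normal(size=3)
y_net = forward(X, [W1, W2], [b1, b2], [identity, identity])

# Eq. 4.3: with identity activations the network collapses to one affine map.
W_tilde, b_tilde = W2 @ W1, W2 @ b1 + b2
assert np.allclose(y_net, W_tilde @ X + b_tilde)

# With a non-linear activation the collapse no longer holds.
y_tanh = forward(X, [W1, W2], [b1, b2], [np.tanh, identity])
assert not np.allclose(y_tanh, W_tilde @ X + b_tilde)
```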
More precisely, because we shall fit the model to binary responses, this network is a logistic regression.

¹ Note that there is a potential degeneracy in this case; there may exist "flat directions"—hypersurfaces in the parameter space that have exactly the same loss function.

Fig. 4.2 Simple two variable feedforward networks with and without hidden layers. The yellow nodes denote input variables, the green nodes denote hidden units, and the red nodes are outputs. A feedforward network without hidden layers is a linear regressor. A feedforward network with one hidden layer is a shallow learner and a feedforward network with two or more hidden layers is a deep learner.

Recall that only one output unit is required to represent the probability of a positive label, i.e. $P[G = 1 \mid X]$. The next configuration we shall consider has one hidden layer—the number of hidden units shall be equal to the number of input neurons. This choice serves as a useful reference point, as many hidden units are often needed for sufficient expressibility. The final configuration has substantially more hidden units. Note that the second layer has been introduced purely to visualize the output from the hidden layer. This set of simple configurations (a.k.a. architectures) is ample to illustrate how a neural network method works.

In Fig. 4.3 the data has been arranged so that no separating linear plane can perfectly separate the points in $[-1, 1] \times [-1, 1]$. The activation function is chosen to be ReLU$(x)$. The weights and biases of the network have been trained on this data. For each network, we can observe how the input space is transformed by the layers by viewing the top row of the figure. We can also view the linear regression in the original input space in the bottom row of the figure.
The number of units in the first hidden layer is observed to significantly affect the classifier performance.² Determining the weight and bias matrices, together with how many hidden units are needed for generalizable performance, is the goal of parameter estimation and model selection. However, we emphasize that some conceptual understanding of neural networks is needed to derive interpretability, the topic of Chap. 5.

Partitioning

The partitioning of the input space is a distinguishing feature of neural networks compared to other machine learning methods. Each hidden unit defines a manifold which divides the input space into convex regions.

² There is some redundancy in the construction of the network and around 50 units are needed.

Fig. 4.3 This figure compares various feedforward neural network classifiers applied to a toy, non-linearly separable, binary classification dataset. Its purpose is to illustrate that increasing the number of hidden units in the first hidden layer provides substantial expressibility, even when the number of input variables is small. (Top) Each neural network classifier attempts to separate the labels with a hyperplane in the space of the output from the last hidden layer, $Z^{(L-1)}$. If the network has no hidden layers, then $Z^{(L-1)} = Z^{(0)} = X$. The features are shown in the space of $Z^{(L-1)}$. (Bottom) The separating hyperplane in the space of $Z^{(L-1)}$ is projected to the input space in order to visualize how the layers partition the input space. (Left) A feedforward classifier with no hidden layers is a logistic regression model—it partitions the input space with a plane. (Center) One hidden layer transforms the features by rotation, dilatation, and truncation. (Right) Two hidden layers with many hidden units perform an affine projection into high-dimensional space where points are more separable.
See the Deep Classifiers notebook for an implementation of the classifiers and additional diagnostic tests (not shown here).

In other words, each unit in the hidden layer implements a half-space predictor. In the case of a ReLU activation function $f(x) = \max(x, 0)$, each manifold is simply a hyperplane, and the neuron gets activated when the observation is on the "best" side of this hyperplane; the activation amount is equal to how far from the boundary the given point is. The set of hyperplanes defines a hyperplane arrangement (Montúfar et al. 2014). In general, an arrangement of $n \ge p$ hyperplanes in $\mathbb{R}^p$ has at most $\sum_{j=0}^{p} \binom{n}{j}$ convex regions. For example, in a two-dimensional input space, three neurons with ReLU activation functions will divide the space into no more than $\sum_{j=0}^{2} \binom{3}{j} = 7$ regions, as shown in Fig. 4.4.

Multiple Hidden Layers

We can easily extend this geometrical interpretation to three-layered perceptrons ($L = 3$). Clearly, the neurons in the first (hidden) layer partition the network input space by corresponding hyperplanes into various half-spaces. Hence, the number of these half-spaces equals the number of neurons in the first layer. Then, the neurons in the second layer can classify the intersections of some of these half-spaces, i.e. they represent convex regions in the input space. This means that a neuron from the second layer is active if and only if the network input corresponds to a point in the input space that is located simultaneously in all half-spaces which are classified by selected neurons from the first layer.

Fig. 4.4 Hyperplanes defined by three neurons in the hidden layer, each with ReLU activation functions, form a hyperplane arrangement. An arrangement of 3 hyperplanes in $\mathbb{R}^2$ has at most $\sum_{j=0}^{2} \binom{3}{j} = 7$ convex regions.
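The region-counting bound $\sum_{j=0}^{p} \binom{n}{j}$ is easy to evaluate directly. A minimal sketch (the function name is ours):

```python
from math import comb

def max_regions(n, p):
    """Maximum number of convex regions cut out by an arrangement of
    n hyperplanes in R^p: sum_{j=0}^{p} C(n, j)."""
    return sum(comb(n, j) for j in range(p + 1))

assert max_regions(3, 2) == 7    # three ReLU neurons, two inputs: at most 7 regions
assert max_regions(3, 1) == 4    # three thresholds on a line: four intervals
assert max_regions(4, 2) == 11
```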
A neural network with $p$ input units and $L - 1$ hidden layers, with equal width $n^{(\ell)} = n \ge p$ rectifiers at the $\ell$th layer, can compute functions that have $\left\lfloor \frac{n}{p} \right\rfloor^{(L-2)p} \binom{n}{p}$ linear regions (Montúfar et al. 2014). We see that the number of linear regions of deep models grows exponentially in $L$ and polynomially in $n$. See Montúfar et al. (2014) for a more detailed exposition of how the additional layers partition the input space.

While this form of reasoning guides our intuition towards designing neural network architectures, it falls short of explaining why projection into a higher dimensional space is complementary to how the networks partition the input space. To address this, we turn to some informal probabilistic reasoning to aid our understanding.

2.3 Probabilistic Reasoning

Data Dimensionality

First consider any two independent standard Gaussian random $p$-vectors $X, Y \sim N(0, I)$ and define their distance in Euclidean space by the 2-norm

$$d(X, Y)^2 := \|X - Y\|_2^2 = \sum_{i=1}^{p} (X_i - Y_i)^2.$$

Taking expectations gives

$$E[d(X, Y)^2] = \sum_{i=1}^{p} E[X_i^2] + E[Y_i^2] = 2p.$$

Under these i.i.d. assumptions, the mean of the pairwise distance squared between any random points in $\mathbb{R}^p$ grows linearly with the dimensionality of the space. By Jensen's inequality for concave functions, such as $\sqrt{x}$,

$$E[d(X, Y)] = E\left[\sqrt{d(X, Y)^2}\right] \le \sqrt{E[d(X, Y)^2]} = \sqrt{2p},$$

and hence the expected distance is bounded above by a function which grows as $p^{1/2}$. This simple observation supports the characterization of random points as being less concentrated as the dimensionality of the input space increases. In particular, this property suggests machine learning techniques which rely on concentration of points in the input space, such as linear kernel methods, may not scale well with dimensionality.
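The identity $E[d(X,Y)^2] = 2p$ can be checked by Monte Carlo; a minimal sketch (sample sizes are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_sq_dist(p, n_pairs=200_000):
    """Monte Carlo estimate of E ||X - Y||^2 for independent X, Y ~ N(0, I_p)."""
    X = rng.standard_normal((n_pairs, p))
    Y = rng.standard_normal((n_pairs, p))
    return np.mean(np.sum((X - Y) ** 2, axis=1))

for p in (1, 10, 100):
    est = mean_sq_dist(p)
    # The exact value is 2p; allow a small Monte Carlo tolerance.
    assert abs(est - 2 * p) < 0.05 * 2 * p
```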
More importantly, this notion of loss of concentration with dimensionality of the input space does not conflict with how the input space is partitioned—the model defines a convex polytope with a less stringent requirement for locality of data for approximation accuracy.

Size of Hidden Layer

A similar simple probabilistic reasoning can be applied to the output from a one-layer network to understand how concentration varies with the number of units in the hidden layer. Consider, as before, two i.i.d. random vectors $X$ and $Y$ in $\mathbb{R}^p$. Suppose now that these vectors are projected by a bounded semi-affine function $g : \mathbb{R}^p \to \mathbb{R}^q$. Assume that the output vectors $g(X), g(Y) \in \mathbb{R}^q$ are i.i.d. with zero mean and variance $\sigma^2 I$. Defining the distance between the output vectors as the 2-norm,

$$d_g^2 := \|g(X) - g(Y)\|_2^2 = \sum_{i=1}^{q} (g_i(X) - g_i(Y))^2.$$

Under expectations,

$$E[d_g^2] = \sum_{i=1}^{q} E[g_i(X)^2] + E[g_i(Y)^2] = 2q\sigma^2 \le q(\bar{g} - \underline{g})^2,$$

and again by Jensen's inequality, $E[d_g] \le \sqrt{2q}\,\sigma \le \sqrt{q}\,(\bar{g} - \underline{g})$, where $\bar{g}$ and $\underline{g}$ are the upper and lower bounds of the codomain of $g$. We observe that the distance between the two output vectors, corresponding to the output of a hidden layer $g$ under different inputs $X$ and $Y$, can be less concentrated as the dimensionality of the output space increases. In other words, points in the codomain of $g$ are on average more separate as $q$ increases.

2.4 Function Approximation with Deep Learning*

While the above informal geometric and probabilistic reasoning provides some intuition for the need for multiple units in the hidden layer of a two-layer MLP, it does not address why deep networks are needed. The most fundamental mathematical concept in neural networks is the universal representation theorem. Simply put, this is a statement about the ability of a neural network to approximate any continuous, and unknown, function between input and output pairs with a simple, and known, functional representation. Hornik et al.
(1989) show that a feedforward network with a single hidden layer can approximate any continuous function, regardless of the choice of activation function or data. Formally, let $C^p := \{F : \mathbb{R}^p \to \mathbb{R} \mid F(x) \in C(\mathbb{R})\}$ be the set of continuous functions from $\mathbb{R}^p$ to $\mathbb{R}$. Denote $\Sigma^p(g)$ as the class of functions $\{F : \mathbb{R}^p \to \mathbb{R} : F(x) = W^{(2)} \sigma(W^{(1)} x + b^{(1)}) + b^{(2)}\}$.

Consider $\Omega = (0, 1]$ and let $\mathcal{C}_0$ be the collection of all open intervals in $(0, 1]$. Then $\sigma(\mathcal{C}_0)$, the $\sigma$-algebra generated by $\mathcal{C}_0$, is called the Borel $\sigma$-algebra. It is denoted by $\mathcal{B}((0, 1])$. An element of $\mathcal{B}((0, 1])$ is called a Borel set. A map $f : X \to Y$ between two topological spaces $X, Y$ is called Borel measurable if $f^{-1}(A)$ is a Borel set for any open set $A$. Let $M^p := \{F : \mathbb{R}^p \to \mathbb{R} \mid F(x) \in \mathcal{B}(\mathbb{R})\}$ be the set of all Borel measurable functions from $\mathbb{R}^p$ to $\mathbb{R}$. We denote the Borel $\sigma$-algebra of $\mathbb{R}^p$ as $\mathcal{B}^p$.

> Universal Representation Theorem (Hornik et al. (1989))
> For every monotonically increasing activation function $\sigma$, every input dimension size $p$, and every probability measure $\mu$ on $(\mathbb{R}^p, \mathcal{B}^p)$, $\Sigma^p(g)$ is uniformly dense on compacta in $C^p$ and $\rho_\mu$-dense in $M^p$.

This theorem shows that standard feedforward networks with only a single hidden layer can approximate any continuous function uniformly on any compact set, and any measurable function arbitrarily well in the $\rho_\mu$ metric, regardless of the activation function (provided it is measurable), regardless of the dimension of the input space, $p$, and regardless of the input space. In other words, by taking the number of hidden units, $k$, large enough, every continuous function over $\mathbb{R}^p$ can be approximated arbitrarily closely, uniformly over any bounded set, by functions realized by neural networks with one hidden layer. The universal approximation theorem is important because it characterizes feedforward networks with a single hidden layer as a class of approximate solutions.
However, the theorem is not constructive—it does not specify how to configure a multilayer perceptron with the required approximation properties. The theorem has some important limitations. It says nothing about the effect of adding more layers, other than to suggest they are redundant. It assumes that the optimal network weight vectors are reachable by gradient descent from the initial weight values, but this may not be possible in finite computations. Hence there are additional limitations introduced by the learning algorithm which are not apparent from a functional approximation perspective. The theorem cannot characterize the prediction error in any way; the result is purely based on approximation theory. An important concern is over-fitting and performance generalization on out-of-sample datasets, neither of which it addresses. Moreover, it does not inform how MLPs can recover other approximation techniques, as a special case, such as polynomial spline interpolation. As such, we shall turn to alternative theory in this section to assess the learnability of a neural network and to further understand it, beginning with a perceptron binary classifier. The reason why multiple hidden layers are needed is still an open problem, but various clues are provided in the next section and later in Sect. 2.7.

2.5 VC Dimension

In addition to expressive power, which determines the approximation error of the model, there is the notion of learnability, which determines the level of estimation error. The former measures the error introduced by an approximating function and the latter measures the performance lost as a result of using a finite training sample. One classical measure of the learnability of neural network classifiers is the Vapnik–Chervonenkis (VC) dimension. The VC dimension of a binary model $g = F_{W,b}(X)$ is the maximum number of points that can be arranged so that $F_{W,b}(X)$ shatters them, i.e.
for all possible assignments of labels to those points, there exists a $W, b$ such that $F_{W,b}$ makes no errors when classifying that set of data points. In the simplest case, a perceptron with $n$ input units and a linear threshold activation $\sigma(x) := \text{sgn}(x)$ has a VC dimension of $n + 1$. For example, if $n = 1$, then only two distinct points can always be correctly classified under all possible binary label assignments. As shown in Fig. 4.5, for the points $\{-0.5, 0.5\}$, there are weights and biases that activate both of them ($W = 1$, $b = 0.75$), only one of them ($W = 1$, $b = 0$ or $W = -1$, $b = 0$), and none of them ($W = 1$, $b = -0.75$). Every distinct pair of points is separable with the linear threshold perceptron, so every dataset of size 2 is shattered by the perceptron. However, this linear threshold perceptron is incapable of shattering triplets, for example, $X \in \{-0.5, 0, 0.5\}$ and $Y \in \{0, 1, 0\}$.

Fig. 4.5 For the points $\{-0.5, 0.5\}$, there are weights and biases that activate only one of them ($W = 1$, $b = 0$ or $W = -1$, $b = 0$), none of them ($W = 1$, $b = -0.75$), and both of them ($W = 1$, $b = 0.75$).

In general, the VC dimension of the class of half-spaces in $\mathbb{R}^k$ is $k + 1$. For example, a 2d plane shatters any three points, but cannot shatter four points. The VC dimension determines both the necessary and sufficient conditions for the consistency and rate of convergence of learning processes (i.e., the process of choosing an appropriate function from a given set of functions). If a class of functions has a finite VC dimension, then it is learnable. This measure of capacity is more robust than arbitrary measures such as the number of parameters. It is possible, for example, to find a simple set of functions that depends on only one parameter and that has infinite VC dimension.

?
VC Dimension of an Indicator Function

Determine the VC dimension of the indicator function over $\Omega = [0, 1]$:

$$F(x) = \{f : \Omega \to \{0, 1\},\ f(x) = 1_{x \in [t_1, t_2)}\ \text{or}\ f(x) = 1 - 1_{x \in [t_1, t_2)},\ t_1 < t_2 \in \Omega\}. \quad (4.12)$$

Suppose there are three points $x_1$, $x_2$, and $x_3$ and assume $x_1 < x_2 < x_3$ without loss of generality. All possible labelings of the points are reachable; therefore, we assert that $VC(F) \ge 3$. With four points $x_1$, $x_2$, $x_3$, and $x_4$ (assumed increasing as always), you cannot, for example, label $x_1$ and $x_3$ with the value 1 and $x_2$ and $x_4$ with the value 0. Hence $VC(F) = 3$.

Recently, Bartlett et al. (2017a) proved upper and lower bounds on the VC dimension of deep feedforward neural network classifiers with piecewise linear activation functions, such as ReLU activation functions. These bounds are tight for almost the entire range of parameters. Letting $|W|$ be the number of weights and $L$ be the number of layers, they proved that the VC dimension is $O(|W| L \log(|W|))$. They further showed the effect of network depth on VC dimension with different non-linearities: there is no dependence for piecewise constant, linear dependence for piecewise-linear, and no more than quadratic dependence for general piecewise-polynomials.

Vapnik (1998) formulated a method of VC dimension based inductive inference. This approach, known as structural risk minimization, achieves the smallest bound on the test error by using the training errors and choosing the machine (i.e., the set of functions) with the smallest VC dimension. The minimization problem expresses the bias–variance tradeoff. On the one hand, to minimize the bias, one needs to choose a function from a wide set of functions, not necessarily with a low VC dimension. On the other hand, the difference between the training error and the test error (i.e., variance) increases with VC dimension (a.k.a. expressibility).
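The shattering arguments for the one-input threshold perceptron can be verified by brute force. A sketch (the grid search over $(W, b)$ is a hypothetical device of ours, not part of the theory; activation taken as $1\{Wx + b \ge 0\}$):

```python
import itertools

import numpy as np

def can_realize(points, labels, grid=np.linspace(-2, 2, 81)):
    """Brute-force search for (W, b) such that 1{W x + b >= 0} reproduces labels."""
    for W in grid:
        for b in grid:
            if [int(W * x + b >= 0) for x in points] == list(labels):
                return True
    return False

# The threshold perceptron shatters any pair of distinct points ...
two = [-0.5, 0.5]
assert all(can_realize(two, lab) for lab in itertools.product([0, 1], repeat=2))

# ... but cannot realize the alternating labeling of three points.
assert not can_realize([-0.5, 0.0, 0.5], (0, 1, 0))
```

For a fixed $W$, the predicted labels are monotone along the line, which is why the alternating labeling $(0, 1, 0)$ is out of reach.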
The expected risk is an out-of-sample measure of performance of the learned model and is based on the joint probability density function (pdf) $p(x, y)$:

$$R[\hat{F}] = E[L(\hat{F}(X), Y)] = \int L(\hat{F}(x), y)\, dp(x, y).$$

If one could choose $\hat{F}$ to minimize the expected risk, then one would have a definite measure of optimal learning. Unfortunately, the expected risk cannot be measured directly since this underlying pdf is unknown. Instead, we typically use the risk over the training set of $N$ observations, also known as the empirical risk measure (ERM):

$$R_{emp}(\hat{F}) := \frac{1}{N} \sum_{i=1}^{N} L(\hat{F}(x_i), y_i).$$

Under i.i.d. data assumptions, the law of large numbers ensures that the empirical risk will asymptotically converge to the expected risk. However, for small samples, one cannot guarantee that ERM will also minimize the expected risk. A famous result from statistical learning theory (Vapnik 1998) is that the VC dimension provides bounds on the expected risk as a function of the ERM and the number of training observations $N$, which holds with probability $(1 - \eta)$:

$$R[\hat{F}] \le R_{emp}(\hat{F}) + \sqrt{\frac{h\left(\ln \frac{2N}{h} + 1\right) - \ln \frac{\eta}{4}}{N}},$$

where $h$ is the VC dimension of $\hat{F}(X)$ and $N > h$. Figure 4.6 shows the tradeoff between VC dimension and the tightness of the bound. As the ratio $N/h$ gets larger, i.e. for a fixed $N$ we decrease $h$, the VC confidence becomes smaller and the actual risk becomes closer to the empirical risk. On the other hand, choosing a model with a higher VC dimension reduces the ERM at the expense of increasing the VC confidence.

Fig. 4.6 This figure shows the tradeoff between VC dimension and the tightness of the bound. As the ratio $N/h$ gets larger, i.e. for a fixed $N$ we decrease $h$, the VC confidence becomes smaller and the actual risk becomes closer to the empirical risk.
On the other hand, choosing a model with a higher VC dimension reduces the ERM at the expense of increasing the VC confidence.

The VC dimension plays a more dominant role in small-scale learning problems, where i.i.d. training data is limited and optimization error, that is, the error introduced by the optimizer, is negligible. Beyond a certain sample size, computing power and the optimization algorithm become more dominant, and the VC dimension is limited as a measure of learnability. Several studies demonstrate that VC dimension based error bounds are too weak, and their usage, while providing some intuitive notion of model complexity, has faded in favor of alternative theories. Perhaps most importantly for finance, the bound in Eq. 4.15 only holds for i.i.d. data, and little is known in the case when the data is auto-correlated.

? Multiple Choice Question 1

Which of the following statements are true:

1. The hidden units of a shallow feedforward network, with $n$ hidden units, partition the input space in $\mathbb{R}^p$ into no more than $\sum_{j=0}^{p} \binom{n}{j}$ convex regions.
2. The VC dimension of a Heaviside activated shallow feedforward network, with one hidden unit and $p$ features, is $p + 1$.
3. The bias–variance tradeoff is equivalently expressed through the VC confidence and the empirical risk measure.
4. The upper bound on the out-of-sample error of a feedforward network depends on its VC dimension and the number of training samples.
5. The VC dimension always grows linearly with the number of layers in a deep network.

2.6 When Is a Neural Network a Spline?*

Under certain choices of the activation function, we can construct MLPs which are a certain type of piecewise polynomial interpolants referred to as "splines." Let $f(x)$ be any function whose domain is $\Omega$ and whose function values $f_k := f(x_k)$ are known only at grid points $\Omega_h := \{x_k \mid x_k = kh,\ k \in \{1, \ldots, K\}\} \subset \Omega \subset \mathbb{R}$ which are spaced by $h$.
Note that the requirement that the data is gridded is for ease of exposition and is not necessary. We construct an orthogonal basis over $\Omega$ to give the interpolant

$$\hat{f}(x) = \sum_{k=1}^{K} \phi_k(x) f_k, \quad x \in \Omega,$$

where the $\{\phi_k\}_{k=1}^{K}$ are orthogonal basis functions. Under additional restrictions on the function space of $f$, we can derive error bounds which are a function of $h$.

We can easily demonstrate how a MLP with hidden units activated by Heaviside functions (unit step functions) is a piecewise constant functional approximation. Let $f(x)$ be any function whose domain is $\Omega = [0, 1]$. Suppose that the function is Lipschitz continuous, that is, $\forall x, x' \in [0, 1)$, $|f(x') - f(x)| \le L|x' - x|$, for some constant $L \ge 0$. Using Heaviside functions to activate the hidden units,

$$H(x) = \begin{cases} 1, & x \ge 0, \\ 0, & x < 0, \end{cases}$$

we construct a neural network with $K = \frac{L}{2\epsilon} + 1$ units in a single hidden layer that approximates $f(x)$ within $\epsilon > 0$. That is, $\forall x \in [0, 1)$, $|f(x) - \hat{f}(x)| \le \epsilon$, where $\hat{f}(x)$ is the output of the neural network given input $x$. Let $\Delta = \frac{\epsilon}{L}$. We shall show that the neural network is a linear combination of indicator functions, $\phi_k$, with compact support over $[x_k - \Delta, x_k + \Delta)$ and centered about $x_k$:

$$\phi_k(x) = \begin{cases} 1, & x \in [x_k - \Delta, x_k + \Delta), \\ 0, & \text{otherwise.} \end{cases}$$

The $\{\phi_k\}_{k=1}^{K}$ are piecewise constant basis functions, $\phi_i(x_j) = \delta_{ij}$, and the first few are illustrated in Fig. 4.7 below. The basis functions satisfy the partition of unity property $\sum_{k=1}^{K} \phi_k(x) = 1$, $\forall x \in \Omega$.

Fig. 4.7 The first three piecewise constant basis functions produced by the difference of neighboring step function activated units, $\phi_k(x) = H(x - (x_k - \Delta)) - H(x - (x_k + \Delta))$.

We shall construct such basis functions as the difference of Heaviside functions

$$\phi_k(x) = H(x - (x_k - \Delta)) - H(x - (x_k + \Delta)), \quad x_k = (2k - 1)\Delta,$$

by choosing the biases $b_k^{(1)} = -2(k - 1)\Delta$ and $W^{(1)} = \mathbf{1}$, so that the neural network, $\hat{f}(X) = W^{(2)} H(W^{(1)} X + b^{(1)})$, has values based on

$$H(W^{(1)} x + b^{(1)}) = \begin{bmatrix} H(x) \\ H(x - 2\Delta) \\ \vdots \\ H(x - 2(k - 1)\Delta) \\ \vdots \\ H(x - 2(K - 1)\Delta) \end{bmatrix}.$$

Then $W^{(2)}$ is set equal to exact function values and their differences:

$$W^{(2)} = [f(x_1), f(x_2) - f(x_1), \ldots, f(x_K) - f(x_{K-1})],$$

so that

$$\hat{f} = \begin{cases} f(x_1), & x \le 2\Delta, \\ f(x_2), & 2\Delta < x \le 4\Delta, \\ \ldots \\ f(x_k), & 2(k - 1)\Delta < x \le 2k\Delta, \\ \ldots \\ f(x_K), & 2(K - 1)\Delta < x \le 2K\Delta. \end{cases}$$

Fig. 4.8 The approximation of $\cos(2\pi x)$ using gridded input data and Heaviside activation functions (panels: function values; absolute error). The error in approximation is at most $\epsilon$ with $K = \frac{L}{2\epsilon} + 1$ hidden units.

Figure 4.8 illustrates the function approximation for the case when $f(x) = \cos(2\pi x)$. Since $x_k = (2k - 1)\Delta$, we have that $\hat{Y} = f(x_k)$ over the interval $[x_k - \Delta, x_k + \Delta)$, which is the support of $\phi_k(x)$. By the Lipschitz continuity of $f(x)$, it follows that the worst-case error appears at the mid-point of any interval $[x_k, x_{k+1})$:

$$|f(x_k + \Delta) - \hat{f}(x_k + \Delta)| = |f(x_k + \Delta) - f(x_k)| \le L\Delta = \epsilon. \quad (4.22)$$

This example is a special case of a more general representation permitted by MLPs. If we relax the requirement that the points be gridded, and instead just assume there are $K$ data points in $\mathbb{R}^p$, then the region boundaries created by the $K$ hidden units define a Voronoi diagram. Informally, a Voronoi diagram is a partitioning of a plane into regions based on distance to points in a specific subset of the plane. The set of points are referred to as "seeds." For each seed there is a corresponding region consisting of all points closer to that seed than to any other. The discussion of Voronoi diagrams is beyond the scope of this chapter, but suffice it to say that the representation of MLPs as splines extends to higher dimensional input spaces and higher degree splines.
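The Heaviside construction above can be implemented directly. The sketch below (helper names are ours; the unit count uses a ceiling rather than the exact $K = \frac{L}{2\epsilon} + 1$ bookkeeping) builds the single-hidden-layer network for $f(x) = \cos(2\pi x)$, whose Lipschitz constant on $[0, 1)$ is $2\pi$, and checks that the maximum error is within $\epsilon$:

```python
import numpy as np

def heaviside_net(f, lip, eps):
    """One-hidden-layer network with Heaviside activations approximating a
    Lipschitz function f on [0, 1) within eps (piecewise-constant construction)."""
    delta = eps / lip                              # half-width of each cell
    K = int(np.ceil(1.0 / (2 * delta)))            # number of hidden units / cells
    xk = (2 * np.arange(1, K + 1) - 1) * delta     # cell centres x_k = (2k-1)*delta
    fk = f(xk)
    # W2 holds f(x_1) and the successive differences, per the construction above.
    W2 = np.concatenate(([fk[0]], np.diff(fk)))
    b1 = -2 * np.arange(K) * delta                 # biases b_k = -2(k-1)*delta

    def fhat(x):
        H = (x[:, None] + b1[None, :] >= 0).astype(float)  # Heaviside outputs
        return H @ W2

    return fhat

f = lambda x: np.cos(2 * np.pi * x)
eps = 0.1
fhat = heaviside_net(f, lip=2 * np.pi, eps=eps)
x = np.linspace(0, 1, 2001, endpoint=False)
assert np.max(np.abs(f(x) - fhat(x))) <= eps   # error within eps, as in Fig. 4.8
```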
Hence, under a special configuration of the weights and biases, with the hidden units defining Voronoi cells for each observation, we can show that a neural network is a univariate spline. This result generalizes to higher dimensional and higher order splines. Such a result enables us to view splines as a special case of a neural network, which is consistent with our reasoning of neural networks as generalized approximation and regression techniques. The formulation of neural networks as splines allows approximation theory to guide the design of the network. Unfortunately, equating neural networks with splines says little about why and when multiple layers are needed.

2.7 Why Deep Networks?

The extension to deep neural networks is in fact well motivated on statistical and information-theoretical grounds (Tishby and Zaslavsky 2015; Poggio 2016; Mhaskar et al. 2016; Martin and Mahoney 2018; Bartlett et al. 2017a). Poggio (2016) shows that deep networks can achieve superior performance versus linear additive models, such as linear regression, while avoiding the curse of dimensionality. There are additionally many recent theoretical developments which characterize the approximation behavior as a function of network depth, width, and sparsity level (Polson and Rockova 2018). Recently, Bartlett et al. (2017b) proved upper and lower bounds on the expressibility of deep feedforward neural network classifiers with piecewise linear activation functions, such as ReLU activation functions. These bounds are tight for almost the entire range of parameters. Letting $n$ denote the total number of weights, they prove that the VC dimension is $O(nL\log(n))$.

> VC Dimension Theorem
> Theorem (Bartlett et al. (2017b)) There exists a universal constant $C$ such that the following holds. Given any $W, L$ with $W > CL > C^2$, there exists a ReLU network with $\le L$ layers and $\le W$ parameters with VC dimension $\ge W L \log(W/L)/C$.
They further showed the effect of network depth on VC dimension with different non-linearities: there is no dependence on depth for piecewise-constant activations, linear dependence for piecewise-linear, and no more than quadratic dependence for general piecewise-polynomial. Thus the relationship between expressibility and depth is determined by the degree of the activation function. There is further ample theoretical evidence to suggest that shallow networks cannot approximate the class of non-linear functions represented by deep ReLU networks without blow-up. Telgarsky (2016) shows that there is a ReLU network with L layers such that any network approximating it with only O(L^{1/3}) layers must have Ω(2^{L^{1/3}}) units. Mhaskar et al. (2016) discuss the differences between composite versus additive models and show that it is possible to approximate higher-order polynomials much more efficiently with several hidden layers than with a single hidden layer. Martin and Mahoney (2018) show that deep networks are implicitly self-regularizing, behaving like Tikhonov regularization. Tishby and Zaslavsky (2015) characterize the layers as "statistically decoupling" the input variables.

Approximation with Compositions of Functions

To gain some intuition as to why function composition can lead to successively more accurate function representation with each layer, consider the following example of a binary expansion of a decimal x.

Example 4.1 Binary Expansion of a Decimal

For each integer n ≥ 1 and x ∈ [0, 1], define f_n(x) = x_n, where x_n is the n-th binary digit of x. The binary expansion of x is $x = \sum_{n=1}^{\infty} \frac{x_n}{2^n}$, where x_n is 1 or 0 depending on whether $X_{n-1} \ge \frac{1}{2^n}$ or not, respectively, and $X_n := x - \sum_{i=1}^{n} \frac{x_i}{2^i}$. For example, we can find the first binary digit, x_1, as either 1 or 0 depending on whether X_0 = x ≥ 1/2. Now consider X_1 = x − x_1/2 and set x_2 = 1 if X_1 ≥ 1/2² or x_2 = 0 otherwise.
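The digit recursion in Example 4.1 can be sketched in Python (the function names are ours, not the book's):

```python
# Sketch of Example 4.1 (our function names): extract the binary digits x_n of
# x in [0, 1] via X_n = X_{n-1} - x_n / 2^n, with x_n = 1 iff X_{n-1} >= 2^{-n}.
def binary_digits(x, n_digits):
    digits = []
    remainder = x                        # X_0 = x
    for n in range(1, n_digits + 1):
        bit = 1 if remainder >= 2 ** (-n) else 0
        digits.append(bit)
        remainder -= bit * 2 ** (-n)     # X_n = X_{n-1} - x_n / 2^n
    return digits, remainder

digits, rem = binary_digits(0.8125, 4)   # 0.8125 = 1/2 + 1/4 + 1/16
print(digits, rem)                       # -> [1, 1, 0, 1] 0.0
```

After n steps the remainder X_n is at most 2^{-n}, which is the error bound exploited in the next example.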
Example 4.2 Neural Network for Binary Expansion of a Decimal

A deep feedforward network for such a binary expansion of a decimal uses two neurons in each layer with different activations, Heaviside and identity functions. The input weight matrix, W^(1), is the identity matrix; ordering the identity output first and the Heaviside output second, the other weight matrices, {W^(ℓ)}_{ℓ>1}, are

$$
W^{(\ell)} = \begin{bmatrix} 1 & -2^{-(\ell-1)} \\ 1 & -2^{-(\ell-1)} \end{bmatrix},
$$

with σ_1(x) = H(x, 1/2^ℓ), where H(x, t) denotes the Heaviside function with threshold t, and σ_2(x) = id(x) = x. There are no bias terms. The output after ℓ hidden layers is the remainder, X_ℓ ≤ 1/2^ℓ.

While the example of converting a decimal to binary format using a binary expansion is simple, the approach can be readily generalized to the binary expansion of polynomials.

Theorem 4.2 (Liang and Srikant (2016)) For the p-th order polynomial $f(x) = \sum_{i=0}^{p} a_i x^i$, x ∈ [0, 1], $\sum_{i=1}^{p} |a_i| \le 1$, there exists a multilayer neural network $\hat{f}(x)$ with $O\!\left(p + \log\frac{p}{\varepsilon}\right)$ layers, $O\!\left(\log\frac{p}{\varepsilon}\right)$ Heaviside units, and $O\!\left(p\log\frac{p}{\varepsilon}\right)$ rectifier linear units such that $|f(x) - \hat{f}(x)| \le \varepsilon$, ∀x ∈ [0, 1].

Proof The sketch of the proof is as follows. Liang and Srikant (2016) use the deep structure shown in Fig. 4.9 to find the n-step binary expansion $\sum_{i=1}^{n} x_i/2^i$ of x. Then they construct a multilayer network to approximate the polynomials g_i(x) = x^i, i = 1, . . . , p. Finally, they analyze the approximation error, which is $|f(x) - \hat{f}(x)| \le \frac{p}{2^{n-1}}$. See Appendix "Proof of Theorem 4.2" for the full proof.

Composition with ReLU Activation

An intuitive way to understand the importance of multiple network layers is to consider the effect of composing piecewise affine functions instead of adding them. It is easy to see that combinations of ReLU activated neurons give piecewise affine approximations.

Fig. 4.9 An illustrative example of a deep feedforward neural network for binary expansion of a decimal; layer ℓ computes x_ℓ = H(x − x_1/2 − · · · − x_{ℓ−1}/2^{ℓ−1}, 1/2^ℓ) and the remainder id(x − x_1/2 − · · · − x_{ℓ−1}/2^{ℓ−1})
For example, consider the shallow ReLU networks with two and four perceptrons in Fig. 4.10:

$$
F_{W,b} = W^{(2)}\sigma(W^{(1)}x + b^{(1)}), \quad \sigma(x) := \max(x, 0).
$$

Let us start by defining σ : R → R to be t-sawtooth if it is piecewise affine with t pieces, meaning R is partitioned into t consecutive intervals, and σ is affine within each interval. Consequently, ReLU(x) is 2-sawtooth, but this class also includes many other functions; for instance, the decision stumps used in boosting are 2-sawtooth, and decision trees with t − 1 nodes correspond to t-sawtooths. The following lemma serves to build intuition about the effect of adding versus composing sawtooth functions, which is illustrated in Fig. 4.11.

Lemma 4.1 Let f : R → R and g : R → R be, respectively, k- and l-sawtooth. Then f + g is (k + l)-sawtooth, and f ∘ g is kl-sawtooth.

Fig. 4.10 A shallow ReLU network with (left) two perceptrons, 2σ(x) − 4σ(x − 1/2), i.e. W^(2) = [2, −4] and b^(1) = [0, −1/2]^T, and (right) four perceptrons, 4σ(x) − 8σ(x − 1/4) + 4σ(x − 1/2) − 8σ(x − 3/4), i.e. W^(2) = [4, −8, 4, −8] and b^(1) = [0, −1/4, −1/2, −3/4]^T

Fig. 4.11 Adding versus composing 2-sawtooth functions: (a) adding 2-sawtooths, f(x) + g(x); (b) composing 2-sawtooths, f(g(x))

Let us now build on this result by considering the mirror map f_m : R → R, which is shown in Fig. 4.12 and defined as

$$
f_m(x) := \begin{cases} 2x & \text{when } 0 \le x \le 1/2, \\ 2(1-x) & \text{when } 1/2 < x \le 1, \\ 0 & \text{otherwise.} \end{cases}
$$

Note that f_m can be represented by a two-layer ReLU activated network with two neurons; for instance, f_m(x) = 2σ(x) − 4σ(x − 1/2). Hence f_m^k is the composition of k (identical) ReLU sub-networks. A key observation is that fewer hidden units are needed to shatter a set of points when the network is deep versus shallow.
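Lemma 4.1 can be illustrated numerically with the mirror map just defined. The sketch below (our code) counts affine pieces on [0, 1] from slope changes on a dyadic grid: composition multiplies the piece count, while this particular sum leaves it unchanged (the lemma's k + l is only an upper bound):

```python
# Sketch (our code) illustrating Lemma 4.1 with the 2-sawtooth mirror map f_m:
# composing multiplies the number of affine pieces, adding at most adds them.
def f_m(x):
    if 0 <= x <= 0.5:
        return 2 * x
    if 0.5 < x <= 1:
        return 2 * (1 - x)
    return 0.0

def count_pieces(g, n=1024):
    # count affine pieces on [0, 1] by counting slope changes on a dyadic grid
    xs = [i / n for i in range(n + 1)]
    slopes = [round((g(xs[i + 1]) - g(xs[i])) * n, 6) for i in range(n)]
    return 1 + sum(1 for i in range(1, n) if slopes[i] != slopes[i - 1])

add = lambda x: f_m(x) + f_m(x)      # sum of two 2-sawtooths: still 2 pieces here
comp = lambda x: f_m(f_m(x))         # composition: 2 * 2 = 4 pieces on [0, 1]
print(count_pieces(f_m), count_pieces(add), count_pieces(comp))  # -> 2 2 4
```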
Consider, for example, the sequence of n = 2^k points with alternating labels, referred to as the n-ap, and illustrated in Fig. 4.13 for the case when k = 3. As the x values pass from left to right, the labels change as often as possible and provide the most challenging arrangement for shattering n points. There are many ways to measure the representation power of a network, but we will consider the classification error here. Suppose that we have a σ-activated network with m units per layer and l layers. Given a function f : R^p → R, let f̃ : R^p → {0, 1} denote the corresponding classifier f̃(x) := 1_{f(x) ≥ 1/2}, and, additionally, given a sequence of points ((x_i, y_i))_{i=1}^n with x_i ∈ R^p and y_i ∈ {0, 1}, define the classification error as $E(f) := \frac{1}{n}\sum_i 1_{\tilde{f}(x_i) \ne y_i}$.

Fig. 4.12 The mirror map composed with itself: f_m, f_m², f_m³

Fig. 4.13 The n-ap consists of n uniformly spaced points with alternating labels over the interval [0, 1 − 2^{−k}]. That is, the points ((x_i, y_i))_{i=1}^n with x_i = i2^{−k} and y_i = 0 when i is even, and otherwise y_i = 1

Given a sawtooth function, its classification error on the n-ap may be lower bounded as follows.

Lemma 4.2 Let ((x_i, y_i))_{i=1}^n be given according to the n-ap. Then every t-sawtooth function f : R → R satisfies E(f) ≥ (n − 4t)/(3n).

The proof in the appendix relies on a simple counting argument for the number of crossings of 1/2. If there are m t-sawtooth functions, then, by Lemma 4.1, the resultant is a piecewise affine function over mt intervals. The main theorem now directly follows from Lemma 4.2.

Theorem 4.3 Let positive integer k, number of layers l, and number of nodes per layer m be given. Given a t-sawtooth σ : R → R and n := 2^k points as specified by the n-ap, then

$$
\min_f E(f) \ge \frac{n - 4(tm)^l}{3n}.
$$

From this result one can say, for example, that on the n-ap one needs m = 2^{k−3} units when classifying with a ReLU activated shallow network versus only m = 2^{(k−2)/l − 1} units per layer for an l ≥ 2 deep network.
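A quick numeric sketch (ours, with k = 4) confirms both sides of this comparison: the k-fold composition of the mirror map fits the n-ap exactly, while Lemma 4.2 gives a positive error lower bound for any single 2-sawtooth classifier:

```python
# Sketch (our code, k = 4): the k-fold composition of the mirror map classifies
# the n-ap (n = 2^k alternating points) perfectly, while Lemma 4.2 forces a
# positive error for any 2-sawtooth (shallow, single-unit) classifier.
def f_m(x):
    return 2 * x if x <= 0.5 else 2 * (1 - x)

def f_m_k(x, k):
    for _ in range(k):
        x = f_m(x)
    return x

k = 4
n = 2 ** k
points = [(i * 2 ** (-k), i % 2) for i in range(n)]          # the n-ap
errors = sum(1 for x, y in points
             if (1 if f_m_k(x, k) >= 0.5 else 0) != y)
shallow_bound = (n - 4 * 2) / (3 * n)                        # Lemma 4.2, t = 2
print(errors / n, shallow_bound)
```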
Research on deep learning is very active, and many questions must still be addressed before deep learning is fully understood. However, the purpose of these examples is to build intuition and motivate the need for many hidden layers, in addition to the effect of increasing the number of neurons in each hidden layer. In the remaining part of this chapter we turn towards the practical application of neural networks and consider some of the primary challenges in the context of financial modeling. We shall begin by considering how to preserve the shape of functions being approximated and, indeed, how to train and evaluate a network.

3 Convexity and Inequality Constraints

It may be necessary to restrict the range of f̂(x) or impose certain properties which are known about the shape of the function f(x) being approximated. For example, V = f(S) might be an option price, with S the value of the underlying asset, and convexity and non-negativity of f̂(S) are then necessary. Consider the following feedforward network architecture F_{W,b}(X) : R^p → R^d:

$$
\hat{Y} = F_{W,b}(X) = f^{(L)}_{W^{(L)},b^{(L)}} \circ \cdots \circ f^{(1)}_{W^{(1)},b^{(1)}}(X),
$$

where

$$
f^{(\ell)}_{W^{(\ell)},b^{(\ell)}}(x) = \sigma^{(\ell)}(W^{(\ell)}x + b^{(\ell)}), \quad \forall \ell \in \{1, \ldots, L\}.
$$

Convexity

For convexity of Ŷ w.r.t. x, the activation function, σ(x), must be a convex function of x. For the avoidance of doubt, this convexity constraint should not be confused with convexity of the loss function w.r.t. the weights as in, for example, Bengio et al. (2006). Examples³ include ReLU(x) := max(x, 0) and softplus(x; t) := (1/t) ln(1 + exp{tx}). For this class of activation functions, the semi-affine function $f^{(\ell)}_{W^{(\ell)},b^{(\ell)}}(x) = \sigma(W^{(\ell)}x + b^{(\ell)})$ must also be convex in x, since a convex function of an affine function of x is also convex in x. The composition, $f^{(\ell+1)}_{W^{(\ell+1)},b^{(\ell+1)}} \circ f^{(\ell)}_{W^{(\ell)},b^{(\ell)}}(x)$, is convex if and only if $f^{(\ell+1)}_{W^{(\ell+1)},b^{(\ell+1)}}(x)$ is non-decreasing convex and $f^{(\ell)}_{W^{(\ell)},b^{(\ell)}}(x)$ is convex.
The proof is left to the reader as a straightforward exercise. Hence, for convexity of f̂(x) = F_{W,b}(x) w.r.t. x, we require that the weights in all but the first layer be non-negative:

$$
w^{(\ell)}_{ij} \ge 0, \quad \forall i, j, \; \forall \ell \in \{2, \ldots, L\}.
$$

The constraints on the weights needed to enforce convexity guarantee non-negative output if the bias $b^{(L)}_i \ge 0$, ∀i ∈ {1, . . . , d}, and σ(x) ≥ 0, ∀x. Since $w^{(L)}_{ij} \ge 0$, ∀i, j, it follows that $w^{(L)}_{ij}\sigma(I_i) \ge 0$, and, with non-negative bias terms, f̂_i is non-negative.

We now separately consider bounding the network output, independently of imposing convexity on f̂_i(x) w.r.t. x. If we choose a bounded activation function $\sigma \in [\underline{\sigma}, \bar{\sigma}]$, then we can easily impose linear inequality constraints to ensure that f̂_i ∈ [c_i, d_i],

$$
c_i \le \hat{f}_i(x) \le d_i, \quad d_i > c_i, \; i \in \{1, \ldots, d\},
$$

by setting

$$
b^{(L)}_i = c_i - \sum_{j=1}^{n^{(L-1)}} \min(s_{ij}\underline{\sigma},\, s_{ij}\bar{\sigma})\,|w^{(L)}_{ij}|, \quad s_{ij} := \mathrm{sign}(w^{(L)}_{ij}).
$$

Note that the expression inside the min function can be simplified further:

$$
\min(s_{ij}\underline{\sigma},\, s_{ij}\bar{\sigma})\,|w^{(L)}_{ij}| = \min(w^{(L)}_{ij}\underline{\sigma},\, w^{(L)}_{ij}\bar{\sigma}).
$$

Training of the weights and biases is then a constrained optimization problem with the linear constraints

$$
\sum_{j=1}^{n^{(L-1)}} \max(s_{ij}\underline{\sigma},\, s_{ij}\bar{\sigma})\,|w^{(L)}_{ij}| \le d_i - b^{(L)}_i,
$$

which can be solved with the method of Lagrange multipliers or otherwise. If we require that f̂_i be both convex and bounded in the interval [c_i, d_i], then the additional constraint $w^{(L)}_{ij} \ge 0$, ∀i, j, is of course needed, and the above simplifies to

$$
b^{(L)}_i = c_i - \underline{\sigma}\sum_{j=1}^{n^{(L-1)}} w^{(L)}_{ij},
$$

and solving the underdetermined system for $w^{(L)}_{ij}$, ∀j:

$$
\bar{\sigma}\sum_{j=1}^{n^{(L-1)}} w^{(L)}_{ij} \le d_i - b^{(L)}_i \;\Longrightarrow\; \sum_{j=1}^{n^{(L-1)}} w^{(L)}_{ij} \le \frac{d_i - c_i}{\bar{\sigma} - \underline{\sigma}}.
$$

The following toy examples provide simplified versions of constrained learning problems that arise in derivative modeling and calibration.

³ The parameterized softplus function σ(x; t) = (1/t) ln(1 + exp{tx}), with a model parameter t ≫ 1, converges to the ReLU function in the limit t → ∞.
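A minimal sketch of the construction follows, assuming a single softplus hidden layer with non-negative output weights and bias (the random weight values are purely illustrative); convexity and non-negativity of the output are checked via second differences on a grid:

```python
import math, random

# Minimal sketch (assumed architecture, not the book's code): one softplus
# hidden layer with non-negative output weights and bias gives an output that
# is convex and non-negative in x; both properties are checked numerically.
random.seed(0)

def softplus(z, t=5.0):
    return math.log1p(math.exp(t * z)) / t    # t*z stays small on this domain

n_hidden = 8
W1 = [random.uniform(-1, 1) for _ in range(n_hidden)]   # first layer: unconstrained
b1 = [random.uniform(-1, 1) for _ in range(n_hidden)]
W2 = [random.uniform(0, 1) for _ in range(n_hidden)]    # second layer: w >= 0
b2 = 0.1                                                # non-negative bias

def f_hat(x):
    return b2 + sum(w2 * softplus(w1 * x + b) for w1, b, w2 in zip(W1, b1, W2))

xs = [i / 50 - 1 for i in range(101)]                   # grid on [-1, 1]
second_diffs = [f_hat(xs[i - 1]) - 2 * f_hat(xs[i]) + f_hat(xs[i + 1])
                for i in range(1, 100)]
print(min(second_diffs), min(f_hat(x) for x in xs))     # both should be >= 0
```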
The examples are intended only to illustrate the methodology introduced here. The first example is motivated by the need to learn an arbitrage-free option price as a function of the underlying asset price. In particular, there are three scenarios where neural networks, and more broadly supervised machine learning, are useful for pricing. First, they provide a "model-free" framework, where no data generation process is assumed for the underlying dynamics. Second, machine learning can price complex derivatives for which no analytic solution is known. Finally, machine learning does not suffer from the curse of dimensionality w.r.t. the input space and can thus scale to basket options, i.e., options on many underlying assets. Each of these aspects merits further exploration, and our example illustrates some of the challenges with learning pricing functions. Perhaps the single largest defect of conventional derivative pricing models, however, is their calibration to data. Machine learning, too, provides an answer here: it provides a method for learning the relationship between market and contract variables and the model parameters.

Example 4.3 Approximating Option Prices

The payoff of a European call option at expiry time T is V_T = max(S_T − K, 0) and is convex with respect to S. Under the risk-neutral measure, the option price at time t is the conditional expectation V_t = E_t[exp{−r(T − t)}V_T]. Since the conditional expectation is a linear operator, it preserves the convexity of the payoff function, so that the option price is always convex w.r.t. the underlying price. Thus the second derivative, γ, is always non-negative. Furthermore, the option price must always be non-negative. Let us approximate the surface of a European call option with strike K over all underlying values S_t ∈ (0, S̄). The input variable X ∈ R_+ is the underlying asset price and the outputs are call prices, so that the data is {S_i, V_i}.
We use a neural network to learn the relation V = f(S) and enforce the properties that f is non-negative and convex w.r.t. S. In the following example, we train the MLP over a uniform grid of 100 training points S_i in [0.001, 300], with V_i = f(S_i) generated by the Black–Scholes (BS) pricing formula. The risk-free rate is r = 0.01, the strike is 130, the volatility is σ, and the time to maturity is T = 2.0. The test data of 100 observations lie on a different uniform grid over a wider domain, [0.001, 600]. The network uses one hidden layer (L = 2) with 100 units, a softplus activation function, and $w^{(L)}_{ij}, b^{(L)}_i \ge 0$, ∀i, j. Figure 4.14 compares the prediction with the BS model over the test set. Ŷ is observed to be convex w.r.t. S because $w^{(2)}_{ij}$ is non-negative. Additionally, because $b^{(2)}_i \ge 0$ and $\underline{\sigma} = 0$, Ŷ ≥ 0. The figure also compares the Black–Scholes formula for the delta of the call option, Δ(X), the derivative of the price w.r.t. S, with the gradient of Ŷ:

$$
\hat{\Delta}(X) = \partial_X \hat{Y} = (W^{(2)})^T D W^{(1)}, \quad D_{ii} = \frac{1}{1 + \exp\{-\langle w^{(1)}_{i}, X\rangle - b^{(1)}_i\}}.
$$

Under the BS model, the delta of a call option is in the interval [0, 1]. Note that the delta, although observed positive here, could be negative since there are no restrictions on W^(1). Similarly, the delta approximation is observed to exceed unity. Thus, additional constraints are needed to bound the delta. For this architecture, imposing $w^{(1)}_{ij} \ge 0$ preserves the non-negativity of the delta, and $\sum_{j=1}^{n} w^{(2)}_{j} w^{(1)}_{ji} \le 1$, ∀i, bounds the delta at unity.

Fig. 4.14 (a) Estimated call prices: the out-of-sample call prices are estimated using a single-layer neural network with constraints to ensure non-negativity and convexity of the price approximation w.r.t. the underlying price S. (b) Derived delta: the analytic derivative of Ŷ is taken as the approximation of delta and compared over the test set with the Black–Scholes delta.
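The shape properties that the constrained network is designed to reproduce can be verified directly on the Black–Scholes prices used as training data. A sketch follows (the volatility value sigma = 0.2 is an assumption for illustration, since the text does not state it):

```python
import math

# Sketch: shape properties of the Black-Scholes call price that the constrained
# network must reproduce: non-negativity, convexity in S, and delta in [0, 1].
# The volatility sigma = 0.2 is an illustrative assumption.
def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K=130.0, T=2.0, r=0.01, sigma=0.2):
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

S_grid = [1.0 + 2.0 * i for i in range(300)]                     # S in [1, 599]
prices = [bs_call(S) for S in S_grid]
deltas = [bs_call(S + 0.5) - bs_call(S - 0.5) for S in S_grid]   # secant, dS = 1
convex = [prices[i - 1] - 2 * prices[i] + prices[i + 1] for i in range(1, 299)]
print(min(prices), min(convex), min(deltas), max(deltas))
```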
We observe that additional constraints on the weights are needed to ensure that ∂_X Ŷ ∈ [0, 1].

Example 4.4 Calibrating Options

The goal is to learn the inverse of the Black–Scholes formula as a function of moneyness, M = S/K. For simplicity, we consider the calibration of a chain of European in-the-money put or in-the-money call equity options with fixed time to maturity only. The input is the moneyness of each option in the chain. The output of the neural network is the BS implied volatility, that is, the implied volatility needed to calibrate the BS model to the option price data corresponding to each moneyness. The neural network preserves the positivity of the volatility and, in this example, imposes a convexity constraint on the surface w.r.t. moneyness. The latter ensures consistency with liquid option markets: the implied volatility for both puts and calls typically increases monotonically as the strike price moves away from the current stock price, the so-called implied volatility smile. In some markets, such as the equity markets, an implied volatility skew occurs because money managers usually prefer to write calls over puts.

The input variable X ∈ R_+ is moneyness and the output is volatility, so that the training data is {M_i, σ_i}. We use a neural network to learn the relation σ = f(M) and enforce the properties that f is non-negative and convex w.r.t. M. Note, in this example, that we do not directly learn the relationship between option prices and implied volatilities. Instead, we learn how a BS root finder approximates the implied volatility as a function of the moneyness.

Example 4.4 (continued) In the following example, we train the MLP over a uniform grid of n = 100 training points M_i in [0.5, 1 × 10^4], where σ_i = f(M_i) is generated by using a root finder for V(σ; S, K_i, τ, r) − V̂_i = 0, ∀i = 1, . . . , n, with τ = 0.2 years, using the option price with strike K_i and time to maturity τ. The risk-free rate is r = 0.01.
The test data of 100 observations lie on a different uniform grid over a wider domain, [0.4166, 1 × 10^4]. The network uses one hidden layer (L = 2) with 100 units, a softplus activation function, and $w^{(L)}_{ij}, b^{(L)}_i \ge 0$, ∀i, j. Figure 4.15 compares the out-of-sample model output with the root finder for the BS model over the test set. Ŷ is observed to be convex w.r.t. M because $w^{(2)}_{ij}$ is non-negative. Additionally, because $b^{(2)}_i \ge 0$ and $\underline{\sigma} = 0$, Ŷ ≥ 0.

Fig. 4.15 The out-of-sample MLP estimation of implied volatility as a function of moneyness is compared with the true values

No-Arbitrage Pricing

The previous examples are simple enough to illustrate the application of constraints in neural networks. However, one would typically need to enforce more complex constraints for no-arbitrage pricing and calibration. Pricing approximations should be monotonically increasing w.r.t. maturity and convex w.r.t. strike. Such constraints require that the neural network be fitted with the additional input variables K and T.

Accelerating Calibrations

One promising direction, which does not require neural network derivative pricing, is to simply learn a stochastic volatility based pricing model, such as the Heston model, as a function of underlying price, strike, and maturity, and then use the neural network pricing function to calibrate the pricing model. Such a calibration avoids fitting a few parameters to the chain of observed option prices or implied volatilities. Replacement of expensive pricing functions, which may require FFTs or Monte Carlo methods, with trained neural networks reduces calibration time considerably. See Horvath et al. (2019) for further details.

Dupire Local Volatility

Another challenge is how to price exotic options consistently with the market prices of their European counterparts. The former are typically traded over-the-counter, whereas the latter are often exchange traded and therefore "fully" observable.
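The calibration step that a learned pricing function would accelerate can be sketched as follows: recovering implied volatility from a price by bisection on the Black–Scholes formula (zero rates and all parameter values are illustrative assumptions, and the function names are ours):

```python
import math

# Sketch of the calibration step a learned pricer would accelerate: recover
# implied volatility by bisection on the Black-Scholes price (zero rates and
# all parameter values are illustrative assumptions).
def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, sigma):
    d1 = (math.log(S / K) + 0.5 * sigma ** 2 * T) / (sigma * math.sqrt(T))
    return S * norm_cdf(d1) - K * norm_cdf(d1 - sigma * math.sqrt(T))

def implied_vol(price, S, K, T, lo=1e-4, hi=5.0, tol=1e-10):
    # the BS price is increasing in sigma, so bisection converges
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

target = bs_call(1.0, 1.1, 0.2, 0.25)        # price generated with sigma = 0.25
print(implied_vol(target, 1.0, 1.1, 0.2))    # -> approximately 0.25
```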
To fix ideas, let C(K, T) denote an observed call price for some fixed strike, K, maturity, T, and underlying price S_t. Modulo a short rate and dividend term, the unique "effective" volatility, σ_0², is given by the Dupire formula:

$$
\sigma_0^2 = \frac{2\,\partial_T C(K, T)}{K^2\,\partial^2_K C(K, T)}.
$$

The challenge arises when calibrating the local volatility model: extracting the effective volatility from market option prices is an ill-posed inverse problem. Such a challenge has recently been addressed by Chataigner et al. (2020) in their paper on deep local volatility.

? Multiple Choice Question 2

Which of the following statements are true?

1. A feedforward architecture is always convex w.r.t. each input variable if every activation function is convex and the weights are constrained to be either all positive or all negative.
2. A feedforward architecture with positive weights is a monotonically increasing function of the input for any choice of monotonically increasing activation function.
3. The weights of a feedforward architecture must be constrained for the output of a feedforward network to be bounded.
4. The bias terms in a network simply shift the output and have no effect on the derivatives of the output w.r.t. the input.

3.1 Similarity of MLPs with Other Supervised Learners

Under special circumstances, MLPs are functionally equivalent to a number of other machine learning techniques. As previously mentioned, when the network has no hidden layer, it is either a linear regression or a logistic regression. A neural network with one hidden layer is essentially a projection pursuit regression (PPR): both project the input vector onto a hyperplane, apply a non-linear transformation into feature space, and follow with an affine transformation.
The mapping of input vectors to feature space by the hidden layer is conceptually similar to kernel methods, such as support vector machines (SVMs), which map to a kernel space where classification and regression are subsequently performed. Boosted decision stumps, i.e., one-level boosted decision trees, can even be expressed as a single-layer MLP. Caution must be exercised in over-stretching these conceptual similarities. Data generation assumptions aside, there are differences in the classes of non-linear functions and learning algorithms used. For example, the non-linear function being fitted in PPR can be different for each combination of input variables and is sequentially estimated before updating the weights. In contrast, neural networks fix these functions and estimate all the weights belonging to a single layer simultaneously. A summary of other machine learning approaches is given in Table 4.1, and we refer the reader to numerous excellent textbooks (Bishop 2006; Hastie et al. 2009) covering such methods.
Table 4.1 This table compares supervised machine learning algorithms (reproduced from Mullainathan and Spiess (2017)); for each function class F (and its parameterization), the corresponding regularizer R(f) or tuning parameters are listed.

Global/parametric predictors:
- Linear β'x (and generalizations): subset selection ||β||_0 = Σ_{j=1}^k 1_{β_j ≠ 0}; LASSO ||β||_1 = Σ_{j=1}^k |β_j|; ridge ||β||_2² = Σ_{j=1}^k β_j²; elastic net α||β||_1 + (1 − α)||β||_2².

Local/non-parametric predictors:
- Decision/regression trees: depth, number of nodes/leaves, minimal leaf size, information gain at splits.
- Random forest (linear combination of trees): number of trees, number of variables used in each tree, size of bootstrap sample, complexity of trees (see above).
- Nearest neighbors: number of neighbors.
- Kernel regression: kernel bandwidth.

Mixed predictors:
- Deep learning, neural nets, convolutional neural networks: number of levels, number of neurons per level, connectivity between neurons.
- Splines: number of knots, order.

Combined predictors:
- Bagging (unweighted average of predictors from bootstrap draws): number of draws, size of bootstrap samples (and individual regularization parameters).
- Boosting (linear combination of predictions of residuals): learning rate, number of iterations (and individual regularization parameters).
- Ensemble (weighted combination of different predictors): ensemble weights (and individual regularization parameters).

4 Training, Validation, and Testing

Deep learning is a data-driven approach which focuses on finding structure in large datasets. The main tools for variable or predictor selection are regularization and dropout. Out-of-sample predictive performance helps assess the optimal amount of regularization, i.e., the problem of finding the optimal hyperparameter selection. There is still a very Bayesian flavor to the modeling procedure, and the modeler follows two key steps:

1. Training phase: pair the input with the expected output until a sufficiently close match has been found. Gauss' original least squares procedure is a common example.
2.
Validation and test phase: assess how well the deep learner has been trained for out-of-sample prediction. This depends on the size of your data, the value you would like to predict, the input, etc., and various model properties, including the mean error for numeric predictors and classification errors for classifiers.

Often, the validation phase is split into two parts.

2.a First, estimate the out-of-sample accuracy of all approaches (a.k.a. validation).
2.b Second, compare the models and select the best performing approach based on the validation data (a.k.a. verification).

Step 2.b can be skipped if there is no need to select an appropriate model from several rival approaches. The researcher then only needs to partition the dataset into a training and a test set.

To construct and evaluate a learning machine, we start with training data of input–output pairs D = {Y^(i), X^(i)}_{i=1}^N. The goal is to find the machine learner Y = F(X), where we have a loss function L(Y, Ŷ) for a predictor, Ŷ, of the output signal, Y. In many cases, there is an underlying probability model, p(Y | Ŷ); the loss function is then the negative log probability L(Y, Ŷ) = −log p(Y | Ŷ). For example, under a Gaussian model, L(Y, Ŷ) = ||Y − Ŷ||² is an L² norm; for binary classification, L(Y, Ŷ) = −Y log Ŷ is the negative cross-entropy. In its simplest form, we then solve an optimization problem

$$
\underset{W,b}{\mathrm{minimize}} \; f(W, b) + \lambda\phi(W, b), \quad f(W, b) = \frac{1}{N}\sum_{i=1}^{N} L(Y^{(i)}, \hat{Y}(X^{(i)})),
$$

with a regularization penalty, φ(W, b). The loss function is non-convex, possessing many local minima, and it is generally difficult to find a global minimum. An important assumption, which is often not explicitly stated, is that the errors are assumed to be "homoscedastic." Homoscedasticity is the assumption that the error has an identical distribution over each observation. This assumption can be relaxed by weighting the observations differently.
However, we shall regard such extensions as straightforward and compatible with the algorithms for solving the unweighted optimization problem. Here λ is a global regularization parameter which we tune using the out-of-sample predictive mean squared error (MSE) of the model. The regularization penalty, φ(W, b), introduces a bias–variance tradeoff. ∇L is given in closed form by a chain rule and, through back-propagation, each layer's weights Ŵ^(ℓ) are fitted with stochastic gradient descent.

Recall from Chap. 1 that a 1-of-K encoding is used for a categorical response, so that G is a K-binary vector G ∈ [0, 1]^K, and the value k is represented as G_k = 1 and G_j = 0, ∀j ≠ k, where ||G||_1 = 1. The predictor is given by Ĝ_k := g_k(X|(W, b)), with ||Ĝ||_1 = 1, and the loss function is the negative cross-entropy for discrete random variables

$$
L(G, \hat{G}(X)) = -G^T \ln \hat{G}.
$$

For example, if there are K = 3 classes, then G = [0, 0, 1], G = [0, 1, 0], or G = [1, 0, 0] represents the three classes. When K > 2, the output layer has K neurons and the loss function is the negative cross-entropy

$$
L(G, \hat{G}(X)) = -\sum_k G_k \ln \hat{G}_k.
$$

For the case when K = 2, i.e. binary classification, there is only one neuron in the output layer and the loss function is

$$
L(G, \hat{G}(X)) = -G\ln \hat{G} - (1 - G)\ln(1 - \hat{G}),
$$

where Ĝ = g_1(X|(W, b)) = σ(I^{(L−1)}) and σ is a sigmoid function. We observe that when there are no hidden layers, I^(1) = W^(1)X + b^(1) and g_1(X|(W, b)) is a logistic regression. The softmax function, σ_s, generalizes binary classifiers to multi-classifiers. σ_s : R^K → [0, 1]^K is a continuous K-vector function given by

$$
\sigma_s(x)_k = \frac{\exp(x_k)}{\|\exp(x)\|_1}, \quad k \in \{1, \ldots, K\},
$$

where ||σ_s(x)||_1 = 1. The softmax function is used to represent a probability distribution over K possible states:

$$
\hat{G}_k = P(G = k \mid X) = \sigma_s(WX + b)_k = \frac{\exp((WX + b)_k)}{\|\exp(WX + b)\|_1}.
$$
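A minimal sketch of the softmax and the one-hot cross-entropy (our code; the max-shift is a standard numerical-stability device, not from the text):

```python
import math

# Sketch (our code): softmax over K logits and the one-hot cross-entropy loss.
def softmax(x):
    m = max(x)                               # shift for numerical stability
    e = [math.exp(v - m) for v in x]
    s = sum(e)
    return [v / s for v in e]

def cross_entropy(G, G_hat):
    # L(G, G_hat) = -sum_k G_k ln G_hat_k, with one-hot G
    return -sum(g * math.log(gh) for g, gh in zip(G, G_hat) if g > 0)

logits = [2.0, 1.0, 0.1]
G_hat = softmax(logits)
G = [1, 0, 0]                                # true class in 1-of-K encoding
print(G_hat, cross_entropy(G, G_hat))
```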
> Derivative of the Softmax Function

Using the quotient rule, $\left(\frac{g(x)}{h(x)}\right)' = \frac{g'(x)h(x) - h'(x)g(x)}{[h(x)]^2}$, the derivative of σ := σ_s(x) can be written as

$$
\frac{\partial \sigma_i}{\partial x_i} = \frac{\exp(x_i)\|\exp(x)\|_1 - \exp(x_i)\exp(x_i)}{\|\exp(x)\|_1^2} = \frac{\exp(x_i)}{\|\exp(x)\|_1}\cdot\frac{\|\exp(x)\|_1 - \exp(x_i)}{\|\exp(x)\|_1} = \sigma_i(1 - \sigma_i). \tag{4.40–4.42}
$$

For the case i ≠ j, the derivative is

$$
\frac{\partial \sigma_i}{\partial x_j} = \frac{0 - \exp(x_i)\exp(x_j)}{\|\exp(x)\|_1^2} = -\frac{\exp(x_j)}{\|\exp(x)\|_1}\cdot\frac{\exp(x_i)}{\|\exp(x)\|_1} = -\sigma_j\sigma_i. \tag{4.43–4.45}
$$

This can be written compactly as $\frac{\partial \sigma_i}{\partial x_j} = \sigma_i(\delta_{ij} - \sigma_j)$, where δ_{ij} is the Kronecker delta function.

5 Stochastic Gradient Descent (SGD)

The stochastic gradient descent (SGD) method, or a variation of it, is typically used to find the deep learning model weights by minimizing the penalized loss function, f(W, b). The method minimizes the function by taking a negative step along an estimate g^k of the gradient ∇f(W^k, b^k) at iteration k. The approximate gradient is estimated by

$$
g^k = \frac{1}{b_k}\sum_{i \in E_k} \nabla L_{W,b}(Y^{(i)}, \hat{Y}^k(X^{(i)})),
$$

where E_k ⊂ {1, . . . , N} and b_k = |E_k| is the number of elements in E_k (a.k.a. the batch size). When b_k > 1 the algorithm is called batch SGD, and simply SGD otherwise. A usual strategy for choosing the subset E_k is to go cyclically and pick consecutive elements of {1, . . . , N}, with E_{k+1} = [E_k mod N] + 1, where the modular arithmetic is applied to the set. The approximated direction g^k is calculated using a chain rule (a.k.a. back-propagation) for deep learning. It is an unbiased estimator of ∇f(W^k, b^k), and we have

$$
E(g^k) = \frac{1}{N}\sum_{i=1}^{N} \nabla L_{W,b}\left(Y^{(i)}, \hat{Y}^k(X^{(i)})\right) = \nabla f(W^k, b^k).
$$

At each iteration, we update the solution

$$
(W, b)^{k+1} = (W, b)^k - t_k g^k.
$$

Deep learning applications use a step size t_k (a.k.a. the learning rate) that is either constant or follows a reduction strategy of the form t_k = a exp{−kt}.
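The cyclic mini-batch scheme can be sketched on a least-squares toy problem (our code; all hyperparameter values are illustrative):

```python
import random

# Sketch: cyclic mini-batch SGD for least squares f(w) = (1/N) sum (y_i - w x_i)^2,
# with the gradient estimated on consecutive batches E_k of size b_k = b.
# All hyperparameter values are illustrative.
random.seed(1)
N, w_true = 200, 3.0
X = [random.uniform(-1, 1) for _ in range(N)]
Y = [w_true * x for x in X]                  # noiseless, for a clear check

w, b, t = 0.0, 20, 0.1
for k in range(500):
    start = (k * b) % N                      # cycle through the data
    batch = [i % N for i in range(start, start + b)]
    g = sum(-2 * X[i] * (Y[i] - w * X[i]) for i in batch) / b
    w -= t * g                               # (W, b)^{k+1} = (W, b)^k - t_k g^k
print(w)                                     # -> approximately 3.0
```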
Appropriate learning rates, or the hyperparameters of the reduction schedule, are usually found empirically from numerical experiments and observations of the loss function progression. In order to update the weights across the layers, back-propagation is needed and will now be explained.

? Multiple Choice Question 3

Which of the following statements are true?

1. The training of a neural network involves minimizing a loss function w.r.t. the weights and biases over the training data.
2. L1 regularization is used during model selection to penalize models with too many parameters.
3. In deep learning, regularization can be applied to each layer of the network.
4. Back-propagation uses the chain rule to update the weights of the network but is not guaranteed to converge to a unique minimum.
5. Stochastic gradient descent and back-propagation are two different optimization algorithms for minimizing the loss function and the user must choose the best one.

5.1 Back-Propagation

Staying with a multi-classifier, we can begin by informally motivating the need for a recursive approach to updating the weights and biases. Let us express Ŷ ∈ [0, 1]^K as a function of the final weight matrix W ∈ R^{K×M} and output bias b ∈ R^K, so that

$$
\hat{Y}(W, b) = \sigma \circ I(W, b),
$$

where the input function I : R^{K×M} × R^K → R^K is of the form I(W, b) := WX + b and σ : R^K → R^K is the softmax function. Applying the multivariate chain rule gives the Jacobian of Ŷ(W, b):

$$
\nabla \hat{Y}(W, b) = \nabla(\sigma \circ I)(W, b) = \nabla\sigma(I(W, b)) \cdot \nabla I(W, b). \tag{4.47–4.48}
$$

Updating the Weight Matrices

Recall that the loss function for a multi-classifier is the cross-entropy $L(Y, \hat{Y}(X)) = -\sum_k Y_k \ln \hat{Y}_k$. Since Y is a constant vector, we can express the cross-entropy as a function of (W, b): L(W, b) = L ∘ σ(I(W, b)). Applying the multivariate chain rule gives

$$
\nabla L(W, b) = \nabla(L \circ \sigma)(I(W, b)) = \nabla L(\sigma(I(W, b))) \cdot \nabla\sigma(I(W, b)) \cdot \nabla I(W, b).
$$
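Combining the softmax Jacobian with the cross-entropy gradient collapses the gradient w.r.t. the logits I to the simple form Ŷ − Y. A finite-difference sketch (our code) checks this:

```python
import math

# Sketch (our code): for softmax outputs with one-hot cross-entropy, the
# gradient of the loss w.r.t. the logits I collapses to Y_hat - Y; we check
# this against a central finite difference.
def softmax(x):
    m = max(x)
    e = [math.exp(v - m) for v in x]
    s = sum(e)
    return [v / s for v in e]

def loss(I, Y):
    return -sum(y * math.log(p) for y, p in zip(Y, softmax(I)) if y > 0)

I = [0.5, -1.2, 0.3]
Y = [0, 1, 0]                                      # one-hot label
analytic = [p - y for p, y in zip(softmax(I), Y)]  # Y_hat - Y
h = 1e-6
numeric = []
for j in range(3):
    I_p, I_m = I[:], I[:]
    I_p[j] += h
    I_m[j] -= h
    numeric.append((loss(I_p, Y) - loss(I_m, Y)) / (2 * h))
print(analytic, numeric)
```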
Stochastic gradient descent is used to find the minimum

$$
(\hat{W}, \hat{b}) = \arg\min_{W,b} \frac{1}{N}\sum_{i=1}^{N} L(y_i, \hat{Y}_{W,b}(x_i)).
$$

Because of the compositional form of the model, the gradient must be derived using the chain rule for differentiation. This can be computed by a forward and then a backward sweep ("back-propagation") over the network, keeping track only of quantities local to each neuron.

Forward Pass

Set Z^(0) = X and, for ℓ ∈ {1, . . . , L}, set

$$
Z^{(\ell)} = f^{(\ell)}_{W^{(\ell)},b^{(\ell)}}(Z^{(\ell-1)}) = \sigma^{(\ell)}(W^{(\ell)}Z^{(\ell-1)} + b^{(\ell)}).
$$

On completion of the forward pass, the error Ŷ − Y is evaluated using Ŷ := Z^(L).

Back-Propagation

Define the back-propagation error δ^(ℓ) := ∇_{b^(ℓ)}L. Given δ^(L) = Ŷ − Y, for ℓ = L − 1, . . . , 1 the following recursion gives the updated back-propagation error and weight update for layer ℓ:

$$
\delta^{(\ell)} = (\nabla_{I^{(\ell)}}\sigma^{(\ell)})\,W^{(\ell+1)T}\delta^{(\ell+1)}, \tag{4.55}
$$

$$
\nabla_{W^{(\ell)}}L = \delta^{(\ell)} \otimes Z^{(\ell-1)}, \tag{4.56}
$$

where ⊗ is the outer product of two vectors. See Appendix "Back-Propagation" for a derivation of Eqs. 4.55 and 4.56. The weights and biases are updated for all ℓ ∈ {1, . . . , L} according to the expressions

$$
\Delta W^{(\ell)} = -\gamma \nabla_{W^{(\ell)}}L = -\gamma\, \delta^{(\ell)} \otimes Z^{(\ell-1)}, \quad \Delta b^{(\ell)} = -\gamma\, \delta^{(\ell)},
$$

where γ is a user-defined learning rate parameter. Note the negative sign: this indicates that weight changes are in the direction of decrease in error. Mini-batch or off-line updates involve using many observations of X at the same time. The batch size refers to the number of observations of X used in each pass. An epoch refers to a round trip (i.e., a forward plus backward pass) over all training samples.

Example 4.5 Back-Propagation with a Three-Layer Network

Suppose that a feedforward network classifier has two sigmoid activated hidden layers and a softmax activated output layer. After a forward pass, the values of {Z^(ℓ)}_{ℓ=1}^3 are stored and the error Ŷ − Y, where Ŷ = Z^(3), is calculated for one observation of X.
The back-propagation errors and weight updates in the final layer are evaluated:

δ^(3) = Ŷ − Y,
∇_{W^(3)} L = δ^(3) ⊗ Z^(2).

Now using Eqs. 4.55 and 4.56, we update the back-propagation error and weight updates for hidden layer 2:

δ^(2) = Z^(2) ∘ (1 − Z^(2)) ∘ (W^(3))^T δ^(3),
∇_{W^(2)} L = δ^(2) ⊗ Z^(1).

Repeating for hidden layer 1:

δ^(1) = Z^(1) ∘ (1 − Z^(1)) ∘ (W^(2))^T δ^(2),
∇_{W^(1)} L = δ^(1) ⊗ X.

We update the weights and biases using Eqs. 4.57 and 4.58, so that b^(3) → b^(3) − γ δ^(3), W^(3) → W^(3) − γ δ^(3) ⊗ Z^(2), and repeat for the other weight-bias pairs, {(W^(ℓ), b^(ℓ))}, ℓ = 1, 2. See the back-propagation notebook for further details of a worked example in Python and then complete Exercise 4.12.

5.2 Momentum

One disadvantage of SGD is that a descent in f is not guaranteed at every iteration, or can be very slow. Furthermore, the variance of the gradient estimate g_k need not vanish as the iterates converge to a solution. To address those problems, coordinate descent (CD) and momentum-based modifications of SGD are used. Each CD step evaluates a single component of the gradient ∇f at the current point and then updates the corresponding component of the variable vector in the negative gradient direction. The momentum-based versions of SGD, or the so-called accelerated algorithms, were originally proposed by Nesterov (2013). The use of momentum in the choice of step in the search direction combines new gradient information with the previous search direction. These methods are also related to other classical techniques such as the heavy-ball method and conjugate gradient methods. Empirically, momentum-based methods show far better convergence for deep learning networks. The key idea is that the gradient only influences changes in the "velocity" of the update:

v_{k+1} = μ v_k − t_k g_k,
(W, b)_{k+1} = (W, b)_k + v_{k+1}.

The parameter μ controls the damping effect on the rate of update of the variables.
The physical analogy is a reduction in kinetic energy that allows the movement to "slow down" near the minima. This parameter is also chosen empirically using cross-validation.

Nesterov's momentum method (a.k.a. Nesterov acceleration) instead calculates the gradient at the point predicted by the momentum step. We can think of it as a look-ahead strategy. The resulting update equations are

v_{k+1} = μ v_k − t_k g((W, b)_k + v_k),
(W, b)_{k+1} = (W, b)_k + v_{k+1}.

Another popular modification to the SGD method is the AdaGrad method, which adaptively scales each of the learning parameters at each iteration:

c_{k+1} = c_k + g((W, b)_k)^2,
(W, b)_{k+1} = (W, b)_k − t_k g((W, b)_k)/(√c_{k+1} + a),

where a is usually a small number, e.g. a = 10^{−6}, that prevents dividing by zero. RMSProp takes the AdaGrad idea further and places more weight on recent values of the gradient squared to scale the update direction, i.e. we have

c_{k+1} = d c_k + (1 − d) g((W, b)_k)^2.

The Adam method combines both RMSProp and momentum methods and leads to the following update equations:

v_{k+1} = μ v_k − (1 − μ) t_k g((W, b)_k + v_k),
c_{k+1} = d c_k + (1 − d) g((W, b)_k)^2,
(W, b)_{k+1} = (W, b)_k − t_k v_{k+1}/(√c_{k+1} + a).

Second-order methods solve the optimization problem by solving the system of nonlinear equations ∇f(W, b) = 0 with Newton's method:

(W, b)^+ = (W, b) − {∇²f(W, b)}^{−1} ∇f(W, b).

SGD simply approximates ∇²f(W, b) by 1/t. The advantages of a second-order method include much faster convergence rates and insensitivity to the conditioning of the problem. In practice, second-order methods are rarely used for deep learning applications (Dean et al. 2012). The major disadvantage is the inability to train the model using batches of data as SGD does. Since typical deep learning models rely on large-scale datasets, second-order methods become prohibitive in memory and computation for even modest-sized training datasets.
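The update rules above can be compared side by side on a toy quadratic objective. This sketch uses the standard textbook forms of each method (Adam without bias correction); the objective, step sizes, and constants are illustrative choices, not recommendations.

```python
import numpy as np

# Minimize f(w) = ||w||^2 / 2, whose gradient is g(w) = w.
# t is the learning rate, mu the momentum, d the decay, a the small constant.
def run(method, steps=200, t=0.1, mu=0.9, d=0.9, a=1e-6):
    w = np.array([3.0, -2.0])
    v, c = np.zeros_like(w), np.zeros_like(w)
    g = lambda w: w                          # gradient of the toy objective
    for _ in range(steps):
        if method == "sgd":
            w = w - t * g(w)
        elif method == "momentum":           # velocity accumulates the gradient
            v = mu * v - t * g(w); w = w + v
        elif method == "nesterov":           # gradient at the look-ahead point
            v = mu * v - t * g(w + v); w = w + v
        elif method == "adagrad":            # accumulate all squared gradients
            c = c + g(w) ** 2
            w = w - t * g(w) / (np.sqrt(c) + a)
        elif method == "rmsprop":            # exponentially weighted squares
            c = d * c + (1 - d) * g(w) ** 2
            w = w - t * g(w) / (np.sqrt(c) + a)
        elif method == "adam":               # momentum + RMSProp-style scaling
            v = mu * v + (1 - mu) * g(w)
            c = d * c + (1 - d) * g(w) ** 2
            w = w - t * v / (np.sqrt(c) + a)
    return np.linalg.norm(w)

for m in ["sgd", "momentum", "nesterov", "adagrad", "rmsprop", "adam"]:
    print(f"{m:9s} |w| after 200 steps: {run(m):.2e}")
```

On this well-conditioned problem plain SGD contracts geometrically, while the adaptive methods settle into a small oscillation around the minimum; their advantage appears on the badly conditioned, noisy objectives typical of deep learning.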
Computational Considerations

Batching alone is not sufficient to scale SGD methods to large-scale problems on modern high-performance computers. Back-propagation through the chain rule creates an inherent sequential dependency in the weight updates which limits the dataset dimensions for the deep learner. Polson et al. (2015) consider a proximal Newton method, a Bayesian optimization technique which provides an efficient solution for estimation and optimization of such models and for calculating a regularization path. The authors present a splitting approach, the alternating direction method of multipliers (ADMM), which overcomes the inherent bottlenecks in back-propagation by providing a simultaneous block update of parameters at all layers. ADMM facilitates the use of large-scale computing.

A significant factor in the widespread adoption of deep learning has been the creation of TensorFlow (Abadi et al. 2016), an interface for easily expressing machine learning algorithms and mapping compute-intensive operations onto a wide variety of different hardware platforms and in particular GPU cards. Recently, TensorFlow has been augmented by Edward (Tran et al. 2017) to combine concepts in Bayesian statistics and probabilistic programming with deep learning.

Model Averaging via Dropout

We close this section by briefly mentioning one final technique which has proved indispensable in preventing neural networks from over-fitting. Dropout is a computationally efficient technique to reduce model variance by considering many model configurations and then averaging the predictions. The layer input space Z = (Z_1, …, Z_n), where n is large, needs dimension reduction techniques which are designed to avoid over-fitting in the training process. Dropout works by randomly removing layer inputs, keeping each with a given probability θ. The probability θ can be viewed as a further hyperparameter (like λ) which can be tuned via cross-validation.
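A minimal sketch of this masking mechanism follows, treating θ as the keep probability (so each input survives with probability θ); the test-time scaling by θ shown here is the common weight-scaling approximation, not something prescribed in the text.

```python
import numpy as np

rng = np.random.default_rng(2)

def dropout_layer(Z, theta, train=True):
    """Dropout: keep each layer input with probability theta via a Ber(theta)
    mask d, returning the element-wise product d * Z. At test time, no inputs
    are dropped; instead the layer is scaled by theta (weight-scaling rule)."""
    if not train:
        return theta * Z
    d = rng.binomial(1, theta, size=Z.shape)   # d ~ Ber(theta), independent
    return d * Z

# With 1000 inputs and theta = 0.1, roughly 100 inputs survive each pass.
Z = np.ones(1000)
kept = dropout_layer(Z, theta=0.1).sum()
print(f"inputs kept out of 1000: {kept:.0f}")
```

Each training pass thus samples a random sub-network, and averaging over passes gives the model-averaging effect described above.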
Heuristically, if there are 1000 variables, then a choice of θ = 0.1 will result in a search for models with 100 variables. The dropout architecture with stochastic search for the predictors can be written as

d^(ℓ) ∼ Ber(θ),
Z̃^(ℓ) = d^(ℓ) ∘ Z^(ℓ), 1 ≤ ℓ < L,
Z^(ℓ) = σ^(ℓ)(W^(ℓ) Z̃^(ℓ−1) + b^(ℓ)).

Effectively, this replaces the layer input Z by d ∘ Z, where ∘ denotes the element-wise product and d is a vector of independent Bernoulli, Ber(θ), distributed random variables. The overall objective function is closely related to ridge regression with a g-prior (Heaton et al. 2017).

6 Bayesian Neural Networks*

Bayesian deep learning (Neal 1990; Saul et al. 1996; Frey and Hinton 1999; Lawrence 2005; Adams et al. 2010; Mnih and Gregor 2014; Kingma and Welling 2013; Rezende et al. 2014) provides a powerful and general framework for statistical modeling. Such a framework allows for a completely new approach to data modeling and solves a number of problems that conventional models cannot address: (i) DLs (deep learners) permit complex dependencies between variables to be explicitly represented which are difficult, if not impossible, to model with copulas; (ii) they capture correlations between variables in high-dimensional datasets; and (iii) they characterize the degree of uncertainty in predicting large-scale effects from large datasets relevant for quantifying uncertainty. Uncertainty refers to the statistically unmeasurable situation of Knightian uncertainty, where the event space is known but the probabilities are not (Chen et al. 2017). Oftentimes, a forecast may be shrouded in uncertainty arising from noisy data or model uncertainty, either through incorrect modeling assumptions or parameter error. It is desirable to characterize this uncertainty in the forecast. In conventional Bayesian modeling, uncertainty is used to learn from small amounts of low-dimensional data under parametric assumptions on the prior.
The choice of the prior is typically the point of contention and chosen for solution tractability rather than modeling fidelity. Recently, deterministic deep learners have been shown to scale well to large, high-dimensional datasets. However, the probability vector obtained from the network is often erroneously interpreted as model confidence (Gal 2016).

A typical approach to model uncertainty in neural network models is to assume that the model parameters (weights and biases) are random variables (as illustrated in Fig. 4.16). The ANN model then approaches a Gaussian process as the number of weights goes to infinity (Neal 2012; Williams 1997). In the case of a finite number of weights, a network with random parameters is called a Bayesian neural network (MacKay 1992b).

Recent advances in "variational inference" techniques and software that represent mathematical models as a computational graph (Blundell et al. 2015a) enable probabilistic deep learning models to be built, without having to worry about how to perform testing (forward propagation) or inference (gradient-based optimization, with back-propagation and automatic differentiation). Variational inference is an approximate technique which allows multi-modal likelihood functions to be extremized with the standard stochastic gradient descent algorithm. An alternative to variational and MCMC algorithms was recently proposed by Gal (2016) and builds on the efficient dropout regularization technique.

All of the current techniques rely on approximating the true posterior over the model parameters p(w | X, Y) by another distribution q_θ(w) which can be evaluated in a computationally tractable way. Such a distribution is chosen to be as close as possible to the true posterior and is found by minimizing the Kullback–Leibler (KL) divergence

θ* ∈ arg min_θ ∫ q_θ(w) log [q_θ(w)/p(w | X, Y)] dw.

Fig. 4.16 Bayesian classification of the half-moon problem with neural networks: (top) the posterior mean and (bottom) the posterior standard deviation of the probability of class (label = 0).

There are numerous approaches to Bayesian deep learning for uncertainty quantification, including MCMC (Markov chain Monte Carlo) methods. These are known to scale poorly with the number of observations, and recent studies have developed SG-MCMC (stochastic gradient MCMC) and related methods such as PX-MCMC (parameter expansion MCMC) to ease the computational burden. A Bayesian extension of feedforward network architectures has been considered by several authors (Neal 1990; Saul et al. 1996; Frey and Hinton 1999; Lawrence 2005; Adams et al. 2010; Mnih and Gregor 2014; Kingma and Welling 2013; Rezende et al. 2014). Recent results show how dropout regularization can be used to represent uncertainty in deep learning models. In particular, Gal (2015) shows that dropout provides uncertainty estimates for the predicted values. The predictions generated by deep learning models with dropout are nothing but samples from the predictive posterior distribution.

A classical example of using neural networks to model a vector of binary variables is the Boltzmann machine (BM), with two layers. The first layer encodes latent variables and the second layer encodes the observed variables. Both conditional distributions p(data | latent variables) and p(latent variables | data) are specified using logistic functions parameterized by weights and offset vectors. The size of the joint distribution table grows exponentially with the number of variables, and Hinton and Sejnowski (1983) proposed using a Gibbs sampler to calculate updates to the model weights on each iteration. The multi-modal nature of the posterior distribution leads to prohibitive computational times required to learn models of a practical size.
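To make the KL objective concrete, the following toy sketch (a one-dimensional example with made-up distributions, not the book's setup) compares a Monte Carlo estimate of KL(q || p) for two Gaussians with its closed form:

```python
import numpy as np

rng = np.random.default_rng(5)

# q = N(mu_q, s_q^2) is the tractable approximation; p = N(mu_p, s_p^2)
# stands in for the (here artificially simple) posterior.
mu_q, s_q = 0.5, 1.0
mu_p, s_p = 0.0, 2.0

def log_pdf(x, mu, s):
    return -0.5 * np.log(2 * np.pi * s**2) - (x - mu) ** 2 / (2 * s**2)

# Monte Carlo: KL(q || p) = E_q[log q(w) - log p(w)], sampled from q
w = rng.normal(mu_q, s_q, size=200_000)
kl_mc = np.mean(log_pdf(w, mu_q, s_q) - log_pdf(w, mu_p, s_p))

# Closed form for two univariate Gaussians
kl_exact = np.log(s_p / s_q) + (s_q**2 + (mu_q - mu_p) ** 2) / (2 * s_p**2) - 0.5

print("Monte Carlo:", kl_mc, " exact:", kl_exact)
```

For a genuine Bayesian neural network the posterior has no closed form, which is exactly why the ELBO reformulation described next is needed.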
Tieleman (2008) proposed a variational approach that replaces the posterior p(latent variables | data) with another, easy-to-calculate, approximating distribution; this approach was also considered in Salakhutdinov (2008). Several extensions to the BMs have been proposed. Exponential family extensions have been considered by Smolensky (1986), Salakhutdinov (2008), Salakhutdinov and Hinton (2009), and Welling et al. (2005). There have also been multiple approaches to building inference algorithms for deep learning models (MacKay 1992a; Hinton and Van Camp 1993; Neal 1992; Barber and Bishop 1998).

Performing Bayesian inference on a neural network calculates the posterior distribution over the weights given the observations. In general, such a posterior cannot be calculated analytically, or even efficiently sampled from. However, several recently proposed approaches address the computational problem for some specific deep learning models (Graves 2011; Kingma and Welling 2013; Rezende et al. 2014; Blundell et al. 2015b; Hernández-Lobato and Adams 2015; Gal and Ghahramani 2016).

The recent successful approaches to developing efficient Bayesian inference algorithms for deep learning networks are based on reparameterization techniques for calculating Monte Carlo gradients while performing variational inference. Such an approach has led to an explosive development in the application of stochastic variational inference. Given the data D = (X, Y), variational inference relies on approximating the posterior p(θ | D) with a variational distribution q(θ | D, φ), where θ = (W, b). Then q is found by minimizing the Kullback–Leibler divergence between the approximate distribution and the posterior, namely

KL(q || p) = ∫ q(θ | D, φ) log [q(θ | D, φ)/p(θ | D)] dθ.

Since p(θ | D) is not necessarily tractable, we replace minimization of KL(q || p) with maximization of the evidence lower bound (ELBO)

ELBO(φ) = ∫ q(θ | D, φ) log [p(Y | X, θ) p(θ)/q(θ | D, φ)] dθ.

The log of the total probability (evidence) is then

log p(D) = ELBO(φ) + KL(q || p).

The left-hand side does not depend on φ, thus minimizing KL(q || p) is the same as maximizing ELBO(φ). Also, since KL(q || p) ≥ 0, which follows from Jensen's inequality, we have log p(D) ≥ ELBO(φ); hence the name evidence lower bound. The resulting maximization problem ELBO(φ) → max_φ is solved using stochastic gradient descent.

To calculate the gradient, it is convenient to write the ELBO as

ELBO(φ) = ∫ q(θ | D, φ) log p(Y | X, θ) dθ − ∫ q(θ | D, φ) log [q(θ | D, φ)/p(θ)] dθ.

The gradient of the first term, ∇_φ ∫ q(θ | D, φ) log p(Y | X, θ) dθ = ∇_φ E_q log p(Y | X, θ), is not itself an expectation and thus cannot be calculated using Monte Carlo methods. The idea is to represent this gradient as an expectation of some random variable, so that Monte Carlo techniques can be used to calculate it. There are two standard methods to do this. First, the log-derivative trick uses the identity ∇_x f(x) = f(x) ∇_x log f(x) to obtain

∇_φ E_q log p(Y | X, θ) = E_q [log p(Y | X, θ) ∇_φ log q(θ | D, φ)].

Thus, if we select q(θ | φ) so that it is easy to compute its derivative and generate samples from it, the gradient can be efficiently calculated using Monte Carlo techniques. Second, we can use the reparameterization trick by representing θ as a value of a deterministic function, θ = g(ε, x, φ), where ε ∼ r(ε) does not depend on φ. The derivative is given by

∇_φ E_q log p(Y | X, θ) = ∫ r(ε) ∇_φ log p(Y | X, g(ε, x, φ)) dε
= E_ε [∇_g log p(Y | X, g(ε, x, φ)) ∇_φ g(ε, x, φ)].

The reparameterization is trivial when q(θ | D, φ) = N(θ | μ(D, φ), Σ(D, φ)), and

θ = μ(D, φ) + Σ^{1/2}(D, φ) ε, ε ∼ N(0, I).
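Both gradient estimators can be checked on a toy one-parameter model (made up for illustration): take q = N(φ, 1) and a "likelihood" term log p(Y | θ) = −θ²/2, so the exact gradient of E_q log p w.r.t. φ is −φ.

```python
import numpy as np

rng = np.random.default_rng(3)

phi = 1.5
eps = rng.standard_normal(100_000)        # eps ~ N(0, 1), independent of phi

# Reparameterization trick: theta = g(eps, phi) = phi + eps, so
# grad = E_eps[ (d log p / d theta) * (d g / d phi) ] = E_eps[-(phi + eps)]
grad_reparam = np.mean(-(phi + eps))

# Log-derivative (score-function) trick:
# grad = E_q[ log p(Y|theta) * d/dphi log q(theta|phi) ],
# with d/dphi log N(theta | phi, 1) = (theta - phi)
theta = phi + eps
grad_score = np.mean(-(theta ** 2) / 2 * (theta - phi))

print(grad_reparam, grad_score, "exact:", -phi)
```

Both estimators are unbiased, but the reparameterization estimate typically has much smaller variance, which is why it underpins the variational autoencoder discussed next.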
Kingma and Welling (2013) propose using Σ(D, φ) = σ²(D, φ)I and representing μ(D, φ) and σ(D, φ) as outputs of a neural network (multilayer perceptron); the resulting approach was called a variational autoencoder. A generalized reparameterization has been proposed by Ruiz et al. (2016) and combines both the log-derivative and reparameterization techniques by assuming that ε can depend on φ.

7 Summary

In this chapter we have introduced some of the theory of function approximation and out-of-sample estimation with neural networks when the observation points are i.i.d. Such a case is not suitable for time series data and shall be the subject of later chapters. We restricted our attention to feedforward neural networks in order to explore some of the theoretical arguments which help us reason scientifically about architecture design. We have seen that feedforward networks use hidden units, or perceptrons, to partition the input space into regions bounded by manifolds. In the case of ReLU activated units, each manifold is a hyperplane and the hidden units form a hyperplane arrangement. We have introduced various approaches to reason about the effect of the number of units in each layer in addition to reasoning about the effect of hidden layers. We also introduced various concepts and methods necessary for understanding and applying neural networks to i.i.d. data including

– Fat shattering, VC dimension, and the empirical risk measure (ERM) as the basis for characterizing the learnability of a class of MLPs;
– The construction of neural networks as splines and their pointwise approximation error bound;
– The reason for composing layers in deep learning;
– Stochastic gradient descent and back-propagation as techniques for training neural networks; and
– Imposing constraints on the network needed for approximating financial derivatives and other constrained optimization problems in finance.

8 Exercises

Exercise 4.1 Show that substituting

∇_{ij} I_k = X_j if i = k, and 0 if i ≠ k,

into Eq.
4.47 gives

∇_{ij} σ_k ≡ ∂σ_k/∂w_{ij} = ∇_i σ_k X_j = σ_k (δ_{ki} − σ_i) X_j.

Exercise 4.2 Show that substituting the derivative of the softmax function w.r.t. w_{ij} into Eq. 4.52 gives, for the special case when the output is Y_k = 1 for k = i and Y_k = 0 for all k ≠ i:

∇_{ij} L(W, b) := [∇_W L(W, b)]_{ij} = (σ_i − 1) X_j.

Exercise 4.3 Consider feedforward neural networks constructed using the following two types of activation functions:

– Identity: Id(x) := x
– Step function (a.k.a. Heaviside function): H(x) := 1 if x ≥ 0, and 0 otherwise.

1. Consider a feedforward neural network with one input x ∈ R, a single hidden layer with K units having step function activations, H(x), and a single output with identity (a.k.a. linear) activation, Id(x). The output can be written as

f̂(x) = Id(b^(2) + Σ_{k=1}^K w_k^(2) H(b_k^(1) + w_k^(1) x)).

Construct neural networks using these activation functions.

a. Consider the step function

u(x; a) := y H(x − a) = y if x ≥ a, and 0 otherwise.

Construct a neural network with one input x and one hidden layer, whose response is u(x; a). Draw the structure of the neural network, specify the activation function for each unit (either Id or H), and specify the values for all weights (in terms of a and y).

b. Now consider the indicator function

1_{[a,b)}(x) = 1 if x ∈ [a, b), and 0 otherwise.

Construct a neural network with one input x and one hidden layer, whose response is y 1_{[a,b)}(x), for given real values y, a, and b. Draw the structure of the neural network, specify the activation function for each unit (either Id or H), and specify the values for all weights (in terms of a, b, and y).

Exercise 4.4 A neural network with a single hidden layer can provide an arbitrarily close approximation to any 1-dimensional bounded smooth function. This question will guide you through the proof. Let f(x) be any function whose domain is [C, D), for real values C < D.
Suppose that the function is Lipschitz continuous, that is, ∀x, x′ ∈ [C, D), |f(x′) − f(x)| ≤ L|x′ − x|, for some constant L ≥ 0. Use the building blocks constructed in the previous part to construct a neural network with one hidden layer that approximates this function within ε > 0, that is, ∀x ∈ [C, D), |f(x) − f̂(x)| ≤ ε, where f̂(x) is the output of your neural network given input x. Your network should use only the identity or the Heaviside activation functions. You need to specify the number K of hidden units, the activation function for each unit, and a formula for calculating each weight w_0, w_k, w_0^(k), and w_1^(k), for each k ∈ {1, …, K}. These weights may be specified in terms of C, D, L, and ε, as well as the values of f(x) evaluated at a finite number of x values of your choosing (you need to explicitly specify which x values you use). You do not need to explicitly write the f̂(x) function. Why does your network attain the given accuracy ε?

Exercise 4.5 Consider a shallow neural network regression model with n tanh activated units in the hidden layer and d outputs. The hidden-outer weight matrix is W_{ij}^(2) = 1/n and the input-hidden weight matrix is W^(1) = 1. The biases are zero. If the features X_1, …, X_p are i.i.d. Gaussian random variables with mean μ = 0 and variance σ², show that

a. Ŷ ∈ [−1, 1].
b. Ŷ is independent of the number of hidden units, n ≥ 1.
c. The expectation E[Ŷ] = 0, and the variance V[Ŷ] ≤ 1.

Exercise 4.6 Determine the VC dimension of the sum of indicator functions on Ω = [0, 1]:

F_k = {f : Ω → {0, 1}, f(x) = Σ_{i=0}^k 1_{x ∈ [t_{2i}, t_{2i+1})}, 0 ≤ t_0 < ⋯ < t_{2k+1} ≤ 1, k ≥ 1}.

Exercise 4.7 Show that a feedforward binary classifier with two Heaviside activated units shatters the data {0.25, 0.5, 0.75}.
Exercise 4.8 Compute the weight and bias updates of W^(2) and b^(2) given a shallow binary classifier (with one hidden layer) with unit weights, zero biases, and ReLU activation of two hidden units for the labeled observation (x = 1, y = 1).

8.1 Programming Related Questions*

Exercise 4.9 Consider the following dataset (taken from Anscombe's quartet):

(x1, y1) = (10.0, 9.14), (x2, y2) = (8.0, 8.14), (x3, y3) = (13.0, 8.74), (x4, y4) = (9.0, 8.77), (x5, y5) = (11.0, 9.26), (x6, y6) = (14.0, 8.10), (x7, y7) = (6.0, 6.13), (x8, y8) = (4.0, 3.10), (x9, y9) = (12.0, 9.13), (x10, y10) = (7.0, 7.26), (x11, y11) = (5.0, 4.74).

a. Use a neural network library of your choice to show that a feedforward network with one hidden layer consisting of one unit and a feedforward network with no hidden layers, each using only linear activation functions, do not outperform linear regression based on ordinary least squares (OLS).
b. Also demonstrate that a neural network with a hidden layer of three neurons using the tanh activation function and an output layer using the linear activation function captures the non-linearity and outperforms the linear regression.

Exercise 4.10 Review the Python notebook deep_classifiers.ipynb. This notebook uses Keras to build three simple feedforward networks applied to the half-moon problem: a logistic regression (with no hidden layer); a feedforward network with one hidden layer; and a feedforward architecture with two hidden layers. The half-moons problem is not linearly separable in the original coordinates. However, you will observe, after plotting the fitted weights and biases, that a network with many hidden neurons gives a linearly separable representation of the classification problem in the coordinates of the output from the final hidden layer. Complete the following questions in your own words.

a. Did we need more than one hidden layer to perfectly classify the half-moons dataset?
If not, why might multiple hidden layers be useful for other datasets?
b. Why not use a very large number of neurons, since it is clear that the classification accuracy improves with more degrees of freedom?
c. Repeat the plotting of the hyperplane, in Part 1b of the notebook, only without the ReLU function (i.e., activation="linear"). Describe qualitatively how the decision surface changes with increasing neurons. Why is a (non-linear) activation function needed? The use of figures to support your answer is expected.

Exercise 4.11 Using the EarlyStopping callback in Keras, modify the notebook Deep_Classifiers.ipynb to terminate training under the stopping criterion |L^(k+1) − L^(k)| ≤ δ with δ = 0.1.

Exercise 4.12*** Consider a feedforward neural network with three inputs, two units in the first hidden layer, two units in the second hidden layer, and three units in the output layer. The activation function for hidden layer 1 is ReLU, for hidden layer 2 is sigmoid, and for the output layer is softmax. The initial weights are given by the matrices (rows separated by semicolons)

W^(1) = [0.1 0.3 0.7; 0.9 0.4 0.4], W^(2) = [0.4 0.3; 0.7 0.2], W^(3) = [0.5 0.6; 0.6 0.7; 0.3 0.2],

and all the biases are unit vectors. Assuming that the input (0.1, 0.7, 0.3) corresponds to the output (1, 0, 0), manually compute the updated weights and biases after a single epoch (forward + backward pass), clearly stating all derivatives that you have used. You should use a learning rate of 1. As a practical exercise, you should modify the implementation of a stochastic gradient descent routine in the back-propagation Python notebook. Note that the notebook example corresponds to the example in Sect. 5, which uses sigmoid activated hidden layers only. Compare the weights and biases obtained by TensorFlow (or your ANN library of choice) with those obtained by your procedure after 200 epochs.

Appendix

Answers to Multiple Choice Questions

Question 1 Answer: 1, 2, 3, 4. All answers are found in the text.
Question 2 Answer: 1, 2. A feedforward architecture is always convex w.r.t. each input variable if every activation function is convex and the weights are constrained to be either all positive or all negative. Simply using convex activation functions is not sufficient, since the composition of a convex function and the affine transformation of a convex function do not preserve convexity. For example, if σ(x) = x², w = −1, and b = 1, then σ(wσ(x) + b) = (−x² + 1)² is not convex in x. A feedforward architecture with positive weights is a monotonically increasing function of the input for any choice of monotonically increasing activation function. The weights of a feedforward architecture need not be constrained for the output of a feedforward network to be bounded. For example, activating the output with a softmax function will bound the output. Only if the output is not activated should the weights and bias in the final layer be bounded to ensure bounded output. The bias terms in a network shift the output but also affect the derivatives of the output w.r.t. the input when the layer is activated.

Question 3 Answer: 1, 2, 3, 4. The training of a neural network involves minimizing a loss function w.r.t. the weights and biases over the training data. L1 regularization is used during model selection to penalize models with too many parameters. The loss function is augmented with a Lagrange penalty for the number of weights. In deep learning, regularization can be applied to each layer of the network. Therefore each layer has an associated regularization parameter. Back-propagation uses the chain rule to update the weights of the network but is not guaranteed to converge to a unique minimum. This is because the loss function is not convex w.r.t. the weights. Stochastic gradient descent is a type of optimization method which is implemented with back-propagation.
There are variants of SGD, however, such as adding Nesterov's momentum term, Adam, or RMSProp.

Back-Propagation

Let us consider a feedforward architecture with an input layer, L − 1 hidden layers, and one output layer, with K units in the output layer for classification of K categories. As a result, we have L sets of weights and biases (W^(ℓ), b^(ℓ)) for ℓ = 1, …, L, corresponding to the layer inputs Z^(ℓ−1) and outputs Z^(ℓ) for ℓ = 1, …, L. Recall that each layer is an activation of a semi-affine transformation,

I^(ℓ)(Z^(ℓ−1)) := W^(ℓ) Z^(ℓ−1) + b^(ℓ).

The corresponding activation functions are denoted σ^(ℓ). The activation function for the output layer is a softmax function, σ_s(x). Here we use the cross-entropy as the loss function, which is defined as L := −Σ_k Y_k log Ŷ_k. The relationships between the layers, for ℓ ∈ {1, …, L}, are

Ŷ(X) = Z^(L) = σ_s(I^(L)) ∈ [0, 1]^K,
Z^(ℓ) = σ^(ℓ)(I^(ℓ)), ℓ = 1, …, L − 1,
Z^(0) = X.

The update rules for the weights and biases are

ΔW^(ℓ) = −γ ∇_{W^(ℓ)} L, Δb^(ℓ) = −γ ∇_{b^(ℓ)} L.

We now begin the back-propagation, tracking the intermediate calculations carefully using Einstein summation notation. For the gradient of L w.r.t. W^(L) we have

∂L/∂w_{ij}^(L) = Σ_k (∂L/∂Z_k^(L)) (∂Z_k^(L)/∂w_{ij}^(L)) = Σ_k Σ_m (∂L/∂Z_k^(L)) (∂Z_k^(L)/∂I_m^(L)) (∂I_m^(L)/∂w_{ij}^(L)).

But ∂L/∂Z_k^(L) = −Y_k/Z_k^(L), and the derivative of the softmax output is

∂Z_k^(L)/∂I_m^(L) = ∂[σ_s(I^(L))]_k/∂I_m^(L) = ∂/∂I_m^(L) [exp(I_k^(L)) / Σ_{n=1}^K exp(I_n^(L))]
= −σ_k σ_m if k ≠ m, and σ_k(1 − σ_m) if k = m,
= σ_k(δ_{km} − σ_m),

where δ_{km} is the Kronecker delta, and ∂I_m^(L)/∂w_{ij}^(L) = δ_{mi} Z_j^(L−1). Hence

∂L/∂w_{ij}^(L) = −Σ_k Σ_m (Y_k/Z_k^(L)) Z_k^(L) (δ_{km} − Z_m^(L)) δ_{mi} Z_j^(L−1)
= −Z_j^(L−1) Σ_k Y_k (δ_{ki} − Z_i^(L))
= Z_j^(L−1) (Z_i^(L) − Y_i),

where we have used the fact that Σ_{k=1}^K Y_k = 1 in the last equality. Similarly for b^(L), noting ∂I_m^(L)/∂b_i^(L) = δ_{mi}, we have

∂L/∂b_i^(L) = Σ_k Σ_m (∂L/∂Z_k^(L)) (∂Z_k^(L)/∂I_m^(L)) (∂I_m^(L)/∂b_i^(L)) = Z_i^(L) − Y_i.

It follows that

∇_{b^(L)} L = Z^(L) − Y,
∇_{W^(L)} L = ∇_{b^(L)} L ⊗ Z^(L−1),

where ⊗ denotes the outer product.
Now for the gradient of L w.r.t. W^(L−1) we have

∂L/∂w_{ij}^(L−1) = Σ_k Σ_m Σ_n Σ_p (∂L/∂Z_k^(L)) (∂Z_k^(L)/∂I_m^(L)) (∂I_m^(L)/∂Z_n^(L−1)) (∂Z_n^(L−1)/∂I_p^(L−1)) (∂I_p^(L−1)/∂w_{ij}^(L−1)).

If we assume that σ^(ℓ)(x) = sigmoid(x) = 1/(1 + exp(−x)), ℓ ∈ {1, …, L − 1}, then

∂Z_n^(L−1)/∂I_p^(L−1) = Z_n^(L−1)(1 − Z_n^(L−1)) δ_{np} = σ_n^(L−1)(1 − σ_n^(L−1)) δ_{np},

together with ∂I_m^(L)/∂Z_n^(L−1) = w_{mn}^(L) and ∂I_p^(L−1)/∂w_{ij}^(L−1) = δ_{pi} Z_j^(L−2). Hence

∂L/∂w_{ij}^(L−1) = −Σ_k Σ_m Y_k (δ_{km} − Z_m^(L)) w_{mi}^(L) Z_i^(L−1)(1 − Z_i^(L−1)) Z_j^(L−2)
= Σ_m w_{mi}^(L) (Z_m^(L) − Y_m) Z_i^(L−1)(1 − Z_i^(L−1)) Z_j^(L−2)
= Z_j^(L−2) Z_i^(L−1)(1 − Z_i^(L−1)) (Z^(L) − Y)^T w_{·,i}^(L).

Similarly we have

∂L/∂b_i^(L−1) = Z_i^(L−1)(1 − Z_i^(L−1)) (Z^(L) − Y)^T w_{·,i}^(L).

It follows that we can define the following recursion relation for the loss gradient:

∇_{b^(L−1)} L = Z^(L−1) ∘ (1 − Z^(L−1)) ∘ ((W^(L))^T ∇_{b^(L)} L),
∇_{W^(L−1)} L = ∇_{b^(L−1)} L ⊗ Z^(L−2),

where ∘ denotes the Hadamard product (element-wise multiplication). This recursion relation can be generalized for all layers and choices of activation function. To see this, let the back-propagation error be δ^(ℓ) := ∇_{b^(ℓ)} L, and since

[∇_{I^(ℓ)} σ^(ℓ)]_{ij} = ∂σ_i^(ℓ)/∂I_j^(ℓ) = σ_i^(ℓ)(1 − σ_i^(ℓ)) δ_{ij},

or equivalently, in matrix-vector form, ∇_{I^(ℓ)} σ^(ℓ) = diag(σ^(ℓ) ∘ (1 − σ^(ℓ))), we can write, in general, for any choice of activation function for the hidden layer,

δ^(ℓ) = ∇_{I^(ℓ)} σ^(ℓ) (W^(ℓ+1))^T δ^(ℓ+1), and ∇_{W^(ℓ)} L = δ^(ℓ) ⊗ Z^(ℓ−1).

Proof of Theorem 4.2 Using the same deep structure shown in Fig. 4.9, Liang and Srikant (2016) find the binary expansion sequence {x_0, …, x_n}. In this step, they use n binary step units in total. Then they rewrite g_{m+1}(Σ_{i=0}^n x_i/2^i) as

g_{m+1}(Σ_{i=0}^n x_i/2^i) = Σ_{j=0}^n (x_j/2^j) · g_m(Σ_{i=0}^n x_i/2^i)
= Σ_{j=0}^n (1/2^j) max(2(x_j − 1) + g_m(Σ_{i=0}^n x_i/2^i), 0).

Clearly Eq.
4.57 defines iterations between the outputs of neighboring layers. Define the output of the multilayer neural network as f̂(x) = Σ_{i=0}^p a_i g_i(Σ_{j=0}^n x_j/2^j). For this multilayer network, the approximation error is

|f(x) − f̂(x)| = |Σ_{i=0}^p a_i g_i(Σ_{j=0}^n x_j/2^j) − Σ_{i=0}^p a_i x^i|
≤ Σ_{i=0}^p |a_i| · |g_i(Σ_{j=0}^n x_j/2^j) − x^i| ≤ p/2^{n−1}.

This indicates that, to achieve an ε-approximation error, one should choose n = ⌈log(p/ε)⌉ + 1. Besides, since O(n + p) layers with O(n) binary step units and O(pn) ReLU units are used in total, this multilayer neural network thus has O(p + log(p/ε)) layers, O(log(p/ε)) binary step units, and O(p log(p/ε)) ReLU units.

Table 4.2 Definitions of the functions f(x) and g(x):
f(x) := max(x − 1/4, 0), I_f = {[0, 1/4], (1/4, 1]};
g(x) := max(x − 1/2, 0), I_g = {[0, 1/2], (1/2, 1]}.

Proof of Lemmas from Telgarsky (2016)

Proof (Proof of 4.1) Let I_f denote the partition of R corresponding to f, and I_g denote the partition of R corresponding to g. First consider f + g, and moreover any intervals U_f ∈ I_f and U_g ∈ I_g. Necessarily, f + g has a single slope along U_f ∩ U_g. Consequently, f + g is |I|-sawtooth, where I is the set of all intersections of intervals from I_f and I_g, meaning I := {U_f ∩ U_g : U_f ∈ I_f, U_g ∈ I_g}. By sorting the left endpoints of elements of I_f and I_g, it follows that |I| ≤ k + l (the other intersections are empty). For example, consider the example in Fig. 4.11 with partitions given in Table 4.2. The set of all intersections of intervals from I_f and I_g contains 3 elements:

I = {[0, 1/4] ∩ [0, 1/2], (1/4, 1] ∩ [0, 1/2], (1/4, 1] ∩ (1/2, 1]}.

Now consider f ∘ g, and in particular consider the image f(g(U_g)) for some interval U_g ∈ I_g. g is affine with a single slope along U_g; therefore, f is being considered along a single unbroken interval g(U_g).
However, nothing prevents $g(U_g)$ from hitting all the elements of $\mathcal{I}_f$; since $U_g$ was arbitrary, it holds that $f \circ g$ is $(|\mathcal{I}_f| \cdot |\mathcal{I}_g|)$-sawtooth.

Proof Recall the notation $\tilde f(x) := \mathbb{1}[f(x) \ge 1/2]$, whereby $\mathcal{E}(f) := \frac{1}{n}\sum_i \mathbb{1}[y_i \ne \tilde f(x_i)]$. Since $f$ is piecewise monotonic with a corresponding partition of $\mathbb{R}$ having at most $t$ pieces, then $f$ has at most $2t - 1$ crossings of $1/2$: at most one within each interval of the partition, and at most one at the right endpoint of all but the last interval. Consequently, $\tilde f$ is piecewise constant, where the corresponding partition of $\mathbb{R}$ is into at most $2t$ intervals. This means $n$ points with alternating labels must land in $2t$ buckets, thus the total number of points landing in buckets with at least three points is at least $n - 4t$.

Python Notebooks

The notebooks provided in the accompanying source code repository are designed to give insight into toy classification datasets. They provide examples of deep feedforward classification, back-propagation, and Bayesian network classifiers. Further details of the notebooks are included in the README.md file.

4 Feedforward Neural Networks

References

Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., et al. (2016). TensorFlow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation, OSDI'16 (pp. 265–283).
Adams, R., Wallach, H., & Ghahramani, Z. (2010). Learning the structure of deep sparse graphical models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (pp. 1–8).
Andrews, D. (1989). A unified theory of estimation and inference for nonlinear dynamic models (A. R. Gallant and H. White). Econometric Theory, 5(01), 166–171.
Baillie, R. T., & Kapetanios, G. (2007). Testing for neglected nonlinearity in long-memory models. Journal of Business & Economic Statistics, 25(4), 447–461.
Barber, D., & Bishop, C. M. (1998). Ensemble learning in Bayesian neural networks.
Neural Networks and Machine Learning, 168, 215–238.
Bartlett, P., Harvey, N., Liaw, C., & Mehrabian, A. (2017a). Nearly-tight VC-dimension bounds for piecewise linear neural networks. CoRR, abs/1703.02930.
Bartlett, P., Harvey, N., Liaw, C., & Mehrabian, A. (2017b). Nearly-tight VC-dimension bounds for piecewise linear neural networks. CoRR, abs/1703.02930.
Bengio, Y., Roux, N. L., Vincent, P., Delalleau, O., & Marcotte, P. (2006). Convex neural networks. In Y. Weiss, B. Schölkopf, & J. C. Platt (Eds.), Advances in neural information processing systems 18 (pp. 123–130). MIT Press.
Bishop, C. M. (2006). Pattern recognition and machine learning (information science and statistics). Berlin, Heidelberg: Springer-Verlag.
Blundell, C., Cornebise, J., Kavukcuoglu, K., & Wierstra, D. (2015a, May). Weight uncertainty in neural networks. arXiv:1505.05424 [cs, stat].
Blundell, C., Cornebise, J., Kavukcuoglu, K., & Wierstra, D. (2015b). Weight uncertainty in neural networks. arXiv preprint arXiv:1505.05424.
Chataigner, Crépey, & Dixon (2020). Deep local volatility.
Chen, J., Flood, M. D., & Sowers, R. B. (2017). Measuring the unmeasurable: an application of uncertainty quantification to treasury bond portfolios. Quantitative Finance, 17(10), 1491–1507.
Dean, J., Corrado, G., Monga, R., Chen, K., Devin, M., Mao, M., et al. (2012). Large scale distributed deep networks. In Advances in neural information processing systems (pp. 1223–1231).
Dixon, M., Klabjan, D., & Bang, J. H. (2016). Classification-based financial markets prediction using deep neural networks. CoRR, abs/1603.08604.
Feng, G., He, J., & Polson, N. G. (2018, Apr). Deep learning for predicting asset returns. arXiv e-prints, arXiv:1804.09314.
Frey, B. J., & Hinton, G. E. (1999). Variational learning in nonlinear Gaussian belief networks. Neural Computation, 11(1), 193–213.
Gal, Y. (2015). A theoretically grounded application of dropout in recurrent neural networks. arXiv:1512.05287.
Gal, Y. (2016).
Uncertainty in deep learning. Ph.D. thesis, University of Cambridge.
Gal, Y., & Ghahramani, Z. (2016). Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning (pp. 1050–1059).
Gallant, A., & White, H. (1988, July). There exists a neural network that does not make avoidable mistakes. In IEEE 1988 International Conference on Neural Networks (Vol. 1, pp. 657–664).
Graves, A. (2011). Practical variational inference for neural networks. In Advances in Neural Information Processing Systems (pp. 2348–2356).
Hastie, T., Tibshirani, R., & Friedman, J. (2009). The elements of statistical learning: data mining, inference and prediction. Springer.
Heaton, J. B., Polson, N. G., & Witte, J. H. (2017). Deep learning for finance: deep portfolios. Applied Stochastic Models in Business and Industry, 33(1), 3–12.
Hernández-Lobato, J. M., & Adams, R. (2015). Probabilistic backpropagation for scalable learning of Bayesian neural networks. In International Conference on Machine Learning (pp. 1861–1869).
Hinton, G. E., & Sejnowski, T. J. (1983). Optimal perceptual inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 448–453). IEEE New York.
Hinton, G. E., & Van Camp, D. (1993). Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory (pp. 5–13). ACM.
Hornik, K., Stinchcombe, M., & White, H. (1989, July). Multilayer feedforward networks are universal approximators. Neural Networks, 2(5), 359–366.
Horvath, B., Muguruza, A., & Tomas, M. (2019, Jan). Deep learning volatility. arXiv e-prints, arXiv:1901.09647.
Hutchinson, J. M., Lo, A. W., & Poggio, T. (1994). A nonparametric approach to pricing and hedging derivative securities via learning networks. The Journal of Finance, 49(3), 851–889.
Kingma, D. P., & Welling, M. (2013). Auto-encoding variational Bayes.
arXiv preprint arXiv:1312.6114.
Kuan, C.-M., & White, H. (1994). Artificial neural networks: an econometric perspective. Econometric Reviews, 13(1), 1–91.
Lawrence, N. (2005). Probabilistic non-linear principal component analysis with Gaussian process latent variable models. Journal of Machine Learning Research, 6(Nov), 1783–1816.
Liang, S., & Srikant, R. (2016). Why deep neural networks? CoRR, abs/1610.04161.
Lo, A. (1994). Neural networks and other nonparametric techniques in economics and finance. In AIMR Conference Proceedings, Number 9.
MacKay, D. J. (1992a). A practical Bayesian framework for backpropagation networks. Neural Computation, 4(3), 448–472.
MacKay, D. J. C. (1992b, May). A practical Bayesian framework for backpropagation networks. Neural Computation, 4(3), 448–472.
Martin, C. H., & Mahoney, M. W. (2018). Implicit self-regularization in deep neural networks: Evidence from random matrix theory and implications for learning. CoRR, abs/1810.01075.
Mhaskar, H., Liao, Q., & Poggio, T. A. (2016). Learning real and Boolean functions: When is deep better than shallow. CoRR, abs/1603.00988.
Mnih, A., & Gregor, K. (2014). Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030.
Montúfar, G., Pascanu, R., Cho, K., & Bengio, Y. (2014, Feb). On the number of linear regions of deep neural networks. arXiv e-prints, arXiv:1402.1869.
Mullainathan, S., & Spiess, J. (2017). Machine learning: An applied econometric approach. Journal of Economic Perspectives, 31(2), 87–106.
Neal, R. M. (1990). Learning stochastic feedforward networks, Vol. 64. Technical report, Department of Computer Science, University of Toronto.
Neal, R. M. (1992). Bayesian training of backpropagation networks by the hybrid Monte Carlo method. Technical report CRG-TR-92-1, Dept. of Computer Science, University of Toronto.
Neal, R. M. (2012). Bayesian learning for neural networks, Vol. 118. Springer Science & Business Media.
Nesterov, Y.
(2013). Introductory lectures on convex optimization: A basic course, Vol. 87. Springer Science & Business Media.
Poggio, T. (2016). Deep learning: mathematics and neuroscience. A sponsored supplement to Science, Brain-Inspired Intelligent Robotics: The Intersection of Robotics and Neuroscience, pp. 9–12.
Polson, N., & Rockova, V. (2018, Mar). Posterior concentration for sparse deep learning. arXiv e-prints, arXiv:1803.09138.
Polson, N. G., Willard, B. T., & Heidari, M. (2015). A statistical theory of deep learning via proximal splitting. arXiv:1509.06061.
Racine, J. (2001). On the nonlinear predictability of stock returns using financial and economic variables. Journal of Business & Economic Statistics, 19(3), 380–382.
Rezende, D. J., Mohamed, S., & Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082.
Ruiz, F. R., Aueb, M. T. R., & Blei, D. (2016). The generalized reparameterization gradient. In Advances in Neural Information Processing Systems (pp. 460–468).
Salakhutdinov, R. (2008). Learning and evaluating Boltzmann machines. Technical Report UTML TR 2008-002, Department of Computer Science, University of Toronto.
Salakhutdinov, R., & Hinton, G. (2009). Deep Boltzmann machines. In Artificial Intelligence and Statistics (pp. 448–455).
Saul, L. K., Jaakkola, T., & Jordan, M. I. (1996). Mean field theory for sigmoid belief networks. Journal of Artificial Intelligence Research, 4, 61–76.
Sirignano, J., Sadhwani, A., & Giesecke, K. (2016, July). Deep learning for mortgage risk. arXiv e-prints.
Smolensky, P. (1986). Parallel distributed processing: explorations in the microstructure of cognition (Vol. 1, pp. 194–281). Cambridge, MA, USA: MIT Press.
Srivastava, N., Hinton, G. E., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: a simple way to prevent neural networks from overfitting.
Journal of Machine Learning Research, 15(1), 1929–1958.
Swanson, N. R., & White, H. (1995). A model-selection approach to assessing the information in the term structure using linear models and artificial neural networks. Journal of Business & Economic Statistics, 13(3), 265–275.
Telgarsky, M. (2016). Benefits of depth in neural networks. CoRR, abs/1602.04485.
Tieleman, T. (2008). Training restricted Boltzmann machines using approximations to the likelihood gradient. In Proceedings of the 25th International Conference on Machine Learning (pp. 1064–1071). ACM.
Tishby, N., & Zaslavsky, N. (2015). Deep learning and the information bottleneck principle. CoRR, abs/1503.02406.
Tran, D., Hoffman, M. D., Saurous, R. A., Brevdo, E., Murphy, K., & Blei, D. M. (2017, January). Deep probabilistic programming. arXiv:1701.03757 [cs, stat].
Vapnik, V. N. (1998). Statistical learning theory. Wiley-Interscience.
Welling, M., Rosen-Zvi, M., & Hinton, G. E. (2005). Exponential family harmoniums with an application to information retrieval. In Advances in Neural Information Processing Systems (pp. 1481–1488).
Williams, C. K. (1997). Computing with infinite networks. In Advances in Neural Information Processing Systems (pp. 295–301).

Chapter 5 Interpretability

This chapter presents a method for interpreting neural networks which imposes minimal restrictions on the neural network design. The chapter demonstrates techniques for interpreting a feedforward network, including how to rank the importance of the features. An example demonstrating how to apply interpretability analysis to deep learning models for factor modeling is also presented.

1 Introduction

Once the neural network has been trained, a number of important issues surface around how to interpret the model parameters.
This aspect is a prominent issue for practitioners in deciding whether to use neural networks in favor of other machine learning and statistical methods for estimating factor realizations, sometimes even if the latter's predictive accuracy is inferior. In this section, we shall introduce a method for interpreting multilayer perceptrons which imposes minimal restrictions on the neural network design.

Chapter Objectives

By the end of this chapter, the reader should expect to accomplish the following:
– Apply techniques for interpreting a feedforward network, including how to rank the importance of the features.
– Learn how to apply interpretability analysis to deep learning models for factor modeling.

© Springer Nature Switzerland AG 2020 M. F. Dixon et al., Machine Learning in Finance, https://doi.org/10.1007/978-3-030-41068-1_5

2 Background on Interpretability

There are numerous techniques for interpreting machine learning methods which treat the model as a black box. A good example is Partial Dependence Plots (PDPs), as described by Greenwell et al. (2018). Other approaches also exist in the literature. Garson (1991) partitions hidden-output connection weights into components associated with each input neuron, using the absolute values of the connection weights. Olden and Jackson (2002) determine the relative importance, $[R]_{ij}$, of the $i$th output to the $j$th predictor variable of the model as a function of the weights, according to a simple linear expression. We seek to understand the limitations on the choice of activation functions and the effect of increasing the number of layers and neurons on probabilistic interpretability. For example, under standard Gaussian i.i.d. data, how robust are the model's estimates of the importance of each input variable as the number of neurons varies?

2.1 Sensitivities

We shall therefore turn to a "white-box" technique for determining the importance of the input variables.
This approach generalizes Dimopoulos et al. (1995) to a deep neural network with interaction terms. Moreover, the method is directly consistent with how coefficients are interpreted in linear regression: they are model sensitivities. Model sensitivities are the changes of the fitted model output w.r.t. the inputs. As a control, we shall use this property to empirically evaluate how reliably neural networks, even deep networks, learn data from a linear model. Such an approach is appealing to practitioners who are evaluating the comparative performance of linear regression and neural networks, and who need the assurance that a neural network model is at least able to reproduce and match the coefficients on a linear dataset. We also dispel the common misconception that the activation functions must be deactivated for a neural network model to produce a linear output: under linear data, any non-linear statistical model should be able to reproduce a statistical linear model under some choice of parameter values. Irrespective of whether the data is linear or non-linear in practice, the best control experiment for comparing a neural network estimator with an OLS estimator is to simulate data under a linear regression model. In this scenario, the correct model coefficients are known and the error in the coefficient estimator can be studied. To evaluate fitted model sensitivities analytically, we require that the function $\hat Y = f(X)$ is continuous and differentiable everywhere. Furthermore, for stability of the interpretation, we shall require that $f(x)$ is Lipschitz continuous.¹ That is, there is a positive real constant $K$ s.t. $\forall x_1, x_2 \in \mathbb{R}^p$, $|f(x_1) - f(x_2)| \le K|x_1 - x_2|$. Such a constraint is necessary for the first derivative to be bounded and hence amenable to the derivatives, w.r.t. the inputs, providing interpretability.
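The Lipschitz property can be illustrated numerically. A minimal sketch with illustrative (not fitted) weights: since $|\tanh'| \le 1$ everywhere, a one-hidden-layer tanh network admits the Lipschitz constant $K \le \lVert W^{(2)} \rVert_2 \lVert W^{(1)} \rVert_2$, which we check empirically on random input pairs.

```python
import numpy as np

# Sketch: a one-hidden-layer tanh network is Lipschitz with constant
# K <= ||W2||_2 * ||W1||_2 (spectral norms), because |tanh'(x)| <= 1.
# The weights below are illustrative values, not fitted parameters.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 3)), rng.normal(size=5)
W2, b2 = rng.normal(size=(1, 5)), rng.normal(size=1)

def f(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

K = np.linalg.norm(W2, 2) * np.linalg.norm(W1, 2)

# empirical check: |f(x1) - f(x2)| <= K |x1 - x2| on random pairs
for _ in range(1000):
    x1, x2 = rng.normal(size=3), rng.normal(size=3)
    assert np.linalg.norm(f(x1) - f(x2)) <= K * np.linalg.norm(x1 - x2) + 1e-12
```

The same argument applies layer by layer to deeper tanh networks, with $K$ bounded by the product of the weight-matrix spectral norms.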
Fortunately, provided that the weights and biases are finite, each semi-affine function is Lipschitz continuous everywhere. For example, the function $\tanh(x)$ is continuously differentiable and its derivative, $1 - \tanh^2(x)$, is globally bounded. With finite weights, the composition of $\tanh(x)$ with an affine function is also Lipschitz. Clearly $\text{ReLU}(x) := \max(x, 0)$ is not continuously differentiable and one cannot use the approach described here. Note that for the following examples, we are indifferent to the choice of homoscedastic or heteroscedastic error, since the model sensitivities are independent of the error.

3 Explanatory Power of Neural Networks

In a linear regression model $\hat Y = F_\beta(X) := \beta_0 + \beta_1 X_1 + \cdots + \beta_K X_K$, the model sensitivities are $\partial_{X_i} \hat Y = \beta_i$. In a feedforward neural network, we can use the chain rule to obtain the model sensitivities

$$\partial_{X_i} \hat Y = \partial_{X_i} F_{W,b}(X) = \partial_{X_i} \sigma_{W^{(L)},b^{(L)}}^{(L)} \circ \cdots \circ \sigma_{W^{(1)},b^{(1)}}^{(1)}(X).$$

For example, with one hidden layer, $\sigma(x) := \tanh(x)$ and $\sigma_{W^{(1)},b^{(1)}}(X) := \sigma(I^{(1)}) := \sigma(W^{(1)}X + b^{(1)})$:

$$\partial_{X_j} \hat Y = \sum_i w_i^{(2)} \big(1 - \sigma^2(I_i^{(1)})\big) w_{ij}^{(1)},$$

where $\partial_x \sigma(x) = 1 - \sigma^2(x)$. In matrix form, with general $\sigma$, the Jacobian² of $\sigma$ w.r.t. $X$ is $J(I^{(1)}) = D(I^{(1)})W^{(1)}$, and

$$\partial_X \hat Y = W^{(2)} J(I^{(1)}) = W^{(2)} D(I^{(1)}) W^{(1)},$$

where $D_{ii}(I) = \sigma'(I_i)$, $D_{ij} = 0$, $i \ne j$, is a diagonal matrix. Bounds on the sensitivities are given by the product of the weight matrices:

$$\min(W^{(2)}W^{(1)}, 0) \le \partial_X \hat Y \le \max(W^{(2)}W^{(1)}, 0).$$

¹ If Lipschitz continuity is not imposed, then a small change in one of the input values could result in an undesirable large variation in the derivative.
² When $\sigma$ is an identity function, the Jacobian is $J(I^{(1)}) = W^{(1)}$.

3.1 Multiple Hidden Layers

The model sensitivities can be readily generalized to an $L$-layer deep network by evaluating the Jacobian matrix:

$$\partial_X \hat Y = W^{(L)} J(I^{(L-1)}) = W^{(L)} D(I^{(L-1)}) W^{(L-1)} \cdots D(I^{(1)}) W^{(1)}.$$
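The one-hidden-layer sensitivity formula above can be sketched in a few lines; the weights below are illustrative values, not fitted parameters, and the analytic Jacobian is checked against finite differences:

```python
import numpy as np

# Sketch: model sensitivities of a one-hidden-layer tanh network,
# dY/dX = W2 @ D(I1) @ W1 with D(I1) = diag(1 - tanh(I1)^2).
# Illustrative weights, not fitted values.
W1 = np.array([[0.4, -0.2, 0.1],
               [0.3,  0.5, -0.6]])
b1 = np.array([0.1, -0.3])
W2 = np.array([[0.7, -0.4]])
b2 = np.array([0.05])

def forward(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

def sensitivities(x):
    I1 = W1 @ x + b1
    D = np.diag(1.0 - np.tanh(I1) ** 2)   # derivative of tanh on the diagonal
    return W2 @ D @ W1                     # 1 x 3 Jacobian dY/dX

x = np.array([0.2, -0.5, 0.8])
J = sensitivities(x)

# check each sensitivity against a central finite difference
eps = 1e-6
for j in range(3):
    xp, xm = x.copy(), x.copy()
    xp[j] += eps
    xm[j] -= eps
    fd = (forward(xp) - forward(xm))[0] / (2 * eps)
    assert abs(fd - J[0, j]) < 1e-6
```

For $L$ layers, the same pattern applies by chaining `W @ D(I) @ ...` factors from the output back to the input.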
3.2 Example: Step Test

To illustrate our interpretability approach, we shall consider a simple example. The model is trained on the following data generation process, in which the coefficients of the features are stepped and the error is i.i.d. uniform:

$$Y = \sum_i i\, X_i + \epsilon, \qquad X_i \sim U(0, 1).$$

Figure 5.1 shows the ranked importance of the input variables in a neural network with one hidden layer. Our interpretability method is compared with well-known black-box interpretability methods such as Garson's algorithm (Garson 1991) and Olden's algorithm (Olden and Jackson 2002). Our approach is the only technique to interpret the fitted neural network which is consistent with how a linear regression model would interpret the input variables.

4 Interaction Effects

The previous example is too simplistic to illustrate another important property of our interpretability method, namely the ability to capture pairwise interaction terms. The pairwise interaction effects are readily available by evaluating the elements of the Hessian matrix. For example, with one hidden layer, the Hessian takes the form

$$\partial^2_{X_i X_j} \hat Y = W^{(2)} \text{diag}\big(W_i^{(1)}\big) D'(I^{(1)}) W_j^{(1)}, \qquad D'_{ii}(I) = \sigma''(I_i),$$

where it is assumed that the activation function is at least twice differentiable everywhere, e.g. $\tanh(x)$.

Fig. 5.1 Step test: This figure shows the ranked importance of the input variables in a neural network with one hidden layer. (Left) Our sensitivity based approach for input interpretability. (Center) Garson's algorithm. (Right) Olden's algorithm.
Our approach is the only technique to interpret the fitted neural network which is consistent with how a linear regression model would interpret the input variables.

4.1 Example: Friedman Data

To illustrate our input variable and interaction effect ranking approach, we will use a classical nonlinear benchmark regression problem. The input space consists of ten i.i.d. uniform $U(0, 1)$ random variables; however, only five out of these ten actually appear in the true model. The response is related to the inputs according to the formula

$$Y = 10 \sin(\pi X_1 X_2) + 20 (X_3 - 0.5)^2 + 10 X_4 + 5 X_5 + \epsilon,$$

using white noise error $\epsilon \sim N(0, \sigma^2)$. We fit a NN with one hidden layer containing eight units and a weight decay of 0.01 (these parameters were chosen using 5-fold cross-validation) to 500 observations simulated from the above model with $\sigma = 1$. The cross-validated $R^2$ value was 0.94. Figures 5.2 and 5.3, respectively, compare the ranked model sensitivities and ranked interaction terms of the fitted neural network with Garson's and Olden's algorithm.

Fig. 5.2 Friedman test: Ranked model sensitivities of the fitted neural network to the input. (Left) Our sensitivity based approach for input interpretability. (Center) Garson's algorithm. (Right) Olden's algorithm.

5 Bounds on the Variance of the Jacobian

General results on the bound of the variance of the Jacobian for any activation function are difficult to derive. However, we derive the following result for a ReLU activated single-layer feedforward network.
In matrix form, with $\sigma(x) = \max(x, 0)$, the Jacobian $J$ can be written as a linear combination of Heaviside functions:

$$J := J(X) = \partial_X \hat Y(X) = W^{(2)} J(I^{(1)}) = W^{(2)} H\big(W^{(1)}X + b^{(1)}\big) W^{(1)},$$

where $H_{ii}(I) = H(I_i) = \mathbb{1}_{\{I_i^{(1)} > 0\}}$ and $H_{ij} = 0$, $i \ne j$. We assume that the mean of the Jacobian is independent of the number of hidden units, $\mu_{ij} := E[J_{ij}]$. Then we can state the following bound on the Jacobian of the network for the special case when the input is one-dimensional.

Fig. 5.3 Friedman test: Ranked pairwise interaction terms in the fitted neural network to the input. (Left) Our sensitivity based approach for ranking interaction terms. (Center) Garson's algorithm. (Right) Olden's algorithm.

Theorem (Dixon and Polson 2019) If $X \in \mathbb{R}^p$ is i.i.d. and there are $n$ hidden units with ReLU activation, then the variance of a single-layer feedforward network with $K$ outputs is bounded by

$$V[J_{ij}] = \mu_{ij}\,\frac{n-1}{n} < \mu_{ij}, \qquad \forall i \in \{1, \ldots, K\} \text{ and } \forall j \in \{1, \ldots, p\}.$$

See Appendix "Proof of Variance Bound on Jacobian" for the proof.

Remark 5.1 The theorem establishes a negative result for a ReLU activated shallow network: increasing the number of hidden units increases the bound on the variance of the Jacobian, and hence reduces the interpretability of the sensitivities. Note that if we do not assume that the mean of the Jacobian is fixed under varying $n$, then we have the more general bound $V[J_{ij}] \le \mu_{ij}$, and hence the effect of the network architecture on the bound of the variance of the Jacobian is not clear. Note that the theorem holds without (i) distributional assumptions on $X$ other than i.i.d. data and (ii) specifying the number of data points.
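The Heaviside form of the ReLU Jacobian can be verified on a small example; the weights and evaluation point below are illustrative, not drawn from the experiments behind the theorem:

```python
import numpy as np

# Sketch: the Jacobian of a ReLU shallow network written with Heaviside
# activations, J = W2 @ (H(I1) * W1), checked against finite differences
# at a point where no pre-activation sits on a kink.
# Illustrative weights, not fitted values.
W1 = np.array([[1.0,  0.5],
               [-0.5, 1.0],
               [0.3, -1.0]])
b1 = np.array([0.2, -0.1, 0.4])
W2 = np.array([[1.0, -2.0, 0.5]])

def f(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0)

def jacobian(x):
    H = (W1 @ x + b1 > 0).astype(float)   # Heaviside step per hidden unit
    return W2 @ (H[:, None] * W1)         # 1 x 2 Jacobian dY/dx

x = np.array([0.7, -0.3])
J = jacobian(x)                           # J == [[1.15, 0.0]] at this point

# finite-difference check (the pre-activations are far from zero here)
eps = 1e-6
for j in range(2):
    xp, xm = x.copy(), x.copy()
    xp[j] += eps
    xm[j] -= eps
    fd = (f(xp) - f(xm))[0] / (2 * eps)
    assert abs(fd - J[0, j]) < 1e-6
```

At this point only the first and third hidden units are active, so the Jacobian is the corresponding Heaviside-masked combination of rows of `W1`.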
Remark 5.2 This result also suggests that the inputs should be rescaled so that each $\mu_{ij}$, the expected value of the Jacobian, is a small positive value, although it may not be possible to find such a scaling for all $(i, j)$ pairs.

5.1 Chernoff Bounds

We can derive probabilistic bounds on the Jacobians for any choice of activation function. Let $\delta > 0$ and $a_1, \ldots, a_{n-1}$ be reals in $(0, 1]$. Let $X_1, \ldots, X_{n-1}$ be independent Bernoulli trials with $E[X_k] = p_k$, so that $E[J] = \sum_k a_k p_k = \mu$. A Chernoff-type bound exists on deviations of $J$ above the mean:

$$\Pr\big(J > (1 + \delta)\mu\big) < \left(\frac{e^{\delta}}{(1+\delta)^{1+\delta}}\right)^{\mu}. \tag{5.14}$$

A similar bound exists for deviations of $J$ below the mean. For $\gamma \in (0, 1]$:

$$\Pr(J - \mu < -\gamma\mu) < \left(\frac{e^{-\gamma}}{(1-\gamma)^{1-\gamma}}\right)^{\mu}.$$

These bounds are generally weak and are suited to large deviations, i.e. the tail regions. The bounds are shown in Fig. 5.4 for different values of $\mu$, with $\mu$ increasing towards the upper right-hand corner of the plot.

Fig. 5.4 The Chernoff-type bounds for deviations of $J$ above the mean, $\mu$. Various $\mu$ are shown in the plot, with $\mu$ increasing towards the upper right-hand corner.

5.2 Simulated Example

In this section, we demonstrate the estimation properties of neural network sensitivities applied to data simulated from a linear model. We show that the sensitivities in a neural network are consistent with the linear model, even if the neural network model is non-linear. We also show that the confidence intervals, estimated by sampling, converge with increasing hidden units. We generate 400 simulated training samples from the following linear model with i.i.d. Gaussian error:

Table 5.1 This table compares the functional form of the variable sensitivities and values with an OLS estimator.
NN$_0$ is a zero hidden layer feedforward network and NN$_1$ is a one hidden layer feedforward network with 10 hidden neurons and tanh activation functions.

Model | Intercept | Sensitivity of $X_1$ | Sensitivity of $X_2$
OLS   | $\hat\beta_0$ = 0.011 | $\hat\beta_1$ = 1.015 | $\hat\beta_2$ = 1.018
NN$_0$ | $\hat b^{(1)}$ = 0.020 | $\hat W_1^{(1)}$ = 1.018 | $\hat W_2^{(1)}$ = 1.021
NN$_1$ | $\hat W^{(2)}\sigma(\hat b^{(1)}) + \hat b^{(2)}$ = 0.021 | $E[\hat W^{(2)} D(I^{(1)}) \hat W_1^{(1)}]$ = 1.014 | $E[\hat W^{(2)} D(I^{(1)}) \hat W_2^{(1)}]$ = 1.022

$$Y = \beta_1 X_1 + \beta_2 X_2 + \epsilon, \qquad X_1, X_2, \epsilon \sim N(0, 1), \quad \beta_1 = 1, \ \beta_2 = 1.$$

Table 5.1 compares an OLS estimator with a zero hidden layer feedforward network (NN$_0$) and a one hidden layer feedforward network with 10 hidden neurons and tanh activation functions (NN$_1$). The functional form of the first two regression models is equivalent, although the OLS estimator has been computed using a matrix solver, whereas the zero hidden layer network parameters have been fitted with stochastic gradient descent. The fitted parameter values will vary slightly with each optimization as the stochastic gradient descent is randomized. However, the sensitivity terms are given in closed form and easily mapped to the linear model. In an industrial setting, such a one-to-one mapping is useful for migrating to a deep factor model where, for model validation purposes, compatibility with linear models should be recovered in a limiting case. Clearly, if the data is not generated from a linear model, then the parameter values would vary across models.

Fig. 5.5 This figure shows the empirical distribution of the sensitivities $\hat\beta_1$ and $\hat\beta_2$. The sharpness of the distribution is observed to converge with the number of hidden units. (a) Density of $\hat\beta_1$. (b) Density of $\hat\beta_2$.

Table 5.2 This table shows the moments and 99% confidence interval of the empirical distribution of the sensitivity $\hat\beta_1$.
The sharpness of the distribution is observed to converge monotonically with the number of hidden units.

Hidden units | Mean | Median | Std. dev. | 1% C.I. | 99% C.I.
2   | 0.980875   | 1.0232913 | 0.10898393  | 0.58121675 | 1.0729908
10  | 0.9866159  | 1.0083131 | 0.056483902 | 0.76814914 | 1.0322522
50  | 0.99183553 | 1.0029879 | 0.03123002  | 0.8698967  | 1.0182846
100 | 1.0071343  | 1.0175397 | 0.028034585 | 0.89689034 | 1.0296803
200 | 1.0152218  | 1.0249312 | 0.026156902 | 0.9119074  | 1.0363332

Table 5.3 This table shows the moments and the 99% confidence interval of the empirical distribution of the sensitivity $\hat\beta_2$. The sharpness of the distribution is observed to converge monotonically with the number of hidden units.

Hidden units | Mean | Median | Std. dev. | 1% C.I. | 99% C.I.
2   | 0.98129386 | 1.0233982 | 0.10931312  | 0.5787732  | 1.073728
10  | 0.9876832  | 1.0091512 | 0.057096474 | 0.76264584 | 1.0339714
50  | 0.9903236  | 1.0020974 | 0.031827927 | 0.86471796 | 1.0152498
100 | 0.9842479  | 0.9946766 | 0.028286876 | 0.87199813 | 1.0065105
200 | 0.9976638  | 1.0074166 | 0.026751818 | 0.8920307  | 1.0189484

Figure 5.5 and Tables 5.2 and 5.3 show the empirical distribution of the fitted sensitivities using the single hidden layer model with increasing hidden units. The sharpness of the distributions is observed to converge monotonically with the number of hidden units. The confidence intervals are estimated under a nonparametric distribution. In general, provided the weights and biases of the network are finite, the variances of the sensitivities are bounded for any input and choice of activation function. We do not recommend using ReLU activation because it does not permit identification of the interaction terms and has provably non-convergent sensitivity variances as a function of the number of hidden units (see Appendix "Proof of Variance Bound on Jacobian").

6 Factor Modeling

Rosenberg and Marathe (1976) introduced a cross-sectional fundamental factor model to capture the effects of macroeconomic events on individual securities.
The choice of factors is microeconomic characteristics: essentially common factors, such as industry membership, financial structure, or growth orientation (Nielsen and Bender 2010). The BARRA fundamental factor model expresses the linear relationship between $K$ fundamental factors and $N$ asset returns:

$$r_t = B_t f_t + \epsilon_t, \qquad t = 1, \ldots, T,$$

where $B_t = [1 \,|\, \beta_1(t) \,|\, \cdots \,|\, \beta_K(t)]$ is the $N \times (K+1)$ matrix of known factor loadings (betas): $\beta_{i,k}(t) := \big(\beta_k(t)\big)_i$ is the exposure of asset $i$ to factor $k$ at time $t$. The factors are asset-specific attributes such as market capitalization, industry classification, and style classification. $f_t = [\alpha_t, f_{1,t}, \ldots, f_{K,t}]$ is the $(K+1)$-vector of unobserved factor realizations at time $t$, including $\alpha_t$. $r_t$ is the $N$-vector of asset returns at time $t$. The errors are assumed independent of the factor realizations, $\rho(f_{i,t}, \epsilon_{j,t}) = 0$, $\forall i, j, t$, with Gaussian error, $E[\epsilon_{j,t}^2] = \sigma^2$.

6.1 Non-linear Factor Models

We can extend the linear model to a non-linear cross-sectional fundamental factor model of the form

$$r_t = F_t(B_t) + \epsilon_t,$$

where $r_t$ are asset returns and $F_t : \mathbb{R}^K \to \mathbb{R}$ is a differentiable non-linear function that maps the $i$th row of $B_t$ to the $i$th asset return at time $t$. The map is assumed to incorporate a bias term so that $F_t(0) = \alpha_t$. In the special case when $F_t(B_t)$ is linear, the map is $F_t(B_t) = B_t f_t$. A key feature is that we do not assume that $\epsilon_t$ is described by a parametric distribution, such as a Gaussian distribution. In our example, we shall treat $\epsilon_t$ as i.i.d.; however, the methodology can be extended to non-i.i.d. errors as in Dixon and Polson (2019). In our setup, the model shall just be used to predict the next period's returns only, and stationarity of the factor realizations is not required. We approximate the non-linear map $F_t(B_t)$ with a feedforward neural network cross-sectional factor model:

$$r_t = F_{W_t, b_t}(B_t) + \epsilon_t,$$

where $F_{W_t, b_t}$ is a deep neural network with $L$ layers.
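In the linear special case, fitting the cross-section at one period $t$ amounts to recovering $f_t$ (including $\alpha_t$) by least squares. A minimal sketch with synthetic loadings and factor realizations (not Bloomberg data):

```python
import numpy as np

# Sketch: one cross-sectional fit of the linear factor model
# r_t = B_t f_t + eps_t at a single period t, recovering the factor
# realizations f_t (including alpha_t) by least squares.
# N assets, K factors; all data here are synthetic.
rng = np.random.default_rng(3)
N, K = 100, 4
B = np.column_stack([np.ones(N), rng.normal(size=(N, K))])  # [1 | beta_1 .. beta_K]
f_true = np.array([0.01, 0.5, -0.3, 0.2, 0.1])              # [alpha, f_1 .. f_K]
r = B @ f_true + 0.01 * rng.normal(size=N)                  # asset returns + noise

# least-squares estimate of the unobserved factor realizations
f_hat, *_ = np.linalg.lstsq(B, r, rcond=None)
```

Repeating this fit for each period $t = 1, \ldots, T$ yields the time series of factor realizations; the neural network version replaces the linear map `B @ f` with a fitted $F_{W_t, b_t}$.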
6.2 Fundamental Factor Modeling

This section presents an application of deep learning to a toy fundamental factor model. Factor models in practice include significantly more fundamental factors than are used here, but the purpose here is to illustrate the application of interpretable deep learning to financial data. We define the universe as the top 250 stocks from the S&P 500 index, ranked by market cap. Factors are given by Bloomberg and reported monthly over a hundred-month period beginning in February 2008. We remove stocks with too many missing factor values, leaving 218 stocks. The historical factors are inputs to the model and are standardized to enable model interpretability. These factors are (i) current enterprise value; (ii) Price-to-Book ratio; (iii) current enterprise value to trailing 12-month EBITDA; (iv) Price-to-Sales ratio; (v) Price-to-Earnings ratio; and (vi) log market cap. The responses are the monthly asset returns for each stock in our universe, based on the daily adjusted closing prices of the stocks. We use TensorFlow (Abadi et al. 2016) to implement a two hidden layer feedforward network, and we develop a custom implementation of the least squares error and variable sensitivities, which is available in the deep factor models notebook. The OLS regression is implemented by the Python StatsModels module. All deep learning results are shown using L1 regularization and tanh activation functions. The number of hidden units and regularization parameters are found by three-fold cross-validation and reported alongside the results. Figure 5.6 compares the performance of an OLS estimator with the feedforward neural network with 10 hidden units in the first hidden layer, 10 hidden units in the second layer, and $\lambda_1 = 0.001$. Figure 5.7 shows the in-sample MSE as a function of the number of hidden units in the hidden layer.
The neural networks are trained here without L1 regularization to demonstrate the effect of solely increasing the number of hidden units in the first layer. Increasing the number of hidden units reduces the bias in the model. Figure 5.8 shows the effect of L1 regularization on the MSE errors for a network with 10 units in each of the two hidden layers. Increasing the level of L1 regularization increases the in-sample bias but reduces the out-of-sample bias, and hence the variance of the estimation error. Figure 5.9 compares the distribution of sensitivities to each factor over the entire 100-month period using the neural network (top) and OLS regression (bottom).

Fig. 5.6 This figure compares the in-sample and out-of-sample performance of an OLS estimator (OLS) with a feedforward neural network (NN), as measured by the mean squared error (MSE). The neural network is observed to always exhibit slightly lower out-of-sample MSE, although the effect of deep networks on this problem is marginal because the dataset is too simplistic. (a) In-sample error. (b) Out-of-sample error.

Fig. 5.7 This figure shows the in-sample MSE as a function of the number of hidden units in the hidden layer. Increasing the number of hidden units reduces the bias in the model.

Fig. 5.8 These figures show the effect of L1 regularization on the MSE errors for a network with 10 neurons in each of the two hidden layers. (a) In-sample. (b) Out-of-sample.

Fig. 5.9 The distribution of sensitivities to each factor over the entire 100-month period using the neural network (top).
The sensitivities are sorted in ascending order from left to right by their median values. The same sensitivities using OLS linear regression (bottom) are sorted in ascending order from left to right by their median values.

We observe that the OLS regression is much more sensitive to the factors than the NN. We further note that the NN ranks the top sensitivities differently to OLS. Clearly, the above results are purely illustrative of the interpretability methodology and are not intended to be representative of a real-world factor model. Such a choice of factors is observed to provide little benefit to the information ratios of a simple stock selection strategy.

Larger Dataset For completeness, we provide evidence that our neural network factor model generates positive and higher information ratios than OLS when used to sort portfolios from a larger universe, using up to 50 factors (see Table 5.4 for a description of the factors). The dataset is not provided due to data licensing restrictions. We define the universe as 3290 stocks from the Russell 3000 index. Factors are given by Bloomberg and reported monthly over the period from November 2008 to November 2018. We train a two-hidden-layer deep network with 50 hidden units using ReLU activation. Figure 5.10 compares the out-of-sample performance of neural networks and OLS regression by the MSE (left) and the information ratios of a portfolio selection strategy which selects the n stocks with the highest predicted monthly returns (right). The information ratios are evaluated for various size portfolios, using the Russell 3000 index as the benchmark. Also shown, for control, are randomly selected portfolios.

Table 5.4 A short description of the factors used in the Russell 3000 deep learning factor model demonstrated at the end of this chapter

Value factors
- B/P: Book to price
- CF/P: Cash flow to price
- E/P: Earnings to price
- S/EV: Sales to enterprise value (EV).
EV is given by EV = Market Cap + LT Debt + max(ST Debt − Cash, 0), where LT (ST) stands for long (short) term
- EB/EV: EBITDA to EV
- FE/P: Forecasted E/P. Forecast earnings are calculated from Bloomberg earnings consensus estimates data. For coverage reasons, Bloomberg uses the 1-year and 2-year forward earnings
- Dividend yield: the exposure to this factor is just the most recently announced annual net dividends divided by the market price. Stocks with high dividend yields have high exposures to this factor

Size factors
- MC: Log (market capitalization)
- S: Log (sales)
- TA: Log (total assets)

Trading activity factors
- TrA: Trading activity is a turnover-based measure. Bloomberg focuses on turnover, which is trading volume normalized by shares outstanding; this indirectly controls for the Size effect. The exponentially weighted average (EWMA) of the ratio of shares traded to shares outstanding. In addition, to mitigate the impact of sharp short-lived spikes in trading volume, Bloomberg winsorizes the data: first, daily trading volume data is compared to the long-term EWMA volume (180-day half-life), then the data is capped at 3 standard deviations away from the EWMA average

Earnings variability factors
- Earnings volatility to total assets: earnings volatility is measured over the last 5 years / median total assets over the last 5 years
- Cash flow volatility to total assets: cash flow volatility is measured over the last 5 years / median total assets over the last 5 years
- Sales volatility to total assets: sales volatility over the last 5 years / median total assets over the last 5 years

Volatility factors
- RV: Rolling volatility, which is the return volatility over the latest 252 trading days
- CB: Rolling CAPM beta, which is the regression coefficient from the rolling window regression of stock returns on local index returns

Growth factors
- TAG: Total asset growth is the 5-year average growth in total assets divided by the average total assets over the last 5 years
- EG: Earnings growth
is the 5-year average growth in earnings divided by the average total assets over the last 5 years

GSIC sectorial codes
- (S)ector: {10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60}
- (I)ndustry (G)roup: {10, 20, 30, 40, 50}
- (I)ndustry: {10, 20, 30, 40, 50, 60, 70}
- (S)ub-(I)ndustry: {10, 15, 20, 25, 30, 35, 40, 45, 50, 60, 70, 80}

Fig. 5.10 (a) The out-of-sample MSE is compared between OLS and a two-hidden layer deep network applied to a universe of 3290 stocks from the Russell 3000 index over the period from November 2008 to November 2018. (b) The information ratios of a portfolio selection strategy which selects the n stocks from the universe with the highest predicted monthly returns. The information ratios are evaluated for various size portfolios. The information ratios are based on out-of-sample predicted asset returns using OLS regression, neural networks, and randomized selection with no predictive power.

Finally, Fig. 5.11 compares the distribution of sensitivities to each factor over the entire 100-month period using the neural network (top) and OLS regression (bottom). The sensitivities are sorted in ascending order from left to right by their median values. We observe that the NN ranking of the factors differs substantially from the OLS regression.

Fig. 5.11 The distribution of factor model sensitivities to each factor over the entire ten-year period using the neural network applied to the Russell 3000 asset factor loadings (top). The sensitivities are sorted in ascending order from left to right by their median values. The same sensitivities using OLS linear regression (bottom). See Table 5.4 for a short description of the fundamental factors.

7 Summary

An important aspect in the adoption of neural networks in factor modeling is the existence of a statistical framework which provides the transparency and statistical interpretability of linear least squares estimation.
Moreover, one should expect to use such a framework applied to linear data and obtain similar results to linear regression, thus isolating the effects of non-linearity from the effects of using different optimization algorithms and model implementation environments. In this chapter, we introduce a deep learning framework for interpretable cross-sectional modeling and demonstrate its application to a simple fundamental factor model. Deep learning generalizes the linear fundamental factor models by capturing non-linearity, interaction effects, and non-parametric shocks in financial econometrics. This framework provides interpretability, with confidence intervals, and ranking of the factor importance and interaction effects. In the case when the network contains no hidden layers, our approach recovers a linear fundamental factor model. The framework allows the impact of non-linearity and of non-parametric treatment of the error on the factors to be assessed over time, and forms the basis for generalized interpretability of fundamental factors.

8 Exercises

Exercise 5.1* Consider the following data generation process

$$ Y = X_1 + X_2 + \epsilon, \qquad X_1, X_2, \epsilon \sim N(0, 1), $$

i.e. $\beta_0 = 0$ and $\beta_1 = \beta_2 = 1$.

a. For this data, write down the mathematical expression for the sensitivities of the fitted neural network when the network has
– zero hidden layers;
– one hidden layer, with n unactivated hidden units;
– one hidden layer, with n tanh activated hidden units;
– one hidden layer, with n ReLU activated hidden units; and
– two hidden layers, each with n tanh activated hidden units.

Exercise 5.2** Consider the following data generation process

$$ Y = X_1 + X_2 + X_1 X_2 + \epsilon, \qquad X_1, X_2 \sim N(0, 1), \quad \epsilon \sim N(0, \sigma_n^2), $$

i.e. $\beta_0 = 0$ and $\beta_1 = \beta_2 = \beta_{12} = 1$, where $\beta_{12}$ is the interaction term. $\sigma_n^2$ is the variance of the noise and $\sigma_n = 0.01$.

a.
For this data, write down the mathematical expression for the interaction term (i.e., the off-diagonal components of the Hessian matrix) of the fitted neural network when the network has
– zero hidden layers;
– one hidden layer, with n unactivated hidden units;
– one hidden layer, with n tanh activated hidden units;
– one hidden layer, with n ReLU activated hidden units; and
– two hidden layers, each with n tanh activated hidden units.

Why is the ReLU activated network problematic for estimating interaction terms?

8.1 Programming Related Questions*

Exercise 5.3* For the same problem as in the previous exercise, use 5000 simulations to generate a regression training dataset for the neural network with one hidden layer. Produce a table showing how the mean and standard deviation of the sensitivities $\beta_i$ behave as the number of hidden units is increased. Compare your results with tanh and ReLU activation. What do you conclude about which activation function to use for interpretability? Note that you should use the notebook Deep-Learning-Interpretability.ipynb as the starting point for experimental analysis.

Exercise 5.4* Generalize the sensitivities function in Exercise 5.3 to L layers for either tanh or ReLU activated hidden layers. Test your function on the data generation process given in Exercise 5.1.

Exercise 5.5** Fixing the total number of hidden units, how do the mean and standard deviation of the sensitivities $\beta_i$ behave as the number of layers is increased? Your answer should compare using either tanh or ReLU activation functions. Note, do not mix the type of activation functions across layers. What do you conclude about the effect of the number of layers, keeping the total number of units fixed, on the interpretability of the sensitivities?

Exercise 5.6** For the same data generation process as the previous exercise, use 5000 simulations to generate a regression training set for the neural network with one hidden layer.
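For the one-tanh-hidden-layer case in Exercise 5.1, the sensitivity of a network $\hat{y} = W^{(2)}\tanh(W^{(1)}x + b^{(1)}) + b^{(2)}$ is $\partial \hat{y}/\partial x = W^{(2)} \mathrm{diag}(1 - \tanh^2(W^{(1)}x + b^{(1)}))\, W^{(1)}$. A minimal numerical check of this expression against central finite differences (the weights here are random placeholders, not fitted values):

```python
import numpy as np

rng = np.random.default_rng(1)

# One tanh hidden layer with n units, scalar output (illustrative weights)
n, p = 5, 2
W1, b1 = rng.standard_normal((n, p)), rng.standard_normal(n)
W2, b2 = rng.standard_normal(n), 0.0

def f(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

def sensitivity(x):
    # dy/dx = W2 diag(1 - tanh^2(W1 x + b1)) W1
    h = np.tanh(W1 @ x + b1)
    return (W2 * (1.0 - h ** 2)) @ W1

x = rng.standard_normal(p)
eps = 1e-6
fd = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
               for e in np.eye(p)])
```

The analytic gradient and the finite-difference estimate agree to numerical precision, which is the kind of sanity check the exercises ask for before tabulating sensitivities over many simulations.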
Produce a table showing how the mean and standard deviation of the interaction term behave as the number of hidden units is increased, fixing all other parameters. What do you conclude about the effect of the number of hidden units on the interpretability of the interaction term? Note that you should use the notebook Deep-Learning-Interaction.ipynb as the starting point for experimental analysis.

Appendix

Other Interpretability Methods

Partial Dependence Plots (PDPs) evaluate the expected output w.r.t. the marginal density function of each input variable, and allow the importance of the predictors to be ranked. More precisely, partitioning the data $X$ into an interest set, $X_s$, and its complement, $X_c = X \setminus X_s$, the "partial dependence" of the response on $X_s$ is defined as

$$ f_s(X_s) = \mathbb{E}_{X_c}\left[\hat{f}(X_s, X_c)\right] = \int \hat{f}(X_s, X_c)\, p_c(X_c)\, dX_c, \qquad (5.20) $$

where $p_c(X_c)$ is the marginal probability density of $X_c$: $p_c(X_c) = \int p(x)\, dx_s$. Equation (5.20) can be estimated from a set of training data by

$$ \bar{f}_s(X_s) = \frac{1}{n} \sum_{i=1}^{n} \hat{f}(X_s, X_{i,c}), $$

where $X_{i,c}$ ($i = 1, 2, \ldots, n$) are the observations of $X_c$ in the training set; that is, the effects of all the other predictors in the model are averaged out.

There are a number of challenges with using PDPs for model interpretability. First, interaction effects are ignored by the simplest version of this approach. While Greenwell et al. (2018) propose a methodology extension to potentially address the modeling of interaction effects, PDPs do not provide a 1-to-1 correspondence with the coefficients in a linear regression. Instead, we would like to know, under strict control conditions, how the fitted weights and biases of the MLP correspond to the fitted coefficients of linear regression.
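The Monte Carlo estimator of Eq. (5.20) can be sketched for a hypothetical fitted model $\hat{f}(x_1, x_2) = x_1 + x_2 + x_1 x_2$ (an assumed toy function standing in for a trained network):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical fitted model (not a trained network)
def f_hat(x1, x2):
    return x1 + x2 + x1 * x2

# Training observations of the complement set X_c = {X2}
X2 = rng.standard_normal(4000)

def partial_dependence(x1):
    # f_bar_s(x_s) = (1/n) sum_i f_hat(x_s, X_{i,c}): average out X_c
    return np.mean(f_hat(x1, X2))
```

Because $\mathbb{E}[X_2] \approx 0$ here, the estimated partial dependence of $X_1$ is approximately the line $x_1 \mapsto x_1$; the $x_1 x_2$ interaction is averaged away, which illustrates the first limitation discussed above.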
Moreover, in the context of neural networks, by treating the model as a black box, it is difficult to gain theoretical insight into how the choice of the network architecture affects its interpretability from a probabilistic perspective.

Garson (1991) partitions hidden-output connection weights into components associated with each input neuron using absolute values of connection weights. Garson's algorithm uses the absolute values of the connection weights when calculating variable contributions, and therefore does not provide the direction of the relationship between the input and output variables. Olden and Jackson (2002) determine the relative importance, $r_{ij} = [R]_{ij}$, of the $i$th output to the $j$th predictor variable of the model as a function of the weights, according to the expression

$$ r_{ij} = \sum_k W^{(2)}_{jk} W^{(1)}_{ki}. $$

The approach does not account for the non-linearity introduced by the activation, which is one of the most critical aspects of the model. Furthermore, the approach presented was limited to a single hidden layer.

Proof of Variance Bound on Jacobian

Proof The Jacobian can be written in matrix element form as

$$ J_{ij} = [\partial_X \hat{Y}]_{ij} = \sum_k w^{(2)}_{ik} w^{(1)}_{kj} H(I^{(1)}_k) = \sum_k c_k H_k(I), $$

where $c_k := c_{ijk} := w^{(2)}_{ik} w^{(1)}_{kj}$ and $H_k(I) := H(I^{(1)}_k)$ is the Heaviside function. As a linear combination of indicator functions, we have

$$ J_{ij} = \sum_{k=1}^{n-1} a_k 1_{\{I^{(1)}_k > 0,\, I^{(1)}_{k+1} \le 0\}} + a_n 1_{\{I^{(1)}_n > 0\}}, \qquad a_k := \sum_{i=1}^{k} c_i. $$

Alternatively, the Jacobian can be expressed in terms of a weighted sum of independent Bernoulli trials involving $X$:

$$ J_{ij} = \sum_{k=1}^{n-1} a_k 1_{\{w^{(1)}_k X > -b^{(1)}_k,\, w^{(1)}_{k+1} X \le -b^{(1)}_{k+1}\}} + a_n 1_{\{w^{(1)}_n X > -b^{(1)}_n\}}. $$

Without loss of generality, consider the case when $p = 1$, i.e. the dimension of the input space is one. Then Eq. 5.25 simplifies to

$$ J_{ij} = \sum_k a_k 1_{\{x_k < X \le x_{k+1}\}}, $$

so that, with $p_k := \mathbb{E}[1_{\{x_k < X \le x_{k+1}\}}]$,

$$ \mathbb{V}[J_{ij}] = \sum_k a_k p_k (1 - p_k). \qquad (5.28) $$

Under the assumption that the mean of the Jacobian is invariant to the number of hidden units, or if the weights are constrained so that the mean is constant, then the weights are $a_k = \frac{\mu_{ij}}{n p_k}$.
Then the variance is bounded by the mean:

$$ \mathbb{V}[J_{ij}] = \mu_{ij} \frac{n-1}{n} < \mu_{ij}. $$

If we relax the assumption that $\mu_{ij}$ is independent of $n$ then, under the original weights $a_k := \sum_{i=1}^{k} c_i$:

$$ \mathbb{V}[J_{ij}] = \sum_k a_k p_k (1 - p_k) \le \sum_k a_k p_k = \mu_{ij} \le \sum_k a_k. $$

Russell 3000 Factor Model Description

Python Notebooks

The notebooks provided in the accompanying source code repository are designed to gain familiarity with how to implement interpretable deep networks. The examples include toy simulated data and a simple factor model. Further details of the notebooks are included in the README.md file.

References

Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., et al. (2016). TensorFlow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation, OSDI'16 (pp. 265–283).

Dimopoulos, Y., Bourret, P., & Lek, S. (1995, Dec). Use of some sensitivity criteria for choosing networks with good generalization ability. Neural Processing Letters, 2(6), 1–4.

Dixon, M. F., & Polson, N. G. (2019). Deep fundamental factor models.

Garson, G. D. (1991, April). Interpreting neural-network connection weights. AI Expert, 6(4), 46–51.

Greenwell, B. M., Boehmke, B. C., & McCarthy, A. J. (2018, May). A simple and effective model-based variable importance measure. arXiv e-prints, arXiv:1805.04755.

Nielsen, F., & Bender, J. (2010). The fundamentals of fundamental factor models. Technical Report 24, MSCI Barra Research Paper.

Olden, J. D., & Jackson, D. A. (2002). Illuminating the "black box": a randomization approach for understanding variable contributions in artificial neural networks. Ecological Modelling, 154(1), 135–150.

Rosenberg, B., & Marathe, V. (1976). Common factors in security returns: Microeconomic determinants and macroeconomic correlates. Research Program in Finance Working Papers 44, University of California at Berkeley.
Part II Sequential Learning

Chapter 6 Sequence Modeling

This chapter provides an overview of the most important modeling concepts in financial econometrics. Such methods form the conceptual basis and performance baseline for more advanced neural network architectures presented in the next chapter. In fact, each type of architecture is a generalization of many of the models presented here. This chapter is especially useful for students from an engineering or science background, with little exposure to econometrics and time series analysis.

1 Introduction

More often than not in finance, the data consists of observations of a variable over time, e.g. stock prices, bond yields, etc. In such a case, the observations are not independent over time; rather, observations are often strongly related to their recent histories. For this reason, the ordering of the data matters (unlike cross-sectional data). This is in contrast to most methods of machine learning, which assume that the data is i.i.d. Moreover, algorithms and techniques for fitting machine learning models, such as back-propagation for neural networks and cross-validation for hyperparameter tuning, must be modified for use on time series data.

"Stationarity" of the data is a further important delineation necessary to successfully apply models to time series data. If the estimated moments of the data change depending on the window of observation, then the modeling problem is much more difficult. Neural network approaches to addressing these challenges are presented in the next chapter.

An additional consideration is the data frequency, i.e. the frequency at which the data is observed, assuming that the timestamps are uniform. In general, the frequency of the data governs the frequency of the time series model. For example, suppose that we seek to predict the week-ahead stock price from daily historical adjusted close prices on business days.
In such a case, we would build a model from daily prices and then predict 5 daily steps ahead, rather than building a model using only weekly intervals of data. In this chapter we shall primarily consider applications of parametric, linear, and frequentist models to uniform time series data. Please note that the material presented in this chapter is not intended as a substitute for a more comprehensive and rigorous treatment of econometrics, but rather to provide enough background for Chap. 8.

© Springer Nature Switzerland AG 2020 M. F. Dixon et al., Machine Learning in Finance, https://doi.org/10.1007/978-3-030-41068-1_6

Chapter Objectives

By the end of this chapter, the reader should expect to accomplish the following:
– Explain and analyze linear autoregressive models;
– Understand the classical approaches to identifying, fitting, and diagnosing autoregressive models;
– Apply simple heteroscedastic regression techniques to time series data;
– Understand how exponential smoothing can be used to predict and filter time series; and
– Project multivariate time series data onto lower dimensional spaces with principal component analysis.

Note that this chapter can be skipped if the reader is already familiar with econometrics. This chapter is especially useful for students from an engineering or physical sciences background, with little exposure to econometrics and time series analysis.

2 Autoregressive Modeling

We begin by considering a single variable $Y_t$, indexed by $t$ to indicate that the variable changes over time. This variable may depend on other variables $X_t$; however, we shall simply consider the case when the dependence of $Y_t$ is on past observations of itself; this is known as univariate time series analysis.
2.1 Preliminaries

Before we can build a model to predict $Y_t$, we recall some basic definitions and terminology, starting with a continuous-time setting and then continuing thereafter solely in a discrete-time setting.

> Stochastic Process
A stochastic process is a sequence of random variables, indexed by continuous time: $\{Y_t\}_{t=-\infty}^{\infty}$.

> Time Series
A time series is a sequence of observations of a stochastic process at discrete times over a specific interval: $\{y_t\}_{t=1}^{n}$.

> Autocovariance
The $j$th autocovariance of a time series is $\gamma_{jt} := \mathbb{E}[(y_t - \mu_t)(y_{t-j} - \mu_{t-j})]$, where $\mu_t := \mathbb{E}[y_t]$.

> Covariance (Weak) Stationarity
A time series is weakly (or wide-sense) covariance stationary if it has a time-constant mean and autocovariances of all orders:

$$ \mu_t = \mu, \qquad \gamma_{jt} = \gamma_j, \quad \forall t. $$

As we have seen, this implies that $\gamma_j = \gamma_{-j}$: the autocovariances depend only on the interval between observations, not the time of the observations.

> Autocorrelation
The $j$th autocorrelation, $\tau_j$, is just the $j$th autocovariance divided by the variance: $\tau_j = \gamma_j / \gamma_0$.

> White Noise
White noise, $\epsilon_t$, is i.i.d. error which satisfies all three conditions:
a. $\mathbb{E}[\epsilon_t] = 0$, $\forall t$;
b. $\mathbb{V}[\epsilon_t] = \sigma^2$, $\forall t$; and
c. $\epsilon_t$ and $\epsilon_s$ are independent, $t \neq s$, $\forall t, s$.

Gaussian white noise just adds a normality assumption to the error. White noise error is often referred to as a "disturbance," "shock," or "innovation" in the financial econometrics literature.

With these definitions in place, we are now ready to define autoregressive processes. Tacit in our usage of these models is that the time series exhibits autocorrelation.1 If this is not the case, then we would choose to use the cross-sectional models seen in Part I of this book.

2.2 Autoregressive Processes

Autoregressive models are parametric time series models describing $y_t$ as a linear combination of $p$ past observations and white noise.
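The definitions above can be made concrete with a small numerical check: for simulated white noise, the sample lag-1 autocorrelation should be near zero, while the lag-0 autocorrelation is exactly one by construction (the sample size and seed here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

def autocorr(y, j):
    # j-th sample autocorrelation: gamma_hat_j / gamma_hat_0, sample mean removed
    y = y - y.mean()
    gamma0 = np.mean(y * y)
    gammaj = np.mean(y[j:] * y[:-j]) if j > 0 else gamma0
    return gammaj / gamma0

eps = rng.standard_normal(20000)   # Gaussian white noise
tau1 = autocorr(eps, 1)
```

For i.i.d. noise of length $n$, the sample autocorrelations have standard error of roughly $1/\sqrt{n}$, which is the basis of the identification tests mentioned later in the chapter.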
They are referred to as "processes" as they are representative of random processes which are dependent on one or more past values.

> AR(p) Process
The $p$th order autoregressive process of a variable $y_t$ depends only on the previous values of the variable plus a white noise disturbance term:

$$ y_t = \mu + \sum_{i=1}^{p} \phi_i y_{t-i} + \epsilon_t, $$

where $\epsilon_t$ is independent of $\{y_{t-i}\}_{i=1}^{p}$. We refer to $\mu$ as the drift term, and $p$ is referred to as the order of the model.

Defining the polynomial function $\phi(L) := 1 - \phi_1 L - \phi_2 L^2 - \cdots - \phi_p L^p$, where $y_{t-j}$ is the $j$th lagged observation of $y_t$ given by the Lag operator or Backshift operator, $y_{t-j} = L^j[y_t]$, the AR(p) process can be expressed in the more compact form

$$ \phi(L)[y_t] = \mu + \epsilon_t. $$

This compact form shall be conducive to analysis describing the properties of the AR(p) process. We mention in passing that the identification of the parameter $p$ from data, i.e. the number of lags in the model, rests on the data being weakly covariance stationary.2

2.3 Stability

An important property of AR(p) processes is whether past disturbances exhibit an increasing or declining impact on the current value of $y$ as the lag increases. For example, think of the impact of a news event about a public company on the stock price movement over the next minute versus if the same news event had occurred, say, six months in the past. One should expect that the latter is much less significant than the former. To see this, consider the AR(1) process and write $y_t$ in terms of the inverse of $\phi(L)$:

$$ y_t = \phi^{-1}(L)[\mu + \epsilon_t], $$

so that for an AR(1) process

$$ y_t = \frac{1}{1 - \phi L}[\mu + \epsilon_t] = \sum_{j=0}^{\infty} \phi^j L^j [\mu + \epsilon_t], $$

and the infinite sum will be stable, i.e. the $\phi^j$ terms do not grow with $j$, provided that $|\phi| < 1$. Conversely, unstable AR(p) processes exhibit the counter-intuitive behavior that the error disturbance terms become increasingly influential as the lag increases.

1 We shall identify statistical tests for establishing autocorrelation later in this chapter.
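A stable AR(1) can be simulated directly from its defining recursion; with $|\phi| < 1$ the process settles to a stationary distribution with mean $\mu/(1-\phi)$ (the parameter values below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_ar1(mu, phi, sigma, n, burn=500):
    # y_t = mu + phi * y_{t-1} + eps_t ; stable when |phi| < 1
    y = np.zeros(n + burn)
    eps = sigma * rng.standard_normal(n + burn)
    for t in range(1, n + burn):
        y[t] = mu + phi * y[t - 1] + eps[t]
    return y[burn:]          # discard the burn-in so the start-up is forgotten

y = simulate_ar1(mu=1.0, phi=0.5, sigma=1.0, n=50000)
# Stationary mean of an AR(1): E[y] = mu / (1 - phi) = 2
```

The burn-in period is the simulation counterpart of the geometric decay of the impulse response: the influence of the arbitrary initial condition dies off like $\phi^t$.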
We can calculate the Impulse Response Function (IRF), $\frac{\partial y_t}{\partial \epsilon_{t-j}}$ $\forall j$, to characterize the influence of past disturbances. For the AR(1) model, the IRF is given by $\phi^j$ and hence is geometrically decaying when the model is stable.

2.4 Stationarity

Another desirable property of AR(p) models is that their autocorrelation function converges to zero as the lag increases. A sufficient condition for convergence is stationarity. From the characteristic equation

$$ \phi(z) = \left(1 - \frac{z}{\lambda_1}\right)\left(1 - \frac{z}{\lambda_2}\right)\cdots\left(1 - \frac{z}{\lambda_p}\right) = 0, $$

it follows that an AR(p) model is strictly stationary and ergodic if all the roots lie outside the unit circle in the complex plane $\mathbb{C}$. That is, $|\lambda_i| > 1$, $i \in \{1, \ldots, p\}$, where $|\cdot|$ is the modulus of a complex number. Note that if the characteristic equation has at least one unit root, with all other roots lying outside the unit circle, then this is a special case of non-stationarity.

> Stationarity of Random Walk
We can show that the following random walk (zero mean AR(1) process) is not strictly stationary:

$$ y_t = y_{t-1} + \epsilon_t. $$

Written in compact form this gives $\phi(L)[y_t] = \epsilon_t$, $\phi(L) = 1 - L$, and the characteristic polynomial, $\phi(z) = 1 - z = 0$, implies the real root $z = 1$. Hence the root is on the unit circle and the model is a special case of non-stationarity.

Finding roots of polynomials is equivalent to finding eigenvalues. The Cayley–Hamilton theorem states that the roots of any polynomial can be found by turning it into a matrix and finding the eigenvalues. Given the $p$ degree polynomial3

$$ q(z) = c_0 + c_1 z + \cdots + c_{p-1} z^{p-1} + z^p, $$

we define the $p \times p$ companion matrix

$$ C := \begin{pmatrix} 0 & 0 & \cdots & 0 & -c_0 \\ 1 & 0 & \cdots & 0 & -c_1 \\ 0 & 1 & \ddots & \vdots & \vdots \\ \vdots & & \ddots & 0 & -c_{p-2} \\ 0 & \cdots & 0 & 1 & -c_{p-1} \end{pmatrix}, $$

2 Statistical tests for identifying the order of the model will be discussed later in the chapter.
3 Notice that the $z^p$ coefficient is 1.
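The companion-matrix construction can be verified numerically for a toy AR(2); the coefficients $\phi_1 = 0.5$, $\phi_2 = 0.3$ are arbitrary illustrative choices, and the eigenvalues of the companion matrix are compared against the roots of the characteristic polynomial computed directly:

```python
import numpy as np

# AR(2) characteristic polynomial: Phi(z) = 1 - 0.5 z - 0.3 z^2
phi = np.array([0.5, 0.3])
p = len(phi)

# Monic polynomial q(z) = -Phi(z)/phi_p = c0 + c1 z + ... + z^p
c = np.zeros(p)
c[0] = -1.0 / phi[-1]
c[1:] = phi[:-1] / phi[-1]

# Companion matrix: ones on the subdiagonal, -c in the last column
C = np.zeros((p, p))
C[1:, :-1] = np.eye(p - 1)
C[:, -1] = -c
roots = np.linalg.eigvals(C)

# Stationarity: all roots of Phi lie outside the unit circle
stationary = np.all(np.abs(roots) > 1.0)
```

Dividing by $-\phi_p$ only rescales the polynomial, so the eigenvalues of $C$ are exactly the roots of $\Phi(z)$, as the Cayley–Hamilton argument in the text states.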
then the characteristic polynomial is $\det(C - \lambda I) = q(\lambda)$, and so the eigenvalues of $C$ are the roots of $q$. Note that if the polynomial does not have a unit leading coefficient, then one can just divide the polynomial by that coefficient to arrive at the form of Eq. 6.9, without changing its roots. Hence the roots of any polynomial can be found by computing the eigenvalues of a companion matrix.

The AR(p) process has a characteristic polynomial of the form $\phi(z) = 1 - \phi_1 z - \cdots - \phi_p z^p$, and dividing by $-\phi_p$ gives

$$ q(z) = -\frac{\phi(z)}{\phi_p} = -\frac{1}{\phi_p} + \frac{\phi_1}{\phi_p} z + \cdots + z^p, $$

and hence the companion matrix is of the form

$$ C := \begin{pmatrix} 0 & 0 & \cdots & 0 & \frac{1}{\phi_p} \\ 1 & 0 & \cdots & 0 & -\frac{\phi_1}{\phi_p} \\ \vdots & \ddots & \ddots & \vdots & \vdots \\ 0 & \cdots & 0 & 1 & -\frac{\phi_{p-1}}{\phi_p} \end{pmatrix}. $$

2.5 Partial Autocorrelations

Autoregressive models carry a signature which allows their order, $p$, to be determined from time series data, provided the data is stationary. This signature encodes the memory in the model and is given by "partial autocorrelations." Informally, each partial autocorrelation measures the correlation of a random variable, $y_t$, with its lag, $y_{t-h}$, while controlling for intermediate lags. The formal definition of the partial autocorrelation is now given.

> Partial Autocorrelation
A partial autocorrelation at lag $h \ge 2$ is a conditional autocorrelation between a variable, $y_t$, and its $h$th lag, $y_{t-h}$, under the assumption that the values of the intermediate lags, $y_{t-1}, \ldots, y_{t-h+1}$, are controlled:

$$ \tilde{\tau}_h := \tilde{\tau}_{t,t-h} := \frac{\tilde{\gamma}_h}{\sqrt{\tilde{\gamma}_{t,h}\, \tilde{\gamma}_{t-h,h}}}, $$

where

$$ \tilde{\gamma}_h := \tilde{\gamma}_{t,t-h} := \mathbb{E}[y_t - P(y_t \mid y_{t-1}, \ldots, y_{t-h+1}),\; y_{t-h} - P(y_{t-h} \mid y_{t-1}, \ldots, y_{t-h+1})] $$

is the lag-$h$ partial autocovariance, $P(W \mid Z)$ is an orthogonal projection of $W$ onto the set $Z$, and

$$ \tilde{\gamma}_{t,h} := \mathbb{E}[(y_t - P(y_t \mid y_{t-1}, \ldots, y_{t-h+1}))^2]. $$

The partial autocorrelation function $\tilde{\tau}_h : \mathbb{N} \to [-1, 1]$ is the map $h \mapsto \tilde{\tau}_h$. The plot of $\tilde{\tau}_h$ against $h$ is referred to as the partial correlogram.
AR(p) Processes Using the property that a linear orthogonal projection $\hat{y}_t = P(y_t \mid y_{t-1}, \ldots, y_{t-h+1})$ is given by the OLS estimator as $\hat{y}_t = \phi_1 y_{t-1} + \cdots + \phi_{h-1} y_{t-h+1}$ gives the Yule–Walker equations for an AR(p) process, relating the partial autocorrelations $\tilde{T}_p := [\tilde{\tau}_1, \ldots, \tilde{\tau}_p]$ to the autocorrelations $T_p := [\tau_1, \ldots, \tau_p]$:

$$ R_p \tilde{T}_p = T_p, \qquad R_p = \begin{pmatrix} 1 & \tau_1 & \cdots & \tau_{p-1} \\ \tau_1 & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \tau_1 \\ \tau_{p-1} & \cdots & \tau_1 & 1 \end{pmatrix}. $$

For $h \le p$, we can solve for the $h$th lag partial autocorrelation by writing

$$ \tilde{\tau}_h = \frac{|R_h^*|}{|R_h|}, $$

where $|\cdot|$ is the matrix determinant, the $j$th column of $R_h^*$ satisfies $[R_h^*]_{\cdot,j} = [R_h]_{\cdot,j}$, $j \neq h$, and the $h$th column is $[R_h^*]_{\cdot,h} = T_h$. For example, the lag-1 partial autocorrelation is $\tilde{\tau}_1 = \tau_1$ and the lag-2 partial autocorrelation is

$$ \tilde{\tau}_2 = \frac{\begin{vmatrix} 1 & \tau_1 \\ \tau_1 & \tau_2 \end{vmatrix}}{\begin{vmatrix} 1 & \tau_1 \\ \tau_1 & 1 \end{vmatrix}} = \frac{\tau_2 - \tau_1^2}{1 - \tau_1^2}. \qquad (6.17) $$

We note, in particular, that the lag-2 partial autocorrelation of an AR(1) process, with autocorrelations $T_2 = [\tau_1, \tau_1^2]$, is

$$ \tilde{\tau}_2 = \frac{\tau_1^2 - \tau_1^2}{1 - \tau_1^2} = 0, $$

and this is true for all lag orders greater than the order of the AR process.

We can reason about this property from another perspective, through the partial autocovariances. The lag-2 partial autocovariance of an AR(1) process is

$$ \tilde{\gamma}_2 := \tilde{\gamma}_{t,t-2} := \mathbb{E}[y_t - \hat{y}_t,\; y_{t-2} - \hat{y}_{t-2}], $$

where $\hat{y}_t = P(y_t \mid y_{t-1})$ and $\hat{y}_{t-2} = P(y_{t-2} \mid y_{t-1})$. When $P$ is a linear orthogonal projection, we have from the property of an orthogonal projection

$$ P(W \mid Z) = \mu_W + \frac{\mathrm{Cov}(W, Z)}{\mathbb{V}[Z]}(Z - \mu_Z), $$

and for the zero mean AR(1), $\mathrm{Cov}(y_t, y_{t-1}) = \phi\, \mathbb{V}(y_{t-1})$, so the projection coefficient is $\phi\, \mathbb{V}(y_{t-1}) / \mathbb{V}(y_{t-1}) = \phi$. Hence $\hat{y}_t = \phi y_{t-1}$ and $\hat{y}_{t-2} = \phi y_{t-1}$, so that $\epsilon_t = y_t - \hat{y}_t$ and the lag-2 partial autocovariance is

$$ \tilde{\gamma}_2 = \mathbb{E}[\epsilon_t,\; y_{t-2} - \phi y_{t-1}] = 0. $$

Clearly the lag-1 partial autocovariance of an AR(1) process is $\tilde{\gamma}_1 = \mathbb{E}[y_t - \mu,\; y_{t-1} - \mu] = \gamma_1 = \phi \gamma_0$.

2.6 Maximum Likelihood Estimation

The exact likelihood when the density of the data is independent of $(\phi, \sigma_n^2)$ is

$$ L(y, x; \phi, \sigma_n^2) = \prod_t f_{Y_t|X_t}(y_t|x_t; \phi, \sigma_n^2)\, f_{X_t}(x_t). $$
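The vanishing lag-2 partial autocorrelation of an AR(1) can be checked numerically: for an AR(1) with coefficient $\phi$ the autocorrelations are $\tau_j = \phi^j$, and both the determinant formula (6.17) and the Yule–Walker linear system give zero (the value $\phi = 0.6$ is an arbitrary choice):

```python
import numpy as np

# Population autocorrelations of an AR(1): tau_j = phi^j
phi = 0.6
tau1, tau2 = phi, phi ** 2

# Lag-2 partial autocorrelation via the determinant formula (6.17)
pacf2 = (tau2 - tau1 ** 2) / (1 - tau1 ** 2)

# The same quantity as the last coefficient of the Yule-Walker system R_2 x = T_2
R2 = np.array([[1.0, tau1], [tau1, 1.0]])
sol = np.linalg.solve(R2, np.array([tau1, tau2]))
```

Both routes give exactly zero, which is the AR(1) "signature": the partial correlogram cuts off after lag $p$.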
Under this assumption, the exact likelihood is proportional to the conditional likelihood function:

$$ L(y, x; \phi, \sigma_n^2) \propto L(y|x; \phi, \sigma_n^2) = \prod_t f_{Y_t|X_t}(y_t|x_t; \phi, \sigma_n^2) = (2\pi\sigma_n^2)^{-T/2} \exp\Big\{-\frac{1}{2\sigma_n^2} \sum_{t=1}^{T} (y_t - \phi^T x_t)^2\Big\}. $$

In many cases such an assumption about the independence of the density of the data and the parameters is not warranted. For example, consider the zero mean AR(1) with unknown noise variance:

$$ y_t = \phi y_{t-1} + \epsilon_t, \quad \epsilon_t \sim N(0, \sigma_n^2), \qquad Y_t | Y_{t-1} \sim N(\phi y_{t-1}, \sigma_n^2), \qquad Y_1 \sim N\Big(0, \frac{\sigma_n^2}{1 - \phi^2}\Big). $$

The exact likelihood is

$$ L(x; \phi, \sigma_n^2) = \prod_{t=2}^{T} f_{Y_t|Y_{t-1}}(y_t|y_{t-1}; \phi, \sigma_n^2)\, f_{Y_1}(y_1; \phi, \sigma_n^2) = \Big(2\pi \frac{\sigma_n^2}{1 - \phi^2}\Big)^{-1/2} \exp\Big\{-\frac{1 - \phi^2}{2\sigma_n^2} y_1^2\Big\} (2\pi\sigma_n^2)^{-\frac{T-1}{2}} \exp\Big\{-\frac{1}{2\sigma_n^2} \sum_{t=2}^{T} (y_t - \phi y_{t-1})^2\Big\}, $$

where we made use of the moments of $Y_t$, a result which is derived in Sect. 2.8. Despite the dependence of the density of the data on the parameters, there may be practically little advantage of using exact maximum likelihood against the conditional likelihood method (i.e., dropping the $f_{Y_1}(y_1; \phi, \sigma_n^2)$ term). This turns out to be the case for linear models. Maximizing the conditional likelihood is equivalent to ordinary least squares estimation.

2.7 Heteroscedasticity

The AR model assumes that the noise is i.i.d. This may be an overly optimistic assumption, which can be relaxed by assuming that the noise is time dependent. Treating the noise as time dependent is exemplified by a heteroscedastic AR(p) model

$$ y_t = \mu + \sum_{i=1}^{p} \phi_i y_{t-i} + \epsilon_t, \qquad \epsilon_t \sim N(0, \sigma_{n,t}^2). $$

There are many tests for heteroscedasticity in time series models and one of them, the ARCH test, is summarized in Table 6.3. The estimation procedure for heteroscedastic models is more complex and involves two steps: (i) estimation of the errors from the maximum likelihood function which treats the errors as independent, and (ii) estimation of model parameters under a more general maximum
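The equivalence of conditional maximum likelihood and OLS for the zero mean AR(1) can be checked by simulation: regressing $y_t$ on $y_{t-1}$ recovers $\phi$ (the true value $\phi = 0.7$ and sample size below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate a zero mean AR(1) and estimate phi by conditional least squares,
# which maximizes the conditional likelihood described above
phi_true, n = 0.7, 100000
eps = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + eps[t]

# OLS of y_t on y_{t-1}: phi_hat = sum(y_t y_{t-1}) / sum(y_{t-1}^2)
phi_hat = (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])
```

The exact MLE would add the $f_{Y_1}$ term for the first observation; with a long sample its influence is negligible, which is the "practically little advantage" point made in the text.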
Note that such a procedure could be generalized further to account for correlation in the errors but requires the inversion of the covariance matrix, which is computationally intractable with large time series. The conditional likelihood is L(y|X; φ, σn2 ) = fYt |Xt (yt |xt ; φ, σn2 ) 1 = (2π )−T /2 det (D)−1/2 exp{− (y − φ T X)T D −1 (y − φ T X)}, 2 2 is the diagonal covariance matrix and X ∈ RT ×p is the data where Dtt = σn,t matrix defined as [X]t = xt . The advantage of this approach is its relative simplicity. The treatment of noise variance as time dependent in finance has long been addressed by more sophisticated econometrics models and the approach presented here brings AR models into line with the specifications of more realistic models. On the other hand, the use of the sample variance of the residuals is only appropriate when the sample size is sufficient. In practice, this translates into the requirement for a sufficiently large historical period before a prediction can be made. Another disadvantage is that the approach does not explicitly define the relationship between the variances. We shall briefly revisit heteroscedastic models and explore a model for regressing the conditional variance on previous conditional variances in Sect. 2.9. 2.8 Moving Average Processes The Wold representation theorem (a.k.a. Wold decomposition) states that every covariance stationary time series can be written as the sum of two time series, one deterministic and one stochastic. In effect, we have already considered the deterministic component when choosing an AR process.4 The stochastic component can be represented as a “moving average process” or MA(q) process which expresses yt as a linear combination of current and q past disturbances. 
Its definition is as follows:

> MA(q) Process. The qth order moving average process is the linear combination of the white noise process {ε_{t−i}}_{i=0}^q, for all t:
>
> y_t = μ + Σ_{i=1}^q θ_i ε_{t−i} + ε_t.

4 This is an overly simplistic statement because the AR(1) process can be expressed as a MA(∞) process and vice versa.

It turns out that y_{t−1} depends on {ε_{t−1}, ε_{t−2}, . . . }, but not ε_t, and hence γ_{t,t−2} = 0. It should be apparent that this property holds even when P is a non-linear projection provided that the errors are independent (but not necessarily identical). Another brief point of discussion is that an AR(1) process can be rewritten as a MA(∞) process. Suppose that the AR(1) process has a mean μ and the variance of the noise is σ_n²; then by a binomial expansion of the operator (1 − φL)^{−1} we have

y_t = μ/(1 − φ) + Σ_{j=0}^∞ φ^j ε_{t−j},

where the moments can be easily found and are

E[y_t] = μ/(1 − φ),
V[y_t] = Σ_{j=0}^∞ φ^{2j} E[ε_{t−j}²] = σ_n² Σ_{j=0}^∞ φ^{2j} = σ_n²/(1 − φ²).

AR and MA models are important components of more complex models which are known as ARMA or, more generally, ARIMA models. The expression of a pattern as a linear combination of past observations and past innovations turns out to be more flexible in time series modeling than any single component. These are by no means the only useful techniques, and we briefly turn to another technique which smooths out shorter-term fluctuations and consequently boosts the signal to noise ratio in longer term predictions.

2.9 GARCH

Recall from Sect. 2.7 that heteroscedastic time series models treat the error as time dependent. A popular parametric, linear, and heteroscedastic method used in financial econometrics is the Generalized Autoregressive Conditional Heteroscedastic (GARCH) model (Bollerslev and Taylor).
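The MA(∞) moments of an AR(1) can be checked empirically: the unconditional variance of a simulated zero-mean AR(1) should approach σ_n²/(1 − φ²). A quick sketch (values chosen for illustration):

```python
import random

# Check V[y_t] = sigma_n^2 / (1 - phi^2) for a zero-mean AR(1).
random.seed(0)
phi, sigma_n, n = 0.5, 1.0, 200_000
y, samples = 0.0, []
for _ in range(n):
    y = phi * y + random.gauss(0.0, sigma_n)
    samples.append(y)

mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n
theory = sigma_n ** 2 / (1 - phi ** 2)  # = 4/3 here
assert abs(var - theory) < 0.05
```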
A GARCH(p,q) model specifies that the conditional variance (i.e., volatility) is given by an ARMA(p,q) model—there are p lagged conditional variances and q lagged squared noise terms:

σ_t² := E[ε_t² | Ω_{t−1}] = α_0 + Σ_{i=1}^q α_i ε_{t−i}² + Σ_{i=1}^p β_i σ_{t−i}².

This model gives an explicit relationship between the current volatility and previous volatilities. Such a relationship is useful for predicting volatility in the model, with obvious benefits for volatility modeling in trading and risk management. This simple relationship enables us to characterize the behavior of the model, as we shall see shortly. A necessary condition for model stationarity is the following constraint:

Σ_{i=1}^q α_i + Σ_{i=1}^p β_i < 1.

When the model is stationary, the long-run volatility converges to the unconditional variance of ε_t:

σ² := var(ε_t) = α_0 / (1 − Σ_{i=1}^q α_i − Σ_{i=1}^p β_i).

To see this, let us consider the l-step ahead forecast using a GARCH(1,1) model:

σ_t² = α_0 + α_1 ε_{t−1}² + β_1 σ_{t−1}²,
σ̂_{t+1}² = α_0 + α_1 E[ε_t² | Ω_{t−1}] + β_1 σ_t² = σ² + (α_1 + β_1)(σ_t² − σ²),
σ̂_{t+2}² = α_0 + α_1 E[ε_{t+1}² | Ω_{t−1}] + β_1 E[σ_{t+1}² | Ω_{t−1}] = σ² + (α_1 + β_1)²(σ_t² − σ²),
. . .
σ̂_{t+l}² = α_0 + α_1 E[ε_{t+l−1}² | Ω_{t−1}] + β_1 E[σ_{t+l−1}² | Ω_{t−1}] = σ² + (α_1 + β_1)^l (σ_t² − σ²),

where we have substituted the unconditional variance, σ² = α_0/(1 − α_1 − β_1). From the above we can see that σ̂_{t+l}² → σ² as l → ∞, so as the forecast horizon goes to infinity, the variance forecast approaches the unconditional variance of ε_t. From the l-step ahead variance forecast, we can see that (α_1 + β_1) determines how quickly the variance forecast converges to the unconditional variance. If the variance sharply rises during a crisis, the number of periods, K, until it is halfway between the first forecast and the unconditional variance satisfies (α_1 + β_1)^K = 0.5, so the half-life⁵ is given by K = ln(0.5)/ln(α_1 + β_1).

5 The half-life is the lag k at which its coefficient is equal to a half.
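The mean-reverting forecast recursion and the half-life formula can be sketched in a few lines. The parameter values below are hypothetical (chosen so that α_1 + β_1 = 0.97, matching the numerical example that follows):

```python
import math

# l-step ahead GARCH(1,1) variance forecast:
# sigma2_hat(t+l) = sigma2_bar + (alpha1 + beta1)**l * (sigma2_t - sigma2_bar)
alpha0, alpha1, beta1 = 0.02, 0.07, 0.90    # hypothetical, alpha1 + beta1 = 0.97
sigma2_bar = alpha0 / (1 - alpha1 - beta1)  # unconditional variance
sigma2_t = 3.0 * sigma2_bar                 # crisis: variance well above long-run

def forecast(l):
    """Mean-reverting l-step ahead conditional variance forecast."""
    return sigma2_bar + (alpha1 + beta1) ** l * (sigma2_t - sigma2_bar)

half_life = math.log(0.5) / math.log(alpha1 + beta1)
assert 22 < half_life < 23  # roughly 23 days when steps are daily

# At ~23 steps the forecast is about halfway back to sigma2_bar,
# and for long horizons it converges to the unconditional variance.
ratio = (forecast(23) - sigma2_bar) / (sigma2_t - sigma2_bar)
assert abs(ratio - 0.5) < 0.01
assert abs(forecast(1000) - sigma2_bar) < 1e-9
```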
For example, if (α_1 + β_1) = 0.97 and steps are measured in days, the half-life is approximately 23 days.

2.10 Exponential Smoothing

Exponential smoothing is a type of forecasting or filtering method that exponentially decreases the weight of past and current observations to give smoothed predictions ỹ_{t+1}. It requires a single parameter, α, also called the smoothing factor or smoothing coefficient. This parameter controls the rate at which the influence of observations at prior time steps decays exponentially. α is often set to a value between 0 and 1. Large values mean that the model pays attention mainly to the most recent past observations, whereas smaller values mean more of the history is taken into account when making a prediction. Exponential smoothing takes the forecast for the previous period, ỹ_t, and adjusts it with the forecast error, y_t − ỹ_t. The forecast for the next period becomes

ỹ_{t+1} = ỹ_t + α(y_t − ỹ_t),  or equivalently  ỹ_{t+1} = αy_t + (1 − α)ỹ_t.

Writing this as a geometrically decaying autoregressive series back to the first observation:

ỹ_{t+1} = αy_t + α(1 − α)y_{t−1} + α(1 − α)²y_{t−2} + α(1 − α)³y_{t−3} + · · · + α(1 − α)^{t−1}y_1 + (1 − α)^t ỹ_1,

hence we observe that smoothing introduces a long-term model of the entire observed data, not just a sub-sequence used for prediction in an AR model, for example. For geometrically decaying models, it is useful to characterize them by the half-life—the lag k at which the coefficient is equal to a half:

α(1 − α)^k = 1/2,  or  k = −ln(2α)/ln(1 − α).

The optimal amount of smoothing, α̂, is found by maximizing a likelihood function.

3 Fitting Time Series Models: The Box–Jenkins Approach

While maximum likelihood estimation is the approach of choice for fitting the ARMA models described in this chapter, there are many considerations beyond fitting the model parameters.
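The error-correction recursion and the geometrically decaying weighted sum are the same forecast; a short sketch verifies this, initializing the forecast at the first observation (ỹ_1 = y_1, one common convention; the data values are illustrative):

```python
# Simple exponential smoothing: y~_{t+1} = alpha*y_t + (1 - alpha)*y~_t,
# equivalently the error-correction update y~_{t+1} = y~_t + alpha*(y_t - y~_t).
alpha = 0.3
ys = [10.0, 12.0, 11.0, 13.0, 12.5, 14.0]

s_rec = ys[0]  # initialize y~_1 with the first observation
for y in ys[1:]:
    s_rec = s_rec + alpha * (y - s_rec)

# The same forecast written as a geometrically decaying weighted sum:
# weight alpha*(1-alpha)^j on the j-th most recent observation,
# and (1-alpha)^(t) on the initial forecast.
s_geo = (1 - alpha) ** (len(ys) - 1) * ys[0]
for j, y in enumerate(reversed(ys[1:])):
    s_geo += alpha * (1 - alpha) ** j * y

assert abs(s_rec - s_geo) < 1e-9
assert 12.4 < s_rec < 12.5
```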
In particular, we know from earlier chapters that the bias–variance tradeoff is a central consideration which is not addressed in maximum likelihood estimation without adding a penalty term. Machine learning achieves generalized performance through optimizing the bias–variance tradeoff, with many of the parameters being optimized through cross-validation. This is both a blessing and a curse. On the one hand, the heavy reliance on numerical optimization provides substantial flexibility, but at the expense of computational cost and, often-times, under-exploitation of structure in the time series. There are also potential instabilities whereby small changes in hyperparameters lead to substantial differences in model performance. If one were able to restrict the class of functions represented by the model, using knowledge of the relationship and dependencies between variables, then one could in principle reduce the complexity and improve the stability of the fitting procedure. For some 75 years, econometricians and statisticians have approached the problem of time series modeling with ARIMA in a simple and intuitive way. They follow a three-step process to fit and assess AR(p) models. This process is referred to as the Box–Jenkins approach or framework. The three basic steps of the Box–Jenkins modeling approach are:
a. (I)dentification—determining the order of the model (a.k.a. model selection);
b. (E)stimation—estimation of model parameters; and
c. (D)iagnostic checking—evaluating the fit of the model.
This modeling approach is iterative and parsimonious—it favors models with fewer parameters.

3.1 Stationarity

Before the order of the model can be determined, the time series must be tested for stationarity. A standard statistical test for covariance stationarity is the Augmented Dickey–Fuller (ADF) test, which often accounts for a (c)onstant drift and a (t)ime trend.
The ADF test is a unit root test—the Null hypothesis is that the characteristic polynomial exhibits at least one unit root and hence the data is non-stationary. If the Null can be rejected at a confidence level, α, then the data is stationary. Attempting to fit a time series model to non-stationary data will result in dubious interpretations of the estimated partial autocorrelation function and poor predictions, and should therefore be avoided.

3.2 Transformation to Ensure Stationarity

Any trending time series process is non-stationary. Before we can fit an AR(p) model, it is first necessary to transform the original time series into a stationary form. In some instances, it may be possible to simply detrend the time series (a transformation which works in a limited number of cases). However, this is rarely foolproof. To the potential detriment of the predictive accuracy of the model, we can however systematically difference the original time series one or more times until we arrive at a stationary time series. To gain insight, let us consider a simple example. Suppose we are given the following linear model with a time trend of the form:

y_t = α + βt + ε_t,  ε_t ∼ N(0, σ²).

We first observe that the mean of y_t is time dependent, E[y_t] = α + βt, and thus this model is non-stationary. Instead we can difference the process to give

y_t − y_{t−1} = (α + βt + ε_t) − (α + β(t − 1) + ε_{t−1}) = β + ε_t − ε_{t−1},

and hence the mean and the variance of this difference process are constant and the difference process is stationary:

E[y_t − y_{t−1}] = β,  E[(y_t − y_{t−1} − β)²] = 2σ².

Any difference process can be written as an ARIMA(p,d,q) process, where here d = 1 is the order of differencing needed to achieve stationarity. There is, in general, no guarantee that first order differencing yields a stationary difference process.
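The moments of the differenced trend process can be verified by simulation; with d_t = β + ε_t − ε_{t−1} the sample mean should be near β and the sample variance near 2σ² (parameter values below are illustrative):

```python
import random

# Differencing a linear-trend process y_t = a + b*t + e_t yields a
# stationary series with mean b and variance 2*sigma**2.
random.seed(1)
a, b, sigma, T = 2.0, 0.5, 1.0, 100_000
y = [a + b * t + random.gauss(0.0, sigma) for t in range(T)]
d = [y[t] - y[t - 1] for t in range(1, T)]  # first difference

mean_d = sum(d) / len(d)
var_d = sum((x - mean_d) ** 2 for x in d) / len(d)
assert abs(mean_d - b) < 0.02
assert abs(var_d - 2 * sigma ** 2) < 0.1
```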
One can apply higher order differencing, d > 1, to the detriment of recovering the original signal, but one must often resort to non-stationary time series methods such as Kalman filters, Markov-switching models, and advanced neural networks for sequential data covered later in this part of the book.

3.3 Identification

A common approach for determining the order of an AR(p) from a stationary time series is to estimate the partial autocorrelations and determine the largest lag which is significant. Figure 6.1 shows the partial correlogram, the plot of the estimated partial autocorrelations against the lag.

[Fig. 6.1 The partial correlogram: the estimated partial autocorrelations plotted against the lag. The solid horizontal lines define the 95% confidence interval. All but the first lag lie approximately within the envelope, so the order of the AR(p) model may be determined as p = 1.]

The solid horizontal lines define the 95% confidence interval, which can be constructed for each coefficient using ±1.96 × 1/√T, where T is the number of observations. Note that we have assumed that T is sufficiently large that the autocorrelation coefficients are normally distributed with zero mean and standard error 1/√T.⁶ We observe that all but the first lag are approximately within the envelope, and hence we may determine the order of the AR(p) model as p = 1. The properties of the partial autocorrelation and autocorrelation plots reveal the orders of the AR and MA models. In Fig. 6.1, there is an immediate cut-off in the partial autocorrelation (pacf) plot after 1 lag, indicating an AR(1) process. Conversely, the location of a sharp cut-off in the estimated autocorrelation function determines the order, q, of a MA process. It is often assumed that the data generation process is a combination of an AR and MA model—referred to as an ARMA(p,q) model.
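Computing a sample autocorrelation and the ±1.96/√T significance band is straightforward; a sketch on simulated AR(1) data (φ and T are illustrative):

```python
import random

# Sample autocorrelation and the +/- 1.96/sqrt(T) significance band.
random.seed(3)
phi, T = 0.6, 2000
y = [0.0]
for _ in range(T):
    y.append(phi * y[-1] + random.gauss(0.0, 1.0))
y = y[1:]
mean = sum(y) / T

def acf(l):
    """Sample autocorrelation at lag l."""
    c0 = sum((v - mean) ** 2 for v in y)
    cl = sum((y[t] - mean) * (y[t - l] - mean) for t in range(l, T))
    return cl / c0

band = 1.96 / T ** 0.5
# Lag-1 autocorrelation of an AR(1) is close to phi and clearly significant.
assert abs(acf(1) - phi) < 0.1
assert acf(1) > band
```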
Information Criterion

While the partial autocorrelation function is useful for determining the AR(p) model order, in many cases there is an undesirable element of subjectivity in the choice. It is often preferable to use the Akaike Information Criterion (AIC) to measure the quality of fit. The AIC is given by

AIC = ln(σ̂²) + 2k/T,

where σ̂² is the residual variance (the residual sum of squares divided by the number of observations T) and k = p + q + 1 is the total number of parameters estimated. This criterion expresses a bias–variance tradeoff between the first term, the quality of fit, and the second term, a penalty function proportional to the number of parameters. The goal is to select the model which minimizes the AIC by first using maximum likelihood estimation and then adding the penalty term. Adding more parameters to the model reduces the residuals but increases the right-hand term; thus the AIC favors the best fit with the fewest number of parameters. On the surface, the overall approach has many similarities with regularization in machine learning, where the loss function is penalized by a LASSO penalty (L1 norm of the parameters) or a ridge penalty (L2 norm of the parameters). However, we emphasize that the AIC is estimated post-hoc, once the maximum likelihood function is evaluated, whereas in machine learning models, the penalized loss function is directly minimized.

6 This assumption is admitted by the Central Limit Theorem.

3.4 Model Diagnostics

Once the model is fitted we must assess whether the residual exhibits autocorrelation, suggesting the model is underfitting. The residual of a fitted time series model should be white noise. To test for autocorrelation in the residual, Box and Pierce propose the Portmanteau statistic

Q*(m) = T Σ_{l=1}^m ρ̂_l²,

as a test statistic for the Null hypothesis H₀: ρ_1 = · · · = ρ_m = 0 against the alternative hypothesis H_a: ρ_i ≠ 0 for some i ∈ {1, . . . , m}.
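A small sketch of AIC-based order selection: on AR(1)-generated data, an AR(1) fit should achieve a lower AIC than a constant-mean (AR(0)) fit. The parameter count k used below is one convention among several; the data and values are illustrative:

```python
import math
import random

# Compare AIC = ln(sigma_hat^2) + 2k/T for AR(0) vs AR(1) fits.
random.seed(11)
phi, T = 0.7, 3000
y = [0.0]
for _ in range(T):
    y.append(phi * y[-1] + random.gauss(0.0, 1.0))
y = y[1:]

# AR(0): residuals are deviations from the sample mean; k = 1.
mu = sum(y) / T
rss0 = sum((v - mu) ** 2 for v in y)
aic0 = math.log(rss0 / T) + 2 * 1 / T

# AR(1): closed-form OLS slope; k = 2 (slope and noise variance).
num = sum(y[t] * y[t - 1] for t in range(1, T))
den = sum(y[t - 1] ** 2 for t in range(1, T))
phi_hat = num / den
rss1 = sum((y[t] - phi_hat * y[t - 1]) ** 2 for t in range(1, T))
aic1 = math.log(rss1 / (T - 1)) + 2 * 2 / T

# The correctly specified AR(1) wins despite its extra parameter.
assert aic1 < aic0
```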
ρ̂_i are the sample autocorrelations of the residual. The Box–Pierce statistic follows an asymptotically chi-squared distribution with m degrees of freedom. The closely related Ljung–Box test statistic increases the power of the test in finite samples:

Q(m) = T(T + 2) Σ_{l=1}^m ρ̂_l²/(T − l).

This statistic also follows asymptotically a chi-squared distribution with m degrees of freedom. The decision rule is to reject H₀ if Q(m) > χ_α², where χ_α² denotes the 100(1 − α)th percentile of a chi-squared distribution with m degrees of freedom and α is the significance level for rejecting H₀. For an AR(p) model, the Ljung–Box statistic follows asymptotically a chi-squared distribution with m − p degrees of freedom.

[Fig. 6.2 The results of applying a Ljung–Box test to the residuals of an AR(p) model. (Top) The standardized residuals are shown against time. (Center) The estimated ACF of the residuals is shown against the lag index. (Bottom) The p-values of the Ljung–Box test statistic are shown against the lag index.]

Figure 6.2 shows that if the maximum lag in the model is sufficiently large, then the p-value is small and the Null is rejected in favor of the alternative hypothesis. Failing the test requires repeating the Box–Jenkins approach until the model no longer under-fits. The only mild safe-guard against over-fitting is the use of AIC for model selection, but in general there is no strong guarantee of avoiding over-fitting, as the performance of the model is not assessed out-of-sample in this framework.
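The Ljung–Box statistic is easy to compute directly; a sketch comparing strongly autocorrelated "residuals" against white noise (sample sizes and the AR coefficient are illustrative):

```python
import random

def ljung_box(res, m):
    """Ljung-Box statistic Q(m) = T(T+2) * sum_l rho_l^2 / (T - l)."""
    T = len(res)
    mean = sum(res) / T
    c0 = sum((r - mean) ** 2 for r in res)
    q = 0.0
    for l in range(1, m + 1):
        rho = sum((res[t] - mean) * (res[t - l] - mean)
                  for t in range(l, T)) / c0
        q += rho ** 2 / (T - l)
    return T * (T + 2) * q

random.seed(5)
white = [random.gauss(0.0, 1.0) for _ in range(500)]
ar = [0.0]
for _ in range(500):
    ar.append(0.9 * ar[-1] + random.gauss(0.0, 1.0))

# Autocorrelated residuals give a huge Q; white noise stays far below it
# (the 5% chi-squared critical value at m = 10 is about 18.3).
assert ljung_box(ar[1:], 10) > 100
assert ljung_box(white, 10) < 100
```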
Assessing the bias–variance tradeoff by cross-validation has arisen as the better approach for generalizing model performance. Therefore, any model that has been fitted under a Box–Jenkins approach needs to be assessed out-of-sample by time series cross-validation—the topic of the next section. There are many diagnostic tests which have been developed for time series modeling which we have not discussed here. A small subset has been listed in Table 6.3. The reader should refer to a standard financial econometrics textbook such as Tsay (2010) for further details of these tests and elaboration on their application to linear models.

4 Prediction

While the Box–Jenkins approach is useful in identifying, fitting, and critiquing models, there is of course no guarantee that such a model shall exhibit strong predictive properties. We seek to predict the value of y_{t+h} given the information set Ω_t up to and including time t. Producing a forecast is simply a matter of taking the conditional expectation of the data under the model. The h-step ahead forecast from an AR(p) model is given by

ŷ_{t+h} = E[y_{t+h} | Ω_t] = Σ_{i=1}^p φ_i ŷ_{t+h−i},

where ŷ_{t+h} = y_{t+h} for h ≤ 0 and E[ε_{t+h} | Ω_t] = 0 for h > 0. Note that conditional expectations of observed variables are not equal to the unconditional expectations. In particular, E[ε_{t+h} | Ω_t] = ε_{t+h} for h ≤ 0, whereas E[ε_{t+h}] = 0. The quality of the forecast is measured over the forecasting horizon by either the MSE or the MAE.

4.1 Predicting Events

If the output is categorical, rather than continuous, then the ARMA model is used to predict the log-odds ratio of the binary event rather than the conditional expectation of the response. This is analogous to using a logit function as a link in logistic regression. Other general metrics are also used to assess model accuracy, such as a confusion matrix, the F1-score, and Receiver Operating Characteristic (ROC) curves.
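The h-step ahead AR forecast is obtained by iterating the recursion, seeding with observed values for h ≤ 0. A minimal sketch for an AR(2) (the coefficients and history are illustrative):

```python
# Iterate y_hat(t+h) = phi1*y_hat(t+h-1) + phi2*y_hat(t+h-2),
# using observed values wherever h <= 0.
phi = [0.5, 0.2]            # [phi1, phi2], illustrative coefficients
history = [1.0, 0.8]        # ..., y_{t-1}, y_t

def ar_forecast(history, phi, h):
    """Return the h forecasts y_hat(t+1), ..., y_hat(t+h)."""
    path = list(history)
    for _ in range(h):
        path.append(sum(p * path[-i - 1] for i, p in enumerate(phi)))
    return path[len(history):]

f = ar_forecast(history, phi, 3)
assert abs(f[0] - 0.6) < 1e-12   # 0.5*0.8 + 0.2*1.0
assert abs(f[1] - 0.46) < 1e-12  # 0.5*0.6 + 0.2*0.8
```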
These metrics are not specific to time series data and could be applied to cross-sectional models too. The following example will illustrate a binary event prediction problem using time series data.

Example 6.1 Predicting Binary Events

Suppose we have conditionally i.i.d. Bernoulli r.v.s X_t with p_t := P(X_t = 1 | Ω_t) representing a binary event and conditional moments given by
• E[X_t | Ω_t] = 0 · (1 − p_t) + 1 · p_t = p_t
• V[X_t | Ω_t] = p_t(1 − p_t)

The log-odds ratio shall be assumed to follow an ARMA model,

ln(p_t/(1 − p_t)) = φ^{−1}(L)(μ + θ(L)ε_t),

and the category of the model output is determined by a threshold, e.g. p_t ≥ 0.5 corresponds to a positive event. If the number of out-of-sample observations is 24, we can compare the prediction with the observed event and construct a truth table (a.k.a. confusion matrix) as illustrated in Table 6.1.

Table 6.1 The confusion matrix for the above example

              Predicted
              1     0     Sum
  Actual 1    12    2     14
         0    8     2     10
  Sum         20    4     24

In this example, the accuracy is (12 + 2)/24—the ratio of the sum of the diagonal terms to the set size. Of special interest are the type I (false positive) and type II (false negative) errors, shown by the off-diagonal elements as 8 and 2, respectively. In practice, careful consideration must be given as to whether there is equal tolerance for type I and type II errors. The significance of the classifier can be estimated from a chi-squared statistic with one degree of freedom under the Null hypothesis that the classifier is white noise. In general, chi-squared testing is used to determine whether two variables are independent of one another.
In this case, if the chi-squared statistic is above a given critical threshold value, associated with a significance level, then we can say that the classifier is not white noise. Let us label the elements of the confusion matrix as in Table 6.2 below. The column and row sums of the confusion matrix and the total number of test samples, m, are also shown.

Table 6.2 The confusion matrix of a binary classification, together with the column and row sums and the total number of test samples, m

              Predicted
              1      0      Sum
  Actual 1    m11    m12    m1,
         0    m21    m22    m2,
  Sum         m,1    m,2    m

The chi-squared statistic with one degree of freedom is given by the squared difference of the expected result (i.e., a white noise model where the prediction is independent of the observations) and the model prediction, Ŷ, relative to the expected result. When normalized by the number of observations, each element of the confusion matrix is the joint probability [P(Y, Ŷ)]_{ij}. Under a white noise model, the observed outcome, Y, and the predicted outcome, Ŷ, are independent and so [P(Y, Ŷ)]_{ij} = [P(Y)]_i [P(Ŷ)]_j, which is the ith row sum, m_{i,}, multiplied by the jth column sum, m_{,j}, divided by m. Since m_{ij} is based on the model prediction, the chi-squared statistic is thus

χ² = Σ_{i=1}^2 Σ_{j=1}^2 (m_{ij} − m_{i,}m_{,j}/m)² / (m_{i,}m_{,j}/m).

Returning to the example above, the chi-squared statistic is

χ² = (12 − (14 × 20)/24)²/((14 × 20)/24) + (2 − (14 × 4)/24)²/((14 × 4)/24) + (8 − (10 × 20)/24)²/((10 × 20)/24) + (2 − (10 × 4)/24)²/((10 × 4)/24) ≈ 0.137.

This value is far below the threshold value of 6.635 for a chi-squared statistic with one degree of freedom to be significant. Thus we cannot reject the Null hypothesis: the model's predictions are statistically indistinguishable from white noise. The example classification model shown above used a threshold of p_t ≥ 0.5 to classify an event as positive. This choice of threshold is intuitive but arbitrary.
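Recomputing the statistic directly from the Table 6.1 counts:

```python
# Chi-squared independence test on the example confusion matrix:
# observed counts vs the white-noise expectation m_i. * m_.j / m.
obs = [[12, 2],
       [8, 2]]
m = sum(sum(row) for row in obs)                   # 24
row = [sum(r) for r in obs]                        # [14, 10]
col = [sum(r[j] for r in obs) for j in range(2)]   # [20, 4]

chi2 = sum((obs[i][j] - row[i] * col[j] / m) ** 2 / (row[i] * col[j] / m)
           for i in range(2) for j in range(2))
accuracy = (obs[0][0] + obs[1][1]) / m

assert abs(accuracy - 14 / 24) < 1e-12
assert abs(chi2 - 0.13714) < 1e-4  # well below the 6.635 critical value
assert chi2 < 6.635                # cannot reject the white-noise Null
```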
How can we measure the performance of a classifier for a range of thresholds? A ROC curve contains information about all possible thresholds. The ROC curve plots true positive rates against false positive rates, where these terms are defined as follows:
– True Positive Rate (TPR) is TP/(TP + FN): the fraction of positive samples which the classifier correctly identified. This is also known as Recall or Sensitivity. Using the confusion matrix in Table 6.1, the TPR = 12/(12 + 2) = 6/7.
– False Positive Rate (FPR) is FP/(FP + TN): the fraction of negative samples which the classifier misidentified as positive. In the example confusion matrix, the FPR = 8/(8 + 2) = 4/5.
– Precision is TP/(TP + FP): the fraction of samples that were positive from the group that the classifier predicted to be positive. From the example confusion matrix, the precision is 12/(12 + 8) = 3/5.

[Fig. 6.3 The ROC curve for an example model, shown by the green line]

Each point in a ROC curve is a (TPR, FPR) pair for a particular choice of the threshold in the classifier. The straight dashed black line in Fig. 6.3 represents a random model. The green line shows the ROC curve of the model—importantly, it should always be above the dashed line. The perfect model would exhibit a TPR of unity for all FPRs, so that there is no area above the curve. The advantage of this performance measure is that it is robust to class imbalance, e.g. rare positive events. This is not true of classification accuracy, which leads to misinterpretation of the quality of the fit when the data is imbalanced. For example, a constant model Ŷ = f(X) = 1 would be x% accurate if the data consists of x% positive events. Additional related metrics can be derived. Common ones include the Area Under the Curve (AUC), which is the area under the green line in Fig. 6.3. The F1-score is the harmonic mean of the precision and recall and is also frequently used.
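The classification metrics above can be computed directly from the example confusion matrix (TP = 12, FN = 2, FP = 8, TN = 2):

```python
# Classification metrics from the example confusion matrix in Table 6.1.
tp, fn, fp, tn = 12, 2, 8, 2

tpr = tp / (tp + fn)        # recall / sensitivity = 6/7
fpr = fp / (fp + tn)        # fraction of negatives misclassified = 4/5
precision = tp / (tp + fp)  # = 3/5
f1 = 2 * precision * tpr / (precision + tpr)  # harmonic mean

assert abs(tpr - 6 / 7) < 1e-12
assert abs(fpr - 4 / 5) < 1e-12
assert abs(precision - 3 / 5) < 1e-12
assert abs(f1 - 0.70588) < 1e-4
```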
The F1-score reaches its best value at unity and worst at zero and is given by F1 = 2 · precision · recall/(precision + recall). From the example above, F1 = (2 × 3/5 × 6/7)/(3/5 + 6/7) ≈ 0.706.

4.2 Time Series Cross-Validation

Cross-validation—the method of hyperparameter tuning by rotating through K folds (or subsets) of training-test data—differs for time series data. In prediction models over time series data, no future observations can be used in the training set. Instead, a sliding window must be used to train and predict out-of-sample over multiple repetitions to allow for parameter tuning, as illustrated in Fig. 6.4. One frequent challenge is whether to fix the length of the window or allow it to "telescope" by including the ever extending history of observations as the window is "walked forward." In general, the latter has the advantage of including more observations in the training set but can lead to difficulty in interpreting the confidence of the parameters, due to the loss of control of the sample size.

[Fig. 6.4 Time series cross-validation, also referred to as "walk forward optimization," is used instead of standard cross-validation for cross-sectional data to preserve the ordering of observations in time series data. The historical period is split into an in-sample period (verification: tuning hyperparameters) and an out-of-sample period (testing: assessing performance), aggregated over the test periods. This experimental design avoids look-ahead bias in the fitted model, which occurs when one or more observations in the training set are from future data.]

5 Principal Component Analysis

The final section in this chapter approaches data modeling from quite a different perspective, with the goal being to reduce the dimension of multivariate time series data. The approach is widely used in finance, especially when the dimensionality of the data presents barriers to computational tractability or practical risk management and trading challenges such as hedging exposure to market risk factors.
For example, it may be advantageous to monitor a few risk factors in a large portfolio rather than each instrument. Moreover, such factors should provide economic insight into the behavior of the financial markets and be actionable from an investment management perspective.

Formally, let {y_i}_{i=1}^N be a set of N observation vectors, each of dimension n. We assume that n ≤ N. Let Y ∈ R^{n×N} be a matrix whose columns are {y_i}_{i=1}^N:

Y = [y_1 · · · y_N].

The element-wise average of the N observations is an n-dimensional signal which may be written as

ȳ = (1/N) Σ_{i=1}^N y_i = (1/N) Y 1_N,

where 1_N ∈ R^{N×1} is a column vector of all ones. Denote by Y_0 the matrix whose columns are the demeaned observations (we center each observation y_i by subtracting ȳ from it):

Y_0 = Y − ȳ 1_Nᵀ.

Projection. A linear projection from R^n to R^m is a linear transformation of a finite dimensional vector given by a matrix multiplication: x_i = Wᵀ y_i, where y_i ∈ R^n, x_i ∈ R^m, and W ∈ R^{n×m}.

Each element j in the vector x_i is an inner product between y_i and the j-th column of W, which we denote by w_j. Let X ∈ R^{m×N} be the matrix whose columns are the set of N transformed observations, let x̄ = (1/N) Σ_{i=1}^N x_i = (1/N) X 1_N be the element-wise average, and X_0 = X − x̄ 1_Nᵀ the demeaned matrix. Clearly, X = WᵀY and X_0 = WᵀY_0.

5.1 Principal Component Projection

When the matrix Wᵀ represents the transformation that applies principal component analysis,⁷ we denote W = P, and the columns of the orthonormal matrix P, denoted {p_j}_{j=1}^n, are referred to as loading vectors. The transformed vectors {x_i}_{i=1}^N are referred to as principal components or scores. The first loading vector is defined as the unit vector with which the inner products of the observations have the greatest variance:

p_1 = argmax_{w_1} w_1ᵀ Y_0 Y_0ᵀ w_1  s.t.  w_1ᵀ w_1 = 1.   (6.50)

The solution to Eq.
6.50 is known to be the eigenvector of the sample covariance matrix Y_0 Y_0ᵀ corresponding to its largest eigenvalue.⁸ Next, p_2 is the unit vector which has the largest variance of inner products between it and the observations after removing the orthogonal projections of the observations onto p_1. It may be found by solving:

p_2 = argmax_{w_2} w_2ᵀ (Y_0 − p_1 p_1ᵀ Y_0)(Y_0 − p_1 p_1ᵀ Y_0)ᵀ w_2  s.t.  w_2ᵀ w_2 = 1.   (6.51)

7 That is, P⁻¹ = Pᵀ.
8 We normalize the eigenvector and disregard its sign.

The solution to Eq. 6.51 is known to be the eigenvector corresponding to the largest eigenvalue under the constraint that it is not collinear with p_1. Similarly, the remaining loading vectors are equal to the remaining eigenvectors of Y_0 Y_0ᵀ, corresponding to descending eigenvalues. The eigenvalues of Y_0 Y_0ᵀ, which is a positive semi-definite matrix, are non-negative. They are not necessarily distinct, but since it is a symmetric matrix it has n eigenvectors that are all orthogonal, and it is always diagonalizable. Thus, the matrix P may be computed by diagonalizing the covariance matrix:

Y_0 Y_0ᵀ = PΛP⁻¹ = PΛPᵀ,

where Λ = X_0 X_0ᵀ is a diagonal matrix whose diagonal elements {λ_i}_{i=1}^n are sorted in descending order. The transformation back to the observations is Y = PX. The fact that the covariance matrix of X is diagonal means that PCA is a decorrelation transformation and is often used to denoise data.

5.2 Dimensionality Reduction

PCA is often used as a method for dimensionality reduction, the process of reducing the number of variables in a model in order to avoid the curse of dimensionality. PCA gives the first m principal components (m < n) by applying the truncated transformation

X_m = P_mᵀ Y,

where each column of X_m ∈ R^{m×N} is a vector whose elements are the first m principal components, and P_m is a matrix whose columns are the first m loading vectors,

P_m = [p_1 · · · p_m] ∈ R^{n×m}.
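The loading matrix P and the decorrelation property can be sketched with an eigendecomposition of Y_0 Y_0ᵀ; the synthetic two-row data below, with one dominant common direction, is purely illustrative:

```python
import numpy as np

# PCA via eigendecomposition of Y0 @ Y0.T for synthetic 2-D data.
rng = np.random.default_rng(0)
N = 500
t = rng.normal(size=N)
Y = np.vstack([t + 0.1 * rng.normal(size=N),      # n = 2 correlated rows
               0.5 * t + 0.1 * rng.normal(size=N)])

Y0 = Y - Y.mean(axis=1, keepdims=True)            # demean each row
evals, P = np.linalg.eigh(Y0 @ Y0.T)              # ascending eigenvalues
P = P[:, ::-1]                                    # reorder to descending
X = P.T @ Y0                                      # scores

# Scores are decorrelated and ordered by variance (Lambda = X0 @ X0.T).
cov_x = X @ X.T
assert cov_x[0, 0] > cov_x[1, 1]
assert abs(cov_x[0, 1]) < 1e-6
# P is orthonormal, so P @ X reconstructs the demeaned data exactly.
assert np.allclose(P @ X, Y0)
```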
Intuitively, by keeping only m principal components, we are losing information, and we minimize this loss of information by maximizing their variances. An important concept in measuring the amount of information lost is the total reconstruction error ‖Y − Ŷ‖_F, where F denotes the Frobenius matrix norm. P_m is also a solution to the minimum total squared reconstruction error problem

min_W ‖Y_0 − WWᵀY_0‖_F²  s.t.  WᵀW = I_{m×m}.   (6.52)

The m leading loading vectors form an orthonormal basis which spans the m-dimensional subspace onto which the projections of the demeaned observations have the minimum squared difference from the original demeaned observations. In other words, P_m compresses each demeaned vector of length n into a vector of length m (where m ≤ n) in such a way that minimizes the sum of total squared reconstruction errors. The minimizer of Eq. 6.52 is not unique: W = P_m Q is also a solution, where Q ∈ R^{m×m} is any orthogonal matrix, Qᵀ = Q⁻¹. Multiplying P_m from the right by Q transforms the first m loading vectors into a different orthonormal basis for the same subspace.

6 Summary

This chapter has reviewed foundational material in time series analysis and econometrics. Such material is not intended to substitute more comprehensive and formal treatment of the methodology, but rather to provide enough background for Chap. 8, where we shall develop neural network analogues. We have covered the following objectives:
– Explain and analyze linear autoregressive models;
– Understand the classical approaches to identifying, fitting, and diagnosing autoregressive models;
– Apply simple heteroscedastic regression techniques to time series data;
– Understand how exponential smoothing can be used to predict and filter time series; and
– Project multivariate time series data onto lower dimensional spaces with principal component analysis.

It is worth noting that in industrial applications the need to forecast more than a few steps ahead often arises.
For example, in algorithmic trading and electronic market making, one needs to forecast far enough into the future so as to make the forecast economically realizable, either through passive trading (skewing of the price) or through aggressive placement of trading orders. This economic realization of the trading signals takes time, whose actual duration depends on the frequency of trading. We should also note that in practice linear regressions predicting the difference between a future and current price, taking as inputs various moving averages, are often used in preference to parametric models, such as GARCH. These linear regressions are often cumbersome, taking as inputs hundreds or thousands of variables.

7 Exercises

Exercise 6.1 Calculate the mean, variance, and autocorrelation function (acf) of the following zero-mean AR(1) process: y_t = φ_1 y_{t−1} + ε_t, where φ_1 = 0.5. Determine whether the process is stationary by computing the root of the characteristic equation Φ(z) = 0.

Exercise 6.2 You have estimated the following ARMA(1,1) model for some time series data: y_t = 0.036 + 0.69y_{t−1} + 0.42u_{t−1} + u_t, where you are given the data at time t − 1, y_{t−1} = 3.4 and û_{t−1} = −1.3. Obtain the forecasts for the series y for times t, t + 1, t + 2 using the estimated ARMA model. If the actual values for the series are −0.032, 0.961, 0.203 for t, t + 1, t + 2, calculate the out-of-sample Mean Squared Error (MSE) and Mean Absolute Error (MAE).

Exercise 6.3 Derive the mean, variance, and autocorrelation function (ACF) of a zero-mean MA(1) process.

Exercise 6.4 Consider the following log-GARCH(1,1) model with a constant for the mean equation:

y_t = μ + u_t,  u_t ∼ N(0, σ_t²),
ln(σ_t²) = α_0 + α_1 u_{t−1}² + β_1 ln(σ_{t−1}²).

– What are the advantages of a log-GARCH model over a standard GARCH model?
– Estimate the unconditional variance of y_t for the values α_0 = 0.01, α_1 = 0.1, β_1 = 0.3.
– Derive an algebraic expression relating the conditional variance to the unconditional variance.
– Calculate the half-life of the model and sketch the forecasted volatility.

Exercise 6.5 Consider the simple moving average (SMA)

S_t = (X_t + X_{t−1} + X_{t−2} + . . . + X_{t−N+1}) / N,

and the exponential moving average (EMA), given by E_1 = X_1 and, for t ≥ 2,

E_t = αX_t + (1 − α)E_{t−1},

where N is the time horizon of the SMA and the coefficient α represents the degree of weighting decrease of the EMA, a constant smoothing factor between 0 and 1. A higher α discounts older observations faster.

a. Suppose that, when computing the EMA, we stop after k terms, instead of going all the way back to the initial value. What fraction of the total weight is obtained?
b. Suppose that we require 99.9% of the weight. What k do we require?
c. Show that, by picking α = 2/(N + 1), one achieves the same center of mass in the EMA as in the SMA with the time horizon N.
d. Suppose that we have set α = 2/(N + 1). Show that the first N points in an EMA represent about 86.5% of the total weight.

Exercise 6.6 Suppose that, for the sequence of random variables {y_t}_{t=0}^∞, the following model holds:

y_t = μ + φy_{t−1} + ε_t,  |φ| ≤ 1,  ε_t ∼ i.i.d.(0, σ²).

Derive the conditional expectation E[y_t | y_0] and the conditional variance Var[y_t | y_0].

Appendix

Hypothesis Tests

Table 6.3 A short summary of some of the most useful diagnostic tests for time series modeling in finance:

– Chi-squared test: used to determine whether the confusion matrix of a classifier is statistically significant, or merely white noise.
– Diebold–Mariano test: used to determine whether the outputs of two separate time series models are statistically different (the analogous comparison for the outputs of two separate regression models on i.i.d. data can be made with a paired t-test).
– ARCH test: Engle's ARCH test is constructed based on the property that, if the residuals are heteroscedastic, the squared residuals are autocorrelated. The Ljung–Box test is then applied to the squared residuals.
– Portmanteau test: a general test for whether the error in a time series model is auto-correlated. Example tests include the Box–Ljung and the Box–Pierce tests.

Python Notebooks

Please see the code folder of Chap. 6 for example implementations of ARIMA models applied to time series prediction. An example applying PCA to decompose stock prices is also provided in this folder. Further details of these notebooks are included in the README.md file for Chap. 6.

Reference

Tsay, R. S. (2010). Analysis of financial time series (3rd ed.). Wiley.

Chapter 7 Probabilistic Sequence Modeling

This chapter presents a powerful class of probabilistic models for financial data. Many of these models overcome some of the severe stationarity limitations of the frequentist models in the previous chapters. The fitting procedure demonstrated is also different: the use of Kalman filtering algorithms for state-space models rather than maximum likelihood estimation or Bayesian inference. Simple examples of hidden Markov models and particle filters in finance and various algorithms are presented.

1 Introduction

So far we have seen how sequences can be modeled using autoregressive processes, moving averages, GARCH, and similar methods. There exists another school of thought, which gave rise to hidden Markov models, the Baum–Welch and Viterbi algorithms, and Kalman and particle filters. In this school of thought, one assumes the existence of a certain latent process (say X), which evolves over time (so we may write X_t). This unobservable, latent process drives another, observable process (say Y_t), which we may observe either at all times or at some subset of times.
The evolution of the latent process X_t, as well as the dependence of the observable process Y_t on X_t, may be driven by random factors. We therefore talk about a stochastic or probabilistic model. We also refer to such a model as a state-space model. The state-space model consists of a description of the evolution of the latent state over time and the dependence of the observables on the latent state.

We have already seen probabilistic methods presented in Chaps. 2 and 3. These methods primarily assume that the data is i.i.d. On the other hand, the time series methods presented in the previous chapter are designed for time series data but are not probabilistic. This chapter shall build on these earlier chapters by considering a powerful class of models for financial data. Many of these models overcome some of the severe stationarity limitations of the frequentist models in the previous chapters. The fitting procedure is also different: we will see the use of Kalman filtering algorithms for state-space models rather than maximum likelihood estimation or Bayesian inference.

© Springer Nature Switzerland AG 2020. M. F. Dixon et al., Machine Learning in Finance.

Chapter Objectives

By the end of this chapter, the reader should expect to accomplish the following:

– Formulate hidden Markov models (HMMs) for probabilistic modeling over hidden states;
– Gain familiarity with the Baum–Welch algorithm for fitting HMMs to time series data;
– Use the Viterbi algorithm to find the most likely path;
– Gain familiarity with state-space models and the application of Kalman filters to fit them; and
– Apply particle filters to financial time series.

2 Hidden Markov Modeling

A hidden Markov model (HMM) is a probabilistic model representing probability distributions over sequences of observations. HMMs are the simplest "dynamic" Bayesian network¹ and have proven a powerful model in many applied fields, including finance.
So far in this book, we have largely considered only i.i.d. observations.² Of course, financial modeling is often seated in a Markovian setting where observations depend on, and only on, the previous observation. We shall briefly review HMMs in passing as they encapsulate important ideas in probabilistic modeling. In particular, they provide intuition for understanding hidden variables and switching. In the next chapter we shall see examples of switching in dynamic recurrent neural networks, such as GRUs and LSTMs, which use gating. However, this gating is an implicit modeling step and cannot be controlled explicitly as may be needed for regime switching in finance.

Let us assume at time t that the discrete state, s_t, is hidden from the observer. Furthermore, we shall assume that the hidden state is a Markov process. Note this setup differs from mixture models, which treat the hidden variable as i.i.d. The time-t observation, y_t, is assumed to be independent of the state at all other times. By the Markovian property, the joint distribution of the sequence of states, s := {s_t}_{t=1}^T, and the sequence of observations, y := {y_t}_{t=1}^T, is given by the product of the transition probability densities p(s_t | s_{t−1}) and the emission probability densities p(y_t | s_t):

p(s, y) = p(s_1) p(y_1 | s_1) ∏_{t=2}^T p(s_t | s_{t−1}) p(y_t | s_t).   (7.1)

Figure 7.1 shows the Bayesian network representing the conditional dependence relations between the observed and the hidden variables in the HMM. The conditional dependence relationships define the edges of the graph between parent nodes, S_t, and child nodes, Y_t.

Fig. 7.1 This figure shows the probabilistic graph representing the conditional dependence relations between the observed and the hidden variables in the HMM

¹ Dynamic Bayesian network models are graphical models used to model dynamic processes through hidden state evolution.
² With the exception of heteroscedastic modeling in Chap. 6.

Example 7.1 Bull or Bear Market?
Suppose that the market is either in a Bear or a Bull market regime, represented by s = 0 or s = 1, respectively. Such states or regimes are assumed to be hidden. Over each period, the market is observed to go up or down, represented by y = −1 or y = 1. Assume that the emission probability matrix, the conditional dependency matrix between observed and hidden variables, is independent of time and given by

P(y_t = y | s_t = s):    s = 0   s = 1
          y = −1:         0.8     0.2
          y = 1:          0.2     0.8

and the transition probability matrix for the Markov process {S_t} is given by

A = [0.9  0.1; 0.1  0.9],  [A]_{ij} := P(S_t = s_i | S_{t−1} = s_j).

Given the observed sequence {−1, 1, 1} (i.e., T = 3), we can compute the probability of a realization of the hidden state sequence {1, 0, 0} using Eq. 7.1. Assuming that P(s_1 = 0) = P(s_1 = 1) = 1/2, the computation is

P(s, y) = P(s_1 = 1) P(y_1 = −1 | s_1 = 1) P(s_2 = 0 | s_1 = 1) P(y_2 = 1 | s_2 = 0) P(s_3 = 0 | s_2 = 0) P(y_3 = 1 | s_3 = 0)
        = 0.5 · 0.2 · 0.1 · 0.2 · 0.9 · 0.2 = 0.00036.

We first introduce the so-called forward and backward quantities, respectively, defined for all states s ∈ {1, . . . , K} and over all times t:

F_t(s) := P(s_t = s, y_{1:t}),
B_t(s) := p(y_{t+1:T} | s_t = s),

with the convention that B_T(s) = 1. For all t ∈ {1, . . . , T} and for all r, s ∈ {1, . . . , K} we have

P(s_t = s, y) = F_t(s) B_t(s),

and combining the forward and backward quantities gives

P(s_{t−1} = r, s_t = s, y) = F_{t−1}(r) P(s_t = s | s_{t−1} = r) p(y_t | s_t = s) B_t(s).

The forward–backward algorithm, also known as the Baum–Welch algorithm, is an unsupervised learning algorithm for fitting HMMs which belongs to the class of EM algorithms.

2.1 The Viterbi Algorithm

In addition to finding the probability of the realization of a particular hidden state sequence, we may also seek the most likely sequence realization. This sequence can be estimated using the Viterbi algorithm. Suppose once again that we observe a sequence of T observations, y = {y_1, . . . , y_T}. However, for each 1 ≤ t ≤ T, y_t ∈ O, where O = {o_1, o_2, . . . , o_N}, N ∈ N, is now some observation space. We suppose that, for each 1 ≤ t ≤ T, the observation y_t is driven by a (hidden) state s_t ∈ S, where S := {s_1, . . . , s_K}, K ∈ N, is some state space. For example, y_t might be the credit rating of a corporate bond and s_t might indicate some latent variable, such as the overall health of the relevant industry sector. Given y, what is the most likely sequence of hidden states, s = {s_1, s_2, . . . , s_T}?

To answer this question, we need to introduce a few more constructs. First, the set of initial probabilities must be given: π = {π_1, . . . , π_K}, so that π_i is the probability that s_1 = s_i, 1 ≤ i ≤ K. We also need to specify the transition matrix A ∈ R^{K×K}, such that the element A_{ij}, 1 ≤ i, j ≤ K, is the probability of transitioning from state s_i to state s_j. Finally, we need the emission matrix B ∈ R^{K×N}, such that the element B_{ij}, 1 ≤ i ≤ K, 1 ≤ j ≤ N, is the probability of observing o_j from state s_i. Let us now consider a simple example to fix ideas.

Example 7.2 The Crooked Dealer

A dealer has two coins: a fair coin, with P(Heads) = 1/2, and a loaded coin, with P(Heads) = 4/5. The dealer starts with the fair coin with probability 3/5. The dealer then tosses the coin several times. After each toss, there is a 2/5 probability of a switch to the other coin. The observed sequence is Heads, Tails, Heads, Tails, Heads, Heads, Heads, Tails, Heads. In this case, the state space and observation space are, respectively,

S = {s_1 = Fair, s_2 = Loaded},  O = {o_1 = Heads, o_2 = Tails},

with initial probabilities π = {π_1 = 0.6, π_2 = 0.4}, transition matrix

A = [0.6  0.4; 0.4  0.6],

and emission matrix

B = [0.5  0.5; 0.8  0.2].

Given the sequence of observations y = (Heads, Tails, Heads, Tails, Heads, Heads, Heads, Tails, Heads), we would like to find the most likely sequence of hidden states, s = {s_1, . . . , s_T}, i.e., determine which of the two coins generated which of the coin tosses. One way to answer this question is by applying the Viterbi algorithm, as detailed in the notebook Viterbi.ipynb.

We note that the most likely state sequence s, which produces the observation sequence y = {y_1, . . . , y_T}, satisfies the recurrence relations

V_{1,k} = P(y_1 | s_1 = s_k) · π_k,
V_{t,k} = max_{1≤i≤K} [ P(y_t | s_t = s_k) · A_{ik} · V_{t−1,i} ],

where V_{t,k} is the probability of the most probable state sequence {s_1, . . . , s_t} such that s_t = s_k:

V_{t,k} = max_{s_1,…,s_{t−1}} P(s_1, . . . , s_{t−1}, s_t = s_k, y_1, . . . , y_t).

The actual Viterbi path can be obtained by, at each step, keeping track of which state index i was used in the second equation. Let ξ(k, t) be the function that returns the value of i that was used to compute V_{t,k} if t > 1, or k if t = 1. Then

s_T = s_{arg max_k V_{T,k}},  s_{t−1} = s_{ξ(s_t, t)}.

We leave the application of the Viterbi algorithm to our example as an exercise for the reader. Note that the Viterbi algorithm determines the most likely complete sequence of hidden states given the sequence of observations and the model specification, including the known transition and emission matrices. If these matrices are known, there is no reason to use the Baum–Welch algorithm. If they are unknown, then the Baum–Welch algorithm must be used.

Filtering and Smoothing with HMMs

Financial data is typically noisy, and we need techniques which can extract the signal from the noise. There are many techniques for reducing the noise. Filtering is a general term for extracting information from a noisy signal. Smoothing is a particular kind of filtering in which low-frequency components are passed and high-frequency components are attenuated. Filtering and smoothing produce distributions of states at each time step.
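The Viterbi recurrences above can be sketched in a few lines and checked against brute-force enumeration of all K^T state sequences on the crooked dealer example (Example 7.2). The encodings below (0 = Fair/Heads, 1 = Loaded/Tails) and the row convention for A are our own choices.

```python
import numpy as np
from itertools import product

# Crooked dealer example: states 0=Fair, 1=Loaded; observations 0=Heads, 1=Tails
pi = np.array([0.6, 0.4])
A = np.array([[0.6, 0.4], [0.4, 0.6]])   # A[i, j] = P(next state j | current state i)
B = np.array([[0.5, 0.5], [0.8, 0.2]])   # B[i, j] = P(observe o_j | state i)
obs = [0, 1, 0, 1, 0, 0, 0, 1, 0]        # H T H T H H H T H

def viterbi(obs, pi, A, B):
    T, K = len(obs), len(pi)
    V = np.zeros((T, K))                  # V[t, k]: best joint prob ending in state k
    back = np.zeros((T, K), dtype=int)    # backpointers xi(k, t)
    V[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        for k in range(K):
            cand = V[t - 1] * A[:, k] * B[k, obs[t]]
            back[t, k] = np.argmax(cand)
            V[t, k] = cand[back[t, k]]
    path = [int(np.argmax(V[-1]))]        # backtrack from the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1], V[-1].max()

path, p_best = viterbi(obs, pi, A, B)

def joint(seq):
    # Joint probability p(s, y) of a full state sequence, as in Eq. 7.1
    p = pi[seq[0]] * B[seq[0], obs[0]]
    for t in range(1, len(obs)):
        p *= A[seq[t - 1], seq[t]] * B[seq[t], obs[t]]
    return p

# Brute force over all 2^9 hidden sequences for comparison
brute = max(product(range(2), repeat=len(obs)), key=joint)
```

The dynamic program visits K² cells per step instead of the K^T sequences the brute-force check enumerates, which is the whole point of the recurrence.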
Whereas maximum likelihood estimation chooses the state with the highest marginal probability as the "best" estimate at each time step, this may not lead to the best path in HMMs. We have seen that the Viterbi algorithm can be deployed to find the optimal state trajectory, not just the sequence of individually "best" states.

2.2 State-Space Models

HMMs belong in the same class as linear Gaussian state-space models. The latter are known as "Kalman filters" and are continuous latent state analogues of HMMs. Note that we have already seen examples of continuous state-space models, which are not necessarily Gaussian, in our exposition on RNNs in Chap. 8. The state transition probability p(s_t | s_{t−1}) can be decomposed into deterministic and noise components:

s_t = F_t(s_{t−1}) + ε_t,

for some deterministic function F_t, where ε_t is zero-mean i.i.d. noise. Similarly, the emission probability p(y_t | s_t) can be decomposed as

y_t = G_t(s_t) + ξ_t,

with zero-mean i.i.d. observation noise ξ_t. If the functions F_t and G_t are linear and time independent, then we have

s_t = A s_{t−1} + ε_t,   (7.9)
y_t = C s_t + ξ_t,       (7.10)

where A is the state transition matrix and C is the observation matrix. For completeness, we contrast the Kalman filter with a univariate RNN, as described in Chap. 8. When the observations y_t are the predictors and the hidden variables are s_t, we have

s_t = F(s_{t−1}, y_t) := σ(A s_{t−1} + B y_t),
y_t = C s_t + ξ_t,

where we have ignored the bias terms for simplicity. Hence, the RNN state equation differs from the Kalman filter in that (i) it is a non-linear function of both the previous state and the observation; and (ii) it is noise-free.

3 Particle Filtering

A Kalman filter maintains its state as moments of the multivariate Gaussian distribution, N(m, P). This approach is appropriate when the state is Gaussian, or when the true distribution can be closely approximated by the Gaussian. What if the distribution is, for example, bimodal?
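Before turning to particles, Eqs. (7.9)–(7.10) can be made concrete with a minimal scalar Kalman filter. All parameter values and the synthetic data below are illustrative assumptions of ours, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
A, C = 0.95, 1.0          # scalar state-transition / observation coefficients
Q, R = 0.1, 1.0           # process / observation noise variances

# Simulate s_t = A s_{t-1} + eps_t, y_t = C s_t + xi_t
T = 500
s = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    s[t] = A * s[t - 1] + rng.normal(scale=np.sqrt(Q))
    y[t] = C * s[t] + rng.normal(scale=np.sqrt(R))

# Kalman filter: propagate the Gaussian N(m, P) through predict/update steps
m, P = 0.0, 1.0
m_hist = np.zeros(T)
for t in range(1, T):
    m_pred, P_pred = A * m, A * P * A + Q          # predict
    K = P_pred * C / (C * P_pred * C + R)          # Kalman gain
    m = m_pred + K * (y[t] - C * m_pred)           # update mean
    P = (1 - K * C) * P_pred                       # update variance
    m_hist[t] = m

mse_filter = np.mean((m_hist - s) ** 2)            # filtered estimate error
mse_raw = np.mean((y / C - s) ** 2)                # error of raw observations
```

With Q much smaller than R, the filtered mean tracks the hidden state substantially better than the raw observations do, which is the practical payoff of maintaining N(m, P).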
Arguably the simplest way to approximate more or less any distribution, including a bimodal distribution, is by data points sampled from that distribution. We refer to those data points as "particles." The more particles we have, the more closely we can approximate the target distribution. The approximate, empirical distribution is then given by the histogram. Note that the particles need not be univariate, as in our example. They may be multivariate if we are approximating a multivariate distribution. Also, in our example the particles all have the same weight. More generally, we may consider weighted particles, whose weights are unequal. This setup gives rise to the family of algorithms known as particle filtering algorithms (Gordon et al. 1993; Kitagawa 1993). One of the most common of them is the Sequential Importance Resampling (SIR) algorithm.

3.1 Sequential Importance Resampling (SIR)

a. Initialization step: At time t = 0, draw M i.i.d. samples from the initial distribution τ_0. Also, initialize M normalized (to 1) weights to an identical value 1/M. For i = 1, . . . , M, the samples will be denoted x̂_{0|0}^{(i)} and the normalized weights λ_0^{(i)}.
b. Recursive step: At time t = 1, . . . , T, let (x̂_{t−1|t−1}^{(i)})_{i=1,...,M} be the particles generated at time t − 1.
   – Importance sampling: For i = 1, . . . , M, sample x̂_{t|t−1}^{(i)} from the Markov transition kernel τ_t(· | x̂_{t−1|t−1}^{(i)}). For i = 1, . . . , M, use the observation density to compute the non-normalized weights

     ω_t^{(i)} := λ_{t−1}^{(i)} · p(y_t | x̂_{t|t−1}^{(i)})

     and the values of the normalized weights before resampling ("br"):

     ᵇʳλ_t^{(i)} := ω_t^{(i)} / Σ_{k=1}^M ω_t^{(k)}.

   – Resampling (or selection): For i = 1, . . . , M, use an appropriate resampling algorithm (such as multinomial resampling, see below) to sample x_{t|t}^{(i)} from the mixture

     Σ_{k=1}^M ᵇʳλ_t^{(k)} δ(x_t − x̂_{t|t−1}^{(k)}),

     where δ(·) denotes the Dirac delta generalized function, and set the normalized weights after resampling, λ_t^{(i)}, appropriately (for most common resampling algorithms this means λ_t^{(i)} := 1/M).

Informally, SIR shares some of the characteristics of genetic algorithms; based on the likelihoods p(y_t | x̂_{t|t−1}^{(i)}), we increase the weights of the more "successful" particles, allowing them to "thrive" at the resampling step. The resampling step was introduced to avoid the degeneration of the particles, with all the weight concentrating on a single point. The most common resampling scheme is the so-called multinomial resampling, which we now review.

3.2 Multinomial Resampling

Notice, from above, that we are resampling using the normalized weights computed before resampling, ᵇʳλ_t^{(1)}, . . . , ᵇʳλ_t^{(M)}:

a. For i = 1, . . . , M, compute the cumulative sums

   ᵇʳΛ_t^{(i)} := Σ_{k=1}^i ᵇʳλ_t^{(k)},

   so that, by construction, ᵇʳΛ_t^{(M)} = 1.
b. Generate M random samples from U(0, 1): u_1, u_2, . . . , u_M.
c. For each i = 1, . . . , M, choose the particle x̂_{t|t}^{(i)} = x̂_{t|t−1}^{(j)} with j ∈ {1, 2, . . . , M} such that u_i ∈ (ᵇʳΛ_t^{(j−1)}, ᵇʳΛ_t^{(j)}], where ᵇʳΛ_t^{(0)} := 0.

Thus we end up with M new particles (children), x_{t|t}^{(1)}, . . . , x_{t|t}^{(M)}, sampled from the existing set x̂_{t|t−1}^{(1)}, . . . , x̂_{t|t−1}^{(M)}, so that some of the existing particles may disappear, while others may appear multiple times. For each i = 1, . . . , M, the number of times x̂_{t|t−1}^{(i)} appears in the resampled set of particles is known as the particle's replication factor, N_t^{(i)}. We set the normalized weights after resampling: λ_t^{(i)} := 1/M. We could view this algorithm as the sampling of the replication factors N_t^{(1)}, . . . , N_t^{(M)} from the multinomial distribution with probabilities ᵇʳλ_t^{(1)}, . . . , ᵇʳλ_t^{(M)}, respectively.
Hence the name of the method.

3.3 Application: Stochastic Volatility Models

Stochastic Volatility (SV) models have been studied extensively in the literature, often as applications of particle filtering and Markov chain Monte Carlo (MCMC). Their broad appeal in finance is their ability to capture the "leverage effect": the observed tendency of an asset's volatility to be negatively correlated with the asset's returns (Black 1976). In particular, Pitt, Malik, and Doucet apply the particle filter to the stochastic volatility model with leverage and jumps (SVLJ) (Malik and Pitt 2009, 2011a,b; Pitt et al. 2014). The model has the general form of Taylor (1982) with two modifications. For t ∈ N ∪ {0}, let y_t denote the log-return on an asset and x_t denote the log-variance of that return. Then

y_t = ε_t e^{x_t/2} + J_t Δ_t,
x_{t+1} = μ(1 − φ) + φ x_t + σ_v η_t,

where μ is the mean log-variance, φ is the persistence parameter, and σ_v is the volatility of log-variance. The first modification to Taylor's model is the introduction of correlation between ε_t and η_t:

(ε_t, η_t)ᵀ ∼ N(0, Σ),  Σ = [1  ρ; ρ  1].

The correlation ρ is the leverage parameter. In general, ρ < 0, due to the leverage effect. The second change is the introduction of jumps: J_t ∈ {0, 1} is a Bernoulli counter with intensity p (thus p is the jump intensity parameter), and Δ_t ∼ N(0, σ_J²) determines the jump size (thus σ_J is the jump volatility parameter). We obtain a stochastic volatility model with leverage (SVL), but no jumps, if we delete the J_t Δ_t term or, equivalently, set p to zero. Taylor's original model is a special case of SVLJ with p = 0, ρ = 0.

This, then, leads to the following adaptation of SIR, developed by Doucet, Malik, and Pitt, for this special case with nonadditive, correlated noises. The initial distribution of x_0 is taken to be N(0, σ_v²/(1 − φ²)).

a. Initialization step: At time t = 0, draw M i.i.d. particles from the initial distribution N(0, σ_v²/(1 − φ²)).
Also, initialize M normalized (to 1) weights to an identical value of 1/M. For i = 1, 2, . . . , M, the samples will be denoted x̂_{0|0}^{(i)} and the normalized weights λ_0^{(i)}.
b. Recursive step: At time t ∈ N, let (x̂_{t−1|t−1}^{(i)})_{i=1,...,M} be the particles generated at time t − 1.
   i. Importance sampling:
      – For i = 1, . . . , M, sample ε̂_{t−1}^{(i)} from p(ε_{t−1} | x_{t−1} = x̂_{t−1|t−1}^{(i)}, y_{t−1}). (If no y_{t−1} is available, as at t = 1, sample from p(ε_{t−1} | x_{t−1} = x̂_{t−1|t−1}^{(i)}).)
      – For i = 1, . . . , M, sample x̂_{t|t−1}^{(i)} from p(x_t | x_{t−1} = x̂_{t−1|t−1}^{(i)}, y_{t−1}, ε̂_{t−1}^{(i)}).
      – For i = 1, . . . , M, compute the non-normalized weights

        ω_t^{(i)} := λ_{t−1}^{(i)} · p(y_t | x̂_{t|t−1}^{(i)}),

        using the observation density

        p(y_t | x̂_{t|t−1}^{(i)}, p, σ_J²) = (1 − p) (2π e^{x̂_{t|t−1}^{(i)}})^{−1/2} exp(−y_t² / (2 e^{x̂_{t|t−1}^{(i)}})) + p (2π (e^{x̂_{t|t−1}^{(i)}} + σ_J²))^{−1/2} exp(−y_t² / (2 (e^{x̂_{t|t−1}^{(i)}} + σ_J²))),

        and the values of the normalized weights before resampling ("br"):

        ᵇʳλ_t^{(i)} := ω_t^{(i)} / Σ_{k=1}^M ω_t^{(k)}.

   ii. Resampling (or selection): For i = 1, . . . , M, use an appropriate resampling algorithm (such as multinomial resampling) to sample x̂_{t|t}^{(i)} from the mixture

        Σ_{k=1}^M ᵇʳλ_t^{(k)} δ(x_t − x̂_{t|t−1}^{(k)}),

        where δ(·) denotes the Dirac delta generalized function, and set the normalized weights after resampling, λ_t^{(i)}, according to the resampling algorithm.

4 Point Calibration of Stochastic Filters

We have seen in the example of the stochastic volatility model with leverage and jumps (SVLJ) that the state-space model may be parameterized by a parameter vector, θ ∈ R^{d_θ}, d_θ ∈ N. In that particular case,

θ = (μ, φ, σ_v², ρ, σ_J², p)ᵀ.

We may not know the true value of this parameter. How do we estimate it? In other words, how do we calibrate the model, given a time series of either historical or generated observations, y_1, . . . , y_T, T ∈ N?
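Before discussing calibration, it helps to be able to generate data from the SVLJ dynamics above. The following is a rough simulation sketch; the function name, parameter values, and seed are illustrative assumptions of ours, and we initialize x from its stationary distribution N(μ, σ_v²/(1 − φ²)) rather than the demeaned N(0, σ_v²/(1 − φ²)) used in the text.

```python
import numpy as np

def simulate_svlj(T, mu=-1.0, phi=0.95, sigma_v=0.2, rho=-0.7,
                  p=0.01, sigma_j=2.0, seed=7):
    """Simulate SVLJ: y_t = eps_t * exp(x_t / 2) + J_t * Delta_t,
    x_{t+1} = mu*(1 - phi) + phi*x_t + sigma_v*eta_t, corr(eps_t, eta_t) = rho."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    eps, eta = rng.multivariate_normal([0.0, 0.0], cov, size=T).T
    J = rng.random(T) < p                       # Bernoulli jump indicators
    jump = rng.normal(scale=sigma_j, size=T)    # jump sizes Delta_t
    x = np.empty(T)
    # Stationary initialization (illustrative choice)
    x[0] = mu + rng.normal(scale=sigma_v / np.sqrt(1 - phi**2))
    y = np.empty(T)
    for t in range(T):
        y[t] = eps[t] * np.exp(x[t] / 2) + J[t] * jump[t]
        if t + 1 < T:
            x[t + 1] = mu * (1 - phi) + phi * x[t] + sigma_v * eta[t]
    return y, x

y, x = simulate_svlj(20000)
```

With ρ < 0, down moves in the return tend to be followed by higher log-variance, reproducing the leverage effect the model was built to capture.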
The frequentist approach relies on the (joint) probability density function of the observations, which depends on the parameters: p(y_1, y_2, . . . , y_T | θ). We can regard this as a function of θ with y_1, . . . , y_T fixed, p(y_1, . . . , y_T | θ) =: L(θ), the likelihood function. This function is sometimes referred to as the marginal likelihood, since the hidden states, x_1, . . . , x_T, are marginalized out. We seek a maximum likelihood estimator (MLE), θ̂_ML: the value of θ that maximizes the likelihood function. Each evaluation of the objective function, L(θ), constitutes a run of the stochastic filter over the observations y_1, . . . , y_T. By the chain rule, and since we use a Markov chain,

p(y_1, . . . , y_T) = ∏_{t=1}^T p(y_t | y_0, . . . , y_{t−1}) = ∏_{t=1}^T ∫ p(y_t | x_t) p(x_t | y_0, . . . , y_{t−1}) dx_t.

Note that, for ease of notation, we have omitted the dependence of all the probability densities on θ; e.g., we write p(y_1, . . . , y_T) instead of p(y_1, . . . , y_T; θ). For the particle filter, we can estimate the likelihood function from the non-normalized weights:

p(y_1, . . . , y_T) = ∏_{t=1}^T ∫ p(y_t | x_t) p(x_t | y_0, . . . , y_{t−1}) dx_t ≈ ∏_{t=1}^T (1/M) Σ_{k=1}^M ω_t^{(k)},

whence

ln(L(θ)) ≈ Σ_{t=1}^T ln( (1/M) Σ_{k=1}^M ω_t^{(k)} ).

This was first proposed by Kitagawa (1993, 1996) for the purposes of approximating θ̂_ML. In most practical applications one needs to resort to numerical methods, perhaps quasi-Newton methods, such as Broyden–Fletcher–Goldfarb–Shanno (BFGS) (Gill et al. 1982), to find θ̂_ML.

Pitt et al. (2014) point out the practical difficulties which result when using the above as an objective function in an optimizer. In the resampling (or selection) step of the particle filter, we are sampling from a discontinuous empirical distribution function. Therefore, ln(L(θ)) will not be continuous as a function of θ. To remedy this, they rely on an alternative, continuous, resampling procedure.
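The particle-filter log-likelihood estimate above can be validated on a model where the exact answer is available in closed form: for a linear-Gaussian state-space model, the Kalman filter gives the exact log-likelihood, so the SIR estimate should agree with it. All parameter values, seeds, and the particle count below are illustrative assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(3)
a, q, r = 0.9, 0.5, 1.0        # transition coeff, process var, observation var
T, M = 50, 5000                # time steps, particles

# Simulate x_t = a x_{t-1} + eps_t, y_t = x_t + xi_t
x = np.zeros(T)
y = np.zeros(T)
for t in range(T):
    x[t] = a * (x[t - 1] if t else 0.0) + rng.normal(scale=np.sqrt(q))
    y[t] = x[t] + rng.normal(scale=np.sqrt(r))

def loglik_particle(y, M, rng):
    """SIR particle filter: accumulates ln L = sum_t ln((1/M) sum_k w_t^(k))."""
    parts = rng.normal(scale=np.sqrt(q / (1 - a**2)), size=M)  # stationary init
    ll = 0.0
    for yt in y:
        parts = a * parts + rng.normal(scale=np.sqrt(q), size=M)  # propagate
        w = np.exp(-0.5 * (yt - parts)**2 / r) / np.sqrt(2 * np.pi * r)
        ll += np.log(w.mean())
        # multinomial resampling via the cumulative weights
        idx = np.searchsorted(np.cumsum(w / w.sum()), rng.random(M))
        parts = parts[np.minimum(idx, M - 1)]
    return ll

def loglik_kalman(y):
    """Exact log-likelihood for the same model via the Kalman filter."""
    m, P, ll = 0.0, q / (1 - a**2), 0.0
    for yt in y:
        m, P = a * m, a * a * P + q            # predict
        S = P + r                              # predictive variance of y_t
        ll += -0.5 * (np.log(2 * np.pi * S) + (yt - m)**2 / S)
        K = P / S
        m, P = m + K * (yt - m), (1 - K) * P   # update
    return ll

ll_pf = loglik_particle(y, M, np.random.default_rng(4))
ll_kf = loglik_kalman(y)
```

The two values should differ only by Monte Carlo noise, which shrinks as M grows; this is the sense in which each evaluation of L(θ) "constitutes a run of the stochastic filter."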
A quasi-Newton method is then used to find θ̂_ML for the parameters θ = (μ, φ, σ_v², ρ, p, σ_J²) of the SVLJ model. We note in passing that Kalman filters can also be calibrated using a similar maximum likelihood approach.

5 Bayesian Calibration of Stochastic Filters

Let us briefly discuss how filtering methods relate to Markov chain Monte Carlo (MCMC) methods, a vast subject in its own right; therefore, our discussion will be cursory at best. The technique takes its origin from Metropolis et al. (1953). Following Kim et al. (1998), Meyer and Yu (2000), and Yu (2005), we demonstrate how MCMC techniques can be used to estimate the parameters of the SVL model. They calibrate the parameters to the time series of observations of daily mean-adjusted log-returns, y_1, . . . , y_T, to obtain the joint prior density

p(θ, x_0, . . . , x_T) = p(θ) p(x_0 | θ) ∏_{t=1}^T p(x_t | x_{t−1}, θ)

by successive conditioning. Here θ := (μ, φ, σ_v², ρ) is, as before, the vector of the model parameters. We assume prior independence of the parameters and choose the same priors as in Kim et al. (1998) for μ, φ, and σ_v², and a uniform prior for ρ. The observation model and the conditional independence assumption give the likelihood

p(y_1, . . . , y_T | θ, x_0, . . . , x_T) = ∏_{t=1}^T p(y_t | x_t),

and the joint posterior distribution of the unobservables (the parameters θ and the hidden states x_0, . . . , x_T; in the Bayesian perspective these are treated identically and estimated in a similar manner) follows from Bayes' theorem; for the SVL model, this posterior satisfies

p(θ, x_0, . . . , x_T | y_1, . . . , y_T) ∝ p(μ) p(φ) p(σ_v²) p(ρ) ∏_{t=1}^T p(x_{t+1} | x_t, μ, φ, σ_v²) ∏_{t=1}^T p(y_t | x_{t+1}, x_t, μ, φ, σ_v², ρ),

where p(μ), p(φ), p(σ_v²), p(ρ) are the appropriately chosen priors,

x_{t+1} | x_t, μ, φ, σ_v² ∼ N( μ(1 − φ) + φ x_t, σ_v² ),
y_t | x_{t+1}, x_t, μ, φ, σ_v², ρ ∼ N( (ρ/σ_v) e^{x_t/2} (x_{t+1} − μ(1 − φ) − φ x_t), e^{x_t} (1 − ρ²) ).

Meyer and Yu use the software package BUGS³ (Spiegelhalter et al. 1996; Lunn et al. 2000) to represent the resulting Bayesian model as a directed acyclic graph (DAG), where the nodes are either constants (denoted by rectangles), stochastic nodes (variables that are given a distribution, denoted by ellipses), or deterministic nodes (logical functions of other nodes); the arrows indicate either stochastic dependence (solid arrows) or logical functions (hollow arrows). This graph helps visualize the conditional (in)dependence assumptions and is used by BUGS to construct full univariate conditional posterior distributions for all unobservables. It then uses Markov chain Monte Carlo algorithms to sample from these distributions.

The algorithm based on the original work (Metropolis et al. 1953) is now known as the Metropolis algorithm. It was generalized by Hastings (1930–2016) to obtain the Metropolis–Hastings algorithm (Hastings 1970) and further by Green to obtain what is known as the Metropolis–Hastings–Green algorithm (Green 1995). A popular algorithm based on a special case of the Metropolis–Hastings algorithm, known as the Gibbs sampler, was developed by Geman and Geman (1984) and, independently, Tanner and Wong (1987).⁴ It was further popularized by Gelfand and Smith (1990). Gibbs sampling and related algorithms (Gilks and Wild 1992; Ritter and Tanner 1992) are used by BUGS to sample from the univariate conditional posterior distributions for all unobservables. As a result we perform Bayesian estimation, obtaining estimates of the distributions of the parameters μ, φ, σ_v², ρ, rather than frequentist estimation, where a single value of the parameter vector which maximizes the likelihood, θ̂_ML, is produced.

Stochastic filtering, sometimes in combination with MCMC, can be used for both frequentist and Bayesian parameter estimation (Chen 2003).
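The Metropolis algorithm that underlies this MCMC machinery is short enough to sketch in full. The target below (a standard normal, known only up to a constant) and the step size are illustrative choices of ours; in a real calibration the log-target would be the unnormalized log-posterior of the model parameters.

```python
import numpy as np

def metropolis(log_target, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + N(0, step^2) and
    accept with probability min(1, target(x') / target(x))."""
    rng = np.random.default_rng(seed)
    x = x0
    lp = log_target(x)
    out = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + rng.normal(scale=step)
        lp_prop = log_target(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        out[i] = x                                 # rejected moves repeat x
    return out

# Illustrative target: standard normal log-density, up to a constant
samples = metropolis(lambda z: -0.5 * z * z, x0=0.0, n_samples=20000)
```

Because only the ratio of target densities enters the acceptance step, the normalizing constant of the posterior is never needed, which is what makes the method practical for Bayesian calibration.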
Filtering methods that update estimates of the parameters online, while processing observations in real time, are referred to as adaptive filtering (see Sayed (2008); Vega and Rey (2013); Crisan and Míguez (2013); Naesseth et al. (2015) and references therein). We note that a Gibbs sampler (or variants thereof) is a highly nontrivial piece of software. In addition to the now classical BUGS/WinBUGS, there exist powerful Gibbs samplers accessible via modern libraries, such as Stan, Edward, and PyMC3.

³ An acronym for Bayesian inference Using Gibbs Sampling.
⁴ Sometimes the Gibbs sampler is referred to as data augmentation following this paper.

6 Summary

This chapter extends Chap. 2 by presenting probabilistic methods for time series data. The key modeling assumption is the existence of a certain latent process, X_t, which evolves over time. This unobservable, latent process drives another, observable process. Such an approach overcomes limitations of stationarity imposed on the methods in the previous chapter. The reader should verify that they have achieved the primary learning objectives of this chapter:

– Formulate hidden Markov models (HMMs) for probabilistic modeling over hidden states;
– Gain familiarity with the Baum–Welch algorithm for fitting HMMs to time series data;
– Use the Viterbi algorithm to find the most likely path;
– Gain familiarity with state-space models and the application of Kalman filters to fit them; and
– Apply particle filters to financial time series.

7 Exercises

Exercise 7.1: Kalman Filtering of the Autoregressive Moving Average ARMA(p, q) Model

The autoregressive moving average ARMA(p, q) model can be written as

y_t = φ_1 y_{t−1} + . . . + φ_p y_{t−p} + η_t + θ_1 η_{t−1} + . . . + θ_q η_{t−q},

where η_t ∼ N(0, σ²), and includes as special cases all AR(p) and MA(q) models. Such models are often fitted to financial time series. Suppose that we would like to filter this time series using a Kalman filter.
Write down a suitable process model and the observation model.

Exercise 7.2: The Ornstein–Uhlenbeck Process

Consider the one-dimensional Ornstein–Uhlenbeck (OU) process, the stationary Gauss–Markov process given by the SDE

dX_t = θ(μ − X_t) dt + σ dW_t,

where X_t ∈ R, X_0 = x_0, and θ > 0, μ, and σ > 0 are constants. Formulate the Kalman process model for this process.

Exercise 7.3: Deriving the Particle Filter for Stochastic Volatility with Leverage and Jumps

We shall regard the log-variance x_t as the hidden state and the log-returns y_t as observations. How can we use the particle filter to estimate x_t on the basis of the observations y_t?

a. Show that, in the absence of jumps,

   x_t = μ(1 − φ) + φ x_{t−1} + σ_v ρ y_{t−1} e^{−x_{t−1}/2} + σ_v √(1 − ρ²) ξ_{t−1}

   for some ξ_t ∼ i.i.d. N(0, 1).

b. Show that

   p(ε_t | x_t, y_t) = δ(ε_t − y_t e^{−x_t/2}) P[J_t = 0 | x_t, y_t] + φ(ε_t; μ_{ε_t|J_t=1}, σ²_{ε_t|J_t=1}) P[J_t = 1 | x_t, y_t],

   where φ(·; μ, σ²) denotes the normal density, and

   μ_{ε_t|J_t=1} = y_t e^{x_t/2} / (e^{x_t} + σ_J²)  and  σ²_{ε_t|J_t=1} = σ_J² / (e^{x_t} + σ_J²).

c. Explain how you could implement random sampling from the probability distribution given by the density p(ε_t | x_t, y_t).
d. Write down the probability density p(x_t | x_{t−1}, y_{t−1}, ε_{t−1}).
e. Explain how you could sample from this distribution.
f. Show that the observation density is given by

   p(y_t | x̂_{t|t−1}^{(i)}, p, σ_J²) = (1 − p) (2π e^{x̂_{t|t−1}^{(i)}})^{−1/2} exp(−y_t² / (2 e^{x̂_{t|t−1}^{(i)}})) + p (2π (e^{x̂_{t|t−1}^{(i)}} + σ_J²))^{−1/2} exp(−y_t² / (2 (e^{x̂_{t|t−1}^{(i)}} + σ_J²))).

Exercise 7.4: The Viterbi Algorithm and an Occasionally Dishonest Casino

The dealer has two coins: a fair coin, with P(Heads) = 1/2, and a loaded coin, with P(Heads) = 4/5. The dealer starts with the fair coin with probability 3/5. The dealer then tosses the coin several times. After each toss, there is a 2/5 probability of a switch to the other coin. The observed sequence is Heads, Tails, Tails, Heads, Tails, Heads, Heads, Heads, Tails, Heads.
Run the Viterbi algorithm to determine which coin the dealer was most likely using for each coin toss.

Appendix: Python Notebooks

The notebooks provided in the accompanying source code repository are designed to gain familiarity with how to implement the Viterbi algorithm and particle filtering for stochastic volatility model calibration. Further details of the notebooks are included in the README.md file.

References

Black, F. (1976). Studies of stock price volatility changes. In Proceedings of the Business and Economic Statistics Section.
Chen, Z. (2003). Bayesian filtering: From Kalman filters to particle filters, and beyond. Statistics, 182(1), 1–69.
Crisan, D., & Míguez, J. (2013). Nested particle filters for online parameter estimation in discrete-time state-space Markov models. ArXiv:1308.1883.
Gelfand, A. E., & Smith, A. F. M. (1990, June). Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association, 85(410), 398–409.
Geman, S. J., & Geman, D. (1984). Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6, 721–741.
Gilks, W. R., & Wild, P. P. (1992). Adaptive rejection sampling for Gibbs sampling. Applied Statistics, 41(2), 337–348.
Gill, P. E., Murray, W., & Wright, M. H. (1982). Practical optimization. Emerald Group Publishing Limited.
Gordon, N. J., Salmond, D. J., & Smith, A. F. M. (1993). Novel approach to nonlinear/non-Gaussian Bayesian state estimation. In IEE Proceedings F (Radar and Signal Processing).
Green, P. J. (1995). Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82(4), 711–732.
Hastings, W. K. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57(1), 97–109.
Kim, S., Shephard, N., & Chib, S. (1998, July). Stochastic volatility: Likelihood inference and comparison with ARCH models.
The Review of Economic Studies, 65(3), 361–393.
Kitagawa, G. (1993). A Monte Carlo filtering and smoothing method for non-Gaussian nonlinear state space models. In Proceedings of the 2nd U.S.-Japan Joint Seminar on Statistical Time Series Analysis (pp. 110–131).
Kitagawa, G. (1996). Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational and Graphical Statistics, 5(1), 1–25.
Lunn, D. J., Thomas, A., Best, N. G., & Spiegelhalter, D. (2000). WinBUGS – a Bayesian modelling framework: Concepts, structure and extensibility. Statistics and Computing, 10, 325–337.
Malik, S., & Pitt, M. K. (2009, April). Modelling stochastic volatility with leverage and jumps: A simulated maximum likelihood approach via particle filtering. Warwick Economic Research Papers 897, The University of Warwick, Department of Economics, Coventry CV4 7AL.
Malik, S., & Pitt, M. K. (2011a, February). Modelling stochastic volatility with leverage and jumps: A simulated maximum likelihood approach via particle filtering. Document de travail 318, Banque de France Eurosystème.
Malik, S., & Pitt, M. K. (2011b). Particle filters for continuous likelihood evaluation and maximisation. Journal of Econometrics, 165, 190–209.
Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., & Teller, E. (1953). Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21.
Meyer, R., & Yu, J. (2000). BUGS for a Bayesian analysis of stochastic volatility models. Econometrics Journal, 3, 198–215.
Naesseth, C. A., Lindsten, F., & Schön, T. B. (2015). Nested sequential Monte Carlo methods. In Proceedings of the 32nd International Conference on Machine Learning.
Pitt, M. K., Malik, S., & Doucet, A. (2014). Simulated likelihood inference for stochastic volatility models using continuous particle filtering. Annals of the Institute of Statistical Mathematics, 66, 527–552.
Ritter, C., & Tanner, M. A. (1992).
Facilitating the Gibbs sampler: The Gibbs stopper and the Griddy-Gibbs sampler. Journal of the American Statistical Association, 87(419), 861–868.
Sayed, A. H. (2008). Adaptive filters. Wiley-Interscience.
Spiegelhalter, D., Thomas, A., Best, N. G., & Gilks, W. R. (1996, August). BUGS 0.5: Bayesian inference using Gibbs sampling manual (version ii). Robinson Way, Cambridge CB2 2SR: MRC Biostatistics Unit, Institute of Public Health.
Tanner, M. A., & Wong, W. H. (1987, June). The calculation of posterior distributions by data augmentation. Journal of the American Statistical Association, 82(398), 528–540.
Taylor, S. J. (1982). Financial returns modelled by the product of two stochastic processes, a study of daily sugar prices. In Time series analysis: theory and practice (pp. 203–226). North-Holland.
Vega, L. R., & Rey, H. (2013). A rapid introduction to adaptive filtering. Springer Briefs in Electrical and Computer Engineering. Springer.
Yu, J. (2005). On leverage in a stochastic volatility model. Journal of Econometrics, 127, 165–178.

Chapter 8 Advanced Neural Networks

This chapter presents various neural network models for financial time series analysis, providing examples of how they relate to well-known techniques in financial econometrics. Recurrent neural networks (RNNs) are presented as nonlinear time series models and generalize classical linear time series models such as AR(p). They provide a powerful approach for prediction in financial time series and generalize to non-stationary data. This chapter also presents convolution neural networks for filtering time series data and exploiting different scales in the data. Finally, this chapter demonstrates how autoencoders are used to compress information and generalize principal component analysis.

1 Introduction

The universal approximation theorem states that a feedforward network is capable of approximating any function. So why do other types of neural networks exist? One answer to this is efficiency.
In this chapter, different architectures shall be explored for their ability to exploit the structure in the data, resulting in fewer weights. Hence the main motivation for different architectures is often parsimony of parameters and therefore less propensity to overfit and reduced training time. We shall see that other architectures can be used, in particular ones that change their behavior over time, without the need to retrain the networks. And we will see how neural networks can be used to compress data, analogously to principal component analysis. There are other neural network architectures which are used in financial applications but are too esoteric to list here. However, we shall focus on three other classes of neural networks which have proven to be useful in the finance industry. The first two are supervised learning techniques and the last is an unsupervised learning technique.

© Springer Nature Switzerland AG 2020 M. F. Dixon et al., Machine Learning in Finance, https://doi.org/10.1007/978-3-030-41068-1_8

Recurrent neural networks (RNNs) are non-linear time series models and generalize classical linear time series models such as AR(p). They provide a powerful approach for prediction in financial time series and share parameters across time. Convolution neural networks are useful as spectral transformations of spatial and temporal data and generalize techniques such as wavelets, which use fixed basis functions. They share parameters across space. Finally, autoencoders are used to compress information and generalize principal component analysis.
Chapter Objectives

By the end of this chapter, the reader should expect to accomplish the following:
– Characterize RNNs as non-linear autoregressive models and analyze their stability;
– Understand how gated recurrent units and long short-term memory architectures give a dynamic autoregressive model with variable memory;
– Characterize CNNs as regression, classification, and time series regression of filtered data;
– Understand principal component analysis for dimension reduction;
– Formulate a linear autoencoder and extract the principal components; and
– Understand how to build more complex networks by aggregating these different concepts.

The notebooks provided in the accompanying source code repository demonstrate many of the methods in this chapter. See Appendix “Python Notebooks” for further details.

2 Recurrent Neural Networks

Recall that if the data D := {x_t, y_t}_{t=1}^N consists of auto-correlated observations of X and Y at times t = 1, …, N, then the prediction problem can be expressed as a sequence prediction problem: construct a non-linear time series predictor, ŷ_{t+h}, of a response, y_{t+h}, using a high-dimensional input matrix of length-T sub-sequences X_t:

ŷ_{t+h} = f(X_t) where X_t := seq_{T,t}(X) = (x_{t−T+1}, …, x_t),

where x_{t−j} is the jth lagged observation of x_t, x_{t−j} = L^j[x_t], for j = 0, …, T − 1. Sequence learning, then, is just a composition of a non-linear map and a vectorization of the lagged input variables. If the data is i.i.d., then no sequence is needed (i.e., T = 1), and we recover a feedforward neural network.

Recurrent neural networks (RNNs) are time series methods or sequence learners which have achieved much success in applications such as natural language understanding, language generation, video processing, and many other tasks (Graves 2012). There are many types of RNNs—we will just concentrate on simple RNN models for brevity of notation.
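The sub-sequence construction seq_{T,t}(X) above is easy to sketch in NumPy. The helper name `seq` and the toy data below are illustrative, not from the book's notebooks:

```python
import numpy as np

def seq(X, T):
    """Build the lagged sub-sequences X_t = (x_{t-T+1}, ..., x_t).

    X has shape (N, P); the result has shape (N - T + 1, T, P),
    one length-T window of consecutive observations per time t.
    """
    N = X.shape[0]
    return np.stack([X[t - T + 1:t + 1] for t in range(T - 1, N)])

X = np.arange(10.0).reshape(-1, 1)  # N = 10 observations, P = 1
Xt = seq(X, T=3)                    # 8 windows of length T = 3
```

With T = 1 each "window" is a single observation and the construction reduces to the feedforward case, as noted above.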
Like multivariate structural autoregressive models, RNNs apply an autoregressive function f^{(1)}_{W^{(1)},b^{(1)}}(X_t) to each input sequence X_t, where T denotes the look-back period at each time step—the maximum number of lags. However, rather than directly imposing an autocovariance structure, a RNN provides a flexible functional form to directly model the predictor, Ŷ. As illustrated in Fig. 8.1, this simple RNN is an unfolding of a single hidden layer neural network (a.k.a. Elman network (Elman 1991)) over all time steps in the sequence, j = 0, …, T − 1. For each time step, j, this function f^{(1)}_{W^{(1)},b^{(1)}}(X_{t,j}) generates a hidden state z_{t−j} from the current input x_{t−j} and the previous hidden state z_{t−j−1}, where X_{t,j} = seq_{T,t−j}(X) ⊂ X_t:

Fig. 8.1 An illustrative example of a recurrent neural network with one hidden layer, “unfolded” over a sequence of six time steps. Each input x_t is in the sequence X_t. The hidden layer contains H units and the ith output at time step t is denoted by z_t^i. The connections between the hidden units are recurrent and are weighted by the matrix W_z^{(1)}. At the last time step t, the hidden units connect to a single unit output layer with continuous ŷ_{t+h}

ŷ_{t+h} = f_{W^{(2)},b^{(2)}}(z_t) := σ^{(2)}(W^{(2)} z_t + b^{(2)}),

hidden states: z_{t−j} = f^{(1)}_{W^{(1)},b^{(1)}}(X_{t,j}) := σ^{(1)}(W_z^{(1)} z_{t−j−1} + W_x^{(1)} x_{t−j} + b^{(1)}), j ∈ {T − 1, …, 0},

where σ^{(1)} is an activation function such as tanh(x), and σ^{(2)} is either a softmax function or identity map depending on whether the response is categorical or continuous, respectively. The connections between the external inputs x_{t−j} and the H hidden units are weighted by the time-invariant matrix W_x^{(1)} ∈ R^{H×P}. The recurrent connections between the H hidden units are weighted by the time-invariant matrix W_z^{(1)} ∈ R^{H×H}.
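A minimal forward pass of this plain RNN, with a tanh hidden activation and an identity output map, can be sketched as follows. All parameter values are random and purely illustrative, not fitted:

```python
import numpy as np

rng = np.random.default_rng(0)
H, P, T = 4, 2, 6                        # hidden units, input dim, sequence length

Wx = 0.1 * rng.standard_normal((H, P))   # W_x^(1): input-to-hidden weights
Wz = 0.1 * rng.standard_normal((H, H))   # W_z^(1): recurrent weights
b1 = np.zeros(H)                         # b^(1)
W2 = 0.1 * rng.standard_normal((1, H))   # W^(2): hidden-to-output weights
b2 = np.zeros(1)                         # b^(2)

def rnn_forward(Xt):
    """Unfold the single hidden layer over the T steps of the sequence Xt."""
    z = np.zeros(H)                          # zero initial hidden state
    for x in Xt:                             # j = T-1, ..., 0
        z = np.tanh(Wz @ z + Wx @ x + b1)    # sigma^(1) = tanh
    return W2 @ z + b2                       # sigma^(2) = identity (continuous y)

y_hat = rnn_forward(rng.standard_normal((T, P)))
```

Note how the same (W_x, W_z, b) are reused at every step — this is the parameter sharing across time mentioned in the chapter introduction.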
Without such a matrix, the architecture is simply a single-layered feedforward network without memory—each independent observation x_t is mapped to an output ŷ_t using the same hidden layer. The set W^{(1)} = (W_x^{(1)}, W_z^{(1)}) refers to the input and recurrence weights. W^{(2)} denotes the weights tied to the output of the H hidden units at the last time step, z_t, and the output layer. If the response is a continuous vector, Y ∈ R^M, then W^{(2)} ∈ R^{M×H}. If the response is categorical, with K states, then W^{(2)} ∈ R^{K×H}. The number of hidden units determines the degree of non-linearity in the model and must be at least the dimensionality of the input P. In our experiments the hidden layer is generally under a hundred units, but can increase to thousands in higher dimensional datasets. There are a number of issues in the RNN design. How many times should the network be unfolded? How many hidden neurons H should the hidden layer contain? How should one perform “variable selection”? The answer to the first question lies in tests for autocorrelation of the data—the sequence length needed in a RNN can be determined by the largest significant lag in an estimated “partial autocorrelation” function. The answer to the second is no different to the problem of how to choose the number of hidden neurons in a MLP—the bias–variance tradeoff being the most important consideration. And indeed the third question is also closely related to the problem of choosing features in a MLP. One can take a principled approach to feature selection, first identifying a subset of features using a Granger-causality test, or adopt the “laissez-machine” approach of including all potentially relevant features and allowing auto-shrinkage to determine the most important weights; the latter is more aligned with contemporary experimental design in machine learning. One important caveat on feature selection for RNNs is that each feature must be a time series and therefore exhibit autocorrelation.
We shall begin with a simple, univariate, example to illustrate how a RNN, without activation, is an AR(p) time series model.

Example 8.1 RNNs as Non-linear AR(p) Models

Consider the simplest case of a RNN with one hidden unit, H = 1, no activation function, and the dimensionality of the input vector P = 1. Suppose further that W_z^{(1)} = φ_z, |φ_z| < 1, W_x^{(1)} = φ_x, W_y = 1, b_h = 0 and b_y = μ. Then we can show that f_{W^{(1)},b^{(1)}}(X_t) is an autoregressive, AR(p), model of order p with geometrically decaying autoregressive coefficients φ_i = φ_x φ_z^{i−1}:

z_{t−p} = φ_x x_{t−p}
z_{t−p+1} = φ_z z_{t−p} + φ_x x_{t−p+1}
… = …
z_{t−1} = φ_z z_{t−2} + φ_x x_{t−1}
x̂_t = z_{t−1} + μ,

then

x̂_t = μ + φ_x (L + φ_z L² + ⋯ + φ_z^{p−1} L^p)[x_t] = μ + Σ_{i=1}^p φ_i x_{t−i}.

This special type of autoregressive model x̂_t is “stable” and the order can be identified through autocorrelation tests on X such as the Durbin–Watson, Ljung–Box, or Box–Pierce tests. Note that if we modify the architecture so that the recurrence weights W_{z,i}^{(1)} = φ_{z,i} are lag dependent then the unactivated hidden layer is z_{t−i} = φ_{z,i} z_{t−i−1} + φ_x x_{t−i}, which gives

x̂_t = μ + φ_x (L + φ_{z,1} L² + ⋯ + ∏_{i=1}^{p−1} φ_{z,i} L^p)[x_t],

and thus the weights in this AR(p) model are φ_j = φ_x ∏_{i=1}^{j−1} φ_{z,i}, which allows a more flexible representation of the autocorrelation structure than the plain RNN—which is limited to geometrically decaying weights. Note that a linear RNN with an infinite number of lags and no bias corresponds to an exponential smoother, z_t = α x_t + (1 − α) z_{t−1}, when W_z = 1 − α, W_x = α, and W_y = 1.

The generalization of a linear RNN from AR(p) to VAR(p) is trivial and can be written as

x̂_t = μ + Σ_{j=1}^p φ_j x_{t−j},  φ_j := W^{(2)} (W_z^{(1)})^{j−1} W_x^{(1)},  μ := W^{(2)} Σ_{j=1}^p (W_z^{(1)})^{j−1} b^{(1)} + b^{(2)},

where each square matrix φ_j ∈ R^{P×P} and the bias vector μ ∈ R^P.

2.1 RNN Memory: Partial Autocovariance

Generally, with non-linear activation, it is more difficult to describe the RNN as a classical model.
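The algebra in Example 8.1 is easy to verify numerically: iterating the unactivated recursion reproduces the AR(p) prediction with geometrically decaying coefficients φ_i = φ_x φ_z^{i−1}. The parameter values below are arbitrary:

```python
import numpy as np

phi_x, phi_z, mu, p = 0.5, 0.8, 0.1, 5
rng = np.random.default_rng(1)
x = rng.standard_normal(p + 1)           # x[j] holds x_{t-p+j}, j = 0, ..., p

# Unactivated RNN recursion from the example:
# z_{t-p} = phi_x x_{t-p};  z_{t-j} = phi_z z_{t-j-1} + phi_x x_{t-j};
# and finally xhat_t = z_{t-1} + mu.
z = phi_x * x[0]
for j in range(1, p):
    z = phi_z * z + phi_x * x[j]
xhat_rnn = z + mu

# Equivalent AR(p) form with phi_i = phi_x * phi_z**(i-1)
xhat_ar = mu + sum(phi_x * phi_z**(i - 1) * x[p - i] for i in range(1, p + 1))
```

The two predictions agree to machine precision, confirming that the unactivated single-unit RNN is exactly this restricted AR(p) model.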
However, the partial autocovariance function provides some additional insight here. Let us first consider a RNN(1) process. The lag-1 partial autocovariance is

γ̃₁ = E[y_t − μ, y_{t−1} − μ] = E[ŷ_t + ε_t − μ, y_{t−1} − μ],

and using the RNN(1) model with, for simplicity, a single recurrence weight, φ: ŷ_t = σ(φ y_{t−1}) gives

γ̃₁ = E[σ(φ y_{t−1}) + ε_t − μ, y_{t−1} − μ] = E[y_{t−1} σ(φ y_{t−1})],

where we have assumed μ = 0 in the second part of the expression. Checking that we recover the AR(1) covariance, set σ := Id so that

γ̃₁ = φ E[y²_{t−1}] = φ V[y_{t−1}].

Continuing with the lag-2 partial autocovariance gives:

γ̃₂ = E[y_t − P(y_t | y_{t−1}), y_{t−2} − P(y_{t−2} | y_{t−1})],

and P(y_t | y_{t−1}) is approximated by the RNN(1): ŷ_t = σ(φ y_{t−1}). Substituting y_t = ŷ_t + ε_t into the above gives

γ̃₂ = E[ε_t, y_{t−2} − P(y_{t−2} | y_{t−1})].

Approximating P(y_{t−2} | y_{t−1}) with the backward RNN(1) ŷ_{t−2} = σ(φ(ŷ_{t−1} + ε_{t−1})), we see, crucially, that ŷ_{t−2} depends on ε_{t−1} but not on ε_t. Hence y_{t−2} − P(y_{t−2} | y_{t−1}) depends on {ε_{t−1}, ε_{t−2}, …}. Thus we have that γ̃₂ = 0. As a counterexample, consider the lag-2 partial autocovariance of the RNN(2) process

ŷ_{t−2} = σ(φ σ(φ(ŷ_t + ε_t) + ε_{t−1})),

which depends on ε_t and hence the lag-2 partial autocovariance is not zero. It is easy to show that the partial autocorrelation τ̃_s = 0, s > p and, thus, like the AR(p) process, the partial autocorrelation function for a RNN(p) has a cut-off at p lags. The partial autocorrelation function is independent of time. Such a property can be used to identify the order of the RNN model from the estimated PACF.

2.2 Stability

We can generalize the stability constraint on AR(p) models presented in Sect. 2.3 to RNNs by considering the RNN(1) model:

y_t = (1 − σ(W_z L + b))^{−1}[ε_t],

where we have set W_y = 1 and b_y = 0 without loss of generality, and dropped the superscript (·)^{(1)} for ease of notation.
Expressing this as an infinite-dimensional non-linear moving average model,

y_t = (1 − σ(W_z L + b))^{−1}[ε_t] = Σ_{j=0}^∞ σ^j(W_z L + b)[ε_t],

the infinite sum will be stable when the σ^j(·) terms do not grow with j, i.e. |σ| ≤ 1 for all values of φ and y_{t−1}. In particular, the choice tanh satisfies the requirement on σ. For higher order models, we follow an induction argument and show first that for a RNN(2) model we obtain

y_t = (1 − σ(W_z σ(W_z L² + b) + W_x L + b))^{−1}[ε_t] = Σ_{j=0}^∞ σ^j(W_z σ(W_z L² + b) + W_x L + b)[ε_t],

which again is stable if |σ| ≤ 1, and it follows for any model order that the stability condition holds. It follows that lagged unit impulses of the data strictly decay with the order of the lag when |σ| ≤ 1. Again by induction, at lag 1, the output from the hidden layer is z_t = σ(W_z 1 + W_x 0 + b) = σ(W_z 1 + b). The absolute value of each component of the hidden variable under a unit vector impulse at lag 1 is strictly less than 1:

|z_t|_j = |σ(W_z 1 + b)|_j < 1,

if |σ(x)| ≤ 1 and each element of W_z 1 + b is finite. Additionally, if σ is strictly monotone increasing, then |z_t|_j under a lag-two unit innovation is strictly less than |z_t|_j under a lag-one unit innovation:

|σ(W_z 1 + b)|_j > |σ(W_z σ(W_z 1 + b) + b)|_j.

The implication of this stability result is the reassuring attribute that past random disturbances decay in the model and the effect of lagged data becomes less relevant to the model output with increasing lag.

2.3 Stationarity

For completeness, we mention in passing the extension of the stationarity analysis in Sect. 2.4 to RNNs. The linear univariate RNN(p), considered above, has a companion matrix of the form

C := ⎡ 0        1         0  ⋯  0      ⎤
     ⎢ 0        0         1  ⋯  0      ⎥
     ⎢ ⋮        ⋮            ⋱  ⋮      ⎥
     ⎢ 0        0         0  ⋯  1      ⎥
     ⎣ −φ^{−p}  −φ^{−p+1}    ⋯  −φ^{−1} ⎦

and it turns out that for φ ≠ 0 this model is non-stationary. We can hence rule out the choice of a linear activation since this would leave us with a linear RNN.
Hence, it appears that some non-linear activation is necessary for the model to be stationary, but we cannot use the Cayley–Hamilton theorem to prove stationarity.

Half-Life

Suppose that the output of the RNN is in R^d. The half-life of the lag is the smallest number of function compositions, k, of σ̃(x) := σ(W_z x + b) with itself such that the normalized jth output satisfies

r_j^{(k)} := (W_y σ̃ ∘ σ̃ ∘ ⋯ ∘ σ̃(1) + b_y)_j / (W_y σ̃(1) + b_y)_j ≤ 0.5, k ≥ 2, ∀j ∈ {1, …, d},  (8.19)

where the numerator composes σ̃ with itself k times. Note the output has been normalized so that the lag-1 unit impulse ensures that the ratio r_j^{(1)} = 1 for each j. This modified definition exists to account for the effects of the activation function and the semi-affine transformation which are not present in an AR(p) model. In general, there is no guarantee that the half-life is finite, but we can find parameter values for which the half-life can be found. For example, suppose for simplicity that a univariate RNN is given by x̂_t = z_{t−1} and z_t = σ(z_{t−1} + x_t). Then the lag-1 impulse is x̂_t^{(1)} = σ̃(1) = σ(0 + 1), the lag-2 impulse is x̂_t^{(2)} = σ(σ(1) + 0) = σ̃ ∘ σ̃(1), and so on. If σ(x) := tanh(x) and we normalize over the output from the lag-1 impulse, we obtain the values in Table 8.1.

Table 8.1 The half-life characterizes the memory decay of the architecture by measuring the number of periods before a lagged unit impulse has at least half of its effect at lag 1. The calculation of the half-life involves nested composition of the recursion relation for the hidden layer until r_j^{(k)} is less than a half. The calculations are repeated for each j, hence the half-life may vary depending on the component of the output. In this example, the half-life of the univariate RNN is 9 periods.

Lag k   1      2      3      4      5      6      7      8      9
r^(k)   1.000  0.843  0.744  0.673  0.620  0.577  0.543  0.514  0.489

Multiple Choice Question 1

Which of the following statements are true:

a.
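Table 8.1 and the half-life of 9 periods can be reproduced directly by composing σ̃(x) = tanh(x) with itself and normalizing by the lag-1 impulse:

```python
import numpy as np

z1 = np.tanh(1.0)        # lag-1 unit impulse through sigma~(x) = tanh(0 + x)
z, k = z1, 1
ratios = [1.0]           # r^(1) = 1 by the normalization
while z / z1 > 0.5:
    z = np.tanh(z)       # one more composition of sigma~ -> one more lag
    k += 1
    ratios.append(z / z1)

half_life = k            # first lag at which the normalized impulse is <= 0.5
```

Running this yields the ratios 1.000, 0.843, 0.744, … of Table 8.1 and `half_life == 9`, matching the table.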
An augmented Dickey–Fuller test can be applied to time series to determine whether they are covariance stationary.
b. The estimated partial autocorrelation of a covariance stationary time series can be used to identify the design sequence length in a plain recurrent neural network.
c. Plain recurrent neural networks are guaranteed to be stable, namely lagged unit impulses decay over time.
d. The Ljung–Box test is used to test whether the fitted model residual error is autocorrelated.
e. The half-life of a lag-1 unit impulse is the number of lags before the impulse has half its effect on the model output.

2.4 Generalized Recurrent Neural Networks (GRNNs)

Classical RNNs, such as those described above, treat the error as homoscedastic—that is, the error is i.i.d. We mention in passing that we can generalize RNNs to heteroscedastic models by modifying the loss function to the squared Mahalanobis length of the residual vector. Such an approach is referred to here as generalized recurrent neural networks (GRNNs) and is mentioned briefly here with the caveat that the field of machine learning in econometrics is nascent, and such a methodology, while appealing from a theoretical perspective, is not yet proven in practice. Hence the purpose of this subsection is simply to illustrate how more complex models can be developed which mirror some of the developments in parametric econometrics. In its simplest form, we solve a weighted least squares minimization problem using the data, D_t:

minimize_{W,b} f(W, b) + λφ(W, b),  (8.20)

L_Σ(Y, Ŷ) := (Y − Ŷ)^T Σ^{−1} (Y − Ŷ),  Σ_{tt} = σ_t²,  Σ_{tt′} = ρ_{tt′} σ_t σ_{t′},  (8.21)

f(W, b) = (1/T) Σ_{t=1}^T L_Σ(y_t, ŷ_t),  (8.22)

where Σ := E[εε^T | X_t] is the conditional covariance matrix of the residual error and φ(W, b) is a regularization penalty term. The conditional covariance matrix of the error must be estimated.
This is performed as follows, using the notation (·)^T to denote the transpose of a vector and (·)_Σ to denote model parameters fitted under heteroscedastic error.

1) For each t = 1, …, T, estimate the residual error over the training set, ε_t ∈ R^N, using the standard (unweighted) loss function to find the weights, Ŵ, and biases, b̂, where the error is ε_t = y_t − F_{Ŵ,b̂}(X_t).

2) The sample conditional covariance matrix Σ̂ is estimated accordingly:

Σ̂ = (1/(T − 1)) Σ_{t=1}^T ε_t ε_t^T.

3) Perform the generalized least squares minimization using Eq. 8.20 to obtain a fitted heteroscedastic neural network model, with refined error ε_t = y_t − F_{Ŵ_Σ,b̂_Σ}(X_t).

The fitted GRNN F_{Ŵ_Σ,b̂_Σ} can then be used for forecasting without any further modification. The effect of the sample covariance matrix is to adjust the importance of each observation in the training set, based on the variance of its error and the error correlation. Such an approach can be broadly viewed as a RNN analogue of how GARCH models extend AR models. Of course, GARCH models treat the error distribution as parametric and provide a recurrence relation for forecasting the conditional volatility. In contrast, GRNNs rely on the empirical error distribution and do not forecast the conditional volatility. However, a separate regression could be performed over diagonals of the empirical conditional volatility by using time series cross-validation.

3 Gated Recurrent Units

The extension of RNNs to dynamical time series models rests on extending foundational concepts in time series analysis. We begin by considering a smoothed RNN with hidden state ĥ_t.
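The two ingredients of steps 1)–3), the sample covariance Σ̂ and the squared Mahalanobis loss, are each a few lines of NumPy. The helper names and the toy residuals below are hypothetical:

```python
import numpy as np

def sample_covariance(eps):
    """Step 2: sample covariance of the residuals; eps has shape (T, N)."""
    T = eps.shape[0]
    return (eps.T @ eps) / (T - 1)

def mahalanobis_loss(y, y_hat, Sigma):
    """Squared Mahalanobis length (y - y_hat)^T Sigma^{-1} (y - y_hat)."""
    r = y - y_hat
    return float(r @ np.linalg.solve(Sigma, r))

eps = np.array([[1.0, 0.0], [0.0, 2.0], [-1.0, 0.0], [0.0, -2.0]])  # toy residuals
Sigma_hat = sample_covariance(eps)
loss = mahalanobis_loss(np.array([1.0, 1.0]), np.zeros(2), Sigma_hat)
```

Note how the second component of the residual, whose error variance is larger, contributes less to the loss — exactly the down-weighting of noisy observations described above.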
Such a RNN is almost identical to a plain RNN, but with an additional scalar smoothing parameter, α, which provides the network with “long memory.”

3.1 α-RNNs

Let us consider a univariate α-RNN(p) model in which the smoothing parameter is fixed:

ŷ_{t+1} = W_y ĥ_t + b_y,  (8.26)
ĥ_t = σ(U_h h̃_{t−1} + W_h y_t + b_h),  (8.27)
h̃_t = α ĥ_t + (1 − α) h̃_{t−1},  (8.28)

with the starting condition in each sequence, ĥ_{t−p+1} = y_{t−p+1}. This model augments the plain RNN by replacing ĥ_{t−1} in the hidden layer with an exponentially smoothed hidden state h̃_{t−1}. The effect of the smoothing is to provide infinite memory when α < 1. For the special case when α = 1, we recover the plain RNN with short memory of length p. We can easily study this model by simplifying the parameterization and considering the unactivated case. Setting b_y = b_h = 0, U_h = W_h = φ and W_y = 1:

ŷ_{t+1} = ĥ_t = φ(h̃_{t−1} + y_t) = φ(α ĥ_{t−1} + (1 − α) h̃_{t−2} + y_t).

Without loss of generality, consider p = 2 lags in the model so that ĥ_{t−1} = φ y_{t−1}. Then

ĥ_t = φ(αφ y_{t−1} + (1 − α) h̃_{t−2} + y_t)

and the model can be written in the simpler form

ŷ_{t+1} = φ_1 y_t + φ_2 y_{t−1} + φ(1 − α) h̃_{t−2},

with autoregressive weights φ_1 := φ and φ_2 := αφ². We now see, in comparison with an AR(2) model, that there is an additional term which vanishes when α = 1 but provides infinite memory to the model since h̃_{t−2} depends on y_0, the first observation in the whole time series, not just the first observation in the sequence. The α-RNN model can be trained by treating α as a hyperparameter. The choice to fix α is obviously limited to stationary time series. We can extend the model to non-stationary time series by using a dynamic version of exponential smoothing.
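The unactivated calculation above can be checked numerically; the parameter and state values below are arbitrary:

```python
phi, alpha = 0.7, 0.6
y_t, y_tm1, h_tilde_tm2 = 1.3, -0.4, 0.25   # y_t, y_{t-1}, and an arbitrary h~_{t-2}

# Unactivated alpha-RNN with U_h = W_h = phi, W_y = 1, and zero biases:
h_hat_tm1 = phi * y_tm1                               # h^_{t-1} = phi y_{t-1} (p = 2)
h_tilde_tm1 = alpha * h_hat_tm1 + (1 - alpha) * h_tilde_tm2
y_hat = phi * (h_tilde_tm1 + y_t)                     # y^_{t+1} = h^_t

# AR(2)-plus-memory form: phi_1 y_t + phi_2 y_{t-1} + phi (1 - alpha) h~_{t-2}
y_hat_ar = phi * y_t + alpha * phi**2 * y_tm1 + phi * (1 - alpha) * h_tilde_tm2
```

The two expressions agree exactly, and setting `alpha = 1.0` makes the h̃_{t−2} term vanish, recovering the plain AR(2) form.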
Dynamic α_t-RNNs

Dynamic exponential smoothing is a time-dependent, convex combination of the smoothed output, ỹ_t, and the observation y_t:

ỹ_{t+1} = α_t y_t + (1 − α_t) ỹ_t,

where α_t ∈ [0, 1] denotes the dynamic smoothing factor, which can be equivalently written in the one-step-ahead forecast-error form

ỹ_{t+1} = ỹ_t + α_t (y_t − ỹ_t).

Hence the smoothing can be viewed as a form of dynamic forecast-error correction: when α_t = 0, the forecast error is ignored and the smoothing merely repeats the current smoothed value ỹ_t; when α_t = 1, the forecast error overwrites the current smoothed value, to the effect of the model losing its memory. The smoothing can also be viewed as a weighted sum of the lagged observations, with lower or equal weights, α_{t−s} ∏_{r=1}^{s} (1 − α_{t−r+1}), at the lag s ≥ 1 past observation, y_{t−s}:

ỹ_{t+1} = α_t y_t + Σ_{s=1}^{t−1} α_{t−s} ∏_{r=1}^{s} (1 − α_{t−r+1}) y_{t−s} + ∏_{r=1}^{t} (1 − α_{t−r+1}) ỹ_1,

where the last term is a time-dependent constant and typically we initialize the exponential smoother with ỹ_1 = y_1. Note that for any α_{t−r+1} = 1, the prediction ỹ_{t+1} will have no dependency on all lags {y_{t−s}}_{s≥r}. The model simply forgets the observations at or beyond the rth lag. In the special case when the smoothing parameter is constant, α_t = α, the above expression simplifies to ỹ_{t+1} = α Φ(L)^{−1}[y_t], or equivalently written as an AR(1) process in ỹ_{t+1}:

Φ(L)[ỹ_{t+1}] = α y_t,

for the linear operator Φ(z) := 1 + (α − 1)z, where L is the lag operator.

3.2 Neural Network Exponential Smoothing

Let us suppose now that instead of smoothing the observed time series {y_s}_{s≤t}, we instead smooth a hidden vector ĥ_t with α̂_t ∈ [0, 1]^H to give a filtered time series

h̃_t = α̂_t ∘ ĥ_t + (1 − α̂_t) ∘ h̃_{t−1},  (8.39)

where ∘ denotes the Hadamard product between vectors. This smoothing is a vectorized form of the above classical setting, only here we note that when (α̂_t)_i = 1, the ith component of the hidden variable is unmodified and the past filtered hidden variable is forgotten.
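The forgetting property — any α_t = 1 erases dependence on all earlier observations — is easy to see in code. The helper `smooth` and the toy series are hypothetical:

```python
def smooth(y, alpha):
    """Dynamic exponential smoothing: y~_{t+1} = alpha_t y_t + (1 - alpha_t) y~_t."""
    y_tilde = y[0]                  # initialize the smoother with y~_1 = y_1
    path = [y_tilde]
    for y_t, a_t in zip(y, alpha):
        y_tilde = a_t * y_t + (1 - a_t) * y_tilde
        path.append(y_tilde)
    return path

# alpha_t = 1 at the second step overwrites the state with y_2, so the
# first observation cannot influence any later smoothed value.
a = smooth([5.0, 2.0, 3.0], [1.0, 1.0, 0.5])
b = smooth([100.0, 2.0, 3.0], [1.0, 1.0, 0.5])
```

Both runs end at the same smoothed value even though their first observations differ wildly: the α_t = 1 step acted as a reset.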
On the other hand, when (α̂_t)_i = 0, the ith component of the hidden variable is ignored and the current filtered hidden variable is set to its past value. The smoothing in Eq. 8.39 can then be viewed as updating long-term memory, maintaining a smoothed hidden state variable as the memory through a convex combination of the current hidden variable and the previous smoothed hidden variable. The hidden variable is given by the semi-affine transformation:

ĥ_t = σ(U_h h̃_{t−1} + W_h x_t + b_h),  (8.40)

which in turn depends on the previous smoothed hidden variable. Substituting Eq. 8.40 into Eq. 8.39 gives a function of h̃_{t−1} and x_t:

h̃_t = g(h̃_{t−1}, x_t; α̂_t) := α̂_t ∘ σ(U_h h̃_{t−1} + W_h x_t + b_h) + (1 − α̂_t) ∘ h̃_{t−1}.  (8.41)

We see that when α̂_t = 0, the smoothed hidden variable h̃_t is not updated by the input x_t. Conversely, when α̂_t = 1, we observe that the hidden variable locally behaves like a non-linear autoregressive series. Thus the smoothing parameter can be viewed as the sensitivity of the smoothed hidden state to the input x_t. The challenge becomes how to determine dynamically how much error correction is needed. GRUs address this problem by learning α̂ = F_{(W_α,U_α,b_α)}(X) from the input variables with a plain RNN parameterized by weights and biases (W_α, U_α, b_α). The one-step-ahead forecast of the smoothed hidden state, h̃_t, is the filtered output of another plain RNN with weights and biases (W_h, U_h, b_h). Putting this together gives the following α_t-RNN model (a simple GRU):

smoothing: h̃_t = α̂_t ∘ ĥ_t + (1 − α̂_t) ∘ h̃_{t−1},  (8.42)
smoother update: α̂_t = σ^{(1)}(U_α h̃_{t−1} + W_α x_t + b_α),  (8.43)
hidden state update: ĥ_t = σ(U_h h̃_{t−1} + W_h x_t + b_h),  (8.44)

where σ^{(1)} is a sigmoid or Heaviside function and σ is any activation function. Figure 8.2 shows the response of an α_t-RNN when the input consists of two unit impulses.
For simplicity, the sequence length is assumed to be 3 (i.e., the RNN has a memory of 3 lags), the biases are set to zero, all the weights are set to one, and σ(x) := tanh(x). Note that the weights have not been fitted here; we are merely observing the effect of smoothing on the hidden state for the simplest choice of parameter values. The RNN loses memory of the unit impulse after three lags, whereas the RNNs with smoothed hidden states maintain memory of the first unit impulse even when the second unit impulse arrives. The difference between the dynamically smoothed RNN (the α_t-RNN) and the α-RNN with a fixed smoothing parameter appears insignificant. Keep in mind, however, that the dynamical smoothing model has much more flexibility in how it controls the sensitivity of the smoothing to the unit impulses. In the above α_t-RNN, there is no means to directly, occasionally, forget the memory. This is because the hidden variable update equation always depends on the previous smoothed hidden state, unless U_h = 0. However, it can be expected that the fitted recurrence weight Û_h will not in general be zero and thus the model is without a “hard reset button.” GRUs also have the capacity to entirely reset the memory by adding an additional reset variable:

Fig. 8.2 An illustrative example of the response of an α_t-RNN and comparison with a plain RNN and a RNN with an exponentially smoothed hidden state, under a constant α (α-RNN). The RNN(3) model loses memory of the unit impulse after three lags, whereas the α-RNN(3) models maintain memory of the first unit impulse even when the second unit impulse arrives. The difference between the α_t-RNN (the toy GRU) and the α-RNN appears insignificant.
$$
\begin{eqnarray}
\text{smoothing:} && \tilde{h}_t = \hat{\alpha}_t \circ \hat{h}_t + (1 - \hat{\alpha}_t) \circ \tilde{h}_{t-1} \qquad (8.45)\\
\text{smoother update:} && \hat{\alpha}_t = \sigma(U_\alpha \tilde{h}_{t-1} + W_\alpha x_t + b_\alpha) \qquad (8.46)\\
\text{hidden state update:} && \hat{h}_t = \sigma(U_h\, \hat{r}_t \circ \tilde{h}_{t-1} + W_h x_t + b_h) \qquad (8.47)\\
\text{reset update:} && \hat{r}_t = \sigma(U_r \tilde{h}_{t-1} + W_r x_t + b_r). \qquad (8.48)
\end{eqnarray}
$$

The effect of introducing a reset, or switch, $\hat{r}_t$, is to forget the dependence of $\hat{h}_t$ on the smoothed hidden state. When the reset is zero, we effectively turn the update for $\hat{h}_t$ from a plain RNN into an FFN and entirely neglect the recurrence. The recurrence in the update of $\hat{h}_t$ is thus dynamic.

It may appear that the combination of a reset and adaptive smoothing is redundant. But remember that $\hat{\alpha}_t$ affects the level of error correction in the update of the smoothed hidden state, $\tilde{h}_t$, whereas $\hat{r}_t$ adjusts the level of recurrence in the unsmoothed hidden state $\hat{h}_t$. Put differently, $\hat{\alpha}_t$ by itself cannot disable the memory in the smoothed hidden state (internal memory), whereas $\hat{r}_t$ in combination with $\hat{\alpha}_t$ can. More precisely, when $\alpha_t = 1$ and $\hat{r}_t = 0$, we have $\tilde{h}_t = \hat{h}_t = \sigma(W_h x_t + b_h)$, which is reset to the latest input, $x_t$, and the GRU is just an FFN. Also, when $\alpha_t = 1$ and $\hat{r}_t > 0$, a GRU acts like a plain RNN. Thus a GRU can be seen as a more general architecture which is capable of being an FFN or a plain RNN for certain parameter values.

These additional layers (or cells) enable a GRU to learn extremely complex long-term temporal dynamics that a plain RNN is not capable of. The price to pay for this flexibility is the additional complexity of the model. Clearly, one must choose whether to opt for a simpler model, such as an $\alpha_t$-RNN, or use a GRU. Lastly, we comment in passing that in the GRU, as in an RNN, there is a final feedforward layer to transform the (smoothed) hidden state to a response:

$$ \hat{y}_t = W_Y \tilde{h}_t + b_Y. $$
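Adding the reset gate changes only the candidate-state line of the recurrence. A minimal sketch of Eqs. (8.45)–(8.48), with illustrative parameter names and tanh as the hidden-state activation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, p):
    """One step of the simple GRU with a reset gate (Eqs. 8.45-8.48).

    p holds the weights/biases of the three plain-RNN updates:
    (U_a, W_a, b_a), (U_r, W_r, b_r), and (U_h, W_h, b_h).
    """
    a_t = sigmoid(p["U_a"] @ h_prev + p["W_a"] @ x_t + p["b_a"])  # smoother update
    r_t = sigmoid(p["U_r"] @ h_prev + p["W_r"] @ x_t + p["b_r"])  # reset update
    # the reset gate modulates the recurrence inside the candidate state
    h_hat = np.tanh(p["U_h"] @ (r_t * h_prev) + p["W_h"] @ x_t + p["b_h"])
    return a_t * h_hat + (1.0 - a_t) * h_prev                     # smoothing
```

Setting `r_t` to zero drops the recurrence entirely (an FFN update), while `a_t = 1` with `r_t > 0` recovers plain-RNN behavior, matching the limiting cases discussed above.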
3.3 Long Short-Term Memory (LSTM)

The GRU provides a gating mechanism for propagating a smoothed hidden state, a long-term memory, which can be overridden and can even turn the GRU into a plain RNN (with short memory) or a memoryless FFN. More complex models using hidden units with varying connections within the memory unit have been proposed in the engineering literature with empirical success (Hochreiter and Schmidhuber 1997; Gers et al. 2001; Zheng et al. 2017).

LSTMs are similar to GRUs but have a separate (cell) memory, $c_t$, in addition to a hidden state $h_t$. LSTMs also do not require that the memory updates be a convex combination; hence they are more general than exponential smoothing. The mathematical description of LSTMs is rarely given in an intuitive form, but the model can be found in, for example, Hochreiter and Schmidhuber (1997).

The cell memory is updated by the following expression involving a forget gate, $\hat{\alpha}_t$, an input gate, $\hat{z}_t$, and a cell gate, $\hat{c}_t$:

$$ c_t = \hat{\alpha}_t \circ c_{t-1} + \hat{z}_t \circ \hat{c}_t. $$

In the language of LSTMs, the triple $(\hat{\alpha}_t, \hat{r}_t, \hat{z}_t)$ are, respectively, referred to as the forget gate, output gate, and input gate. Our change of terminology is deliberate and designed to provide more intuition and continuity with GRUs and econometrics. We note that in the special case when $\hat{z}_t = 1 - \hat{\alpha}_t$, we obtain a similar exponential smoothing expression to that used in the GRU. Beyond that, the role of the input gate appears superfluous and difficult to reason about using time series analysis. Likely it merely arose from a contextual engineering model; however, it is tempting to speculate how the additional variable provides the LSTM with a more elaborate representation of complex temporal dynamics. When the forget gate $\hat{\alpha}_t = 0$, the cell memory depends solely on the cell gate update $\hat{c}_t$. Through the term $\hat{\alpha}_t \circ c_{t-1}$, the cell memory has long-term memory, which is only forgotten beyond lag $s$ if $\hat{\alpha}_{t-s} = 0$.
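Taken together with the gate updates stated in the following equations, one LSTM step can be sketched as below, in the book's notation (forget gate, input gate, reset gate, cell gate). The parameter names are illustrative, not the notation of any particular library:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM step: cell memory c_t and hidden state h_t.

    p holds weights/biases for the reset (r), forget (a), input (z)
    and cell (c) gates, each a plain-RNN update on h_{t-1} and x_t.
    """
    r_t = sigmoid(p["U_r"] @ h_prev + p["W_r"] @ x_t + p["b_r"])    # reset gate
    a_t = sigmoid(p["U_a"] @ h_prev + p["W_a"] @ x_t + p["b_a"])    # forget gate
    z_t = sigmoid(p["U_z"] @ h_prev + p["W_z"] @ x_t + p["b_z"])    # input gate
    c_hat = np.tanh(p["U_c"] @ h_prev + p["W_c"] @ x_t + p["b_c"])  # cell gate
    c_t = a_t * c_prev + z_t * c_hat   # cell memory update (adaptive AR structure)
    h_t = r_t * np.tanh(c_t)           # hidden state: a gated copy of the memory
    return h_t, c_t
```

With `z_t` constrained to `1 - a_t`, the cell memory update reduces to the exponential-smoothing form used by the GRU.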
Thus the cell memory has an adaptive autoregressive structure. The extra "memory," treated as a hidden state and separate from the cell memory, is nothing more than a Hadamard product:

$$ h_t = \hat{r}_t \circ \tanh(c_t), $$

which is reset if $\hat{r}_t = 0$. If $\hat{r}_t = 1$, then the cell memory directly determines the hidden state. Thus the reset gate can entirely override the effect of the cell memory's autoregressive structure, without erasing it. In contrast, the GRU has one memory, which serves as the hidden state, and it is directly affected by the reset gate.

The reset, forget, input, and cell memory gates are updated by plain RNNs, all depending on the hidden state $h_t$:

$$
\begin{eqnarray}
\text{Reset gate:} && \hat{r}_t = \sigma(U_r h_{t-1} + W_r x_t + b_r)\\
\text{Forget gate:} && \hat{\alpha}_t = \sigma(U_\alpha h_{t-1} + W_\alpha x_t + b_\alpha)\\
\text{Input gate:} && \hat{z}_t = \sigma(U_z h_{t-1} + W_z x_t + b_z)\\
\text{Cell memory gate:} && \hat{c}_t = \tanh(U_c h_{t-1} + W_c x_t + b_c).
\end{eqnarray}
$$

Like the GRU, the LSTM can function as a short-memory, plain RNN: just set $\alpha_t = 0$ in Eq. 8.50. However, the LSTM can also function as a coupling of FFNs: just set $\hat{r}_t = 0$, so that $h_t = 0$ and hence there is no recurrence structure in any of the gates.

Both GRUs and LSTMs, even if the nomenclature does not suggest it, can model long- and short-term autoregressive memory. The GRU couples these through a smoothed hidden state variable. The LSTM separates out the long memory, stored in the cell memory, but uses a copy of it, which may additionally be reset. Strictly speaking, the cell memory has a long-short autoregressive memory structure, so it would be misleading in the context of time series analysis to strictly discern the two memories as long and short (as the nomenclature suggests). The latter can be thought of as a truncated version of the former.

? Multiple Choice Question 2
Which of the following statements are true:

a. A gated recurrent unit uses dynamic exponential smoothing to propagate a hidden state with infinite memory.
b.
The gated recurrent unit requires that the data be covariance stationary.
c. Gated recurrent units are unconditionally stable, for any choice of activation functions and weights.
d. A GRU only has one memory, the hidden state, whereas an LSTM has an additional, cellular, memory.

4 Python Notebook Examples

The following Python examples demonstrate the application of RNNs and GRUs to financial time series prediction.

Fig. 8.3 A comparison of out-of-sample forecasting errors produced by an RNN and a GRU trained on minute snapshots of Coinbase mid-prices

4.1 Bitcoin Prediction

ML_in_Finance-RNNs-Bitcoin.ipynb provides an example of how TensorFlow can be used to train and test RNNs for time series prediction. The example dataset is for predicting minute-ahead mid-prices from minute snapshots of the USD value of Coinbase over 2018. Statistical methods for stationarity and autocorrelation are used to characterize the data, identify the sequence length needed in the RNN, and diagnose the model error. Here we accept the Null, since the p-value is larger than 0.01, and thus we cannot reject the Null of the ADF test at the 99% confidence level. Since plain RNNs are not suited to non-stationary time series modeling, we can use a GRU or LSTM to model the non-stationary data, since these models exhibit a dynamic autocorrelation structure. Figure 8.3 compares the out-of-sample forecasting errors produced by an RNN and a GRU. See the notebook for further details of the architecture and experiment.

4.2 Predicting from the Limit Order Book

The dataset is tick-by-tick, top-of-the-limit-order-book data, such as mid-prices and volume weighted mid-prices (VWAPs), collected from ZN futures. This dataset is heavily truncated for demonstration purposes and consists of 1,033,492 observations. The data has also been labeled to indicate whether the prices up-tick (1), remain
the same, or down-tick (−1) over the next tick. For demonstration purposes, the timestamps have been removed. In this simple forecasting experiment, we predict VWAPs (a.k.a. "smart prices") from historical smart prices. Note that a classification experiment is also possible but not shown here.

Fig. 8.4 A comparison of out-of-sample forecasting errors produced by a plain RNN and a GRU trained on tick-by-tick smart prices of ZN futures

The ADF test is performed over the first 200k observations, as it is computationally intensive to apply it to the entire dataset. The ADF test statistic is −3.9706 and the p-value is smaller than 0.01; we thus reject the Null of the ADF test at the 99% confidence level in favor of the data being stationary (i.e., there are no unit roots). The Ljung–Box test is used to identify the number of lags needed in the model. A comparison of out-of-sample VWAP predictions produced by a plain RNN and a GRU is shown in Fig. 8.4. Because the data is stationary, we observe little advantage in using a GRU over a plain RNN. See ML_in_Finance-RNNs-HFT.ipynb for further details of the network architectures and experiment.

5 Convolutional Neural Networks

Convolutional neural networks (CNNs) are feedforward neural networks that can exploit local spatial structure in the input data. Flattening high-dimensional time series, such as limit order book depth histories, would require a very large number of weights in a feedforward architecture. CNNs attempt to reduce the network size by exploiting data locality (Fig. 8.5).

Fig. 8.5 Convolutional neural networks. Source: Van Veen, F. & Leijnen, S. (2019), "The Neural Network Zoo", retrieved from https://www.asimovinstitute.org/neuralnetwork-zoo

Deep CNNs, with multiple consecutive convolutions followed by non-linear functions, have been shown to be immensely successful in image processing (Krizhevsky et al. 2012).
We can view convolutions as spatial filters that are designed to select a specific pattern in the data, for example, straight lines in an image. For this reason, convolution is frequently used for image processing, such as for smoothing, sharpening, and edge detection of images. Of course, in financial modeling, we typically have different spatial structures, such as limit order book depths or the implied volatility surface of derivatives. However, the CNN has established its place in time series analysis too.

5.1 Weighted Moving Average Smoothers

A common technique in time series analysis and signal processing is to filter the time series. We have already seen exponential smoothing as a special case of a class of smoothers known as "weighted moving average" (WMA) smoothers. WMA smoothers take the form

$$ \tilde{x}_t = \frac{\sum_{i \in I} w_i x_{t-i}}{\sum_{i \in I} w_i}, $$

where $\tilde{x}_t$ is the local mean of the time series. The weights are specified to emphasize or deemphasize particular observations of $x_{t-i}$ in the span $|I|$. An example of a well-known smoother is the Hanning smoother $h(3)$:

$$ \tilde{x}_t = (x_{t-1} + 2x_t + x_{t+1})/4. $$

Such smoothers have the effect of reducing noise in the time series. The moving average filter is a simple low-pass finite impulse response (FIR) filter commonly used for regularizing an array of sampled data. It takes $|I|$ samples of input at a time and takes the weighted average of those to produce a single output point. As the length of the filter increases, the smoothness of the output increases, whereas the sharp modulations in the data are smoothed out.

The moving average filter is in fact a convolution using a very simple filter kernel. More generally, we can write a univariate time series prediction problem as a convolution with a filter as follows. First, the discrete convolution gives the relation between $x_i$ and $x_j$:

$$ x_{t-i} = \sum_{j=0}^{t-1} \delta_{ij}\, x_{t-j}, \quad i \in \{0, \dots, t-1\}, $$

where we have used the Kronecker delta $\delta_{ij}$.
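As a quick illustration, the Hanning smoother above is a three-point weighted moving average. A minimal NumPy sketch (the endpoint convention, leaving the ends unfiltered, matches the text):

```python
import numpy as np

def hanning_smooth(x):
    """Apply the Hanning h(3) smoother (x[t-1] + 2*x[t] + x[t+1]) / 4.

    The two endpoints are left unfiltered, as in the text's convention.
    """
    x = np.asarray(x, dtype=float)
    out = x.copy()
    out[1:-1] = (x[:-2] + 2.0 * x[1:-1] + x[2:]) / 4.0
    return out

# A single spike is spread over its neighbors, reducing sharp modulations:
print(hanning_smooth([0, 0, 4, 0, 0]))  # [0. 1. 2. 1. 0.]
```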
The kernel-filtered time series is a convolution

$$ \tilde{x}_{t-i} = \sum_{j \in J} K_{j+k+1}\, x_{t-i-j}, \quad i \in \{k+1, \dots, p-k\}, \qquad (8.59) $$

where $J := \{-k, \dots, k\}$, so that the span of the filter is $|J| = 2k+1$, where $k$ is taken to be a small integer, and the kernel is $K$. For simplicity, the ends of the sequence are assumed to be unfiltered, but for notational reasons we set $\tilde{x}_{t-i} = x_{t-i}$ for $i \in \{1, \dots, k, p-k+1, \dots, p\}$. Then the filtered AR(p) model is

$$ \hat{x}_t = \mu + \sum_{i=1}^{p} \phi_i \tilde{x}_{t-i} = \mu + (\phi_1 L + \phi_2 L^2 + \cdots + \phi_p L^p)[\tilde{x}_t] = \mu + [L, L^2, \dots, L^p]\,\phi\,[\tilde{x}_t], $$

with coefficients $\phi := [\phi_1, \dots, \phi_p]$. Note that there is no look-ahead bias, because we do not filter the last $k$ values of the observed data $\{x_s\}_{s=1}^{t}$. We have just written our first toy 1D CNN, consisting of a feedforward output layer and a non-activated hidden layer with one unit (i.e., kernel):

$$ \hat{x}_t = W_y z_t + b_y, \quad z_t = [\tilde{x}_{t-1}, \dots, \tilde{x}_{t-p}]^T, \quad W_y = \phi^T, \quad b_y = \mu, $$

where $\tilde{x}_{t-i}$ is the $i$th output from a convolution of the length-$p$ input sequence with a kernel consisting of $2k+1$ weights. These weights are fixed over time, and hence the CNN is only suited to prediction from stationary time series. Note also, in contrast to an RNN, that the size of the weight matrix $W_y$ increases with the number of lags in the model.

The univariate CNN predictor with $p$ lags and $H$ activated hidden units (kernels) is

$$ \hat{x}_t = W_y\, \mathrm{vec}(z_t) + b_y, $$

$$ [z_t]_{i,m} = \sigma\Big( \sum_{j \in J} K_{m,\,j+k+1}\, x_{t-i-j} + [b_h]_m \Big) = \sigma(K * x_t + b_h), $$

where $m \in \{1, \dots, H\}$ denotes the index of the kernel, the kernel matrix $K \in \mathbb{R}^{H \times (2k+1)}$, the hidden bias vector $b_h \in \mathbb{R}^H$, and the output matrix $W_y \in \mathbb{R}^{1 \times pH}$.

Dimension Reduction

Since the size of $W_y$ increases with both the number of lags and the number of kernels, it may be preferable to reduce the dimensionality of the weights with an additional layer, and hence avoid over-fitting. We will return to this concept later, but one might view it as an alternative to auto-shrinkage or dropout.
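The toy 1D CNN above (one unactivated kernel followed by the AR(p) output layer) can be sketched directly. The helper names and example values below are illustrative only:

```python
import numpy as np

def conv1d_valid(x, kernel):
    """'Valid' 1D convolution (cross-correlation convention) with a kernel."""
    x = np.asarray(x, dtype=float)
    span = len(kernel)
    return np.array([np.dot(kernel, x[i:i + span]) for i in range(len(x) - span + 1)])

def toy_cnn_ar_forecast(x_lags, kernel, phi, mu):
    """One-step forecast: filter the p lagged values with a width-(2k+1) kernel,
    leave the ends unfiltered (the text's convention), then apply the
    feedforward output layer W_y = phi^T, b_y = mu."""
    p, k = len(x_lags), (len(kernel) - 1) // 2
    x_tilde = np.asarray(x_lags, dtype=float).copy()
    x_tilde[k:p - k] = conv1d_valid(x_lags, kernel)  # filtered interior values
    return mu + np.dot(phi, x_tilde)                 # AR(p) readout

# With the identity kernel [0, 1, 0], this reduces to a plain AR(p) forecast:
print(toy_cnn_ar_forecast([1, 2, 3, 4, 5], [0, 1, 0], [1, 1, 1, 1, 1], 0.0))  # 15.0
```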
Non-sequential Models

Convolutional neural networks are not limited to sequential models. One might, for example, sample the past lags non-uniformly, so that $I = \{2^i\}_{i=1}^p$; then the maximum lag in the model is $2^p$. Such a non-sequential model allows a large maximum lag without capturing all the intermediate lags. We will return to non-sequential models in the section on dilated convolution.

Stationarity

A univariate CNN predictor, with one kernel and no activation, can be written in the canonical form

$$ \hat{x}_t = \mu + (1 - \phi(L))[K * x_t] = \mu + K * (1 - \phi(L))[x_t] = \mu + (\tilde{\phi}_1 L + \cdots + \tilde{\phi}_p L^p)[x_t] := \mu + (1 - \tilde{\phi}(L))[x_t], $$

where, by the linearity of $\phi(L)$ in $x_t$, the convolution commutes, and thus we can write $\tilde{\phi} := K * \phi$. Finding the roots of the characteristic equation $\tilde{\phi}(z) = 0$, it follows that the CNN is strictly stationary and ergodic if all the roots lie outside the unit circle in the complex plane, $|\lambda_i| > 1$, $i \in \{1, \dots, p\}$. As before, we would compute the eigenvalues of the companion matrix to find the roots. Provided that $\tilde{\phi}(L)^{-1}$ forms a convergent sequence in the noise process $\{\epsilon_s\}_{s=1}^{t}$, the model is stable.

5.2 2D Convolution

2D convolution involves applying a small kernel matrix (a.k.a. a filter), $K \in \mathbb{R}^{(2k+1) \times (2k+1)}$, over the input matrix (called an image), $X \in \mathbb{R}^{m \times n}$, to give a filtered image, $Y \in \mathbb{R}^{(m-2k) \times (n-2k)}$. In the context of convolutional neural networks, the elements of the filtered image are referred to as the feature map values and are calculated according to the following formula:

$$ y_{i,j} = [K * X]_{i,j} = \sum_{p,q=-k}^{k} K_{k+1+p,\; k+1+q}\; x_{i+p+1,\; j+q+1}, \quad i \in \{1, \dots, m-2k\},\ j \in \{1, \dots, n-2k\}. $$

It is instructive to consider the following example to illustrate 2D convolution with a small kernel matrix.

Example 8.2 2D Convolution

Consider the $4 \times 4$ input, $3 \times 3$ kernel, and $2 \times 2$ output matrices

$$ X = \begin{pmatrix} 1 & 0 & 0 & 2\\ 0 & 0 & 0 & 3\\ 2 & 0 & 1 & 0\\ 0 & 2 & 1 & 0 \end{pmatrix}, \quad K = \begin{pmatrix} 0 & -1 & 1\\ 0 & 1 & 0\\ 1 & -1 & 0 \end{pmatrix}, \quad Y = \begin{pmatrix} 2 & 1\\ -2 & 5 \end{pmatrix}. $$

The calculation of the output for the case $i = j = 1$ is

$$ y_{1,1} = [K * X]_{1,1} = \sum_{p,q=-k}^{k} K_{k+1+p,\;k+1+q}\; x_{p+2,\;q+2} = 0 \cdot 1 + (-1) \cdot 0 + 1 \cdot 0 + 0 \cdot 0 + 1 \cdot 0 + 0 \cdot 0 + 1 \cdot 2 + (-1) \cdot 0 + 0 \cdot 1 = 2. $$

We leave it as an exercise for the reader to compute the output for the remaining values of $i$ and $j$. As in the example above, when we perform convolution over the $4 \times 4$ image with a $3 \times 3$ kernel, we get a $2 \times 2$ feature map. This is because there are only 4 unique positions where we can place the filter inside the image.

As convolutional neural networks were designed for image processing, it is common to represent the color values of the pixels with $c$ color channels; for example, RGB values are represented with three channels. The general form of the convolution layer map for an $m \times n \times c$ input tensor and an $m \times n \times H$ output (with stride 1 and padding) is $\theta : \mathbb{R}^{m \times n \times c} \to \mathbb{R}^{m \times n \times H}$. Writing

$$ f = \begin{pmatrix} f_1 \\ \vdots \\ f_c \end{pmatrix}, $$

we can then write the layer map as

$$ \theta(f) = K * f + b, \qquad (8.73) $$

where $K \in \mathbb{R}^{[(2k+1) \times (2k+1)] \times H \times c}$ and $b \in \mathbb{R}^{m \times n \times H}$ is given by $b := 1_{m \times n} \otimes b$, where $1_{m \times n}$ is an $m \times n$ matrix with all elements equal to 1. In component form, the operation (8.73) is

$$ [\theta(f)]_j = \sum_{i=1}^{c} [K]_{i,j} * [f]_i + b_j, \quad j \in \{1, \dots, H\}, $$

where $[\cdot]_{i,j}$ contracts the 4-tensor to a 2-tensor by indexing the $i$th third component and $j$th fourth component of the tensor, and for any $g \in \mathbb{R}^{m \times n}$ and $H \in \mathbb{R}^{(2k+1) \times (2k+1)}$,

$$ [H * g]_{i,j} = \sum_{p,q=-k}^{k} H_{k+1+p,\;k+1+q}\; g_{i+p,\;j+q}, \quad i \in \{1, \dots, m\},\ j \in \{1, \dots, n\}. \qquad (8.75) $$

By analogy with a fully connected feedforward architecture, the weights in the layer are given by the kernel tensor, $K$, and the biases, $b$, are $H$-vectors. Instead of a semi-affine transformation, the layer is given by an activated convolution $\sigma(\theta(f))$. Furthermore, we note that not all neurons in two consecutive layers are connected to each other.
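Example 8.2 can be checked numerically. A minimal sketch of the valid 2D convolution as defined above (cross-correlation convention, no kernel flipping):

```python
import numpy as np

def conv2d_valid(X, K):
    """Valid 2D convolution (cross-correlation) of an m x n image with a
    (2k+1) x (2k+1) kernel, producing an (m-2k) x (n-2k) feature map."""
    m, n = X.shape
    s = K.shape[0]
    out = np.empty((m - s + 1, n - s + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(K * X[i:i + s, j:j + s])  # elementwise product, summed
    return out

X = np.array([[1, 0, 0, 2], [0, 0, 0, 3], [2, 0, 1, 0], [0, 2, 1, 0]])
K = np.array([[0, -1, 1], [0, 1, 0], [1, -1, 0]])
print(conv2d_valid(X, K))  # matches the 2 x 2 output Y in Example 8.2
```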
In fact, only the neurons which correspond to inputs within a $(2k+1) \times (2k+1)$ square connect to the same output neuron. Thus the filter size controls the receptive field of each output. We note, therefore, that some neurons share the same weights. Both of these properties result in far fewer parameters to learn than in a fully connected feedforward architecture.

Padding is needed to extend the size of the image $f$ so that the filtered image has the same dimensions as the original image. Specifically, padding means how to choose $f_{i+p,\,j+q}$ when $(i+p, j+q)$ is outside of $\{1, \dots, m\} \times \{1, \dots, n\}$. The following three choices are often used, if $i+p \notin \{1, \dots, m\}$ or $j+q \notin \{1, \dots, n\}$:

$$ f_{i+p,\,j+q} = \begin{cases} 0, & \text{zero padding},\\ f_{(i+p)\,(\mathrm{mod}\ m),\;(j+q)\,(\mathrm{mod}\ n)}, & \text{periodic padding},\\ f_{|i-1+p|,\;|j-1+q|}, & \text{reflected padding}. \end{cases} $$

Here $d\ (\mathrm{mod}\ m) \in \{1, \dots, m\}$ means the remainder when $d$ is divided by $m$.

The operation in Eq. 8.75 is also called a convolution with stride 1. Informally, we performed the convolution by sliding the image area by a unit increment. A common choice in CNNs is to take $s = 2$. Given an integer $s \ge 1$, a convolution with stride $s$ for $f \in \mathbb{R}^{m \times n}$ is defined as

$$ [K *_s f]_{i,j} = \sum_{p,q=-k}^{k} K_{p,q}\; f_{s(i-1)+1+p,\; s(j-1)+1+q}, \quad i \in \{1, \dots, \lceil \tfrac{m}{s} \rceil\},\ j \in \{1, \dots, \lceil \tfrac{n}{s} \rceil\}. \qquad (8.78) $$

Here $\lceil \tfrac{m}{s} \rceil$ denotes the smallest integer greater than or equal to $\tfrac{m}{s}$.

5.3 Pooling

Data with high spatial structure often results in observations which have similar values within a neighborhood. Such a characteristic leads to redundancy in the data representation and motivates the use of data reduction techniques such as pooling. In addition to a convolution layer, a pooling layer is a map

$$ \bar{R}_{\ell+1} : \mathbb{R}^{m_\ell \times n_\ell} \to \mathbb{R}^{m_{\ell+1} \times n_{\ell+1}}. $$

One popular pooling is the so-called average pooling, $R_{\mathrm{avr}}$, which can be a convolution with stride 2 or bigger using the kernel $K$ in the form

$$ K = \frac{1}{9} \begin{pmatrix} 1 & 1 & 1\\ 1 & 1 & 1\\ 1 & 1 & 1 \end{pmatrix}. $$
Non-linear pooling operators are also used; for example, the $(2k+1) \times (2k+1)$ max-pooling operator with stride $s$ is defined as follows:

$$ [R_{\max}(f)]_{i,j} = \max_{-k \le p,\,q \le k} \left\{ f_{s(i-1)+1+p,\; s(j-1)+1+q} \right\}. $$

5.4 Dilated Convolution

In addition to image processing, CNNs have also been successfully applied to time series. WaveNet, for example, is a CNN developed for audio processing (van den Oord et al. 2016). Time series often display long-term correlations. Moreover, the dependent variable(s) may exhibit non-linear dependence on the lagged predictors. The WaveNet architecture is a non-linear $p$-autoregression of the form

$$ y_t = \sum_{i=1}^{p} \phi_i(x_{t-i}) + \epsilon_t, $$

where the coefficient functions $\phi_i$, $i \in \{1, \dots, p\}$, are data-dependent and optimized through the convolutional network.

To enable the network to learn these long-term, non-linear dependencies, Borovykh et al. (2017) use stacked layers of dilated convolutions. A dilated convolution effectively allows the network to operate on a coarser scale than with a normal convolution. This is similar to pooling or strided convolutions, but here the output has the same size as the input (van den Oord et al. 2016). In a dilated convolution, the filter is applied to every $d$th element in the input vector, allowing the model to efficiently learn connections between far-apart data points. For an architecture with $L$ layers of dilated convolutions, $\ell \in \{1, \dots, L\}$, a dilated convolution outputs a stack of "feature maps" given by

$$ [K^{(\ell)} *_{d^{(\ell)}} f^{(\ell-1)}]_i = \sum_{p=-k}^{k} K^{(\ell)}_p\; f^{(\ell-1)}_{d^{(\ell)}(i-1)+1+p}, \quad i \in \left\{1, \dots, \left\lceil \tfrac{m}{d^{(\ell)}} \right\rceil \right\}, $$

where $d^{(\ell)}$ is the dilation factor, and we can choose the dilations to increase by a factor of two: $d^{(\ell)} = 2^{\ell-1}$. The filters for each layer, $K^{(\ell)}$, are chosen to be of size $1 \times (2k+1) = 1 \times 2$. An example of a three-layer dilated convolutional network is shown in Fig. 8.6. Using dilated convolutions instead of regular ones allows the output $y$ to be influenced by more nodes in the input.
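A minimal sketch of stacked causal dilated convolutions with width-2 filters and dilations doubling per layer; tracing a unit impulse through three layers shows the receptive field growing to $2^L = 8$ input points (the averaging kernel here is illustrative):

```python
import numpy as np

def dilated_conv(f, kernel, d):
    """Causal dilated convolution: output i mixes inputs i, i-d, i-2d, ...,
    truncating terms that fall before the start of the series."""
    out = np.zeros(len(f))
    for i in range(len(f)):
        out[i] = sum(kernel[j] * f[i - j * d]
                     for j in range(len(kernel)) if i - j * d >= 0)
    return out

# Stack L = 3 layers with dilations 1, 2, 4 and filter width 2.
x = np.zeros(16)
x[0] = 1.0                      # unit impulse at t = 0
f = x
for d in (1, 2, 4):
    f = dilated_conv(f, [0.5, 0.5], d)

# The impulse now influences exactly outputs 0..7: a receptive field of 8.
print(np.nonzero(f)[0])  # [0 1 2 3 4 5 6 7]
```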
The input of the network is given by the time series $X$. In each subsequent layer we apply the dilated convolution, followed by a non-linearity, giving the output feature maps $f^{(\ell)}$, $\ell \in \{1, \dots, L\}$. Since we are interested in forecasting the subsequent values of the time series, we train the model so that this output is the forecasted time series $\hat{Y} = \{\hat{y}_t\}_{t=1}^N$.

The receptive field of a neuron was defined as the set of elements in its input that modify the output value of that neuron. Now, we define the receptive field $r$ of the model to be the number of neurons in the input of the first layer, i.e. the time series, that can modify the output of the final layer, i.e. the forecasted time series. This depends on the number of layers $L$ and the filter size $2k+1$, and is given by

$$ r := 2^{L-1}(2k+1). $$

Fig. 8.6 A dilated convolutional neural network with three layers. The receptive field is $r = 8$, i.e. one output value is influenced by eight input neurons. Source: van den Oord et al.

In Fig. 8.6, the receptive field is $r = 8$: one output value is influenced by eight input neurons.

? Multiple Choice Question 3
Which of the following statements are true:

a. CNNs apply a collection of different, but equal-width, filters to the data before using a feedforward network for regression or classification.
b. CNNs are sparse networks, exploiting locality of the data, to reduce the number of weights.
c. A dilated CNN is appropriate for multi-scale time series analysis: it captures a hierarchy of patterns at different resolutions (i.e., dependencies on past lags at different frequencies, e.g. days, weeks, months).
d. The number of layers in a CNN is automatically determined during training.

5.5 Python Notebooks

ML_in_Finance-1D-CNNs.ipynb demonstrates the application of 1D CNNs to predict the next element in a uniform sequence of integers. The CNN uses a sequence length of 50 and 4 kernels, each of width 5.
See Exercise 8.7 for a programming challenge involving applying this 1D CNN for time series to the HFT dataset described in the previous section on RNNs. For completeness, ML_in_Finance-2D-CNNs.ipynb demonstrates the application of a 2D CNN to image data from the MNIST dataset. Such an architecture might be appropriate for learning volatility surfaces but is not demonstrated here.

6 Autoencoders

An autoencoder is a self-supervised deep learner which trains the architecture to approximate the identity function, $Y = F(Y)$, via a bottleneck structure. This means we fit a model $\hat{Y} = F_{W,b}(Y)$ which aims to concentrate, very efficiently, the information required to recreate $Y$. Put differently, an autoencoder is a form of compression that creates a much more cost-effective representation of $Y$. Its output layer has the same number of nodes as the input layer, and the cost function is some measure of the reconstruction error, $Y - \hat{Y}$. Autoencoders are often used for dimensionality reduction and noise reduction.

A simple autoencoder that implements dimensionality reduction is a feedforward autoencoder with at least one layer that has a smaller number of nodes, which functions as a bottleneck. After training the neural network using back-propagation, it is separated into two parts: the layers up to the bottleneck are used as an encoder, and the remaining layers are used as a decoder. In the simplest case, there is only one hidden layer (the bottleneck), and the layers in the network are fully connected.

The compression capacity of autoencoders motivates their application in finance as a non-parametric, non-linear analogue of the heavily used principal component analysis (PCA). It has been well known since the pioneering work of Baldi and Hornik (1989) that autoencoders are closely related to PCA. We follow Plaut (2018) and begin with a brief review of PCA, and then show exactly how linear autoencoders enable PCA.
Example: A Simple Autoencoder

For example, under an $L^2$-loss function, we wish to solve

$$ \min_{W,b} \; \left\| F_{W,b}(Y) - Y \right\|_F^2 $$

subject to a regularization penalty on the weights and offsets. An autoencoder with two layers can be written as a feedforward network:

$$
\begin{eqnarray}
Z^{(1)} &=& f^{(1)}(W^{(1)} Y + b^{(1)}),\\
\hat{Y} &=& f^{(2)}(W^{(2)} Z^{(1)} + b^{(2)}),
\end{eqnarray}
$$

where $Z^{(1)}$ is a low-dimensional representation of $Y$. We find the weights and biases so that the number of rows of $W^{(1)}$ equals the number of columns of $W^{(2)}$, and the number of rows is much smaller than the number of columns, which gives the architecture in Fig. 8.7.

Fig. 8.7 Autoencoders. Source: Van Veen, F. & Leijnen, S. (2019), "The Neural Network Zoo", retrieved from https://www.asimovinstitute.org/neuralnetwork-zoo

6.1 Linear Autoencoders

In the case that no non-linear activation function is used, $x_i = W^{(1)} y_i + b^{(1)}$ and $\hat{y}_i = W^{(2)} x_i + b^{(2)}$. If the cost function is the total squared difference between output and input, then training the autoencoder on the input data matrix $Y$ solves

$$ \min_{W^{(1)}, b^{(1)}, W^{(2)}, b^{(2)}} \left\| Y - \left( W^{(2)}\left(W^{(1)} Y + b^{(1)} 1_N^T\right) + b^{(2)} 1_N^T \right) \right\|_F^2. \qquad (8.85) $$

If we set the partial derivative with respect to $b^{(2)}$ to zero and insert the solution into (8.85), then the problem becomes

$$ \min_{W^{(1)}, W^{(2)}} \left\| Y_0 - W^{(2)} W^{(1)} Y_0 \right\|_F^2, $$

where $Y_0$ denotes the centered data matrix. Thus, for any $b^{(1)}$, the optimal $b^{(2)}$ is such that the problem becomes independent of $b^{(1)}$ and of $\bar{y}$. Therefore, we may focus only on the weights $W^{(1)}, W^{(2)}$. Linear autoencoders give orthogonal projections, even though the columns of the weight matrices are not orthogonal.
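The projection property can be illustrated numerically: setting the decoder's columns to the first $m$ loading vectors and the encoder to its pseudoinverse makes the reconstruction exactly the orthogonal projection onto the principal subspace. This is a sketch with synthetic data, not the book's notebook code:

```python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.normal(size=(5, 200))            # n = 5 features, N = 200 samples
Y0 = Y - Y.mean(axis=1, keepdims=True)   # center the data

m = 2
U, S, Vt = np.linalg.svd(Y0, full_matrices=False)
W2 = U[:, :m]                            # decoder: first m loading vectors
W1 = np.linalg.pinv(W2)                  # encoder: left pseudoinverse of W2

# The reconstruction W2 W1 Y0 equals the orthogonal projection of Y0
# onto the principal subspace spanned by the first m loading vectors.
recon = W2 @ W1 @ Y0
proj = U[:, :m] @ U[:, :m].T @ Y0
assert np.allclose(recon, proj)
```

Note that any $W^{(2)}$ with the same column space gives the same projection, which is why the trained weights themselves need not be orthonormal.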
To see this, set the gradients to zero; then $W^{(1)}$ is the left Moore–Penrose pseudoinverse of $W^{(2)}$ (and $W^{(2)}$ is the right pseudoinverse of $W^{(1)}$):

$$ W^{(1)} = (W^{(2)})^{\dagger} = \left( (W^{(2)})^T W^{(2)} \right)^{-1} (W^{(2)})^T. $$

The minimization with respect to the single remaining matrix is

$$ \min_{W^{(2)} \in \mathbb{R}^{n \times m}} \left\| Y_0 - W^{(2)} (W^{(2)})^{\dagger}\, Y_0 \right\|_F^2. \qquad (8.86) $$

The matrix $W^{(2)} (W^{(2)})^{\dagger} = W^{(2)} \left( (W^{(2)})^T W^{(2)} \right)^{-1} (W^{(2)})^T$ is the orthogonal projection operator onto the column space of $W^{(2)}$ when its columns are not necessarily orthonormal. This problem is very similar to (6.52), but without the orthonormality constraint. It can be shown that $W^{(2)}$ is a minimizer of Eq. 8.86 if and only if its column space is spanned by the first $m$ loading vectors of $Y$.

The linear autoencoder is said to apply PCA to the input data in the sense that its output is a projection of the data onto the low-dimensional principal subspace. However, unlike actual PCA, the coordinates of the output of the bottleneck are correlated and are not sorted in descending order of variance. The solutions for reduction to different dimensions are not nested: when reducing the data from dimension $n$ to dimension $m_1$, the first $m_2$ vectors ($m_2 < m_1$) are not an optimal solution to reduction from dimension $n$ to $m_2$, which therefore requires training an entirely new autoencoder.

6.2 Equivalence of Linear Autoencoders and PCA

Theorem. The first $m$ loading vectors of $Y$ are the first $m$ left singular vectors of the matrix $W^{(2)}$ which minimizes (8.86).

A sketch of the proof now follows. We train the linear autoencoder on the original dataset $Y$ and then compute the first $m$ left singular vectors of $W^{(2)} \in \mathbb{R}^{n \times m}$, where typically $m < n$.

Here $a_t > 0$ is the trading volume of the sold stock, $Z_t \sim N(0,1)$ is a standard Gaussian noise, and $\sigma$ is the stock volatility. The problem faced by the broker is to find an optimal partitioning of the sell order into smaller blocks of shares that are to be executed sequentially, for example, each minute over the $T = 10$ min time period.
This can also be formulated as a Markov Decision Process (MDP) problem (see Sect. 3 for further details) with a state variable $X_t$ given by the number of shares outstanding (plus potentially other relevant variables, such as limit order book data). For a one-step reward $r_t$ of selling $a_t$ shares at step $t$, we could consider a risk-adjusted payoff given by the total expected payoff $\mu := a_t S_t$, penalized by the variance of the remaining inventory value at the next step $t+1$:

$$ r_t = \mu - \lambda\, \mathrm{Var}\left[ S_{t+1} X_{t+1} \right]. $$

We will return to the optimal stock execution problem later in this chapter, after we introduce the relevant mathematical constructs.

2.2 Value and Policy Functions

In addition to the action chosen by the agent, its local reward depends on the state $S_t$ of the environment. Different states of the environment can have different degrees of attractiveness, or value, for the agent. Certain states may not offer any good options for receiving high rewards. Furthermore, states of the environment generally change over a multi-step sequence of actions taken by the agent, and their future may be partly driven by their present state (and possibly the actions of the agent). Therefore, reinforcement learning uses the concept of a value function as a numerical measure of the attractiveness of a state $S_t$ for the agent, with a view on the multi-step character of the optimization problem faced by the agent. To relate it to the goal of reinforcement learning, namely maximizing the cumulative reward over a period of time, a value function can be specified as the mean (expected) cumulative reward that can be obtained by starting from this state, over the whole period (but, as we will see below, there are other choices too). However, such a quantity would be under-specified as it stands, because rewards depend on the actions $a_t$, in addition to their dependence on the state $S_t$.
If we want to define the value function as the current (time-$t$) expected value of the cumulative reward obtained over $T$ steps, we must know beforehand how the agent will act in any given possible state of the environment. This rule is specified by a policy function $\pi_t(S_t)$ for how the agent should act at time $t$ given the state $S_t$ of the environment. A policy function may be a deterministic function of the state $S_t$, or alternatively it can be a probability distribution over a range of possible actions, specified by the current value of $S_t$. A value function $V^\pi(S_t)$ is therefore a function of the current state $S_t$ and a functional of the policy $\pi$.

2.3 Observable Versus Partially Observable Environments

Finally, to complete the list of main concepts of reinforcement learning, we have to specify the notion of the state $S_t$, as well as define the law of its evolution. The process of an agent perceiving the environment and taking actions extends over a certain period of time. During this period, the state of the environment changes. These changes might be determined by the previous history of the environment and, in addition, can be partially driven by some random factors, as well as by the agent's own actions. An immediate problem to be addressed is therefore how we model the evolution of the environment. We start by describing the autonomous evolution of the environment without any impact from the agent, and then generalize below to the case when such impact, i.e. a feedback loop, is present.

In its most general form, the joint probability $p_{0:T} = p(s_0, \dots, s_{T-1})$ of a particular path $(S_0 = s_0, \dots, S_{T-1} = s_{T-1})$ can be written as follows:

$$ p(s_0, s_1, \dots, s_{T-1}) = \prod_{i=0}^{T-1} p(s_i \,|\, s_{0:i-1}). $$

Note that this expression does not make any assumptions about the true data-generating process; only the composition law of joint probabilities is used here. Unfortunately, this expression is too general to be useful in practice.
This is because, for most problems of practical interest, the number of time steps encountered is in the tens or hundreds. Modeling path probabilities for sequences of states relying only on this general expression would lead to an exponential growth in the number of model parameters. We have to make some further assumptions in order to have a practically useful sequence model for the evolution of the environment. The simplest, and in many practical cases a reasonable, "first-order approximation" is to additionally assume Markovian dynamics, where one assumes that the conditional probabilities $p(s_i | s_{0:i-1})$ depend only on the $K$ most recent values, rather than on the whole history:

$$ p(s_t | s_{0:t-1}) = p(s_t | s_{t-K:t-1}). $$

The most common case is $K = 1$, where probabilities of states at time $t$ depend only on the previously observed value of the state. Following the common tradition in the literature, unless specifically mentioned, in what follows we refer to $K = 1$ Markov processes simply as "Markov processes." This is also the basic setting for more general Markov processes with $K > 1$: such cases can still be viewed as Markov processes with $K = 1$, but with an extended definition of the time-$t$ state, $S_t \to \hat{S}_t = (S_t, S_{t-1}, \ldots, S_{t-K})$. If we assume Markov dynamics for the evolution of the environment, this results in tractable formulations for sequence modeling. But in many practical cases, the dynamics of a system may not be as simple as a Markov process with a sufficiently low value of $K$, say $1 < K < 10$. For example, financial markets often have memories longer than 10 time steps. For such systems, approximating essentially non-Markov dynamics by Markov dynamics with $K \sim 10$ may not be satisfactory. A better and still tractable way to model a non-Markov environment is to use hidden variables $z_t$.
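Before turning to hidden variables, the state-extension trick above can be sketched in a few lines. Here the extended state stacks the $K$ most recent observations, newest first; the function name and windowing convention are our own, not the book's:

```python
def extended_states(path, K):
    """Re-encode a state path as extended states that stack the K most
    recent values, so that an order-K chain becomes order-1 in hat_s_t."""
    # hat_s_t = (s_t, s_{t-1}, ..., s_{t-K+1}); defined from t = K-1 onward
    return [tuple(path[t - K + 1:t + 1][::-1]) for t in range(K - 1, len(path))]
```

For example, `extended_states([1, 2, 3, 4], 2)` pairs each state with its predecessor, and `K = 1` recovers the original path as one-element tuples.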
The dynamics is assumed to be jointly Markov in the pair $(s_t, z_t)$, so that path probabilities factorize into the product of single-step probabilities:

$$ p(s_{0:T-1}, z_{0:T-1}) = p(s_0, z_0) \prod_{t=1}^{T-1} p(z_t | z_{t-1}) \, p(s_t | z_t). $$

Such processes with joint Markov dynamics in a pair $(s_t, z_t)$ of observable and unobservable components of a state are called hidden Markov models (HMM). Note that the dynamics of the marginal $s_t$ alone in an HMM would in general be non-Markovian. Remarkably, by introducing hidden variables $z_t$, we may construct models that both produce rich dynamics of observables and have a substantially lower number of parameters than we would need with order-$K$ Markov processes. This means that such models might need considerably less data for training, and may behave better out-of-sample than Markov models. At the same time, models that are Markov in the pair $(s_t, z_t)$ can be implemented in computationally efficient ways. Multiple examples in applications to speech and text recognition, robotics, and finance have demonstrated that HMMs are able to produce highly complex and sufficiently realistic sequences and time series. The key questions in any type of HMM are how to model the hidden state $z_t$: Should it have a discrete or a continuous distribution? How many states are there? How should we specify the dynamics of hidden states? While these are all practically important questions, here we want to focus on the conceptual aspect of modeling. First, the introduction of hidden variables $z_t$ has a strong intuitive appeal. Traditionally, many important factors, e.g. political risk, are left outside of financial models, and are therefore "unknown" to the model. Treating such unknown risks as de-correlated noise at each time step may be insufficient, because such hidden risk factors usually exhibit strong autocorrelations.
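A minimal sampling sketch of a two-hidden-state HMM may make the factorization concrete. All matrices below are made-up illustrations, not estimates from any data:

```python
import numpy as np

# Hidden z_t in {0, 1} with transition matrix A = p(z_t | z_{t-1});
# observable s_t in {0, 1, 2} emitted with probabilities B[z_t] = p(s_t | z_t).
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])

def sample_hmm(T, rng):
    """Draw one observable path s_0, ..., s_{T-1} from the joint factorization
    p(s_0, z_0) * prod_t p(z_t | z_{t-1}) p(s_t | z_t), with uniform p(z_0)."""
    z = rng.choice(2)
    states = []
    for _ in range(T):
        states.append(rng.choice(3, p=B[z]))  # emit s_t given z_t
        z = rng.choice(2, p=A[z])             # advance the hidden chain
    return states

path = sample_hmm(100, np.random.default_rng(42))
```

Marginally, the observable path mixes the two emission distributions according to the slowly switching hidden state, which is exactly the non-Markovian behavior in $s_t$ described above.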
The presence of such autocorrelated hidden risk factors provides a second motivation to incorporate hidden processes in the modeling of financial markets: not only as a conventional tool to account for complex time dependencies of observable quantities $s_t$, but also in their own right, as a way to account for risk factors that we do not directly include in the observable state of the model. HMMs, with their Markov dynamics in the pair $(s_t, z_t)$, provide a rather flexible family of non-Markov dynamics in the observables $s_t$. Even richer types of non-Markov dynamics can be obtained with recurrent extensions (such as RNN and LSTM neural networks), where hidden state probabilities depend on a long history of previous hidden states rather than just on the single last hidden state. While this implies that hidden variables might be very useful for financial machine learning, modeling the decision-making of an agent in such a partially observable environment is more difficult than when the environment is fully observable. Therefore, in the rest of this chapter we will deal with models of decision-making within fully observable systems whose dynamics are assumed to be Markovian. The agent's actions are added to the framework of Markov modeling, producing a Markov Decision Process (MDP) based model. We will consider this topic next.

Example 9.2 Portfolio Trading and Reinforcement Learning

The problem of multi-period portfolio management can be described as a problem of stochastic optimal control. Consider a portfolio of stocks and a single Treasury bond, where stocks are considered risky investments with random returns, while the Treasury bond is a risk-free investment with a fixed return determined by a risk-free discount rate. Let $p_n(t)$ with $n = 1, \ldots, N$ be the investments at time $t$ in $N$ different stocks, and $b_t$ the investment in the bond. Let $X_t$ be a vector of all other relevant portfolio-specific and market-wide dynamic variables that may impact an investment decision at time $t$.
The vector $X_t$ may, e.g., include the market prices of all stocks in the portfolio, plus market indices such as the S&P 500 and various sector indices, macroeconomic factors such as the inflation rate, etc. The total state vector for such a system is then $s_t = (p_t, b_t, X_t)$. The action $a_t$ would be an $(N+1)$-dimensional vector of capital allocations to all stocks and the bond at time $t$. If all components of the vector $X_t$ are observable, we can model its dynamics as Markov. Otherwise, if some components of the vector $X_t$ are non-observable, we can use an HMM formulation for the dynamics. The dynamics of some components of $X_t$ can be partially affected by the actions of the trader agent; for example, market prices of stocks can be moved by large trades via market impact mechanisms. A model of the interaction of the trader agent and its environment (the "market") can therefore include feedback loop effects. The objective of multi-step portfolio optimization is to maximize the expected cumulative reward. We can, e.g., consider one-step random rewards given by the one-step portfolio return $r_\$(s_t, a_t)$ at time step $t$, penalized by the one-step variance of the return:

$$ R(s_t, a_t) = r_\$(s_t, a_t) - \lambda \, \mathrm{Var}\left[ r_\$(s_t, a_t) \right], $$

where $\lambda$ is a risk-aversion rate. Taking the expectation of this random reward, we recover the classical Markowitz quadratic reward (utility) function for portfolio optimization. Therefore, reinforcement learning with random rewards $R(s_t, a_t)$ extends Markowitz single-period portfolio optimization to a sample-based, multi-period setting.

3 Markov Decision Processes

Markov Decision Process models extend Markov models by adding new degrees of freedom describing controls. In reinforcement learning, control variables describe agents' actions. Controls are decided upon by the agent and, via the presence of a feedback loop, can modify the future evolution of the environment.
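Returning to Example 9.2, the risk-penalized Markowitz-style reward can be estimated from sampled one-step portfolio returns. The Gaussian return scenarios and the risk-aversion value below are purely hypothetical:

```python
import numpy as np

def risk_adjusted_reward(returns, lam):
    """Sample-based analogue of R(s_t, a_t) = r_$ - lam * Var[r_$]:
    average realized one-step portfolio return minus a variance penalty."""
    returns = np.asarray(returns)
    return returns.mean() - lam * returns.var()

# Hypothetical one-step returns over 10,000 sampled market scenarios.
rng = np.random.default_rng(1)
r_samples = rng.normal(0.05, 0.1, size=10_000)
R = risk_adjusted_reward(r_samples, lam=0.5)
```

In expectation this reduces to mean return minus $\lambda$ times variance, which is the Markowitz mean-variance objective for a single period.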
When we embed the idea of controls with a feedback loop into the framework of Markov processes, we obtain Markov Decision Process (MDP) models. The MDP framework provides a stylized description of goal-directed learning from interaction. It describes the agent–environment interaction as message-passing of three signals: a signal of actions by the agent, a signal of the state of the environment, and a signal defining the agent's reward, i.e. the goal. In mathematical terms, a Markov Decision Process is defined by a set of discrete time steps $t_0, \ldots, t_n$ and a tuple $\langle S, A(s), p(s'|s,a), R, \gamma \rangle$ with the following elements. First, we have a set of states $S$, so that each observed state $S_t \in S$. The space $S$ can be either discrete or continuous. If $S$ is finite, we have a finite MDP; otherwise we have an MDP with a continuous state space. Second, a set of actions $A(s)$ defines the possible actions $A_t \in A(s)$ that can be taken in a state $S_t = s$. Again, the set $A(s)$ can be either discrete or continuous. In the former case, we have an MDP model with a discrete action space, and in the latter case we obtain a continuous-action MDP model. Next, an MDP is specified by transition probabilities $p(s'|s,a) = p(S_t = s' | S_{t-1} = s, A_{t-1} = a)$ of a next state $S_t$ given a previous state $S_{t-1}$ and an action $A_{t-1}$ taken in that state. Slightly more generally, we may specify a joint probability of a next state $s'$ and reward $r \in R$, where $R$ is the set of all possible rewards:

$$ p(s', r | s, a) = \Pr\left[ S_t = s', R_t = r \,|\, S_{t-1} = s, A_{t-1} = a \right], \qquad \sum_{s' \in S, \, r \in R} p(s', r | s, a) = 1, \quad \forall s \in S, \ a \in A(s). $$

This joint probability specifies the state transition probabilities

$$ p(s'|s,a) = \Pr\left[ S_t = s' | S_{t-1} = s, A_{t-1} = a \right] = \sum_{r \in R} p(s', r | s, a), $$

as well as the expected reward function $r : S \times A \times S \to \mathbb{R}$ that gives the expected value of a random reward $R_t$ received in the state $S_{t-1} = s$ upon taking action $A_{t-1} = a$:

$$ r(s, a, s') = \mathbb{E}\left[ R_t | S_{t-1} = s, A_{t-1} = a, S_t = s' \right] = \frac{\sum_{r \in R} r \, p\left( S_t = s', R_t = r | S_{t-1} = s, A_{t-1} = a \right)}{p\left( S_t = s' | S_{t-1} = s, A_{t-1} = a \right)}. $$

Finally, an MDP needs to specify a discount factor $\gamma$, which is a number between 0 and 1. We need the discount factor $\gamma$ to compute the total cumulative reward, given by the sum of all single-step rewards, where each subsequent term gets an extra power of $\gamma$:

$$ R(s_0, a_0) + \gamma R(s_1, a_1) + \gamma^2 R(s_2, a_2) + \ldots \qquad (9.9) $$

This means that as long as $\gamma < 1$, receiving a larger reward now and a smaller reward later is preferred to receiving a smaller reward now and a larger reward later. The discount factor $\gamma$ controls by how much the first scenario is preferable to the second one. This means that the discount factor for an MDP plays a similar role to a discount factor in finance, as it reflects the time value of rewards. Therefore, in financial applications the discount factor $\gamma$ can be identified with a continuously compounded interest rate. Note that for infinite-horizon MDPs, a value $\gamma < 1$ is required in order to have a finite total reward.²

Fig. 9.1 The causality diagram for a Markov Decision Process

A diagram describing a Markov Decision Process is shown in Fig. 9.1. The blue circles show the evolving state of the system $S_t$ at discrete time steps. These states are connected by arrows representing causality relations. Only one arrow enters each blue circle from the previous blue circle, emphasizing the Markov property of the dynamics: each next state depends only on the previous state, and not on the whole history of previous states. The green circles denote actions $A_t$ taken by the agent. The upward-pointing arrows denote rewards $R_t$ received by the agent upon taking actions $A_t$. The goal in a Markov Decision Process problem, or in reinforcement learning, is to maximize the expected total cumulative reward (9.9).
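The effect of the discount factor on the cumulative reward (9.9) can be checked directly. A minimal sketch (the function name and reward values are our own illustration):

```python
def discounted_return(rewards, gamma):
    """Total cumulative reward R_0 + gamma*R_1 + gamma^2*R_2 + ..."""
    return sum(gamma**i * r for i, r in enumerate(rewards))

# With gamma < 1, a larger reward now beats the same rewards in reverse order.
early = discounted_return([10.0, 1.0], gamma=0.9)   # 10 + 0.9 * 1  = 10.9
late  = discounted_return([1.0, 10.0], gamma=0.9)   # 1  + 0.9 * 10 = 10.0
```

The gap between the two orderings shrinks as $\gamma \to 1$ and widens as $\gamma \to 0$, which is exactly the time-value-of-rewards interpretation above.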
Maximizing this reward is achieved by a proper choice of a decision policy that specifies how the agent should act in each possible state.

² An alternative formulation for infinite-horizon MDPs is to consider maximization of an average reward rather than the total reward. Such an approach allows one to proceed without introducing a discount factor. We will not pursue reinforcement learning with average rewards in this book.

3.1 Decision Policies

Let us now consider how we can actually achieve the goal of maximizing the expected total rewards for Markov Decision Processes. Recall that the goal of reinforcement learning is to maximize the expected total reward from all actions performed in the future. Because this problem has to be solved now, while actions will be performed in the future, to solve it we have to define a "policy". A policy $\pi(a|s)$ is a function that takes a current state $S_t = s$ and translates it into an action $A_t = a$. In other words, this function maps the state space onto the action space of the Markov Decision Process. If a system is currently in a state described by a vector $S_t$, then the next action $A_t$ is given by a policy function $\pi(S_t)$. If the policy function is a conventional function of its argument $S_t$, then the output $A_t$ will be a single number. For example, if the policy function is $\pi(S_t) = 0.5 S_t$, then for each possible value of $S_t$ we will have one action to take. Such a specification of a policy is called a deterministic policy. Another way to analyze policies in MDPs is to consider stochastic policies. In this case, a policy $\pi(a|s)$ describes a probability distribution rather than a function. For example, suppose that there are two actions $a_0$ and $a_1$; then the stochastic policy might be given by the logistic function

$$ \pi_0 := \pi(a = a_0 | s) = \sigma(\theta^T s) = \frac{1}{1 + \exp(-\theta^T s)}, \qquad \pi_1 := \pi(a = a_1 | s) = 1 - \pi_0. $$

Now we take a look at the differences between these two specifications in slightly more detail.
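The logistic stochastic policy just given can be sketched and sampled directly; the parameter vector $\theta$ below is a made-up illustration:

```python
import numpy as np

def logistic_policy(theta, s):
    """Two-action stochastic policy: pi_0 = sigma(theta^T s), pi_1 = 1 - pi_0."""
    p0 = 1.0 / (1.0 + np.exp(-np.dot(theta, s)))
    return np.array([p0, 1.0 - p0])

def sample_action(theta, s, rng):
    """Draw action 0 or 1 from the policy distribution at state s."""
    return rng.choice(2, p=logistic_policy(theta, s))

theta = np.array([1.0, -0.5])          # hypothetical policy parameters
s = np.array([0.0, 0.0])               # at theta^T s = 0, pi = [0.5, 0.5]
pi = logistic_policy(theta, s)
a = sample_action(theta, s, np.random.default_rng(0))
```

Repeated visits to the same state can produce different sampled actions, which is the defining feature of a stochastic policy discussed next.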
First, consider deterministic policies. In this case, the action $A_t$ to take is given by the value of a deterministic policy function $\pi(S_t)$ applied to the current state $S_t$. If the RL agent finds itself in the same state $S_t$ of the system more than once, each time it will act in exactly the same way. How it acts depends only on the current state $S_t$, and not on any previous history. This assumption is made to ensure consistency with the Markov property of the system dynamics, where the probabilities of observing specific future states depend only on the current state, and not on any previous states. It can actually be proven that an optimal deterministic policy $\pi$ always exists for a Markov Decision Process, so our task is simply to identify it among all possible deterministic policies. As long as that is the case, it might look like deterministic policies are all we ever need to solve Markov Decision Processes. But it turns out that the second class of policies, namely stochastic policies, is also often useful for reinforcement learning. For stochastic policies, a policy $\pi(a|s)$ becomes a probability distribution over possible actions $A_t = a$. This distribution may depend on the current value $S_t = s$ as a parameter. So, if we use a stochastic policy instead of a deterministic policy, when an agent visits the same state again, it might take a different action from the action it took the last time in this state. Note that the class of stochastic policies is wider than the class of deterministic policies, as the latter can always be thought of as limits of stochastic policies where the distribution collapses to a Dirac delta-function. For example, if a stochastic policy is given by a Gaussian distribution with variance $\sigma^2$, then a deterministic Dirac-like policy can be obtained in this setting by taking the limit $\sigma^2 \to 0$. We might consider stochastic policies as a more general specification, but why would we want to consider such stochastic policies?
It turns out that in certain cases, introducing stochastic policies might be an unnecessary complication. In particular, if we know the transition probabilities of the MDP, then we can consider only deterministic policies in order to find an optimal deterministic policy. For this, we just need to solve the Bellman equation that we introduce in the next section. But if we do not know the transition probabilities, we must either estimate them from data and use them to solve the Bellman equation, or we have to rely on samples, following the reinforcement learning approach. In this case, randomization of possible actions following some stochastic policy may provide some margin for exploration, enabling a better estimation of the model. A second case when a stochastic policy may be useful is when, instead of a fully observed (Markov) process, we use a partially observed dynamic model of an environment, which can be implemented as, e.g., an HMM process. Introducing additional control variables in this model produces a Partially Observable Markov Decision Process, or POMDP for short. Stochastic policies may be optimal for POMDP processes, unlike in the fully observable MDP case, where the best policy can always be found within the class of deterministic policies. While many problems of practical interest in finance would arguably lend themselves to a POMDP formulation, we leave such formulations outside the scope of this book, where we rather focus on reinforcement learning in fully observed MDP settings.

3.2 Value Functions and Bellman Equations

As mentioned above, a common auxiliary task of learning by acting is to evaluate a given state of the environment. Because the value of a state is realized through a sequence of actions taken by the agent, we want to relate it to the local rewards obtained by the agent.
One way to define the value function $V^\pi(s)$ for policy $\pi$ is to specify it as the expected total reward that can be obtained starting from state $S_t = s$ and following policy $\pi$:

$$ V_t^\pi(s) = \mathbb{E}_t^\pi\left[ \left. \sum_{i=0}^{T-t-1} \gamma^i R\left( S_{t+i}, a_{t+i}, S_{t+i+1} \right) \, \right| \, S_t = s \right], \qquad (9.10) $$

where $R(S_{t+i}, a_{t+i}, S_{t+i+1})$ is a random future reward at time $t+i$, $T$ is the planning time horizon (an infinite-horizon case corresponds to $T = \infty$), and $\mathbb{E}_t^\pi[\, \cdot \, | S_t = s]$ means time-$t$ averaging (conditional expectation) over all future states of the world, given that future actions are selected according to policy $\pi$. The value function (9.10) can be used in the two main settings of reinforcement learning. For episodic tasks, the learning problem naturally breaks into episodes, which can be either of fixed or variable length $T < \infty$. For such tasks, using a discount factor $\gamma$ for future rewards is not strictly necessary, and we could set $\gamma = 1$ if so desired. On the other hand, for continuing tasks, the agent–environment interaction does not naturally break into episodes. Such tasks correspond to the case $T = \infty$ in (9.10), which makes it necessary to keep $\gamma < 1$ to ensure convergence of the infinite series of rewards. The state-value function $V_t^\pi(s)$ is therefore given by a conditional expectation of the total cumulative reward, also known in reinforcement learning as the random return $G_t$:

$$ G_t = \sum_{i=0}^{T-t-1} \gamma^i R\left( S_{t+i}, a_{t+i}, S_{t+i+1} \right). $$

For each possible state $S_t = s$, the state-value function $V^\pi(s)$ gives us the "value" of this state for the task of maximizing the total reward using policy $\pi$. The state-value function $V_t^\pi(s)$ is thus a function of $s$ and a functional (i.e., a function of a function) of the decision policy $\pi$. For time-homogeneous problems, the time index can be omitted: $V_t^\pi(s) \to V^\pi(s)$. Similarly, we can specify the value of being in state $s$ and taking action $a$ as the first action, while following policy $\pi$ for all subsequent actions.
This defines the action-value function $Q^\pi(s, a)$:

$$ Q_t^\pi(s, a) = \mathbb{E}_t^\pi\left[ \left. \sum_{i=0}^{T-t-1} \gamma^i R\left( S_{t+i}, a_{t+i}, S_{t+i+1} \right) \, \right| \, S_t = s, \, a_t = a \right]. \qquad (9.12) $$

Note that in comparison to the state-value function $V_t^\pi(s)$, the action-value function $Q^\pi(s, a)$ depends on an additional argument $a$, which is the action taken at time step $t$. While we may consider arbitrary inputs $a \in A$ to the action-value function $Q^\pi(s, a)$, it is of particular interest to consider the special case when the first action $a$ is also drawn from the policy $\pi(a|s)$. In this case, we obtain a relation between the state-value function and the action-value function:

$$ V_t^\pi(s) = \mathbb{E}_\pi\left[ Q_t^\pi(s, a) \right] = \sum_{a \in A} \pi(a|s) \, Q_t^\pi(s, a). \qquad (9.13) $$

(Note that while we assume a finite MDP formulation in this section, the same formulas apply to continuous-state MDP models provided sums are replaced by integrals.) Both the state-value function $V_t^\pi(s)$ and the action-value function $Q_t^\pi(s, a)$ are given by conditional expectations of the return $G_t$ from states. Therefore, they could be estimated directly from data if we have observations of total returns $G_t$ obtained in different states. The simplest case arises for a finite MDP model. Let us assume that we have $K$ discrete states, and for each state we have data recording observed sequences of states, actions, and rewards obtained while starting in each one of the $K$ states. Then value functions in different states can be estimated empirically using these observed (or simulated) sequences. Such methods are referred to in the reinforcement learning literature as Monte Carlo methods. Another useful way to analyze value functions is to rely on their particular structure as expectations of cumulative future rewards. As we will see next, this can be used to derive certain equations for the state-value function (9.10) and the action-value function (9.12). Consider first the state-value function (9.10).
We can separate the first term from the sum, and write the latter as this first term plus a similar sum starting with the second term:

$$ V_t^\pi(s) = \mathbb{E}_t^\pi\left[ R\left( s, a_t, S_{t+1} \right) \right] + \gamma \, \mathbb{E}_t^\pi\left[ \left. \sum_{i=1}^{T-t-1} \gamma^{i-1} R\left( S_{t+i}, a_{t+i}, S_{t+i+1} \right) \, \right| \, S_t = s \right]. $$

Note that the first term on the right-hand side of this equation is just the expected reward from the current step. The second term is the same as the expectation of the value function at the next time step $t+1$ in some other state $s'$, multiplied by the discount factor $\gamma$. This gives

$$ V_t^\pi(s) = \mathbb{E}_t^\pi\left[ R_t\left( s, a, s' \right) \right] + \gamma \, \mathbb{E}_t^\pi\left[ V_{t+1}^\pi(s') \right]. \qquad (9.14) $$

Equation (9.14) thus gives an expression for the value of a state as the sum of the immediate expected reward and the discounted expectation of the next state's value. Note further that the conditional time-$t$ expectations $\mathbb{E}_t^\pi[\cdot]$ in this expression involve conditioning on the current state $S_t = s$. Equation (9.14) presents a simple recursive scheme that enables computation of the value function at time $t$ in terms of its future values at time $t+1$, going backward in time starting with $t = T - 1$. This relation is known as the "Bellman equation for the value function". Later on, we will introduce a few more equations that are also called Bellman equations. It was proposed in the 1950s by Richard Bellman in the context of his pioneering work on dynamic programming. Note that this is a linear equation, because the second expectation term is a linear functional of $V_{t+1}^\pi(s')$. In particular, for a finite MDP with $N$ distinct states, the Bellman equation produces a set of $N$ linear equations defining the value function at time $t$ for each state in terms of expected immediate rewards, transition probabilities to states at time $t+1$, and the next-period value functions $V_{t+1}^\pi(s')$ that enter the expectation on its right-hand side. If the transition probabilities are known, we can easily solve this linear system using methods of linear algebra.
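For a finite MDP with known transition probabilities, the backward recursion (9.14) can be sketched in a few lines. The array layout `P[a, s, s']`, the stationary policy assumption, and the toy chain are our own conventions for illustration:

```python
import numpy as np

def evaluate_policy_backward(P, Rsa, pi, gamma, T):
    """Finite-horizon policy evaluation by the backward Bellman recursion.

    P[a, s, s']  : transition probabilities p(s'|s, a)
    Rsa[a, s, s']: rewards r(s, a, s')
    pi[s, a]     : a stationary stochastic policy
    Returns V[t, s] for t = 0..T-1, with terminal value V_T = 0.
    """
    nA, nS, _ = P.shape
    V = np.zeros((T + 1, nS))
    for t in reversed(range(T)):
        # Q_t(s, a) = sum_s' p(s'|s, a) * (r(s, a, s') + gamma * V_{t+1}(s'))
        Q = np.einsum('asn,asn->sa', P, Rsa + gamma * V[t + 1][None, None, :])
        V[t] = (pi * Q).sum(axis=1)  # average over actions drawn from pi
    return V[:-1]

# Toy chain: one action, two states, every transition pays 1 and lands in state 1.
P = np.array([[[0.0, 1.0], [0.0, 1.0]]])
Rsa = np.ones((1, 2, 2))
pi = np.ones((2, 1))
V = evaluate_policy_backward(P, Rsa, pi, gamma=0.5, T=2)
```

With two steps to go, each state is worth $1 + 0.5 \cdot 1 = 1.5$; with one step to go, it is worth $1$, which the recursion reproduces.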
Finally, note that we could repeat all the steps leading to the Bellman equation (9.14), but starting with the action-value function rather than the state-value function. This produces the Bellman equation for the action-value function:

$$ Q_t^\pi(s, a) = \mathbb{E}_t^\pi\left[ R_t\left( s, a, s' \right) \right] + \gamma \, \mathbb{E}_t^\pi\left[ V_{t+1}^\pi(s') \right]. \qquad (9.15) $$

Similarly to (9.14), this is a linear equation that can be solved backward in time using linear algebra for a finite MDP model.

> Learning with a Finite vs Infinite Horizon
While an MDP problem is formulated in the same way for both a finite $T < \infty$ and an infinite $T \to \infty$ time horizon, computationally the two cases produce different algorithms. For infinite-horizon MDP problems with rewards that do not explicitly depend on time, the state- and action-value functions should also not depend explicitly on time: $V_t(s_t) \to V(s_t)$, and similarly $Q_t(s_t, a_t) \to Q(s_t, a_t)$, which expresses the time-invariance of such problems. For time-invariant problems, the Bellman equation (9.15) becomes a fixed-point equation for the same function $Q(s, a)$, rather than a recursive relation between different functions $Q_t(s_t, a_t)$ and $Q_{t+1}(s_{t+1}, a_{t+1})$. While many existing textbooks and other resources often focus on presenting and pursuing algorithms for MDPs and RL in a time-independent setting, including analyses of convergence, a proper question to ask is which of the two types of MDP problem is more relevant for financial applications. We tend to suggest that finite-horizon MDP problems are more common in finance, as most goals in quantitative finance focus on performance over some pre-specified time horizon such as one day, one month, one quarter, etc. For example, an annual performance time horizon of $T = 1Y$ for a mutual fund or an ETF sets up a natural planning horizon in the MDP formulation. On the other hand, a given fixed time horizon may consist of a large number of smaller time steps.
For example, daily fund performance can result from multiple trades executed on a minute scale during the day. Alternatively, multi-period optimization of a long-term retirement portfolio with a time horizon of 30 years can be viewed at the scale of monthly or quarterly steps. For such cases, it may be reasonable to approximate an initially time-dependent problem with a fixed but large number of time steps by a time-independent infinite-horizon problem. The latter is obtained as an approximation of the original problem when the number of time steps goes to infinity. Therefore, an infinite-horizon MDP formulation can serve as a useful approximation for problems involving long sequences.

3.3 Optimal Policy and Bellman Optimality

Next, we introduce the optimal value function $V_t^\star$. The optimal value function for a state is simply the highest value function for this state among all possible policies. The optimal value function is thus attained for some optimal policy, which we will call $\pi_\star$. The important point here is that an optimal policy $\pi_\star$ is optimal for all states of the system. This means that $V_t^\star$ should be larger than or equal to $V_t^\pi$ for any other policy $\pi$ and for any state $s$, i.e. $\pi_\star \geq \pi$ if $V_t^\star(s) := V_t^{\pi_\star}(s) \geq V_t^\pi(s), \ \forall s \in S$. We may therefore express $V_t^\star$ as follows:

$$ V_t^\star(s) := V_t^{\pi_\star}(s) = \max_\pi V_t^\pi(s), \quad \forall s \in S. $$

Equivalently, the optimal policy $\pi_\star$ can be determined in terms of the action-value function:

$$ Q_t^\star(s, a) := Q_t^{\pi_\star}(s, a) = \max_\pi Q_t^\pi(s, a), \quad \forall s \in S. $$

This function gives the expected reward from taking action $a$ in state $s$ and following an optimal policy thereafter. The optimal action-value function can therefore be represented by the following Bellman equation, which can be obtained from Eq. (9.15):

$$ Q_t^\star(s, a) = \mathbb{E}_t^\star\left[ R_t(s, a, s') \right] + \gamma \, \mathbb{E}_t^\star\left[ V_{t+1}^\star(s') \right]. \qquad (9.18) $$
Note that this equation might be inconvenient to work with in practice, as it involves two optimal functions, $Q_t^\star(s, a)$ and $V_t^\star(s)$, rather than values of the same function at different time steps, as in the Bellman equation (9.14). However, we can obtain a Bellman equation for the optimal action-value function $Q_t^\star(s, a)$ in terms of this function only, if we use the following relation between the two optimal value functions:

$$ V_t^\star(s) = \max_a Q_t^\star(s, a). \qquad (9.19) $$

We can now substitute Eq. (9.19), evaluated at time $t+1$, into Eq. (9.18). This produces the following equation:

$$ Q_t^\star(s, a) = \mathbb{E}_t^\star\left[ R_t(s, a, s') + \gamma \max_{a'} Q_{t+1}^\star(s', a') \right]. \qquad (9.20) $$

Unlike Eq. (9.18), this equation relates the optimal action-value function to its own values at a later time. It is called the Bellman optimality equation for the action-value function, and it plays a key role in both dynamic programming and reinforcement learning. It expresses the famous Bellman principle of optimality, which states that optimal cumulative rewards are obtained by taking an optimal action now and following an optimal policy later. Note that unlike Eq. (9.18), the Bellman optimality equation (9.20) is a nonlinear equation, due to the max operation inside the expectation. Accordingly, the Bellman optimality equation is harder to solve than the linear Bellman equation (9.14) that holds for an arbitrary fixed policy $\pi$. The Bellman optimality equation is usually solved numerically. We will discuss some classical methods for solving it in the next sections. The Bellman equation for the optimal state-value function $V_t^\star(s)$ can be obtained using Eqs. (9.18) and (9.19):

$$ V_t^\star(s) = \max_a \mathbb{E}_t^\star\left[ R_t(s, a, s') + \gamma V_{t+1}^\star(s') \right]. \qquad (9.21) $$

Like Eq. (9.20), the Bellman optimality equation (9.21) for the state-value function is a non-linear equation due to the presence of the max operator.
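The Bellman optimality equation (9.20) can likewise be solved by backward recursion for a finite-horizon finite MDP. The layout conventions and the toy model below are our own illustration:

```python
import numpy as np

def optimal_q_backward(P, Rsa, gamma, T):
    """Finite-horizon dynamic programming for Q*_t(s, a):
    Q*_t(s, a) = E[r(s, a, s') + gamma * max_a' Q*_{t+1}(s', a')],
    computed backward in time with terminal values Q*_T = 0."""
    nA, nS, _ = P.shape
    Q = np.zeros((T + 1, nS, nA))
    for t in reversed(range(T)):
        V_next = Q[t + 1].max(axis=1)  # V*_{t+1}(s') = max_a' Q*_{t+1}(s', a')
        Q[t] = np.einsum('asn,asn->sa', P, Rsa + gamma * V_next[None, None, :])
    return Q[:-1]

# Toy MDP: action 0 stays put, action 1 jumps to state 1;
# the reward for any transition equals the index of the next state.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.0, 1.0], [0.0, 1.0]]])
Rsa = np.broadcast_to(np.array([0.0, 1.0]), (2, 2, 2)).copy()
Q = optimal_q_backward(P, Rsa, gamma=0.9, T=2)
```

The greedy action at each step is then simply `Q[t][s].argmax()`; in this toy model the optimal first action in state 0 is to jump.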
If the optimal state-value or action-value function is already known, then finding the optimal action is simple, and essentially amounts to "greedy" one-step maximization. The term "greedy" is used in computer science to describe algorithms that are based only on immediate (single-step) considerations, without regard for longer-term implications. If we already know the optimal state-value function $V_t^\star(s)$, then all actions at all time steps implied in this value function are already optimal. This still does not determine on its own what action should be chosen at the current time step. However, because we know that actions at the subsequent steps are already optimal, to find the best next action in this setting we need only perform a greedy one-step search that takes into account just the immediate successor states of the current state. In other words, when the optimal state-value function $V_t^\star(s)$ is used, a one-step-ahead search for the optimal action produces long-term optimal actions. With the optimal action-value function $Q_t^\star(s, a)$, the search for an optimal action at the current step is even simpler, and amounts to simply maximizing $Q_t^\star(s, a)$ with respect to $a$. This does not require using any information about possible successor states and their values; all relevant information about the dynamics is already encoded in $Q_t^\star(s, a)$. The Bellman optimality equations (9.20) and (9.21) are the central objects for decision-making under the formalism of MDP models. The methods of dynamic programming focus on exact or numerical solutions of these equations. Many (though not all) methods of reinforcement learning are also based on approximate solutions to the Bellman optimality equations (9.20) and (9.21). As we will see in more detail later on, the main difference of these methods from traditional dynamic programming methods is that they rely on empirically observed transitions rather than on a theoretical model of transitions.
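As a small illustration of working with empirically observed transitions, the per-step returns $G_t$ of a single recorded episode can be computed with one backward sweep, which is the basic building block of the Monte Carlo value estimation mentioned earlier (the function name and reward values are our own illustration):

```python
def returns_from_rewards(rewards, gamma):
    """Random returns G_t = sum_i gamma^i * R_{t+i} for every step t of one
    episode, computed by a backward sweep: G_t = R_t + gamma * G_{t+1}."""
    G = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        G[t] = running
    return G

G = returns_from_rewards([1.0, 2.0, 3.0], gamma=0.5)
```

Averaging such returns over many episodes that visit a given state yields a sample-based estimate of $V^\pi(s)$, with no transition model required.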
> Existence of Bellman Equations: Infinite-Horizon vs Finite-Horizon Cases
For time-homogeneous (infinite-horizon) problems, the value function does not explicitly depend on time, and the Bellman equation (9.14) can be written in this case in the compact form

$$ T^\pi V^\pi = V^\pi, \qquad (9.22) $$

where $T^\pi : \mathbb{R}^S \to \mathbb{R}^S$ stands for the Bellman operator

$$ \left( T^\pi V \right)(s) = r(s, \pi(s)) + \gamma \sum_{s' \in S} p\left( s' | s, \pi(s) \right) V(s'), \quad \forall s \in S. $$

Therefore, for time-stationary MDP problems the Bellman equation becomes a fixed-point equation. If $0 < \gamma < 1$, then $T^\pi$ is a maximum-norm contraction, and the fixed-point equation (9.22) has a unique solution; see, e.g., Bertsekas (2012). Similarly, the Bellman optimality equation (9.21) can be written in the time-stationary case as

$$ T^\star V^\star = V^\star, $$

where $T^\star : \mathbb{R}^S \to \mathbb{R}^S$ stands for the Bellman optimality operator

$$ \left( T^\star V \right)(s) = \max_{a \in A} \left[ r(s, a) + \gamma \sum_{s' \in S} p\left( s' | s, a \right) V(s') \right], \quad \forall s \in S. $$

Again, if $0 < \gamma < 1$, then $T^\star$ is a maximum-norm contraction and its fixed-point equation has a unique solution; see Bertsekas (2012). For a finite-horizon MDP problem, the Bellman equations (9.14) and (9.21) become recursive relations between different functions $V_t^\pi(s_t)$ and $V_{t+1}^\pi(s_{t+1})$. The existence of a solution for this case can be established by mapping the finite-horizon MDP problem onto a stochastic shortest-path (SSP) problem. SSP problems have a certain terminal (absorbing) state, and the problem of the agent is to incur a minimum expected total cost on a path to the terminal state. For details on the existence of the Bellman equation for the SSP, see Vol. II of Bertsekas (2012). In a finite-horizon MDP, time $t$ can be thought of as a stage of the process, such that after $N$ stages the system enters the absorbing state with probability one. The finite-horizon problem is therefore mapped onto an SSP problem with an augmented state $\tilde{s}_t = (s_t, t)$.
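The contraction property above can be checked numerically: iterating the Bellman operator from any starting point converges to the unique fixed point. A minimal sketch with a made-up three-state chain under some fixed policy:

```python
import numpy as np

def bellman_operator(V, P_pi, r_pi, gamma):
    """T^pi V = r_pi + gamma * P_pi V for a fixed policy (time-stationary case)."""
    return r_pi + gamma * P_pi @ V

# Hypothetical chain: states 0 and 1 drift toward the absorbing state 2.
P_pi = np.array([[0.5, 0.5, 0.0],
                 [0.0, 0.5, 0.5],
                 [0.0, 0.0, 1.0]])
r_pi = np.array([1.0, 2.0, 0.0])
gamma = 0.9

V = np.zeros(3)
for _ in range(500):          # repeated application of the contraction T^pi
    V = bellman_operator(V, P_pi, r_pi, gamma)

# The iterates converge to the unique fixed point (I - gamma * P_pi)^{-1} r_pi.
V_exact = np.linalg.solve(np.eye(3) - gamma * P_pi, r_pi)
```

Each application of $T^\pi$ shrinks the maximum-norm error by at least a factor $\gamma$, so 500 iterations are far more than enough here.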
4 Dynamic Programming Methods

Dynamic programming (DP), pioneered by Bellman (1957), is a collection of algorithms that can be used to find optimal policies when the environment is Markov and fully observable, so that we can use the formalism of Markov Decision Processes. Dynamic programming approaches work for finite MDP models with a typically low number of states, and moreover assume that the model of the environment is perfectly known. Both conditions are rarely met in problems of practical interest, where a model of the environment is typically not known beforehand, and the state space is often either discrete and high-dimensional, or continuous, possibly with multiple dimensions. Methods of DP which find exact solutions for finite MDPs become infeasible in such situations. Still, while they are normally not very useful for practical problems, methods of DP have fundamental importance for understanding other methods that do work in "real-world" applications and are applied in reinforcement learning, as well as in other related approaches such as approximate dynamic programming. For example, we may use DP methods to validate RL algorithms applied to simulated data with a known probability transition matrix. Here we want to analyze the most popular DP algorithms. A common feature of all these algorithms is that they rely on the notion of a state-value function (or action-value function, or both) as a condensed representation of the quality of both policies and states. Respectively, extensions of such algorithms to (potentially high-dimensional) sample-based approaches performed by RL methods are generically called value function based RL. Some RL methods do not rely on the notion of a value function, and operate only with policies.
In this book, we will focus mostly on value function based RL, but we shall occasionally see examples of an alternative approach based on "policy iteration." In addition, all approaches considered in this section assume that the state-action space is discrete. While usually not explicitly specified, the dimensionality of the state space is assumed to be sufficiently low, so that the resulting computational algorithm is feasible in practice. DP approaches are also used for systems with a continuous state by discretizing the range of values. For a multi-dimensional continuous state space, we could discretize each individual component, and then produce a discretized one-dimensional representation by taking a direct product of the individual grids and indexing all resulting states. However, this approach is straightforward and feasible only when the number of continuous dimensions is sufficiently low, e.g., does not exceed three or four. For higher-dimensional continuous problems, a naive discretization formed by cross-products of individual grids produces an exponential explosion of the number of discretized states, and quickly becomes infeasible in practice.

4.1 Policy Evaluation

As we mentioned above, the state-value function $V_t^\pi(s)$ gives the value of the current state $S_t = s$ provided the agent follows policy $\pi$ in choosing its actions. For a time-stationary MDP, which will be considered in this section, the time-independent version of the Bellman equation (9.14) reads

$$ V^\pi(s) = \mathbb{E}^\pi_t\left[ R(s, a_t, S_{t+1}) + \gamma V^\pi(s') \right], \qquad (9.26) $$

while the time-independent version of the Bellman optimality equation (9.21) is

$$ V^\star(s) = \max_a \mathbb{E}^\star_t\left[ R(s, a, s') + \gamma V^\star(s') \right]. $$

Thus, for time-stationary problems, the problem of finding the state-value function $V^\pi(s)$ for a given policy $\pi$ amounts to solving a set of $|S|$ linear equations, where $|S|$ stands for the number of states.
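The linear system can be written as $V^\pi = r^\pi + \gamma P^\pi V^\pi$ and solved in one step. A minimal NumPy sketch, where the transition matrix `P` and expected rewards `r` under a fixed policy are hypothetical toy inputs:

```python
import numpy as np

# Sketch of solving the |S| linear Bellman equations directly.
# P[s, s'] is the transition probability under a fixed policy pi,
# and r[s] the expected one-step reward (illustrative values).
gamma = 0.9
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
r = np.array([1.0, 0.0])

# V^pi = r + gamma * P V^pi  =>  (I - gamma * P) V^pi = r
V = np.linalg.solve(np.eye(len(r)) - gamma * P, r)
print(V)
```

The matrix inversion hidden in `np.linalg.solve` is exactly the step that becomes costly as $|S|$ grows, which motivates the iterative alternative discussed next in the text.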
As solving such a system directly (in one step) involves matrix inversion, such a solution can be costly when the dimensionality $|S|$ of the state space grows. As an alternative, the Bellman equation (9.26) can be used to set up a recursive approach to solving it. While this produces a multi-step numerical algorithm for finding the state-value function rather than the explicit one-step formula obtained with the linear algebra approach, in practice it often works better than the latter. The idea is simply to view the Bellman equation (9.26) as the stationary point at convergence as $k \to \infty$ of an iterative map indexed by steps $k = 0, 1, \ldots$, which is obtained by applying the Bellman operator

$$ T^\pi[V] := \mathbb{E}^\pi_t\left[ R(s, a_t, S_{t+1}) + \gamma V(s') \right] \qquad (9.28) $$

to the previous iteration $V_k^\pi(s)$ of the value function. Here the lower index $k$ enumerates iterations, and replaces the time index $t$, which is omitted because we work in this section with time-homogeneous MDPs. This produces the following update rule:

$$ V_k^\pi(s) = \mathbb{E}^\pi_t\left[ R(s, a_t, S_{t+1}) + \gamma V_{k-1}^\pi(s') \right]. \qquad (9.29) $$

An informal way to understand this relation is to think of the recursion as a sequential process. The sequential nature of the process can be mapped onto some notion of time. Therefore, if we formally replace $T - t \to k$ in the original time-dependent Bellman equation (9.14), it produces Eq. (9.29). The relation (9.29) is often referred to as the Bellman iteration. The Bellman iteration is proven to converge under some technical conditions on the Bellman operator (9.28). In particular, rewards need to be bounded to guarantee convergence of the map. For any given policy $\pi$, the policy evaluation algorithm amounts to a repeated application of the Bellman iteration (9.29) starting with some initial guess, e.g. $V_0^\pi(s) = 0$. The iteration continues until convergence at a given tolerance level is achieved, or alternatively it can run for a pre-specified number of steps.
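The Bellman iteration (9.29) translates into a short loop. A sketch under the same illustrative conventions as before (transition matrix `P` and rewards `r` under a fixed policy are hypothetical inputs):

```python
import numpy as np

def policy_evaluation(P, r, gamma=0.9, tol=1e-8, max_iter=10_000):
    """Iterative policy evaluation via the Bellman iteration (9.29).

    P[s, s'] and r[s] are the transition matrix and expected reward
    under a fixed policy (illustrative inputs for this sketch).
    """
    V = np.zeros(len(r))           # initial guess V_0(s) = 0
    for _ in range(max_iter):
        V_new = r + gamma * P @ V  # one application of the Bellman operator
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V

P = np.array([[0.8, 0.2], [0.3, 0.7]])
r = np.array([1.0, 0.0])
V = policy_evaluation(P, r)
print(V)
```

For $0 < \gamma < 1$ the loop converges to the same fixed point as the direct linear solve, reflecting the contraction property of $T^\pi$.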
Each iteration involves only linear operations, and therefore can be performed very fast. Recall here that the model of the environment is assumed to be perfectly known in the DP approach, so all expectations are linear and fast to compute. The method is scalable to high-dimensional discrete state spaces. While the policy evaluation algorithm can be quite fast in evaluating a single policy $\pi$, the ultimate goal of both dynamic programming and reinforcement learning approaches is to find the optimal policy $\pi^\star$. This opens the door to a plethora of different approaches. In one class of approaches, we consider a set of candidate policies $\{\pi\}$ and we find the policy $\pi^\star$ by selecting between them. These methods are called "policy iteration" algorithms, and they use policy evaluation algorithms as an integral part. We will consider such algorithms next.

4.2 Policy Iteration

Policy iteration is a classical algorithm for finite MDP models. Again, we consider here only stationary problems; therefore, we again replace the time index by an iteration index, $V_{T-t}^\pi(s) \to V_k(s)$. We can also omit the index $\pi$ here, as the purpose of this algorithm is to find the optimal $\pi = \pi^\star$. To produce a policy iteration algorithm, we need two components: a way to evaluate a given policy and a way to improve a given policy. Both can be considered sub-algorithms in an overall algorithm. Note that the first component in this scheme is already available, and is given by the policy evaluation method just presented. Therefore, to have a complete algorithm, our only remaining task is to supplement it with a method to improve a given policy. To this end, consider the Bellman equation for the action-value function

$$ Q^\pi(s, a) = \mathbb{E}_t\left[ R(s, a, s') + \gamma V^\pi(s') \right] = \sum_{s', r} p(s', r | s, a)\left[ R(s, a, s') + \gamma V^\pi(s') \right]. \qquad (9.30) $$

The current action $a$ is a control variable here.
If we follow policy $\pi$ in choosing $a$, then the action taken is $a = \pi(s)$, and the action value is $Q^\pi(s, \pi(s)) = V^\pi(s)$. This means that by taking different actions $a \sim \pi'$ (with another policy $\pi'$) rather than those prescribed by policy $\pi$, we can produce higher values $Q^\pi(s, \pi'(s))$. It can be shown that this implies that the new policy also improves the state-value function, i.e. $V^{\pi'}(s) \ge V^\pi(s)$ for all states $s \in S$ (the latter statement is called the policy improvement theorem; see, e.g., Sutton and Barto (2018) or Szepesvari (2010) for details). Now imagine we want to choose action $a$ in Eq. (9.30) so as to maximize $Q^\pi(s, a)$. Maximization produces a value $a^\star$ such that $Q^\pi(s, a^\star) \ge Q^\pi(s, a)$ for any $a \ne a^\star$. This can be equivalently thought of as a greedy policy $\pi'$ that coincides with $\pi$ except for the state $s$, where it should be such that $a^\star = \pi'(s)$. If some greedy policy $\pi' \ne \pi$ can be found by maximizing $Q^\pi(s, a)$, or equivalently the right-hand side of Eq. (9.30), then it will satisfy the policy improvement theorem, and can then be used to find the optimal policy via policy iteration. Therefore, an inexpensive search for a better greedy policy $\pi'$ by a local optimization over possible next actions $a \in A$ is guaranteed to produce a sequence of policies that are either better than the previous ones, or at worst keep them unchanged. This observation underlies the policy iteration algorithm, which proceeds as follows. We start with some initial policy $\pi^{(0)}$. Often, a purely random initialization is used. After that, we repeat the following two calculations, for a fixed number of steps or until convergence:

– Policy evaluation: for a given policy $\pi^{(k-1)}$, compute the value function $V^{(k-1)}$ by solving the Bellman equation (9.26).
– Policy improvement: calculate the new policy

$$ \pi^{(k)}(s) = \arg\max_a \sum_{s'} p(s' | s, a)\left[ R(s, a, s') + \gamma V^{(k-1)}(s') \right]. $$
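The two alternating steps can be sketched in NumPy. This is an illustrative tabular implementation; the array layout for `P` and `R` is an assumption of this sketch, not fixed by the text:

```python
import numpy as np

def policy_iteration(P, R, gamma=0.9, n_iter=100):
    """Tabular policy iteration sketch.

    P has shape (|A|, |S|, |S|): P[a, s, s'] is the transition
    probability; R has shape (|A|, |S|): the expected reward for
    taking action a in state s (hypothetical conventions).
    """
    _, n_states, _ = P.shape
    policy = np.zeros(n_states, dtype=int)         # initial policy
    for _ in range(n_iter):
        # Policy evaluation: solve the linear Bellman equations.
        P_pi = P[policy, np.arange(n_states)]      # (|S|, |S|)
        r_pi = R[policy, np.arange(n_states)]      # (|S|,)
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement: greedy update over actions.
        Q = R + gamma * P @ V                      # (|A|, |S|)
        new_policy = Q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            break                                   # policy is stable
        policy = new_policy
    return policy, V

# Toy MDP: action 1 always pays reward 1, action 0 pays 0.
P = np.array([[[1., 0.], [0., 1.]],
              [[1., 0.], [0., 1.]]])
R = np.array([[0., 0.], [1., 1.]])
policy, V = policy_iteration(P, R)
print(policy)  # -> [1 1]
```

The stopping rule exploits the fact noted in the text: improvement either strictly betters the policy or leaves it unchanged, and an unchanged policy is optimal.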
In words, at each iteration step we first compute the value function using the previous policy, and then update the policy using the current value function. The algorithm is guaranteed to converge for a finite-state MDP with bounded rewards. Note that if the dimensionality of the state space is large, multiple runs of policy evaluation can be quite costly, because they involve high-dimensional systems of linear equations. Yet many practical problems of optimal control involve large discrete state-action spaces, or continuous state-action spaces. In these settings, the methods of DP introduced by Bellman (1957), and algorithms like policy iteration (or value iteration, to be presented next), do not work anymore. Reinforcement learning methods were developed in particular as a practical answer to such challenges.

4.3 Value Iteration

Value iteration is another classical algorithm for finite and time-stationary MDP models. Unlike the policy iteration method, it bypasses the policy improvement stage, and uses a recursive procedure to directly find the optimal state-value function $V^\star(s)$. The value iteration method works by applying the Bellman optimality equation as an update rule in an iterative scheme. In more detail, we start by initializing the value function at some initial values $V(s) = V^{(0)}(s)$ for all states, with some choice of function $V^{(0)}(s)$, e.g. $V^{(0)}(s) = 0$. Then we continue iterating the evaluation of the value function using the Bellman optimality equation as the definition of the update rule. That is, for each iteration $k = 1, 2, \ldots$, we use the result of the previous iteration to compute the right-hand side of the equation:

$$ V^{(k)}(s) = \max_a \mathbb{E}^\star_t\left[ R(s, a, s') + \gamma V^{(k-1)}(s') \right] = \max_a \sum_{s', r} p(s', r | s, a)\left[ r + \gamma V^{(k-1)}(s') \right]. \qquad (9.32) $$

This can be thought of as combining the two steps of policy improvement and policy evaluation into one update step.
Note that the new value iteration update rule (9.32) is similar to the policy evaluation update (9.29), except that it also involves taking a maximum over all possible actions $a \in A$. Now, there are a few ways to update the value function in such a value iteration. One approach is to first complete re-computing the value function for all states $s \in S$, and then simultaneously update the value function over all states, $V^{(k-1)}(s) \to V^{(k)}(s)$. This is referred to as synchronous updating. The other approach is to update the value function $V^{(k-1)}(s)$ on the fly, as it is re-computed in the current iteration $k$. This is called asynchronous updating. Asynchronous updates are often used for problems with large state-action spaces. When only a relatively small number of states matter for an optimal solution, updating all states after a complete sweep, as is done with synchronous updates, might be inefficient for high-dimensional state-action spaces. For either way of updating, it can be proven that the algorithm converges to the optimal value function $V^\star(s)$. After $V^\star(s)$ is found, the optimal policy $\pi^\star$ can be found using the same formula as before. As one can see, the basic algorithm is very simple, and works well as long as your state-action space is discrete and has a small number of states. However, similarly to policy iteration, the value iteration algorithm quickly becomes infeasible in high-dimensional discrete or continuous state spaces, due to exponentially large memory requirements. This is known as the curse of dimensionality in the DP literature.3 Given that the time needed for DP solutions to be found is polynomial in the number of states and actions, this may also produce prohibitively long computing times.4 For low-dimensional continuous state-action spaces, the standard approach enabling applications of DP is to discretize variables.
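A synchronous version of the update (9.32) can be sketched as follows. The array layout for `P` and `R` is an illustrative assumption of this sketch:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8, max_iter=10_000):
    """Synchronous value iteration sketch.

    P has shape (|A|, |S|, |S|) with P[a, s, s'] the transition
    probability; R has shape (|A|, |S|) with the expected reward
    for action a in state s (hypothetical conventions).
    """
    _, n_states, _ = P.shape
    V = np.zeros(n_states)        # V^(0)(s) = 0
    for _ in range(max_iter):
        Q = R + gamma * P @ V     # one-step lookahead, shape (|A|, |S|)
        V_new = Q.max(axis=0)     # Bellman optimality update (9.32)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    greedy_policy = Q.argmax(axis=0)
    return V_new, greedy_policy

# Same toy MDP as for policy iteration: action 1 always pays 1.
P = np.array([[[1., 0.], [0., 1.]],
              [[1., 0.], [0., 1.]]])
R = np.array([[0., 0.], [1., 1.]])
V, pi_greedy = value_iteration(P, R)
print(pi_greedy)  # -> [1 1]
```

An asynchronous variant would overwrite `V[s]` state by state inside a sweep instead of building `V_new` and swapping it in at the end.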
This method can be applied only if the state dimensionality is very low, typically not exceeding three or four. For higher dimensions, a simple enumeration of all possible states leads to an exponential growth of the number of discretized states, making the classical DP approaches infeasible for such problems due to memory and speed limitations. On the other hand, as will be discussed in detail below, the RL approach relies on samples, which are always discrete-valued even for continuous distributions. When the sampling-based approach of RL is joined with some reasonable choice of a low-dimensional basis in such continuous spaces (i.e., using some methods of function approximation), RL is capable of working with continuously valued multidimensional states and actions. Recall that DP methods aim for a numerically exact calculation of the optimal value function at all points of a discrete state space. However, what is often needed in high-dimensional problems is an approximate way of computing the value functions using a lower, and often much lower, number of parameters than the original dimensionality of the state space. Such methods are called approximate dynamic programming, and can be applied in situations when the model of the world is known (or independently estimated beforehand from data), but the dimensionality of a (discretized) state space is too large to apply the standard value or policy iteration methods.

3 While high dimensionality is a curse for DP approaches, as it makes them infeasible for high-dimensional problems, with some other approaches it may rather bring simplifications, in which case the "curse of dimensionality" is replaced by the "blessing of dimensionality."
4 Computing times that are polynomial in the number of states and actions are obtained for worst-case scenarios in DP. In practical applications of DP, convergence is sometimes faster than the constraints given by worst-case scenarios.
On the other hand, the reinforcement learning approach works directly with samples from data. When it is combined with a proper method of function approximation to handle a high-dimensional state space, it provides a sample-based RL approach to optimal control, which is the topic we will pursue next.

Example 9.3 Financial Cliff Walking

Consider an over-simplified model of household finance. Let $S_t$ be the amount of money the household has in a bank account at time $t$. We assume for simplicity that $S_t$ can only take values in a discrete set $\{S^{(i)}\}_{i=0}^{N-1}$. The account has to be maintained for $T$ time steps, after which it should be closed, so $T$ is the planning horizon. The zero level $S^{(0)} = 0$ is a bankruptcy level: it has to be avoided, as reaching it means an inability to pay the household's liabilities. At each step, the agent can deposit to the account, $S^{(i)} \to S^{(i+1)}$ (action $a_+$), withdraw from the account, $S^{(i)} \to S^{(i-1)}$ (action $a_-$), or keep the same amount, $S^{(i)} \to S^{(i)}$ (action $a_0$). The initial amount in the account is zero. For any step before the final step $T$, if the agent moves to the zero level $S^{(0)} = 0$, it receives a negative reward of $-100$, and the episode terminates. Otherwise, the agent continues for all $T$ steps. Any action not leading to the zero level gets a negative reward of $-1$. At the terminal time, if the final state is $S_T > 0$, the reward is $-1$, but if the account goes back to zero exactly at time $T$, i.e. $S_T = 0$, the last action gets a positive reward of $+10$. The learning task is to maximize the total reward over $T$ time steps (Fig. 9.2). The RL agent has to learn the optimal depository policy online, by trying different actions during a training episode. Note that while this is a time-dependent problem, we can map it onto a stationary problem with an episodic task and a target state, such as the original cliff walking problem in Sutton and Barto (2018).

Fig. 9.2 The financial cliff walking problem is closely based on the famous cliff walking problem of Sutton and Barto (2018). Given a bank account with an initial zero balance ("start"), the objective is to deposit to and deplete the account by a unit of currency so that the account ends with a zero balance at the final time step ("goal"). Premature depletion is labeled as a bankruptcy and terminates the game. Reaching the final time with a surplus amount in the account results in a penalty. At each time step, the agent may choose from a number of actions if the account is not zero: deposit ("U"), withdraw ("D"), or do nothing ("Z"). Transaction costs are imposed so that the optimal policy is to hold the balance at unity.

5 Reinforcement Learning Methods

Reinforcement learning methods aim at the same goal of solving MDP models as do the DP methods. The main differences are in how the problems of data processing and computational design are approached. This section provides a brief overview of some of the most popular reinforcement learning methods for solving MDP problems. Three main classes of approaches to reinforcement learning are Monte Carlo methods, policy search methods, and value-based RL. The first two classes do not rely on Bellman equations, and thus do not have direct links to the Bellman equations introduced in this chapter. While we present a brief overview of Monte Carlo and policy search methods, most of the material presented in the later chapters of this book uses value-based RL. In what follows, we will often refer to value-based RL as simply "RL". As we just mentioned, these RL approaches use the Bellman optimality equations (9.20) and (9.21); however, they proceed differently. With DP approaches, one attempts to solve these equations exactly. This is only feasible if a perfect model of the world is known and the dimensionality of the state-action space is sufficiently low.
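The reward structure of Example 9.3 can be captured in a small transition function. This is a sketch under stated assumptions: the number of account levels, the treatment of the starting zero balance, and the action encoding ($-1$, $0$, $+1$) are illustrative choices not fixed by the text:

```python
def step(s, a, t, T, n_levels=5):
    """One transition of the financial cliff walking toy model.

    s: current account level index; a: -1 (withdraw), 0 (hold),
    +1 (deposit); t: current time step. Returns (s_next, reward,
    done). Hitting level 0 before the horizon is a bankruptcy;
    ending exactly at 0 at time T earns the +10 bonus.
    """
    s_next = min(max(s + a, 0), n_levels - 1)
    if s_next == 0 and t + 1 < T:
        return s_next, -100, True       # bankruptcy before horizon
    if t + 1 == T:                      # terminal time step
        return s_next, (10 if s_next == 0 else -1), True
    return s_next, -1, False            # ordinary step cost

# The optimal 3-step episode: deposit, hold, withdraw back to zero.
total, s = 0, 0
for t, a in enumerate([+1, 0, -1]):
    s, r, done = step(s, a, t, T=3)
    total += r
print(total)  # -> 8  (two -1 steps, then the +10 terminal bonus)
```

The episode total of $-1 - 1 + 10 = 8$ illustrates why the optimal policy holds the balance at unity until the last step.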
As we remarked earlier, neither assumption holds in most problems of practical interest. Reinforcement learning approaches do not assume that a perfect model of the world is known. Instead, they simply rely on actual data that are viewed as samples from a true data-generating distribution. The problem of estimating this distribution can be bypassed altogether with reinforcement learning. In particular, model-free reinforcement learning operates directly with samples of data, and relies only on samples when optimizing its policy. This is not to say, of course, that models of the world are useless for reinforcement learning. Model-based reinforcement learning approaches build an internal model of the world as part of their ultimate goal of policy optimization. We will discuss model-based reinforcement learning later in this book, but in this section, we will generally restrict consideration to model-free RL. Related to the first principle of relying on data is the second key difference of reinforcement learning methods from DP methods. Because data are always noisy, reinforcement learning cannot aim at an exact solution as the standard DP methods do, but rather aims at good approximate solutions. Clearly, this does not prevent an exploration of the behavior of RL solutions when the number of data points becomes infinite. If we knew an exact model of the world, we could reconstruct it exactly in this limit. Therefore, theoretically sound (rather than purely empirically driven) RL algorithms should demonstrate convergence to known solutions in this asymptotic limit. In particular, if we deal with a sufficiently low-dimensional system, such solutions can be independently calculated using the standard DP methods. This can be used for testing and benchmarking RL algorithms, as will be discussed in more detail in later sections of this book.
Finally, the last key difference of reinforcement learning methods from DP is that they do not seek the best solution; they simply seek a "sufficiently good" solution. The main motivation for such a paradigm change is the "curse of dimensionality" mentioned above. DP methods for finite MDPs operate with tabular representations of value functions, rewards, and transition probabilities. Memory requirements and speed constraints make this approach infeasible for high-dimensional discrete or continuous state-action spaces. Therefore, when working with such problems, reinforcement learning approaches rely on function approximation for quantities of interest such as value functions or action policies. Reinforcement learning algorithms with function approximation will be discussed later in this section, after we introduce tabular versions of these algorithms for finite MDPs with sufficiently low-dimensional discrete-valued state and action spaces. The purpose of this section is to introduce some of the most popular RL algorithms for both finite and continuous-state MDP problems. We will start with methods developed for finite MDPs, and then later show how they can be extended to continuous state-action spaces using function approximation approaches.

? Multiple Choice Question 1

Select all the following correct statements:

a. Unlike DP, RL needs to know reward and transition probability functions.
b. Unlike DP, RL does not need to know reward and transition probability functions, as it relies on samples.
c. The information set $\mathcal{F}_t$ for RL includes a triplet $\left(X_t^{(n)}, a_t^{(n)}, X_{t+1}^{(n)}\right)$ for each step.
d. The information set $\mathcal{F}_t$ for RL includes a tuple $\left(X_t^{(n)}, a_t^{(n)}, R_t^{(n)}, X_{t+1}^{(n)}\right)$ for each step.

5.1 Monte Carlo Methods

Monte Carlo methods, like other methods of reinforcement learning, do not assume complete knowledge of the environment, nor do they rely on any model of the environment.
Instead, Monte Carlo methods rely on experience, that is, samples of states, actions, and rewards. When working with real data, this amounts to learning without any prior knowledge of the environment. Experience can also be simulated. In this case, Monte Carlo methods provide a simulation-based approach to solving MDP problems. Recall that the DP approach requires knowledge of exact transition probabilities to perform iteration steps in policy iteration or value iteration algorithms. With reinforcement learning Monte Carlo methods, only samples from these distributions are needed, but not their explicit form. Monte Carlo methods are normally restricted to episodic tasks with a finite planning horizon $T < \infty$. Rather than relying on Bellman equations, they operate directly with the definition of the action-value function

$$ Q_t^\pi(s, a) = \mathbb{E}^\pi_t\left[ \sum_{i=0}^{T-1} R\left(S_{t+i}, a_{t+i}, S_{t+i+1}\right) \,\Big|\, S_t = s, a_t = a \right] = \mathbb{E}^\pi_t\left[ G_t \,|\, S_t = s, a_t = a \right], \qquad (9.33) $$

where $G_t$ is the total return, see Eq. (9.11). If we have access to data consisting of a set of $N$ $T$-step trajectories, each producing return $G_t^{(n)}$, then we can estimate the action-value function at the state-action values $(s, a)$ using the empirical mean:

$$ Q_t^\pi(s, a) \approx \frac{1}{N} \sum_{n=1}^{N} G_t^{(n)} \,\Big|\, S_t = s,\; a_t = a. \qquad (9.34) $$

Note that for each trajectory, a complete $T$-step trajectory should be observed, so that its return can be computed and used to update the action-value function. It is worth clarifying the meaning of the index $\pi$ in this relation. With the Monte Carlo estimation of Eq. (9.34), $\pi$ should be understood as the policy that was applied when collecting the data. This means that this Monte Carlo method is an on-policy algorithm. On-policy algorithms are only able to learn an optimal policy from samples if these samples themselves are produced using the optimal policy. Conversely, off-policy algorithms are able to learn an optimal policy from data generated using other, sub-optimal policies.
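The empirical mean (9.34) can be computed with a single backward pass per episode. A minimal sketch, where the trajectory format (a list of `(state, action, reward)` triples per episode) is a hypothetical convention of this example:

```python
from collections import defaultdict

def mc_action_values(trajectories, gamma=1.0):
    """Monte Carlo estimate of Q(s, a) by averaging empirical returns.

    Each trajectory is assumed to be a list of (state, action, reward)
    triples covering one complete T-step episode.
    """
    returns = defaultdict(list)
    for traj in trajectories:
        G = 0.0
        # Walk backwards to accumulate the discounted return G_t.
        for s, a, r in reversed(traj):
            G = r + gamma * G
            returns[(s, a)].append(G)
    return {sa: sum(gs) / len(gs) for sa, gs in returns.items()}

# Toy data: two complete episodes visiting the same (s, a) pairs.
trajs = [[("s0", "a", 1.0), ("s1", "b", 0.0)],
         [("s0", "a", 0.0), ("s1", "b", 1.0)]]
Q = mc_action_values(trajs)
print(Q[("s0", "a")])  # -> 1.0 (both episode returns from s0 are 1.0)
```

Note that each `(s, a)` bucket is averaged independently, which is exactly the "no bootstrapping" property discussed next.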
Off-policy Monte Carlo methods will not be addressed here; the interested reader is referred to Sutton and Barto (2018) for more details on this topic. As indicated by Eq. (9.33), the action-value function should be calculated separately for each combination of a state and action in the finite MDP assumed here. The number of such combinations is $|S| \cdot |A|$. Respectively, for each combination of $s$ and $a$ from this set, we should select only those trajectories that encounter such a combination, and only include returns from these trajectories in the sum in Eq. (9.34). For every combination of $(s, a)$, the empirical estimate (9.34) asymptotically converges to the exact answer in the limit $N \to \infty$. Also note that these estimates are independent for different values of $(s, a)$. This could be useful, as it enables a trivial parallelization of the calculation. On the other hand, independence of estimates for different pairs $(s, a)$ means that this algorithm does not bootstrap, i.e. it does not use previous or related evaluations to estimate the action-value function at node $(s, a)$. Such a method may miss some regularities observed or expected in true solutions (e.g., smoothness of the value function with respect to its arguments), and therefore may produce spurious jumps in estimated state-value functions due to noise in the data. Beyond empirical estimation of the action-value function as in Eq. (9.34) or the state-value function, Monte Carlo methods can also be used to find the optimal control, provided the Monte Carlo RL agent has access to a real-world or simulated environment. To this end, the agent should produce trajectories using different policies. For each policy $\pi$, a number $N$ of trajectories are sampled using this policy. The action-value function is estimated using an empirical mean as in Eq. (9.34).
After this, one follows with the policy improvement step, which coincides with the greedy update of the policy iteration method: $\pi'(s) = \arg\max_a Q^\pi(s, a)$. The new policy is used to sample a new set of trajectories, and the process runs until convergence or for a fixed number of steps. Note that generating new trajectories corresponding to newly improved policies may not always be feasible. For example, an agent may only have access to one fixed set of trajectories obtained using a certain fixed policy. In such cases, one may resort to importance sampling techniques, which use trajectories obtained under different policies to estimate the return under a given policy. This is achieved by re-weighting observed trajectories by likelihood ratio factors obtained as ratios of probabilities of observing a given reward under the trial policy $\pi'$ and the policy $\pi$ used in the data collection stage. Instead of updating the action-value function (or the state-value function) simultaneously after all $N$ trajectories are sampled, which is a batch-mode evaluation, we could convert the problem into an online learning problem where updates occur after observing each individual trajectory, according to the following rule:

$$ Q(s, a) \leftarrow Q(s, a) + \alpha\left[ G_t(s, a) - Q(s, a) \right], \qquad (9.35) $$

where $0 < \alpha < 1$ is a step-size parameter usually referred to as the "learning rate." It can be shown that such an iterative update converges to true empirical and theoretical averages in the limit $N \to \infty$. Yet this update is not entirely real-time, as it requires finishing each $T$-step trajectory before its total return $G_t$ can be used in the update (9.35). This may be inefficient, especially if multiple trajectory generations and evaluations are required as part of policy optimization. As we will show below, there are other methods of learning that are free of this drawback.

5.2 Policy-Based Learning

In value-based RL, the optimal policy is obtained from an optimal value function, and thus is not modeled separately.
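The incremental rule (9.35) is a one-liner in code. A minimal sketch, assuming a tabular Q stored as a dictionary (a hypothetical layout for this illustration):

```python
def mc_update(Q, s, a, G, alpha=0.1):
    """Online Monte Carlo update: Q(s,a) <- Q(s,a) + alpha * (G - Q(s,a)).

    Q is a hypothetical dict mapping (state, action) to a value;
    G is the observed total return of one finished episode.
    """
    q = Q.get((s, a), 0.0)
    Q[(s, a)] = q + alpha * (G - q)
    return Q

Q = {}
for G in [1.0, 1.0, 1.0]:          # three episodes with the same return
    mc_update(Q, "s0", "a", G, alpha=0.5)
print(Q[("s0", "a")])  # -> 0.875 (moves toward 1.0: 0.5, 0.75, 0.875)
```

The estimate relaxes geometrically toward the observed returns; with a decaying $\alpha$ it recovers the plain empirical average.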
Policy-based reinforcement learning takes a different approach, and directly models the policy. Unlike value-based RL, where we considered deterministic policies, policy-based RL operates with stochastic policies $\pi_\theta(a|s)$ that define probability distributions over the set of possible actions $a \in A$, where $\theta$ denotes the parameters of this distribution. Recall that deterministic policies can be considered special cases of stochastic policies where the distribution of possible actions degenerates into a Dirac delta-function concentrated on the single action prescribed by the policy: $\pi_\theta(a|s) = \delta(a - a^\star(s, \theta))$, where $a^\star(s, \theta)$ is a fixed map from states to actions, parameterized by $\theta$. With either deterministic or stochastic policies, learning is performed by tuning the free parameters $\theta$ to maximize the total expected reward. Policy-based methods are based on a simple relation commonly known as the "log-likelihood trick", which is obtained by computing the derivative of an expectation $J(\theta) = \mathbb{E}_{\pi_\theta(a)}[G(a)]$. Here the function $G(a)$ can be arbitrary, but to connect to reinforcement learning, we will generally mean that $G(a)$ stands for the random return (9.11), written here as $G(a)$ to emphasize its dependence on the actions taken. The gradient of the expectation with respect to the parameters $\theta$ can be computed as follows:

$$ \nabla_\theta J(\theta) = \int G(a)\, \nabla_\theta \pi_\theta(a)\, da = \int G(a)\, \frac{\nabla_\theta \pi_\theta(a)}{\pi_\theta(a)}\, \pi_\theta(a)\, da = \mathbb{E}_{\pi_\theta(a)}\left[ G(a)\, \nabla_\theta \log \pi_\theta(a) \right]. \qquad (9.36) $$

This shows that the gradient of $J$ with respect to $\theta$ is the expected value of the function $G(a) \nabla_\theta \log \pi_\theta(a)$. Therefore, if we can sample from the distribution $\pi_\theta(a)$, we can compute this function and obtain an unbiased estimate of the gradient of $J(\theta)$ by sampling. This is the reason the relation (9.36) is called the "log-likelihood trick": it allows one to estimate the gradient of the functional $J(\theta)$ by sampling or simulation. The log-likelihood trick underlies the simplest policy search algorithm, called REINFORCE.
The algorithm starts with some initial values of the parameters $\theta_0$ and the iteration counter $k = 0$. Using a step-size hyperparameter $\alpha_k$, the update of $\theta_k$ amounts to first sampling $a_k \sim \pi_{\theta_k}(a)$, and then updating the vector of parameters using the incremental version of Eq. (9.36):

$$ \theta_{k+1} = \theta_k + \alpha_k\, G(a_k)\, \nabla_\theta \log \pi_{\theta_k}(a_k). \qquad (9.37) $$

Here $\alpha_k$ is a learning rate parameter defining the speed of updates along the estimated gradient of $J(\theta)$. The algorithm continues until convergence, or for a fixed number of steps. As one can see, this algorithm is very simple to implement as long as sampling from the distribution $\pi_\theta(a)$ is easy. On the other hand, Eq. (9.37) is a version of stochastic gradient ascent, which can be noisy and thus produce high-variance estimators. The REINFORCE algorithm (9.37) is a pure policy search method that does not use any value function. A more sophisticated version of learning can be obtained where we simultaneously model both the policy and the action-value function. Such methods are called actor-critic methods, where the "actor" is an algorithm that generates a policy from a family $\pi_\theta(a|s)$, and the "critic" evaluates the results of applying the policy, expressing them in terms of a state-value or action-value function. Following such terminology, the REINFORCE algorithm could be considered an "actor-only" algorithm, while SARSA or Q-learning, to be presented below, could be viewed as "critic-only" methods. One advantage of policy-based algorithms is that they admit very flexible parameterizations of action policies, which can also work for continuous action spaces. One popular and quite general type of action policy is the so-called softmax-in-actions policy

$$ \pi_\theta(a|s) = \frac{e^{h(s, a, \theta)}}{\sum_{a'} e^{h(s, a', \theta)}}. $$

The functions $h(s, a, \theta)$ in this expression are interpreted as action preferences.
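For a softmax policy over discrete actions, $\nabla_\theta \log \pi_\theta(a)$ has the closed form "one-hot$(a) - \pi_\theta$". The sketch below illustrates the update (9.37) on a single-state toy problem; for a deterministic demonstration it replaces the sampled update by its expectation over actions (the action returns `G` and all parameter values are illustrative assumptions, and a real REINFORCE run would sample $a_k \sim \pi_{\theta_k}$ instead):

```python
import numpy as np

def softmax(h):
    e = np.exp(h - h.max())   # stabilized softmax over preferences h
    return e / e.sum()

G = np.array([1.0, 2.0, 0.5])  # hypothetical return of each of 3 actions
theta = np.zeros(3)            # preferences: h(a) = theta[a]
alpha = 0.1

for _ in range(500):
    p = softmax(theta)
    # Expected REINFORCE step: E[G(a) * grad log pi(a)] with
    # grad_theta log pi(a) = one_hot(a) - p, which averages to
    # grad_i = p_i * (G_i - sum_a p_a G_a).
    grad = p * (G - p @ G)
    theta += alpha * grad      # gradient ascent on J(theta)

print(softmax(theta).argmax())  # -> 1, the action with the highest return
```

The policy concentrates its probability mass on the highest-return action, which is what maximizing $J(\theta) = \mathbb{E}_{\pi_\theta}[G(a)]$ demands.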
They could be taken to be linear functions of the parameters θ, for example h(s, a, θ) = θᵀU(s, a), where U(s, a) would be a vector of features constructed on the product space S × A. Models of this kind are known as linear architecture models. Alternatively, preference functions h(s, a, θ) could be modeled non-parametrically using neural networks (or some other universal approximators such as decision trees). Parameters θ in this case would be the weights of such a neural network. Indeed, many implementations of actor-critic algorithms involve using two separate neural networks that serve as general function approximations for the action policy and value functions, respectively. More on actor-critic algorithms can be found in Sutton and Barto (2018), Szepesvari (2010).

5.3 Temporal Difference Learning

We have seen that Monte Carlo methods must wait until the end of each episode to determine the increment of the action-value function update (9.35). Temporal difference (TD) methods perform updates differently, by waiting only until the next time step, and incrementing the value function at each time step. TD methods can be used for both policy evaluation and policy improvement. Here we focus on how they can be used to evaluate a given policy π by computing a state-value function Vtπ(s). How this can be done can be seen from the Bellman equation (9.14), which we repeat here for convenience:

Vtπ(s) = Eπt[ Rt(s, a, s′) + γ Vt+1π(s′) ].  (9.40)

As we discussed above, this equation can be converted into an update equation, if the state-value function from a previous iteration is used to evaluate the right-hand side to define the update rule. This idea was used in the value iteration and policy iteration algorithms of DP. TD methods use the same idea, and add to this an estimation of the expectation entering Eq. (9.40) from a single observation—which is the observation obtained at the next time step.
Without relying on a theoretical model of the world which, similarly to the DP approach, would calculate the expectation in Eq. (9.40) exactly, TD methods rely on the simplest possible estimation of this expectation, essentially by computing an empirical mean from a single observation! Clearly, relying on a single observation to estimate an empirical mean can lead to very volatile updates, but this is the price one should be prepared to pay for a truly online method. On the other hand, it is extremely fast. Even if it might bring only a marginal improvement (on average) for maximization of a value function, it might produce a workable and efficient algorithm, because such updates may be repeated many times and at a low cost during the steps of policy improvement.

A TD method, when applied to the state-value function Vtπ(s), takes a mismatch between the right-hand side of Eq. (9.40) (estimated with a single observation) and its left-hand side as a measure of an error δt, also called the TD error:

δt = Rt(s, a, s′) + γ Vt+1(s′) − Vt(s).

Note that, as we deal here with updating the state-value function for a fixed policy π, we omitted the explicit upper indices π in this relation. This error defines the rule of update of the state-value function at node s:

Vt(s) ← Vt(s) + α [ Rt(s, a, s′) + γ Vt+1(s′) − Vt(s) ],  (9.42)

where α is a learning rate. Note that the learning rate need not be a constant, but rather can vary with the number of iterations. In fact, as we will discuss shortly, a certain decay schedule for the learning rate α = αt should be implemented to guarantee convergence as the number of updates goes to infinity. Note that the TD error δt is not actually available at time t as it depends on the next-step state s′, and is therefore only available at time t + 1. The update (9.42) relies only on the information from the next step, and is therefore often referred to as the one-step TD update, also known as the TD(0) rule.
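A minimal sketch of the TD(0) rule (9.42), evaluating a hypothetical 2-state Markov reward process (all parameter values are illustrative): from either state, the chain moves to state 0 or 1 with equal probability, and a reward of 1 is received when landing in state 1, so with γ = 0.5 the true values are V(0) = V(1) = 1.

```python
import numpy as np

rng = np.random.default_rng(2)

gamma, alpha = 0.5, 0.01
V = np.zeros(2)
s = 0
for _ in range(100_000):
    s_next = int(rng.integers(2))            # uniform transition to 0 or 1
    r = 1.0 if s_next == 1 else 0.0          # reward for landing in state 1
    delta = r + gamma * V[s_next] - V[s]     # TD error
    V[s] += alpha * delta                    # TD(0) update, Eq. (9.42)
    s = s_next
```

With a constant learning rate the estimates fluctuate around the true values; a decaying schedule, as discussed above, would make them converge.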
It is helpful to compare this with Eq. (9.35) that was obtained for a Monte Carlo (MC) method. Note that Eq. (9.35) requires observing a whole episode in order to compute the full trajectory return Gt. Rewards observed sequentially do not produce updates of a value function until a trajectory is completed. Therefore, the MC method cannot be used in an online setting. On the other hand, the TD update rule (9.42) enables updates of the state-value function after each individual observation, and therefore can be used as an online algorithm that bootstraps by combining a reward signal observed in the current step with an estimation of a next-period value function, where both values are estimated from a sample.

TD methods thus combine the bootstrapping properties of DP with the sample-based approach of Monte Carlo methods. That is, updates in TD methods are based on samples, while in DP they are based on computing expectations of next-period value functions using a specific model of the environment. For any fixed policy π, the TD(0) rule (9.42) can be proved to converge to the true state-value function if the learning rate α slowly decreases with the number of iterations. Such proofs of convergence hold for both finite MDPs and for MDPs with continuous state-action spaces—with the latter case only established with a linear function approximation, but not with more general non-linear function approximations.5

The ability of TD learning to produce updates after each observation turns out to be very important in many practical applications. Some RL tasks involve long episodes, and some RL problems such as continuous learning do not have any unambiguous definition of finite-length episodes. Whether episodic learning appears natural or ad hoc, delaying learning until the end of each episode can slow it down and produce inefficient algorithms.
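The contrast can be made concrete with a small simulation on a hypothetical 3-step episodic chain s0 → s1 → s2 → terminal, with reward ∼ N(1, 1) on every transition and γ = 1, so that the true values are V = [3, 2, 1] (the chain and all parameters are illustrative). TD(0) updates after every transition, while the MC estimate of each V(s) can only be formed once the full return Gt is known:

```python
import numpy as np

rng = np.random.default_rng(7)

gamma, alpha, n_episodes = 1.0, 0.02, 20_000
V_td = np.zeros(4)                      # index 3: terminal state, fixed at 0
returns = [[] for _ in range(3)]

for _ in range(n_episodes):
    rewards = rng.normal(1.0, 1.0, size=3)
    for s in range(3):                  # TD(0): update after every step
        V_td[s] += alpha * (rewards[s] + gamma * V_td[s + 1] - V_td[s])
    for s in range(3):                  # MC: needs the full return G_t
        returns[s].append(rewards[s:].sum())

V_mc = np.array([np.mean(r) for r in returns])
```

Both estimators recover the true values; the difference is that the TD estimates were usable online, after every single transition.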
Because each update of model parameters requires re-running all episodes, Monte Carlo methods become progressively more inefficient in comparison to TD methods as model complexity increases. There exist several versions of TD learning. In particular, instead of applying it to learning a state-value function Vt(s), we could use a similar approach to update the action-value function Qt(s, a). Furthermore, for both types of TD learning, instead of one-step updates such as the TD(0) rule (9.42), one could use multi-step updates, leading to more general TD(λ) methods and n-step TD methods. We refer the reader to Sutton and Barto (2018) for a discussion of these algorithms, while focusing in the next section on one-step TD learning methods for an action-value function.

5.4 SARSA and Q-Learning

We now arrive at arguably the most important material in this chapter. In applying TD methods to learn an action-value function Q(s, a) instead of a state-value function V(s), one should differentiate between on-policy and off-policy algorithms. Recall that on-policy algorithms assume that the policy used to produce a dataset used for learning is an optimal policy, and the task therefore is to learn the optimal policy function from the data. In contrast, off-policy algorithms assume that the policy used in a particular dataset may not necessarily be an optimal policy, but can be sub-optimal or even purely random. The purpose of off-policy algorithms is to find an optimal policy when data is collected under a different policy. This task is in general more difficult than the first case of on-policy learning, which can be viewed as a direct inference problem of fitting a function (a policy function, in this case) to observed data. For both on-policy and off-policy learning with TD methods, the starting point is the Bellman optimality equation (9.20), which we repeat here:

Q*t(s, a) = E*t[ Rt(s, a, s′) + γ max_{a′} Q*t+1(s′, a′) ].  (9.43)
5 We will discuss function approximations below, after we present TD algorithms in a tabulated setting that is appropriate for finite MDPs with a sufficiently low number of possible states and actions.

The idea of TD methods for the action-value function is the same as before: to estimate the right-hand side of Eq. (9.43) from observations, and then use a mismatch between the right- and left-hand sides of this equation to define the rule of an update. However, the details of such a procedure depend on whether we use on-policy or off-policy learning.

Consider first the case of on-policy learning. If we know that the data was collected under an optimal policy, the max operator in Eq. (9.43) becomes redundant, as observed actions should in this case correspond to the maximum of a value function. Similar to the TD method for the state-value function, we replace the expectation in (9.43) by its estimation based on a single observation. The update in this case becomes

Qt(s, a) ← Qt(s, a) + α [ Rt(s, a, s′) + γ Qt+1(s′, a′) − Qt(s, a) ].  (9.44)

This on-policy algorithm is known as SARSA, to emphasize that it uses a quintuple (s, a, r, s′, a′) to make an update. The TD error for this case is

δt = Rt(s, a, s′) + γ Qt+1(s′, a′) − Qt(s, a).

Convergence of the SARSA algorithm depends on the policy used to generate data. If the policy converges to a greedy policy in the limit of an infinite number of steps, SARSA converges to the true policy and action-value functions in the limit when each state-action pair is visited an infinite number of times.

> SARSA vs Q-Learning
– SARSA is an on-policy method, which means that it computes the Q-value according to a certain policy and then the agent follows that policy.
– Q-learning is an off-policy method. It consists of computing the Q-value according to a greedy policy, but the agent does not necessarily follow the greedy policy.

Now consider the case of off-policy learning.
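The SARSA update (9.44) with an ε-greedy behavior policy can be sketched on a minimal, hypothetical 4-state chain (the environment and all parameter values are illustrative): states 0..3, actions 0 = left and 1 = right, episodes start at 0 and end at state 3 with reward 1.

```python
import numpy as np

rng = np.random.default_rng(3)

n_s, n_a, gamma, alpha, eps = 4, 2, 0.9, 0.1, 0.1
Q = np.zeros((n_s, n_a))

def eps_greedy(s):
    # random action with probability eps, otherwise greedy
    if rng.random() < eps:
        return int(rng.integers(n_a))
    return int(np.argmax(Q[s]))

def step(s, a):
    s2 = min(s + 1, 3) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == 3 else 0.0)

for episode in range(2000):
    s, a = 0, eps_greedy(0)
    while s != 3:
        s2, r = step(s, a)
        a2 = eps_greedy(s2)      # next action from the SAME policy
        # SARSA update, Eq. (9.44), on the quintuple (s, a, r, s', a')
        Q[s, a] += alpha * (r + gamma * Q[s2, a2] - Q[s, a])
        s, a = s2, a2
```

The off-policy case considered next differs only in how the next action enters the update: Q-learning replaces Q[s2, a2] with a greedy max over next actions.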
In this case, the data available for learning is collected using some sub-optimal, or possibly even purely random, policy. Can we still learn from such data? The answer to this question is in the affirmative: we simply replace the expectation in (9.43) by its estimate obtained from a single observation, as in SARSA, but keep this time the max operator over next-step actions a′:

Qt(s, a) ← Qt(s, a) + α [ Rt(s, a, s′) + γ max_{a′} Qt+1(s′, a′) − Qt(s, a) ].  (9.46)

This is known as Q-learning. It was proposed by Watkins in 1989, and since then has become one of the most popular approaches in reinforcement learning. Q-learning is provably convergent for finite MDPs when the learning rate α slowly decays with the number of iterations, in the limit when each state-action pair is visited an infinite number of times. Extensions of the tabulated-form Q-learning (9.46) for finite MDPs to systems with a continuous state space will be presented in the following sections. Q-learning is thus TD(0) learning applied to an action-value function.

Note the key difference between SARSA and Q-learning in an online setting when an agent has to choose actions during learning. In SARSA, we use the same policy (e.g., an ε-greedy policy, see Exercise 9.8) to generate both the current action a and the next action a′. In contrast to that, in Q-learning the next action a′ is a greedy action that maximizes the action-value function Qt+1(s′, a′) at the next time step. It is exactly the choice of a greedy next action a′ that makes Q-learning an off-policy algorithm that can learn an optimal policy from different and sub-optimal execution policies. The reason Q-learning works as an off-policy method is that the TD(0) rule (9.46) does not depend on the policy used to obtain data for training.
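A sketch of the Q-learning update (9.46) on a hypothetical 4-state chain (states 0..3, actions 0 = left and 1 = right, reward 1 on reaching state 3; all parameters are illustrative) makes the off-policy property visible: the behavior policy is purely random, yet the greedy policy read off the learned Q is optimal.

```python
import numpy as np

rng = np.random.default_rng(4)

n_s, n_a, gamma, alpha = 4, 2, 0.9, 0.1
Q = np.zeros((n_s, n_a))

for episode in range(2000):
    s = 0
    while s != 3:
        a = int(rng.integers(n_a))                 # random behavior policy
        s2 = min(s + 1, 3) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == 3 else 0.0
        # Q-learning update, Eq. (9.46): greedy max over next actions
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
```

With γ = 0.9 the learned values approach Q*(2, right) = 1 and Q*(1, right) = 0.9, the exact optimal values, even though no trajectory in the data was generated by the optimal policy.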
Such dependence enters the TD rule only indirectly, via an assumption that this policy should be such that each state-action pair is encountered in the data many times—in fact, an infinite number of times asymptotically, when the number of observations goes to infinity. The TD rule (9.46) does not try to answer the question of how the observed values of such pairs are computed. Instead, it directly uses these observed values to make updates to values of the action-value function.

It is its ability to learn from off-policy data that makes Q-learning particularly attractive for many tasks in reinforcement learning. In particular, in batch reinforcement learning, an agent should learn from some data previously produced by some other agent. Assuming that the agent that produced the historical data was acting optimally may in many cases be too stringent or unrealistic an assumption. Moreover, in certain cases the previous agent might have been acting optimally, but the environment could change due to some trend (drift) effects. Even though a policy used in the data could be optimal for previous periods, due to the drift, this becomes off-policy learning. In short, it appears there are more examples of off-policy learning in real-world applications than of on-policy learning.

The ability to use off-policy data does not come without a price, which is related to the presence of the max operator in Eq. (9.46). This operator in fact provides a mechanism for comparison between different policies during learning. In particular, Eq. (9.46) implies that one cannot learn an optimal policy from a single observed transition (s, a) → (r, s′, a′) and nothing else, as the chosen next action a′ may not necessarily be an optimal action that maximizes Q*t+1(s′, a′), as required in the Q-learning update rule (9.46).
This suggests that online Q-learning could maintain a tabulated representation of the action-value function Q(s, a) for all previously visited pairs (s, a), and use it in order to estimate the max_{a′} Q*t+1(s′, a′) term using both the past data and a newly observed transition. Such a method could be viewed as an incremental version of batch learning where a batch dataset is continuously updated by adding new observations, and possibly removing observations that are too old and may correspond to very sub-optimal policies. This approach is called experience replay in the reinforcement learning literature. The same procedure of adding one observation (or a few of them) at a time can also be used in pure batch-mode Q-learning as a computational method that allows one to make updates while processing trajectories stored in the data file. The difference between online Q-learning with experience replay and pure batch-mode Q-learning therefore amounts to different rules for updating the batch file. In batch-mode Q-learning, it stays the same during learning, but could also be built in increments of one or a few observations to speed up the learning process. In online Q-learning, the experience replay buffer is continuously updated by adding new observations, and removing distant ones to keep the buffer size fixed.

Example 9.4 Financial Cliff Walking with SARSA and Q-Learning

The "financial cliff walking" example introduced earlier in this chapter can serve as a simple test case for SARSA and Q-learning for a finite MDP. We assume N = 4 values of possible funds in the account, and assume T = 12 time steps. All combinations of state and time can then be represented as a two-dimensional grid of size N × T = 4 × 12. A time-dependent action-value function Qt(st, at) with three possible actions at = {a+, a−, a0} can then be stored as a rank-three tensor of dimension 4 × 12 × 3.
To facilitate the exploration required in online applications of RL, we can use an ε-greedy policy. The ε-greedy policy is a simple stochastic policy where the agent takes an action that maximizes the action-value function with probability 1 − ε, and takes a purely random action with probability ε. The ε-greedy policy is used to produce both actions a, a′ in the SARSA update (9.44), while with Q-learning it is only used to pick the action at the current step. A comparison of the optimal policies is given in Table 9.1. For sufficiently small α and under tapering of ε (see Fig. 9.3), both methods are shown by Fig. 9.4 to converge to the same cumulative reward. This example is implemented in the financial cliff walking with Q-learning notebook. See Appendix "Python Notebooks" for further details.

5.5 Stochastic Approximations and Batch-Mode Q-learning

A more systematic view of TD methods is given by their interpretation as stochastic approximations for solving Bellman equations. As we will show in this section, such a view both helps to better understand the meaning of the TD update rules presented above, and allows us to extend them to learning using batches of observations in each step, instead of taking all individual observations one by one.

Table 9.1 The optimal policy for the financial cliff walking problem using (top) SARSA (S) and (bottom) Q-learning (Q). The row indices denote the balance and the column indices denote the time period. Note that the two optimal policies are almost identical.
Starting with a zero balance, both optimal policies will almost surely result in the agent following the same shortest path, with a balance of 1, until the final time period.

S    0  1  2  3  4  5  6  7  8  9  10  11
3    Z  Z  Z  Z  Z  Z  Z  Z  Z  Z  Z   Z
2    Z  Z  Z  Z  Z  Z  Z  Z  Z  D  Z   Z
1    Z  Z  Z  Z  Z  Z  Z  Z  Z  Z  D   Z
0    U  Z  Z  Z  Z  Z  Z  Z  Z  Z  Z   G

Q    0  1  2  3  4  5  6  7  8  9  10  11
3    Z  Z  Z  Z  Z  Z  Z  Z  Z  Z  Z   Z
2    Z  Z  Z  Z  Z  Z  Z  Z  Z  D  D   Z
1    Z  Z  Z  Z  Z  Z  Z  Z  Z  Z  D   Z
0    U  Z  Z  Z  Z  Z  Z  Z  Z  Z  Z   G

Fig. 9.3 This figure illustrates how ε is tapered in the financial cliff walking problem with increasing episodes, so that Q-learning and SARSA converge to the same optimal policy and cumulative reward as shown in Table 9.1 and Fig. 9.4.

When the model of an environment is unknown, we try to approximately solve the Bellman optimality equation (9.20) by replacing expectations entering this equation by their empirical averages. Stochastic approximations such as the Robbins–Monro algorithm (Robbins and Monro 1951) take this idea one step further, and estimate the mean without directly summing the samples. We can illustrate the idea behind this method using a simple example of estimating the mean value (1/K) Σ_{k=1}^K xk of a sequence of observations xk with k = 1, . . . , K. Instead of waiting for all K observations, we can add them one by one, and iteratively update the running estimate of the mean x̂k, where k is the iteration number, or the number of data points in a dataset:

Fig. 9.4 Q-learning and SARSA are observed to converge to almost the same optimal policy and cumulative reward in the financial cliff walking problem under the ε-tapering schedule shown in Fig. 9.3.
Note that the cumulative rewards are averaged over twenty simulations.

x̂k+1 = (1 − αk) x̂k + αk xk,  (9.47)

where αk < 1 denotes the step size (learning rate) at step k, which should satisfy the following conditions:

Σ_{k=1}^∞ αk = ∞,  Σ_{k=1}^∞ (αk)² < ∞.

Robbins and Monro have shown that under these constraints, the iterative method of computing the mean (9.47) converges to the true mean with probability one (Robbins and Monro 1951). In general, the optimal choice of a (step-dependent) learning rate αk is not universal but specific to a problem, and may require some experimentation.

Q-learning presented in Eq. (9.46) can now be understood as the Robbins–Monro stochastic approximation (9.47) for estimating the unknown expectation in Eq. (9.43), with a current estimate Qt(k)(s, a) corrected by a current observation Rt(s, a, s′) + γ max_{a′∈A} Qt+1(k)(s′, a′):

Qt(k+1)(s, a) = (1 − αk) Qt(k)(s, a) + αk [ Rt(s, a, s′) + γ max_{a′∈A} Qt+1(k)(s′, a′) ].  (9.49)

The single-observation Q-update in Eq. (9.49) corresponds to a pure online version of the Robbins–Monro algorithm. Alternatively, stochastic approximations can be employed in an off-line manner, by using a chunk of data, instead of a single data point, to iteratively update model parameters. Such approaches are useful when working with large datasets, and are frequently used in machine learning, e.g. in the mini-batch stochastic gradient descent method, as a way to more efficiently train a model by feeding it mini-batches of data. In addition, batch versions of stochastic approximation methods are widely used when doing reinforcement learning in continuous state-action spaces, which is the topic we discuss next.

? Multiple Choice Question 2

Select all the following correct statements:
a. Q-learning is obtained by using the Robbins–Monro stochastic approximation to estimate the max(·) term in the Bellman optimality equation.
b.
Q-learning is obtained by using the Robbins–Monro stochastic approximation to estimate the unknown expectation in the Bellman optimality equation.
c. The optimal Q-function in Q-learning is obtained when an optimal learning rate αk = α/k with α ≈ 1/137 is used for learning.
d. The optimal Q-function is learned in Q-learning iteratively, where each step (Q-iteration) implements one iteration of the Robbins–Monro algorithm.

Example 9.5 Optimal Stock Execution with SARSA and Q-Learning

A setting that is very similar to the "financial cliff walking" example introduced earlier in this chapter can serve to develop a toy MDP model for optimal stock execution. Assume that the broker has to sell N blocks of shares with n shares in each block, e.g. we can have N = 10, n = 1000. The state of the inventory at time t is then given by the variable Xt taking values in a set X with N = 10 states X(n), so that the start point at t = 0 is X0 = X(N−1) and the target state is XT = X(0) = 0. In each step, the agent has four possible actions at = a(i) that measure the number of blocks of shares sold at time t, where a(0) = 0 stands for no action, and a(i) = i with i = 1, . . . , 3 is the number of blocks sold. The update equation is Xt+1 = (Xt − at)+. Trades influence the stock price dynamics through a linear market impact

St+1 = St (1 − ν at) + σ St Zt,

where ν is a market friction parameter. To map onto a finite MDP problem, the range of possible stock prices S can be discretized to M values, e.g. M = 12. The state space of the problem is given by the direct product of states X × S of dimension N × M = 10 · 12 = 120. The dimension of the extended space including time is then 120 · 10 = 1200. The payoff of selling at blocks of shares when the stock price is St is n at St.
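The toy dynamics described above can be sketched as follows. The parameter values and the naive one-block-per-step policy are illustrative, and the price update assumes the linear-impact form St+1 = St(1 − ν at) + σ St Zt:

```python
import numpy as np

rng = np.random.default_rng(8)

N, n, T = 10, 1000, 10       # blocks, shares per block, time steps
nu, sigma = 0.001, 0.01      # market friction and volatility (illustrative)
X, S = N - 1, 100.0          # initial inventory (blocks) and stock price
pnl = 0.0

for t in range(T):
    a = min(1, X)            # naive policy: sell one block per step if possible
    pnl += n * a * S         # payoff of selling a blocks at price S
    X = max(X - a, 0)        # inventory update X_{t+1} = (X_t - a_t)^+
    # linear market impact plus multiplicative noise
    S = S * (1 - nu * a) + sigma * S * rng.standard_normal()
```

An RL agent would replace the naive policy with one learned from the action-value function; the point of the sketch is only the state transition and payoff structure.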
A risk-adjusted payoff adds a penalty on the variance of the remaining inventory price at the next step t + 1: rt = n at St − λ n Var[St+1 Xt+1]. All combinations of state and time can then be represented as a three-dimensional grid of size N × M × T = 10 · 12 · 10. A time-dependent action-value function Qt(st, at) with four possible actions at = {a0, a1, a2, a3} can then be stored as a rank-four tensor of dimension 10 × 12 × 10 × 4.

We can now apply SARSA or Q-learning to learn optimal stock execution in such a simplified setting. For the exploration needed for online learning, one can use an ε-greedy policy. At each time step, a time-dependent optimal policy is therefore found with 10 × 12 states (for inventory and stock price level) and four possible actions at = {a0, a1, a2, a3}, and can be viewed as a 10 × 12 matrix as shown, for the second time step, in Table 9.2. This example is implemented in the market impact problem with Q-learning notebook. See Appendix "Python Notebooks" for further details (Fig. 9.5).

Fig. 9.5 The optimal execution problem: how to break up large market orders into smaller orders with lower market impact? In the finite MDP formulation, the state space is the inventory, shown by the number of blocks, the stock price, and time. In this illustration, the agent decides whether to sell {0, 1, 2, 3} blocks at each time step. The problem is whether to retain inventory, thus increasing market risk but reducing the market impact, or to quickly sell inventory to reduce exposure but increase the market impact.

Table 9.2 The optimal policy, at time step t = 2, for the trade execution problem using (left) SARSA and (right) Q-learning. The rows denote the inventory level and the columns denote the stock price level. (Rows: inventory level 0–9; columns: stock price level; entries: number of blocks to sell, 0–3.)
Each element denotes an action to sell {0, 1, 2, 3} blocks of shares.

Example 9.6 Electronic Market Making with SARSA and Q-Learning

We can build on the previous two examples by considering the problem of high-frequency market making. Unlike the previous example, we shall learn a time-independent optimal policy. Assume that a market maker seeks to capture the bid–ask spread by placing one-lot best bid and ask limit orders. They are required to strictly keep their inventory between −1 and 1. The problem is when to optimally bid to buy ("b"), bid to sell ("s"), or hold ("h"), each time there is a limit order book update. For example, sometimes it may be more advantageous to quote a bid to close out a short position if it will almost surely yield an instantaneous net reward; other times it may be better to wait and capture a larger spread. In this toy example, the agent uses the liquidity imbalance in the top of the order book as a proxy for price movement and, hence, fill probabilities. The example does not use market orders, knowledge of queue positions, cancelations, or limit order placement at different levels of the ladder. These are left to later material and exercises. A simple illustration of the market making problem is shown in Fig. 9.6.

At each non-uniform time update t, the market feed provides best prices and depths {pta, ptb, qta, qtb}. The state space is the product of the inventory, Xt ∈ {−1, 0, 1}, and the gridded liquidity ratio R̂t = qta / (qta + qtb) ∈ [0, 1], discretized over N grid points, where qta and qtb are the depths of the best ask and bid. R̂t → 0 is the regime where the mid-price will go up and an ask is filled. Conversely for R̂t → 1. The dimension of the state space is chosen to be 3 · 5 = 15. A bid is filled with probability ϕt := R̂t and an ask is filled with probability 1 − ϕt. The rewards are chosen to be the expected total P&L.
If a bid is filled to close out a short holding, then the expected reward is rt = −ϕt(pt + c), where pt is the difference between the entry and exit price and c is the transaction cost. For example, if the agent entered a short position at time s < t with a filled ask at psa = 100 and closed out the position with a filled bid at ptb = 99, then pt = 1. The agent is penalized for quoting an ask or bid when the position is already short or long, respectively. As with previous examples, we apply SARSA or Q-learning to optimize market making. For the exploration needed for online learning, one can use an ε-greedy policy. A comparison of the optimal policies is given in Table 9.3. For a sufficiently large number of iterations in each episode and under tapering of ε, both methods are observed to converge to the same cumulative reward in Fig. 9.7. This example is implemented in the electronic market making with Q-learning notebook. See Appendix "Python Notebooks" for further details.

Fig. 9.6 The market making problem requires the placement of bid and ask quotes to maximize P&L while maintaining a position within limits. For each limit order book update, the agent must anticipate which quotes shall be filled to capture the bid–ask spread. Transaction costs, less than a tick, are imposed to penalize trading. This has the net effect of rewarding trades which capture at least a tick. A simple model is introduced to determine the fill probabilities, and the state space is the product of the position and the gridded fill probabilities.

Table 9.3 The optimal policy for the market making problem using (top) SARSA (S) and (bottom) Q-learning (Q). The row indices denote the position and the column indices denote the predicted ask fill probability buckets.
Note that the two optimal policies are almost identical.

S       0–0.1  0.1–0.2  0.2–0.3  0.3–0.4  0.4–0.5  0.5–0.6  0.6–0.7  0.7–0.8  0.8–0.9  0.9–1.0
Flat    b      b        b        b        b        b        s        s        s        s
Short   b      b        b        b        b        b        b        b        b        h
Long    h      s        s        s        s        s        s        s        s        s

Q       0–0.1  0.1–0.2  0.2–0.3  0.3–0.4  0.4–0.5  0.5–0.6  0.6–0.7  0.7–0.8  0.8–0.9  0.9–1.0
Flat    b      b        b        b        b        b        s        b        b        s
Short   b      b        b        b        b        b        b        b        b        b
Long    s      s        s        s        s        s        s        s        s        s

5.6 Q-learning in a Continuous Space: Function Approximation

Our previous presentation of reinforcement learning algorithms assumed the setting of a finite MDP model with discrete state and action spaces. For this case, all combinations of state-action pairs are enumerable. Therefore, the action-value and state-value functions can be maintained in a tabulated form, while TD methods such as Q-learning (9.49) can be used to compute the action-value function values at these nodes in an iterative manner. While such methods are simple and can be proven to converge in a tabulated setting, they also quickly run into their limitations for many interesting real-world problems.

Fig. 9.7 Q-learning and SARSA are observed to converge to almost the same optimal policy and cumulative reward in the market making problem.

The latter are often high-dimensional, with discrete or continuous state-action spaces. If we use a straightforward discretization of each continuous state and/or action variable, and then enumerate all possible combinations of states and actions, we end up with an exponentially large number of state-action pairs. Even storing such data can pose major challenges with memory requirements, not to speak of an exponential slow-down in the cost of computations, where all such pairs essentially become parameters of a very high-dimensional optimization problem.
This calls for approaches based on function approximation methods, where functions in (high-dimensional) discrete or continuous spaces are represented in terms of a relatively small number of freely adjustable parameters. To motivate linear function approximations, let us start with a finite MDP with a set of nodes {sn}, n = 1, . . . , M, where M is the number of nodes. The state-value function V(s) in this case is determined by a set of node values Vn for each node of the state grid. We can present this set using an "index-free" notation:

V(s) = Σn Vn δs,sn,  (9.52)

where δs,sn is the Kronecker symbol: δs,sn = 1 if s = sn, and 0 otherwise.

Eq. (9.52) can be viewed as an approach to conveniently and simultaneously represent all values Vn of the value function on the grid in the form of a map V(s) that points to a given node value Vn for any particular choice s = sn. Now we can write the same equation (9.52) in a more suggestive form as an expansion over a set of basis functions

V(s) = Σn Vn δs,sn = Σn Vn φn(s),  (9.54)

with "one-hot" basis functions φn(s): φn(s) = δs,sn = 1 if s = sn, and 0 otherwise.

The latter form helps to understand how this setting can now be generalized to a continuous state space. In the discrete-state representation (9.54), we use "one-hot" (Dirac-like) basis functions φn(s) = δs,sn. We can now envision a transition to a continuous-state limit as a process of adding new points to the grid, while at the same time keeping the size M of the sum in Eq. (9.54) fixed by aggregating (taking partial sums) within some neighborhoods of a set of M node points. Each term in such a sum would be given by the product of an average mass of nodes Vn and a smoothed-out version of the original Dirac-like basis function of a finite MDP. Such partial aggregation of neighboring points produces a tractable approximation to the actual value function defined in a continuous limit.
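The "index-free" one-hot representation (9.54) is easy to state in code; the grid and the node values below are illustrative:

```python
import numpy as np

# One-hot basis representation of a tabulated value function, Eq. (9.54):
# V(s) = sum_n V_n * phi_n(s), with phi_n(s) = Kronecker delta(s, s_n).
nodes = np.array([0.0, 1.0, 2.0, 3.0])   # grid states s_n
V_n = np.array([1.5, 0.7, -0.2, 2.1])    # node values

def phi(s):
    # one-hot basis: 1 on the exactly matching node, 0 elsewhere
    return (nodes == s).astype(float)

def V(s):
    return float(V_n @ phi(s))
```

Evaluating V at a grid node simply reads off the corresponding node value; replacing `phi` with smoothed, localized basis functions is exactly the generalization to a continuous state space discussed above.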
A function value at any point of a continuous space is now mapped onto an $M$-dimensional approximation. The quality of such a finite-dimensional function approximation is determined by the number of terms in the expansion, as well as by the functional form of the basis functions. A smoothed-out localized basis could be constructed using, e.g., B-splines or Gaussian kernels, while for a multi-dimensional continuous-state case one could use multivariate B-splines or radial basis functions (RBFs). A set of one-dimensional B-spline basis functions is illustrated in Fig. 9.8. As one can see, B-splines produce well-localized basis functions that differ from zero only on a limited segment of the total support region. Other alternatives, for example, polynomials or trigonometric functions, can also be considered for basis functions; however, they would be functions of a global, rather than local, variation. Similar expansions can be considered for action-value functions $Q(s,a)$. Let us assume that we have a set of basis functions $\psi_k(s,a)$ with $k = 0, 1, \ldots, K-1$ defined on the direct product $S \times A$. Respectively, we could approximate an action-value function as follows:

$$ Q(s,a) = \sum_{k=0}^{K-1} \theta_k \, \psi_k(s,a). \qquad (9.56) $$

Fig. 9.8 B-spline basis functions

Here the coefficients $\theta_k$ of such an expansion can be viewed as free parameters. Respectively, we can find the values of these parameters which provide the best fit to the Bellman optimality equation. For a fixed and finite set of $K$ basis functions $\psi_k(s,a)$, the problem of functional optimization of the action-value function $Q(s,a)$ is reduced by Eq. (9.56) to a much simpler problem of $K$-dimensional numerical optimization over the parameters $\theta_k$, irrespective of the actual dimensionality of the state space. Note that having only a finite (and not too large) value of $K$, we can at best hope only for an approximate match between the optimal value function obtained in this way and a "true" optimal value function.
The latter could in principle be obtained with the same basis function expansion (9.56), provided the set $\{\psi_k(s,a)\}$ is complete, by taking the limit $K \to \infty$. Equation (9.56) thus provides an example of function approximation, where a function of interest is represented as an expansion over a set of $K$ basis functions $\psi_k(s,a)$, with the coefficients $\theta_k$ being adjustable parameters. Such linear function representations are usually referred to as linear architectures in the machine learning literature. Functions of interest such as value functions and/or policy functions are represented and computed in these linear architecture methods as linear functions of the parameters $\theta_k$ and basis functions $\psi_k(s,a)$. The main advantage of linear architecture approaches is their relative robustness and computational simplicity. The possible amount of variation in the functions being approximated is essentially determined by the possible amount of variation in the basis functions, and thus can be explicitly controlled. Furthermore, because Eq. (9.56) is linear in $\theta$, it can produce analytic solutions if a loss function is quadratic, or unique and easily computed numerical solutions when a loss function is non-quadratic but convex. Reinforcement learning methods with linear architectures are provably convergent.

On the other hand, the linear architecture approach is not free of drawbacks. The main one is that it gives no guidance on how to choose a good set of basis functions. For a bounded one-dimensional continuous state, it is not difficult to find a good set of basis functions. For example, we could use a trigonometric basis, or splines, or even a polynomial basis. However, with multiple continuous dimensions, finding a good set of basis functions is non-trivial. This goes beyond reinforcement learning, and is generally known in machine learning as the feature construction (or extraction) problem.
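As a concrete sketch of a linear architecture of the form (9.56), the snippet below builds Gaussian (RBF-style) basis functions on a one-dimensional state, crossed with a one-hot encoding of two discrete actions. The centers, width, and coefficients $\theta_k$ are illustrative assumptions, not values from the text:

```python
import math

# Sketch of a linear architecture Q(s, a) = sum_k theta_k * psi_k(s, a):
# Gaussian (RBF-like) features in s, repeated once per discrete action.
# Centers, width, and theta values are invented for illustration.

centers = [0.0, 0.5, 1.0]
width = 0.25
actions = [0, 1]

def psi(s, a):
    """Basis vector psi_k(s, a): RBFs in s in the slots of action a, zeros elsewhere."""
    feats = []
    for act in actions:
        for c in centers:
            if act == a:
                feats.append(math.exp(-((s - c) ** 2) / (2 * width ** 2)))
            else:
                feats.append(0.0)
    return feats

theta = [0.2, -0.1, 0.4, 0.0, 0.3, -0.2]   # K = 6 free parameters

def Q(s, a):
    """Linear architecture: Q(s, a) = sum_k theta_k * psi_k(s, a)."""
    return sum(t * p for t, p in zip(theta, psi(s, a)))

print(Q(0.5, 0))
```

Because $Q$ is linear in $\theta$, fitting these six parameters to Bellman targets with a squared loss is an ordinary least-squares problem, which is exactly the computational simplicity the text attributes to linear architectures.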
One possible approach to deal with such cases is to use non-linear architectures that rely on generic function approximation tools such as trees or neural networks, in order to have flexible representations for functions of interest that do not depend on any pre-defined set of basis functions. In particular, deep reinforcement learning is obtained when one uses a deep neural network to approximate value functions or the action policy (or both) in reinforcement learning tasks. We will discuss deep reinforcement learning below, after we present an off-line version of Q-learning that works for both discrete and continuous state spaces while operating with mini-batches of data, instead of updating after each observation.

5.7 Batch-Mode Q-Learning

Assume that we have a set of basis functions $\psi_k(s,a)$ with $k = 0, 1, \ldots, K-1$ defined on the direct product $S \times A$, and the action-value function is represented by the linear expansion (9.56). Such a representation applies to both finite and continuous MDP problems—as we have seen above, a finite MDP case can be considered a special case of the linear architecture (9.56) with Kronecker (Dirac-like) delta functions taken as basis functions $\psi_k(s,a)$. Therefore, using the linear specification (9.56), we can provide a unified description of Q-learning algorithms for both cases of finite and continuous MDPs. Solving the Bellman optimality equation (9.20) now amounts to finding the parameters $\theta_k$. Clearly, if we want to find all $K > 1$ such parameters, observing just one data point in each iteration would be insufficient to determine them, or to update their previous estimates in a unique and well-specified way. To this end, we need at least $K$ observations (and, to avoid high-variance estimation, a multiple of this number) to produce such an estimate. In other words, we need to work in the setting of batch-mode, or off-line, reinforcement learning.
With batch reinforcement learning, an agent does not have access to an environment, but rather can only operate with some historical data collected by observing the actions of another agent over a period of time. Based on the law of large numbers, one can expect that whenever batch reinforcement learning can be used for training a reinforcement learning algorithm, it can provide estimators with lower variance than those obtained with pure online learning. To obtain a batch version of Q-learning, the one-step Bellman optimality equation (9.20) is interpreted as a regression of the form

$$ R_t(s, a, s') + \gamma \max_{a' \in A} Q^{\star}_{t+1}(s', a') = \sum_{k=0}^{K-1} \theta_k \, \psi_k(s,a) + \varepsilon_t, \qquad (9.57) $$

where $\varepsilon_t$ is a random noise at time $t$ with mean zero. The parameters $\theta_k$ now appear as regression coefficients of the dependent variable $R_t(s,a,s') + \gamma \max_{a' \in A} Q^{\star}_{t+1}(s', a')$ on the regressors given by the basis functions $\psi_k(s,a)$. Equations (9.57) and (9.20) are equivalent in expectation: taking the expectation of both sides of (9.57), we recover (9.20) with the function approximation (9.56) used for the optimal Q-function $Q^{\star}_t(s,a)$. A batch data file consists of tuples $(s, a, r, s') = (s_t, a_t, r_t, s_{t+1})$ for $t = 0, \ldots, T-1$. Each tuple record $(s, a, r, s')$ contains the current state $s = s_t$, the action taken $a = a_t$, the reward received $r = r_t$, and the next state $s' = s_{t+1}$. If we know the action-value function $Q^{\star}_{t+1}(s', a')$ as a function of state $s'$ and action $a'$ (via either an explicit formula or a numerical algorithm), the tuple record $(s, a, r, s')$ can be viewed as a pair $(s, y)$ of a supervised regression with the independent variable $s = s_t$ and dependent variable $y := r + \gamma \max_{a' \in A} Q^{\star}_{t+1}(s', a')$. Note that there is a nuance here related to taking the maximum $\max_{a' \in A}$ over all actions in the next time step.
We will return to this point momentarily, but for now let us assume that this operation can be performed in one way or another, so that each tuple $(s, a, r, s')$ can indeed be used as an observation for the regression (9.57). Assume that for each time step $t$, we have samples from $N$ trajectories.⁶ Using a conventional squared loss function, the coefficients $\theta_k$ can be found by solving the standard least-squares optimization problem:

$$ L_t(\theta) = \sum_{n=1}^{N} \left( R_t\big(s_n, a_n, s'_n\big) + \gamma \max_{a' \in A} Q^{\star}_{t+1}\big(s'_n, a'\big) - \sum_{k=0}^{K-1} \theta_k \, \psi_k(s_n, a_n) \right)^2. \qquad (9.58) $$

This is known as the Fitted Q Iteration (FQI) method. We will discuss an application of this method in the next chapter when we present reinforcement learning for option pricing.

Let us now address the challenge of computing the term $\max_{a' \in A} Q^{\star}_{t+1}(s', a')$ that appears in the regression (9.57) when we are only given samples of transitions in the form of tuples $(s, a, r, s')$. One simple way would be to replace the theoretical maximum by an empirical maximum observed in the dataset. This would amount to using the same dataset to estimate both the optimal action and the optimal Q-function. It turns out that such a procedure leads to an overestimation of $\max_a Q^{\star}_{t+1}(s,a)$ in the Bellman optimality equation (9.20), due to Jensen's inequality and the convexity of the $\max(\cdot)$ function: $\mathbb{E}\left[\max f(x)\right] \geq \max \mathbb{E}\left[f(x)\right]$, where $f(x)$ is an arbitrary function. Indeed, what we need is the maximum of an expectation, $\max_{a'} \mathbb{E}\left[Q(s', a')\right]$, but taking an empirical maximum over noisy sample-based estimates of $Q(s', a')$ amounts to estimating the expectation of the maximum, $\mathbb{E}\left[\max_{a'} Q(s', a')\right]$. By Jensen's inequality, such a replacement generally leads to an overestimation of the action-value function.

⁶ These may be Monte Carlo trajectories or trajectories obtained from real-world data.
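The FQI regression step above can be sketched compactly. With indicator (one-hot) basis functions over a small discrete state-action grid, the least-squares problem decouples, and each coefficient is simply the average regression target over the samples falling in its cell; the batch of transition tuples and the next-step Q-values below are invented for illustration:

```python
# One Fitted Q Iteration regression step with indicator basis functions:
# least squares then reduces to averaging targets per (s, a) cell.
# The tuples (s, a, r, s') and Q_next values are an invented toy batch.

gamma = 0.9
states, actions = [0, 1], [0, 1]
Q_next = {(s, a): 0.0 for s in states for a in actions}  # next-step Q, mostly zero
Q_next[(1, 0)] = 1.0

batch = [  # (s, a, r, s')
    (0, 0, 1.0, 1),
    (0, 0, 3.0, 1),
    (0, 1, 0.0, 0),
    (1, 0, 2.0, 0),
]

# Regression target y = r + gamma * max_a' Q_next(s', a')
targets = {}
for s, a, r, s_next in batch:
    y = r + gamma * max(Q_next[(s_next, ap)] for ap in actions)
    targets.setdefault((s, a), []).append(y)

# With one-hot basis functions, the least-squares fit per cell is the mean target.
theta = {cell: sum(ys) / len(ys) for cell, ys in targets.items()}
print(theta)
```

With general (overlapping) basis functions, the per-cell averaging is replaced by a full least-squares solve, but the structure of the step is the same: build targets from the batch, then fit the linear expansion to them.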
When repeated multiple times during iterations over time steps or during optimization over parameters, this overestimation can lead to distorted and sometimes even diverging value functions. This is known in the reinforcement learning literature as the overestimation bias problem. There are two possible approaches to address a potential overestimation bias with Q-learning. One of them is to use two different datasets to train the action-value function and the optimal policy. But this is not directly implementable with Q-learning, where the policy is determined by the action-value function $Q(s,a)$. Instead of optimizing both the action-value function and the policy on different subsets of data, a method known as Double Q-learning (van Hasselt 2010) introduces two action-value functions $Q^A(s,a)$ and $Q^B(s,a)$. At each iteration, when presented with a new mini-batch of data, the Double Q-learning algorithm randomly chooses between an update of $Q^A(s,a)$ and an update of $Q^B(s,a)$. If one chooses to update $Q^A(s,a)$, the optimal action is determined by finding the maximizer $a^{\star}$ of $Q^A(s', a')$, and then $Q^A(s,a)$ is updated using the TD error $r + \gamma Q^B(s', a^{\star}) - Q^A(s,a)$. If, on the other hand, one chooses to update $Q^B(s,a)$, then the optimal action $a^{\star}$ is calculated by maximizing $Q^B(s', a')$, and $Q^B(s,a)$ is then updated using the TD error $r + \gamma Q^A(s', a^{\star}) - Q^B(s,a)$. As was shown by van Hasselt (2010), the action-value functions $Q^A(s,a)$ and $Q^B(s,a)$ converge in Double Q-learning to the same limit as the number of observations grows. The method avoids the overestimation bias problem of naive sample-based Q-learning, though it can at times lead to underestimation of the action-value function. Double Q-learning is often used with model-free Q-learning using neural networks to represent an action-value function $Q_\theta(s,a)$, where $\theta$ is a set of parameters of the neural network.
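One Double Q-learning update can be sketched as follows. The tables, transition, and learning rate are invented for illustration, and for determinism the sketch only shows the $Q^A$ branch of the (normally random) choice between the two tables:

```python
# Sketch of one Double Q-learning update (van Hasselt 2010):
# the action is selected with one table, but evaluated with the other.
# States, actions, rewards, and initial tables are an invented toy example.

alpha, gamma = 0.5, 0.9
QA = {(0, 0): 0.0, (0, 1): 0.0, (1, 0): 1.0, (1, 1): 2.0}
QB = {(0, 0): 0.0, (0, 1): 0.0, (1, 0): 3.0, (1, 1): 0.5}

def update_A(s, a, r, s_next, actions=(0, 1)):
    """Update QA: argmax comes from QA, but the value comes from QB."""
    a_star = max(actions, key=lambda ap: QA[(s_next, ap)])   # selection via QA
    td = r + gamma * QB[(s_next, a_star)] - QA[(s, a)]       # evaluation via QB
    QA[(s, a)] += alpha * td

update_A(s=0, a=0, r=1.0, s_next=1)
print(QA[(0, 0)])
```

Note the decoupling: $Q^A$ picks the action (here action 1, since $Q^A(1,1) > Q^A(1,0)$), but the bootstrap value used in the TD error is $Q^B(1,1)$, which breaks the correlation that produces the overestimation bias.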
Another possibility arises if the action-value function $Q(s,a)$ has a specific parametric form, so that the maximum over the next-step action $a'$ can be performed analytically or numerically. In particular, for linear architectures (9.56) the maximum can be computed once the form of the basis functions $\psi_k(s,a)$ and the coefficients $\theta_k$ are fixed. With such an independent calculation of the maximum in the Bellman optimality equation, splitting a dataset into two separate datasets for learning the action-value function and the optimal policy, as is done with Double Q-learning, can be avoided. As we will see in later chapters, such scenarios can be implemented for some problems in quantitative trading, including in particular option pricing.

> Bellman Equations and Non-expansion Operators
As we saw above, a non-analytic term in the Bellman optimality equation involving a max over all actions at the next step poses certain computational challenges. It turns out that this term can be relaxed using differentiable parameterized operators constructed in such a way that the "hard" max operator is recovered in a certain parametric limit. Operators of a certain type, called non-expansion operators, can replace the max operator in a Bellman equation without losing the existence of a solution, such as a fixed-point solution for a time-stationary MDP problem. Let $h$ be a real-valued function over a finite set $I$, and let $\circledast$ be a summary operator that maps values of the function $h$ onto a real number. The maximum operator $\max_{i \in I} h(i)$ and the minimum operator $\min_{i \in I} h(i)$ are examples of summary operators. A summary operator $\circledast$ is called a non-expansion if it satisfies two properties:

$$ \min_{i \in I} h(i) \leq \circledast\, h(i) \leq \max_{i \in I} h(i), \quad \text{and} \quad \left| \circledast\, h(i) - \circledast\, h'(i) \right| \leq \max_{i \in I} \left| h(i) - h'(i) \right|, $$

where $h'$ is another real-valued function over the same set.
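The two defining properties can be checked numerically. The sketch below verifies them for the mean operator; the two vectors are arbitrary illustrative values:

```python
# Numerical check of the two non-expansion properties for the "mean"
# summary operator. The vectors h and h2 are arbitrary illustrative values.

h  = [1.0, -2.0, 3.5, 0.0]
h2 = [0.5, -1.0, 2.0, 4.0]

def mean_op(values):
    """Summary operator: map the values of h onto a single real number."""
    return sum(values) / len(values)

# Property 1: min h <= (mean h) <= max h
assert min(h) <= mean_op(h) <= max(h)

# Property 2: |mean(h) - mean(h')| <= max_i |h(i) - h'(i)|
lhs = abs(mean_op(h) - mean_op(h2))
rhs = max(abs(x, ) if False else abs(x - y) for x, y in zip(h, h2))
print(lhs <= rhs)
```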
Some examples of non-expansions include the mean and max operators, as well as the epsilon-greedy operator $\text{eps}_{\varepsilon}(X) = \varepsilon \, \text{mean}(X) + (1 - \varepsilon) \max(X)$. As was shown by Littman and Szepesvári (1996), value iteration for the action-value function,

$$ \hat{Q}(s,a) \leftarrow r(s,a) + \gamma \sum_{s' \in S} p(s, a, s') \max_{a' \in A} \hat{Q}(s', a'), \qquad (9.61) $$

can be replaced by a generalized value iteration

$$ \hat{Q}(s,a) \leftarrow r(s,a) + \gamma \sum_{s' \in S} p(s, a, s') \, \circledast_{a' \in A} \, \hat{Q}(s', a'), $$

which converges to a unique fixed point if the operator $\circledast$ is a non-expansion with respect to the infinity norm:

$$ \left| \circledast_a \hat{Q}(s,a) - \circledast_a \hat{Q}'(s,a) \right| \leq \max_a \left| \hat{Q}(s,a) - \hat{Q}'(s,a) \right| $$

for any $\hat{Q}$, $\hat{Q}'$, $s$.

? Multiple Choice Question 3
Select all the following correct statements:
a. Fitted Q Iteration is a method to accelerate on-line Q-learning.
b. Similar to the DP approach, Fitted Q Iteration looks at only one trajectory at each update, so that Q-iteration fits better when extra noise from other trajectories is removed.
c. Fitted Q Iteration works only for discrete state-action spaces.
d. Fitted Q Iteration works only for continuous state-action spaces.
e. Fitted Q Iteration works for both discrete and continuous state-action spaces.

> Online Learning with MDPs
Recall that in Chap. 1, we presented the Multi-Armed Bandit (MAB) as an example of online sequential learning. Similar to the MAB formulation, for a more general setting of online learning with Markov Decision Processes,⁷ a training algorithm $\mathcal{A}$ aims at the minimization of the regret of $\mathcal{A}$, defined as follows:

$$ \mathcal{R}^{\mathcal{A}}_T = T \rho^{\star} - R^{\mathcal{A}}_T, $$

where $R^{\mathcal{A}}_T = \sum_{t=0}^{T-1} R_{t+1}$ is the total reward received up to time $T$ while following $\mathcal{A}$, and $\rho^{\star}$ stands for the optimal long-run average reward:

$$ \rho^{\star} = \max_{\pi} \rho^{\pi} = \max_{\pi} \sum_{s} \mu^{\pi}(s) R(s, \pi(s)), $$

where $\mu^{\pi}(s)$ is the stationary distribution of states induced by the policy $\pi$. The problem of minimization of regret is clearly equivalent to the maximization of the total reward. For a discussion of algorithms for online learning with MDPs, see Szepesvári (2010).

⁷ See Sect. 3 for further details of MDPs.
Online learning with MDPs can be of interest in the financial context for certain tasks that may require real-time adjustments of a policy following newly received data, such as intraday trading. As we mentioned earlier, a combination of off-line and online learning using experience replay is often found to produce better and more stable behavior than pure online learning.

5.8 Least Squares Policy Iteration

Recall that for every MDP, there exists an optimal policy, $\pi^*$, which maximizes the expected, discounted return of every state. As discussed earlier in this chapter, policy iteration is a method of discovering this policy by iterating through a sequence of monotonically improving policies. Each iteration consists of two phases: (i) value determination computes the state-action values for a policy $\pi$ by solving the corresponding linear system; (ii) policy improvement defines the next policy $\pi'$. These steps are repeated until convergence. Least Squares Policy Iteration (LSPI) (Lagoudakis and Parr 2003) is a model-free off-policy method that can efficiently use (and re-use at each iteration) sample experiences collected using any policy. The LSPI method can be understood as a sample-based and model-free approximate policy iteration method that uses a linear architecture, where an action-value function $Q_t(x_t, a_t)$ is sought as a linear expansion in a set of basis functions. The LSPI approach proceeds as follows. Assume we have a set of $K$ basis functions $\{\Phi_k(x,a)\}_{k=1}^{K}$. While particular choices for such basis functions will be presented below, in this section their specific form does not matter, as long as the set of basis functions is expressive enough that the true optimal action-value function approximately lies in the span of these basis functions.
Provided such a set is fixed, we use a linear function approximation for the action-value function:

$$ Q^{\pi}_t(x_t, a_t) = W_t \Phi(x_t, a_t) = \sum_{k=1}^{K} W_{tk} \, \Phi_k(x_t, a_t). $$

Note that a dependence on the policy $\pi$ enters this expression through the dependence of the coefficients $W_{tk}$ on $\pi$. The LSPI method can be thought of as a process of finding an optimal policy via adjustments of the weights $W_{tk}$. The policy $\pi$ is a greedy policy that maximizes the action-value function:

$$ a^{\star}_t(x_t) = \pi_t(x_t) = \arg\max_{a_t} Q_t(x_t, a_t). $$

The LSPI algorithm iterates between computing the coefficients $W_{tk}$ (and hence the action-value function $Q_t(x_t, a_t)$) and computing the policy given the action-value function using Eq. (9.74). This is done for each time step, proceeding backward in time for $t = T-1, \ldots, 0$. To find the coefficients $W_t$ for a given time step, we first note that the linear Bellman equation (9.18) for a fixed policy $\pi$ can be expressed in a form that only involves the action-value function, by noting that for an arbitrary policy $\pi$, we have $V^{\pi}_t(x_t) = Q^{\pi}_t(x_t, \pi(x_t))$. Using this in the Bellman equation (9.18), we write it in the following form:

$$ Q^{\pi}_t(x_t, a_t) = R_t(x_t, a_t) + \gamma \, \mathbb{E}_t\left[ Q^{\pi}_{t+1}\big(X_{t+1}, \pi(X_{t+1})\big) \,\middle|\, x_t, a_t \right]. $$

Similar to Eq. (9.57), we can interpret Eq. (9.69) as a regression of the form

$$ R_t(x_t, a_t, x_{t+1}) + \gamma Q^{\pi}_{t+1}\big(x_{t+1}, \pi(x_{t+1})\big) = W_t \Phi(x_t, a_t) + \varepsilon_t, $$

where $R_t(x_t, a_t, x_{t+1})$ is a random reward, and $\varepsilon_t$ is a random noise at time $t$ with mean zero. Assume we have access to sample transitions $\big(X^{(k)}_t, a^{(k)}_t, R^{(k)}_t, X^{(k)}_{t+1}\big)$ with $k = 1, \ldots, N$ for each $t = T-1, \ldots, 0$. For a given policy $\pi$, the coefficients $W_t$ can then be found by solving the following least-squares optimization problem, which is similar to Eq. (9.58) above:

$$ L_t(W_t) = \sum_{k=1}^{N} \left( R_t\big(X^{(k)}_t, a^{(k)}_t, X^{(k)}_{t+1}\big) + \gamma Q^{\pi}_{t+1}\big(X^{(k)}_{t+1}, \pi(X^{(k)}_{t+1})\big) - W_t \Phi\big(X^{(k)}_t, a^{(k)}_t\big) \right)^2. $$

For a solution of this equation, see Exercise 9.14.
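The least-squares step for $W_t$ can be sketched directly. The basis functions, the sample transitions, and the next-step Q-values below are invented for illustration; with $K = 2$ the normal equations can be solved by hand:

```python
# Sketch of the LSPI least-squares step: fit W_t in
# Q(x, a) ~ W_t . Phi(x, a) to targets y = R + gamma * Q_{t+1}(x', pi(x')).
# Basis, samples, and next-step values are invented; K = 2 lets us solve
# the normal equations A W = b explicitly.

gamma = 0.9

def Phi(x, a):
    return [1.0, x * a]          # an assumed toy set of K = 2 basis functions

# Tuples (x, a, R, q_next) with q_next = Q_{t+1}(x', pi(x')) already evaluated
samples = [(0.0, 1.0, 1.0, 0.0),
           (1.0, 1.0, 2.0, 1.0),
           (2.0, -1.0, 0.5, 2.0)]

# Accumulate the normal equations for the quadratic loss L_t(W_t)
A = [[0.0, 0.0], [0.0, 0.0]]
b = [0.0, 0.0]
for x, a, R, q_next in samples:
    p = Phi(x, a)
    y = R + gamma * q_next
    for i in range(2):
        b[i] += p[i] * y
        for j in range(2):
            A[i][j] += p[i] * p[j]

# Solve the 2x2 system A W = b by Cramer's rule
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
W = [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
     (b[1] * A[0][0] - b[0] * A[1][0]) / det]
print(W)
```

In a full LSPI iteration this solve alternates with the greedy policy improvement step, sweeping backward over time steps until the value function stops changing.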
Note that for an MDP with a finite state-action space, finding an optimal policy using (9.67) is straightforward, and is achieved by enumeration of the possible actions for each state. When the action space is continuous, it takes more effort. Consider, for example, the case where both the state and action spaces are one-dimensional continuous spaces. To use Eq. (9.67) in such a setting, we discretize the range of values of $x_t$ to a set of discrete values $x_n$. We can first compute the optimal action for these values, and then use splines to interpolate for the remaining values of $x_t$. For a given set of coefficients $W_{tk}$, the policy $\pi_t(x_t)$ is then represented by a spline-interpolated function.

Example 9.7 LSPI for Optimal Allocation
Recall from Chap. 1 the problem of an investor who starts with an initial wealth $W_0 = 1$ at time $t = 0$ and, at each period $t = 0, \ldots, T-1$, allocates a fraction $u_t = u_t(S_t)$ of the total portfolio value to the risky asset, while the remaining fraction $1 - u_t$ is invested in a risk-free bank account that pays a risk-free interest rate $r_f = 0$. If the wealth process is self-financing and the one-step rewards $R_t$ for $t = 0, \ldots, T-1$ are the risk-adjusted portfolio returns

$$ R_t = r_t - \lambda \, \text{Var}\left[ r_t \mid S_t \right] = u_t \phi_t - \lambda u_t^2 \, \text{Var}\left[ \phi_t \mid S_t \right], $$

then the optimal investment problem for $T-1$ steps is given by

$$ V^{\pi}(s) = \max_{u_t} \mathbb{E}\left[ \sum_{t=0}^{T-1} R_t \,\middle|\, S_t = s \right] = \max_{u_t} \mathbb{E}\left[ \sum_{t=0}^{T-1} \left( u_t \phi_t - \lambda u_t^2 \, \text{Var}\left[ \phi_t \mid S_t \right] \right) \middle|\, S_t = s \right], \qquad (9.73) $$

where we allow for short selling in the ETF (i.e., $u_t < 0$) and borrowing of cash ($u_t > 1$). We apply the LSPI algorithm to $N = 2000$ simulated stock prices over $T = 10$ periods: $\{\{S^{(i)}_t\}_{i=1}^{N}\}_{t=1}^{T}$. At each time period, we construct a basis $\{\Phi_k(s,a)\}_{k=1}^{K}$ over the state-action space using $K = 256$ B-spline basis functions, where $s \in [\min(\{S^{(i)}_t\}_{i=1}^{N}), \max(\{S^{(i)}_t\}_{i=1}^{N})]$ and $a \in [-1, 1]$.
Note that in this particular simple problem, the actions are independent of the state space and hence the basis construction over the state-action space is not really needed. However, our motive here is to show that we can obtain an estimate close to the exact solution, $u^{\star}_t = \frac{\mathbb{E}[\phi_t]}{2\lambda \, \text{Var}[\phi_t]}$, using a more general methodology. The action $a_{T-1}$ is initialized with uniform random samples at time step $t = T-1$. In subsequent time steps, $a_t$, $t \in \{T-1, T-2, \ldots, 0\}$, we initialize with the previous optimal action, $a_t = a^{\star}_{t+1}$. LSPI updates the policy iteratively, $\pi^{k-1} \to \pi^{k}$, until convergence of the value function. The Q-function is maximized over a gridded state-action space with 200 stock values and 20 action values, to give the optimal action over the grid:

$$ a^{k}_t(s) = \pi^{k}_t(s) = \arg\max_{a} Q^{\pi}_t(s, a). $$

For each time step, LSPI updates the policy $\pi^{k}$ until the following stopping criterion is satisfied: $\| V^{\pi^{k}} - V^{\pi^{k-1}} \|_2 \leq \tau$, where $\tau = 1 \times 10^{-6}$. The optimal allocation using the LSPI algorithm (red) is compared against the exact solution (blue) in Fig. 9.9. The implementation of the LSPI algorithm and its application to this optimal allocation problem is given in the accompanying Python notebook.

Fig. 9.9 Stock prices are simulated using an Euler scheme over a one year horizon. At each of ten periods, the optimal allocation is estimated using the LSPI algorithm (red) and compared against the exact solution (blue). (a) Stock price simulation. (b) Optimal allocation

5.9 Deep Reinforcement Learning

A good choice of basis functions for use in linear architectures (9.56) may be a hard problem for many practical applications, especially if the dimensionality of the data grows, or if the data become highly non-linear, or both. This problem is also known as the feature engineering problem, and it is common to all types of machine learning, rather than being specific to reinforcement learning.
Learning representative features is an interesting and actively researched topic in the machine learning literature, and various supervised and unsupervised algorithms have been suggested to address such tasks. Instead of pursuing hand-engineered or algorithm-driven features, defined in general as parameter-based functional transforms of the original data, we can resort to universal function approximation methods such as trees or neural networks, considered as parameterized "black-box" algorithms. In particular, deep reinforcement learning approaches rely on multi-level neural networks to represent value functions and/or policy functions. For example, if an action-value function $Q(s,a)$ is represented by a multilayer neural network, one way of thinking about it would be in terms of the linear architecture specification (9.56), where the parameters $\theta_k$ represent the weights of the last linear layer of the neural network, while the previous layers generate certain "black-box"-type basis functions $\psi_k(s,a)$ that are parameterized in terms of their own parameters. This approach may be advantageous when an action-value function is highly non-linear and no clear-cut choice of a good set of basis functions can be immediately suggested. In particular, functions of high variation appear in the analysis of images, videos, and video games. For such applications, using deep neural networks as function approximators is very useful. A strong push to this area of research was initiated by Google DeepMind's work on using deep Q-learning for playing Atari video games. Since we cannot reasonably learn and store a Q value for each state-action tuple when the state space is continuous, we represent our Q values as a function $\hat{q}(s, a, w)$, where $w$ are the parameters of the function (typically, a neural network's weights and biases). In this approximation setting, our update rule becomes

$$ w \leftarrow w + \alpha \left( r + \gamma \max_{a' \in A} \hat{q}(s', a', w) - \hat{q}(s, a, w) \right) \nabla_w \, \hat{q}(s, a, w). $$
In other words, we seek to minimize

$$ L(w) = \mathbb{E}\left[ \left( r + \gamma \max_{a' \in A} \hat{q}(s', a', w) - \hat{q}(s, a, w) \right)^2 \right]. $$

Target Network
DeepMind (Mnih et al. 2015) maintain two sets of parameters, $w$ (to compute $\hat{q}(s,a,w)$) and $w^{-}$ (the target network, to compute $\hat{q}(s', a', w^{-})$), so that our update rule becomes

$$ w \leftarrow w + \alpha \left( r + \gamma \max_{a' \in A} \hat{q}(s', a', w^{-}) - \hat{q}(s, a, w) \right) \nabla_w \, \hat{q}(s, a, w). $$

The target network's parameters are updated with the Q-network's parameters occasionally and are kept fixed between individual updates. Note that when computing the update, we do not compute gradients with respect to $w^{-}$ (these are considered fixed weights).

Replay Memory
As we play, we store our transitions $(s, a, r, s')$ in a buffer. Old examples are deleted as we store new transitions. To update our parameters, we sample a mini-batch from the buffer and perform a stochastic gradient descent update.

ε-Greedy Exploration Strategy
During training, we use an ε-greedy heuristic strategy. DeepMind (Mnih et al. 2015) decrease ε from 1 to 0.1 during the first million steps. At test time, the agent chooses a random action with probability ε = 0.05:

$$ \pi(a|s) = \begin{cases} 1 - \varepsilon, & a = \arg\max_{a'} Q_t(s, a') \\ \varepsilon / |A|, & a \neq \arg\max_{a'} Q_t(s, a'). \end{cases} $$

There are several points to note:
a. $w$ updates every learning_freq steps by using a mini-batch of experiences sampled from the replay buffer.
b. DeepMind's deep Q-network takes as input the state $s$ and outputs a vector of size equal to the number of actions. In our environment, we have $|A|$ actions, thus $\hat{q}(s, w) \in \mathbb{R}^{|A|}$.
c. The input of the deep Q-network can be based on both the current and the history of observations of the environment.

The practice of using deep Q-learning on finance problems is problematic. The sheer number of parameters that need to be tuned and configured renders the approach much more complex than Q-learning, LSPI, and of course deep learning in a supervised learning setting.
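The update with a frozen target network can be sketched with a deliberately simple linear $\hat{q}$ in place of a neural network; all numbers below are invented for illustration, and a real DQN would replace `q` with a network and compute the gradient via a deep learning framework:

```python
# Sketch of the DQN update with a frozen target network:
# w <- w + alpha * (r + gamma * max_a' q(s', a', w_minus) - q(s, a, w)) * grad_w q.
# For transparency, q(s, a, w) = w[a] * s is a toy linear model, not a deep net.

alpha, gamma = 0.1, 0.9
w       = [0.5, -0.2]     # online parameters, one weight per action
w_minus = [0.4, 0.3]      # target-network parameters (held fixed)

def q(s, a, params):
    return params[a] * s

def dqn_update(s, a, r, s_next):
    # Bootstrap target uses the frozen parameters w_minus, not w
    target = r + gamma * max(q(s_next, ap, w_minus) for ap in (0, 1))
    td = target - q(s, a, w)
    w[a] += alpha * td * s    # grad_w q(s, a, w) = s for the chosen action's weight

dqn_update(s=1.0, a=0, r=1.0, s_next=2.0)
print(w)
```

Only the online weights $w$ move; $w^{-}$ would be overwritten with a copy of $w$ only every fixed number of steps, which is what stabilizes the bootstrapped targets.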
However, deep Q-learning is one of the few approaches which scales to high-dimensional discrete and/or continuous state and action spaces.

6 Summary

This chapter has introduced the reader to reinforcement learning, with some toy examples of how it is useful for solving problems in finance. The emphasis of the chapter is on understanding the various algorithms and RL approaches. The reader should check the following learning objectives:
– Gain familiarity with Markov Decision Processes;
– Understand the Bellman equation and classical methods of dynamic programming;
– Gain familiarity with the ideas of reinforcement learning and other approximate methods of solving MDPs;
– Understand the difference between off-policy and on-policy learning algorithms; and
– Gain insight into how RL is applied to optimization problems in asset management and trading.
The next chapter will present much more in-depth examples of how RL is applied in more realistic financial models.

7 Exercises

Exercise 9.1
Consider an MDP with a reward function $r_t = r(s_t, a_t)$. Let $Q^{\pi}(s,a)$ be an action-value function for policy $\pi$ for this MDP, and $\pi^{\star}(a|s) = \arg\max_{\pi} Q^{\pi}(s,a)$ be an optimal greedy policy. Assume we define a new reward function as an affine transformation of the previous reward: $\tilde{r}_t = w r_t + b$, with constant parameters $b$ and $w > 0$. How does the new optimal policy $\tilde{\pi}^{\star}$ relate to the old optimal policy $\pi^{\star}$?

Exercise 9.2
With True/False questions, give a short explanation to support your answer.
– True/False: Value iteration always finds the optimal policy, when run to convergence. [3]
– True/False: Q-learning is an on-policy learning (value iteration) algorithm and estimates updates to the action-value function, $Q(s,a)$, using actions taken under the current policy $\pi$. [5]
– For Q-learning to converge we need to correctly manage the exploration vs. exploitation tradeoff. What property needs to hold for the exploration strategy?
[4]
– True/False: Q-learning with linear function approximation will always converge to the optimal policy. [2]

Table 9.4 The reward function depends on fund wealth w and time

  w    t0    t1
  2     0      0
  1     0    −10

Exercise 9.3*
Consider the following toy cash buffer problem. An investor owns a stock, initially valued at $S_{t_0} = 1$, and must ensure that their wealth (stock + cash) is not less than a certain threshold $K$ at time $t = t_1$. Let $W_t = S_t + C_t$ denote their wealth at time $t$, where $C_t$ is the total cash in the portfolio. If the wealth $W_{t_1} < K = 2$, then the investor is penalized with a −10 reward. The investor chooses to inject either 0 or 1 amounts of cash, with a respective penalty of 0 or −1 (which is not deducted from the fund).

Dynamics: The stock price follows a discrete Markov chain with $P(S_{t+1} = s \mid S_t = s) = 0.5$, i.e. with probability 0.5 the stock remains at the same price over the time interval, and $P(S_{t+1} = s+1 \mid S_t = s) = P(S_{t+1} = s-1 \mid S_t = s) = 0.25$. If the wealth moves off the grid it simply bounces to the nearest value in the grid at that time. The states are grid squares, identified by their row and column number (row first). The investor always starts in state (1,0) (i.e., the initial wealth $W_{t_0} = 1$ at time $t_0 = 0$—there is no cash in the fund) and both states in the last column (i.e., at time $t = t_1 = 1$) are terminal (Table 9.4).

Using the Bellman equation (with generic state notation), give the first round of value iteration updates for each state by completing the table below. You may ignore the time value of money, i.e. set $\gamma = 1$.

$$ V_{i+1}(s) = \max_a \left( \sum_{s'} T(s, a, s') \left( R(s, a, s') + \gamma V_i(s') \right) \right) $$

  (w,t)   V0(w)   V1(w)
  (1,0)     0       ?
  (2,0)     0      NA

Exercise 9.4*
Consider the following toy cash buffer problem. An investor owns a stock, initially valued at $S_{t_0} = 1$, and must ensure that their wealth (stock + cash) does not fall below a threshold $K = 1$ at time $t = t_1$. The investor can choose to either sell the stock or inject more cash, but not both.
In the former case, the sale of the stock at time $t$ results in an immediate cash update $s_t$ (you may ignore transaction costs). If the investor chooses to inject a cash amount $c_t \in \{0, 1\}$, there is a corresponding penalty of $-c_t$ (which is not taken from the fund). Let $W_t = S_t + C_t$ denote their wealth at time $t$, where $C_t$ is the total cash in the portfolio.

Dynamics: The stock price follows a discrete Markov chain with $P(S_{t+1} = s \mid S_t = s) = 0.5$, i.e. with probability 0.5 the stock remains at the same price over the time interval, and $P(S_{t+1} = s+1 \mid S_t = s) = P(S_{t+1} = s-1 \mid S_t = s) = 0.25$. If the wealth moves off the grid it simply bounces to the nearest value in the grid at that time. The states are grid squares, identified by their row and column number (row first). The investor always starts in state (1,0) (i.e., the initial wealth $W_{t_0} = 1$ at time $t_0 = 0$—there is no cash in the fund) and both states in the last column (i.e., at time $t = t_1 = 1$) are terminal.

Using the Bellman equation (with generic state notation), give the first round of value iteration updates for each state by completing the table below. You may ignore the time value of money, i.e. set $\gamma = 1$.

$$ V_{i+1}(s) = \max_a \left( \sum_{s'} T(s, a, s') \left( R(s, a, s') + \gamma V_i(s') \right) \right) $$

  (w,t)   V0(w)   V1(w)
  (0,0)     0       ?
  (1,0)     0       ?

Exercise 9.5*
Deterministic policies such as the greedy policy $\pi^{\star}(a|s) = \arg\max_{\pi} Q^{\pi}(s,a)$ are invariant with respect to a shift of the action-value function by an arbitrary function of the state $f(s)$: $\pi^{\star}(a|s) = \arg\max_{\pi} Q^{\pi}(s,a) = \arg\max_{\pi} \tilde{Q}^{\pi}(s,a)$, where $\tilde{Q}^{\pi}(s,a) = Q^{\pi}(s,a) - f(s)$. Show that this implies that the optimal policy is also invariant with respect to the following transformation of an original reward function $r(s_t, a_t, s_{t+1})$:

$$ \tilde{r}(s_t, a_t, s_{t+1}) = r(s_t, a_t, s_{t+1}) + \gamma f(s_{t+1}) - f(s_t). $$

This transformation of a reward function is known as reward shaping (Ng et al. 1999). It has been used in reinforcement learning to accelerate learning in certain settings.
In the context of inverse reinforcement learning, reward shaping invariance has far-reaching implications, as we will discuss later in the book.

Table 9.5 The reward function depends on fund wealth w and time

    w    t0    t1
    1     0     0
    0     0   −10

Exercise 9.6**

Define the occupancy measure $\rho_\pi: S \times A \to \mathbb{R}$ by the relation

$$ \rho_\pi(s, a) = \pi(a|s) \sum_t \gamma^t \Pr\left(s_t = s \mid \pi\right), $$

where $\Pr(s_t = s \mid \pi)$ is the probability density of the state $s = s_t$ at time $t$ following policy $\pi$. The occupancy measure $\rho_\pi(s, a)$ can be interpreted as an unnormalized density of state-action pairs. It can be used, e.g., to specify the value function as an expectation value of the reward: $V = \langle r(s, a) \rangle_\rho$.

a. Compute the policy in terms of the occupancy measure $\rho_\pi$.
b. Compute a normalized occupancy measure $\tilde{\rho}_\pi(s, a)$. How different will the policy be if we use the normalized measure $\tilde{\rho}_\pi(s, a)$ instead of the unnormalized measure $\rho_\pi$?

Exercise 9.7**

Theoretical models for reinforcement learning typically assume that rewards $r_t := r(s_t, a_t, s_{t+1})$ are bounded, $r_{min} \le r_t \le r_{max}$, with some fixed values $r_{min}, r_{max}$. On the other hand, some models of rewards used by practitioners may produce (numerically) unbounded rewards. For example, with linear architectures, a popular choice of a reward function is a linear expansion $r_t = \sum_{k=1}^K \theta_k \Phi_k(s_t, a_t)$ over a set of $K$ basis functions $\Phi_k$. Even if one chooses a set of bounded basis functions, this expression may become unbounded via a choice of coefficients $\theta_k$.

a. Use the policy invariance under linear transforms of rewards (see Exercise 9.1) to equivalently formulate the same problem with rewards that are bounded to the unit interval [0, 1], so they can be interpreted as probabilities.
b. How could you modify a linear unbounded specification of reward $r_\theta(s, a, s') = \sum_{k=1}^K \theta_k \Phi_k(s, a, s')$ to a bounded reward function with values in a unit interval [0, 1]?
Exercise 9.8

Consider an MDP with a finite number of states and actions in a real-time setting where the agent learns to act optimally using the ε-greedy policy. The ε-greedy policy amounts to taking an action $a^\star = \arg\max_{a'} Q(s, a')$ in each state $s$ with probability $1 - \varepsilon$, and taking a random action with probability $\varepsilon$. Will SARSA and Q-learning converge to the same solution under such a policy, using a constant value of $\varepsilon$? What will be different in the answer if $\varepsilon$ decays with the epoch, e.g. as $\varepsilon_t \sim 1/t$?

Exercise 9.9

Consider the following single-step random cost (negative reward)

$$ C(s_t, a_t, s_{t+1}) = \eta a_t + \left(K - s_{t+1} - a_t\right)^+, $$

where $\eta$ and $K$ are some parameters. You can use such a cost function to develop an MDP model for an agent learning to invest. For example, $s_t$ can be the current assets in a portfolio of equities at time $t$, $a_t$ an additional cash amount added to or subtracted from the portfolio at time $t$, and $s_{t+1}$ the portfolio value at the end of the time interval $[t, t+1)$. The second term is an option-like cost of a total portfolio (equity and cash) shortfall by time $t+1$ from a target value $K$. The parameter $\eta$ controls the relative importance of paying costs now as opposed to delaying payment.

a. What is the corresponding expected cost for this problem, if the expectation is taken w.r.t. the stock prices and $a_t$ is treated as deterministic?
b. Is the expected cost a convex or concave function of the action $a_t$?
c. Can you find an optimal one-step action $a_t^\star$ that minimizes the one-step expected cost?

Hint: For Part (i), you can use the following property:

$$ \frac{d}{dx}\left[y - x\right]^+ = \frac{d}{dx}\left[(y - x)\,H(y - x)\right], $$

where $H(x)$ is the Heaviside function.

Exercise 9.10

Exercise 9.9 presented a simple single-period cost function that can be used in the setting of model-free reinforcement learning. We can now formulate a model-based version for such an option-like reward.
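For Exercise 9.8, it helps to see the two tabular updates side by side; this is our own sketch of the standard rules, not code from the book. The only difference is the bootstrap target: SARSA uses the action actually taken next (on-policy), while Q-learning uses the greedy action (off-policy).

```python
import random

def eps_greedy(Q, s, actions, eps, rng):
    # With probability eps take a random action, otherwise the greedy one.
    if rng.random() < eps:
        return rng.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])

def sarsa_update(Q, s, a, r, s2, a2, alpha, gamma):
    # On-policy target: bootstrap on the next action a2 that was actually taken.
    Q[(s, a)] += alpha * (r + gamma * Q[(s2, a2)] - Q[(s, a)])

def q_learning_update(Q, s, a, r, s2, actions, alpha, gamma):
    # Off-policy target: bootstrap on the greedy action in s2.
    best = max(Q[(s2, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])
```

With a constant ε the behavior policy never becomes greedy, which is the crux of the exercise; a decaying ε makes the two targets coincide in the limit.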
To this end, we use the following specification of the random end-of-period portfolio state:

$$ s_{t+1} = (1 + r_t)\,s_t, \qquad r_t = G(F_t) + \varepsilon_t. $$

In words, the initial portfolio value $s_t + a_t$ in the beginning of the interval $[t, t+1)$ grows with a random return $r_t$ given by a function $G(F_t)$ of factors $F_t$, corrupted by noise $\varepsilon_t$ with $\mathbb{E}[\varepsilon] = 0$ and $\mathbb{E}[\varepsilon^2] = \sigma^2$.

a. Obtain the form of the expected cost for this specification in Exercise 9.9.
b. Obtain the optimal single-step action for this case.
c. Compute the sensitivity of the optimal action with respect to the $i$-th factor $F_{it}$, assuming the sigmoid link function $G(F_t) = \sigma\left(\sum_i \omega_i F_{it}\right)$ and Gaussian noise $\varepsilon_t$.

Exercise 9.11

Assuming a discrete set of actions $a_t \in A$ of dimension $K$, show that deterministic policy optimization by the greedy policy of Q-learning, $Q(s_t, a_t^\star) = \max_{a_t \in A} Q(s_t, a_t)$, can be equivalently expressed as maximization over a set of probability distributions $\pi(a_t)$ with probabilities $\pi_k$ for $a_t = A_k$, $k = 1, \ldots, K$ (this relation is known as Fenchel duality):

$$ \max_{a_t \in A} Q(s_t, a_t) = \max_{\{\pi_k\}} \sum_{k=1}^K \pi_k Q(s_t, A_k) \quad \text{s.t.} \quad 0 \le \pi_k \le 1, \;\; \sum_{k=1}^K \pi_k = 1. $$

Exercise 9.12**

The reformulation of a deterministic policy search in terms of a search over probability distributions given in Exercise 9.11 is a mathematical identity where the end result is still a deterministic policy. We can convert it to a probabilistic policy search if we modify the objective function

$$ \max_{a_t \in A} Q(s_t, a_t) = \max_{\{\pi_k\}} \sum_k \pi_k Q(s_t, A_k) \quad \text{s.t.} \quad 0 \le \pi_k \le 1, \;\; \sum_k \pi_k = 1 $$

by adding to it a KL divergence of the policy $\pi$ with some reference ("prior") policy $\omega$:

$$ G^\star(s_t, a_t) = \max_\pi \sum_{k=1}^K \pi_k Q(s_t, A_k) - \frac{1}{\beta} \sum_{k=1}^K \pi_k \log\frac{\pi_k}{\omega_k}, $$

where $\beta$ is a regularization parameter controlling the relative importance of the two terms that enforce, respectively, maximization of the action-value function and a preference for a previous reference policy $\omega$ with probabilities $\omega_k$.
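A quick numerical illustration of the objective above (our own sketch, not the book's code): the maximizer of the entropy-regularized objective is the Gibbs-form policy $\pi_k \propto \omega_k e^{\beta Q_k}$, which interpolates between the prior $\omega$ ($\beta \to 0$) and the greedy policy ($\beta \to \infty$).

```python
import math

# pi_k ∝ omega_k * exp(beta * Q_k) maximizes
#   sum_k pi_k Q_k - (1/beta) sum_k pi_k log(pi_k / omega_k)  s.t. sum_k pi_k = 1.
def optimal_policy(Q, omega, beta):
    w = [o * math.exp(beta * q) for q, o in zip(Q, omega)]
    z = sum(w)
    return [x / z for x in w]

Q = [1.0, 2.0, 0.5]        # action values Q(s, A_k) for a fixed state s (illustrative)
omega = [1/3, 1/3, 1/3]    # uniform reference policy

small_beta = optimal_policy(Q, omega, beta=1e-6)   # nearly the prior omega
large_beta = optimal_policy(Q, omega, beta=50.0)   # mass concentrates on argmax_k Q_k
```

Varying β between these extremes traces out the trade-off the regularization parameter controls.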
When the parameter $\beta$ is finite ($\beta < \infty$), this produces a stochastic rather than deterministic optimal policy $\pi^\star(a|s)$. Find the optimal policy $\pi^\star(a|s)$ from the entropy-regularized functional $G(s_t, a_t)$ (Hint: use the method of Lagrange multipliers to enforce the normalization constraint $\sum_k \pi_k = 1$).

Exercise 9.13**

Regularization by KL divergence with a reference distribution $\omega$, introduced in the previous exercise, can be extended to a multi-period setting. This produces maximum entropy reinforcement learning, which augments the standard RL reward by an additional entropy penalty term in the form of a KL divergence. The optimal value function in MaxEnt RL is

$$ F^\star(s) = \max_\pi \mathbb{E}\left[\left. \sum_{t=0}^\infty \gamma^t \left( r(s_t, a_t, s_{t+1}) - \frac{1}{\beta}\log\frac{\pi(a_t|s_t)}{\pi_0(a_t|s_t)} \right) \right| s_0 = s \right], \tag{9.78} $$

where $\mathbb{E}[\cdot]$ stands for an average under a stationary distribution $\rho_\pi(a) = \sum_s \mu_\pi(s)\pi(a|s)$, where $\mu_\pi(s)$ is a stationary distribution over states induced by the policy $\pi$, and $\pi_0$ is some reference policy. Show that the optimal policy for this entropy-regularized MDP problem has the following form:

$$ \pi^\star(a|s) = \frac{1}{Z_t}\,\pi_0(a_t|s_t)\,e^{\beta G_t^\pi(s_t, a_t)}, \qquad Z_t \equiv \sum_{a_t} \pi_0(a_t|s_t)\,e^{\beta G_t^\pi(s_t, a_t)}, \tag{9.79} $$

where $G_t^\pi(s_t, a_t) = \mathbb{E}_\pi\left[r(s_t, a_t, s_{t+1})\right] + \gamma \sum_{s_{t+1}} p(s_{t+1}|s_t, a_t)\, F_{t+1}^\pi(s_{t+1})$. Check that the limit $\beta \to \infty$ reproduces the standard deterministic policy, that is, $\lim_{\beta\to\infty} V^\star(s) = \max_\pi V^\pi(s)$, while in the opposite limit $\beta \to 0$ we obtain a random and uniform policy. We will return to entropy-regularized value-based RL and stochastic policies such as (9.79) (which are sometimes referred to as Boltzmann policies) in later chapters of this book.

Exercise 9.14*

Show that the solution for the coefficients $W_{tk}$ in the LSPI method (see Eq. (9.71)) is $W_t^\star = S_t^{-1} M_t$, where $S_t$ is a matrix and $M_t$ is a vector with the following elements:

$$ S_{nm}^{(t)} = \sum_k \Phi_n\left(X_t^{(k)}, a_t^{(k)}\right)\Phi_m\left(X_t^{(k)}, a_t^{(k)}\right), $$

$$ M_n^{(t)} = \sum_k \Phi_n\left(X_t^{(k)}, a_t^{(k)}\right)\left[ R_t\left(X_t^{(k)}, a_t^{(k)}, X_{t+1}^{(k)}\right) + \gamma\, Q_{t+1}^\pi\left(X_{t+1}^{(k)}, \pi\left(X_{t+1}^{(k)}\right)\right) \right]. $$
Exercise 9.15**

Consider the Boltzmann weighted average of a function $h(i)$ defined on a binary set $I = \{1, 2\}$:

$$ \mathrm{Boltz}_\beta\, h = \frac{\sum_{i\in I} h(i)\, e^{\beta h(i)}}{\sum_{i\in I} e^{\beta h(i)}}. $$

a. Verify that this operator smoothly interpolates between the max and the mean of $h(i)$, which are obtained in the limits $\beta \to \infty$ and $\beta \to 0$, respectively.

b. By taking $\beta = 1$, $h(1) = 100$, $h(2) = 1$, $h'(1) = 1$, $h'(2) = 0$, show that $\mathrm{Boltz}_\beta$ is not a non-expansion.

c. (Programming) Using operators that are not non-expansions can lead to a loss of a solution in a generalized Bellman equation. To illustrate this phenomenon, we use the following simple example. Consider the MDP problem on the set $I = \{1, 2\}$ with two actions $a$ and $b$ and the following specification: $p(1|1, a) = 0.66$, $p(2|1, a) = 0.34$, $r(1, a) = 0.122$ and $p(1|1, b) = 0.99$, $p(2|1, b) = 0.01$, $r(1, b) = 0.033$. The second state is absorbing, with $p(1|2) = 0$, $p(2|2) = 1$. The discount factor is $\gamma = 0.98$. Assume we use the Boltzmann policy

$$ \pi(a|s) = \frac{e^{\beta \hat{Q}(s,a)}}{\sum_a e^{\beta \hat{Q}(s,a)}}. $$

Show that the SARSA algorithm

$$ \hat{Q}(s, a) \leftarrow \hat{Q}(s, a) + \alpha\left[ r(s, a) + \gamma\, \hat{Q}(s', a') - \hat{Q}(s, a) \right], $$

where $a, a'$ are drawn from the Boltzmann policy with $\beta = 16.55$ and $\alpha = 0.1$, leads to oscillating solutions for $\hat{Q}(s_1, a)$ and $\hat{Q}(s_1, b)$ that do not achieve stable states with an increased number of iterations.

Exercise 9.16**

An alternative continuous approximation to the intractable max operator in the Bellman optimality equation is given by the mellowmax function (Asadi and Littman 2016):

$$ \mathrm{mm}_\omega(X) = \frac{1}{\omega}\log\left(\frac{1}{n}\sum_{i=1}^n e^{\omega x_i}\right). $$

a. Show that the mellowmax function recovers the max function in the limit $\omega \to \infty$.

b. Show that mellowmax is a non-expansion.

Appendix

Answers to Multiple Choice Questions

Question 1 Answer: 2, 4.
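The counterexample in Exercise 9.15(b) can be checked numerically; the following is our own minimal sketch, not the book's notebook code.

```python
import math

# Boltzmann weighted average: Boltz_beta(h) = sum_i h_i e^{beta h_i} / sum_i e^{beta h_i}.
def boltz(h, beta):
    weights = [math.exp(beta * x) for x in h]
    return sum(x * w for x, w in zip(h, weights)) / sum(weights)

h1 = [100.0, 1.0]
h2 = [1.0, 0.0]
beta = 1.0

gap = abs(boltz(h1, beta) - boltz(h2, beta))        # about 99.27
max_diff = max(abs(x - y) for x, y in zip(h1, h2))  # = 99

# A non-expansion would require gap <= max_diff; here it fails:
print(gap > max_diff)  # True
```

Intuitively, Boltz_1 on h1 is pulled almost entirely to 100, while on h2 it sits near the mean, so the operator stretches the distance between the two inputs past 99.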
Question 2 Answer: 2, 4

Question 3 Answer: 5

Python Notebooks

The notebooks provided in the accompanying source code repository accompany many of the examples in this chapter, including Q-learning and SARSA for the financial cliff walking problem, the market impact problem, and electronic market making. The repository also includes an example implementation of the LSPI algorithm for optimal allocation in a Markowitz portfolio. Further details of the notebooks are included in the README.md file.

References

Asadi, K., & Littman, M. L. (2016). An alternative softmax operator for reinforcement learning. Proceedings of ICML.
Bellman, R. E. (1957). Dynamic programming. Princeton, NJ: Princeton University Press.
Bertsekas, D. (2012). Dynamic programming and optimal control (vol. I and II), 4th edn. Athena Scientific.
Lagoudakis, M. G., & Parr, R. (2003). Least-squares policy iteration. Journal of Machine Learning Research, 4, 1107–1149.
Littman, M. L., & Szepesvári, C. (1996). A generalized reinforcement-learning model: convergence and applications. In Machine Learning, Proceedings of the Thirteenth International Conference (ICML '96), Bari, Italy.
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., et al. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529–533.
Robbins, H., & Monro, S. (1951). A stochastic approximation method. Ann. Math. Statistics, 22, 400–407.
Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction, 2nd edn. MIT.
Szepesvári, C. (2010). Algorithms for reinforcement learning. Morgan & Claypool.
Thompson, W. R. (1935). On a criterion for the rejection of observations and the distribution of the ratio of deviation to sample standard deviation. Ann. Math. Statist., 6(4), 214–219.
Thompson, W. R. (1933). On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3), 285–294.
van Hasselt, H. (2010).
Double Q-learning. Advances in Neural Information Processing Systems. http://papers.nips.cc/

Chapter 10 Applications of Reinforcement Learning

This chapter considers real-world applications of reinforcement learning in finance, as well as further advances in the theory presented in the previous chapter. We start with one of the most common problems of quantitative finance, which is the problem of optimal portfolio trading in discrete time. Many practical problems of trading or risk management amount to different forms of dynamic portfolio optimization, with different optimization criteria, portfolio composition, and constraints. This chapter introduces a reinforcement learning approach to option pricing that generalizes the classical Black–Scholes model to a data-driven approach using Q-learning. It then presents a probabilistic extension of Q-learning called G-learning and shows how it can be used for dynamic portfolio optimization. For certain specifications of reward functions, G-learning is semi-analytically tractable and amounts to a probabilistic version of linear quadratic regulators (LQR). Detailed analyses of such cases are presented, and show their solutions with examples from problems of dynamic portfolio optimization and wealth management.

1 Introduction

In this chapter, we consider real-world applications of reinforcement learning in finance. We start with one of the most common problems of quantitative finance, which is the problem of optimal portfolio trading. Many practical problems of trading or risk management amount to different forms of dynamic portfolio optimization, with different optimization criteria, portfolio composition, and constraints. For example, the problem of optimal stock execution can be viewed as a problem of optimal dynamic management of a portfolio of stocks of the same company, with the objective being minimization of slippage costs of selling the stock.
A more traditional example of dynamic portfolio optimization is given by asset managers and mutual or pension funds who usually manage investment portfolios over long time horizons (months or years).

© Springer Nature Switzerland AG 2020. M. F. Dixon et al., Machine Learning in Finance, https://doi.org/10.1007/978-3-030-41068-1_10

Intra-day trading, which is more typical of hedge funds, can also be thought of as a dynamic portfolio optimization problem with a different portfolio choice, time step, constraints, and so on. In addition to different time horizons and objective functions, details of a portfolio optimization problem determine choices for features and hence for a state description. For example, management of long-horizon investment portfolios typically involves macroeconomic factors but not limit order book data, while for intra-day trading it is the opposite case.

Dynamic portfolio management is a problem of stochastic optimal control, where control variables are represented by changes in positions in different assets in a portfolio made by a portfolio manager, and state variables describe the current composition of the portfolio, prices of its assets, and possibly other relevant features including market indices, bid–ask spreads, etc. If we consider a large market player whose trades can move the market, the actions of such a trader may produce a feedback loop effect. The latter is referred to in the financial literature as a "market impact effect." All the above elements of dynamic portfolio management make it suitable for applying methods of dynamic programming and reinforcement learning. While the previous chapter introduced the main concepts and methods of reinforcement learning, here we want to take a more detailed look at practical applications for portfolio management problems.
When viewed as problems of optimal control to be addressed using reinforcement learning, such problems typically have a very high-dimensional state-action space. Indeed, even if we constrain ourselves to actively traded US stocks, we get around three thousand stocks. If we add to this other assets such as futures, exchange-traded funds, government and corporate bonds, etc., we may end up with state spaces of dimensions of many thousands. Even in the more specialized case of an equity fund whose objective is to beat a given benchmark portfolio, the investment universe may be tens or hundreds of stocks. This means that such applications of reinforcement learning in finance have to handle (very) high-dimensional and typically continuous (or approximately continuous) state-action spaces.

Clearly, such high-dimensional RL problems are far more complex than the simple low-dimensional examples typically used to test reinforcement learning methods, such as the inverted pendulum or cliff walking problems described in the Sutton–Barto book, or the "financial cliff walking" problem presented in the previous chapter. Modern RL methods applied to problems outside of finance typically operate with action space dimensions measured in tens but not hundreds or thousands, and typically have a far larger signal-to-noise ratio than financial problems. Low signal-to-noise ratios and potentially very high dimensionality are therefore two marked differences of applications of reinforcement learning in finance as opposed to applications to video games and robotics. As high-dimensional optimal control problems are harder than low-dimensional ones, we first want to explore applications of reinforcement learning with low-dimensional portfolio optimization problems.
Such an approach both sets the grounds for more complex high-dimensional applications and can be of independent interest when applied to practically interesting problems falling in this general class of dynamic portfolio optimization. The first problem that we address in this chapter is exactly of this kind: it is both low-dimensional and of practical interest on its own, rather than being a toy example for a multi-asset portfolio optimization. Namely, we will consider the classical problem of option pricing, in a formulation that closely resembles the framework of the celebrated Black–Scholes–Merton (BSM) model, also known as the Black–Scholes (BS) model, one of the cornerstones of modern quantitative finance (Black and Scholes 1973; Merton 1974).

Chapter Objectives

This chapter will present a few practical cases of using reinforcement learning in finance:

– RL for option pricing and optimal hedging (QLBS);
– G-learning for stock portfolios and linear quadratic regulators;
– RL for optimal consumption using G-learning; and
– RL for portfolio optimization using

The chapter is accompanied by two notebooks implementing the QLBS model for option pricing and hedging, and G-learning for wealth management. See Appendix "Python Notebooks" for further details.

2 The QLBS Model for Option Pricing

The BSM model was initially developed for the so-called plain vanilla European call and put options. A European call option is a contract that allows a buyer to obtain a given stock at some future time $T$ for a fixed price $K$. If $S_T$ is the stock price at time $T$, then the payoff to the option buyer at time $T$ is $(S_T - K)^+$. Similarly, a buyer of a European put option has the right to sell the stock at time $T$ for a fixed price $K$, with a terminal payoff of $(K - S_T)^+$.
European call and put options are among the simplest and most popular types of financial derivatives whose value is derived from (or driven by) the underlying stock (or more generally, the underlying asset). The core idea of the BSM model is that options can be priced using the relative value approach to asset pricing, which prices assets in terms of other tradable assets. The relative pricing method for options is known as dynamic option replication. It is based on the observation that an option payoff depends only on the price of a stock at expiry of the option. Therefore, if we neglect other sources of uncertainty such as stochastic volatility, the option value at arbitrary times before the expiry should only depend on the stock value. This makes it possible to mimic the option using a simple portfolio made of the underlying stock and cash, which is called the hedge portfolio. The hedge portfolio is dynamically managed by continuously rebalancing its wealth between the stock and cash. Moreover, this is done in a self-financing way, meaning that there are no cash inflows/outflows in the portfolio except at the time of inception. The objective of dynamic replication is to mimic the option using the hedge portfolio as closely as possible. In the continuous-time setting of the original BSM model, it turns out that such dynamic replication can be made exact by a continuous rebalancing of the hedge portfolio between the stock and cash, such that the amount of stock coincides with the option price sensitivity with respect to the stock price. This makes the total portfolio made of the option and the hedge portfolio instantaneously risk-free, or equivalently it makes the option instantaneously perfectly replicable in terms of the stock and cash. Risk of mis-hedging between the option and its underlying is instantaneously eliminated; therefore, the full portfolio involving the option and its hedge should earn a risk-free rate.
The option price in this limit does not depend on the risk preferences of investors. Such analysis performed in the continuous-time setting gives rise to the celebrated Black–Scholes partial differential equation (PDE) for option prices, whose solution produces option prices as deterministic functions of current stock prices. The Black–Scholes PDE can be derived using analysis of the hedge portfolio in discrete time with time steps $\Delta t$, and then taking the continuous-time limit $\Delta t \to 0$; see, e.g., Wilmott (1998). It can be shown that the resulting continuous-time BSM model does not amount to a problem of sequential decision making and does not in general reduce to any sort of optimization problem.

However, in option markets, rebalancing of the option replication (hedge) portfolio occurs at a finite frequency, e.g. daily. Frequent rebalancing can be costly due to transaction costs, which are altogether neglected in the classical BSM model. When transaction costs are added, a formal continuous-time limit may not even exist, as it leads to formally infinite option prices due to an infinite number of portfolio rebalancing acts. With a finite rebalancing frequency, perfect replication is no longer feasible, and the replicating portfolio will in general be different from the option value according to the amount of hedge slippage. The latter depends on the stock price evolution between consecutive rebalancing acts for the portfolio. Respectively, in the absence of perfect replication, a hedged option position carries some mis-hedging risk which the option buyer or seller should be compensated for. This means that once we revert from the idealized setting of continuous-time finance to a realistic setting of discrete-time finance, option pricing becomes dependent on investors' risk preferences.
If we take a view of an option seller agent in such a discrete-time setting, its objective should be to minimize some measures of slippage risk, also referred to as a "risk-adjusted cost of hedging" the option, by dynamic option replication. When viewed over the lifetime of an option, this setting can be considered a sequential decision-making process of minimization of slippage cost (or equivalently maximization of rewards determined as negative costs). While such a discrete-time approach converges to the Black–Scholes formulation in the limit of vanishing time steps, it offers both a more realistic setting, and allows one to focus on the key objective of option trading and pricing, which is risk minimization by hedging in a sequential decision-making process. This makes option pricing amenable to methods of reinforcement learning, and indeed, as we will see below, option pricing and hedging in discrete time amounts to reinforcement learning. Casting option pricing as a reinforcement learning task offers a few interesting insights. First, if we select a specific model for the stock price dynamics, we can use model-based reinforcement learning as a powerful sample-based (Monte Carlo) computational approach. The latter may be advantageous to other numerical methods such as finite differences for computing option prices and hedge ratios, especially when dimensionality of the state space goes beyond three or four. Second, we may rely on model-free reinforcement learning methods such as Q-learning, and bypass the need to build a model of stock price dynamics altogether.
RL provides a framework for model-free learning of option prices and hedges.¹ While we only consider the simplest setting for a reinforcement learning approach to pricing and hedging of European vanilla options (e.g., put or call options), the approach can be extended in a straightforward manner to more complex instruments including options on multiple assets, early exercises, option portfolios, market frictions, etc. The model presented in this chapter is referred to as the QLBS model, in recognition of the fact that it combines the Q-learning method of Watkins (1989); Watkins and Dayan (1992) with the method of dynamic option replication of the (time-discretized) Black–Scholes model. As Q-learning is a model-free method, this means that the QLBS model is also model-free. More accurately, it is distribution-free: option prices in this approach depend on the chosen utility function, but do not rely on any model for the stock price distribution, and instead use only samples from this distribution.

The QLBS model may also be of interest as a financial model which relates to the literature on hedging and pricing in incomplete markets (Föllmer and Schweizer 1989; Schweizer 1995; Černý and Kallsen 2007; Potters et al. 2001; Petrelli et al. 2010; Grau 2007). Unlike many previous models of this sort, QLBS ensures a full consistency of hedging and pricing at each time step, all within an efficient and data-driven Q-learning algorithm. Additionally, it extends the discrete-time BSM model. Extending Markowitz portfolio theory (Markowitz 1959) to a multi-period setting, Sect. 3 incorporates a drift in a risk/return analysis of the option's hedge portfolio. This extension allows one to consider both hedging and speculation with options in a consistent way within the same model, which is a challenge for the standard BSM model or its "phenomenological" generalizations, see, e.g., Wilmott (1998).
Following this approach, it turns out that all results of the classical BSM model (Black and Scholes 1973; Merton 1974) can be obtained as a continuous-time limit $\Delta t \to 0$ of a multi-period version of the Markowitz portfolio theory (Markowitz 1959), if the dynamics of stock prices are log-normal and the investment portfolio is self-replicating. However, this limit is degenerate: all fluctuations of the "true" option price asymptotically decay in this limit, resulting in a deterministic option price which is independent of the risk preferences of an investor. However, as long as the time step $\Delta t$ is kept finite, both the risk of option mis-hedging and the dependence of the option price on investor risk preferences persist. To the extent that option pricing in discrete time amounts to either DP (a.k.a. model-based RL), if a model is known, or RL, if a model is unknown, we may say that the classical continuous-time BSM model corresponds to the continuous-time limit of model-based reinforcement learning. In such a limit, all data requirements are reduced to just two numbers: the current stock price and volatility.

¹ Here we use the notion of model-free learning in the same context as it is normally used in the machine learning literature, namely as a method that does not rely on an explicit model of feature dynamics. Option prices and hedge ratios in the framework presented in this section depend on a model of rewards, and in this sense are model-dependent.

3 Discrete-Time Black–Scholes–Merton Model

We start with a discrete-time version of the BSM model. As is well known, the problem of option hedging and pricing in this formulation amounts to sequential risk minimization. The main open question is how to define risk in an option. In this part, we follow a local risk minimization approach pioneered in the work of Föllmer and Schweizer (1989), Schweizer (1995), and Černý and Kallsen (2007).
A similar method was developed by the physicists Potters et al. (2001); see also the work by Petrelli et al. (2010). We use a version of this approach suggested in Grau (2007). In this approach, we take the view of a seller of a European option (e.g., a put option) with maturity $T$ and a terminal payoff of $H_T(S_T)$ at maturity, which depends on the final stock price $S_T$ at that time. To hedge the option, the seller uses the proceeds of the sale to set up a replicating (hedge) portfolio $\Pi_t$ composed of the stock $S_t$ and a risk-free bank deposit $B_t$. The value of the hedge portfolio at any time $t \le T$ is

$$ \Pi_t = u_t S_t + B_t, \tag{10.1} $$

where $u_t$ is a position in the stock at time $t$, taken to hedge risk in the option.

3.1 Hedge Portfolio Evaluation

As usual, the replicating portfolio tries to exactly match the option price in all possible future states of the world. If we start at maturity $T$, when the option position is closed, the hedge $u_t$ should be closed at the same time; thus we set $u_T = 0$ and therefore

$$ \Pi_T = B_T = H_T(S_T), \tag{10.2} $$

which sets a terminal condition for $B_T$ that should hold in all future states of the world at time $T$.² To find the amount needed to be held in the bank account at previous times $t < T$, we impose the self-financing constraint, which requires that all future changes in the hedge portfolio should be funded from an initially set bank account, without any cash infusions or withdrawals over the lifetime of the option. This implies the following relation that ensures conservation of the portfolio value by a re-hedge at time $t + 1$:

$$ u_t S_{t+1} + e^{r\Delta t} B_t = u_{t+1} S_{t+1} + B_{t+1}. \tag{10.3} $$

This relation can be expressed recursively in order to calculate the amount of cash in the bank account needed to hedge the option at any time $t < T$ using its value at the next time step:

$$ B_t = e^{-r\Delta t}\left[B_{t+1} + (u_{t+1} - u_t)\, S_{t+1}\right], \quad t = T - 1, \ldots, 0. \tag{10.4} $$

Substituting this into Eq.
(10.1) produces a recursive relation for $\Pi_t$ in terms of its values at later times, which can therefore be solved backward in time, starting from $t = T$ with the terminal condition (10.2), and continued through to the current time $t = 0$:

$$ \Pi_t = e^{-r\Delta t}\left[\Pi_{t+1} - u_t \Delta S_t\right], \qquad \Delta S_t = S_{t+1} - e^{r\Delta t} S_t, \qquad t = T - 1, \ldots, 0. \tag{10.5} $$

Note that Eqs. (10.4) and (10.5) imply that both $B_t$ and $\Pi_t$ are not measurable at any $t < T$, as they depend on the future. Respectively, their values today, $B_0$ and $\Pi_0$, will be random quantities with some distributions. For any given hedging strategy $\{u_t\}_{t=0}^T$, these distributions can be estimated using Monte Carlo simulation, which first simulates $N$ paths of the underlying $S_1 \to S_2 \to \ldots \to S_N$, and then evaluates $\Pi_t$ going backward on each path. Note that because the choice of a hedge strategy does not affect the evolution of the underlying, such simulation of forward paths should only be performed once, and then re-used for future evaluations of the hedge portfolio under different hedge strategy scenarios. Alternatively, the distribution of the hedge portfolio value $\Pi_0$ can be estimated using real historical data for stock prices, together with a pre-determined hedging strategy $\{u_t\}_{t=0}^T$ and the terminal condition (10.2).

To summarize, the forward pass of the Monte Carlo simulation is done by simulating the process $S_1 \to S_2 \to \ldots \to S_N$, while the backward pass is performed using the recursion (10.5), which takes a prescribed hedge strategy $\{u_t\}_{t=0}^T$ and back-propagates uncertainty in the future into uncertainty today, via the self-financing constraint (10.3) (Grau 2007), which serves as a "time machine for risk."

² When transaction costs are neglected, taking $u_T = 0$ simply means converting all stock into cash. For more details on the choice $u_T = 0$, see Grau (2007).
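The forward and backward passes can be sketched in a few lines of NumPy. This is our own illustrative code (a GBM stock model and a fixed hedge strategy passed in as a function), not the book's notebook implementation; all parameter values are made up for the example.

```python
import numpy as np

def hedge_portfolio_paths(S0, mu, sigma, r, T, n_steps, n_paths, payoff, hedge, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Forward pass: log-normal (GBM) stock paths, shape (n_paths, n_steps + 1).
    z = rng.standard_normal((n_paths, n_steps))
    log_ret = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    S = S0 * np.exp(np.concatenate([np.zeros((n_paths, 1)),
                                    np.cumsum(log_ret, axis=1)], axis=1))
    # Backward pass: Pi_T = H_T(S_T), then Pi_t = exp(-r dt) [Pi_{t+1} - u_t dS_t].
    Pi = payoff(S[:, -1])
    for t in range(n_steps - 1, -1, -1):
        dS = S[:, t + 1] - np.exp(r * dt) * S[:, t]
        Pi = np.exp(-r * dt) * (Pi - hedge(t, S[:, t]) * dS)
    return S, Pi  # Pi holds the distribution of Pi_0 across paths

# Example: at-the-money put, trivially unhedged strategy u = 0 for all t.
S, Pi0 = hedge_portfolio_paths(
    S0=100, mu=0.05, sigma=0.2, r=0.03, T=1.0, n_steps=12, n_paths=50_000,
    payoff=lambda ST: np.maximum(100 - ST, 0.0), hedge=lambda t, S: 0.0)
```

With $u \equiv 0$, $\Pi_0$ is just the discounted payoff path by path; passing in a better hedge function narrows the distribution of $\Pi_0$, which is exactly what the optimal strategy of the next subsection targets.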
As a result of such "back-propagation of uncertainty" from the future to the current time $t$, the option replicating portfolio $\Pi_t$ at time $t$ is a random quantity with a certain distribution. The option price acceptable to the option seller would then be determined by the risk preferences of the option seller. The option price can, for example, be taken to be the mean of the distribution of $\Pi_t$, plus some premium for risk. Clearly, the option price can be determined only after the seller decides on a hedging strategy $\{u_t\}_{t=0}^T$ to be used in the future, which would be applied in the same way (as a mapping) for any future value $\{\Pi_t\}_{t=0}^T$. The choice of an optimal hedge strategy $\{u_t\}_{t=0}^T$ will therefore be discussed next.

3.2 Optimal Hedging Strategy

Unlike the recursive calculation of the hedge portfolio value (10.5), which is performed path-wise, optimal hedges are computed using a cross-sectional analysis that operates simultaneously over all paths. This is because we need to learn a hedging strategy $\{u_t\}_{t=0}^T$ which would apply to all states that might be encountered in the future, but each given path only produces one value $S_t$ at time $t$. Therefore, to compute an optimal hedge $u_t(S_t)$ for a given time step $t$, we need cross-sectional information on all concurrent paths. As with the portfolio value calculation, the optimal hedges $\{u_t\}_{t=0}^T$ are computed backward in time, starting from $t = T$. However, because we cannot know the future when we compute a hedge, for each time $t$, any calculation of an optimal hedge $u_t$ can only condition on the information $\mathcal{F}_t$ available at time $t$. This calculation is similar to the American Monte Carlo method of Longstaff and Schwartz (2001).
> Longstaff–Schwartz American Monte Carlo Option Pricing
While the objective of the American Monte Carlo method of Longstaff and Schwartz (2001) is altogether different from the problem addressed in this chapter (a risk-neutral valuation of an American option vs. a real-measure discrete-time hedging/pricing of a European option), the mathematical setting is similar. Both problems look for an optimal strategy, and their solution requires a backward recursion in combination with a forward simulation. Here we provide a brief outline of their method. The main idea of the LSM approach of Longstaff and Schwartz (2001) is to treat the backward-looking stage of the security valuation as a regression problem formulated in a forward-looking manner, which is better suited to a Monte Carlo (MC) setting. The starting point is the (backward-looking) Bellman equation, the most fundamental equation of stochastic optimal control (otherwise known as stochastic optimization). For an American option on a financial underlying, the control variable is binary: "exercise" or "not exercise." The Bellman equation for this particular case produces the continuation value $C_t(S_t)$ at time $t$ as a function of the current underlying value $S_t$:

$$
C_t(S_t) = \mathbb{E}\left[\left. e^{-r\Delta t} \max\left(h_{t+\Delta t}(S_{t+\Delta t}),\, C_{t+\Delta t}(S_{t+\Delta t})\right) \right| \mathcal{F}_t \right]. \tag{10.6}
$$

Here $h_\tau(S_\tau)$ is the option payoff at time $\tau$; for example, for an American put option, $h_\tau(S_\tau) = (K - S_\tau)^+$. Note that for American options, the continuation value should be estimated as a function $C_t(x)$ of the value $x = X_t$, as long as we want to know whether it is larger or smaller than the intrinsic value $H(X_t)$ for a particular realization $X_t = x$ of the process $X_t$ at time $t$. The problem is, of course, that each Monte Carlo path has exactly one value of $X_t$ at time $t$. One way to estimate the function $C_t(S_t)$ is to use all paths, i.e. to use the cross-sectional information.
To this end, the one-step Bellman equation (10.6) is interpreted as a regression of the form

$$
\max\left(h_{t+\Delta t}(S_{t+\Delta t}),\, C_{t+\Delta t}(S_{t+\Delta t})\right) = e^{r\Delta t} C_t(S_t) + \varepsilon_t(S_t), \tag{10.7}
$$

where $\varepsilon_t(S_t)$ is a random noise at time $t$ with mean zero, which may in general depend on the underlying value $S_t$ at that time. Clearly, (10.7) and (10.6) are equivalent in expectation: taking the expectation of both sides of (10.7), we recover (10.6). Next, the unknown function $C_t(S_t)$ is expanded in a set of basis functions:

$$
C_t(x) = \sum_n a_n(t)\,\phi_n(x), \tag{10.8}
$$

for some particular choice of the basis $\{\phi_n(x)\}$, and the coefficients $a_n(t)$ are then calculated using least squares regression of $\max\left(h_{t+\Delta t}(S_{t+\Delta t}),\, C_{t+\Delta t}(S_{t+\Delta t})\right)$ on the value $S_t$ of the underlying at time $t$, across all Monte Carlo paths.

The optimal hedge, $u_t^\star(S_t)$, in this model is obtained from the requirement that the variance of $\Pi_t$ across all simulated paths at time $t$ is minimized when conditioned on the currently available cross-sectional information $\mathcal{F}_t$, i.e.

$$
\begin{aligned}
u_t^\star(S_t) &= \operatorname*{argmin}_u \operatorname{Var}\left[\Pi_t \,|\, \mathcal{F}_t\right] \\
&= \operatorname*{argmin}_u \operatorname{Var}\left[\Pi_{t+1} - u_t \Delta S_t \,|\, \mathcal{F}_t\right], \qquad t = T-1, \ldots, 0.
\end{aligned} \tag{10.9}
$$

Note that the first expression in Eq. (10.9) implies that all uncertainty in $\Pi_t$ is due to uncertainty regarding the amount $B_t$ needed to be held in the bank account at time $t$ in order to meet future obligations at the option maturity $T$. This means that an optimal hedge should minimize the cost of hedge capital for the option position at each time step $t$. The optimal hedge can be found analytically by setting the derivative of (10.9) with respect to $u_t$ to zero. This gives

$$
u_t^\star(S_t) = \frac{\operatorname{Cov}\left(\Pi_{t+1}, \Delta S_t \,|\, \mathcal{F}_t\right)}{\operatorname{Var}\left(\Delta S_t \,|\, \mathcal{F}_t\right)}, \qquad t = T-1, \ldots, 0. \tag{10.10}
$$

This expression involves one-step expectations of quantities at time $t+1$, conditional on time $t$. How they can be computed depends on whether we deal with a continuous or a discrete state space.
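The cross-sectional hedge formula (10.10) can be illustrated at a single time step. In this minimal sketch, the conditioning on $\mathcal{F}_t$ is approximated crudely by bucketing paths on $S_t$ (rather than by a basis-function expansion), and all inputs are synthetic placeholders for the simulated $\Pi_{t+1}$ and $\Delta S_t$; parameter names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, r, dt, sigma, K = 20000, 0.03, 1 / 12, 0.2, 100.0

# Synthetic cross-section at time t: states S_t, one-step moves, and a
# stand-in for Pi_{t+1} (here, the next-step option payoff).
S_t = 100.0 * np.exp(0.1 * rng.standard_normal(n_paths))
S_tp1 = S_t * np.exp((0.05 - 0.5 * sigma**2) * dt
                     + sigma * np.sqrt(dt) * rng.standard_normal(n_paths))
dS = S_tp1 - np.exp(r * dt) * S_t
Pi_tp1 = np.maximum(S_tp1 - K, 0.0)

# Approximate conditioning on F_t = sigma(S_t) by decile buckets of S_t,
# then apply Eq. (10.10) bucket by bucket.
edges = np.quantile(S_t, np.linspace(0, 1, 11))
bucket = np.clip(np.searchsorted(edges, S_t, side="right") - 1, 0, 9)
u_star = np.empty(10)
for b in range(10):
    m = bucket == b
    cov = np.cov(Pi_tp1[m], dS[m])          # 2x2 sample covariance matrix
    u_star[b] = cov[0, 1] / cov[1, 1]       # Cov(Pi_{t+1}, dS_t) / Var(dS_t)
print(u_star)  # should resemble a call delta, increasing in S_t
```

In practice the bucketing would be replaced by the basis-function regressions described below, which use the cross-section far more efficiently.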
If the state space is discrete, then such one-step conditional expectations are simply finite sums involving transition probabilities of an MDP model. If, on the other hand, we work in a continuous-state setting, these conditional expectations can be calculated in a Monte Carlo setting by using expansions in basis functions, similar to the LSMC method of Longstaff and Schwartz (2001), or the real-measure MC methods of Grau (2007), Petrelli et al. (2010), and Potters et al. (2001). In our exposition below, we use the general notation of Eq. (10.10) to denote such conditional expectations, where $\mathcal{F}_t$ denotes the cross-sectional information set at time $t$. This keeps the formalism general enough to handle both the continuous- and discrete-state cases; simplifications that arise in the special case of a discrete-state formulation are discussed separately, whenever appropriate.

3.3 Option Pricing in Discrete Time

We start with the notion of a fair option price $\hat{C}_t$, defined as the time-$t$ expected value of the hedge portfolio $\Pi_t$:

$$
\hat{C}_t = \mathbb{E}_t\left[\Pi_t \,|\, \mathcal{F}_t\right]. \tag{10.11}
$$

Using Eq. (10.5) and the tower law of conditional expectations, we obtain

$$
\begin{aligned}
\hat{C}_t &= \mathbb{E}_t\left[\left. e^{-r\Delta t}\Pi_{t+1} \right| \mathcal{F}_t\right] - u_t(S_t)\,\mathbb{E}_t\left[\Delta S_t \,|\, \mathcal{F}_t\right] \\
&= \mathbb{E}_t\left[\left. e^{-r\Delta t}\,\mathbb{E}_{t+1}\left[\Pi_{t+1} \,|\, \mathcal{F}_{t+1}\right] \right| \mathcal{F}_t\right] - u_t(S_t)\,\mathbb{E}_t\left[\Delta S_t \,|\, \mathcal{F}_t\right] \\
&= \mathbb{E}_t\left[\left. e^{-r\Delta t}\hat{C}_{t+1} \right| \mathcal{F}_t\right] - u_t(S_t)\,\mathbb{E}_t\left[\Delta S_t \,|\, \mathcal{F}_t\right], \qquad t = T-1, \ldots, 0.
\end{aligned} \tag{10.12}
$$

Note that we can similarly use the tower law of conditional expectations to express the optimal hedge in terms of $\hat{C}_{t+1}$ instead of $\Pi_{t+1}$:

$$
u_t^\star(S_t) = \frac{\operatorname{Cov}\left(\Pi_{t+1}, \Delta S_t \,|\, \mathcal{F}_t\right)}{\operatorname{Var}\left(\Delta S_t \,|\, \mathcal{F}_t\right)} = \frac{\operatorname{Cov}\left(\hat{C}_{t+1}, \Delta S_t \,|\, \mathcal{F}_t\right)}{\operatorname{Var}\left(\Delta S_t \,|\, \mathcal{F}_t\right)}. \tag{10.13}
$$

If we now substitute (10.13) into (10.12) and re-arrange terms, we can put the recursive relation for $\hat{C}_t$ in the following form:

$$
\hat{C}_t = e^{-r\Delta t}\,\mathbb{E}_t^{\tilde{\mathbb{Q}}}\left[\hat{C}_{t+1} \,|\, \mathcal{F}_t\right], \qquad t = T-1, \ldots, 0, \tag{10.14}
$$
where $\tilde{\mathbb{Q}}$ is a signed measure with transition probabilities

$$
\tilde{q}\left(S_{t+1} \,|\, S_t\right) = p\left(S_{t+1} \,|\, S_t\right)\left[1 - \frac{\left(\Delta S_t - \mathbb{E}_t\left[\Delta S_t\right]\right)\mathbb{E}_t\left[\Delta S_t\right]}{\operatorname{Var}\left(\Delta S_t \,|\, \mathcal{F}_t\right)}\right], \tag{10.15}
$$

and where $p(S_{t+1}|S_t)$ are the transition probabilities under the physical measure $\mathbb{P}$. Note that for sufficiently large moves of $S_t$, this expression may become negative. This means that $\tilde{\mathbb{Q}}$ is not a genuine probability measure, but rather only a signed measure (a signed measure, unlike a regular measure, can take both positive and negative values). The potential for a negative fair option price $\hat{C}_t$ is a well-known property of quadratic risk minimization schemes (Černý and Kallsen 2007; Föllmer and Schweizer 1989; Grau 2007; Potters et al. 2001; Schweizer 1995). However, we note that the "fair" (expected) option price (10.11) is not the price a seller of the option should charge. The actual fair risk-adjusted price is given by Eq. (10.16) below, which can always be made non-negative by a proper level of the risk-aversion $\lambda$, which is defined by the seller's risk preferences.³

The reason why the fair option price is not yet the price that the option seller should charge is that she is exposed to the risk of exhausting the bank account $B_t$ at some time in the future, after any fixed amount $\hat{B}_0 = \mathbb{E}_0[B_0]$ is paid into the bank account at time $t = 0$ upon selling the option. If necessary, the option seller would need to add cash to the hedge portfolio, and she has to be compensated for such risk. One possible specification of a risk premium that the dealer has to add on top of the fair option price defines her optimal ask price by adding the cumulative expected discounted variance of the hedge portfolio along all time steps $t = 0, \ldots, T$, with a risk-aversion parameter $\lambda$:

³ If it is desired to have non-negative option prices for arbitrary levels of risk-aversion, the method developed below can be generalized by using non-quadratic utility functions instead of the quadratic Markowitz utility.
This would incur a moderate computational overhead of numerically solving a convex optimization problem at each time step, instead of a quadratic optimization that is solved analytically.

$$
C_0^{(ask)}(S, u) = \mathbb{E}_0\left[\left. \Pi_0 + \lambda \sum_{t=0}^{T} e^{-rt}\operatorname{Var}\left[\Pi_t \,|\, \mathcal{F}_t\right] \right| S_0 = S,\, u_0 = u\right]. \tag{10.16}
$$

In order to proceed further, we first note that the problem of minimization of the fair (to the dealer) option price (10.16) can equivalently be expressed as the problem of maximization of its negative, $V_t = -C_t^{(ask)}$, where

$$
V_t(S_t) = \mathbb{E}_t\left[\left. -\Pi_t - \lambda \sum_{t'=t}^{T} e^{-r(t'-t)}\operatorname{Var}\left[\Pi_{t'} \,|\, \mathcal{F}_{t'}\right] \right| \mathcal{F}_t\right]. \tag{10.17}
$$

Example 10.8 Option pricing with non-quadratic utility functions

The fact that the "fair" option price $\hat{C}_t$ can become negative when price fluctuations are large is attributed to the non-monotonicity of the Markowitz quadratic utility. Non-monotonicity violates the Von Neumann–Morgenstern conditions $U'(a) \geq 0$, $U''(a) \leq 0$ on the utility function $U(a)$ of a rational investor. While this problem with the quadratic utility function can be resolved by adding a risk premium to the fair option price, this may require that the risk-aversion parameter $\lambda$ exceed some minimal value. We can obtain non-negative option prices for arbitrary values of risk-aversion if, instead of a quadratic utility, we use utility functions that satisfy the Von Neumann–Morgenstern conditions. In particular, one popular choice is given by the exponential utility function $U(X) = -\exp(-\gamma X)$, where $\gamma$ is a risk-aversion parameter whose meaning is similar to the parameter $\lambda$ in the quadratic utility. As shown in Halperin (2018), the hedges and prices corresponding to the quadratic risk minimization scheme can be obtained with the exponential utility in the limit of small risk-aversion $\gamma \to 0$, alongside calculable corrections via an expansion in powers of $\gamma$.

Note that while the idea of adding an option price premium proportional to the variance of the hedge portfolio, as done in Eq.
(10.16) was initially suggested on intuitive grounds by Potters et al. (2001), the utility-based approach presented in Halperin (2018) actually derives it as a quadratic approximation to a utility-based option price, which also establishes an approximate relation between the risk-aversion parameter $\lambda$ of the quadratic risk optimization and the parameter $\gamma$ of the exponential utility $U(X) = -\exp(-\gamma X)$:

$$
\lambda \simeq \frac{1}{2}\gamma.
$$

3.4 Hedging and Pricing in the BS Limit

The framework presented above provides a smooth transition to the strict BS limit $\Delta t \to 0$. In this limit, the BSM model dynamics under the physical measure $\mathbb{P}$ is described by a continuous-time geometric Brownian motion with drift $\mu$ and volatility $\sigma$:

$$
\frac{dS_t}{S_t} = \mu\, dt + \sigma\, dW_t,
$$

where $W_t$ is a standard Brownian motion. Consider first the optimal hedge strategy (10.13) in the BS limit $\Delta t \to 0$. Using the first-order Taylor expansion

$$
\hat{C}_{t+1} = C_t + \frac{\partial C_t}{\partial S_t}\Delta S_t + O(\Delta t)
$$

in (10.13), we obtain

$$
u_t^{BS}(S_t) = \lim_{\Delta t \to 0} u_t^\star(S_t) = \frac{\partial C_t}{\partial S_t},
$$

which is the correct optimal hedge in the continuous-time BSM model. To find the continuous-time limit of the option price, we first compute the limit of the second term in Eq. (10.12):

$$
\lim_{\Delta t \to 0} u_t(S_t)\,\mathbb{E}_t\left[\Delta S_t \,|\, \mathcal{F}_t\right] = \lim_{dt \to 0} u_t^{BS} S_t (\mu - r)\,dt = \lim_{dt \to 0} (\mu - r) S_t \frac{\partial C_t}{\partial S_t}\, dt. \tag{10.23}
$$

To evaluate the first term in Eq. (10.12), we use the second-order Taylor expansion:

$$
\begin{aligned}
\hat{C}_{t+1} &= C_t + \frac{\partial C_t}{\partial t}dt + \frac{\partial C_t}{\partial S_t}dS_t + \frac{1}{2}\frac{\partial^2 C_t}{\partial S_t^2}(dS_t)^2 + \ldots \\
&= C_t + \frac{\partial C_t}{\partial t}dt + \frac{\partial C_t}{\partial S_t}S_t\left(\mu\, dt + \sigma\, dW_t\right) \\
&\quad + \frac{1}{2}\frac{\partial^2 C_t}{\partial S_t^2}S_t^2\left(\sigma^2\, dW_t^2 + 2\mu\sigma\, dW_t\, dt\right) + O\left(dt^2\right).
\end{aligned} \tag{10.24}
$$

Substituting Eqs. (10.23) and (10.24) into Eq. (10.12), using $\mathbb{E}[dW_t] = 0$ and $\mathbb{E}\left[dW_t^2\right] = dt$, and simplifying, we find that the stock drift $\mu$ under the physical measure $\mathbb{P}$ drops out of the problem, and Eq. (10.12) becomes the celebrated Black–Scholes equation in the limit $dt \to 0$:

$$
\frac{\partial C_t}{\partial t} + r S_t \frac{\partial C_t}{\partial S_t} + \frac{1}{2}\sigma^2 S_t^2 \frac{\partial^2 C_t}{\partial S_t^2} - r C_t = 0. \tag{10.25}
$$
Therefore, if the stock price is log-normal, both our hedging and pricing formulae reduce to the original formulae of the Black–Scholes–Merton model in the strict limit $\Delta t \to 0$.

? Multiple Choice Question 1

Select all the correct statements:

a. In the Black–Scholes limit $\Delta t \to 0$, the optimal hedge $u_t$ is equal to $\frac{\partial^2 C_t}{\partial S_t^2}$.
b. In the Black–Scholes limit $\Delta t \to 0$, the optimal hedge $u_t$ is equal to the BS delta $\frac{\partial C_t}{\partial S_t}$.
c. The risk-aversion parameter $\lambda$ drops out of the problem of option pricing and hedging in the limit $\Delta t \to 0$.
d. For finite $\Delta t$, the optimal hedge $u_t$ depends on $\lambda$.

4 The QLBS Model

We shall now re-formulate and generalize the discrete-time BSM model presented in Sect. 3 using the framework of Markov Decision Processes (MDPs). The key idea is that risk-based pricing and hedging in discrete time can be understood as an MDP problem whose value function, to be maximized, is determined by Eq. (10.17). Recall that we defined this value function as the negative of the risk-adjusted option price for the option seller. The availability of an MDP formulation for option pricing is beneficial in multiple ways. First, it generalizes the BSM model by providing a consistent option pricing and hedging method which can take the expected return of the option into decision making, and thus can be used by both types of market players that trade options: hedgers and speculators. Previous incomplete-market models for option pricing either do not ensure consistency of hedging and pricing, or do not allow for incorporation of stock returns into the analysis, or both.⁴ Therefore, the MDP formulation improves the original discrete-time BSM model by making it more generally applicable. Second, the MDP formulation can be used to formulate new computational approaches to option pricing and hedging. Particular methods are chosen depending on the assumptions made about the data-generating stock price process.
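The continuous-time quantities that serve as the benchmark for the $\Delta t \to 0$ limit of Sect. 3.4 have well-known closed forms. The following standard sketch computes the Black–Scholes call price and its delta $\partial C/\partial S$ (the correct answer to statement b above), and verifies the delta against a finite difference of the price; it is a reference check, not part of the QLBS algorithm itself.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price and delta."""
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    price = S * N(d1) - K * exp(-r * T) * N(d2)
    return price, N(d1)          # delta = dC/dS = N(d1)

price, delta = bs_call(100.0, 100.0, 1.0, 0.03, 0.2)
print(price, delta)

# Sanity check: delta should match a central finite difference of the price.
eps = 1e-4
fd = (bs_call(100.0 + eps, 100.0, 1.0, 0.03, 0.2)[0]
      - bs_call(100.0 - eps, 100.0, 1.0, 0.03, 0.2)[0]) / (2 * eps)
```

For finite $\Delta t$, the discrete-time hedges and prices developed below deviate from these values and only converge to them in the appropriate limits.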
If we assume the process is known, so that both the transition probabilities and the reward function are known, the option pricing problem can be solved by solving the Bellman optimality equation using dynamic programming or approximate dynamic programming. For the simplest one-stock model formulation, we will show how it can be solved using a combination of Monte Carlo simulation of the underlying process and a recursive semi-analytical procedure that only involves matrix linear algebra (OLS linear regression) for a numerical implementation. Similar methods based on approximate dynamic programming can also be applied to more complex multi-dimensional extensions of the model. On the other hand, we might know only the general structure of an MDP model, but not its specifications such as the transition probabilities and reward function. In this case, we should solve a backward recursion for the Bellman optimality equation relying only on samples of data. This is the setting of reinforcement learning. It turns out that the Bellman optimality equation for our MDP model can be easily solved without knowing the model dynamics, relying only on data (also semi-analytically, due to a quadratic reward function), using Q-learning or its modifications. A particular choice between different versions of Q-learning is determined by how the state space is modeled. One can discretize the state and action spaces and work with a Markov chain approximation to continuous stock price dynamics, see, e.g., Duan and Simonato (2001). If such a finite-state approximation to the dynamics converges to the actual continuous-state dynamics, optimal option prices and hedge ratios computed with this approach also converge to their continuous-state limits.

⁴ The standard continuous-time BSM model is equivalent to using a risk-neutral pricing measure for option valuation. This approach only enables pure risk-based option hedging, which might be suitable for a hedger but not for an option speculator.
If the log-normal model is indeed the data-generating process, prices and hedge ratios converge to the classical BSM limits, once one further takes the limits $\Delta t \to 0$, $\lambda \to 0$ in the resulting expressions. Another possibility is to keep the state space continuous and work with approximate methods to represent the Q-function. In particular, if linear architectures are used, we can use the Fitted Q Iteration (FQI) method (Ernst et al. 2005). For nonlinear architectures, neural Q-iteration methods use neural networks to represent the Q-function. Our presentation below is mostly focused on the continuous-state FQI method for the basic single-stock option setting, which uses a linear architecture and a fixed set of basis functions. However, all formulas presented below can easily be adjusted to a finite-state formulation by using "one-hot" basis functions.

4.1 State Variables

As stock price dynamics typically involve a deterministic drift term, we can consider a change of state variable such that the new, time-transformed variables are stationary, i.e. non-drifting. For a given stock price process $S_t$, we can achieve this by defining a new variable $X_t$ by the following relation:

$$
X_t = -\left(\mu - \frac{\sigma^2}{2}\right)t + \log S_t. \tag{10.26}
$$

The advantage of this representation can be clearly seen in the special case when $S_t$ is a geometric Brownian motion (GBM). For this case, we obtain

$$
dX_t = -\left(\mu - \frac{\sigma^2}{2}\right)dt + d\log S_t = \sigma\, dW_t. \tag{10.27}
$$

Therefore, when the true dynamics of $S_t$ is log-normal, $X_t$ is a standard Brownian motion scaled by the volatility $\sigma$. If we know the value of $X_t$ in a given MC scenario, the corresponding value of $S_t$ is given by the formula

$$
S_t = e^{X_t + \left(\mu - \frac{\sigma^2}{2}\right)t}. \tag{10.28}
$$

Note that as long as $\{X_t\}_{t=0}^T$ is a martingale, i.e. $\mathbb{E}[dX_t] = 0$ for all $t$, on average it should not run too far away from its initial value $X_0$ during the lifetime of the option. The state variable $X_t$ is time-uniform, unlike the stock price $S_t$, which has a drift term.
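The change of variables (10.26) and its inverse (10.28) can be sketched directly on simulated GBM paths; the parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, dt, n_steps, n_paths, S0 = 0.05, 0.2, 1 / 12, 24, 4000, 100.0
t = dt * np.arange(n_steps + 1)

# Simulate GBM paths of log S_t.
z = rng.standard_normal((n_paths, n_steps))
log_S = np.log(S0) + np.cumsum((mu - 0.5 * sigma**2) * dt
                               + sigma * np.sqrt(dt) * z, axis=1)
log_S = np.hstack([np.full((n_paths, 1), np.log(S0)), log_S])
S = np.exp(log_S)

X = -(mu - 0.5 * sigma**2) * t + np.log(S)       # Eq. (10.26): remove the drift
S_back = np.exp(X + (mu - 0.5 * sigma**2) * t)   # Eq. (10.28): inverse map

# For GBM, X_t - X_0 = sigma * W_t is a martingale: its cross-sectional mean
# stays near X_0, while the mean of log S_t drifts linearly in t.
print(np.abs(X.mean(axis=0) - X[0, 0]).max(),
      log_S.mean(axis=0)[-1] - np.log(S0))
```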
But the relation (10.28) can always be used to map the non-stationary dynamics of $S_t$ onto the stationary dynamics of $X_t$. The martingale property of $X_t$ is also helpful for numerical lattice approximations, as it implies that a lattice need not be too large to capture possible future variations of the stock price. The change of variables (10.26) and its inverse (10.28) can also be applied when the stock price dynamics are not GBM. Of course, the new state variable $X_t$ will not in general be a martingale in this case; however, the transformation is still useful for separating the non-stationarity of the optimization task from the non-stationarity of the state variables.

4.2 Bellman Equations

We start by re-stating the risk minimization procedure outlined above in Sect. 3.2 in the language of MDP problems. In particular, the time-dependent state variables $S_t$ are expressed in terms of the time-homogeneous variables $X_t$ using Eq. (10.28). In addition, we will use the notation $a_t = a_t(X_t)$ to denote actions expressed as functions of the time-homogeneous variables $X_t$. Actions $u_t = u_t(S_t)$ in terms of stock prices are then obtained by the substitution

$$
u_t(S_t) = a_t\left(X_t(S_t)\right) = a_t\left(\log S_t - \left(\mu - \frac{\sigma^2}{2}\right)t\right), \tag{10.29}
$$

where we have used Eq. (10.26). To differentiate between actual hedging decisions $a_t(x_t)$, where $x_t$ is a particular realization of the random state $X_t$ at time $t$, and a hedging strategy that applies to any state $X_t$, we introduce the notion of a time-dependent policy $\pi(t, X_t)$. We consider deterministic policies of the form

$$
\pi: \{0, \ldots, T-1\} \times \mathcal{X} \to \mathcal{A}, \tag{10.30}
$$

i.e. a deterministic policy that maps the time $t$ and the current state $X_t = x_t$ to the action $a_t \in \mathcal{A}$:

$$
a_t = \pi(t, x_t). \tag{10.31}
$$

We start with the value maximization problem of Eq.
(10.17), which we rewrite here in terms of the new state variable $X_t$, with an upper index to denote its dependence on the policy $\pi$:

$$
\begin{aligned}
V_t^\pi(X_t) &= \mathbb{E}_t\left[\left. -\Pi_t(X_t) - \lambda \sum_{t'=t}^{T} e^{-r(t'-t)}\operatorname{Var}\left[\Pi_{t'}(X_{t'}) \,|\, \mathcal{F}_{t'}\right] \right| \mathcal{F}_t\right] \\
&= \mathbb{E}_t\left[\left. -\Pi_t(X_t) - \lambda \operatorname{Var}\left[\Pi_t\right] - \lambda \sum_{t'=t+1}^{T} e^{-r(t'-t)}\operatorname{Var}\left[\Pi_{t'}(X_{t'}) \,|\, \mathcal{F}_{t'}\right] \right| \mathcal{F}_t\right].
\end{aligned} \tag{10.32}
$$

The last term in this expression, which involves a sum from $t' = t+1$ to $t' = T$, can be expressed in terms of $V_{t+1}$: using the definition of the value function with a shifted time argument gives

$$
-\lambda\, \mathbb{E}_{t+1}\left[\sum_{t'=t+1}^{T} e^{-r(t'-t)}\operatorname{Var}\left[\Pi_{t'} \,|\, \mathcal{F}_{t'}\right]\right] = \gamma\left(V_{t+1} + \mathbb{E}_{t+1}\left[\Pi_{t+1}\right]\right), \qquad \gamma := e^{-r\Delta t}. \tag{10.33}
$$

Note that the parameter $\gamma$ introduced in the last relation is a discrete-time discount factor, which in our framework is fixed in terms of the continuous-time risk-free interest rate $r$ of the original BSM model. Substituting this into (10.32), re-arranging terms, and using the portfolio process Eq. (10.5), we obtain the Bellman equation for the QLBS model:

$$
V_t^\pi(X_t) = \mathbb{E}_t^\pi\left[R(X_t, a_t, X_{t+1}) + \gamma V_{t+1}^\pi(X_{t+1})\right], \tag{10.34}
$$

where the one-step time-dependent random reward is defined as follows⁵:

$$
\begin{aligned}
R_t(X_t, a_t, X_{t+1}) &= \gamma a_t \Delta S_t(X_t, X_{t+1}) - \lambda \operatorname{Var}\left[\Pi_t \,|\, \mathcal{F}_t\right], \qquad t = 0, \ldots, T-1 \\
&= \gamma a_t \Delta S_t(X_t, X_{t+1}) - \lambda \gamma^2\, \mathbb{E}_t\left[\hat{\Pi}_{t+1}^2 - 2 a_t \Delta\hat{S}_t \hat{\Pi}_{t+1} + a_t^2 \left(\Delta\hat{S}_t\right)^2\right],
\end{aligned} \tag{10.35}
$$

where we used Eq. (10.5) in the second line, and $\hat{\Pi}_{t+1} := \Pi_{t+1} - \bar{\Pi}_{t+1}$, where $\bar{\Pi}_{t+1}$ is the sample mean of all values of $\Pi_{t+1}$ (and similarly for $\Delta\hat{S}_t$). For $t = T$, we have

$$
R_T = -\lambda \operatorname{Var}\left[\Pi_T\right],
$$

where $\Pi_T$ is determined by the terminal condition (10.2). Note that Eq. (10.35) implies that the expected reward $R_t$ at time step $t$ is quadratic in the action variable $a_t$:

$$
\mathbb{E}_t\left[R_t(X_t, a_t, X_{t+1})\right] = \gamma a_t\, \mathbb{E}_t\left[\Delta S_t\right] - \lambda \gamma^2\, \mathbb{E}_t\left[\hat{\Pi}_{t+1}^2 - 2 a_t \Delta\hat{S}_t \hat{\Pi}_{t+1} + a_t^2 \left(\Delta\hat{S}_t\right)^2\right]. \tag{10.36}
$$

⁵ Note that with our definition of the value function, Eq. (10.32), it is not equal to a discounted sum of future rewards.
This expected reward has the same mathematical structure as the risk-adjusted return of a single-period Markowitz portfolio model, for the special case of a portfolio made of cash and a single stock. The first term gives the expected return of such a portfolio, while the second term penalizes its quadratic risk. Note further that when $\lambda \to 0$, the expected reward is linear in $a_t$, so it does not have a maximum. As the one-step reward in our formulation incorporates the variance of the hedge portfolio as a risk penalty, this approach belongs to the class of risk-sensitive reinforcement learning. With our method, risk is incorporated into a traditional risk-neutral RL framework (which only aims at maximization of expected rewards) by modifying the one-step reward function. A similar construction of a risk-sensitive MDP, obtained by adding one-step variance penalties to a finite-horizon risk-neutral MDP problem, was suggested in a different context by Gosavi (2015).

The action-value function, or Q-function, is defined by an expectation of the same expression as in Eq. (10.32), but conditioned on both the current state $X_t$ and the initial action $a = a_t$, while following a policy $\pi$ afterwards:

$$
Q_t^\pi(x, a) = \mathbb{E}_t\left[\left. -\Pi_t(X_t) \right| X_t = x, a_t = a\right] - \lambda\, \mathbb{E}_t^\pi\left[\left. \sum_{t'=t}^{T} e^{-r(t'-t)}\operatorname{Var}\left[\Pi_{t'}(X_{t'}) \,|\, \mathcal{F}_{t'}\right] \right| X_t = x, a_t = a\right]. \tag{10.37}
$$

The optimal policy $\pi_t^\star(\cdot|X_t)$ is defined as the policy which maximizes the value function $V_t^\pi(X_t)$ or, alternatively and equivalently, maximizes the action-value function $Q_t^\pi(X_t, a_t)$:

$$
\pi_t^\star(X_t) = \operatorname*{argmax}_\pi V_t^\pi(X_t) = \operatorname*{argmax}_{a_t \in \mathcal{A}} Q_t^\star(X_t, a_t). \tag{10.38}
$$

The optimal value function satisfies the Bellman optimality equation

$$
V_t^\star(X_t) = \mathbb{E}_t^{\pi^\star}\left[R_t\left(X_t, u_t = \pi_t^\star(X_t), X_{t+1}\right) + \gamma V_{t+1}^\star(X_{t+1})\right]. \tag{10.39}
$$

The Bellman optimality equation for the action-value function reads:
For $t = 0, \ldots, T-1$,

$$
Q_t^\star(x, a) = \mathbb{E}_t\left[\left. R_t(X_t, a_t, X_{t+1}) + \gamma \max_{a_{t+1} \in \mathcal{A}} Q_{t+1}^\star(X_{t+1}, a_{t+1}) \right| X_t = x,\, a_t = a\right], \tag{10.40}
$$

with the terminal condition at $t = T$

$$
Q_T^\star(X_T, a_T = 0) = -\Pi_T(X_T) - \lambda \operatorname{Var}\left[\Pi_T(X_T)\right], \tag{10.41}
$$

where $\Pi_T$ is determined by Eq. (10.2). Recall that $\operatorname{Var}[\cdot]$ here means the variance with respect to all Monte Carlo paths that terminate in a given state.

4.3 Optimal Policy

Substituting the expected reward (10.36) into the Bellman optimality equation (10.40), we obtain

$$
Q_t^\star(X_t, a_t) = \gamma\, \mathbb{E}_t\left[Q_{t+1}^\star\left(X_{t+1}, a_{t+1}^\star\right) + a_t \Delta S_t\right] - \lambda \gamma^2\, \mathbb{E}_t\left[\hat{\Pi}_{t+1}^2 - 2 a_t \hat{\Pi}_{t+1} \Delta\hat{S}_t + a_t^2 \left(\Delta\hat{S}_t\right)^2\right], \qquad t = 0, \ldots, T-1. \tag{10.42}
$$

Note that the first term $\mathbb{E}_t\left[Q_{t+1}^\star\left(X_{t+1}, a_{t+1}^\star\right)\right]$ depends on the current action only through the conditional probability $p(X_{t+1}|X_t, a_t)$. However, the next-state probability depends on the current action $a_t$ only when there is a feedback loop from trading in the option's underlying stock onto the stock price. In the present framework, we follow the standard assumptions of the Black–Scholes model, which assumes that an option buyer or seller does not produce any market impact. Neglecting this feedback effect, the expectation $\mathbb{E}_t\left[Q_{t+1}^\star\left(X_{t+1}, a_{t+1}^\star\right)\right]$ does not depend on $a_t$. Therefore, with this approximation, the action-value function $Q_t^\star(X_t, a_t)$ is quadratic in the action variable $a_t$.

> The Black–Scholes Limit
Note that in the limit of zero risk-aversion $\lambda \to 0$, this equation becomes

$$
Q_t^\star(X_t, a_t) = \gamma\, \mathbb{E}_t\left[Q_{t+1}^\star\left(X_{t+1}, a_{t+1}^\star\right) + a_t \Delta S_t\right]. \tag{10.43}
$$

As in this limit $Q_t^\star(X_t, a_t) = -\Pi_t(X_t, a_t)$, using the fair option price definition (10.11) we obtain

$$
\hat{C}_t = \gamma\, \mathbb{E}_t\left[\hat{C}_{t+1} - a_t \Delta S_t\right]. \tag{10.44}
$$

This equation coincides with Eq. (10.12), showing that the recursive formula (10.42) correctly rolls back the BS fair option price $\hat{C}_t = \mathbb{E}_t[\Pi_t]$, which corresponds to first taking the limit $\lambda \to 0$, and then taking the limit $\Delta t \to 0$ of the QLBS price (while using the BS delta for $a_t$ in Eq. (10.44), see below).
As $Q_t^\star(X_t, a_t)$ is a quadratic function of $a_t$, the optimal action (i.e., the hedge) $a_t^\star(S_t)$ that maximizes $Q_t^\star(X_t, a_t)$ is computed analytically:

$$
a_t^\star(X_t) = \frac{\mathbb{E}_t\left[\Delta\hat{S}_t \hat{\Pi}_{t+1} + \frac{1}{2\gamma\lambda}\Delta S_t\right]}{\mathbb{E}_t\left[\left(\Delta\hat{S}_t\right)^2\right]}. \tag{10.45}
$$

If we now take the limit of this expression as $\Delta t \to 0$, using Taylor expansions around time $t$ as in Sect. 3.4, we obtain (see also Problem 1):

$$
\lim_{\Delta t \to 0} a_t^\star = \frac{\partial \hat{C}_t}{\partial S_t} + \frac{\mu - r}{2\lambda\sigma^2 S_t}. \tag{10.46}
$$

Note that if we set $\mu = r$, or alternatively if we take the limit $\lambda \to \infty$, this becomes identical to the BS delta, while the finite-$\Delta t$ delta in Eq. (10.45) coincides in these cases with the local risk-minimization delta given by Eq. (10.10). Both of these facts have related interpretations. The quadratic hedging that approximates the option delta (see Sect. 3.4) only accounts for the risk of the hedge portfolio, while here we extend it by adding a drift term $\mathbb{E}_t[\Pi_t]$ to the objective function, see Eq. (10.17), in the style of Markowitz risk-adjusted portfolio return analysis (Markowitz 1959). This produces a linear first term in the quadratic expected reward (10.36). The resulting hedges are therefore different from hedges obtained by only minimizing risk. Clearly, a pure risk-focused quadratic hedge corresponds to either taking the limit of infinite risk-aversion in a Markowitz-like risk-return analysis, or setting $\mu = r$ in the above formula, to achieve the same effect. Both factors appearing in Eq. (10.46) show these two possible ways to obtain pure risk-minimizing hedges from our more general hedges. Such hedges can be applied when an option is considered for investment/speculation, rather than only as a hedge instrument.

To summarize, the local risk-minimization hedge and fair price formulae of Sect. 3 are recovered from Eqs. (10.45) and (10.42), respectively, if we first set $\mu = r$ in Eq. (10.45), and then set $\lambda = 0$ in Eq. (10.42).
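That the analytic optimum (10.45) really maximizes the quadratic expected reward (10.36) is easy to check numerically. In this minimal sketch, the samples of $\Delta S_t$ and $\Pi_{t+1}$ are synthetic placeholders standing in for a conditional MC cross-section, and the parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, lam, r, dt = 10000, 0.001, 0.03, 1 / 12
gamma = np.exp(-r * dt)

dS = rng.normal(0.5, 2.0, n)                     # samples of dS_t
Pi_next = 0.6 * dS + rng.normal(0.0, 1.0, n)     # samples of Pi_{t+1}, correlated with dS_t
dS_hat = dS - dS.mean()                          # de-meaned, as in Eq. (10.35)
Pi_hat = Pi_next - Pi_next.mean()

# Analytic optimum, Eq. (10.45):
a_star = ((np.mean(dS_hat * Pi_hat) + np.mean(dS) / (2 * gamma * lam))
          / np.mean(dS_hat**2))

def expected_reward(a):                          # Eq. (10.36)
    return (gamma * a * np.mean(dS)
            - lam * gamma**2 * np.mean(Pi_hat**2 - 2 * a * dS_hat * Pi_hat
                                       + a**2 * dS_hat**2))

# Brute-force grid search should land on the same maximizer.
grid = np.linspace(a_star - 50, a_star + 50, 2001)
a_grid = grid[np.argmax([expected_reward(a) for a in grid])]
print(a_star, a_grid)
```

Because the expected reward is an exact quadratic in $a$, the grid maximizer agrees with the analytic formula up to the grid spacing.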
After that, the continuous-time BS formulae for these expressions are reproduced in the final limit $\Delta t \to 0$, as discussed in Sect. 3. Note that the order of taking the limits is to start with the hedge ratio (10.46), set $\mu = r$ there, then substitute this into the price equation (10.42), and take the limit $\lambda \to 0$ there, leading to Eq. (10.44). The latter relation yields the Black–Scholes equation in the limit $\Delta t \to 0$, as shown in Eq. (10.25). This order of taking the BS limit is consistent with the principle of hedging first and pricing second, which is implemented in the QLBS model, and is also consistent with market practices of working with illiquid options.

Substituting Eq. (10.45) back into Eq. (10.42), we obtain an explicit recursive formula for the optimal action-value function for $t = 0, \ldots, T-1$:

$$
Q_t^\star(X_t, a_t^\star) = \gamma\, \mathbb{E}_t\left[Q_{t+1}^\star\left(X_{t+1}, a_{t+1}^\star\right)\right] - \lambda \gamma^2\, \mathbb{E}_t\left[\hat{\Pi}_{t+1}^2\right] + \lambda \gamma^2 \left(a_t^\star(X_t)\right)^2 \mathbb{E}_t\left[\left(\Delta\hat{S}_t\right)^2\right], \tag{10.47}
$$

where $a_t^\star(X_t)$ is defined in Eq. (10.45). Note that this relation does not have the right risk-neutral limit when we set $\lambda \to 0$ in it. The reason is that setting $\lambda \to 0$ in Eq. (10.47) is equivalent to setting $\lambda \to 0$ in Eq. (10.45), but, as we just discussed, this would not be the right way to reproduce the BS option price equation (10.25). The correct procedure for taking the limit $\lambda \to 0$ in the recursion for the Q-function is given by Eq. (10.43), which implies that the action $a_t$ used there is obtained, as explained above, by setting $\mu = r$ in Eq. (10.46). The backward recursion given by Eqs. (10.45) and (10.47) proceeds all the way backward from $t = T-1$ to the present, $t = 0$. At each time step, the problem of maximization over possible actions amounts to a convex optimization, which is done analytically using Eq. (10.45); the result is then substituted into Eq. (10.47) for the current time step. Note that such simplicity of action optimization in the Bellman optimality equation is not encountered very often in other SOC problems. As Eq.
(10.47) provides the backward recursion directly for the optimal Q-function, neither a continuous nor a discrete action space representation is required in our setting, as the action in this equation is always just the one optimal action. If we deal with a finite-state QLBS model, then the values of the optimal time-$t$ Q-function for each node are obtained directly as sums of the values of the next-step expectation in the various states at time $t+1$, times the one-step probabilities of reaching those states. The end result of the backward recursion for the action-value function is its current value. According to our definition of the option price (10.16), it is exactly the negative of the optimal Q-function. We therefore obtain the following expression for the fair ask option price in our approach, which we refer to as the QLBS option price:

$$
C_t^{(QLBS)}(S_t, ask) = -Q_t\left(S_t, a_t^\star\right). \tag{10.48}
$$

It is interesting to note that while in the original BSM model the price and the hedge for an option are given by two separate expressions, in the QLBS model they are parts of the same expression (10.48), simply because the option price is the (negative of the) optimal Q-function, whose second argument is by construction the optimal action, which corresponds to the optimal hedge in the setting of the QLBS model. Equations (10.48) and (10.45), which give, respectively, the optimal price and the optimal hedge for the option, jointly provide a complete solution of the QLBS model (when the dynamics are known). This solution generalizes the classical BSM model to the non-asymptotic case $\Delta t > 0$, while reducing to the latter in the strict BSM limit $\Delta t \to 0$. In the next section, we will see how these equations can be implemented.

4.4 DP Solution: Monte Carlo Implementation

In practice, the backward recursion expressed by Eqs. (10.45) and (10.47) is solved in a Monte Carlo setting, where we use $N_{MC}$ simulated (or real) paths for the state variable $X_t$.
In addition, we assume that we have chosen a set of basis functions $\{\Phi_n(x)\}$. We can then expand the optimal action (hedge) $a_t^\star(X_t)$ and the optimal Q-function $Q_t^\star\left(X_t, a_t^\star\right)$ in basis functions, with time-dependent coefficients:

$$
a_t^\star(X_t) = \sum_{n=1}^{M} \phi_{nt}\, \Phi_n(X_t), \qquad Q_t^\star\left(X_t, a_t^\star\right) = \sum_{n=1}^{M} \omega_{nt}\, \Phi_n(X_t). \tag{10.49}
$$

The coefficients $\phi_{nt}$ and $\omega_{nt}$ are computed recursively backward in time for $t = T-1, \ldots, 0$. First, we find the coefficients $\phi_{nt}$ of the optimal action expansion. They are found by minimization of the following quadratic functional, which is obtained by replacing the expectation in Eq. (10.42) by an MC estimate, dropping all $a_t$-independent terms, substituting the expansion (10.49) for $a_t$, and changing the overall sign to convert maximization into minimization:

$$
G_t(\phi) = \sum_{k=1}^{N_{MC}}\left(-\sum_n \phi_{nt}\Phi_n\left(X_t^k\right)\Delta S_t^k + \gamma\lambda\left(\hat{\Pi}_{t+1}^k - \sum_n \phi_{nt}\Phi_n\left(X_t^k\right)\Delta\hat{S}_t^k\right)^2\right). \tag{10.50}
$$

This formulation automatically ensures averaging over market scenarios at time $t$. Minimization of Eq. (10.50) with respect to the coefficients $\phi_{nt}$ produces a set of linear equations:

$$
\sum_{m=1}^{M} A_{nm}^{(t)}\phi_{mt} = B_n^{(t)}, \qquad n = 1, \ldots, M, \tag{10.51}
$$

where

$$
A_{nm}^{(t)} := \sum_{k=1}^{N_{MC}} \Phi_n\left(X_t^k\right)\Phi_m\left(X_t^k\right)\left(\Delta\hat{S}_t^k\right)^2, \qquad
B_n^{(t)} := \sum_{k=1}^{N_{MC}} \Phi_n\left(X_t^k\right)\left[\hat{\Pi}_{t+1}^k \Delta\hat{S}_t^k + \frac{1}{2\gamma\lambda}\Delta S_t^k\right], \tag{10.52}
$$

which produces the solution for the coefficients of the expansion of the optimal action $a_t^\star(X_t)$ in vector form:

$$
\phi_t^\star = \mathbf{A}_t^{-1}\mathbf{B}_t, \tag{10.53}
$$

where $\mathbf{A}_t$ and $\mathbf{B}_t$ are a matrix and a vector, respectively, with elements given by Eq. (10.52). Note the similarity between this expression and the general relation (10.45) for the optimal action. Once the optimal action $a_t^\star$ at time $t$ is found in terms of its coefficients (10.53), we turn to the problem of finding the coefficients $\omega_{nt}$ of the basis function expansion (10.49) for the optimal Q-function. To this end, the one-step Bellman optimality equation (10.40) for $a_t = a_t^\star$ is interpreted as a regression of the form

$$
R_t\left(X_t, a_t^\star, X_{t+1}\right) + \gamma \max_{a_{t+1} \in \mathcal{A}} Q_{t+1}^\star\left(X_{t+1}, a_{t+1}\right) = Q_t^\star\left(X_t, a_t^\star\right) + \varepsilon_t, \tag{10.54}
$$

where $\varepsilon_t$ is a random noise at time $t$ with mean zero.
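A single backward step of the hedge-coefficient computation (10.52)-(10.53) reduces to assembling a matrix and solving a linear system. In this minimal sketch, the cross-sectional inputs are synthetic placeholders, and a simple monomial basis in $X$ stands in for whatever basis (e.g. splines) one would use in practice.

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, M, lam = 8000, 4, 0.001
gamma = np.exp(-0.03 / 12)

# Synthetic cross-section at one time step t (placeholders for MC data):
X = rng.normal(0.0, 0.1, n_paths)                 # states X_t^k
dS = rng.normal(0.5, 2.0, n_paths)                # dS_t^k
dS_hat = dS - dS.mean()
Pi_hat = 0.6 * dS_hat + rng.normal(0.0, 1.0, n_paths)   # de-meaned Pi_{t+1}^k

# Basis matrix Phi[k, n] = Phi_n(X_t^k); here monomials X^0, ..., X^{M-1}.
Phi = np.vander(X, M, increasing=True)

# Eq. (10.52): A_nm = sum_k Phi_n Phi_m (dS_hat)^2,
#              B_n  = sum_k Phi_n (Pi_hat dS_hat + dS / (2 gamma lam)).
A = (Phi * (dS_hat**2)[:, None]).T @ Phi
B = Phi.T @ (Pi_hat * dS_hat + dS / (2 * gamma * lam))

phi = np.linalg.solve(A, B)                       # Eq. (10.53): phi_t = A^{-1} B
a_star = Phi @ phi                                # optimal action on each path
print(phi)
```

The same step is repeated for each $t = T-1, \ldots, 0$, feeding the resulting $a_t^\star$ into the Q-function regression described next.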
Clearly, taking expectations of both sides of (10.54), we recover Eq. (10.40) with $a_t = a_t^{\star}$; therefore, Eqs. (10.54) and (10.40) are equivalent in expectation when $a_t = a_t^{\star}$. The coefficients $\omega_{nt}$ are therefore found by solving the following least squares optimization problem:

$$ F_t(\omega) = \sum_{k=1}^{N_{MC}} \left( R_t\left(X_t^k, a_t^{\star}, X_{t+1}^k\right) + \gamma \max_{a_{t+1} \in \mathcal{A}} Q_{t+1}^{\star}\left(X_{t+1}^k, a_{t+1}\right) - \sum_n \omega_{nt} \Phi_n\!\left(X_t^k\right) \right)^{2}. \tag{10.55} $$

Introducing another pair of a matrix $\mathbf{C}_t$ and a vector $\mathbf{D}_t$ with elements

$$ C_{nm}^{(t)} := \sum_{k=1}^{N_{MC}} \Phi_n\!\left(X_t^k\right) \Phi_m\!\left(X_t^k\right), \qquad D_n^{(t)} := \sum_{k=1}^{N_{MC}} \Phi_n\!\left(X_t^k\right) \left[ R_t\left(X_t^k, a_t^{\star}, X_{t+1}^k\right) + \gamma \max_{a_{t+1} \in \mathcal{A}} Q_{t+1}^{\star}\left(X_{t+1}^k, a_{t+1}\right) \right], \tag{10.56} $$

we obtain the vector-valued solution for the optimal weights $\omega_t$ defining the optimal Q-function at time $t$:

$$ \omega_t^{\star} = \mathbf{C}_t^{-1} \mathbf{D}_t. \tag{10.57} $$

Equations (10.53) and (10.57), computed jointly and recursively for $t = T-1, \ldots, 0$, provide a practical implementation of the backward recursion scheme of Sect. 4.3 in a continuous-space setting using expansions in basis functions. This approach can be used to find the optimal price and optimal hedge when the dynamics are known.

? Multiple Choice Question 2

Select all the following correct statements:

a. The coefficients of expansion of the Q-function in the QLBS model are obtained in the DP solution from the Bellman equation interpreted as a classification problem, which is solved using deep learning.
b. The coefficients of expansion of the Q-function in the QLBS model are obtained in the DP solution from the Bellman equation interpreted as a regression problem, which is solved using least square minimization.
c. The DP solution requires rewards to be observable.
d. The DP solution computes rewards as a part of the hedge.

4.5 RL Solution for QLBS: Fitted Q Iteration

When the transition probabilities and reward functions are not known, the QLBS model can be solved using reinforcement learning. In this section, we demonstrate this approach using a version of Q-learning that is formulated for continuous state-action spaces and is known as Fitted Q Iteration.
Our setting assumes batch-mode learning, where we only have access to some historically collected data. The available data are given by a set of $N_{MC}$ trajectories for the underlying stock $S_t$ (expressed as a function of $X_t$ using Eq. (10.26)), hedge position $a_t$, instantaneous reward $R_t$, and the next-time value $X_{t+1}$:

$$ \left\{ \left( X_t^{(n)}, a_t^{(n)}, R_t^{(n)}, X_{t+1}^{(n)} \right) \right\}_{t=0}^{T-1}, \qquad n = 1, \ldots, N_{MC}. \tag{10.58} $$

We assume that such a dataset is available either as simulated data, or as real historical stock price data combined with real trading data or artificial data that would track the performance of a hypothetical stock-and-cash replicating portfolio for a given option.

A starting point of the Fitted Q Iteration (FQI) method (Ernst et al. 2005; Murphy 2005) is the choice of a parametric family of models for the quantities of interest, namely the optimal action and the optimal action-value function. We use linear architectures, where the functions sought are linear in adjustable parameters that are then optimized to find the optimal action and action-value function. We use the same set of basis functions $\{\Phi_n(x)\}$ as we used above in Sect. 4.4. As the optimal Q-function $Q_t^{\star}(X_t, a_t)$ is a quadratic function of $a_t$, we can represent it as an expansion in basis functions, with time-dependent coefficients parameterized by a $3 \times M$ matrix $\mathbf{W}_t$:

$$ Q_t^{\star}(X_t, a_t) = \left( 1, \; a_t, \; \tfrac{1}{2} a_t^2 \right) \begin{pmatrix} W_{11}(t) & W_{12}(t) & \cdots & W_{1M}(t) \\ W_{21}(t) & W_{22}(t) & \cdots & W_{2M}(t) \\ W_{31}(t) & W_{32}(t) & \cdots & W_{3M}(t) \end{pmatrix} \begin{pmatrix} \Phi_1(X_t) \\ \vdots \\ \Phi_M(X_t) \end{pmatrix} := \mathbf{A}_t^T \mathbf{W}_t \Phi(X_t) := \mathbf{A}_t^T \mathbf{U}_W(t, X_t). \tag{10.59} $$

Equation (10.59) is further re-arranged to convert it into a product of a parameter vector and a vector that depends on both the state and the action:

$$ Q_t^{\star}(X_t, a_t) = \mathbf{A}_t^T \mathbf{W}_t \Phi(X) = \sum_{i=1}^{3} \sum_{j=1}^{M} \left[ \mathbf{W}_t \odot \left( \mathbf{A}_t \otimes \Phi^T(X) \right) \right]_{ij} = \overrightarrow{\mathbf{W}}_t \cdot \mathrm{vec}\left( \mathbf{A}_t \otimes \Phi^T(X) \right) := \overrightarrow{\mathbf{W}}_t \, \Psi(X_t, a_t). \tag{10.60} $$

Here $\odot$ and $\otimes$ stand for the element-wise (Hadamard) product and the outer (Kronecker) product of two matrices, respectively.
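The feature map $\Psi(X_t, a_t) = \mathrm{vec}\left(\mathbf{A}_t \otimes \Phi^T(X_t)\right)$ of Eq. (10.60) is straightforward to build. The sketch below assumes the basis values `phi_x` for one observation are already computed, and uses column-major (Fortran-order) flattening to match the column-concatenation convention of $\mathrm{vec}(\cdot)$; any order works as long as it is used consistently for both vectors.

```python
import numpy as np

def psi_features(a, phi_x):
    """Psi(X_t, a_t) = vec(A_t ⊗ Phi^T(X_t)) from Eq. (10.60).

    a     : scalar action a_t
    phi_x : (M,) basis values Phi(X_t)
    """
    A = np.array([1.0, a, 0.5 * a ** 2])   # A_t = (1, a_t, a_t^2/2)
    outer = np.outer(A, phi_x)             # A_t ⊗ Phi^T(X_t), shape (3, M)
    return outer.flatten(order="F")        # concatenate columns -> length 3M

def q_value(W, a, phi_x):
    """Q*_t(X_t, a_t) as the dot product of vec(W_t) with Psi(X_t, a_t)."""
    return W.flatten(order="F") @ psi_features(a, phi_x)
```

By construction, `q_value(W, a, phi_x)` coincides with the matrix form $\mathbf{A}_t^T \mathbf{W}_t \Phi(X_t)$ of Eq. (10.59).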
The vector of time-dependent parameters $\overrightarrow{\mathbf{W}}_t$ is obtained by concatenating the columns of the matrix $\mathbf{W}_t$, and similarly, $\Psi(X_t, a_t) = \mathrm{vec}\left( \mathbf{A}_t \otimes \Phi^T(X) \right)$ denotes a vector obtained by concatenating the columns of the outer product of the vectors $\mathbf{A}_t$ and $\Phi(X)$. The coefficients $\overrightarrow{\mathbf{W}}_t$ can now be computed recursively backward in time for $t = T-1, \ldots, 0$. To this end, the one-step Bellman optimality equation (10.40) is interpreted as a regression of the form

$$ R_t(X_t, a_t, X_{t+1}) + \gamma \max_{a_{t+1} \in \mathcal{A}} Q_{t+1}^{\star}(X_{t+1}, a_{t+1}) = \overrightarrow{\mathbf{W}}_t \, \Psi(X_t, a_t) + \varepsilon_t, \tag{10.61} $$

where $\varepsilon_t$ is random noise at time $t$ with mean zero. Equations (10.61) and (10.40) are equivalent in expectation: taking the expectation of both sides of (10.61), we recover (10.40) with the function approximation (10.59) used for the optimal Q-function $Q_t^{\star}(x, a)$. The coefficients $\overrightarrow{\mathbf{W}}_t$ are therefore found by solving the following least squares optimization problem:

$$ L_t(\mathbf{W}_t) = \sum_{k=1}^{N_{MC}} \left( R_t\left(X_t^k, a_t^k, X_{t+1}^k\right) + \gamma \max_{a_{t+1} \in \mathcal{A}} Q_{t+1}^{\star}\left(X_{t+1}^k, a_{t+1}\right) - \overrightarrow{\mathbf{W}}_t \, \Psi\left(X_t^k, a_t^k\right) \right)^{2}. \tag{10.62} $$

Note that this relation holds for a general off-model, off-policy setting of the Fitted Q Iteration method of RL. Performing the minimization, we obtain

$$ \overrightarrow{\mathbf{W}}_t^{\star} = \mathbf{S}_t^{-1} \mathbf{M}_t, \tag{10.63} $$

where

$$ S_{nm}^{(t)} := \sum_{k=1}^{N_{MC}} \Psi_n\left(X_t^k, a_t^k\right) \Psi_m\left(X_t^k, a_t^k\right), \qquad M_n^{(t)} := \sum_{k=1}^{N_{MC}} \Psi_n\left(X_t^k, a_t^k\right) \left[ R_t\left(X_t^k, a_t^k, X_{t+1}^k\right) + \gamma \max_{a_{t+1} \in \mathcal{A}} Q_{t+1}^{\star}\left(X_{t+1}^k, a_{t+1}\right) \right]. \tag{10.64} $$

To perform the maximization step in the second equation in (10.64) analytically, note that because the coefficients $\overrightarrow{\mathbf{W}}_{t+1}$, and hence the vectors $\mathbf{U}_W(t+1, X_{t+1}) := \mathbf{W}_{t+1} \Phi(X_{t+1})$ (see Eq. (10.59)), are known from the previous step, we have

$$ Q_{t+1}^{\star}\left(X_{t+1}, a_{t+1}^{\star}\right) = U_W^{(0)}(t+1, X_{t+1}) + a_{t+1}^{\star} \, U_W^{(1)}(t+1, X_{t+1}) + \frac{\left(a_{t+1}^{\star}\right)^{2}}{2} \, U_W^{(2)}(t+1, X_{t+1}). \tag{10.65} $$

We emphasize here that while this is a quadratic expression in $a_{t+1}^{\star}$, it would be wrong to use the point of its maximum as a function of $a_{t+1}^{\star}$ as such an optimal value in Eq. (10.65).
This would amount to using the same dataset to estimate both the optimal action and the optimal Q-function, leading to an overestimation of $Q_{t+1}^{\star}\left(X_{t+1}, a_{t+1}^{\star}\right)$ in Eq. (10.64), due to Jensen's inequality and the convexity of the $\max(\cdot)$ function. The correct approach for using Eq. (10.65) is to input a value of $a_{t+1}^{\star}$ computed using the analytical solution Eq. (10.45) (implemented in the sample-based approach in Eq. (10.53)), applied at the previous time step. Due to the availability of the analytical optimal action (10.45), a potential overestimation problem, a classical problem of Q-learning that is sometimes addressed using methods such as Double Q-learning (van Hasselt 2010), is avoided in the QLBS model, leading to numerically stable results. Equation (10.63) gives the solution for the QLBS model in a model-free and off-policy setting, via its reliance on Fitted Q Iteration, which is a model-free and off-policy algorithm (Ernst et al. 2005; Murphy 2005).

? Multiple Choice Question 3

Select all the following correct statements:

a. Unlike the classical Black–Scholes model, the discrete-time QLBS model explicitly prices mis-hedging risk of the option, because it maximizes the Q-function which incorporates mis-hedging risk as a penalty.
b. Counting by the number of parameters to learn, the RL setting for the QLBS model has more unknowns, but also a higher dimensionality of data (more features per observation), than the DP setting.
c. The BS solution is recovered from the RL solution in the limit $\Delta t \to 0$ and $\lambda \to 0$.
d. The RL solution is recovered from the BS solution in the limit $\Delta t \to \infty$ and $\lambda \to \infty$.

4.6 Examples

Here we illustrate the performance of the QLBS model using simulated stock price histories $S_t$ with the initial stock price $S_0 = 100$, stock drift $\mu = 0.05$, and volatility $\sigma = 0.15$. Option maturity is $T = 1$ year, and the risk-free rate is $r = 0.03$. We consider an ATM ("at-the-money") European put option with strike $K = 100$.
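The simulated stock price histories used in these examples can be generated in a vectorized way. The sketch below assumes geometric Brownian motion dynamics for the stock and the de-trended state variable $X_t = -\left(\mu - \sigma^2/2\right)t + \log S_t$ (our reading of Eq. (10.26), which lies outside this excerpt); the variable names are illustrative.

```python
import numpy as np

# Model and simulation parameters from the example in the text
S0, mu, sigma, r = 100.0, 0.05, 0.15, 0.03
T, n_steps, n_paths = 1.0, 24, 50_000      # bi-weekly re-hedges, dt = 1/24
dt = T / n_steps

rng = np.random.default_rng(42)
Z = rng.standard_normal((n_paths, n_steps))

# GBM log-price paths: log S_t = log S_0 + cumsum of (mu - sigma^2/2) dt + sigma sqrt(dt) Z
log_S = np.concatenate(
    [np.full((n_paths, 1), np.log(S0)),
     np.log(S0) + np.cumsum((mu - 0.5 * sigma ** 2) * dt
                            + sigma * np.sqrt(dt) * Z, axis=1)],
    axis=1)
S = np.exp(log_S)                           # shape (n_paths, n_steps + 1)

# De-trended state variable X_t (assumed form of Eq. (10.26))
t_grid = np.arange(n_steps + 1) * dt
X = -(mu - 0.5 * sigma ** 2) * t_grid + log_S
```

The resulting array `X` is what the cubic B-spline basis functions of the text would be evaluated on, over the range between its smallest and largest observed values.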
Re-hedges are performed bi-weekly (i.e., $\Delta t = 1/24$). We use $N_{MC} = 50{,}000$ Monte Carlo scenarios of the stock price trajectory and report results obtained with two MC runs (each having $N_{MC}$ paths), where the error reported is equal to one standard deviation calculated from these runs. In our experiments, we use pure risk-based hedges, i.e. we omit the second term in the numerator in Eq. (10.45), for ease of comparison with the BSM model. We use 12 basis functions chosen to be cubic B-splines on a range of values of $X_t$ between the smallest and largest values observed in the dataset. In the experiments below, we pick the Markowitz risk-aversion parameter $\lambda = 0.001$. This produces a visible difference of QLBS prices from BS prices, while remaining not too far away from them. The dependence of the ATM option price on $\lambda$ is shown in Fig. 10.1. A simulated path and solutions for optimal hedges, portfolio values, and Q-function values corresponding to the DP solution of Sect. 4.4 are illustrated in Fig. 10.2. The resulting QLBS ATM put option price is $4.90 \pm 0.12$ (based on two MC runs), while the BS price is $4.53$. We first report results obtained with on-policy learning with $\lambda = 0.001$. In this case, optimal actions and rewards computed as a part of the DP solution are used as inputs to the Fitted Q Iteration algorithm of Sect. 4.5 and the IRL method of Sect. 10.2, in addition to the paths of the underlying stock.

Fig. 10.1 The ATM put option price vs the risk-aversion parameter. The time step is $\Delta t = 1/24$. The horizontal red line corresponds to the continuous-time BS model price. Error bars correspond to one standard deviation of two MC runs.

Results of two MC batches with the Fitted Q Iteration algorithm of Sect. 4.5 are shown (respectively, in the left and right columns, with a random selection of a few trajectories) in Fig. 10.3. Similar to the DP solution, we add a unit matrix with a regularization parameter of $10^{-3}$ to invert the matrix $\mathbf{S}_t$ in Eq. (10.63).
Note that because here we use on-policy learning, the resulting optimal Q-function $Q_t^{\star}(X_t, a_t)$ and its optimal value $Q_t^{\star}\left(X_t, a_t^{\star}\right)$ are virtually identical in the graph. The resulting QLBS RL put price is $4.90 \pm 0.12$, which is identical to the DP value. As expected, the IRL method of Sect. 10.2 produces the same result.

In the next set of experiments, we consider off-policy learning. The risk-aversion parameter is $\lambda = 0.001$. To generate off-policy data, we multiply, at each time step, the optimal hedges computed by the DP solution of the model by a random uniform number in the interval $[1 - \eta, 1 + \eta]$, where $0 < \eta < 1$ is a parameter controlling the noise level in the data. We consider the values $\eta = [0.15, 0.25, 0.35, 0.5]$ to test the noise tolerance of our algorithms. Rewards corresponding to these sub-optimal actions are obtained using Eq. (10.35). In Fig. 10.4 we show results obtained for off-policy learning with 10 different scenarios of sub-optimal actions obtained by random perturbations of a fixed simulated dataset. Note that the impact of sub-optimality of actions in the recorded data is rather mild, at least for a moderate level of noise. This is as expected, since Fitted Q Iteration is an off-policy algorithm. This implies that when the dataset is large enough, the QLBS model can learn even from data with purely random actions. In particular, if the stock prices are log-normal, it can learn the BSM model itself. Results of two MC batches for off-policy learning with the noise parameter $\eta = 0.5$ with the Fitted Q Iteration algorithm are shown in Fig. 10.5.

Fig. 10.2 The DP solution for the ATM put option on a subset of MC paths.

4.7 Option Portfolios

Thus far we have only considered the problem of hedging and pricing of a single European option by an option seller that does not have any pre-existing option portfolio.
Here we outline a simple generalization to the case when the option seller does have such a pre-existing option portfolio, or alternatively when she seeks to sell a few options simultaneously. In this case, she is concerned with the consistency of pricing and hedging of all options in her new portfolio. In other words, she has to solve the notorious volatility smile problem for her particular portfolio. Here we outline how she can solve it using the QLBS model, illustrating the flexibility and data-driven nature of the model. Such flexibility facilitates adaptation to arbitrary consistent volatility smiles.

Fig. 10.3 The RL solution (Fitted Q Iteration) for on-policy learning for the ATM put option on a subset of MC paths for two MC batches.

Assume the option seller has a pre-existing portfolio of $K$ options with market prices $C_1, \ldots, C_K$. All these options reference an underlying state vector (market) $X_t$, which can be high-dimensional, such that each particular option $C_i$ with $i = 1, \ldots, K$ references only one or a few components of the market state $X_t$. Alternatively, we can add vanilla option prices as components of the market state $X_t$. In this case, our dynamic replicating portfolio would include vanilla options, along with the underlying stocks. Such a hedging portfolio would provide a dynamic generalization of the static option hedging for exotics introduced by Carr et al. (1998). We assume that we have a historical dataset $\mathcal{F}$ which includes $N$ observations of trajectories of tuples of vector-valued market factors, actions (hedges), and rewards (compare with Eq. (10.58)):

Fig.
10.4 Means and standard deviations of option prices obtained with off-policy FQI learning, with data obtained by randomization of DP optimal actions, multiplying each optimal action by a uniform random variable in the interval $[1-\eta, 1+\eta]$ for $\eta = [0.15, 0.25, 0.35, 0.5]$. Error bars are obtained with 10 scenarios for each value of $\eta$. The horizontal red line shows the value obtained with on-policy learning, corresponding to $\eta = 0$.

$$ \left\{ \left( X_t^{(n)}, a_t^{(n)}, R_t^{(n)}, X_{t+1}^{(n)} \right) \right\}_{t=0}^{T-1}, \qquad n = 1, \ldots, N. \tag{10.66} $$

We now assume that the option seller seeks to add to this pre-existing portfolio another (exotic) option $C_e$ (or, alternatively, she seeks to sell a portfolio of options $C_1, \ldots, C_K, C_e$). Depending on whether the exotic option $C_e$ was traded before in the market or not, there are two possible scenarios. We shall analyze these scenarios one by one.

In the first case, the exotic option $C_e$ was previously traded in the market (by the seller herself, or by someone else). As long as its deltas and related P&L impacts marked by a trading desk are available, we can simply extend the vectors of actions $a_t^{(n)}$ and rewards $R_t^{(n)}$ in Eq. (10.66), and then proceed with the FQI algorithm of Sect. 4.5 (or with the IRL algorithm of Sect. 10.2, if rewards are not available). The outputs of the algorithm will be the optimal price $P_t$ of the whole option portfolio, plus optimal hedges for all options in the portfolio. Note that because FQI is an off-policy algorithm, it is quite forgiving of human or model errors: deltas in the data need not even be perfectly mutually consistent (see the single-option examples in the previous section). But of course, the more consistency in the data, the less data is needed to learn an optimal portfolio price $P_t$. Once the optimal time-zero value $P_0$ of the total portfolio $C_1, \ldots, C_K, C_e$ is computed, a market-consistent price for the exotic option is simply given by a subtraction:

Fig.
10.5 The RL solution (Fitted Q Iteration) for off-policy learning with noise parameter $\eta = 0.5$ for the ATM put option on a subset of MC paths for two MC batches.

$$ C_e = P_0 - \sum_{i=1}^{K} C_i. $$

Note that, by construction, the price $C_e$ is consistent with all option prices $C_1, \ldots, C_K$ and all their hedges, to the extent they are consistent between themselves (again, this is because Q-learning is an off-policy algorithm).

Now consider a different case, when the exotic option $C_e$ was not previously traded in the market, and therefore no historical hedges are available for this option. This can be handled by the QLBS model in essentially the same way as in the previous case. Again, because Q-learning is an off-policy algorithm, a delta and a reward of a proxy option $C_{e'}$ (that was traded before) that is close to $C_e$ could be used in the scheme just described in lieu of their actual values for the option $C_e$. Consistent with common intuition, this will just slow down the learning, so that more data would be needed to compute the optimal price and hedge for the exotic $C_e$. On the other hand, the closer the traded proxy $C_{e'}$ is to the actual exotic $C_e$ the option seller wants to hedge and price, the more it helps the algorithm on the data demand side.

4.8 Possible Extensions

So far we have presented the QLBS model in its most basic setting, where it is applied to a single European vanilla option such as a put or call option. This framework can be extended or generalized along several directions. Here we overview them, in the order of increasing complexity of the changes that would be needed on top of the basic computational framework presented above. The simplest extension of the basic QLBS setting is to apply it to European options with a non-vanilla terminal payoff, e.g. to a straddle option. Clearly, the only change needed to the basic QLBS setting in this case would be a different terminal condition for the action-value function.
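As a minimal sketch of this extension, the terminal payoff is the only piece of code that changes; the function below shows a vanilla put next to the straddle mentioned in the text (the function name and interface are assumptions, and in the QLBS model the terminal condition for the Q-function is built from this payoff with the appropriate sign, the price being the negative of the Q-function).

```python
import numpy as np

def terminal_payoff(S_T, K, payoff="put"):
    """Terminal option payoff used to set the QLBS terminal condition.

    S_T : array of terminal stock prices over MC paths
    K   : strike
    """
    if payoff == "put":
        return np.maximum(K - S_T, 0.0)     # vanilla European put
    if payoff == "straddle":
        return np.abs(S_T - K)              # non-vanilla example from the text
    raise ValueError(f"unknown payoff: {payoff}")
```

Swapping `"put"` for `"straddle"` (or any other European payoff function) leaves the rest of the backward recursion untouched.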
The second extension that is easy to incorporate in the basic QLBS setting is early exercise features for options. These can be added to the QLBS model in much the same way as they are implemented in the American Monte Carlo method of Longstaff and Schwartz. Namely, in the backward recursion, at each time step where an early option exercise is possible, the optimal action-value function is obtained by comparing its value continued from the next time step to the current time step with the intrinsic option value. The latter is defined as the payoff from an immediate exercise of the option; see also Exercise 10.2.

One more possible extension involves capturing higher moments of the replicating portfolio. This assumes using a non-quadratic utility function. One approach is to use an exponential utility function, as was outlined above (see also Halperin (2018)). On the computational side, using a non-quadratic utility gives rise to the need to solve a convex optimization problem at each time step, instead of a quadratic optimization.

The basic QLBS framework can also be extended by incorporating transaction costs. This requires re-defining the state and action spaces in the problem. As in the presence of transaction costs holding cash is not equivalent to holding a stock, for this case we can use changes in the stock holding as action variables, while the current stock holding and the stock market price should now be made parts of a state vector. Depending on the functional model for the transaction cost, the resulting optimization problem can be either quadratic (if both the reward and transaction cost functions are quadratic in the action), or convex, if both these functions are convex.

Finally, the basic framework can be generalized to a multi-asset setting, including option portfolios. The main challenge of such a task would be to specify a good set of basis functions. In multiple dimensions, this might be a challenging problem.
Indeed, a simple method to form a basis in a multi-dimensional space is to take a direct (cross) product of individual bases, but this produces an exponential number of basis functions. As a result, such a naive approach becomes intractable beyond a rather low (< 10) number of dimensions. Feature selection in high-dimensional spaces is a general problem in machine learning, which is not specific to reinforcement learning or the QLBS approach. The latter can benefit from methods developed in the literature. Rather than pursuing this direction, we now turn to a different and equally canonical finance application, namely the multi-period optimization of stock portfolios. We will show that such a multi-asset setting may entirely avoid the need to choose basis functions.

5 G-Learning for Stock Portfolios

5.1 Introduction

In this section, we consider a multi-dimensional setting with a multi-asset investment portfolio. Specifically, we consider a stock portfolio, although similar methods can be used for portfolios of other assets, including options. As we mentioned above in this chapter, one challenge with scaling reinforcement learning to multiple dimensions is the computational cost and the problem of under-sampling due to the curse of dimensionality. Another potential (and related) issue is the pronounced importance of noise in the data. With finite samples, estimates of functions such as the action-value function or the policy function obtained from noisy high-dimensional data can become quite noisy themselves. Rather than relying on deterministic policies as in Q-learning, we may prefer to work with probabilistic methods where such noise can be captured. The framework presented below is designed as a probabilistic approach that scales to a very high-dimensional setting.
Again, for ease of exposition, we consider methods for quadratic (Markowitz) reward functions; however, the approach can be generalized to other reward (utility) functions. Our approach is based on a probabilistic extension of Q-learning known in the literature as "G-learning." While G-learning was initially formulated for finite MDPs, here we extend it to a continuous-state and continuous-action case. For an arbitrary reward function, this requires relying on a set of pre-specified basis functions, or using universal function approximators (e.g., neural networks) to represent the action-value function. However, as we will see below, when the reward function is quadratic, neither approach is needed, and the portfolio optimization procedure is semi-analytic.

5.2 Investment Portfolio

We adopt the notation and assumptions of the portfolio model suggested by Boyd et al. (2017). In this model, dollar values of positions in $n$ assets $i = 1, \ldots, n$ are denoted as a vector $x_t$, with component $(x_t)_i$ being the dollar value of asset $i$ at the beginning of period $t$. In addition to the assets $x_t$, the investment portfolio includes a risk-free bank cash account $b_t$ with a risk-free interest rate $r_f$. A short position in any asset $i$ corresponds to a negative value $(x_t)_i < 0$. The vector of means of bid and ask prices of the assets at the beginning of period $t$ is denoted as $p_t$, with $(p_t)_i > 0$ being the price of asset $i$. Trades $u_t$ are made at the beginning of interval $t$, so that asset values $x_t^{+}$ immediately after the trades are deterministic:

$$ x_t^{+} = x_t + u_t. \tag{10.68} $$

The total portfolio value is

$$ v_t = \mathbf{1}^T x_t + b_t, $$

where $\mathbf{1}$ is a vector of ones. The post-trade portfolio value is therefore

$$ v_t^{+} = \mathbf{1}^T x_t^{+} + b_t^{+} = \mathbf{1}^T (x_t + u_t) + b_t^{+} = v_t + \mathbf{1}^T u_t + b_t^{+} - b_t. $$

We assume that all rebalancing of stock positions is financed from the bank cash account (additional cash costs related to the trades will be introduced below).
This imposes the following "self-financing" constraint:

$$ \mathbf{1}^T u_t + b_t^{+} - b_t = 0, \tag{10.71} $$

which simply means that the portfolio value remains unchanged upon an instantaneous rebalancing of the wealth between the stocks and cash: $v_t^{+} = v_t$. The post-trade portfolio $v_t^{+}$ and cash are invested from the beginning of period $t$ until the beginning of the next period. The return of asset $i$ over period $t$ is defined as

$$ (r_t)_i = \frac{(p_{t+1})_i - (p_t)_i}{(p_t)_i}, \qquad i = 1, \ldots, n. $$

Asset positions at the next time period are then given by

$$ x_{t+1} = x_t^{+} + r_t \circ x_t^{+}, $$

where $\circ$ denotes an element-wise (Hadamard) product, and $r_t \in \mathbb{R}^n$ is the vector of asset returns from period $t$ to period $t+1$. The next-period portfolio value is then obtained as follows:

$$ v_{t+1} = \mathbf{1}^T x_{t+1} + (1 + r_f) \, b_t^{+} \tag{10.75} $$
$$ \phantom{v_{t+1}} = (1 + r_t)^T (x_t + u_t) + (1 + r_f) \, b_t^{+}. \tag{10.76} $$

Given a vector of returns $r_t$ in period $t$, the change of the portfolio value in excess of the risk-free growth is

$$ \Delta v_t := v_{t+1} - (1 + r_f) v_t = (1 + r_t)^T (x_t + u_t) + (1 + r_f) b_t^{+} - (1 + r_f) \mathbf{1}^T x_t - (1 + r_f) b_t = (r_t - r_f \mathbf{1})^T (x_t + u_t), \tag{10.78} $$

where in the second equation we used Eq. (10.71).

5.3 Terminal Condition

As we generally assume a finite-horizon portfolio optimization with a finite investment horizon $T$, we must supplement the problem with a proper terminal condition at time $T$. For example, if the investment portfolio should track a given benchmark portfolio (e.g., the market portfolio), the terminal condition is obtained from the requirement that at time $T$, all stock positions should be equal to the actual observed weights of the stocks in the benchmark, $x_T^{B}$. This implies that $x_T = x_T^{B}$. By Eq. (10.68), this fixes the action $u_T$ at the last time step:

$$ u_T = x_T^{B} - x_{T-1}. \tag{10.79} $$

Therefore, the action $u_T$ at the last step is deterministic and is not subject to the optimization, which should be applied to the $T$ remaining actions $u_{T-1}, \ldots, u_0$.
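The one-period accounting above (Eqs. (10.68)-(10.78)) can be checked numerically. The sketch below is not from the text; the function name and the array-based interface are assumptions.

```python
import numpy as np

def step_portfolio(x, b, u, r, rf):
    """One period of the Boyd et al. (2017) accounting used in the text.

    x  : (n,) dollar positions at the start of period t
    b  : cash account at the start of period t
    u  : (n,) trades at the start of the period
    r  : (n,) realized asset returns over the period
    rf : risk-free rate
    """
    x_plus = x + u                      # Eq. (10.68)
    b_plus = b - u.sum()                # self-financing constraint, Eq. (10.71)
    x_next = x_plus * (1.0 + r)         # element-wise growth of positions
    b_next = (1.0 + rf) * b_plus        # risk-free growth of cash
    v_next = x_next.sum() + b_next      # next-period portfolio value
    return x_next, b_next, v_next

# The excess growth v_{t+1} - (1+rf) v_t equals (r - rf 1)^T (x + u), Eq. (10.78).
```

The identity in the final comment holds for any trade vector, which is a useful sanity check for an implementation of the self-financing constraint.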
Alternatively, the goal of the investment portfolio can be maximization of a risk-adjusted cumulative reward of the portfolio. In this case, an appropriate terminal condition could be $x_T = 0$, meaning that any remaining long stock positions should be converted to cash at time $T$.

5.4 Asset Returns Model

We assume the following linear specification of one-period excess asset returns:

$$ r_t - r_f \mathbf{1} = \mathbf{W} z_t - \mathbf{M}^T u_t + \varepsilon_t, \tag{10.80} $$

where $z_t$ is a vector of predictors with factor loading matrix $\mathbf{W}$, $\mathbf{M}$ is a matrix of permanent market impacts with a linear impact specification, and $\varepsilon_t$ is a vector of residuals with

$$ \mathbb{E}\left[\varepsilon_t\right] = 0, \qquad \mathbb{V}\left[\varepsilon_t\right] = \Sigma_r, $$

where $\mathbb{E}[\cdot]$ denotes an expectation with respect to the physical measure $\mathbb{P}$. Equation (10.80) specifies the stochastic returns $r_t$, or equivalently the next-step stock prices, as driven by external signals $z_t$, control (action) variables $u_t$, and uncontrollable noise $\varepsilon_t$. Though they enter "symmetrically" in Eq. (10.80), the two drivers of returns $z_t$ and $u_t$ play entirely different roles. While the signals $z_t$ are completely external to the agent, the actions $u_t$ are controlled degrees of freedom. In our approach, we will be looking for optimal controls $u_t$ for the market-wide portfolio. When we set up a proper optimization problem, we solve for an optimal action $u_t$. As will be shown in this section, this optimal control turns out to be a linear function of $x_t$, plus noise.

5.5 Signal Dynamics and State Space

Our approach is general and works for any set of predictors $z_t$ that might be relevant at the time scale $\Delta t$ of the portfolio rebalancing periods. For example, for daily portfolio trading with time steps $\Delta t \simeq 1/250$, predictors $z_t$ may include news and various market indices such as the VIX and MSCI indices. For portfolio trading on monthly or quarterly steps, additional predictors can include macroeconomic variables.
In the opposite limit of intra-day or high-frequency trading, instead of macroeconomic variables, variables derived from the current state of the limit order book (LOB) might be more useful. As a general rule, a predictor $z_t$ may be of interest if it satisfies three requirements: (i) it correlates with equity returns; (ii) it is predictable itself, to a certain degree (e.g., it can be a mean-reverting process); and (iii) its characteristic times $\tau$ are larger than the time step $\Delta t$. In particular, for a mean-reverting signal $z_t$, a mean reversion parameter $\kappa$ gives rise to a characteristic time scale $\tau \simeq 1/\kappa$. The last requirement simply means that if $\tau \ll \Delta t$ and the mean level of $z_t$ is zero, then the fluctuations of $z_t$ will be well described by a stationary white noise process, and thus will be indistinguishable from the white noise term that is already present in Eq. (10.80). It is for this reason that it would be futile to, e.g., include any features derived from the LOB in a portfolio construction designed for monthly rebalancing.

For the dynamics of the signals $z_t$, similar to Garleanu and Pedersen (2013), we will assume a simple multivariate mean-reverting Ornstein–Uhlenbeck (OU) process for the $K$-component vector $z_t$:

$$ z_{t+1} = \left( \mathbb{I} - \Phi \right) \circ z_t + \varepsilon_t^{z}, $$

where $\varepsilon_t^{z} \sim \mathcal{N}(0, \Sigma_z)$ is a noise term, and $\Phi$ is a diagonal matrix of mean reversion rates. It is convenient to form an extended state vector $y_t$ of size $n + K$ by concatenating the vectors $x_t$ and $z_t$:

$$ y_t = \begin{pmatrix} x_t \\ z_t \end{pmatrix}. $$

The extended vector $y_t$ describes the full state of the system for the agent, who has some control over its $x$-component, but no control over its $z$-component.

5.6 One-Period Rewards

We first consider an idealized case when there are no costs of taking an action $u_t$ at time step $t$. The instantaneous random reward received upon taking such an action is obtained by substituting Eq. (10.80) into Eq. (10.78):

$$ R_t^{(0)}(y_t, u_t) = \left( \mathbf{W} z_t - \mathbf{M}^T u_t + \varepsilon_t \right)^T (x_t + u_t). \tag{10.84} $$
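The OU signal dynamics and the friction-free reward (10.84) just introduced can be sketched together. Everything numeric below (the number of signals, mean-reversion rates, factor loadings, impact matrix, and positions) is an illustrative assumption, not a value from the text; the expectation of (10.84) uses $\mathbb{E}[\varepsilon_t] = 0$.

```python
import numpy as np

rng = np.random.default_rng(7)
n, K, n_steps = 2, 3, 200
Phi_mr = np.diag([0.05, 0.10, 0.25])    # diagonal mean reversion rates (assumed)
sigma_z = 0.1                            # isotropic signal noise scale (assumed)
W = 0.01 * rng.standard_normal((n, K))   # factor loadings (assumed)
M = 1e-4 * np.eye(n)                     # permanent impact matrix (assumed)

# Discrete-time OU signals: z_{t+1} = (I - Phi) z_t + eps_t^z
z = np.zeros((n_steps + 1, K))
for t in range(n_steps):
    z[t + 1] = (np.eye(K) - Phi_mr) @ z[t] + sigma_z * rng.standard_normal(K)
# Stationary std of component i is sigma_z / sqrt(1 - (1 - kappa_i)^2)

# Expected friction-free reward of Eq. (10.84) at one step, using E[eps] = 0:
x = np.array([1000.0, 500.0])            # dollar positions (assumed)
u = np.array([50.0, -25.0])              # trades (assumed)
r_expected = W @ z[10] - M.T @ u         # expected excess returns, Eq. (10.80)
R0 = r_expected @ (x + u)
```

Requirement (iii) above is visible here: with a rate of 0.25, shocks to the third signal decay over roughly four periods, so that signal still carries usable information at this rebalancing frequency.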
In addition to this reward that would be obtained in an ideal friction-free market, we must add (negative) rewards received due to the instantaneous market impact and transaction fees.⁶

⁶ We assume no short sale positions in our setting, and therefore do not include borrowing costs.

Furthermore, we must include a negative reward due to the risk in the newly created portfolio position at time $t+1$. Following Boyd et al. (2017), we choose a simple quadratic measure of such a risk penalty: the variance of the instantaneous reward (10.84) conditional on the new state $x_t + u_t$, multiplied by the risk-aversion parameter $\lambda$:

$$ R_t^{(risk)}(y_t, u_t) = -\lambda \, \mathbb{V}\left[ R_t^{(0)}(y_t, u_t) \,\middle|\, x_t + u_t \right] = -\lambda (x_t + u_t)^T \Sigma_r (x_t + u_t). \tag{10.85} $$

To specify the negative rewards (costs) of the instantaneous market impact and transaction costs, it is convenient to represent each action $u_{ti}$ as a difference of two non-negative action variables $u_{ti}^{+}, u_{ti}^{-} \geq 0$:

$$ u_{ti} = u_{ti}^{+} - u_{ti}^{-}, \qquad |u_{ti}| = u_{ti}^{+} + u_{ti}^{-}, \qquad u_{ti}^{+}, u_{ti}^{-} \geq 0, $$

so that $u_{ti} = u_{ti}^{+}$ if $u_{ti} > 0$ and $u_{ti} = -u_{ti}^{-}$ if $u_{ti} < 0$. The instantaneous market impact and transaction costs are then given by the following expressions:

$$ R_t^{(impact)}(y_t, u_t) = -x_t^T \Gamma^{+} u_t^{+} - x_t^T \Gamma^{-} u_t^{-} - x_t^T \Upsilon z_t, \qquad R_t^{(fee)}(y_t, u_t) = -\nu^{+T} u_t^{+} - \nu^{-T} u_t^{-}. \tag{10.87} $$

Here $\Gamma^{+}$, $\Gamma^{-}$, $\Upsilon$ and $\nu^{+}$, $\nu^{-}$ are, respectively, matrix-valued and vector-valued parameters, which in the simplest case can be parameterized in terms of single scalars multiplied by unit vectors or matrices. Combining Eqs. (10.84), (10.85), and (10.87), we obtain our final specification of a risk- and cost-adjusted instantaneous reward function for the problem of optimal portfolio liquidation:

$$ R_t(y_t, u_t) = R_t^{(0)}(y_t, u_t) + R_t^{(risk)}(y_t, u_t) + R_t^{(impact)}(y_t, u_t) + R_t^{(fee)}(y_t, u_t). \tag{10.88} $$

The expected one-step reward given the action $u_t = u_t^{+} - u_t^{-}$ is given by

$$ \hat{R}_t(y_t, u_t) = \hat{R}_t^{(0)}(y_t, u_t) + R_t^{(risk)}(y_t, u_t) + R_t^{(impact)}(y_t, u_t) + R_t^{(fee)}(y_t, u_t), \tag{10.89} $$

where
$$ \hat{R}_t^{(0)}(y_t, u_t) = \mathbb{E}_{t,u}\left[ R_t^{(0)}(y_t, u_t) \right] = \left( \mathbf{W} z_t - \mathbf{M}^T \left( u_t^{+} - u_t^{-} \right) \right)^T \left( x_t + u_t^{+} - u_t^{-} \right), \tag{10.90} $$

and where $\mathbb{E}_{t,u}[\cdot] := \mathbb{E}[\cdot \,|\, y_t, u_t]$ denotes averaging over the next period's realizations of market returns. Note that the one-step expected reward (10.89) is a quadratic form of its inputs. We can write it more explicitly using vector notation:

$$ \hat{R}(y_t, a_t) = y_t^T \mathbf{R}_{yy} y_t + a_t^T \mathbf{R}_{aa} a_t + a_t^T \mathbf{R}_{ay} y_t + a_t^T \mathbf{R}_a, \tag{10.91} $$

where

$$ a_t = \begin{pmatrix} u_t^{+} \\ u_t^{-} \end{pmatrix}, \qquad \mathbf{R}_{yy} = \begin{pmatrix} -\lambda \Sigma_r & \mathbf{W} - \Upsilon \\ 0 & 0 \end{pmatrix}, \qquad \mathbf{R}_{aa} = \begin{pmatrix} -\mathbf{M} - \lambda \Sigma_r & \mathbf{M} + \lambda \Sigma_r \\ \mathbf{M} + \lambda \Sigma_r & -\mathbf{M} - \lambda \Sigma_r \end{pmatrix}, $$

$$ \mathbf{R}_{ay} = \begin{pmatrix} -\mathbf{M} - 2\lambda \Sigma_r - \Gamma^{+} & \mathbf{W} \\ \mathbf{M} + 2\lambda \Sigma_r - \Gamma^{-} & -\mathbf{W} \end{pmatrix}, \qquad \mathbf{R}_a = \begin{pmatrix} -\nu^{+} \\ -\nu^{-} \end{pmatrix}. \tag{10.92} $$

5.7 Multi-period Portfolio Optimization

Multi-period portfolio optimization is equivalently formulated either as maximization of risk- and cost-adjusted returns, as in the Markowitz portfolio model, or as minimization of risk- and cost-adjusted trading costs. The latter specification is usually used in problems of optimal portfolio liquidation. The multi-period risk- and cost-adjusted reward maximization problem is defined as

$$ \text{maximize} \quad \mathbb{E}_t\left[ \sum_{t'=t}^{T-1} \gamma^{t'-t} \hat{R}_{t'}(y_{t'}, a_{t'}) \right] \tag{10.93} $$

$$ \text{where} \quad \hat{R}_{t'}(y_{t'}, a_{t'}) = y_{t'}^T \mathbf{R}_{yy} y_{t'} + a_{t'}^T \mathbf{R}_{aa} a_{t'} + a_{t'}^T \mathbf{R}_{ay} y_{t'} + a_{t'}^T \mathbf{R}_a $$

$$ \text{w.r.t.} \quad a_t = \begin{pmatrix} u_t^{+} \\ u_t^{-} \end{pmatrix} \geq 0, \qquad \text{subject to} \quad x_t + u_t^{+} - u_t^{-} \geq 0. $$

Here $0 < \gamma \leq 1$ is a discount factor. Note that the sum over future periods $t' = t, \ldots, T-1$ does not include the last period $t' = T$, because the last action is fixed by Eq. (10.79). The last constraint in Eq. (10.93) is appropriate for a long-only portfolio, and can be replaced by other constraints, for example a constraint on the portfolio leverage. With any (or both) of these constraints, the problem belongs to the class of convex optimization problems with constraints, and thus can be solved in a numerically efficient way (Boyd et al. 2017).
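The block structure of Eqs. (10.91)-(10.92) can be assembled and verified directly against the summed reward terms (10.85)-(10.90). The sketch below assumes symmetric $\mathbf{M}$, $\Sigma_r$, $\Gamma^{\pm}$ (e.g., single scalars times the identity, as the simplest parameterization mentioned in the text); with that assumption the quadratic form reproduces the expected reward exactly.

```python
import numpy as np

def reward_matrices(W, M, Sigma_r, Gamma_p, Gamma_m, Upsilon, nu_p, nu_m, lam):
    """Assemble the blocks R_yy, R_aa, R_ay, R_a of Eq. (10.92)."""
    n, K = W.shape
    Ryy = np.block([[-lam * Sigma_r,   W - Upsilon],
                    [np.zeros((K, n)), np.zeros((K, K))]])
    A = M + lam * Sigma_r
    Raa = np.block([[-A,  A],
                    [ A, -A]])
    Ray = np.block([[-M - 2 * lam * Sigma_r - Gamma_p,  W],
                    [ M + 2 * lam * Sigma_r - Gamma_m, -W]])
    Ra = np.concatenate([-nu_p, -nu_m])
    return Ryy, Raa, Ray, Ra

def expected_reward(y, a, Ryy, Raa, Ray, Ra):
    """One-step expected reward in the quadratic form of Eq. (10.91)."""
    return y @ Ryy @ y + a @ Raa @ a + a @ Ray @ y + a @ Ra
```

Here `y` is the extended state $(x_t, z_t)$ and `a` is the non-negative action $(u_t^+, u_t^-)$.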
An equivalent cost-focused formulation is obtained by flipping the sign of the above problem and re-phrasing it as minimization of trading costs $\hat{C}_t(y_t, a_t) = -\hat{R}_t(y_t, a_t)$:

$$ \text{minimize} \quad \mathbb{E}_t\left[ \sum_{t'=t}^{T-1} \gamma^{t'-t} \hat{C}_{t'}(y_{t'}, a_{t'}) \right] \quad (10.94) $$
$$ \text{where} \quad \hat{C}_t(y_t, a_t) = -\hat{R}_t(y_t, a_t), \quad (10.95) $$

subject to the same constraints as in (10.93).

5.8 Stochastic Policy

Note that the multi-period portfolio optimization problem (10.93) assumes that the optimal policy determining actions $a_t$ is a deterministic policy, which can also be described as a delta-like probability distribution

$$ \pi(a_t | y_t) = \delta\left( a_t - a_t^{\star}(y_t) \right), \quad (10.96) $$

where the optimal deterministic action $a_t^{\star}(y_t)$ is obtained by maximization of the objective (10.93) with respect to controls $a_t$. But actual trading data may be sub-optimal, or noisy at times, because of model mis-specification, market timing lags, human errors, etc. The potential presence of such sub-optimal actions in the data poses serious challenges if we assume the deterministic policy (10.96), under which the chosen action is always the optimal action. This is because sub-optimal actions would have zero probability under these model assumptions, and thus would produce vanishing path probabilities if observed in the data. Instead of assuming a deterministic policy (10.96), stochastic policies described by smoothed distributions $\pi(a_t|y_t)$ are more useful for inverse problems such as the problem of inverse portfolio optimization. In this approach, instead of maximization with respect to a deterministic policy/action $a_t$, we re-formulate the problem as maximization over probability distributions $\pi(a_t|y_t)$:

$$ \begin{aligned} \text{maximize} \quad & \mathbb{E}_{q_\pi}\left[ \sum_{t'=t}^{T-1} \gamma^{t'-t} \hat{R}_{t'}(y_{t'}, a_{t'}) \right] \\ \text{where} \quad & \hat{R}(y_t, a_t) = y_t^T R_{yy} y_t + a_t^T R_{aa} a_t + a_t^T R_{ay} y_t + a_t^T R_a \\ \text{w.r.t.} \quad & q_\pi(\bar{x}, \bar{a} \,|\, y_0) = \pi(a_0|y_0) \prod_{t=1}^{T-1} \pi(a_t|y_t) P(y_{t+1}|y_t, a_t) \\ \text{subject to} \quad & \int da_t \, \pi(a_t|y_t) = 1. \end{aligned} \quad (10.97) $$
Here $\mathbb{E}_{q_\pi}[\cdot]$ denotes expectations with respect to path probabilities defined according to the third line in Eqs. (10.97).

Note that due to the inclusion of a quadratic risk penalty in the risk-adjusted return $\hat{R}(y_t, a_t)$, the original problem of risk-adjusted return optimization is re-stated in Eq. (10.97) as maximizing the expected cumulative reward in the standard MDP setting, thus making the problem amenable to a standard risk-neutral approach of MDP models. Such simple risk adjustment based on one-step variance penalties was suggested in a non-financial context by Gosavi (2015) and used in a reinforcement learning based approach to option pricing in Halperin (2018, 2019).

Another comment that is due here is that a probabilistic approach to actions in portfolio trading appears, on many counts, a more natural approach than a formalism based on deterministic policies. Indeed, even in the simplest one-period setting, because the Markowitz-optimal solution for portfolio weights is a function of estimated stock means and covariances, the weights are in fact random variables. Yet the probabilistic nature of portfolio optimization is not recognized as such in Markowitz-type single-period or multi-period optimization settings such as (10.93). A probabilistic portfolio optimization formulation was suggested in a one-period setting by Marschinski et al. (2007).

5.9 Reference Policy

We assume that we are given a probabilistic reference (or prior) policy $\pi_0(a_t|y_t)$, which should be decided upon prior to attempting the portfolio optimization (10.97). Such a policy can be chosen based on a parametric model, past historical data, etc. We will use a simple Gaussian reference policy

$$ \pi_0(a_t|y_t) = \frac{1}{\sqrt{(2\pi)^N |\Sigma_p|}} \exp\left( -\frac{1}{2} \left( a_t - \hat{a}(y_t) \right)^T \Sigma_p^{-1} \left( a_t - \hat{a}(y_t) \right) \right), \quad (10.98) $$

where $\hat{a}(y_t)$ can be a deterministic policy chosen to be a linear function of the state vector $y_t$:

$$ \hat{a}(y_t) = \hat{A}_0 + \hat{A}_1 y_t. $$

A simple choice of parameters in (10.98) could be to specify them in terms of only two scalars $\hat{a}_0, \hat{a}_1$ as follows: $\hat{A}_0 = \hat{a}_0 \mathbf{1}_{|A|}$ and $\hat{A}_1 = \hat{a}_1 \mathbf{1}_{|A| \times |A|}$, where $|A|$ is the size of the vector $a_t$, and $\mathbf{1}_{|A|}$ and $\mathbf{1}_{|A| \times |A|}$ are, respectively, a vector and a matrix made of ones. The scalars $\hat{a}_0$ and $\hat{a}_1$ would then serve as hyperparameters in our setting. Similarly, the covariance matrix $\Sigma_p$ for the prior policy can be taken to be a simple matrix with constant correlations $\rho_p$ and constant variances $\sigma_p$.

As will be shown below, an optimal policy has the same Gaussian form as the prior policy (10.98), with updated parameters $\hat{A}_0$, $\hat{A}_1$, and $\Sigma_p$. These updates will be computed iteratively starting with their initial values defining the prior (10.98). Respectively, updates at iteration $k$ will be denoted by upper superscripts, e.g. $\hat{A}_0^{(k)}$, $\hat{A}_1^{(k)}$.

Furthermore, it turns out that a linear dependence on $y_t$ at iteration $k$, driven by the value of $\hat{A}_1^{(k)}$, arises even if we set $\hat{A}_1 = \hat{A}_1^{(0)} = 0$ in the prior (10.98). Such a choice of a state-independent prior $\pi_0(a_t|y_t) = \pi_0(a_t)$, although not very critical, reduces the number of free parameters in the model by two, as well as simplifies some of the analyses below, and hence will be assumed going forward. It also makes it unnecessary to specify the value of $\bar{y}_t$ in the prior (10.98) (equivalently, we can initialize it at zero). The final set of hyperparameters defining the prior (10.98) therefore includes only the three values $\hat{a}_0$, $\rho_p$, $\sigma_p$.

5.10 Bellman Optimality Equation

Let

$$ V_t^{\star}(y_t) = \max_{\pi(\cdot|y)} \mathbb{E}\left[ \sum_{t'=t}^{T-1} \gamma^{t'-t} \hat{R}_{t'}(y_{t'}, a_{t'}) \,\middle|\, y_t \right]. $$

The optimal state-value function $V_t^{\star}(y_t)$ satisfies the Bellman optimality equation

$$ V_t^{\star}(y_t) = \max_{a_t} \left[ \hat{R}_t(y_t, a_t) + \gamma \, \mathbb{E}_{t, a_t}\left[ V_{t+1}^{\star}(y_{t+1}) \right] \right]. \quad (10.100) $$

The optimal policy $\pi^{\star}$ can be obtained from $V^{\star}$ as follows:

$$ \pi_t^{\star}(a_t|y_t) = \arg\max_{a_t} \left[ \hat{R}_t(y_t, a_t) + \gamma \, \mathbb{E}_{t, a_t}\left[ V_{t+1}^{\star}(y_{t+1}) \right] \right]. \quad (10.102) $$
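To make the reference-policy parameterization concrete, the constant-correlation prior covariance and a state-independent Gaussian prior can be assembled in a few lines of numpy. All numerical values below are hypothetical hyperparameter choices, not values from the text:

```python
import numpy as np

n_actions = 4                                # |A|: size of the action vector
a0_hat, rho_p, sigma_p = 0.0, 0.2, 0.05      # the three prior hyperparameters

# State-independent prior mean: A_0 = a0_hat * 1, with A_1 = 0 assumed
A0 = a0_hat * np.ones(n_actions)

# Constant-correlation covariance:
# Sigma_p[i, j] = sigma_p^2 * (rho_p + (1 - rho_p) * delta_ij)
Sigma_p = sigma_p**2 * (rho_p * np.ones((n_actions, n_actions))
                        + (1 - rho_p) * np.eye(n_actions))

# Sigma_p must be a valid covariance matrix (positive definite for this rho_p)
assert np.all(np.linalg.eigvalsh(Sigma_p) > 0)

# Draw sample actions from the prior pi_0(a_t) = N(A0, Sigma_p)
rng = np.random.default_rng(42)
a_samples = rng.multivariate_normal(A0, Sigma_p, size=1000)
assert a_samples.shape == (1000, n_actions)
```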
The goal of reinforcement learning (RL) is to solve the Bellman optimality equation based on samples of data. Assuming that an optimal value function is found by means of RL, solving for the optimal policy $\pi^{\star}$ takes another optimization problem as formulated in Eq. (10.102).

5.11 Entropy-Regularized Bellman Optimality Equation

We start by reformulating the Bellman optimality equation using a Fenchel-type representation:

$$ V_t^{\star}(y_t) = \max_{\pi(\cdot|y_t) \in \mathcal{P}} \sum_{a_t \in \mathcal{A}_t} \pi(a_t|y_t) \left[ \hat{R}_t(y_t, a_t) + \gamma \, \mathbb{E}_{t, a_t}\left[ V_{t+1}^{\star}(y_{t+1}) \right] \right]. \quad (10.103) $$

Here $\mathcal{P} = \left\{ \pi : \pi \ge 0, \mathbf{1}^T \pi = 1 \right\}$ denotes the set of all valid distributions. Equation (10.103) is equivalent to the original Bellman optimality equation (10.100), because for any $x \in \mathbb{R}^n$, we have $\max_{i \in \{1,\ldots,n\}} x_i = \max_{\pi \ge 0, ||\pi|| \le 1} \pi^T x$. Note that while we use discrete notations for simplicity of presentation, all formulas below can be equivalently expressed in continuous notations by replacing sums with integrals. For brevity, we will denote the expectation $\mathbb{E}_{y_{t+1}|y_t, a_t}[\cdot]$ as $\mathbb{E}_{t,a}[\cdot]$ in what follows.

The one-step information cost of a learned policy $\pi(a_t|y_t)$ relative to a reference policy $\pi_0(a_t|y_t)$ is defined as follows (Fox et al. 2015):

$$ g^\pi(y, a) = \log \frac{\pi(a_t|y_t)}{\pi_0(a_t|y_t)}. \quad (10.104) $$

Its expectation with respect to the policy $\pi$ is the Kullback–Leibler (KL) divergence of $\pi(\cdot|y_t)$ and $\pi_0(\cdot|y_t)$:

$$ \mathbb{E}_\pi\left[ g^\pi(y, a) \,\middle|\, y_t \right] = KL[\pi || \pi_0](y_t) := \sum_{a_t} \pi(a_t|y_t) \log \frac{\pi(a_t|y_t)}{\pi_0(a_t|y_t)}. \quad (10.105) $$

The total discounted information cost for a trajectory is defined as follows:

$$ I^\pi(y) = \sum_{t'=t}^{T} \gamma^{t'-t} \, \mathbb{E}\left[ g^\pi(y_{t'}, a_{t'}) \,\middle|\, y_t = y \right]. \quad (10.106) $$

The free energy function $F_t^\pi(y_t)$ is defined as the value function (10.103) augmented by the information cost penalty (10.106):

$$ F_t^\pi(y_t) = V_t^\pi(y_t) - \frac{1}{\beta} I^\pi(y_t) = \sum_{t'=t}^{T} \gamma^{t'-t} \, \mathbb{E}\left[ \hat{R}_{t'}(y_{t'}, a_{t'}) - \frac{1}{\beta} g^\pi(y_{t'}, a_{t'}) \right]. \quad (10.107) $$

Note that $\beta$ in Eq. (10.107) serves as the "inverse temperature" parameter that controls a tradeoff between reward optimization and proximity to the reference policy, see below.
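The information cost (10.104) and its expectation (10.105) are straightforward to compute for discrete policies. The toy sketch below (all probability values are hypothetical) checks that the expected one-step information cost is indeed a non-negative KL divergence:

```python
import numpy as np

# Toy policies over 4 actions in a fixed state y_t (values are illustrative)
pi  = np.array([0.4, 0.3, 0.2, 0.1])       # learned policy pi(a|y)
pi0 = np.array([0.25, 0.25, 0.25, 0.25])   # uniform reference policy pi_0(a|y)

g  = np.log(pi / pi0)    # one-step information cost g^pi, Eq. (10.104), per action
kl = np.sum(pi * g)      # its expectation under pi = KL[pi||pi_0], Eq. (10.105)

assert kl >= 0.0                                  # KL divergence is non-negative
assert np.isclose(np.sum(pi * np.log(pi / pi0)), kl)
# The cost vanishes when the learned policy equals the reference policy
assert np.isclose(np.sum(pi0 * np.log(pi0 / pi0)), 0.0)
```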
The free energy $F_t^\pi(y_t)$ is the entropy-regularized value function, where the amount of regularization can be tuned to the level of noise in the data.⁷ The reference policy $\pi_0$ provides a "guiding hand" in the stochastic policy optimization process that we now describe.

A Bellman equation for the free energy function $F_t^\pi(y_t)$ is obtained from (10.107):

$$ F_t^\pi(y_t) = \mathbb{E}_{a|y}\left[ \hat{R}_t(y_t, a_t) - \frac{1}{\beta} g^\pi(y_t, a_t) + \gamma \, \mathbb{E}_{t,a}\left[ F_{t+1}^\pi(y_{t+1}) \right] \right]. \quad (10.108) $$

For a finite-horizon setting, Eq. (10.108) should be supplemented by a terminal condition

$$ F_T^\pi(y_T) = \hat{R}_T(y_T, a_T) \Big|_{a_T = -x_{T-1}} \quad (10.109) $$

(see Eq. (10.79)). Eq. (10.108) can be viewed as a soft probabilistic relaxation of the Bellman optimality equation for the value function, with the KL information cost penalty (10.104) as a regularization controlled by the inverse temperature $\beta$. In addition to such a regularized value function (free energy), we will next introduce an entropy-regularized Q-function.

⁷ Note that in physics, free energy is defined with a negative sign relative to Eq. (10.107). This difference is purely a matter of a sign convention, as maximization of Eq. (10.107) can be re-stated as minimization of its negative. Using our sign convention for the free energy function, we follow the reinforcement learning and information theory literature.

5.12 G-Function: An Entropy-Regularized Q-Function

Similar to the action-value function, we define the state-action free energy function $G^\pi(y, a)$ as (Fox et al. 2015)

$$ \begin{aligned} G_t^\pi(y_t, a_t) &= \hat{R}_t(y_t, a_t) + \gamma \, \mathbb{E}\left[ F_{t+1}^\pi(y_{t+1}) \,\middle|\, y_t, a_t \right] \quad (10.110) \\ &= \hat{R}_t(y_t, a_t) + \gamma \, \mathbb{E}_{t,a}\left[ \sum_{t'=t+1}^{T} \gamma^{t'-t-1} \left( \hat{R}_{t'}(y_{t'}, a_{t'}) - \frac{1}{\beta} g^\pi(y_{t'}, a_{t'}) \right) \right] \\ &= \mathbb{E}_{t,a}\left[ \sum_{t'=t}^{T} \gamma^{t'-t} \left( \hat{R}_{t'}(y_{t'}, a_{t'}) - \frac{1}{\beta} g^\pi(y_{t'}, a_{t'}) \right) \right], \end{aligned} $$

where in the last equation we used the fact that the first action $a_t$ in the G-function is fixed, and hence $g^\pi(y_t, a_t) = 0$ when we condition on $a_t = a$. If we now compare this expression with Eq.
(10.107), we obtain the relation between the G-function and the free energy $F_t^\pi(y_t)$:

$$ F_t^\pi(y_t) = \sum_{a_t} \pi(a_t|y_t) \left[ G_t^\pi(y_t, a_t) - \frac{1}{\beta} \log \frac{\pi(a_t|y_t)}{\pi_0(a_t|y_t)} \right]. \quad (10.111) $$

This functional is maximized by the following distribution $\pi(a_t|y_t)$:

$$ \pi(a_t|y_t) = \frac{1}{Z_t} \pi_0(a_t|y_t) \, e^{\beta G_t^\pi(y_t, a_t)}, \quad Z_t = \sum_{a_t} \pi_0(a_t|y_t) \, e^{\beta G_t^\pi(y_t, a_t)}. \quad (10.112) $$

The free energy (10.111) evaluated at the optimal solution (10.112) becomes

$$ F_t^\pi(y_t) = \frac{1}{\beta} \log Z_t = \frac{1}{\beta} \log \sum_{a_t} \pi_0(a_t|y_t) \, e^{\beta G_t^\pi(y_t, a_t)}. \quad (10.113) $$

Using Eq. (10.113), the optimal action policy can be written as follows:

$$ \pi(a_t|y_t) = \pi_0(a_t|y_t) \, e^{\beta \left( G_t^\pi(y_t, a_t) - F_t^\pi(y_t) \right)}. \quad (10.114) $$

Equations (10.113) and (10.114), along with the first form of Eq. (10.110), repeated here for convenience:

$$ G_t^\pi(y_t, a_t) = \hat{R}_t(y_t, a_t) + \gamma \, \mathbb{E}_{t,a}\left[ F_{t+1}^\pi(y_{t+1}) \,\middle|\, y_t, a_t \right], \quad (10.115) $$

constitute a system of equations that should be solved self-consistently by backward recursion for $t = T-1, \ldots, 0$, with terminal conditions

$$ G_T^\pi(y_T, a_T) = \hat{R}_T(y_T, a_T), \quad F_T^\pi(y_T) = G_T^\pi(y_T, a_T) = \hat{R}_T(y_T, a_T). $$

Equations (10.113), (10.114), (10.115) (Fox et al. 2015) constitute a system of equations that should be solved self-consistently for $\pi(a_t|y_t)$, $G_t^\pi(y_t, a_t)$, and $F_t^\pi(y_t)$. Before proceeding with methods of solving it, we want to digress on an alternative interpretation of entropy regularization in Eq. (10.107) that can be useful later in the book.

> Adversarial Interpretation of Entropy Regularization

A useful alternative interpretation of the entropy regularization term in Eq. (10.107) can be suggested using its representation as a Legendre–Fenchel transform of another function (Ortega and Lee 2014):

$$ -\frac{1}{\beta} \sum_{a_t} \pi(a_t|y_t) \log \frac{\pi(a_t|y_t)}{\pi_0(a_t|y_t)} = \min_{C(a_t, y_t)} \sum_{a_t} \left[ -\pi(a_t|y_t) \left( \frac{1}{\beta} + C(a_t, y_t) \right) + \frac{1}{\beta} \pi_0(a_t|y_t) \, e^{\beta C(a_t, y_t)} \right], \quad (10.117) $$

where $C(a_t, y_t)$ is an arbitrary function. Equation (10.117) can be verified by direct minimization of the right-hand side with respect to $C(a_t, y_t)$.
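The structure of Eqs. (10.112)–(10.113) is easy to verify numerically: the optimal policy is a Boltzmann re-weighting of the prior by $e^{\beta G}$, and the free energy is a soft maximum of the G-function that interpolates between the prior average of $G$ (small $\beta$) and $\max_a G$ (large $\beta$). A small sketch with hypothetical G-values:

```python
import numpy as np

def free_energy(G, pi0, beta):
    """F = (1/beta) * log sum_a pi0(a) * exp(beta * G(a)), Eq. (10.113)."""
    # Log-sum-exp with the max subtracted, for numerical stability
    m = np.max(beta * G)
    return (m + np.log(np.sum(pi0 * np.exp(beta * G - m)))) / beta

G   = np.array([1.0, 2.0, 0.5])      # toy G-function values over 3 actions
pi0 = np.array([0.5, 0.25, 0.25])    # reference policy

# Optimal policy (10.112): re-weight the prior by exp(beta * G) and normalize
beta = 5.0
w = pi0 * np.exp(beta * (G - G.max()))
pi = w / w.sum()
assert np.isclose(pi.sum(), 1.0)

# Large beta: F approaches the hard max of G; small beta: F -> E_{pi0}[G]
assert abs(free_energy(G, pi0, 1e3) - G.max()) < 1e-2
assert abs(free_energy(G, pi0, 1e-6) - np.sum(pi0 * G)) < 1e-3
```

This also illustrates the remark made below Eq. (10.120): for $\beta \to \infty$ the log-partition-function term behaves like a $\max(\cdot)$, recovering the standard Bellman optimality equation.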
Using this representation of the KL term, the free energy maximization problem (10.111) can be re-stated as a max–min problem:

$$ F_t^{\star}(y_t) = \max_{\pi} \min_{C} \sum_{a_t} \left[ \pi(a_t|y_t) \left( G_t^\pi(y_t, a_t) - C(a_t, y_t) - \frac{1}{\beta} \right) + \frac{1}{\beta} \pi_0(a_t|y_t) \, e^{\beta C(a_t, y_t)} \right]. \quad (10.118) $$

The imaginary adversary's optimal cost obtained from (10.118) is

$$ C^{\star}(a_t, y_t) = \frac{1}{\beta} \log \frac{\pi(a_t|y_t)}{\pi_0(a_t|y_t)}. \quad (10.119) $$

Similar to Ortega and Lee (2014), one can check that this produces an indifference solution for the imaginary game between the agent and its adversarial environment, where the total sum of the optimal G-function and the optimal adversarial cost (10.119) is constant:

$$ G_t^{\star}(y_t, a_t) + C^{\star}(a_t, y_t) = \text{const}, $$

which means that the game of the original agent and its adversary is in a Nash equilibrium. Therefore, portfolio optimization in a stochastic environment by a single agent is mathematically equivalent to studying a Nash equilibrium in a two-party game of our agent with an adversarial counterparty with an exponential budget given by the last term in Eq. (10.118).

5.13 G-Learning and F-Learning

In the RL setting when rewards are observed, the system of Eqs. (10.113), (10.114), (10.115) can be reduced to one non-linear equation. Substituting the augmented free energy (10.113) into Eq. (10.115), we obtain

$$ G_t^\pi(y, a) = \hat{R}(y_t, a_t) + \frac{\gamma}{\beta} \, \mathbb{E}_{t,a}\left[ \log \sum_{a_{t+1}} \pi_0(a_{t+1}|y_{t+1}) \, e^{\beta G_{t+1}^\pi(y_{t+1}, a_{t+1})} \right]. \quad (10.120) $$

This equation provides a soft relaxation of the Bellman optimality equation for the action-value Q-function, with the G-function defined in Eq. (10.110) being an entropy-regularized Q-function (Fox et al. 2015). The "inverse temperature" parameter $\beta$ in Eq. (10.120) determines the strength of entropy regularization. In particular, if we take $\beta \to \infty$, we recover the original Bellman optimality equation for the Q-function. Because the last term in (10.120) approximates the $\max(\cdot)$ function when $\beta$ is large but finite, Eq.
(10.120) is known, for the special case of a uniform reference policy $\pi_0$, as "soft Q-learning". For finite values $\beta < \infty$, in a setting of reinforcement learning with observed rewards, Eq. (10.120) can be used to specify G-learning (Fox et al. 2015): an off-policy temporal-difference (TD) algorithm that generalizes Q-learning to noisy environments where an entropy-based regularization might be needed.

The G-learning algorithm of Fox et al. (2015) was specified in a tabulated setting where both the state and action spaces are finite. In our case, we deal with high-dimensional continuous state and action spaces. Respectively, we cannot rely on tabulated G-learning, and need to specify a functional form of the action-value function, or use a non-parametric function approximation such as a neural network to represent its values. An additional challenge is to compute a multi-dimensional integral (or a sum) over all next-step actions in Eq. (10.120). Unless a tractable parameterization is used for $\pi_0$ and $G_t$, repeated numerical evaluation of this integral can substantially slow down the learning.

> G-Learning vs Q-Learning
> – Q-learning is an off-policy method with a deterministic policy.
> – G-learning is an off-policy method with a stochastic policy.
> Because G-learning operates with stochastic policies, it gives rise to generative models. G-learning can be considered an entropy-regularized Q-learning.

Another possible approach is to bypass the G-function (i.e., the entropy-regularized Q-function) altogether, and proceed with the Bellman optimality equation for the free energy F-function (10.107). In this case, we have a pair of equations for $F_t^\pi(y_t)$ and $\pi(a_t|y_t)$:

$$ \begin{aligned} F_t^\pi(y_t) &= \mathbb{E}_{a|y}\left[ \hat{R}(y_t, a_t) - \frac{1}{\beta} g^\pi(y_t, a_t) + \gamma \, \mathbb{E}_{t,a}\left[ F_{t+1}^\pi(y_{t+1}) \right] \right] \\ \pi(a_t|y_t) &= \frac{1}{Z_t'} \pi_0(a_t|y_t) \, e^{\beta \left( \hat{R}(y_t, a_t) + \gamma \, \mathbb{E}_{t,a}\left[ F_{t+1}^\pi(y_{t+1}) \right] \right)}. \end{aligned} \quad (10.121) $$
Here the first equation is the Bellman equation (10.108) for the F-function, and the second equation is obtained by substitution of Eq. (10.115) into Eq. (10.112). Also note that the normalization constant $Z_t'$ in Eq. (10.121) is in general different from the normalization constant $Z_t$ in Eq. (10.112). We will return to solutions of G-learning with continuous states and actions in the next chapter, where we will address inverse reinforcement learning.

> G-Learning for Stationary Problems
> For time-stationary (infinite-horizon) problems, the "soft Q-learning" (G-learning) equation (10.120) becomes
>
> $$ G^\pi(y, a) = \hat{R}(y, a) + \frac{\gamma}{\beta} \sum_{y'} \rho(y'|y, a) \log \sum_{a'} \pi_0(a'|y') \, e^{\beta G^\pi(y', a')}. \quad (10.122) $$
>
> This is a non-linear integral equation. For example, if both the state and action space are one-dimensional, the resulting integral equation is two-dimensional. Therefore, computationally, G-learning for time-stationary problems amounts to the solution of a non-linear integral equation (10.122). Existing numerical methods could be used to address this problem; see also Exercise 10.4.

5.14 Portfolio Dynamics with Market Impact

A state equation for the portfolio vector $x_t$ is obtained using Eqs. (10.74) and (10.80):

$$ \begin{aligned} x_{t+1} &= x_t + u_t + r_t \circ (x_t + u_t) \\ &= x_t + u_t + \left( r_f \mathbf{1} + W z_t - M^T u_t + \varepsilon_t \right) \circ (x_t + u_t) \\ &= (1 + r_f)(x_t + u_t) + \text{diag}\left( W z_t - M u_t \right)(x_t + u_t) + \varepsilon(x_t, u_t). \end{aligned} \quad (10.123) $$

Here we assumed that the matrix $M$ of market impacts is diagonal with elements $\mu_i$, and set

$$ M = \text{diag}\left( \mu_i \right), \quad \varepsilon(x_t, u_t) := \varepsilon_t \circ (x_t + u_t). \quad (10.124) $$

Eq. (10.123) shows that the dynamics are non-linear in controls $u_t$ due to the market impact term $\sim M$. More specifically, when friction parameters $\mu_i > 0$, the state equation is linear in $x_t$ but quadratic in controls $u_t$. In the limit $\mu \to 0$, the dynamics become linear. On the other hand, the reward (10.91) is quadratic in either case, $\mu_i = 0$ or $\mu_i > 0$.
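The state equation (10.123) is easy to simulate. The sketch below (toy parameter values, all hypothetical) checks that the $\text{diag}(\cdot)$ form agrees with the Hadamard-product form, and that the map is genuinely non-linear in the trade $u_t$ when $\mu_i > 0$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 3, 2                                   # assets, signals (toy sizes)
rf = 0.01                                     # one-period risk-free rate
mu = np.array([0.05, 0.03, 0.04])             # market impact parameters mu_i
W = rng.normal(scale=0.1, size=(n, k))        # signal loadings (illustrative)
M = np.diag(mu)                               # M = diag(mu_i), Eq. (10.124)

def step(x, u, z, eps):
    """One step of the state equation (10.123), diag(.) form."""
    v = x + u                                 # post-trade position
    return (1 + rf) * v + np.diag(W @ z - M @ u) @ v + eps * v

x = np.array([1.0, 2.0, 1.5])                 # current dollar positions
u = np.array([0.1, -0.2, 0.0])                # trades
z = rng.normal(size=k)                        # signals z_t
eps = rng.normal(scale=0.01, size=n)          # return noise epsilon_t

# The diag(.) form agrees with x_{t+1} = (x + u) + r_t o (x + u)
r = rf + W @ z - M @ u + eps
assert np.allclose(step(x, u, z, eps), (x + u) + r * (x + u))

# With mu_i > 0 the map is quadratic in u: doubling the trade does not
# double its effect relative to the no-trade baseline
f0, f1, f2 = step(x, 0 * u, z, eps), step(x, u, z, eps), step(x, 2 * u, z, eps)
assert not np.allclose(f2 - f0, 2 * (f1 - f0))
```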
The fact that the dynamics are non-linear (quadratic) when $\mu_i > 0$ has far-reaching implications on both the practical (computational) and theoretical sides. Before discussing the non-linear case, we first analyze the simpler case $\mu_i = 0$, i.e. market impact effects are neglected and the dynamics are linear. When dynamics are linear while rewards are quadratic, the problem of optimal portfolio management with a deterministic policy amounts to the well-known linear quadratic regulator (LQR), whose solution is known from control theory. In the next section we present a probabilistic version of the LQR problem that is particularly well suited for dynamic portfolio optimization.

5.15 Zero Friction Limit: LQR with Entropy Regularization

When the market impact is neglected, so that $\mu_i = 0$ for all $i$, the portfolio optimization problem simplifies because the dynamics become linear, with the following state equation:

$$ x_{t+1} = \left( 1 + r_f + W z_t + \varepsilon_t \right) \circ (x_t + u_t). $$

We can equivalently write it as follows:

$$ x_{t+1} = A_t (x_t + u_t) + (x_t + u_t) \circ \varepsilon_t, \quad A_t = A(z_t) = \text{diag}\left( 1 + r_f + W z_t \right). \quad (10.126) $$

Unlike the previous section, where we assumed proportional transaction costs, here we assume convex transaction costs $\eta \, u_t^T C u_t$, where $\eta$ is a transaction cost parameter and $C$ is a matrix which can be taken to be a diagonal unit matrix. We neglect other costs such as holding costs. The expected one-step reward at time $t$ is then given by the following expression:

$$ \hat{R}_t(x_t, u_t) = (x_t + u_t)^T W z_t - \lambda (x_t + u_t)^T \Sigma_r (x_t + u_t) - \eta \, u_t^T C u_t. $$

If we assume that our problem is to maximize the risk-adjusted return of the portfolio for a pre-specified time horizon $T$, then the natural terminal condition for $x_T$ would be to set $x_T = 0$, meaning that all stock positions should be converted to cash at maturity of the portfolio.
This implies that the last action is deterministic rather than stochastic, and is determined by the stock holdings at time $T-1$:

$$ u_{T-1} = x_T - x_{T-1} = -x_{T-1}. $$

The last reward is therefore a quadratic functional of $x_{T-1}$:

$$ \hat{R}_{T-1} = -\eta \, u_{T-1}^T C u_{T-1}. \quad (10.130) $$

As the last action is deterministic, optimization amounts to choosing the remaining $T-1$ portfolio adjustments $u_0, \ldots, u_{T-2}$.

We now show that reinforcement learning with G-learning can be solved semi-analytically in this setting using Gaussian time-varying policies (GTVP). We start by specifying a functional form of the value function as a quadratic form of $x_t$:

$$ F_t^\pi(x_t) = x_t^T F_t^{(xx)} x_t + x_t^T F_t^{(x)} + F_t^{(0)}, \quad (10.131) $$

where $F_t^{(xx)}, F_t^{(x)}, F_t^{(0)}$ are parameters that depend on time both explicitly (for finite-horizon problems) and implicitly, via their dependence on the signals $z_t$. As for the last time step we have $F_{T-1}^\pi(x_{T-1}) = \hat{R}_{T-1}$, using Eqs. (10.130) and (10.131) we obtain the terminal conditions for the coefficients in Eq. (10.131):

$$ F_{T-1}^{(xx)} = -\eta C, \quad F_{T-1}^{(x)} = 0, \quad F_{T-1}^{(0)} = 0. $$

For an arbitrary time step $t = T-2, \ldots, 0$, we use Eq. (10.126) and the independence of the noise terms for $x_t$ and $z_t$ to compute the conditional expectation of the next-period F-function in Eq. (10.115) as follows:

$$ \mathbb{E}_{t,a}\left[ F_{t+1}^\pi(x_{t+1}) \right] = (x_t + u_t)^T \left( A_t^T \bar{F}_{t+1}^{(xx)} A_t + \Sigma_r \circ \bar{F}_{t+1}^{(xx)} \right) (x_t + u_t) + (x_t + u_t)^T A_t^T \bar{F}_{t+1}^{(x)} + \bar{F}_{t+1}^{(0)}, \quad (10.133) $$

where $\bar{F}_{t+1}^{(xx)} := \mathbb{E}_t\left[ F_{t+1}^{(xx)} \right]$, and similarly for $\bar{F}_{t+1}^{(x)}$ and $\bar{F}_{t+1}^{(0)}$. Importantly, this is a quadratic function of $x_t$ and $u_t$.
Combining it with the quadratic reward $\hat{R}(x_t, u_t)$ in the Bellman equation (10.115), we see that the action-value function $G_t^\pi(x_t, u_t)$ should also be a quadratic function of $x_t$ and $u_t$:

$$ G_t^\pi(x_t, u_t) = x_t^T Q_t^{(xx)} x_t + u_t^T Q_t^{(uu)} u_t + u_t^T Q_t^{(ux)} x_t + u_t^T Q_t^{(u)} + x_t^T Q_t^{(x)} + Q_t^{(0)}, \quad (10.134) $$

where

$$ \begin{aligned} Q_t^{(xx)} &= -\lambda \Sigma_r + \gamma \left( A_t^T \bar{F}_{t+1}^{(xx)} A_t + \Sigma_r \circ \bar{F}_{t+1}^{(xx)} \right) \\ Q_t^{(uu)} &= -\eta C + Q_t^{(xx)} \\ Q_t^{(ux)} &= 2 \, Q_t^{(xx)} \\ Q_t^{(x)} &= W z_t + \gamma A_t^T \bar{F}_{t+1}^{(x)} \\ Q_t^{(u)} &= Q_t^{(x)} \\ Q_t^{(0)} &= \gamma \bar{F}_{t+1}^{(0)}. \end{aligned} \quad (10.135) $$

Having computed the G-function $G_t^\pi(x_t, u_t)$ in terms of its coefficients (10.135), the F-function for the current step can be found using Eq. (10.113), which we repeat here in terms of the original variables $x_t, u_t$, replacing summation by integration:

$$ F_t^\pi(x_t) = \frac{1}{\beta} \log \int \pi_0(u_t|x_t) \, e^{\beta G_t^\pi(x_t, u_t)} \, du_t. \quad (10.136) $$

We assume that the reference policy $\pi_0(u_t|x_t)$ is Gaussian:

$$ \pi_0(u_t|x_t) = \frac{1}{\sqrt{(2\pi)^n |\Sigma_p|}} \, e^{-\frac{1}{2}(u_t - \hat{u}_t)^T \Sigma_p^{-1} (u_t - \hat{u}_t)}, \quad (10.137) $$

where the mean value $\hat{u}_t$ is a linear function of the state $x_t$:

$$ \hat{u}_t = \bar{u}_t + \bar{v}_t x_t. $$

Here $\bar{u}_t$ and $\bar{v}_t$ are parameters that can be considered time-independent (so that the time index can be omitted) in the prior distribution (10.137). The reason we keep the time label is that, as we will see shortly, the optimal policy obtained from G-learning with linear dynamics (10.126) is also a Gaussian that can be written in the same form as (10.137), but with updated parameters $\bar{u}_t$ and $\bar{v}_t$ that become time-dependent due to their dependence on the signals $z_t$.

If no constraints are imposed on $u_t$, the integration over $u_t$ in Eq. (10.136) with a Gaussian reference policy $\pi_0$ can easily be performed analytically, as long as $G_t^\pi(x_t, u_t)$ is quadratic in $u_t$.⁸ The $n$-dimensional Gaussian integration formula gives:

$$ \int e^{-\frac{1}{2} x^T A x + x^T B} \, d^n x = \sqrt{\frac{(2\pi)^n}{|A|}} \, e^{\frac{1}{2} B^T A^{-1} B}, $$

where $|A|$ denotes the determinant of matrix $A$.

⁸ As in the present formulation actions are constrained by the self-financing condition, an independent Gaussian integration may produce inaccurate results. For a constrained version of the integral with a constraint on the sum of variables, see Exercise 10.6. In the next section we will present a case where an unconstrained Gaussian integration works better.

Using this relation to calculate the integral in Eq.
(10.136) and introducing auxiliary parameters

$$ \begin{aligned} U_t &= \beta Q_t^{(ux)} + \Sigma_p^{-1} \bar{v}_t \\ W_t &= \beta Q_t^{(u)} + \Sigma_p^{-1} \bar{u}_t \\ \bar{\Sigma}_p^{-1} &= \Sigma_p^{-1} - 2 \beta Q_t^{(uu)}, \end{aligned} $$

we find that the resulting F-function has the same structure as in Eq. (10.131), where the coefficients are now computed in terms of coefficients of the Q-function (see Exercise 10.3):

$$ \begin{aligned} F_t^{(xx)} &= Q_t^{(xx)} + \frac{1}{2\beta} \left( U_t^T \bar{\Sigma}_p U_t - \bar{v}_t^T \Sigma_p^{-1} \bar{v}_t \right) \\ F_t^{(x)} &= Q_t^{(x)} + \frac{1}{\beta} \left( U_t^T \bar{\Sigma}_p W_t - \bar{v}_t^T \Sigma_p^{-1} \bar{u}_t \right) \quad (10.141) \\ F_t^{(0)} &= Q_t^{(0)} + \frac{1}{2\beta} \left( W_t^T \bar{\Sigma}_p W_t - \bar{u}_t^T \Sigma_p^{-1} \bar{u}_t \right) - \frac{1}{2\beta} \log |\Sigma_p| + \frac{1}{2\beta} \log |\bar{\Sigma}_p|. \end{aligned} $$

Finally, the optimal policy for the given step can be found using Eq. (10.114), which we again rewrite here in terms of the original variables $x_t, u_t$:

$$ \pi(u_t|x_t) = \pi_0(u_t|x_t) \, e^{\beta \left( G_t^\pi(x_t, u_t) - F_t^\pi(x_t) \right)}. \quad (10.142) $$

As $G_t^\pi(x_t, u_t)$ is a quadratic function of $u_t$, this produces a Gaussian policy $\pi(u_t|x_t)$:

$$ \pi(u_t|x_t) = \frac{1}{\sqrt{(2\pi)^n |\tilde{\Sigma}_p|}} \, e^{-\frac{1}{2}(u_t - \tilde{u}_t - \tilde{v}_t x_t)^T \tilde{\Sigma}_p^{-1} (u_t - \tilde{u}_t - \tilde{v}_t x_t)}, \quad (10.143) $$

with updated parameters

$$ \begin{aligned} \tilde{\Sigma}_p^{-1} &= \Sigma_p^{-1} - 2 \beta Q_t^{(uu)} \\ \tilde{u}_t &= \tilde{\Sigma}_p \left( \Sigma_p^{-1} \bar{u}_t + \beta Q_t^{(u)} \right) \\ \tilde{v}_t &= \tilde{\Sigma}_p \left( \Sigma_p^{-1} \bar{v}_t + \beta Q_t^{(ux)} \right). \end{aligned} \quad (10.144) $$

Therefore, policy optimization with the entropy-regularized LQR with signals, expressed by Eq. (10.142), amounts to a Bayesian update of the prior distribution (10.137), with parameters $\bar{u}_t, \bar{v}_t, \Sigma_p$ updated to the new values $\tilde{u}_t, \tilde{v}_t, \tilde{\Sigma}_p$ defined in Eqs. (10.144). These quantities depend on time via their dependence on $z_t$. Note that the third of equations (10.144) indicates that even if we started with $\bar{v}_t = 0$ (meaning a state-independent mean level in Eq.
(10.137)), the optimal policy has a mean that is linear in the state $x_t$ as long as $Q_t^{(ux)} \neq 0$. Therefore, the entropy-regularized LQR produces a Gaussian optimal policy whose mean is a linear function of the state $x_t$. This provides a probabilistic generalization of a regular (deterministic) LQR, where the optimal policy itself is a linear function of the state.

Another interesting point to note about the Bayesian update (10.144) is that even if we start with time-independent values $\bar{u}_t, \bar{v}_t = \bar{u}, \bar{v}$, the updated values $\tilde{u}_t, \tilde{v}_t$ will be time-dependent, as the parameters $Q_t^{(ux)}$ and $Q_t^{(u)}$ depend on time via their dependence on the signals $z_t$; see Eqs. (10.135).

For a given time step $t$, the G-learning algorithm keeps iterating between the policy optimization step, which updates the policy parameters according to Eq. (10.144) for fixed coefficients of the F- and G-functions, and the policy evaluation step, which involves Eqs. (10.134), (10.135), (10.141) and solves for the parameters of the F- and G-functions given the policy parameters. At convergence of this iteration for time step $t$, Eqs. (10.134), (10.135), (10.141), and (10.143) together solve one step of G-learning for the entropy-regularized linear dynamics. The calculation then proceeds by moving to the previous step $t \to t-1$ and repeating the calculation. To accelerate convergence, the optimal parameters of the policy at time $t$ can be used as parameters of the prior distribution for time step $t-1$. The whole procedure is then continued all the way back to the present time. Note that as the parameters in Eq. (10.141) depend on the signals $z_t$, their expected next-step values have to be computed as indicated in Eq. (10.133).

? Multiple Choice Question 4

Select all the following correct statements:

a. In G-learning, the conventional action-value function and value function are recovered from the F- and G-functions in the limit β → 0.
b.
In G-learning, the conventional action-value function and value function are recovered from the F- and G-functions in the limit β → ∞.
c. Linearization of portfolio dynamics with G-learning is needed to get a locally linear G-function and F-function.
d. Linearization of portfolio dynamics with G-learning is needed to get a locally quadratic G-function and F-function.

5.16 Non-zero Market Impact: Non-linear Dynamics

As we just demonstrated, in the limit of vanishing market impact parameters, dynamic portfolio optimization using G-learning with quadratic rewards and Gaussian reference policies amounts to a probabilistic version of a linear quadratic regulator (LQR), and provides a convenient and fast semi-analytical calculation method to optimize an investment policy for a multi-asset, multi-period portfolio. This provides a probabilistic, multi-period, RL-based generalization of the classical Markowitz portfolio optimization problem.

If we turn the market impact parameters $\mu_i$ on, the problem changes drastically: it is no longer analytically tractable, due to the non-linearity of the dynamics for $\mu_i > 0$. The state equation in this case is

$$ x_{t+1} = (1 + r_f)(x_t + u_t) + \text{diag}\left( W z_t - M u_t \right)(x_t + u_t) + \varepsilon(x_t, u_t), $$

where $M = \text{diag}(\mu_i)$ and $\varepsilon(x_t, u_t) := \varepsilon_t \circ (x_t + u_t)$ (see Eqs. (10.123), (10.124)).

When dynamics are non-linear, one possibility is to iteratively linearize the dynamics, similar to the working of an extended Kalman filter. For problems with deterministic policies, an iterative quadratic regulator (IQR) can be used to linearize the dynamics around a reference path at each step of a policy iteration (Todorov and Li 2005). Other methods can be applied when working with stochastic policies. In particular, Halperin and Feldshteyn (2018) explore a variational EM algorithm where Gaussian random hidden variables are used to linearize the dynamics, providing a probabilistic version of the IQR.
Such approaches, needed when the dynamics are non-linear, are more technically involved than the linear LQR case, and the reader is referred to the literature for details.

Over and above purely computational issues, non-linearity of the dynamics leads to important theoretical implications. The problem of optimal control is to find the optimal action $u_t^{\star}$ as a function of the state $x_t$. When this is done, the resulting expression can be substituted back to obtain non-linear open-loop dynamics (i.e., the dynamics where action variables are substituted by their optimal values). As will be discussed later in this book, the ensuing non-linearity of the dynamics might have important consequences for capturing the behavior of real markets.

6 RL for Wealth Management

6.1 The Merton Consumption Problem

Our two previous use cases for reinforcement learning in quantitative finance, namely option pricing and dynamic portfolio optimization, operate with self-financing portfolios. Recall that self-financing portfolios are asset portfolios that admit no cash injections or withdrawals at any point in the lifetime of the portfolio, except at its inception and closing. For these classical financial problems, the assumption of a self-financing portfolio is well suited and reasonably matches actual financial practice.

There is yet another wide class of classical problems in finance where a self-financing portfolio is not a good starting point for modeling. Financial planning and wealth management are two good examples of such problems. Indeed, a typical investor in a retirement plan makes periodic contributions to the portfolio while employed, and periodically draws from the account when in retirement. In addition to adding or withdrawing capital, the investor can also re-balance the portfolio by selling and buying different assets (e.g., stocks).
The problem of optimal consumption with an investment portfolio is frequently referred to as the Merton consumption problem, after the celebrated work of Robert Merton, who considered it as a problem of continuous-time optimal control with log-normal dynamics for asset prices (Merton 1971). As optimization in problems involving cash injections instead of cash withdrawals formally corresponds to a sign change of one-step consumption in the Merton formulation, we can collectively refer to all types of wealth management problems involving injections or withdrawals of funds at intermediate time steps as a generalized Merton consumption problem.

We shall begin with a simple example which involves a combination of asset allocation and consumption under a specific choice of utility of wealth and lifetime consumption.

Example 10.9 Discrete-time Merton consumption with CRRA utility

To illustrate the basic setup, let us consider the problem of allocating wealth between a risky asset and a risk-free asset, in addition to wealth consumption. The optimal consumption problem is formulated in a discrete-time, finite-horizon setting rather than the classical continuous-time approach of Merton (1971). Following Cheung and Yang (2007), we will assume that the investment horizon $T \in \mathbb{N}$ is fixed. We further assume that at the beginning of each time period, the investor can decide the allocation of wealth to the risky asset and the level of consumption, which should be non-negative and less than her total wealth at that time. Denote the wealth of the investor at time $t$ by $W_t$, and the random return of the risky asset over the period $[t, t+1]$ by $R_t$. The time-$t$ consumption level is denoted by $c_t \in [0, W_t]$. After consumption, a proportion $\alpha_t \in [0, 1]$ of the remaining amount will be invested in the risky asset, and the rest in the risk-free asset.
We refer to these constraints as the “budget constraints.” The discrete-time wealth evolution equation is given by

$$ W_{t+1} = (W_t - c_t)\left[\alpha_t R_t + (1 - \alpha_t) R_f\right], $$

with an initial positive wealth $W_0$ at time $t = 0$, where $R_f$ is the gross risk-free return. The sequence of maps $(C, \alpha) = \{(c_0, \alpha_0), \ldots, (c_{T-1}, \alpha_{T-1})\}$ that satisfies the budget constraints is called the “investment-consumption strategy.” The expected sum of the rewards for consumption (i.e., a utility of consumption) with a terminal reward is used as a criterion to measure the performance of an investment-consumption strategy. In RL, we are free to choose any reward function which is concave in the actions. Writing down the optimization problem:

$$ \max_{(c_0, \alpha_0), \ldots, (c_{T-1}, \alpha_{T-1})} \mathbb{E}\left[\sum_{t=0}^{T-1} \gamma^t R(W_t, (c_t, \alpha_t), W_{t+1}) + \gamma^T R(W_T) \,\Big|\, W_0 = w\right], \quad (10.146) $$

the optimal value function is given by

$$ V_t(w) = \max_{(c_t, \alpha_t), \ldots, (c_{T-1}, \alpha_{T-1})} \mathbb{E}\left[\sum_{s=t}^{T-1} \gamma^{s-t} R(W_s, (c_s, \alpha_s), W_{s+1}) + \gamma^{T-t} R(W_T) \,\Big|\, W_t = w\right], \quad (10.147) $$

which can be solved from the Bellman equation for the value function updates:

$$ V_t(w) = \max_{(c_t, \alpha_t)} \mathbb{E}\left[R(W_t, (c_t, \alpha_t), W_{t+1}) + \gamma V_{t+1}(W_{t+1}) \,\big|\, W_t = w\right], \quad \forall t \in \{0, \ldots, T-1\}, $$

and some terminal condition for $V_T(w)$. A common choice of utility function, which leads to closed-form solutions, is a constant relative risk aversion (CRRA) utility function of the form $U(x) = \frac{1}{\gamma} x^{\gamma}$, with $\gamma \in (0, 1)$. Then the value function reduces to

$$ V_t(w) = \frac{w^{\gamma}}{\gamma}\left[1 + (\gamma Y_t H_t)^{1/(1-\gamma)}\right]^{1-\gamma}, $$

with optimal consumption, linear in the wealth:

$$ \hat{c}_t(w) = \frac{w}{1 + (\gamma Y_t H_t)^{1/(1-\gamma)}} \leq w, $$

where the expected returns of the fund under optimal allocation at time $t$ are $Y_t = \mathbb{E}\left[(\alpha_t^{*} R_t + (1 - \alpha_t^{*}) R_f)^{\gamma}\right]$, and the recurrent variable is given by the recursion relation $H_t = 1 + (\gamma Y_{t+1} H_{t+1})^{1/(1-\gamma)}$, where we assume $H_T = 0$. Note that there is a unique $\alpha_t^{*} \in [0, 1]$ such that $Y_t$ is maximized.
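The backward recursion for $H_t$ and the resulting linear consumption rule can be sketched numerically. All parameter values below (the two-point distribution of the gross risky return, $R_f$, $\gamma$, and $T$) are illustrative assumptions, not values from the text.

```python
# Sketch of the closed-form CRRA solution: backward recursion for H_t, with
# Y_t maximized over an allocation grid.  All numbers are assumed.
import numpy as np

gamma = 0.5                       # CRRA exponent, U(x) = x**gamma / gamma
Rf = 1.02                         # gross risk-free return
R_up, R_dn, p = 1.15, 0.95, 0.5   # assumed two-point gross risky return
T = 10

def Y(alpha):
    """Expected growth-of-utility factor E[(alpha*R + (1-alpha)*Rf)**gamma]."""
    return p * (alpha * R_up + (1 - alpha) * Rf) ** gamma \
         + (1 - p) * (alpha * R_dn + (1 - alpha) * Rf) ** gamma

alphas = np.linspace(0.0, 1.0, 101)
H = np.zeros(T + 1)               # terminal condition H_T = 0
Y_star = np.zeros(T)
for t in range(T - 1, -1, -1):
    Y_star[t] = Y(alphas).max()   # unique maximizer on [0, 1]
    H[t] = 1.0 + (gamma * Y_star[t] * H[t + 1]) ** (1.0 / (1.0 - gamma))

def c_hat(w, t):
    """Optimal consumption, linear in wealth."""
    return w / (1.0 + (gamma * Y_star[t] * H[t]) ** (1.0 / (1.0 - gamma)))
```

Since $H_t$ grows as the recursion moves backward from the horizon, the optimal consumed fraction of wealth increases as $t$ approaches $T$, consistent with the figure discussed below.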
To show that $Y_t$ is concave in $\alpha_t$, we see that

$$ \frac{\partial^2 Y_t}{\partial \alpha_t^2} = \gamma(\gamma - 1)\left(\alpha_t^{*}\,\mathbb{E}[R_t] + (1 - \alpha_t^{*}) R_f\right)^{\gamma - 2}\left(\mathbb{E}[R_t] - R_f\right)^2 \leq 0, \quad (10.153) $$

since $\gamma(\gamma - 1) < 0$, $(\mathbb{E}[R_t] - R_f)^2 \geq 0$, and $(\alpha_t^{*}\,\mathbb{E}[R_t] + (1 - \alpha_t^{*}) R_f)^{\gamma - 2} > 0$ for non-negative risk-free rates and average stock returns. In the absence of transaction costs, clearly when the expected return of the risky asset is above $R_f$, we allocate the wealth to the risky asset, and when the expected return is below $R_f$, we choose to entirely invest in the risk-free account. We can therefore simplify the optimization problem under the CRRA utility function to solve for the optimal consumption from Eq. (10.150). Figure 10.6 illustrates the optimal consumption under simulated stock prices. The optimal allocation is not shown here but alternates between 0 and 1 depending on whether the mean return of the risky asset is, respectively, above or below the risk-free rate.

The analytic approach of Cheung and Yang (2007) in the above example is limited to the choice of utility function and a single-asset portfolio. One can solve the same problem with flexibility in the choice of utility functions using the LSPI algorithm, but such an approach does not extend to higher-dimensional portfolios. We therefore turn to a G-learning approach which scales to high-dimensional portfolios while providing some flexibility in the choice of utility functions. In the following section, we will consider a class of wealth management problems: optimization of a defined contribution retirement plan, where cash is injected (rather than withdrawn) at each time step. Instead of relying on a utility of consumption, along the lines of the approach just described, we will adopt a more “RL-native” approach by directly specifying one-step rewards.
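The concavity of $Y_t$ in $\alpha_t$ can also be checked numerically: discrete second differences of $Y$ on a grid of allocations should be non-positive. The two-point return distribution and all numbers are illustrative assumptions.

```python
# Numerical concavity check for Y_t(alpha) on [0, 1]: second differences of
# Y on a fine grid should be non-positive.  Parameters are assumed.
import numpy as np

gamma, Rf = 0.5, 1.02
R = np.array([1.15, 0.95])        # assumed two-point gross risky return
p = np.array([0.5, 0.5])

alphas = np.linspace(0.0, 1.0, 201)
Y = np.array([(p * (a * R + (1 - a) * Rf) ** gamma).sum() for a in alphas])
second_diff = Y[:-2] - 2.0 * Y[1:-1] + Y[2:]
print(second_diff.max())          # non-positive up to rounding: Y is concave
```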
Another difference is that, as in the previous section, we define actions as absolute (dollar-valued) changes of asset positions, instead of defining them in fractional terms, as in the Merton approach. As we will see shortly, this enables a simple transformation into an unconstrained optimization problem and provides a semi-analytical solution for a particular choice of the reward function.

Fig. 10.6 Stock prices are simulated using an Euler scheme over a one-year horizon. At each of ten periods shown by ten separate lines, the optimal consumption is estimated using the closed-form formula in Eq. (10.150). The optimal investment is monotonically increasing in time. (a) Simulated stock prices. (b) Optimal consumption against wealth

6.2 Portfolio Optimization for a Defined Contribution Retirement Plan

Here we consider a simplified model for retirement planning. We assume a discrete-time process with $T$ steps, so that $T$ is the (integer-valued) time horizon. The investor/planner keeps the wealth in $N$ assets, with $x_t$ being the vector of dollar values of positions in different assets at time $t$, and $u_t$ being the vector of changes in these positions. We assume that the first asset with $n = 1$ is a risk-free bond, and other assets are risky, with uncertain returns $r_t$ whose expected values are $\bar{r}_t$. The covariance matrix of returns is $\Sigma_r$ of size $(N-1) \times (N-1)$. Note that our notation in this section is different from the previous section, where $x_t$ was used to denote a vector of risky asset holding values. Optimization of a retirement plan involves optimization of both regular contributions to the plan and asset allocations. Let $c_t$ be a cash installment in the plan at time $t$. The pair $(c_t, u_t)$ can thus be considered the action variables in a dynamic optimization problem corresponding to the retirement plan.
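The state, action, and return objects just introduced can be sketched in code, together with one portfolio transition in which positions compound by realized gross returns and the cash installment equals the sum of position changes. All numbers here are illustrative assumptions.

```python
# Sketch of the retirement-plan state and action objects: dollar positions x_t,
# position changes u_t (so c_t = sum of u_t), and returns with a risk-free
# first asset.  All parameter values are assumed.
import numpy as np

rng = np.random.default_rng(7)
N, rf = 4, 0.01
r_bar = np.array([rf, 0.04, 0.05, 0.06])     # expected returns (assumed)
Sig_r = 0.02 * np.eye(N - 1)                 # covariance of risky returns

x = np.full(N, 100.0)                        # current dollar positions
u = np.array([10.0, 0.0, -5.0, 5.0])         # position changes; c_t = u.sum()
eps = rng.multivariate_normal(np.zeros(N - 1), Sig_r)
r = r_bar + np.hstack([0.0, eps])            # bond return carries no noise
x_next = (1.0 + r) * (x + u)                 # positions after one period
print(u.sum(), x_next)
```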
We assume that at each time step $t$, there is a pre-specified target value $\hat{P}_{t+1}$ of the portfolio at time $t+1$. We assume that the target value $\hat{P}_{t+1}$ at step $t$ exceeds the next-step value $V_{t+1} = (1 + r_t)(x_t + u_t)$ of the portfolio, and we want to impose a penalty for under-performance relative to this target. To this end, we can consider the following expected reward for time step $t$:

$$ R_t(x_t, u_t, c_t) = -c_t - \lambda\, \mathbb{E}_t\left[\left(\hat{P}_{t+1} - (1 + r_t)(x_t + u_t)\right)_{+}\right] - u_t^T \Omega u_t. \quad (10.154) $$

Here the first term is due to an installment of amount $c_t$ at the beginning of time period $t$, the second term is the expected negative reward from the end of the period for under-performance relative to the target, and the third term approximates transaction costs by a convex functional with the parameter matrix $\Omega$ and serves as an $L_2$ regularization. The one-step reward (10.154) is inconvenient to work with due to the rectified non-linearity $(\cdot)_{+} := \max(\cdot, 0)$ under the expectation. Another problem is that the decision variables $c_t$ and $u_t$ are not independent but rather satisfy the following constraint:

$$ \sum_{n=1}^{N} u_t^n = c_t, \quad (10.155) $$

which simply means that at every time step, the total change in all positions should equal the cash installment $c_t$ at this time. We therefore modify the one-step reward (10.154) in two ways: we replace the first term using Eq. (10.155) and approximate the rectified non-linearity by a quadratic function. The new one-step reward is

$$ R_t(x_t, u_t) = -\sum_{n=1}^{N} u_t^n - \lambda\, \mathbb{E}_t\left[\left(\hat{P}_{t+1} - (1 + r_t)(x_t + u_t)\right)^2\right] - u_t^T \Omega u_t. \quad (10.156) $$

The new reward function (10.156) is attractive on two counts. First, it explicitly resolves the constraint (10.155) between the cash injection $c_t$ and portfolio allocation decisions, and thus converts the initial constrained optimization problem into an unconstrained one. This differs from the Merton model, where allocation variables are defined as fractions of the total wealth, and thus are constrained by construction.
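A minimal sketch of the modified one-step reward can be written down directly; here a single deterministic return vector stands in for the expectation, and the transaction-cost matrix and all numbers are assumed values for illustration.

```python
# Sketch of the modified one-step reward: -cash installment
# - lambda * squared shortfall vs the target - quadratic transaction cost.
# A deterministic return vector replaces the expectation for illustration.
import numpy as np

rng = np.random.default_rng(0)
N, lam = 4, 0.001
Omega = 1e-4 * np.eye(N)                  # assumed transaction-cost matrix

def reward(x, u, r, P_next):
    """One-step reward; the installment c_t = u.sum() resolves the budget
    constraint sum_n u_n = c_t."""
    V_next = (1.0 + r) @ (x + u)          # next-period portfolio value
    return -u.sum() - lam * (P_next - V_next) ** 2 - u @ Omega @ u

x = np.full(N, 100.0)
u = rng.normal(0.0, 5.0, N)
r = np.array([0.02, 0.05, 0.04, 0.06])    # assumed one-period returns
print(reward(x, u, r, P_next=450.0))
```

With no trades and a target exactly equal to the grown portfolio, the reward is zero, which is a convenient sanity check of the three terms.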
The approach based on dollar-measured actions both reduces the dimensionality of the optimization problem and makes it unconstrained. When the unconstrained optimization problem is solved, the optimal contribution $c_t$ at time $t$ can be obtained from Eq. (10.155). The second attractive feature of the reward (10.156) is that it is quadratic in actions $u_t$ and is therefore highly tractable. On the other hand, the well-known disadvantage of quadratic rewards (penalties) is that they are symmetric, and penalize both scenarios $V_{t+1} > \hat{P}_{t+1}$ and $V_{t+1} < \hat{P}_{t+1}$, while in fact we only want to penalize the second class of scenarios. To mitigate this drawback, we can consider target values $\hat{P}_{t+1}$ that are considerably higher than the time-$t$ expectation of the next-period portfolio value. In what follows we assume this is the case; otherwise, the choice of $\hat{P}_{t+1}$ can be rather arbitrary. For example, one simple choice could be to set the target portfolio as the current portfolio growing with a fixed and sufficiently high return. We note that a quadratic loss specification relative to a target time-dependent wealth level is a popular choice in the recent literature on wealth management. One example is provided by Lin et al. (2019), who develop a dynamic optimization approach with a similar squared loss function for a defined contribution retirement plan. A similar approach that relies on a direct specification of a reward based on a target portfolio level is known as “goal-based wealth management” (Browne 1996; Das et al. 2018).

> Goal-Based Wealth Management

Mean–variance Markowitz optimization remains one of the most commonly used tools in wealth management. Portfolio objectives in this approach are defined in terms of expected returns and covariances of assets in the portfolio, which may not be the most natural formulation for retail investors. Indeed, the latter typically seek specific financial goals for their portfolios.
For example, a contributor to a retirement plan may demand that the value of their portfolio at the age of their retirement be at least equal to, or preferably larger than, some target value $P_T$. Goal-based wealth management offers some interesting perspectives into optimal structuring of wealth management plans such as retirement plans or target date funds. The motivation for operating in terms of wealth goals can be more intuitive (while still tractable) than the classical formulation in terms of expected excess returns and variances. To see this, let $V_T$ be the final wealth in the portfolio, and $P_T$ be a certain target wealth level at the horizon $T$. The goal-based wealth management approach of Browne (1996) and Das et al. (2018) uses the probability $P[V_T - P_T \geq 0]$ of the final wealth $V_T$ being above the target level $P_T$ as an objective for maximization by an active portfolio management. This probability is the same as the price of a binary option on the terminal wealth $V_T$ with strike $P_T$:

$$ P[V_T - P_T \geq 0] = \mathbb{E}_t\left[\mathbb{1}_{V_T > P_T}\right]. $$

Instead of a utility of wealth such as a power or logarithmic utility, this approach uses the price of this binary option as the objective function. This idea can also be modified by using a call option-like expectation $\mathbb{E}_t\left[(V_T - P_T)_{+}\right]$ instead of a binary option. Such an expectation quantifies how much the terminal wealth is expected to exceed the target, rather than simply providing the probability of such an event.

The squared-loss reward specification is very convenient because, as we have already seen on many occasions in this chapter, it allows one to construct optimal policies semi-analytically. Here we will show how to build a semi-analytical scheme for computing optimal stochastic consumption-investment policies for a retirement plan; the method is sufficiently general for either a cumulation or de-cumulation phase.
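The two goal-based objectives just described can be illustrated with a quick Monte Carlo estimate; the log-normal terminal wealth and all parameter values are assumptions made for illustration.

```python
# Monte Carlo illustration of the two goal-based objectives: the binary-option
# probability P[V_T >= P_T] and the call-style expectation E[(V_T - P_T)_+].
# Log-normal terminal wealth and all numbers are assumed.
import numpy as np

rng = np.random.default_rng(42)
V0, mu, sigma, T, P_T = 100.0, 0.05, 0.2, 10.0, 150.0
Z = rng.standard_normal(200_000)
V_T = V0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

prob_goal = (V_T >= P_T).mean()                  # binary-option objective
call_goal = np.maximum(V_T - P_T, 0.0).mean()    # expected excess over target
print(prob_goal, call_goal)
```

The call-style objective always dominates the plain expected excess $\mathbb{E}[V_T - P_T]$, since $(x)_+ \geq x$ pointwise.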
For other specifications of rewards, numerical optimization and function approximations (e.g., neural networks) would be required. The expected reward (10.156) can be written in a more explicit form if we denote asset returns as $r_t = \bar{r}_t + \tilde{\varepsilon}_t$, where the first component $\bar{r}_0(t) = r_f$ is the risk-free rate (as the first asset is risk-free), and $\tilde{\varepsilon}_t = (0, \varepsilon_t)$, where $\varepsilon_t$ is an idiosyncratic noise with covariance $\Sigma_r$ of size $(N-1) \times (N-1)$. Substituting this expression in Eq. (10.156), we obtain

$$ R_t(x_t, u_t) = -\lambda \hat{P}_{t+1}^2 - u_t^T \mathbb{1} + 2\lambda \hat{P}_{t+1}\,(x_t + u_t)^T (1 + \bar{r}_t) - \lambda\, (x_t + u_t)^T \hat{\Sigma}_t\, (x_t + u_t) - u_t^T \Omega u_t, \quad (10.157) $$

where we defined

$$ \hat{\Sigma}_t = \begin{pmatrix} 0 & 0 \\ 0 & \Sigma_r \end{pmatrix} + (1 + \bar{r}_t)(1 + \bar{r}_t)^T. $$

The quadratic one-step reward (10.157) has a similar structure to the rewards we considered in the previous section, see, e.g., Eq. (10.128). In contrast to the setting in Sect. 5.15, instead of a self-financing portfolio, here we deal with a portfolio with periodic cash installments $c_t$. However, because the latter are related to allocation decision variables by the constraint (10.155), the resulting quadratic reward (10.157) has the same quadratic structure as the linear LQR reward (10.128).

6.3 G-Learning for Retirement Plan Optimization

As we have just mentioned, the quadratic one-step reward (10.157) is very similar to the reward Eq. (10.128) which we considered in Sect. 5.15 for a self-financing portfolio. The main difference is the presence of the first term $-\lambda \hat{P}_{t+1}^2$ in (10.157), which is independent of a state or action. This term does not impact the policy optimization task, and can be trivially accounted for, if needed, e.g., to compute the total reward, by a direct summation from all time steps in the plan lifetime. As in Sect. 5.15, we use a similar semi-analytical formulation of G-learning with Gaussian time-varying policies (GTVP).
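The algebra behind the expanded reward can be verified numerically: averaging the squared shortfall in (10.156) over simulated idiosyncratic noise should reproduce the closed form built from $\hat{\Sigma}_t$. All parameter values below are assumptions for illustration.

```python
# Check of the expanded quadratic reward: Monte Carlo over the idiosyncratic
# noise vs the closed form built from Sigma_hat.  All numbers are assumed.
import numpy as np

rng = np.random.default_rng(1)
N = 4                                    # asset 0 is the risk-free bond
r_bar = np.array([0.01, 0.04, 0.05, 0.06])
Sig_r = 0.02 * (np.eye(N - 1) + 0.3 * (np.ones((N - 1, N - 1)) - np.eye(N - 1)))
lam, P_hat = 0.001, 450.0
Omega = 1e-4 * np.eye(N)
x = np.full(N, 100.0)
u = np.array([5.0, -2.0, 1.0, 3.0])
y = x + u

# Sigma_hat = blockdiag(0, Sigma_r) + (1 + r_bar)(1 + r_bar)^T
Sig_hat = np.zeros((N, N))
Sig_hat[1:, 1:] = Sig_r
Sig_hat += np.outer(1.0 + r_bar, 1.0 + r_bar)
R_closed = (-lam * P_hat**2 - u.sum() + 2.0 * lam * P_hat * (y @ (1.0 + r_bar))
            - lam * y @ Sig_hat @ y - u @ Omega @ u)

eps = rng.multivariate_normal(np.zeros(N - 1), Sig_r, size=200_000)
r = r_bar + np.hstack([np.zeros((len(eps), 1)), eps])   # eps_tilde = (0, eps)
R_mc = -u.sum() - lam * np.mean((P_hat - (1.0 + r) @ y) ** 2) - u @ Omega @ u
print(R_closed, R_mc)   # should agree up to Monte Carlo error
```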
We start by specifying a functional form of the value function as a quadratic form of $x_t$:

$$ F_t^{\pi}(x_t) = x_t^T F_t^{(xx)} x_t + x_t^T F_t^{(x)} + F_t^{(0)}, \quad (10.159) $$

where $F_t^{(xx)}, F_t^{(x)}, F_t^{(0)}$ are parameters that can depend on time via their dependence on the target values $\hat{P}_{t+1}$ and the expected returns $\bar{r}_t$ (in the formulation in Sect. 5.15, the latter were encoded in signals $z_t$). The dynamic equation now reads (compare with Eq. (10.126))

$$ x_{t+1} = A_t (x_t + u_t) + (x_t + u_t) \circ \tilde{\varepsilon}_t, \quad A_t := \operatorname{diag}(1 + \bar{r}_t), \quad \tilde{\varepsilon}_t := (0, \varepsilon_t). \quad (10.160) $$

Coefficients of the value function (10.159) are computed backward in time, starting from the last maturity $t = T - 1$. For $t = T - 1$, the quadratic reward (10.157) is optimized analytically by the following action:

$$ u_{T-1} = \tilde{\Sigma}_{T-1}^{-1} \tilde{P}_T - \tilde{\Sigma}_{T-1}^{-1} \hat{\Sigma}_{T-1}\, x_{T-1}, \quad (10.161) $$

where we defined parameters $\tilde{\Sigma}_{T-1}$ and $\tilde{P}_T$ as follows:

$$ \tilde{\Sigma}_{T-1} := \hat{\Sigma}_{T-1} + \frac{\Omega}{\lambda}, \qquad \tilde{P}_T := \hat{P}_T (1 + \bar{r}_{T-1}) - \frac{\mathbb{1}}{2\lambda}. $$

Note that the optimal action is a linear function of the state, as in our previous section. Another interesting point to note is that the last term $\sim \Omega$ that describes convex transaction costs in Eq. (10.157) produces regularization of the matrix inversion in Eq. (10.161). As for the last time step we have $F_{T-1}^{\pi}(x_{T-1}) = \hat{R}_{T-1}$, the coefficients $F_{T-1}^{(xx)}, F_{T-1}^{(x)}, F_{T-1}^{(0)}$ can be computed by plugging Eq. (10.161) back into Eq. (10.157), and comparing the result with Eq. (10.159) with $t = T - 1$. This provides terminal conditions for the parameters in Eq. (10.159):

$$ \begin{aligned} F_{T-1}^{(xx)} &= -\lambda\left(\hat{\Sigma}_{T-1} - \hat{\Sigma}_{T-1} \tilde{\Sigma}_{T-1}^{-1} \hat{\Sigma}_{T-1}\right) = -\lambda\, \Lambda_{T-1}^T \hat{\Sigma}_{T-1}, \\ F_{T-1}^{(x)} &= \mathbb{1} + 2\lambda\, \Lambda_{T-1}^T \tilde{P}_T, \\ F_{T-1}^{(0)} &= -\lambda \hat{P}_T^2 + \lambda\, \tilde{P}_T^T \tilde{\Sigma}_{T-1}^{-1} \tilde{P}_T, \end{aligned} $$

where we defined $\Lambda_{T-1} := I - \tilde{\Sigma}_{T-1}^{-1} \hat{\Sigma}_{T-1}$. For an arbitrary time step $t = T-2, \ldots, 0$, we use Eq.
(10.160) to compute the conditional expectation of the next-period F-function in the Bellman equation (10.115) as follows:

$$ \mathbb{E}_{t,u}\left[F_{t+1}^{\pi}(x_{t+1})\right] = (x_t + u_t)^T \left[A_t^T \bar{F}_{t+1}^{(xx)} A_t + \tilde{\Sigma}_r \circ \bar{F}_{t+1}^{(xx)}\right](x_t + u_t) + (x_t + u_t)^T A_t^T \bar{F}_{t+1}^{(x)} + \bar{F}_{t+1}^{(0)}, \quad \tilde{\Sigma}_r := \begin{pmatrix} 0 & 0 \\ 0 & \Sigma_r \end{pmatrix}, \quad (10.164) $$

where $\bar{F}_{t+1}^{(xx)} := \mathbb{E}_t\left[F_{t+1}^{(xx)}\right]$, and similarly for $\bar{F}_{t+1}^{(x)}$ and $\bar{F}_{t+1}^{(0)}$. This is a quadratic function of $x_t$ and $u_t$, and has the same structure as the quadratic reward $\hat{R}_t(x_t, u_t)$ in Eq. (10.157). Plugging both expressions in the Bellman equation

$$ G_t^{\pi}(y_t, a_t) = \hat{R}_t(y_t, a_t) + \gamma\, \mathbb{E}_{t,a}\left[F_{t+1}^{\pi}(y_{t+1}) \,\big|\, y_t, a_t\right], $$

we see that the action-value function $G_t^{\pi}(x_t, u_t)$ should also be a quadratic function of $x_t$ and $u_t$:

$$ G_t^{\pi}(x_t, u_t) = x_t^T Q_t^{(xx)} x_t + x_t^T Q_t^{(xu)} u_t + u_t^T Q_t^{(uu)} u_t + x_t^T Q_t^{(x)} + u_t^T Q_t^{(u)} + Q_t^{(0)}, \quad (10.165) $$

where

$$ \begin{aligned} Q_t^{(xx)} &= -\lambda \hat{\Sigma}_t + \gamma \left[A_t^T \bar{F}_{t+1}^{(xx)} A_t + \tilde{\Sigma}_r \circ \bar{F}_{t+1}^{(xx)}\right], \\ Q_t^{(xu)} &= 2 Q_t^{(xx)}, \qquad Q_t^{(uu)} = Q_t^{(xx)} - \Omega, \\ Q_t^{(x)} &= 2\lambda \hat{P}_{t+1} (1 + \bar{r}_t) + \gamma A_t^T \bar{F}_{t+1}^{(x)}, \qquad Q_t^{(u)} = Q_t^{(x)} - \mathbb{1}, \\ Q_t^{(0)} &= -\lambda \hat{P}_{t+1}^2 + \gamma \bar{F}_{t+1}^{(0)}. \end{aligned} \quad (10.166) $$

Note that the quadratic action-value function in Eq. (10.165) is similar to Eq. (10.134); the only difference is the specification of the parameters. Beyond different expressions for coefficients of the value function $F_t(x_t)$ and action-value function $G_t(x_t, u_t)$ and a different terminal condition, the rest of the calculations to perform one step of G-learning is the same as in Sect. 5.15. The F-function for the current step can be found using Eq. (10.136), repeated again here:

$$ F_t^{\pi}(x_t) = \frac{1}{\beta} \log \int \pi_0(u_t | x_t)\, e^{\beta G_t^{\pi}(x_t, u_t)}\, du_t. \quad (10.167) $$

A reference policy $\pi_0(u_t | x_t)$ is Gaussian:

$$ \pi_0(u_t | x_t) = \frac{1}{\sqrt{(2\pi)^n |\Sigma_p|}}\, e^{-\frac{1}{2}(u_t - \hat{u}_t)^T \Sigma_p^{-1} (u_t - \hat{u}_t)}, \quad (10.168) $$

where the mean value $\hat{u}_t$ is a linear function of the state $x_t$: $\hat{u}_t = \bar{u}_t + \bar{v}_t x_t$. Again as in Sect. 5.15, integration over $u_t$ in Eq. (10.167) is performed analytically using Eq. (10.139). The difference is that in Sect. 5.15 we considered a self-financing asset portfolio, which constrains actions $u_t$.
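As a quick aside, the terminal-step optimal action derived above can be sanity-checked numerically: for assumed parameters, $u^{*}_{T-1} = \tilde{\Sigma}^{-1}_{T-1}(\tilde{P}_T - \hat{\Sigma}_{T-1} x_{T-1})$ should maximize the strictly concave quadratic reward, so no random perturbation should improve on it. All numbers are illustrative.

```python
# Numerical check that the terminal-step action maximizes the quadratic reward
# at t = T-1.  All parameter values are assumed.
import numpy as np

rng = np.random.default_rng(2)
N, lam = 4, 0.001
r_bar = np.array([0.01, 0.04, 0.05, 0.06])
Sig_hat = np.zeros((N, N))
Sig_hat[1:, 1:] = 0.02 * np.eye(N - 1)          # block diag(0, Sigma_r)
Sig_hat += np.outer(1.0 + r_bar, 1.0 + r_bar)
Omega = 1e-4 * np.eye(N)                        # assumed transaction-cost matrix
P_hat_T, x = 500.0, np.full(N, 100.0)

def R(u):
    """Quadratic terminal reward at t = T-1."""
    y = x + u
    return (-lam * P_hat_T**2 - u.sum() + 2.0 * lam * P_hat_T * (y @ (1.0 + r_bar))
            - lam * y @ Sig_hat @ y - u @ Omega @ u)

Sig_til = Sig_hat + Omega / lam                 # Sigma_tilde_{T-1}
P_til = P_hat_T * (1.0 + r_bar) - np.ones(N) / (2.0 * lam)
u_star = np.linalg.solve(Sig_til, P_til - Sig_hat @ x)

# strict concavity: no random perturbation should improve on u_star
worse = all(R(u_star) >= R(u_star + rng.normal(0.0, 1.0, N)) for _ in range(100))
print(R(u_star), worse)
```

The $\Omega/\lambda$ term inside `Sig_til` is exactly the regularization of the matrix inversion noted in the text.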
Ignoring such a constraint can produce numerical inaccuracies. In contrast, in the present case we do not impose constraints on actions $u_t$; therefore, an unconstrained multivariate Gaussian integration should be superior in this case. Remarkably, this implies that once the decision variables are chosen appropriately, portfolio optimization for wealth management tasks may in a sense be an easier problem than portfolio optimization with self-financing. Performing the Gaussian integration and comparing the resulting expression with Eq. (10.159), we obtain for its coefficients:

$$ \begin{aligned} F_t^{(xx)} &= Q_t^{(xx)} + \frac{1}{2\beta}\left(U_t^T \bar{\Sigma}_p U_t - \bar{v}_t^T \Sigma_p^{-1} \bar{v}_t\right), \\ F_t^{(x)} &= Q_t^{(x)} + \frac{1}{\beta}\left(U_t^T \bar{\Sigma}_p W_t - \bar{v}_t^T \Sigma_p^{-1} \bar{u}_t\right), \\ F_t^{(0)} &= Q_t^{(0)} + \frac{1}{2\beta}\left(W_t^T \bar{\Sigma}_p W_t - \bar{u}_t^T \Sigma_p^{-1} \bar{u}_t\right) + \frac{1}{2\beta} \log \frac{|\bar{\Sigma}_p|}{|\Sigma_p|}, \end{aligned} \quad (10.170) $$

where we use the auxiliary parameters

$$ U_t = \beta Q_t^{(ux)} + \Sigma_p^{-1} \bar{v}_t, \qquad W_t = \beta Q_t^{(u)} + \Sigma_p^{-1} \bar{u}_t, \qquad \bar{\Sigma}_p = \left(\Sigma_p^{-1} - 2\beta Q_t^{(uu)}\right)^{-1}. $$

The optimal policy for the given step can be found using Eq. (10.142), repeated again here:

$$ \pi(u_t | x_t) = \pi_0(u_t | x_t)\, e^{\beta\left(G_t^{\pi}(x_t, u_t) - F_t^{\pi}(x_t)\right)}. $$

Using here the quadratic action-value function (10.165) produces a new Gaussian policy $\pi(u_t | x_t)$:

$$ \pi(u_t | x_t) = \frac{1}{\sqrt{(2\pi)^n |\tilde{\Sigma}_p|}}\, e^{-\frac{1}{2}(u_t - \tilde{u}_t - \tilde{v}_t x_t)^T \tilde{\Sigma}_p^{-1} (u_t - \tilde{u}_t - \tilde{v}_t x_t)}, $$

where

$$ \tilde{\Sigma}_p^{-1} = \Sigma_p^{-1} - 2\beta Q_t^{(uu)}, \qquad \tilde{u}_t = \tilde{\Sigma}_p\left(\Sigma_p^{-1} \bar{u}_t + \beta Q_t^{(u)}\right), \qquad \tilde{v}_t = \tilde{\Sigma}_p\left(\Sigma_p^{-1} \bar{v}_t + \beta Q_t^{(ux)}\right). \quad (10.174) $$

Therefore, policy optimization for G-learning with quadratic rewards and a Gaussian reference policy amounts to the Bayesian update of the prior distribution (10.168) with parameters $\bar{u}_t, \bar{v}_t, \Sigma_p$ to the new values $\tilde{u}_t, \tilde{v}_t, \tilde{\Sigma}_p$ defined in Eqs. (10.174). These quantities depend on time via their dependence on the targets $\hat{P}_t$ and expected asset returns $\bar{r}_t$. As in Sect. 5.15, for a given time step $t$, the G-learning algorithm keeps iterating between the policy optimization step that updates policy parameters according to Eq.
(10.174) for fixed coefficients of the F- and G-functions, and the policy evaluation step that involves Eqs. (10.165), (10.166), (10.170) and solves for the parameters of the F- and G-functions given the policy parameters. Note that convergence of the iterations for $\tilde{u}_t, \tilde{v}_t$ is guaranteed as $\|\tilde{\Sigma}_p \Sigma_p^{-1}\| < 1$. At convergence of the iteration for time step $t$, Eqs. (10.165), (10.166), (10.170), and (10.143) together solve one step of G-learning. The calculation then proceeds by moving to the previous step $t \to t - 1$, and repeating the calculation, all the way back to the present time.

The additional step needed from G-learning for the present problem is to find the optimal cash contribution for each time step by using the budget constraint (10.155). As G-learning produces Gaussian random actions $u_t$, Eq. (10.155) implies that the time-$t$ optimal contribution $c_t$ is Gaussian distributed with mean $\bar{c}_t = \mathbb{1}^T (\bar{u}_t + \bar{v}_t x_t)$. The expected optimal contribution $\bar{c}_t$ thus has a part $\sim \bar{u}_t$ that is independent of the portfolio value, and a part $\sim \bar{v}_t$ that depends on the current portfolio. This is similar, e.g., to a linear specification of the defined contribution with a deterministic policy in Lin et al. (2019). It should be noted that in practice, we may want to impose some constraints on cash installments $c_t$. For example, we could impose band constraints $0 \leq c_t \leq c_{max}$ with some upper bound $c_{max}$. Such constraints can easily be added to the framework. To this end, we need to replace the exactly solvable unconstrained least squares problem with a constrained least squares problem. This can be done without a substantial increase of computational time using efficient off-the-shelf convex optimization software. An illustration of an optimal solution trajectory obtained without enforcing any constraints is shown in Fig. 10.7, which presents simulation results for a portfolio of 100 assets with 30 time steps.
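One policy-update step and the induced Gaussian cash installment can be sketched as follows. The Q coefficients and prior-policy parameters below are illustrative stand-ins, not values from the text; the code uses the updated parameters for the installment mean.

```python
# Sketch of one Gaussian policy update followed by the induced cash installment:
# since u_t stays Gaussian, c_t = 1^T u_t is Gaussian as well.  All inputs are
# assumed stand-ins.
import numpy as np

N, beta = 4, 0.5
Sig_p = 0.1 * np.eye(N)                      # prior policy covariance
u_bar, v_bar = 100.0 * np.ones(N), 0.002 * np.eye(N)
Q_uu = -0.05 * np.eye(N)                     # keeps the updated precision pos. def.
Q_u, Q_ux = 0.2 * np.ones(N), 0.01 * np.eye(N)

Sig_p_inv = np.linalg.inv(Sig_p)
Sig_p_til = np.linalg.inv(Sig_p_inv - 2.0 * beta * Q_uu)   # updated covariance
u_til = Sig_p_til @ (Sig_p_inv @ u_bar + beta * Q_u)       # updated intercept
v_til = Sig_p_til @ (Sig_p_inv @ v_bar + beta * Q_ux)      # updated state loading

# induced Gaussian cash installment c_t = 1^T u_t at a given state x_t
x = np.full(N, 10_000.0)
c_mean = np.ones(N) @ (u_til + v_til @ x)
c_std = np.sqrt(np.ones(N) @ Sig_p_til @ np.ones(N))
print(c_mean, c_std)
```

With a negative-definite $Q^{(uu)}_t$, the updated covariance shrinks relative to the prior, which is the Bayesian-update behavior described in the text.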
For the specific choice of model parameters used in this example, the model optimally chooses to invest approximately equal contributions of around $500 that slightly increase towards the end of the plan, even without enforcing constraints; this behavior is achieved by setting a high target portfolio. However, in a more general setting, adding constraints might be desirable.

Fig. 10.7 An illustration of G-learning with Gaussian time-varying policies (GTVP) for a retirement plan optimization using a portfolio with 100 assets

6.4 Discussion

To summarize, we have shown how the same G-learning method with quadratic rewards that we used in the previous section to construct an optimal policy for dynamic portfolio optimization can also be used for wealth management problems, provided we use absolute (dollar-nominated) asset position changes as action variables and choose a reward function which is quadratic in these actions. As shown in Sect. 5.15, we found that G-learning applied in the current setting with a quadratic reward and a Gaussian reference policy gives rise to an entropy-regulated LQR as a tool for wealth management tasks. This approach results in a Gaussian optimal policy whose mean is a linear function of the state $x_t$. The method we presented enables extensions to other formulations, including constrained versions or other specifications of the reward function. One possibility is to use the definition (10.154) with the constraint (10.155), which provides an example of a non-quadratic concave reward. Such cases should be dealt with using flexible function approximations for the action-value function, such as neural networks.
7 Summary

The main underlying idea of this chapter was to show that reinforcement learning provides a very natural framework for some of the most classical problems of quantitative finance: option pricing and hedging, portfolio optimization, and wealth management problems. As trading in individual assets (or pairs, or buckets of assets) is a particular case of the general portfolio optimization problem, we can say that this list covers most cases for quantitative trading or planning problems in finance. We saw that for option pricing with reinforcement learning, batch-mode Q-learning can be used as a way to produce a distribution-free discrete-time approach to pricing and hedging of options. In the simplest case, when transaction costs and market impact are neglected, as in the classical Black–Scholes model, the reinforcement learning approach is semi-analytical and only requires solving linear matrix equations. In other cases, for example, if the exponential utility is used to enforce a strict no-arbitrage, analytic quadratic optimization should be replaced by numerical convex optimization. We then presented a multivariate and probabilistic extension of Q-learning known as G-learning, as a tool for using reinforcement learning for portfolio optimization with multiple assets. Unlike the previous case of the QLBS model that neglects transaction costs and market impact, the G-learning approach captures these effects. When the reward function is quadratic as in the Markowitz mean–variance approach and market impact is neglected, the G-learning approach again yields a semi-analytical solution to the dynamic portfolio optimization that is given by a probabilistic version of the classical linear quadratic regulator (LQR).
For other cases (e.g., when market impact is incorporated, or when rewards are not quadratic), the G-learning approach should rely on more involved numerical optimization methods and/or use function approximations such as neural networks. In addition to demonstrating how G-learning can be applied for portfolio management, we also showed how it can be used for tasks of wealth management, which differ from the former case by intermediate cash-flows as additional controls. We showed that both these classical financial problems can be treated using the same computational method (G-learning) with different parameter specifications. Therefore, the reinforcement learning approach is capable of providing a unified approach to these classes of problems, which are traditionally treated as different problems because of a different definition of control variables. What we showed is that when using absolute (dollar-nominated) decision variables, both problems can be treated in the same way. Moreover, with this approach wealth management problems turn out to be simpler, not harder, than traditional portfolio optimization problems, as they amount to unconstrained optimization.9 While our presentation in this chapter used general RL methods (Q-learning and G-learning), we mostly focused on cases with quadratic rewards which enable semi-analytical and hence easily understandable computational methods. Different reward specifications are possible, of course, but they require relying on function approximations (e.g., neural networks, giving rise to deep reinforcement learning) and numerical optimization for optimizing parameters of these networks. Furthermore, other methods of reinforcement learning (e.g., LSPI, policy-gradient methods, actor-critic, etc.) can also be used; see, e.g., Sato (2019) for a review.

8 Exercises

Exercise 10.1 Derive Eq. (10.46) that gives the limit of the optimal action in the QLBS model in the continuous-time limit.
Exercise 10.2 Consider the expression (10.121) for the optimal policy obtained with G-learning:

$$ \pi(a_t | y_t) = \frac{1}{Z_t}\, \pi_0(a_t | y_t)\, e^{\hat{R}(y_t, a_t) + \gamma\, \mathbb{E}_{t,a}\left[F_{t+1}^{\pi}(y_{t+1})\right]}, $$

where the one-step reward is quadratic as in Eq. (10.91):

$$ \hat{R}(y_t, a_t) = y_t^T R_{yy} y_t + a_t^T R_{aa} a_t + a_t^T R_{ay} y_t + a_t^T R_a. $$

How does this relation simplify in two cases: (a) when the conditional expectation $\mathbb{E}_{t,a}\left[F_{t+1}^{\pi}(y_{t+1})\right]$ does not depend on the action $a_t$, and (b) when the dynamics are linear in $a_t$ as in Eq. (10.125)?

Exercise 10.3 Derive relations (10.141).

Exercise 10.4 Consider G-learning for a time-stationary case, given by Eq. (10.122):

$$ G^{\pi}(y, a) = \hat{R}(y, a) + \frac{\gamma}{\beta} \sum_{y'} \rho(y' | y, a) \log \sum_{a'} \pi_0(a' | y')\, e^{\beta G^{\pi}(y', a')}. $$

Show that the high-temperature limit $\beta \to 0$ of this equation reproduces the fixed-policy Bellman equation for $G^{\pi}(y, a)$ where the policy coincides with the prior policy, i.e., $\pi = \pi_0$.

Exercise 10.5 Consider the policy update equations for G-learning given by Eqs. (10.174):

$$ \tilde{\Sigma}_p^{-1} = \Sigma_p^{-1} - 2\beta Q_t^{(uu)}, \qquad \tilde{u}_t = \tilde{\Sigma}_p\left(\Sigma_p^{-1} \bar{u}_t + \beta Q_t^{(u)}\right), \qquad \tilde{v}_t = \tilde{\Sigma}_p\left(\Sigma_p^{-1} \bar{v}_t + \beta Q_t^{(ux)}\right). $$

(a) Find the limiting forms of these expressions in the high-temperature limit $\beta \to 0$ and the low-temperature limit $\beta \to \infty$. (b) Assuming that we know the stable point $(\bar{u}_t, \bar{v}_t)$ of these iterative equations, as well as the covariance $\tilde{\Sigma}_p$, invert them to find the parameters of the Q-function in terms of the stable point values $\bar{u}_t, \bar{v}_t$. Note that only parameters $Q_t^{(uu)}$, $Q_t^{(ux)}$, and $Q_t^{(u)}$ can be recovered. Can you explain why parameters $Q_t^{(xx)}$ and $Q_t^{(x)}$ are lost in this procedure? (Note: this problem can be viewed as a prelude to the topic of inverse reinforcement learning covered in the next chapter.)

Exercise 10.6*** The formula for an unconstrained Gaussian integral in $n$ dimensions reads

$$ \int e^{-\frac{1}{2} x^T A x + x^T B}\, d^n x = \sqrt{\frac{(2\pi)^n}{|A|}}\, e^{\frac{1}{2} B^T A^{-1} B}. $$

Show that when a constraint $\sum_{i=1}^{n} x_i \leq \bar{X}$ with a parameter $\bar{X}$ is imposed on the integration variables, a constrained version of this integral reads

$$ \int e^{-\frac{1}{2} x^T A x + x^T B}\, \theta\!\left(\bar{X} - \sum_i x_i\right) d^n x = \sqrt{\frac{(2\pi)^n}{|A|}}\, e^{\frac{1}{2} B^T A^{-1} B} \left[1 - N\!\left(\frac{B^T A^{-1} \mathbb{1} - \bar{X}}{\sqrt{\mathbb{1}^T A^{-1} \mathbb{1}}}\right)\right], $$

where $N(\cdot)$ is the cumulative normal distribution. Hint: use the integral representation of the Heaviside step function

$$ \theta(x) = \lim_{\varepsilon \to 0} \frac{1}{2\pi i} \int_{-\infty}^{\infty} \frac{e^{izx}}{z - i\varepsilon}\, dz. $$

9 Or, if we want to put additional constraints on the resulting cash-flows, to optimization with one constraint, instead of two constraints as in the Merton approach.

Appendix

Answers to Multiple Choice Questions

Question 1 Answer: 2, 3. Question 2 Answer: 2, 4. Question 3 Answer: 1, 2, 3. Question 4 Answer: 2, 4.

Python Notebooks

This chapter is accompanied by two notebooks which implement the QLBS model for option pricing and optimal hedging, and G-learning for wealth management. Further details of the notebooks are included in the README.md file.

References

Black, F., & Scholes, M. (1973). The pricing of options and corporate liabilities. Journal of Political Economy, 81(3), 637–654.
Boyd, S., Busetti, E., Diamond, S., Kahn, R., Koh, K., Nystrup, P., et al. (2017). Multi-period trading via convex optimization. Foundations and Trends in Optimization, 1–74.
Browne, S. (1996). Reaching goals by a deadline: digital options and continuous-time active portfolio management. https://www0.gsb.columbia.edu/mygsb/faculty/research/pubfiles/841/sidbrowne_deadlines.pdf.
Carr, P., Ellis, K., & Gupta, V. (1998). Static hedging of exotic options. Journal of Finance, 53(3), 1165–1190.
Cerný, A., & Kallsen, J. (2007). Hedging by sequential regression revisited. Working paper, City University London and TU München.
Cheung, K. C., & Yang, H. (2007). Optimal investment-consumption strategy in a discrete-time model with regime switching. Discrete and Continuous Dynamical Systems, 8(2), 315–332.
Das, S. R., Ostrov, D., Radhakrishnan, A., & Srivastav, D. (2018). Dynamic portfolio allocation in goals-based wealth management. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3211951.
Duan, J.
C., & Simonato, J. G. (2001). American option pricing under GARCH by a Markov chain approximation. Journal of Economic Dynamics and Control, 25, 1689–1718.
Ernst, D., Geurts, P., & Wehenkel, L. (2005). Tree-based batch mode reinforcement learning. Journal of Machine Learning Research, 6, 503–556.
Föllmer, H., & Schweizer, M. (1989). Hedging by sequential regression: An introduction to the mathematics of option trading. ASTIN Bulletin, 18, 147–160.
Fox, R., Pakman, A., & Tishby, N. (2015). Taming the noise in reinforcement learning via soft updates. In 32nd Conference on Uncertainty in Artificial Intelligence (UAI). https://arxiv.org/pdf/1512.08562.pdf.
Garleanu, N., & Pedersen, L. H. (2013). Dynamic trading with predictable returns and transaction costs. Journal of Finance, 68(6), 2309–2340.
Gosavi, A. (2015). Finite horizon Markov control with one-step variance penalties. In Conference Proceedings of the Allerton Conferences, Allerton, IL.
Grau, A. J. (2007). Applications of least-square regressions to pricing and hedging of financial derivatives. PhD thesis, Technische Universität München.
Halperin, I. (2018). QLBS: Q-learner in the Black-Scholes(-Merton) worlds. Journal of Derivatives 2020, (to be published). Available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3087076.
Halperin, I. (2019). The QLBS Q-learner goes NuQLear: Fitted Q iteration, inverse RL, and option portfolios. Quantitative Finance, 19(9). https://doi.org/10.1080/14697688.2019.1622302, available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3102707.
Halperin, I., & Feldshteyn, I. (2018). Market self-learning of signals, impact and optimal trading: invisible hand inference with free energy, (or, how we learned to stop worrying and love bounded rationality). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3174498.
Lin, C., Zeng, L., & Wu, H. (2019). Multi-period portfolio optimization in a defined contribution pension plan during the decumulation phase.
Journal of Industrial and Management Optimization, 15(1), 401–427. https://doi.org/10.3934/jimo.2018059.
Longstaff, F. A., & Schwartz, E. S. (2001). Valuing American options by simulation: a simple least-squares approach. The Review of Financial Studies, 14(1), 113–147.
Markowitz, H. (1959). Portfolio selection: efficient diversification of investments. John Wiley.
Marschinski, R., Rossi, P., Tavoni, M., & Cocco, F. (2007). Portfolio selection with probabilistic utility. Annals of Operations Research, 151(1), 223–239.
Merton, R. C. (1971). Optimum consumption and portfolio rules in a continuous-time model. Journal of Economic Theory, 3(4), 373–413.
Merton, R. C. (1973). Theory of rational option pricing. Bell Journal of Economics and Management Science, 4(1), 141–183.
Murphy, S. A. (2005). A generalization error for Q-learning. Journal of Machine Learning Research, 6, 1073–1097.
Ortega, P. A., & Lee, D. D. (2014). An adversarial interpretation of information-theoretic bounded rationality. In Proceedings of the Twenty-Eighth AAAI Conference on AI. https://arxiv.org/abs/1404.5668.
Petrelli, A., Balachandran, R., Siu, O., Chatterjee, R., Jun, Z., & Kapoor, V. (2010). Optimal dynamic hedging of equity options: residual-risks transaction-costs. Working paper.
Potters, M., Bouchaud, J., & Sestovic, D. (2001). Hedged Monte Carlo: low variance derivative pricing with objective probabilities. Physica A, 289, 517–525.
Sato, Y. (2019). Model-free reinforcement learning for financial portfolios: a brief survey. https://arxiv.org/pdf/1904.04973.pdf.
Schweizer, M. (1995). Variance-optimal hedging in discrete time. Mathematics of Operations Research, 20, 1–32.
Todorov, E., & Li, W. (2005). A generalized iterative LQG method for locally-optimal feedback control of constrained nonlinear stochastic systems. In Proceedings of the American Control Conference, Portland, OR, USA, pp. 300–306.
van Hasselt, H. (2010). Double Q-learning.
Advances in Neural Information Processing Systems. http://papers.nips.cc/paper/3964-double-q-learning.pdf.
Watkins, C. J. (1989). Learning from delayed rewards. Ph.D. thesis, King's College, Cambridge, England.
Watkins, C. J., & Dayan, P. (1992). Q-learning. Machine Learning, 8(3–4), 279–292.
Wilmott, P. (1998). Derivatives: the theory and practice of financial engineering. Wiley.

Chapter 11
Inverse Reinforcement Learning and Imitation Learning

This chapter provides an overview of the most popular methods of inverse reinforcement learning (IRL) and imitation learning (IL). These methods solve the problem of optimal control in a data-driven way, similarly to reinforcement learning, however with the critical difference that now rewards are not observed. The problem is rather to learn the reward function from the observed behavior of an agent. As behavioral data without rewards is widely available, the problem of learning from such data is certainly very interesting. This chapter provides a moderate-level technical description of the most promising IRL methods, equips the reader with sufficient knowledge to understand and follow the current literature on IRL, and presents examples that use simple simulated environments to evaluate how these methods perform when the "ground-truth" rewards are known. We then present use cases for IRL in quantitative finance which include applications in trading strategy identification, sentiment-based trading, option pricing, inference of portfolio investors, and market modeling.

1 Introduction

One of the challenges faced by researchers when applying reinforcement learning to solve real-world problems is how to choose the reward function. In general, a reward function should encourage a desired behavior of an agent, but often there are multiple approaches to specify it.
For example, assume that we seek to solve an index tracking problem, where the task is to replicate a certain market index (e.g., the S&P 500) whose value at time $t$ is $P_t^{target}$, by a smaller portfolio of stocks whose value at time $t$ is $P_t^{track}$. We can view expressions such as $P_t^{track} - P_t^{target}$, or $\left(P_t^{track} - P_t^{target}\right)^2$, or $\left(P_t^{track} - P_t^{target}\right)_+$ as possible reward functions. Clearly, the optimal choice of the reward here is equivalent to the optimal choice of an expected risk-adjusted return in the corresponding multi-period portfolio optimization problem. Therefore, the choice of a reward function is as non-unique as the choice of a risk function for portfolio optimization.

© Springer Nature Switzerland AG 2020 M. F. Dixon et al., Machine Learning in Finance, https://doi.org/10.1007/978-3-030-41068-1_11

Challenges of defining a good reward function to teach an agent to perform a certain task are well known in other fields that use machine learning. For example, teaching physical robots to perform simple (for humans) tasks such as carrying a cup of coffee between two tables by hand-engineering a reward function in a multidimensional space of robot joints' positions and velocities at each time step may be as difficult as defining the execution policy directly, without relying on any reward function. Therefore the need to pre-specify a reward function considerably limits the applicability of reinforcement learning for many cases of practical interest. In finance, traders do not often think in terms of any specific utility (reward) function, but rather in terms of strategies (or policies, using the language of RL). In response to such practical challenges, researchers in machine learning developed a number of alternatives to the classical setting of dynamic programming and reinforcement learning that do not require a reward (utility) function to be specified.
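To make the non-uniqueness concrete, the candidate rewards for the index tracking example above can be sketched in a few lines of Python. The function name is our own, and the sign convention on the squared term (negated, so that a larger tracking error means a lower reward) is an illustrative modeling choice, not fixed by the text:

```python
def tracking_rewards(p_track, p_target):
    """Three candidate per-step rewards for index tracking.
    The sign conventions and any risk adjustment are modeling choices."""
    diff = p_track - p_target
    return {
        "difference": diff,               # rewards over-performance, penalizes shortfall
        "neg_squared": -diff ** 2,        # penalizes any tracking error symmetrically
        "positive_part": max(diff, 0.0),  # rewards only over-performance
    }

rewards = tracking_rewards(p_track=101.5, p_target=100.0)
print(rewards)  # {'difference': 1.5, 'neg_squared': -2.25, 'positive_part': 1.5}
```

An agent trained under any one of these rewards would track the index, but with different risk preferences, which is exactly the ambiguity the text describes.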
Learning to act without a reward function is known as learning from demonstrations or learning from behavior. As behavioral data is often produced in abundance (think of GPS monitoring data, cell phone data, web browsing data, etc.), the notion of learning from observed behavior of other agents (humans or machines) is certainly appealing, and has a wide range of potential industrial and business applications. But what exactly does it mean to learn from demonstrations? One possible answer to this question is that it means learning the optimal policy from the observed behavior given only observations of actions produced by this policy (or a similar one). This is called imitation learning. This is similar to the batch-mode RL that we considered in the previous chapter, but without knowledge of the rewards. That is, we observe a series of states and actions taken by an agent, and our task is to find the optimal policy solely from this data.

As with batch-mode RL, this is an inference problem of a distribution (policy) from data. In contrast to batch-mode RL, however, the problem of learning from demonstrations is ill-defined. Indeed, for any particular trajectory of states and actions, there exists an infinite number of policies consistent with this trajectory. The same holds for any finite set of observed trajectories. The problem of learning a policy from a finite set of observed trajectories is therefore an ill-posed inverse problem.

> Ill-Posed Inverse Problems

Ill-posed inverse problems usually exhibit an infinite number of solutions or no solution at all. A classical example of such an inverse problem is the problem of restoring a signal after it has passed through a filter and been contaminated by an additive noise. A simple financial example is implying the risk-neutral distribution $p(s_T | s_0)$ of future stock prices from observed prices of European options.
If $F(s_T, K)$ is a discounted payoff of a liquid European option with maturity $T$, the observed market mid-price of the option can be written as

$$
C(s_t, K) = \int ds_T\, p(s_T | s_0)\, F(s_T, K) + \varepsilon_t,
$$

where $\varepsilon_t$ stands for an observation noise. Having market prices for a finite number of quoted options is not sufficient to reconstruct a conditional density $p(s_T | s_0)$ in a model-independent way, as a model-independent specification is equivalent to an infinite number of observations. The inverse problem is ill-posed in the sense that it does not have a unique solution. Various forms of regularization are needed in order to select the "best" solution. For example, one may use Lagrange multipliers to enforce constraints on the entropy of $p(s_T | s_0)$ or its KL divergence with some reference ("prior"¹). What constitutes the "best" solution of the inverse problem is therefore only specified after a regularization function is chosen. A good choice of a regularization function may be problem- or domain-specific and go beyond a KL regularization or, e.g., a simple L2 or L1 regularization. Essentially, as the choice of regularization amounts to the choice of a "regularization model," we may assert that a purely model-independent method for inverse problems does not exist.

We may be tempted to reason that such a problem is scarcely different from the conventional setting of supervised learning. Indeed, we could treat the observed states and actions in a training dataset as, respectively, features and outputs. We could then try to directly construct a policy as either a classifier or a regressor, considering each action as a separate observation. This approach is indeed possible and is known as behavioral cloning. While it is simple (any supervised method can be used), it also has a number of serious drawbacks that render it impractical. The main problem is that it often does not generalize well to new unseen states.
This is simply because, as an action policy is estimated from each individual state using supervised learning, it passes no information on how these states can be related to each other. This can be contrasted with TD methods (such as Q-learning) that use transitions as observations. As a result, generalization of the policy to unseen states obtained using supervised learning with behavioral cloning loses any connection to the actual dynamics of the environment. If such a learned policy is executed over multiple steps, errors can compound, and an induced state distribution can shift away from the actual one used in demonstrations. Therefore, a combination of different methods is usually required when rewards are not available.

¹ It is convenient to regard the reference distribution as a prior; however, it is not strictly a prior in the context of Bayesian estimation. MaxEnt or Minimum Cross-Entropy finds a distribution that is compatible with all available constraints and minimizes a KL distance to this reference distribution. In contrast, Bayesian learning involves using Bayes' rule to incrementally update the posterior.

Such a multi-faceted approach sounds reasonable in principle, but the devil is in the detail. For example, we could use a recurrent neural network to capture the state dynamics, and use a feedforward network to directly parameterize the policy. Parameters of both networks would then be learned from the data using, e.g., stochastic gradient descent methods. The main potential problem with such an approach is that it may not be easily portable to other environments, when the model is used with dynamics that are different from those used for model training. But in an approach similar to the one described above, dynamics and the learned policy are intertwined in complex ways. Therefore, we can expect that a learned policy would become sub-optimal once the dynamics (environment) changes.
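The brittleness of behavioral cloning described above can be illustrated with a deliberately naive tabular sketch (all names and data below are hypothetical): the cloned policy simply memorizes a majority action per observed state, so it has nothing at all to say about states absent from the demonstrations, and it carries no information about transitions between states.

```python
from collections import Counter, defaultdict

def fit_behavioral_clone(demos):
    """Tabular behavioral cloning: for each observed state, store the
    majority action from the demonstrations. No transition information
    is used, which is the source of poor generalization."""
    actions_by_state = defaultdict(Counter)
    for state, action in demos:
        actions_by_state[state][action] += 1
    return {s: c.most_common(1)[0][0] for s, c in actions_by_state.items()}

# (state, action) pairs observed in demonstrations; no rewards, no transitions
demos = [(0, "hold"), (0, "hold"), (1, "hold"), (2, "close"), (0, "deposit")]
policy = fit_behavioral_clone(demos)

print(policy.get(0))  # hold: the majority action in state 0
print(policy.get(3))  # None: state 3 was never demonstrated, no way to generalize
```

In a multi-step rollout, each such `None` forces an arbitrary fallback action, and the resulting state distribution drifts away from the demonstrated one, which is exactly the compounding-error problem described in the text.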
On the other hand, a reward function is portable, as it depends only on states and actions, but is not concerned with how these states are reached. If we find a way to learn the reward function from demonstrated behavior, this function would be portable to other environments, as it expresses properties of the agent but not of the environment. The idea of learning the reward function as a condensed and portable representation of an agent's preferences was suggested by Russell (1998). Methods centered around this idea have collectively become known as inverse reinforcement learning (IRL). Clearly, if the reward function is found from IRL, then the optimal policy in any new environment can be found using conventional (direct) RL.

In this chapter, we provide an overview of the most popular methods of IRL, as well as methods of imitation learning that do not rely on a learned reward function. We hasten to add that so far, IRL has been adopted for financial applications in only a handful of publications, despite several successful applications in robotics and video games. Nevertheless, we believe that methods of IRL are potentially very useful for quantitative finance. Both because the whole field of IRL is still nascent and keeps evolving, and because there are only a few published papers on using IRL for financial applications, this chapter is mostly focused on theoretical concepts of IRL. Our task in this chapter is three-fold: (i) provide a reasonably high-level description of the most promising IRL methods; (ii) equip the reader with enough knowledge to understand and follow the current literature on IRL; and (iii) present use cases for IRL in quantitative finance including applications to trading strategy identification, sentiment-based trading, option pricing, inference of portfolio investors, and market modeling.
Chapter Objectives

This chapter will review some of the most pertinent aspects of inverse reinforcement learning and their application to finance:

– Introduce methods of inverse reinforcement learning (IRL) and imitation learning (IL);
– Provide a review of recent adversarial approaches to IRL and IL;
– Introduce IRL methods which can surpass a demonstrator; and
– Review existing and potential applications of IRL and IL in quantitative finance.

The chapter is accompanied by a notebook comparing various IRL methods for the financial cliff walking problem. See Appendix "Python Notebooks" for further details.

2 Inverse Reinforcement Learning

The key idea of Russell (1998) is that the reward function should provide the most succinct representation of agents' preferences while being transferable between both environments and agents. Before we turn to finance applications, imagine for a moment examples in everyday life: an adaptive smart home that learns from the habits of its occupants in scheduling different tasks such as pre-heating food, placing orders to buy other food, etc. In autonomous cars, the control system could learn from drivers' preferences to set up an autonomous driving style that would be comfortable to a driver when taken for a ride as a passenger. In marketing applications, knowing preferences of customers or potential buyers, quantified as their utility functions, can inform marketing strategies tuned to their perceived preferences. In financial applications, knowing the utility of a counterparty may be useful in bilateral trading, e.g. over-the-counter (OTC) trades in derivatives or credit default swaps. Other financial applications of IRL, such as option pricing, will be discussed later in this chapter once we present the most popular IRL methods.

Just as reinforcement learning is rooted in dynamic programming, IRL also has its analog (or predecessor) in inverse optimal control (ICO).
As with IRL, the objective of ICO is to learn the cost function. However, in the ICO setting, the dynamics and optimal policy are assumed to be known. Faithful to the data-driven approach of (direct) reinforcement learning, IRL does not assume that state dynamics or policy functions are known, and instead constructs an empirical distribution.² Inverse reinforcement learning (IRL) therefore provides a useful extension (or inversion, hence justifying its name) of the (direct) RL paradigm. In the context of batch-mode learning used in the previous chapter, the setting of IRL is nearly identical to that of RL (see Eq. (10.58)), except that there is no information about the rewards:

$$
\mathcal{F}_t^{(n)} = \left\{ \left( X_t^{(n)}, a_t^{(n)}, X_{t+1}^{(n)} \right) \right\}_{t=0}^{T-1}, \quad n = 1, \ldots, N. \tag{11.1}
$$

² Methods of ICO that assume that dynamics are known are sometimes referred to as model-based IRL in the machine learning literature.

The objective of IRL is typically two-fold: (i) find the rewards $R_t^{(n)}$ most consistent with observed states and actions, and (ii) find the optimal policy and action-value function (as in RL). One can distinguish between on-policy IRL and off-policy IRL. In the former case, we know that observed actions were optimal actions. In the latter case, observed actions may not necessarily follow an optimal policy and can be sub-optimal or noisy.

In general, IRL is a harder problem than RL. Indeed, not only must the optimal policy be found from data, which is the same task as in RL, but under the additional complexity that the rewards are unobserved. It appears that information about rewards is frequently missing in many potential real-world applications of RL/IRL. In particular, this is typically the case when RL methods are applied to study human behavior, see, e.g., Liu et al. (2013). IRL is also widely used in robotics as a useful alternative to direct RL methods via training robots by demonstrations (Kober et al. 2013).
It appears that IRL offers a very attractive, at least conceptually, approach for many financial applications that use rational agents involved in a sequential decision process, where no information about rewards received by an agent is available to a researcher. Some examples of such (semi-)rational agents would be retail or institutional investors, loan or mortgage borrowers, deposit or saving account holders, credit card holders, consumers of utilities such as cloud computing, mobile data, electricity, etc. In the context of trading applications, such an IRL setting may arise when a trader seeks to learn a strategy of a counterparty. She observes the counterparty's actions in their bilateral trades, but not the counterparty's rewards. Clearly, if she reverse-engineered the most likely counterparty's rewards from the observed actions to find the counterparty's objective (strategy), she could use it to design her own strategy. This typifies an IRL problem.

Example 11.1 IRL for Financial Cliff Walking

Consider our financial cliff walking (FCW) example from Chap. 9, where we presented it as a toy problem for RL control, an over-simplified model of household finance. Now let us consider an IRL formulation of this problem where we would be given a set of trajectories sampled from some policy, and attempt to find both the rewards and the policy. Clearly, as you will recall, the optimal policy for the FCW example is to deposit the minimum amount in the account at time t = 0, then take no further action until the very last step, at which point the account should be closed, with the reward of 10. However, sampling from this policy, possibly randomized by adding a random component, may miss the important penalty for breaching the bankruptcy level, and rather treat any examples of such events in the training data as occasional "sub-optimal" trajectories. As we will show in Sect.
9, the conventional IRL indeed misses the higher importance of not breaching the bankruptcy level than of achieving the final-step rewards.

2.1 RL Versus IRL

A very convenient concept for differentiating between the IRL and the direct RL problem is the occupancy measure $\rho_\pi(s, a) : S \times A \to \mathbb{R}$ (see Puterman (1994) and Exercise 9.6 in Chap. 9):

$$
\rho_\pi(s, a | s_0) = \pi(a|s) \sum_{t=0}^{\infty} \gamma^t \Pr\left(s_t = s \,|\, \pi, s_0\right), \tag{11.2}
$$

where $\Pr(s_t = s | \pi)$ is the probability density of the state $s = s_t$ at time $t$ following policy $\pi$. The occupancy measure is also a function of the current state $s_0$ at time $t = 0$. The value function $V = V(s_0)$ of the current state can now be defined as an expectation of the reward function:

$$
V(s_0) = \int \rho_\pi(s, a | s_0)\, r(s, a)\, ds\, da. \tag{11.3}
$$

Recall from Exercise 9.1 that due to an invariance of the optimal policy under a common rescaling of all rewards $r(s,a) \to \alpha r(s,a)$ for some fixed $\alpha > 0$, the occupancy measure $\rho_\pi(s, a|s_0)$ can be interpreted as a normalized probability density of state-action pairs, even though the correct normalization is not explicitly enforced in the definition (11.2).

Here we arrive at the central difference between RL and IRL. In RL, we are given numerical values of an unknown reward function $r(s_t, a_t)$, where sampled trajectories $\tau = \{s_t, a_t\}_{t=0}^{T}$ of length $T$ are obtained by using an unknown "expert" policy $\pi_E$ (for batch-mode RL), or alternatively for on-line RL by sampling from the environment when executing a model policy $\pi_\theta(a|s)$. The problem of (direct) RL is to find an optimal policy $\pi_\star$, and hence an optimal measure $\rho_\star$ that maximizes the expectation (11.3) given the sampled data in the form of tuplets $(s_t, a_t, r_t, s_{t+1})$. Because we observe numerical rewards, a value function can be directly estimated from data using Monte Carlo methods.
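As a sanity check of the definitions of the occupancy measure (11.2) and the value function (11.3), both can be estimated by Monte Carlo on a toy two-state, two-action MDP. The transition and reward tables below, and the uniform behavior policy, are illustrative assumptions, not from the text:

```python
import random

random.seed(0)
GAMMA = 0.9

# Toy MDP: P[(s, a)] gives next-state probabilities, R[(s, a)] the reward
P = {(0, 0): [0.9, 0.1], (0, 1): [0.2, 0.8],
     (1, 0): [0.5, 0.5], (1, 1): [0.1, 0.9]}
R = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 0.5, (1, 1): 2.0}

def policy(s):
    return random.choice([0, 1])  # uniform random behavior policy

def occupancy_estimate(s0, n_episodes=2000, horizon=50):
    """Monte Carlo estimate of rho_pi(s, a | s0) = pi(a|s) sum_t gamma^t Pr(s_t = s),
    truncated at a finite horizon."""
    rho = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    for _ in range(n_episodes):
        s = s0
        for t in range(horizon):
            a = policy(s)
            rho[(s, a)] += GAMMA ** t / n_episodes
            s = random.choices([0, 1], weights=P[(s, a)])[0]
    return rho

rho = occupancy_estimate(s0=0)
# V(s0) as the expectation of r(s, a) under the occupancy measure
v_from_rho = sum(rho[sa] * R[sa] for sa in rho)
print(v_from_rho)
```

Note that the total mass of `rho` is the truncated geometric sum $\sum_{t=0}^{49} \gamma^t = (1-\gamma^{50})/(1-\gamma)$ rather than 1, which illustrates the remark above that the normalization is not enforced by the definition.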
Note that the optimal measure $\rho_\star$ cannot be an arbitrary probability density function, but rather should satisfy time consistency constraints imposed by model dynamics, known as Bellman flow constraints.

Now compare this setting with IRL. In IRL, data consists of tuplets $(s_t, a_t, s_{t+1})$ without rewards $r_t$. In other words, all we observe are trajectories—sequences of states and actions $\tau = \{s_t, a_t\}_{t=0}^{T}$. In terms of maximization of the value function as the expected value (11.3), the IRL setting amounts to providing a set of pairs $\{s_t, a_t\}_{t=0}^{T}$ that is assumed to be informative for a Monte Carlo based estimate of the expectation (11.3) of a (yet unknown) reward function $r(s_t, a_t)$. Clearly, to build any model of the reward function given the observed trajectories, we should assume that the trajectories demonstrated are sufficiently representative of true dynamics, and that the expert policy used in the recorded data is optimal or at least "sufficiently close" to an optimal policy. If either of these assumptions does not hold, it is highly implausible that a "true" reward function can be recovered from such data. On the other hand, if demonstrated trajectories are obtained from an expert, actions taken should be optimal from the viewpoint of Monte Carlo sampling of the expectation (11.3).

A simple model that relates observed actions with rewards and encourages taking optimal actions could be a stochastic policy $\pi(a|s) \sim \exp\left(\beta \left(r(s,a) + F(s,a)\right)\right)$, where $\beta > 0$ is a parameter (inverse temperature), $r(s,a)$ is the expected reward for a single step, and $F(s,a)$ is a function that incorporates information of future rewards into decision-making at time $t$.
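A Boltzmann-type policy of this form is easy to sketch numerically; the values of $r(s,a) + F(s,a)$ for three actions and the inverse temperatures $\beta$ below are arbitrary illustrative numbers:

```python
import math

def boltzmann_policy(g_values, beta=1.0):
    """pi(a|s) proportional to exp(beta * (r(s,a) + F(s,a))).
    g_values holds r(s,a) + F(s,a) for each action in a fixed state."""
    weights = [math.exp(beta * g) for g in g_values]
    z = sum(weights)  # normalization (partition function for this state)
    return [w / z for w in weights]

g = [1.0, 2.0, 0.5]  # illustrative values for three actions in some state
for beta in (0.1, 1.0, 10.0):
    print(beta, boltzmann_policy(g, beta))
# low beta: nearly uniform (very noisy agent);
# high beta: probability concentrates on the argmax action (near-deterministic)
```

The inverse temperature thus interpolates between a fully random agent and a deterministic optimizer, which is why sub-optimal actions remain possible but exponentially suppressed.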
As we saw in the previous chapter, Maximum Entropy RL produces exactly this type of policy, where

$$
r(s, a) + F(s, a) = G_t^\pi(s_t, a_t) = \mathbb{E}_\pi\left[ r(s_t, a_t, s_{t+1}) \right] + \gamma \sum_{s_{t+1}} p(s_{t+1} | s_t, a_t)\, F_{t+1}^\pi(s_{t+1})
$$

is the G-function (the "soft" Q-value); see also Exercise 9.13 in Chap. 9. Due to the exponential dependence of this policy on instantaneous rewards $r(s_t, a_t, s_{t+1})$, or equivalently on expected rewards $r(s_t, a_t) = \mathbb{E}_\pi[r(s_t, a_t, s_{t+1})]$, the optimal policy optimizes the expected total reward.

The idea of Maximum Entropy IRL (MaxEnt IRL) is to preserve the functional form of Boltzmann-like policies $\pi_\theta(a|s)$ produced by MaxEnt RL, where $\theta$ is a vector of model parameters. As they are explicit functions of states and actions, we can now use them differently, as probabilities of data in terms of observed values of states $s_t$ and actions $a_t$. Parameters of the MaxEnt Boltzmann policy $\pi_\theta(a|s)$ can therefore be inferred using the standard maximum likelihood method. We shall return to MaxEnt IRL later in this chapter.

2.2 What Are the Criteria for Success in IRL?

In the absence of a "ground truth" expected reward function $r(s,a)$, what are the performance criteria for any IRL method that learns a parameterized reward function $r_\theta(s,a) \in \mathcal{R}$, where $\mathcal{R}$ is the space of all admissible reward functions? Recall that the task of IRL is to learn both the reward function and the optimal policy $\pi$, and hence the state-action occupancy measure $\rho_\pi$, from the data. Therefore, once both functions are learned, we can use them to compute the value function obtained with these inferred reward and policy functions. The quality of the IRL method would thus be determined by the value function obtained using these reward and policy functions. We conclude that performance criteria for IRL involve solving a direct RL problem with a learned reward function.
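The evaluation loop just described (solve a direct RL problem with the learned reward, then score the resulting policy under the true reward) can be sketched on a toy deterministic MDP. All tables, the noise level, and the "learned" reward below are illustrative assumptions made for this sketch:

```python
import random

random.seed(4)
GAMMA = 0.9
N_S, N_A = 4, 2

# random deterministic transitions T[s][a] -> s' and a "true" reward table
T = [[random.randrange(N_S) for _ in range(N_A)] for _ in range(N_S)]
r_true = [[random.uniform(-1, 1) for _ in range(N_A)] for _ in range(N_S)]
# stand-in for an IRL output: the true reward plus small estimation noise
r_learned = [[r_true[s][a] + random.gauss(0, 0.05) for a in range(N_A)]
             for s in range(N_S)]

def solve(reward):
    """Value iteration; returns optimal state values under the given reward."""
    v = [0.0] * N_S
    for _ in range(500):
        v = [max(reward[s][a] + GAMMA * v[T[s][a]] for a in range(N_A))
             for s in range(N_S)]
    return v

def evaluate(policy_reward, eval_reward):
    """Value of the greedy policy for policy_reward, evaluated under eval_reward."""
    v = solve(policy_reward)
    pi = [max(range(N_A), key=lambda a: policy_reward[s][a] + GAMMA * v[T[s][a]])
          for s in range(N_S)]
    u = [0.0] * N_S
    for _ in range(500):  # policy evaluation
        u = [eval_reward[s][pi[s]] + GAMMA * u[T[s][pi[s]]] for s in range(N_S)]
    return u

v_expert = evaluate(r_true, r_true)     # optimal values under the true reward
v_irl = evaluate(r_learned, r_true)     # score the IRL reward by direct RL
print(v_expert)
print(v_irl)
```

By construction, the policy induced by the learned reward can never outperform the optimal policy under the true reward, so the gap between the two value vectors is a natural performance criterion. The sketch also makes the text's computational point visible: each candidate reward requires a full direct RL solve.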
Moreover, we can make maximization of the expected reward an objective of IRL, such that each iteration over parameters $\theta$ specifying a parameterized expected reward function $r_\theta(s,a) \in \mathcal{R}$ will involve solving a direct RL problem with the current reward function. But such an IRL method could easily become very time-consuming and infeasible in practice for problems with a large state-action space, where a direct RL problem would become computationally intensive.

Some methods of imitation learning or IRL avoid the need to solve a direct RL problem in an inner loop. For example, MaxEnt IRL methods that we mentioned above reduce IRL to a problem of inference in a graphical (more precisely, exponential) model. This changes the computational framework, but produces another computational burden, as it requires estimation of a normalization constant $Z$ of a MaxEnt policy (also known as a partition function). Early versions of MaxEnt IRL for discrete state-action spaces computed such normalization constants using dynamic programming (Ziebart et al. 2008); see later in this chapter for more details. While more recent MaxEnt IRL approaches rely on different methods of estimation of the partition function $Z$ (e.g., using importance sampling), this suggests that improving the computational efficiency of IRL methods by excluding RL from the internal loop is a hard problem. Such an approach is addressed by GAIL and related methods that will be presented later in this chapter.

2.3 Can a Truly Portable Reward Function Be Learned with IRL?

Recall that the basic premise of inverse reinforcement learning is that a reward function is the most compact form of expressing the preferences of an agent. As an expected reward function $r(s_t, a_t)$ depends only on the current state and action and does not depend on the dynamics, once it is specified, it can be used with direct RL to find an optimal policy for any environment.
In the setting of IRL, we do not observe rewards but rather learn them from observed behavior. As expected rewards are only functions of states and actions, and are independent of an environment, it appears appealing to try to estimate them by fitting a parameterized action policy $\pi_\theta$ to observations, and conjecture that the inferred reward would produce optimal policies in environments different from the one used for learning. The main question, of course, is how realistic such a conjecture is.

Note that this question is different from (and in a sense harder than) the aforementioned problem of ill-posedness of IRL. As an inverse problem, IRL is sensitive to the choice of a regularization method. A particular regularization may address the problem of incomplete data for learning a function, but it does not ensure that a reward learned with one environment would produce an optimal policy when an agent with a learned reward is deployed in a new environment that differs from the environment used for learning.

The concept of reward shaping suggested by Ng and Russell in 1999 (see Exercise 9.5 in Chap. 9) suggests that the problem of learning a portable (robust) reward function from demonstrations can be challenging. The main result of this analysis is that the optimal policy remains unchanged under the following transformation of the instantaneous reward function $r(s, a, s')$:

$$
\tilde{r}(s, a, s') = r(s, a, s') + \gamma \Phi(s') - \Phi(s) \tag{11.4}
$$

for an arbitrary function $\Phi : S \to \mathbb{R}$.
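The shaping invariance can be verified numerically on a small random MDP: adding $\gamma \Phi(s') - \Phi(s)$ to the reward shifts optimal Q-values by a quantity that depends only on the state, so the greedy (optimal) policy is unchanged. The MDP and the potential $\Phi$ below are arbitrary illustrative choices:

```python
import random

random.seed(1)
GAMMA = 0.9
N_S, N_A = 4, 3

# random deterministic transitions T[s][a] -> s', rewards r[s][a], potential phi[s]
T = [[random.randrange(N_S) for _ in range(N_A)] for _ in range(N_S)]
r = [[random.uniform(-1, 1) for _ in range(N_A)] for _ in range(N_S)]
phi = [random.uniform(-5, 5) for _ in range(N_S)]

def greedy_policy(reward):
    """Value iteration followed by greedy policy extraction."""
    v = [0.0] * N_S
    for _ in range(500):
        v = [max(reward[s][a] + GAMMA * v[T[s][a]] for a in range(N_A))
             for s in range(N_S)]
    return [max(range(N_A), key=lambda a: reward[s][a] + GAMMA * v[T[s][a]])
            for s in range(N_S)]

# shaped reward: r~(s, a) = r(s, a) + gamma * phi(s') - phi(s), with s' = T(s, a)
shaped = [[r[s][a] + GAMMA * phi[T[s][a]] - phi[s] for a in range(N_A)]
          for s in range(N_S)]

print(greedy_policy(r) == greedy_policy(shaped))  # True: the optimal policy is unchanged
```

This also previews the transfer problem discussed next: the shaped term was computed with the transitions `T`, so under different dynamics the same additive term would no longer be a harmless shaping correction.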
If a “true” reward function is of the form (11.4), it can be expressed as r(s, a, s ) + γ T (T (s, a)) − (s). If the new dynamics is such that T (s, a) = T (s, a), such reward will not be in the equivalence class of shape invariance for dynamics T (Fu et al. 2019). On the other hand, the same argument suggests an approach for constructing a reward function that could be transferred to other environments. To this end, we could simply constrain the inferred expected reward not to contain any additive contributions that would depend only on the state st . In other words, we can learn a reward function only up to an arbitrary additive function of the state. Due to the reward shape invariance of an optimal policy, this function is of no interest for finding the optimal policy, and thus can be set to zero for all practical purposes. ? Multiple Choice Question 1 Select all the following correct statements: a. The task of inverse reinforcement learning is to learn the dynamics from the observed behavior. b. The task of inverse reinforcement learning is to find the worst, rather than the best policy for the agent. c. The task of inverse reinforcement learning is to find both the reward function and the policy from observations of the states and actions of the agent. 3 Maximum Entropy Inverse Reinforcement Learning In learning from demonstrations, it is important to understand which assumptions are made regarding actions performed by an agent. As both the policy and rewards are unknown in the setting of IRL, we have to make additional assumptions when solving this problem. A natural assumption would be to expect that the demonstrated actions are optimal or close to optimal. In other words, it means that the agent acts optimally or close to optimally. If we assume that the agent follows a deterministic policy, then the above assumption implies that every action should be optimal. 
But this leaves little margin for any errors resulting from model bias, noisy parameter estimations, etc. Under a deterministic policy model, a trajectory that contains a single sub-optimal action has an infinite negative log-likelihood, which exactly expresses the impossibility of such scenarios with deterministic policies. A more forgiving approach is to assume that the agent followed a stochastic policy $\pi_\theta(a|s)$. Under such a policy, sub-optimal actions can be observed, though they are expected to be suppressed according to a model for $\pi_\theta(a|s)$. Once a parametric model for the stochastic policy is specified, its parameters can be estimated using standard methods such as maximum likelihood estimation.

In this section, we present a family of probabilistic IRL models with a particular exponential specification of a stochastic policy:

$$
\pi_\theta(a|s) = \frac{1}{Z_\theta(s)}\, e^{\hat{r}_\theta(s,a)}, \qquad Z_\theta(s) = \sum_a e^{\hat{r}_\theta(s,a)}. \tag{11.5}
$$

Here $\hat{r}_\theta(s,a)$ is some function that will be related to the reward and action-value functions of reinforcement learning. We have already seen stochastic policies with such an exponential specification in Chap. 9 when we discussed "softmax in action" policies (see Eq. (9.38)), and in Chap. 10 when we introduced G-learning (see Eq. (10.114)). While in the previous sections a stochastic policy with an exponential parameterization such as Eq. (11.5) was occasionally referred to as a softmax policy, in the setting of inverse reinforcement learning it is often referred to as a Boltzmann policy, in recognition of its links to statistical mechanics and the work of Ludwig Boltzmann in the nineteenth century (see the box below). Methods leading to stochastic policies of the form (11.5) are generally based on maximization of entropy or minimization of the KL divergence in a space of parameterized action policies.
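A minimal sketch of the maximum likelihood step for a Boltzmann policy of the form (11.5) follows, with a deliberately simple state-independent specification $\hat{r}_\theta(s, a) = \theta_a$ over three actions (a toy choice made only for illustration; real MaxEnt IRL uses richer parameterizations):

```python
import math, random

random.seed(2)

def policy_probs(theta):
    """Boltzmann policy pi(a) = exp(theta_a) / Z over discrete actions."""
    w = [math.exp(t) for t in theta]
    z = sum(w)  # partition function
    return [x / z for x in w]

# simulate demonstrated actions from a "true" Boltzmann agent
true_theta = [0.0, 1.0, -1.0]
data = random.choices(range(3), weights=policy_probs(true_theta), k=5000)
freqs = [data.count(a) / len(data) for a in range(3)]

# maximum likelihood by gradient ascent on (1/N) log L;
# the gradient w.r.t. theta_a is freq_a - pi(a)
theta = [0.0, 0.0, 0.0]
for _ in range(2000):
    p = policy_probs(theta)
    theta = [theta[a] + 0.5 * (freqs[a] - p[a]) for a in range(3)]

p_hat = policy_probs(theta)
print([round(x, 3) for x in p_hat])   # fitted action probabilities
print([round(x, 3) for x in freqs])   # empirical frequencies (the MLE target)
```

In this degenerate state-free case the MLE simply matches the empirical action frequencies; the point of the sketch is only that observed actions enter the likelihood through the Boltzmann form, which is the inference step MaxEnt IRL builds on.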
The objective of this section is to provide an overview of such methods.

> The Boltzmann Distribution in Statistical Mechanics

In statistical mechanics, exponential distributions similar to (11.5) appear when considering closed systems with a fixed composition, such as molecular gases, that are in thermal equilibrium with their environment. The first formulation was suggested by Ludwig Boltzmann in 1868 in his work that developed a probabilistic approach to molecular gases in thermal equilibrium. The Boltzmann distribution characterizes states of a macroscopic system, such as a molecular gas, in terms of their energies $E_i$, where the index $i$ enumerates possible states. When such a system is at equilibrium with its environment that has temperature $T$, the Boltzmann distribution gives the probabilities of different energy states in the following form:

$$ p_i = \frac{1}{Z}\, e^{-\frac{E_i}{k_B T}}, $$

where $k_B$ is a constant parameter called the Boltzmann constant, and $Z$ is a normalization factor of the distribution, which is referred to as the partition function in statistical mechanics. The Boltzmann distribution can be obtained as the distribution which maximizes the entropy of the system

$$ H = -\sum_i p_i \log p_i, $$

where the sum is taken over all energy states accessible to the system, subject to the constraint $\sum_i p_i E_i = \bar{E}$, where $\bar{E}$ is some average energy. The Boltzmann distribution was extensively investigated and generalized beyond molecular gases to a general setting of systems at equilibrium by Josiah Willard Gibbs in 1902. It is therefore often referred to as the Boltzmann–Gibbs distribution in physics. In particular, Gibbs introduced the notion of a statistical ensemble as an idealization consisting of virtual copies of a system, such that each copy represents a possible state that the real system might be in. The physics-based notion of an ensemble corresponds to the notion of a probability space in mathematics.
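As a small numerical illustration of the Boltzmann distribution above (the energy levels and temperatures are arbitrary toy values chosen for this sketch):

```python
import numpy as np

def boltzmann(energies, kT):
    """Boltzmann probabilities p_i = exp(-E_i / (k_B T)) / Z for given energy levels."""
    w = np.exp(-np.asarray(energies, dtype=float) / kT)
    return w / w.sum()   # dividing by w.sum() is the partition function Z

# Three energy states; kT controls how strongly low-energy states dominate.
p = boltzmann([0.0, 1.0, 2.0], kT=1.0)
assert np.isclose(p.sum(), 1.0)    # normalized by the partition function
assert p[0] > p[1] > p[2]          # lower energy implies higher probability

# In the high-temperature limit the distribution tends to uniform,
# i.e. toward the maximum-entropy distribution with no energy constraint.
p_hot = boltzmann([0.0, 1.0, 2.0], kT=1e6)
assert np.allclose(p_hot, 1.0 / 3.0, atol=1e-5)
```

The high-temperature check mirrors the maximum-entropy characterization in the box: as the energy constraint becomes irrelevant, the entropy-maximizing distribution is uniform.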
The Boltzmann distribution arises for the so-called canonical ensemble that is obtained for a system with a fixed number of particles at thermal equilibrium with its environment (called a heat bath in physics) at a fixed temperature $T$. The Boltzmann–Gibbs distribution serves as the foundation for the modern approach to equilibrium statistical mechanics (Landau and Lifshitz 1980).

3.1 Maximum Entropy Principle

The Maximum Entropy (MaxEnt) principle (Jaynes 1957) is a general and highly popular method for ill-posed inversion problems where a probability distribution should be learned from a finite set of integral constraints on this distribution. The main idea of the MaxEnt method for inference of distributions from data given constraints is that beyond matching such constraints, the inferred distribution should be maximally non-informative, i.e. it should produce the highest possible uncertainty of a random variable described by this distribution. As the amount of uncertainty in a distribution can be quantified by its entropy, the MaxEnt method amounts to finding the distribution that maximizes the entropy while matching all available integral constraints. The MaxEnt principle provides a practical implementation of Laplace's principle of insufficient reason, and has roots in statistical physics (Jaynes 1957). Here we show the working of the MaxEnt principle using an example of learning the action policy in a simple single-step reinforcement learning setting. Such a setting is equivalent to removing time from the problem. This simplifies the setup considerably, because all data can now be assumed i.i.d., and there is no need to consider future implications of a current action. The resulting time-independent version of the MaxEnt principle is the version originally proposed by Jaynes in 1957. To understand this approach, we shall elucidate it from a statistical mechanics perspective. Let $\pi(a|s)$ be an action policy.
Consider a one-step setting, with a single reward $r(s, a)$ to be received for different combinations of $(s, a)$. An optimal policy should maximize the value function $V^\pi(s) = \int r(s, a) \pi(a|s)\, da$. However, the value function is a linear functional of $\pi(a|s)$ and does not exhibit an optimal value of $\pi(a|s)$ per se. Assuming that the rewards $r(s, a)$ are known, a concave optimization problem is obtained if we add an entropy regularization and consider the following functional:

$$ F^\pi(s) := V^\pi(s) + \frac{1}{\beta} H[\pi(a|s)] = \int \pi(a|s) \left( r(s,a) - \frac{1}{\beta} \log \pi(a|s) \right) da, \qquad (11.6) $$

where $1/\beta$ is a regularization parameter. If we take the variational derivative of this expression with respect to $\pi(a|s)$ and set it to zero, we obtain the optimal action policy (see Exercise 11.1)

$$ \pi(a|s) = \frac{1}{Z_\beta(s)}\, e^{\beta r(s,a)}, \qquad Z_\beta(s) := \int e^{\beta r(s,a)}\, da. \qquad (11.7) $$

Clearly, this expression assumes that the reward function $r(s, a)$ is such that the integral defining the normalization factor $Z_\beta(s)$ converges for all attainable values of $s$. Equation (11.7) has the same exponential form as Eq. (11.5) if we choose a parametric specification $\beta r(s, a) = \hat{r}_\theta(s, a)$.

The expression (11.7) is obtained above as a solution of an entropy-regularized maximization problem. The same form can also be obtained in a different way, by maximizing the entropy of $\pi(a|s)$ conditional on matching a given average reward $\bar{r}(s)$. This is achieved by maximizing the following functional:

$$ \tilde{F}^\pi(s) = -\int \pi(a|s) \log \pi(a|s)\, da + \lambda \left( \int \pi(a|s) r(s,a)\, da - \bar{r}(s) \right), \qquad (11.8) $$

where $\lambda$ is a Lagrange multiplier. The optimal distribution is

$$ \pi(a|s) = \frac{1}{Z_\lambda(s)}\, e^{\lambda r(s,a)}, \qquad Z_\lambda(s) = \int e^{\lambda r(s,a)}\, da. \qquad (11.9) $$

This has the same form as (11.7) with $\beta = \lambda$. On the other hand, for problems involving integral constraints, the value of $\lambda$ can be fixed in terms of the expected reward $\bar{r}$. To this end, we substitute the solution (11.9) into Eq. (11.8) and minimize the resulting expression with respect to $\lambda$:

$$ \min_\lambda \; \log Z_\lambda - \lambda \bar{r}(s). \qquad (11.10) $$

This produces

$$ \frac{1}{Z_\lambda(s)} \int r(s,a)\, e^{\lambda r(s,a)}\, da = \bar{r}(s), \qquad (11.11) $$

which is exactly the integral constraint on $\pi(a|s)$. The optimal value of $\lambda$ is found by solving Eq. (11.11), or equivalently by numerical minimization of (11.10). Note that this produces a unique solution, as the optimization problem (11.10) is convex (see Exercise 11.1).

> Links with Statistical Mechanics

As especially suggested by the second derivation, Eq. (11.7) or (11.9) can also be seen as a Boltzmann distribution of a statistical ensemble with energies $E(s, a) = -r(s, a)$ on a space of state-action pairs. In statistical mechanics, a distribution of states $x \in X$ in a state space $X$ with energies $E(x)$ for a canonical ensemble with a fixed average energy is given by the same Boltzmann form, albeit with a different functional form of the energy $E$. In statistical mechanics, the parameter $\beta$ has the specific form $\beta = 1/(k_B T)$, where $k_B$ is the Boltzmann constant and $T$ is the temperature of the system. For this reason, the parameter $\beta$ in Eq. (11.7) is often referred to as the inverse temperature.

A direct generalization of the MaxEnt principle is given by the minimum cross-entropy (MCE) principle that replaces the absolute entropy with a KL divergence with some reference distribution $\pi_0(a|s)$. In this case, instead of (11.8), we consider the following KL-regularized value function:

$$ F^\pi(s) = \int \pi(a|s) \left( r(s,a) - \frac{1}{\beta} \log \frac{\pi(a|s)}{\pi_0(a|s)} \right) da. \qquad (11.12) $$

The optimal action policy for this case is

$$ \pi(a|s) = \frac{1}{Z_\beta(s)}\, \pi_0(a|s)\, e^{\beta r(s,a)}, \qquad Z_\beta(s) := \int \pi_0(a|s)\, e^{\beta r(s,a)}\, da. \qquad (11.13) $$

The common feature of all methods based on MaxEnt or MCE principles is therefore the appearance of an exponential energy-based probability distribution. For a simple single-step setting with reward $r(s, a)$, the MaxEnt optimal policy is exponential in $r(s, a)$. As we will see next, an extension of entropy-based analysis also produces an exponential specification for the action policy in a multi-step case.
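The calibration of $\lambda$ described by Eqs. (11.10) and (11.11) can be sketched for a discrete action space, where the integrals become sums. The rewards and the target average reward below are made-up toy values; since the expected reward under the Boltzmann policy is monotone in $\lambda$, a simple bisection suffices:

```python
import numpy as np

r = np.array([1.0, 0.5, 0.0, -0.5])   # rewards for 4 discrete actions (toy values)

def policy(lmbda):
    """Boltzmann policy pi(a) = exp(lambda * r(a)) / Z_lambda, as in Eq. (11.9)."""
    w = np.exp(lmbda * r)
    return w / w.sum()

def avg_reward(lmbda):
    """Expected reward under the Boltzmann policy; increasing in lambda."""
    return float(policy(lmbda) @ r)

# Calibrate lambda so that E_pi[r] matches a target r_bar, i.e. solve Eq. (11.11).
r_bar = 0.6
lo, hi = 0.0, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if avg_reward(mid) < r_bar:
        lo = mid
    else:
        hi = mid
lmbda_star = 0.5 * (lo + hi)
assert abs(avg_reward(lmbda_star) - r_bar) < 1e-6
# Larger lambda (lower temperature) concentrates the policy on the best action.
assert np.argmax(policy(lmbda_star)) == 0
```

The monotonicity exploited here is exactly the convexity of (11.10): the derivative of the objective with respect to $\lambda$ is the feature-matching residual (11.11), whose slope is a variance and hence non-negative.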
3.2 Maximum Causal Entropy

In general, reinforcement learning or inverse reinforcement learning involves trajectories that extend over multiple steps. In the most general form, we have a series of states $S^T = S_{0:T}$ and a series of actions $A^T = A_{0:T}$ with some trajectory length $T > 1$. Sequences of states and actions in an MDP can be thought of as two interacting random processes $S_{0:T}$ and $A_{0:T}$. The problem of learning a policy can be viewed as a problem of inference of a distribution of actions $A_{0:T}$ given a distribution of states $S_{0:T}$.

Unlike MaxEnt inference problems for i.i.d. data, the time dependence of such problems requires some care. Indeed, a naive definition of a distribution of actions conditioned on states could involve conditional probabilities defined on whole paths, such as $P[A_{0:T} | S_{0:T}]$. A problem with such a definition would be that it could violate causality, as conditioning on a whole path of states involves conditioning on the future. For Markov Decision Processes, actions at time $t$ can depend only on the current state. If memory effects are important, they can be handled by using higher-order MDPs, or by switching to autoregressive models such as recurrent neural networks. Clearly, in both cases, to preserve causality, actions now (at time $t$) cannot depend on the future. When each variable $a_t$ is conditioned only on a portion $S_{0:t}$ of all variables $S_{0:T}$, the probability of $A$ causally conditioned on $S$ reads

$$ P\left( A^T \,\|\, S^T \right) = \prod_{t=0}^{T} P\left( A_t \,|\, S_{0:t}, A_{0:t-1} \right). \qquad (11.14) $$

Note that the standard definition of conditional probability would involve conditioning on the whole path $S_{0:T}$, which would violate causality. The causal conditional probability (11.14) implies, in particular, that any joint distribution $P(A^T, S^T)$ can be factorized as $P(A^T, S^T) = P(A^T \,\|\, S^T)\, P(S^T \,\|\, A^{T-1})$. The causal entropy (Kramer 1998) is defined as follows:

$$ H\left( A^T \,\|\, S^T \right) = \mathbb{E}_{A,S}\left[ -\log P\left( A^T \,\|\, S^T \right) \right] = \sum_{t=0}^{T} H\left( A_t \,\|\, S_{0:t}, A_{0:t} \right). \qquad (11.15) $$

In this section, we assume that the dynamics are Markovian, so that $P(S^T \,\|\, A^{T-1}) = \prod_t P(s_{t+1} | s_t, a_t)$. In addition, we assume a setting of an infinite-horizon MDP. For this case, we should use a discounted version of the causal entropy:

$$ H\left( A_{0:\infty} \,\|\, S_{0:\infty} \right) = \sum_{t=0}^{\infty} \gamma^t H\left( A_t \,\|\, S_{0:t}, A_{0:t} \right), \qquad (11.16) $$

with a discount factor $\gamma \leq 1$. The causal entropy (11.16) (or (11.15), for a finite-horizon case) presents a natural extension of the entropy of a conditional distribution to a dynamic setting where conditioning information changes over time. This is precisely the case for learning policies for Markov Decision Processes. In this setting, states $s_t$ of a system can be considered conditioning information, while actions $a_t$ are the subject of learning. For a first-order MDP, probabilities of actions at time $t$ depend only on the state at time $t$, and the causal entropy of the action policy takes a simpler form that depends only on a policy distribution $\pi(a_t|s_t)$:

$$ H\left( A_{0:\infty} \,\|\, S_{0:\infty} \right) = \sum_{t=0}^{\infty} \gamma^t H\left( a_t \,\|\, s_t \right) = -\mathbb{E}_S \left[ \sum_{t=0}^{\infty} \gamma^t \int \pi(a_t|s_t) \log \pi(a_t|s_t)\, da_t \right], \qquad (11.17) $$

where the expectation is taken over all future values of $s_t$. Importantly, because the process is Markov, this expectation depends only on the marginal distributions of $s_t$ at $t = 0, 1, \ldots$, but not on their joint distribution.

The causal entropy can be maximized under available constraints, providing an extension of the MaxEnt principle to dynamic processes. Let us assume that some feature functions $F(S, A)$ are observed in a demonstration with $T$ steps, and that feature functions are additive in time, i.e. $F(S, A) = \sum_t F(s_t, a_t)$. In particular, a total reward obtained from a trajectory $(S^T, A^T)$ is additive in time; therefore, it would fit such a choice. For additive feature functions, we have

$$ \mathbb{E}^\pi_{A,S}\left[ F(S, A) \right] = \mathbb{E}^\pi_{A,S}\left[ \sum_{t=0}^{\infty} \gamma^t F(s_t, a_t) \right]. \qquad (11.18) $$

Here $\mathbb{E}^\pi_{A,S}[\cdot]$ denotes expectation w.r.t.
a distribution over future states and actions induced by the policy $\pi(a_t|s_t)$ and the transition dynamics of the system expressed via conditional transition probabilities $P(s_{t+1}|s_t, a_t)$. Suppose we seek a policy $\pi(a|s)$ which matches empirical feature expectations

$$ \tilde{\mathbb{E}}_{emp}\left[ F(S, A) \right] = \mathbb{E}_{emp}\left[ \sum_t \gamma^t F(s_t, a_t) \right] = \frac{1}{T} \sum_{t=0}^{T-1} \gamma^t F(s_t, a_t). \qquad (11.19) $$

Maximum Causal Entropy optimization can now be formulated as follows:

$$ \begin{aligned} & \underset{\pi}{\text{argmax}} \;\; H\left( A^T \,\|\, S^T \right) \\ & \text{subject to:} \;\; \mathbb{E}^\pi_{A,S}\left[ F(S, A) \right] = \tilde{\mathbb{E}}_{A,S}\left[ F(S, A) \right] \\ & \text{and} \;\; \sum_{a_t} \pi(a_t|s_t) = 1, \quad \pi(a_t|s_t) \geq 0, \quad \forall s_t. \end{aligned} \qquad (11.20) $$

Here $\tilde{\mathbb{E}}_{A,S}[\cdot]$ stands for the empirical feature expectation as in Eq. (11.19). In contrast to a single-step MaxEnt problem, the constraints now refer to feature expectations collected over whole paths rather than single steps. Causality is not explicitly enforced in (11.20); however, a causally conditioned policy in an MDP factorizes as $\prod_{t=0}^{\infty} \pi(a_t|s_t)$. Therefore, using the factors $\pi(a_t|s_t)$ as decision variables, the policy $\pi$ is forced to be causally conditioned. Equivalently, we can swap the objective and the constraints, and consider the following dual problem:

$$ \begin{aligned} & \underset{\pi}{\text{argmax}} \;\; \mathbb{E}^\pi_{A,S}\left[ F(S, A) \right] - \tilde{\mathbb{E}}_{A,S}\left[ F(S, A) \right] \\ & \text{subject to:} \;\; H\left( A^T \,\|\, S^T \right) = \bar{H} \\ & \text{and} \;\; \sum_{a_t} \pi(a_t|s_t) = 1, \quad \pi(a_t|s_t) \geq 0, \quad \forall s_t, \end{aligned} \qquad (11.21) $$

where $\bar{H}$ is some value of the entropy that is fixed throughout the optimization. Unlike the previous formulation, which is non-concave and involves an infinite number of constraints, the dual formulation is concave and involves only one constraint on the entropy (in addition to normalization constraints). The dual form (11.21) of the Max-Causal Entropy method can be used for both direct RL and IRL. We first consider applications of this approach to direct reinforcement learning problems where rewards are observed.

? Multiple Choice Question 2

Select all the following correct statements:

a.
The Maximum Causal Entropy method provides an extension of the maximum entropy method for sequential decision-making inference which preserves causality relations between actions and future states.
b. The dual form of Maximum Causal Entropy produces multiple solutions as it amounts to a non-concave optimization.
c. A causally conditioned policy with the Maximum Causal Entropy method is ensured by adding causality constraints.
d. A causally conditioned policy with the Maximum Causal Entropy method is ensured by the MDP factorization of the process.

3.3 G-Learning and Soft Q-Learning

To apply the Max-Causal Entropy model (11.21) to reinforcement learning with observed rewards, we take expected instantaneous rewards $r(s_t, a_t)$ as features, i.e. set $F(s_t, a_t) = r(s_t, a_t)$. Furthermore, for direct reinforcement learning, the second term in the objective function (11.21) depends only on the empirical measure but not on the policy $\pi(a|s)$, and therefore it can be dropped from the optimization objective function. Finally, we extend the Max-Causal Entropy method by switching to a KL divergence with some reference policy $\pi_0(a|s)$ instead of using the entropy of $\pi(a|s)$ as a regularization. The latter case can always be recovered from the former one by choosing a uniform reference density $\pi_0(a|s)$. The Kullback–Leibler (KL) divergence of $\pi(\cdot|s_t)$ and $\pi_0(\cdot|s_t)$ is

$$ KL[\pi \| \pi_0](s_t) := \sum_{a_t} \pi(a_t|s_t) \log \frac{\pi(a_t|s_t)}{\pi_0(a_t|s_t)} = \mathbb{E}_\pi\left[ g^\pi(s, a) \,\big|\, s_t \right], \qquad (11.22) $$

where

$$ g^\pi(s, a) = \log \frac{\pi(a_t|s_t)}{\pi_0(a_t|s_t)} \qquad (11.23) $$

is the one-step information cost of a learned policy $\pi(a_t|s_t)$ relative to a reference policy $\pi_0(a_t|s_t)$. The problem of policy optimization expressed by Eqs. (11.21), generalized here by using the KL divergence (11.22) instead of the causal entropy, can now be formulated as a problem of maximization of the following functional:

$$ F^\pi_t(s_t) = \mathbb{E}\left[ \sum_{t'=t}^{T} \gamma^{t'-t} \left( r(s_{t'}, a_{t'}) - \frac{1}{\beta}\, g^\pi(s_{t'}, a_{t'}) \right) \right], \qquad (11.24) $$

where $1/\beta$ is a Lagrange multiplier.
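For a discrete action space, the one-step information cost (11.23) and the per-state KL divergence (11.22) can be computed directly. A minimal sketch with made-up policies:

```python
import numpy as np

def info_cost(pi, pi0):
    """One-step information cost g(s, a) = log(pi/pi0) per Eq. (11.23), and its
    expectation under pi, the per-state KL divergence KL[pi || pi0] of Eq. (11.22).
    pi, pi0: arrays of shape (S, A) with rows summing to one."""
    g = np.log(pi / pi0)
    return g, (pi * g).sum(axis=1)

pi  = np.array([[0.7, 0.3], [0.5, 0.5]])   # learned policy (toy values)
pi0 = np.full((2, 2), 0.5)                 # uniform reference policy
g, kl = info_cost(pi, pi0)
assert np.all(kl >= 0.0)          # KL divergence is non-negative
assert np.isclose(kl[1], 0.0)     # pi equals pi0 in state 1, so zero cost there
assert kl[0] > 0.0                # deviating from pi0 in state 0 incurs a cost
```

The non-negativity of the per-state KL terms is what makes $-\frac{1}{\beta} g^\pi$ act as a penalty for deviating from the reference policy in the free energy (11.24).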
Note that we dropped the second term in the objective function (11.21) in this expression. The reason is that this term depends only on the reward function but not on the policy. Therefore, it can be omitted for the problem of direct reinforcement learning. Note however that it should be kept for IRL, as we will discuss later. The expression (11.24) is the value function of a problem with a modified KL-regularized reward $r(s_t, a_t) - \frac{1}{\beta} g^\pi(s_t, a_t)$ that is sometimes referred to as the free energy function. Note that $\beta$ in Eq. (11.24) serves as the “inverse-temperature” parameter that controls a tradeoff between reward optimization and proximity to the reference policy. The free energy $F^\pi_t(s_t)$ is the entropy-regularized value function, where the amount of regularization can be calibrated to the level of noise in the data. The optimization problem (11.24) is exactly the one we studied in Chap. 10 where we presented G-learning. We recall the self-consistent set of equations that need to be solved jointly in G-learning:

$$ F^\pi_t(s_t) = \frac{1}{\beta} \log Z_t = \frac{1}{\beta} \log \sum_{a_t} \pi_0(a_t|s_t)\, e^{\beta G^\pi_t(s_t, a_t)} \qquad (11.25) $$

$$ \pi(a_t|s_t) = \pi_0(a_t|s_t)\, e^{\beta \left( G^\pi_t(s_t, a_t) - F^\pi_t(s_t) \right)} \qquad (11.26) $$

$$ G^\pi_t(s_t, a_t) = r(s_t, a_t) + \gamma\, \mathbb{E}_{t,a}\left[ F^\pi_{t+1}(s_{t+1}) \,\big|\, s_t, a_t \right]. \qquad (11.27) $$

Here the G-function $G^\pi_t(s_t, a_t)$ is a KL-regularized action-value function. Equations (11.25), (11.26), (11.27) constitute a system of equations that should be solved self-consistently for $\pi(a_t|s_t)$, $G^\pi_t(s_t, a_t)$, and $F^\pi_t(s_t)$ (Fox et al. 2015). For a finite-horizon problem of length $T$, the system can be solved by backward recursion for $t = T-1, \ldots, 0$, using appropriate terminal conditions at $t = T$. If we substitute the augmented free energy (11.25) into Eq. (11.27), we obtain

$$ G^\pi_t(s, a) = r(s_t, a_t) + \frac{\gamma}{\beta}\, \mathbb{E}_{t,a}\left[ \log \sum_{a_{t+1}} \pi_0(a_{t+1}|s_{t+1})\, e^{\beta G^\pi_{t+1}(s_{t+1}, a_{t+1})} \right]. \qquad (11.28) $$

This equation is a soft relaxation of the Bellman optimality equation for the action-value function (Fox et al. 2015). The “inverse-temperature” parameter $\beta$ in Eq.
(11.28) determines the strength of entropy regularization. In particular, if we take $\beta \to \infty$, we recover the original Bellman optimality equation for the Q-function. Because the last term in (11.28) approximates the $\max(\cdot)$ function when $\beta$ is large but finite, Eq. (11.28) is known, for the special case of a uniform reference density $\pi_0$, as “soft Q-learning.”

Note that we could also bypass the G-function altogether, and proceed with the Bellman optimality equation for the free energy F-function (11.24). In this case, we have a pair of equations for $F^\pi_t(s_t)$ and $\pi(a_t|s_t)$:

$$ F^\pi_t(s_t) = \mathbb{E}_a\left[ r(s_t, a_t) - \frac{1}{\beta}\, g^\pi(s_t, a_t) + \gamma\, \mathbb{E}_{t,a}\left[ F^\pi_{t+1}(s_{t+1}) \right] \right] \qquad (11.29) $$

$$ \pi(a_t|s_t) = \frac{1}{Z_t}\, \pi_0(a_t|s_t)\, e^{\beta \left( r(s_t, a_t) + \gamma\, \mathbb{E}_{t,a}\left[ F^\pi_{t+1}(s_{t+1}) \right] \right)}. $$

Equation (11.29) shows that one-step rewards $r(s_t, a_t)$ alone do not provide a specification of single-step action probabilities $\pi(a_t|s_t)$. Rather, a specification of the sum $r(s_t, a_t) + \gamma\, \mathbb{E}_{t,a}[F^\pi_{t+1}(s_{t+1})]$ is required (Ortega et al. 2015). However, in the special case when dynamics are linear and rewards $r(s_t, a_t)$ are quadratic, the term $\mathbb{E}_{t,a}[F^\pi_{t+1}(s_{t+1})]$ has the same parametric form as the time-$t$ reward $r(s_t, a_t)$; therefore, addition of this term amounts to a “renormalization” of the parameters of the one-step reward function (see Exercise 11.3). When the objective is to learn a policy, such renormalized parameters can be directly learned from data, bypassing the need to separate them into the current reward and an expected future reward.

We see that the G-learning (or equivalently the Max-Causal Entropy) optimal policy (11.26) for an MDP formulation is still given by the familiar Boltzmann form as in (11.7), but with a different energy function, which is now given by the G-function.
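The finite-horizon backward recursion that solves the G-learning system (11.25)-(11.27) can be sketched directly for a small tabular MDP. The toy MDP below is invented for illustration, and we assume the terminal condition $F_T = 0$:

```python
import numpy as np

def g_learning_finite_horizon(r, P, pi0, beta, gamma, T):
    """
    Backward recursion for the G-learning equations (11.25)-(11.27) in a tabular,
    time-homogeneous MDP.  r: (S, A) rewards; P: (S, A, S) transition probabilities;
    pi0: (S, A) reference policy.  Returns the per-step policies pi[t], each (S, A),
    using the terminal condition F_T = 0.
    """
    S, A = r.shape
    F_next = np.zeros(S)
    policies = [None] * T
    for t in reversed(range(T)):
        G = r + gamma * (P @ F_next)                 # (11.27): expected next-step F
        F = np.log((pi0 * np.exp(beta * G)).sum(axis=1)) / beta   # (11.25)
        pi = pi0 * np.exp(beta * (G - F[:, None]))   # (11.26): Boltzmann policy
        policies[t] = pi
        F_next = F
    return policies

# Toy 2-state, 2-action MDP with a uniform reference policy (numbers illustrative).
r = np.array([[1.0, 0.0], [0.0, 1.0]])
P = np.full((2, 2, 2), 0.5)
pi0 = np.full((2, 2), 0.5)
pis = g_learning_finite_horizon(r, P, pi0, beta=2.0, gamma=0.9, T=5)
assert all(np.allclose(pi.sum(axis=1), 1.0) for pi in pis)   # valid policies
assert pis[0][0, 0] > pis[0][0, 1]   # the higher-reward action gets more mass
```

Note how the normalization of each policy follows automatically from substituting (11.25) into (11.26), so no explicit renormalization step is needed.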
Unlike the previous single-step case, in a multi-period setting this function is not available in a closed form, but rather is defined recursively by the self-consistent G-learning equations (11.25), (11.26), (11.27).

3.4 Maximum Entropy IRL

The G-learning (Max-Causal Entropy) framework can be used for both direct and inverse reinforcement learning. Here we apply it to the task of IRL. In this section, we assume a time-homogeneous MDP with an infinite time horizon. We also assume that a time-stationary action-value function $G(s_t, a_t)$ is specified as a parametric model (e.g., a neural network) with parameters $\theta$, so we write it as $G_\theta(s_t, a_t)$. The objective of IRL inference is to learn the parameters $\theta$ from data. We start with the G-learning equations (11.25) expressed in terms of the policy function $\pi(a_t|s_t)$ and the parameterized G-function $G_\theta(s_t, a_t)$:

$$ \pi_\theta(a_t|s_t) = \frac{1}{Z_\theta(s_t)}\, \pi_0(a_t|s_t)\, e^{\beta G_\theta(s_t, a_t)}, \qquad Z_\theta(s_t) := \int \pi_0(a_t|s_t)\, e^{\beta G_\theta(s_t, a_t)}\, da_t, \qquad (11.30) $$

where the G-function (a.k.a. a soft Q-function) satisfies a soft relaxation of the Bellman optimality equation

$$ G_\theta(s, a) = r(s_t, a_t) + \frac{\gamma}{\beta}\, \mathbb{E}_{t,a}\left[ \log \sum_{a_{t+1}} \pi_0(a_{t+1}|s_{t+1})\, e^{\beta G_\theta(s_{t+1}, a_{t+1})} \right]. \qquad (11.31) $$

Under MaxEnt IRL, the stochastic action policy (11.30) is used as a probabilistic model of observed data made of pairs $(s_t, a_t)$. A loss function in terms of parameters $\theta$ can therefore be obtained by applying the conventional maximum likelihood method to this model. To gain more insight, let us start with the likelihood of a particular path $\tau$:

$$ P(\tau) = p(s_0) \prod_{t=0}^{T-1} \pi_\theta(a_t|s_t)\, P(s_{t+1}|s_t, a_t) = p(s_0) \prod_{t=0}^{T-1} \frac{1}{Z_\theta(s_t)}\, \pi_0(a_t|s_t)\, e^{\beta G_\theta(s_t, a_t)}\, P(s_{t+1}|s_t, a_t). \qquad (11.32) $$

Now we take the negative logarithm of this expression to get the negative log-likelihood, where we can drop contributions from the initial state distribution $p(s_0)$ and the state transition probabilities $P(s_{t+1}|s_t, a_t)$, as neither depends on the model parameters $\theta$:

$$ L(\theta) = \sum_{t=0}^{T-1} \left( \log Z_\theta(s_t) - \beta G_\theta(s_t, a_t) \right). \qquad (11.33) $$

Minimization of this loss function with respect to the parameters $\theta$ gives an optimal soft Q-function $G_\theta(s_t, a_t)$. Once this function is found, if needed or desired, we could use Eq. (11.31) in reverse to estimate the one-step expected rewards $r(s_t, a_t)$. Gradients of the loss function (11.33) can be computed as follows:

$$ \frac{\partial L(\theta)}{\partial \theta} = \beta \sum_{t=0}^{T-1} \left( \int \pi_\theta(a|s_t)\, \frac{\partial G_\theta(s_t, a)}{\partial \theta}\, da - \frac{\partial G_\theta(s_t, a_t)}{\partial \theta} \right) \sim \left\langle \frac{\partial G_\theta(s, a)}{\partial \theta} \right\rangle_{model} - \left\langle \frac{\partial G_\theta(s, a)}{\partial \theta} \right\rangle_{data}. \qquad (11.34) $$

The second term in this expression can be computed directly from the data for any given value of $\theta$, and thus does not pose any issues. The problem is with the first term in (11.34), which gives the gradient of the log-partition function in Eq. (11.33). This term involves an integral over all possible actions at each step of the trajectory, computed with the probability density $\pi_\theta(a|s_t)$. For a discrete-action MDP, the integral becomes a finite sum that can be computed directly, but for continuous and possibly high-dimensional action spaces, an accurate calculation of this integral for a fixed value of $\theta$ might be time-consuming. Given that this integral should be evaluated multiple times during optimization of the parameters $\theta$, unless it can be evaluated analytically, the computational burden of this step can be very high or even prohibitive. Leaving these computational issues aside for a moment, the intermediate conclusion is that the task of IRL can be solved using the maximum likelihood method for the action policy produced by G-learning, and without explicitly solving the Bellman optimality equation.
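For a discrete-action MDP, the loss (11.33) and its "model minus data" gradient (11.34) can be written down explicitly. The sketch below assumes a linear model $G_\theta(s,a) = \theta \cdot \Phi(s,a)$ with $\beta = 1$, a uniform reference policy absorbed into the normalization, and random toy features; it is an illustration, not a production implementation:

```python
import numpy as np

def maxent_irl_loss_and_grad(theta, demos, Phi, beta=1.0):
    """
    Negative log-likelihood (11.33) and its gradient (11.34) for a discrete-action
    MDP with a linear model G_theta(s, a) = theta . Phi[s, a].
    demos: list of observed (s, a) pairs; Phi: array (S, A, K) of features.
    """
    G = Phi @ theta                                  # (S, A) soft action-values
    logZ = np.log(np.exp(beta * G).sum(axis=1))      # per-state log-partition
    pi = np.exp(beta * G - logZ[:, None])            # Boltzmann policy (11.30)
    loss, grad = 0.0, np.zeros_like(theta)
    for s, a in demos:
        loss += logZ[s] - beta * G[s, a]
        grad += beta * (pi[s] @ Phi[s] - Phi[s, a])  # model minus data features
    return loss, grad

# Toy problem: 2 states, 2 actions, 3 features (all values hypothetical).
rng = np.random.default_rng(0)
Phi = rng.normal(size=(2, 2, 3))
demos = [(0, 1), (1, 0), (0, 1)]
theta = np.zeros(3)
loss0, _ = maxent_irl_loss_and_grad(theta, demos, Phi)
# The loss is convex in theta, so plain gradient descent decreases it.
for _ in range(500):
    _, g = maxent_irl_loss_and_grad(theta, demos, Phi)
    theta -= 0.01 * g
loss1, _ = maxent_irl_loss_and_grad(theta, demos, Phi)
assert loss1 < loss0
```

Because the loss is a sum of log-sum-exp terms minus linear terms, it is convex in $\theta$ for this linear architecture, which is why the crude gradient descent above is sufficient for the sketch.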
The solution approach used here is to model the whole soft action-value function $G_\theta(s_t, a_t)$ using a flexible function approximation method such as a neural network. As $G_\theta(s_t, a_t)$ defines the action policy $\pi_\theta(a|s_t)$, by making inference of the policy, we can directly learn the soft action-value function. We note, however, that such apparent relief from the need to solve the (soft) Bellman optimality equation at each internal step of the IRL task has a flip side. In the form presented above, the MaxEnt IRL approach is identical to behavioral cloning with the G-function $G_\theta(s_t, a_t)$ fitted to data in the form of pairs $(s_t, a_t)$. This is different from TD methods such as Q-learning or its “soft” versions that consider triplets of transitions $(s_t, a_t, s_{t+1})$, and thus capture the dynamics of the system without estimating them explicitly. Therefore, all problems with behavioral cloning mentioned above in this chapter will also arise here. In particular, fitting a soft value function using only pairs $(s_t, a_t)$ and maximum likelihood estimation could produce a G-function that is compatible with the data, and yet produces implausible single-step reward functions when Eq. (11.31) is used at the last step with the estimated G-function. To preclude such potential problems, instead of using a parametric model for the G-function, we could directly specify a parametric single-step reward function $r_\theta(s_t, a_t)$. For example, with linear architectures, a reward function is linear in a set of $K$ pre-specified features $\Phi_k(s_t, a_t)$:

$$ r_\theta(s_t, a_t) = \sum_{k=1}^{K} \theta_k \Phi_k(s_t, a_t). \qquad (11.35) $$

Alternatively, a reward function can be non-linear in the parameters $\theta$, and could be defined using, e.g., a neural network or a Gaussian process.

> IRL with Bounded Rewards

As we discussed in Chap. 9, reinforcement learning requires that rewards should be bounded from above by some value $r_{max}$.
On the other hand, it also has certain invariances with respect to transformations of the reward function, namely the policy remains unchanged under affine transformations of the reward function $r(s, a) \to a\, r(s, a) + b$, where $a > 0$ and $b$ are fixed parameters. We can use this invariance in order to fix the highest possible reward to be zero: $r_{max} = 0$, without any loss of generality. Let us assume the following functional form of the reward function:

$$ r(s, a) = \log D(s, a), \qquad (11.36) $$

where $D(s, a)$ is another function of the state and action. Assume that the range of the function $D(s, a)$ is the unit interval, i.e. $0 \leq D(s, a) \leq 1$. In this case, the reward is bounded from above by zero, as required: $-\infty < r(s, a) \leq 0$. Now, because $0 \leq D(s, a) \leq 1$, we can interpret $D(s, a)$ as a probability of a binary classifier. If $D(s, a)$ is chosen to be the probability that the given action $a$ in state $s$ was generated by the expert, then according to Eq. (11.36), maximization of the reward corresponds to maximization of the log-probability of the expert trajectory. Let us use a simple logistic regression model $D(s, a) = \sigma(\theta^T \Phi(s, a))$, where $\sigma(x)$ is the logistic function, $\theta$ is a vector of model parameters of size $K$, and $\Phi(s, a)$ is a vector of $K$ basis functions. For this specification, we obtain the following parameterization of the reward function:

$$ r(s, a) = -\log\left( 1 + e^{-\theta^T \Phi(s, a)} \right). \qquad (11.37) $$

As one can check, this reward function is concave in $a$ if the basis functions $\Phi_k(s, a)$ are linear in $a$ while having an arbitrary dependence on $s$ (see Exercise 11.2). Therefore, such a reward can be used as an alternative to the linear specification (11.35) for risk-averse RL and IRL. We will return to the reward specification (11.36) below when we discuss imitation learning.

Once the reward function is defined, the parametric dependence of the G-function is fixed by the soft Bellman equation (11.31). The latter will also define gradients of the G-function that enter Eq.
(11.34). The gradients can be estimated using samples from the true data-generating distribution $\pi_E$ and the model distribution $\pi_\theta$. Clearly, solving the IRL problem in this way would make the estimated rewards $r_\theta(s_t, a_t)$ more consistent with the dynamics than in the previous version that directly works with a parameterized G-function. However, this forces the IRL algorithm to solve the direct RL problem of finding the optimal soft action-value function $G_\theta(s_t, a_t)$ at each step of the optimization over the parameters $\theta$. Given that solving the direct RL problem even once might be quite time-consuming, especially in high-dimensional continuous action spaces, this can render the computational cost of directly inferring one-step rewards very high and impractical for real-world applications.

Another, computationally more feasible, option is to make the BC-like loss function (11.33) more consistent with the dynamics. We introduce a regularization that depends on observed triplet transitions $(s_t, a_t, s_{t+1})$. One simple idea in this direction is to add a regularization term equal to the squared Bellman error, i.e. the squared difference between the left- and right-hand sides of Eq. (11.31), where the one-step reward is set to be a fixed number rather than a function of state and action. Such an approach was applied in robotics, where it was called SQIL (Soft Q Imitation Learning) (Reddy et al. 2019). We will return to the topic of regularization in IRL in the next section, where we will consider IRL in the context of imitation learning. In passing, we shall consider other computational aspects of the IRL problem that persist with or without regularization.

3.5 Estimating the Partition Function

After the parameters $\theta$ are optimized, Eq. (11.31) can be used in order to estimate the one-step expected rewards $r(s_t, a_t)$.
In practice, computing the gradients of the resulting loss function involves integration over a (multi-dimensional) action space. This produces the main computational bottleneck of the MaxEnt IRL method. Note that the same bottleneck arises in direct reinforcement learning with the G-learning equations (11.25), which also involve computing integrals over the action space. One commonly used approach to numerically computing integrals involving probability distributions is importance sampling. If $\hat{\mu}(a_t|s_t)$ is a sampling distribution, then the integral appearing in the gradient (11.34) can be evaluated as follows:

$$ \int \pi_\theta(a_t|s_t)\, \frac{\partial G_\theta(s_t, a_t)}{\partial \theta}\, da_t = \int \hat{\mu}(a_t|s_t)\, \frac{\pi_\theta(a_t|s_t)}{\hat{\mu}(a_t|s_t)}\, \frac{\partial G_\theta(s_t, a_t)}{\partial \theta}\, da_t, $$

which replaces integration with respect to the original distribution with integration with respect to the sampling distribution $\hat{\mu}(a_t|s_t)$, with the idea that this distribution might be easier to sample from than the original probability density. When this distribution is used for sampling, the gradients $\partial G_\theta / \partial \theta$ are multiplied by the likelihood ratios $\pi_\theta / \hat{\mu}$. Importance sampling becomes more accurate when the sampling distribution $\hat{\mu}(a_t|s_t)$ is close to the optimal action policy $\pi_\theta(a_t|s_t)$. This observation could be used to produce an adaptive sampling distribution $\hat{\mu}(a_t|s_t)$. For example, we could envision a computational scheme with updates of the G-function according to the gradients

$$ \frac{\partial L(\theta)}{\partial \theta} = \beta \sum_{t=0}^{T-1} \left( \int \hat{\mu}(a_t|s_t)\, \frac{\pi_\theta(a_t|s_t)}{\hat{\mu}(a_t|s_t)}\, \frac{\partial G_\theta(s_t, a_t)}{\partial \theta}\, da_t - \frac{\partial G_\theta(s_t, a_t)}{\partial \theta} \right), $$

which would alternate with updates of the sampling distribution $\hat{\mu}(a_t|s_t)$ that depend on the values of $\theta$ from a previous iteration. Such methods are known in robotics as “guided cost learning” (Finn et al. 2016). We will discuss a related method in Sect. 5 where we consider alternative approaches to learning from demonstrations.
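The importance sampling idea above can be sketched in one dimension. In practice the policy $\pi_\theta$ is known only up to its partition function, so self-normalized importance weights are used; the quadratic energy and the Gaussian sampling distribution below are toy choices made for this illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Unnormalized target: pi_theta(a) proportional to exp(G(a)) with a quadratic G,
# i.e. a Gaussian centered at a = 1 (toy stand-in for a Boltzmann policy).
G = lambda a: -(a - 1.0) ** 2
grad_G = lambda a: -2.0 * (a - 1.0)     # stand-in for dG/dtheta

# Sampling distribution mu_hat: a broad Gaussian that is easy to draw from.
a = rng.normal(loc=0.0, scale=2.0, size=200_000)
log_mu = -0.5 * (a / 2.0) ** 2 - np.log(2.0 * np.sqrt(2.0 * np.pi))

# Self-normalized importance weights pi/mu; the unknown normalization of pi
# cancels in the ratio after normalizing the weights.
logw = G(a) - log_mu
w = np.exp(logw - logw.max())
w /= w.sum()
estimate = float(w @ grad_G(a))

# Exact value: E_pi[-2(a - 1)] = 0 because the target is centered at a = 1.
assert abs(estimate) < 0.05
```

The accuracy of this estimate degrades as $\hat{\mu}$ drifts away from $\pi_\theta$, which is precisely the motivation for the adaptive sampling scheme of guided cost learning mentioned above.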
Before turning to such advanced methods, we would like to present a tractable formulation of MaxEnt IRL where the partition function can be computed exactly, so that approximations are not needed. Without too much loss of generality, we will present such a formulation in the context of a problem of inference of customer preferences and price sensitivity. Such a problem can also be viewed as a special case of a consumer credit problem. Similar examples in consumer credit might include prepaid household utility payment plans, where the consumer prepays their utilities and is penalized for overage, and lines of credit in payment processing and ATM services. Other examples include consumer loans and mortgages, where different loan products are offered to the consumer, with varying interest rates and late payment penalties, and the user chooses when to make principal payments.

? Multiple Choice Question 3

Select all the following correct statements:

a. Maximum Entropy IRL provides a solution of the self-consistent system of G-learning, without access to the reward function, that fits observable sequences of states and actions.
b. “Soft Q-learning” is a relaxation of the Bellman optimality equation for the action-value function that is obtained from G-learning by adopting a uniform reference action policy.
c. Maximum Entropy IRL assumes that all demonstrations are strictly optimal.
d. Taking the limit $\beta \to \infty$ in Maximum Entropy IRL is equivalent to assuming that all demonstrations are strictly optimal.

4 Example: MaxEnt IRL for Inference of Customer Preferences

The previous section presented a general formulation of the MaxEnt IRL approach. While this approach can be formulated for both discrete and continuous state-action spaces, for the latter case computing the partition function is often the main computational burden in practical applications. These computational challenges should not overshadow the conceptual simplicity of the MaxEnt IRL approach.
In this section we present a particularly simple version of this method which can be derived using quadratic rewards and Gaussian policies. We will present this formulation in the context of a problem of utmost interest in marketing, which is the problem of learning preferences and price sensitivities of customers of a recurrent utility service.

11 Inverse Reinforcement Learning and Imitation Learning

We will also use this simple example to provide the reader with some intuition on the amount of data needed to apply IRL. As we will show below, caution should be exercised when applying IRL to real-world noisy data. In particular, using simulated examples, we will show how observational noise, inevitable in any finite-sample data, can masquerade as an apparent heterogeneity of customer preferences.

4.1 IRL and the Problem of Customer Choice

Understanding customer choices, demand, and preferences, with customers being consumers or firms, is a central tenet of the marketing literature. One important class of such problems is dynamic consumer demand for recurrent utility-like plans and services, such as cloud computing plans, internet data plans, utility plans (e.g., electricity, gas, phone), etc. Consumer actions in this setting extend over a period of time, such as the term of a contract or a period between regular payments for a plan, and can therefore be considered a multi-step decision-making problem. If customers are modeled as utility-maximizing rational agents, the problem is well suited for methods of inverse optimal control or inverse reinforcement learning. In the marketing literature, the inverse optimal control approach to learning the customer utility is often referred to as structural models; see, e.g., Marschinski et al. (2007). This approach has the advantage over purely statistical regression-based models in its ability to discern true consumer choices and demand preferences from effects induced by particular marketing campaigns.
This enables the promotion of new products and offers, whose attractiveness to consumers could then be assessed based on the learned consumer utility. Structural models view forward-looking consumers as rational agents maximizing their streams of expected utilities of consumption over a planning horizon, rather than their one-step utility. Structural models typically specify a model for a consumer utility, and then estimate such a model using methods of dynamic programming and stochastic optimal control. Using the language of reinforcement learning, structural models require methods of dynamic programming or approximate dynamic programming using deterministic policies. As we mentioned earlier in this chapter, using deterministic policies to infer agents' utilities may be problematic if the demonstrated behavior is suboptimal. A deterministic policy, which assumes that each step should be strictly optimal, will assign a zero probability to any path that is not strictly optimal. This would rule out any data where the demonstrated behavior is expected to deviate in any way from a strictly optimal behavior. Needless to say, available data is almost always sub-optimal to varying extents. To relax the assumption of strict optimality for all demonstrations, structural models usually add a random component to the one-step customer utility, which is sometimes referred to as "user shocks." An example of such an approach can be found in Xu et al. (2015), who applied it to infer reward (utility) functions of consumers of mobile data plans. While this enables sub-optimal trajectories, this approach requires optimization of the reward parameters using Monte Carlo simulation, where unobserved and simulated "user shocks" are added in the parameter estimation procedure.
Instead of pursuing such an approach, MaxEnt IRL offers an alternative and more computationally efficient way to manage possible sub-optimality in data by using stochastic policies instead of deterministic policies. This approach provides some degree of tolerance to occasional, non-excessive deviations from a strictly optimal behavior, which are described as rare fluctuations according to the model with optimal parameters. We will now present a simple parametric specification of the MaxEnt IRL method that we introduced in this chapter. As we will show, it leads to a very lightweight computational method in comparison to Monte Carlo based methods for structural models.

4.2 Customer Utility Function

More formally, consider a customer that purchased a single-service plan with the monthly price $F$, initial quota $q_0$, and price $p$ to be paid per unit of consumption upon breaching the monthly quota on the plan.³ We specify a single-step utility (reward) function of a customer at time $t = 0, 1, \ldots, T-1$ (where $T$ is the length of a payment period, e.g., a month) as follows:

$$
r(a_t, q_t, d_t) = \mu a_t - \frac{1}{2}\beta a_t^2 + \gamma a_t d_t - \eta p \,(a_t - q_t)_+ + \kappa q_t \mathbb{1}_{a_t = 0}.
$$

Here $a_t \ge 0$ is the daily consumption on day $t$, $q_t \ge 0$ is the remaining allowance at the start of day $t$, and $d_t$ is the number of remaining days until the end of the billing cycle, and we use the short notation $x_+ = \max(x, 0)$ for any $x$. The fourth term in Eq. (11.40) is proportional to the payment $p(a_t - q_t)_+$ made by the customer once the monthly quota $q_0$ is exhausted. Parameter $\eta$ gives the price sensitivity of the customer, while parameters $\mu, \beta, \gamma$ specify the dependence of the user reward on the state-action variables $q_t, d_t, a_t$. Finally, the last term $\sim \kappa q_t \mathbb{1}_{a_t=0}$ gives the reward received upon zero consumption $a_t = 0$ at time $t$ (here $\mathbb{1}_{a_t=0}$ is an indicator function that is equal to one if $a_t = 0$, and is zero otherwise).
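A direct transcription of the one-step utility as reconstructed above can be written in a few lines; the parameter values in the defaults are arbitrary illustrations, not calibrated values:

```python
def reward(a_t, q_t, d_t, mu=1.0, beta=0.5, gamma=0.1, eta=2.0, kappa=0.05, p=3.0):
    """One-step customer utility following the structure of Eq. (11.40).

    All default parameter values here are arbitrary illustrations.
    """
    overage = max(a_t - q_t, 0.0)          # (a_t - q_t)_+
    zero_use = 1.0 if a_t == 0 else 0.0    # indicator 1_{a_t = 0}
    return (mu * a_t
            - 0.5 * beta * a_t ** 2
            + gamma * a_t * d_t
            - eta * p * overage
            + kappa * q_t * zero_use)
```

For example, a day of zero consumption earns only the last term, `kappa * q_t`, while consumption above the remaining allowance is penalized through the overage term at rate `eta * p`.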
Model calibration amounts to estimation of the parameters $\eta, \mu, \beta, \gamma, \kappa$ given the history of the user's consumption. Note that the reward (11.40) can be equivalently written as an expansion over $K = 5$ basis functions:

$$
r(a_t, q_t, d_t) = \theta^T \Phi(a_t, q_t, d_t) = \sum_{k=0}^{K-1} \theta_k \,\Phi_k(a_t, q_t, d_t),
$$

where

$$
\theta_0 = \mu \,\overline{a_t}, \quad
\theta_1 = -\frac{1}{2}\beta \,\overline{a_t^2}, \quad
\theta_2 = \gamma \,\overline{a_t d_t}, \quad
\theta_3 = -\eta p \,\overline{(a_t - q_t)_+}, \quad
\theta_4 = \kappa \,\overline{q_t \mathbb{1}_{a_t=0}}
$$

(here $\overline{X}$ stands for the empirical mean of $X$), and the following set of basis functions $\{\Phi_k\}_{k=0}^{K-1}$ is used:

$$
\Phi_0(a_t, q_t, d_t) = a_t / \overline{a_t}, \quad
\Phi_1(a_t, q_t, d_t) = a_t^2 / \overline{a_t^2}, \quad
\Phi_2(a_t, q_t, d_t) = a_t d_t / \overline{a_t d_t},
$$
$$
\Phi_3(a_t, q_t, d_t) = (a_t - q_t)_+ / \overline{(a_t - q_t)_+}, \quad
\Phi_4(a_t, q_t, d_t) = q_t \mathbb{1}_{a_t=0} / \overline{q_t \mathbb{1}_{a_t=0}}.
$$

As we explained above, structural models attempt to reconcile deterministic policies and a possible sub-optimal behavior by adding random "user shocks" to the user utility. For example, such "user shocks" can be added to the parameter $\mu$. A drawback of such an approach is that model estimation requires Monte Carlo simulation of user shock paths. This can be compared with MaxEnt IRL. Because MaxEnt IRL is a probabilistic approach that assigns probabilities to observed paths, it does not require introducing a random shock to the utility function in order to reconcile the model with a possible sub-optimal behavior. Therefore, MaxEnt IRL does not need Monte Carlo simulation to estimate parameters of the user utility, and instead can use standard maximum likelihood estimation (MLE). For the reward defined in Eq. (11.40), MLE amounts to a convex optimization with 5 variables, which can be performed efficiently using standard off-the-shelf convex optimization software. Moreover, our specification (11.40) can easily be generalized by adding more basis functions while keeping the rest of the methodology intact.

³ For plans that do not allow breaching the quota $q_0$, the present formalism still applies by setting the price $p$ to infinity.
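The normalized basis functions above can be computed from a consumption history by dividing each raw feature by its empirical mean. The sketch below is a hypothetical illustration; note that it assumes every empirical mean is nonzero (e.g., the data contains at least one overage day and at least one zero-consumption day), which need not hold in practice:

```python
import numpy as np

def raw_features(a_t, q_t, d_t):
    """Unnormalized features behind Phi_0..Phi_4 (K = 5)."""
    return np.array([
        a_t,                                   # pairs with theta_0 ~ mu
        a_t ** 2,                              # pairs with theta_1 ~ -beta/2
        a_t * d_t,                             # pairs with theta_2 ~ gamma
        max(a_t - q_t, 0.0),                   # pairs with theta_3 ~ -eta*p
        q_t * (1.0 if a_t == 0 else 0.0),      # pairs with theta_4 ~ kappa
    ])

def basis_functions(history):
    """history: list of (a_t, q_t, d_t) triples.

    Returns the feature matrix with each column divided by its empirical
    mean, so that every Phi_k has empirical mean 1 over the data.
    Assumes all column means are nonzero.
    """
    raw = np.array([raw_features(*x) for x in history])
    means = raw.mean(axis=0)
    return raw / means
```

With this normalization, each fitted coefficient $\theta_k$ absorbs the scale of its feature, which is what makes the expansion above equivalent to Eq. (11.40).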
4.3 Maximum Entropy IRL for Customer Utility

We use an extension of MaxEnt IRL called Relative Entropy IRL (Boularias et al. 2011), which replaces the uniform distribution in the MaxEnt method by a non-uniform benchmark (or "prior") distribution $\pi_0(a_t|q_t, d_t)$. This produces the exponential single-step transition probability

$$
P(q_{t+1} = q_t - a_t, a_t | q_t, d_t) := \pi(a_t|q_t, d_t)
= \frac{\pi_0(a_t|q_t, d_t)\, e^{\,r(a_t, q_t, d_t)}}{Z_\theta(q_t, d_t)}
= \frac{\pi_0(a_t|q_t, d_t)\, e^{\,\theta^T \Phi(a_t, q_t, d_t)}}{Z_\theta(q_t, d_t)},
$$

where $Z_\theta(q_t, d_t)$ is a state-dependent normalization factor

$$
Z_\theta(q_t, d_t) = \int \pi_0(a_t|q_t, d_t)\, e^{\,\theta^T \Phi(a_t, q_t, d_t)}\, da_t. \qquad (11.44)
$$

We note that most applications of MaxEnt IRL use multi-step trajectories as prime objects, and define the partition function $Z_\theta$ on the space of trajectories. While the first applications of MaxEnt IRL calculated $Z_\theta$ exactly for small discrete state-action spaces, as in Ziebart et al. (2008, 2013), for large or continuous state-action spaces we resort to approximate dynamic programming or other approximation methods. For example, the Relative Entropy IRL approach of Boularias et al. (2011) uses importance sampling from a reference ("background") policy distribution to calculate $Z_\theta$. It is this calculation that poses the main computational bottleneck for applications of MaxEnt/RelEnt IRL methods to large or continuous state-action spaces. With a simple piecewise-quadratic reward such as Eq. (11.40), we can proceed differently: we define state-dependent normalization factors $Z_\theta(q_t, d_t)$ for each time step. Because we trade a path-dependent "global" partition function $Z_\theta$ for a local state-dependent factor $Z_\theta(q_t, d_t)$, we do not need to rely on exact or approximate dynamic programming to calculate this factor. This is similar to the approach of Boularias et al.
(2011) (as it also relies on Relative Entropy minimization), but in our case both the reference distribution $\pi_0(a_t|q_t, d_t)$ and the normalization factor $Z_\theta(q_t, d_t)$ are defined on a single time step, and calculation of $Z_\theta(q_t, d_t)$ amounts to computing the integral (11.44). As we show below, this integral can be calculated analytically with a properly chosen distribution $\pi_0(a_t|q_t, d_t)$. We shall use a mixture of a discrete and a continuous distribution for the reference ("prior") action distribution $\pi_0(a_t|q_t, d_t)$:

$$
\pi_0(a_t|q_t, d_t) = \bar\nu_0\, \delta(a_t) + (1 - \bar\nu_0)\, \tilde\pi_0(a_t|q_t, d_t)\, \mathbb{I}_{a_t > 0}, \qquad (11.45)
$$

where $\delta(x)$ stands for the Dirac delta-function, and $\mathbb{I}_{x>0} = 1$ if $x > 0$ and zero otherwise. The continuous component $\tilde\pi_0(a_t|q_t, d_t)$ is given by a spliced Gaussian distribution

$$
\tilde\pi_0(a_t|q_t, d_t) =
\begin{cases}
\left(1 - \omega_0(q_t, d_t)\right) \varphi_1\!\left(a_t,\, \dfrac{\mu_0 + \gamma_0 d_t}{\beta_0},\, \dfrac{1}{\beta_0}\right) & \text{if } 0 < a_t \le q_t \\[2ex]
\omega_0(q_t, d_t)\, \varphi_2\!\left(a_t,\, \dfrac{\mu_0 + \gamma_0 d_t - \eta_0 p}{\beta_0},\, \dfrac{1}{\beta_0}\right) & \text{if } a_t \ge q_t
\end{cases}
\qquad (11.46)
$$

where $\varphi_1(a_t, \mu_1, \sigma_1^2)$ and $\varphi_2(a_t, \mu_2, \sigma_2^2)$ are probability density functions of two truncated normal distributions defined separately for small and large daily consumption levels, $0 \le a_t \le q_t$ and $a_t \ge q_t$, respectively (in particular, they are both separately normalized to one). The mixing parameter $0 \le \omega_0(q_t, d_t) \le 1$ is determined by the continuity condition at $a_t = q_t$:

$$
\left(1 - \omega_0(q_t, d_t)\right) \varphi_1\!\left(q_t,\, \frac{\mu_0 + \gamma_0 d_t}{\beta_0},\, \frac{1}{\beta_0}\right)
= \omega_0(q_t, d_t)\, \varphi_2\!\left(q_t,\, \frac{\mu_0 + \gamma_0 d_t - \eta_0 p}{\beta_0},\, \frac{1}{\beta_0}\right).
\qquad (11.47)
$$

As this matching condition may involve large values of $q_t$, where the normal distribution would be exponentially small, in practice it is better to use it by taking logarithms of both sides:
$$
\omega_0(q_t, d_t) = \left[\,1 + \exp\left(\log \varphi_2\!\left(q_t,\, \frac{\mu_0 + \gamma_0 d_t - \eta_0 p}{\beta_0},\, \frac{1}{\beta_0}\right) - \log \varphi_1\!\left(q_t,\, \frac{\mu_0 + \gamma_0 d_t}{\beta_0},\, \frac{1}{\beta_0}\right)\right)\right]^{-1}. \qquad (11.48)
$$

The prior mixed-spliced distribution (11.45), albeit represented in terms of simple distributions, leads to potentially quite complex dynamics that make intuitive sense and appear largely consistent with observed patterns of consumption. In particular, note that Eq. (11.46) indicates that large fluctuations $a_t > q_t$ are centered around a smaller mean value $\frac{\mu_0 + \gamma_0 d_t - \eta_0 p}{\beta_0}$ than the mean value $\frac{\mu_0 + \gamma_0 d_t}{\beta_0}$ of smaller fluctuations $0 < a_t \le q_t$. Both a reduction of the mean upon breaching the remaining allowance barrier and a decrease of the mean of each component with time appear quite intuitive in the current context. As will be shown below, the posterior distribution $\pi(a_t|q_t, d_t)$ inherits these properties while also further enriching the potential complexity of the dynamics.⁴ The advantage of using the mixed-spliced distribution (11.45) as a reference distribution $\pi_0(a_t|q_t, d_t)$ is that the state-dependent normalization constant $Z_\theta(q_t, d_t)$ can be evaluated exactly with this choice:

$$
Z_\theta(q_t, d_t) = \bar\nu_0\, e^{\kappa q_t} + (1 - \bar\nu_0)\left(I_1(\theta, q_t, d_t) + I_2(\theta, q_t, d_t)\right),
$$

where

$$
I_1(\theta, q_t, d_t) = \left(1 - \omega_0(q_t, d_t)\right) \sqrt{\frac{\beta_0}{\beta_0 + \beta}}\;
\exp\left(\frac{\left(\mu_0 + \mu + (\gamma_0 + \gamma) d_t\right)^2}{2(\beta_0 + \beta)} - \frac{\left(\mu_0 + \gamma_0 d_t\right)^2}{2\beta_0}\right)
\times \frac{N\!\left(-\frac{\mu_0 + \mu + (\gamma_0 + \gamma) d_t - (\beta_0 + \beta) q_t}{\sqrt{\beta_0 + \beta}}\right) - N\!\left(-\frac{\mu_0 + \mu + (\gamma_0 + \gamma) d_t}{\sqrt{\beta_0 + \beta}}\right)}
{N\!\left(-\frac{\mu_0 + \gamma_0 d_t - \beta_0 q_t}{\sqrt{\beta_0}}\right) - N\!\left(-\frac{\mu_0 + \gamma_0 d_t}{\sqrt{\beta_0}}\right)}
$$

$$
I_2(\theta, q_t, d_t) = \omega_0(q_t, d_t) \sqrt{\frac{\beta_0}{\beta_0 + \beta}}\;
\exp\left(\frac{\left(\mu_0 + \mu - (\eta_0 + \eta) p + (\gamma_0 + \gamma) d_t\right)^2}{2(\beta_0 + \beta)} - \frac{\left(\mu_0 - \eta_0 p + \gamma_0 d_t\right)^2}{2\beta_0} + \eta p q_t\right)
\times \frac{1 - N\!\left(-\frac{\mu_0 + \mu - (\eta_0 + \eta) p + (\gamma_0 + \gamma) d_t - (\beta_0 + \beta) q_t}{\sqrt{\beta_0 + \beta}}\right)}
{1 - N\!\left(-\frac{\mu_0 - \eta_0 p + \gamma_0 d_t - \beta_0 q_t}{\sqrt{\beta_0}}\right)},
$$

where $N(x)$ is the cumulative normal probability distribution.

⁴ In particular, it promotes a static mixing coefficient $\nu_0$ to a state- and time-dependent variable $\nu_t = \nu(q_t, d_t)$.
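As a numerically explicit illustration of the continuity condition, the sketch below computes the mixing weight $\omega_0(q_t, d_t)$ by evaluating the two truncated-normal densities at the matching point $a_t = q_t$. All parameter values and helper names are hypothetical; for extreme parameters the tail probability can underflow, in which case a log-space evaluation in the spirit of Eq. (11.48) is preferable:

```python
import math

def _norm_pdf(x, m, s):
    z = (x - m) / s
    return math.exp(-0.5 * z * z) / (s * math.sqrt(2 * math.pi))

def _norm_cdf(x, m, s):
    return 0.5 * (1.0 + math.erf((x - m) / (s * math.sqrt(2.0))))

def omega0(q_t, d_t, mu0=1.0, gamma0=0.1, eta0=2.0, beta0=0.5, p=3.0):
    """Mixing weight from the continuity condition at a_t = q_t.

    phi_1 is a normal truncated to (0, q_t]; phi_2 is truncated to
    [q_t, inf).  Parameter values are arbitrary illustrations.
    Note: for extreme parameters the normalizing tails may underflow;
    a log-space version (cf. Eq. (11.48)) is then preferable.
    """
    s = math.sqrt(1.0 / beta0)
    m1 = (mu0 + gamma0 * d_t) / beta0             # mean below the quota
    m2 = (mu0 + gamma0 * d_t - eta0 * p) / beta0  # mean above the quota
    # Truncated-normal densities evaluated at the matching point a_t = q_t:
    phi1 = _norm_pdf(q_t, m1, s) / (_norm_cdf(q_t, m1, s) - _norm_cdf(0.0, m1, s))
    phi2 = _norm_pdf(q_t, m2, s) / (1.0 - _norm_cdf(q_t, m2, s))
    # (1 - w) * phi1 = w * phi2  =>  w = phi1 / (phi1 + phi2)
    return phi1 / (phi1 + phi2)
```

Solving the continuity condition for $\omega_0$ gives $\omega_0 = \varphi_1/(\varphi_1 + \varphi_2)$, which is what the last line computes; the logistic form (11.48) is the same expression written through log-densities.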
Probabilities of $T$-step paths $\tau_i = \left\{a_t^i, q_t^i, d_t^i\right\}_{t=0}^{T}$ (where $i$ enumerates different user-paths) are obtained as products of single-step probabilities:

$$
P(\tau_i) = \prod_{(a_t, q_t, d_t) \in \tau_i} \frac{\pi_0(a_t|q_t, d_t)\, e^{\,\theta^T \Phi(a_t, q_t, d_t)}}{Z_\theta(q_t, d_t)}
\sim \exp\left(\theta^T \Phi^{(\tau_i)}\right). \qquad (11.51)
$$

Here $\theta^T \Phi^{(\tau_i)} = \sum_{k=0}^{K-1} \theta_k\, \Phi_k^{(\tau_i)}$, where

$$
\Phi_k^{(\tau_i)} = \sum_{(a_t, q_t, d_t) \in \tau_i} \Phi_k(a_t, q_t, d_t)
$$

are cumulative feature counts along the observed path $\tau_i$. Therefore, the total path probability in our model is exponential in the total reward along a trajectory, as in the "classical" MaxEnt IRL approach (Ziebart et al. 2008), while the pre-exponential factor is computed differently, as we operate with one-step rather than path probabilities. Parameters defining the exponential path probability distribution (11.51) can be estimated by the standard maximum likelihood estimation (MLE) method. Assume we have $N$ historically observed single-cycle consumption paths, and assume these path probabilities are independent.⁵ The total likelihood of observing these data is

$$
L(\theta) = \prod_{i=1}^{N} \prod_{(a_t, q_t, d_t) \in \tau_i} \frac{\pi_0(a_t|q_t, d_t)\, e^{\,\theta^T \Phi(a_t, q_t, d_t)}}{Z_\theta(q_t, d_t)}.
$$

The negative log-likelihood is therefore, after omitting the term $\log \pi_0(a_t|q_t, d_t)$ that does not depend on $\theta$,⁶ and rescaling by $1/N$,

$$
-\frac{1}{N} \log L(\theta) = \frac{1}{N} \sum_{i=1}^{N} \left(\sum_{(q_t, d_t) \in \tau_i} \log Z_\theta(q_t, d_t) - \sum_{(a_t, q_t, d_t) \in \tau_i} \theta^T \Phi(a_t, q_t, d_t)\right).
$$

⁵ A more complex case of co-dependencies between rewards for individual customers can be considered, but we will not pursue this approach here.

⁶ Note that $Z_\theta(q_t, d_t)$ still depends on $\pi_0(a_t|q_t, d_t)$; see Eq. (11.44).
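The rescaled negative log-likelihood can be assembled mechanically once $Z_\theta$ and the features $\Phi$ are available as functions. The sketch below is a minimal hypothetical illustration with user-supplied callables, not the book's implementation:

```python
import numpy as np

def neg_log_likelihood(theta, paths, Z_theta, Phi):
    """Rescaled negative log-likelihood:

        (1/N) * sum_i [ sum_t log Z_theta(q_t, d_t) - theta . Phi(a_t, q_t, d_t) ]

    theta   : parameter vector (numpy array of length K)
    paths   : list of trajectories, each a list of (a_t, q_t, d_t) triples
    Z_theta : callable (theta, q, d) -> normalization factor Z_theta(q, d)
    Phi     : callable (a, q, d) -> feature vector of length K
    """
    total = 0.0
    for path in paths:
        for (a, q, d) in path:
            total += np.log(Z_theta(theta, q, d)) - theta @ Phi(a, q, d)
    return total / len(paths)
```

Since $\log Z_\theta$ is convex in $\theta$ and the second term is linear, this objective is convex and can be minimized with any off-the-shelf convex solver, as noted above.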
American Mathematical Society

Rationality of the Folsom-Ono grid

by P. Guerzhoy

Proc. Amer. Math. Soc. 137 (2009), 1569-1577
DOI: https://doi.org/10.1090/S0002-9939-08-09681-0
Published electronically: December 11, 2008

In a recent paper Folsom and Ono constructed a grid of Poincaré series of weights $3/2$ and $1/2$. They conjectured that the coefficients of the holomorphic parts of these series are rational integers. We prove that these coefficients are indeed rational numbers with bounded denominators.

References

• Basmaji, Jacques, Ein Algorithmus zur Berechnung von Hecke-Operatoren und Anwendung auf modulare Kurven, Dissertation, Essen (1996).
• Kathrin Bringmann and Ken Ono, The $f(q)$ mock theta function conjecture and partition ranks, Invent. Math. 165 (2006), no. 2, 243–266. MR 2231957, DOI 10.1007/s00222-005-0493-5
• Kathrin Bringmann and Ken Ono, Arithmetic properties of coefficients of half-integral weight Maass-Poincaré series, Math. Ann. 337 (2007), no. 3, 591–612. MR 2274544, DOI 10.1007/
• Jan Hendrik Bruinier and Jens Funke, On two geometric theta lifts, Duke Math. J. 125 (2004), no. 1, 45–90. MR 2097357, DOI 10.1215/S0012-7094-04-12513-8
• Bruinier, Jan H.; Ono, Ken, Heegner divisors, $L$-functions and harmonic weak Maass forms, preprint.
• Bruinier, Jan H.; Ono, Ken; Rhoades, Robert C., Differential operators for harmonic weak Maass forms and the vanishing of Hecke eigenvalues, Math. Ann. 342 (2008), no. 3, 673–693.
• H. Cohen and J. Oesterlé, Dimensions des espaces de formes modulaires, Modular functions of one variable, VI (Proc. Second Internat. Conf., Univ. Bonn, Bonn, 1976), Lecture Notes in Math., Vol. 627, Springer, Berlin, 1977, pp. 69–78 (French). MR 0472703
• Duke, W.; Jenkins, Paul, On the zeros and coefficients of certain weakly holomorphic modular forms, Pure Appl. Math. Q. 4 (2008), no. 4, part 1, 1327–1340.
• Amanda Folsom and Ken Ono, Duality involving the mock theta function $f(q)$, J. Lond. Math. Soc. (2) 77 (2008), no. 2, 320–334. MR 2400394, DOI 10.1112/jlms/jdm119
• Sharon Anne Garthwaite, Vector-valued Maass-Poincaré series, Proc. Amer. Math. Soc. 136 (2008), no. 2, 427–436. MR 2358480, DOI 10.1090/S0002-9939-07-08961-7
• Guerzhoy, P., On weak harmonic Maass-modular grids of even integral weights, Math. Res. Lett., to appear.
• Ken Ono, The web of modularity: arithmetic of the coefficients of modular forms and $q$-series, CBMS Regional Conference Series in Mathematics, vol. 102, Published for the Conference Board of the Mathematical Sciences, Washington, DC; by the American Mathematical Society, Providence, RI, 2004. MR 2020489
• Ono, Ken, A mock theta function for the Delta-function, Proceedings of the 2007 Integers Conference, accepted for publication.
• Goro Shimura, Introduction to the arithmetic theory of automorphic functions, Publications of the Mathematical Society of Japan, vol. 11, Princeton University Press, Princeton, NJ, 1994. Reprint of the 1971 original; Kanô Memorial Lectures, 1. MR 1291394
• Don Zagier, Traces of singular moduli, Motives, polylogarithms and Hodge theory, Part I (Irvine, CA, 1998), Int. Press Lect. Ser., vol. 3, Int. Press, Somerville, MA, 2002, pp. 211–244. MR 1977587
• S. P. Zwegers, Mock $\theta$-functions and real analytic modular forms, $q$-series with applications to combinatorics, number theory, and physics (Urbana, IL, 2000), Contemp. Math., vol. 291, Amer. Math. Soc., Providence, RI, 2001, pp. 269–277. MR 1874536, DOI 10.1090/conm/291/04907

Similar Articles

• Retrieve articles in Proceedings of the American Mathematical Society with MSC (2000): 11F37
• Retrieve articles in all journals with MSC (2000): 11F37

Bibliographic Information

• P. Guerzhoy
• Affiliation: Department of Mathematics, University of Hawaii, 2565 McCarthy Mall, Honolulu, Hawaii 96822-2273
• Email: pavel@math.hawaii.edu
• Received by editor(s): June 23, 2008
• Received by editor(s) in revised form: June 28, 2008
• Published electronically: December 11, 2008
• Additional Notes: The author was supported by NSF grant DMS-0700933
• Communicated by: Ken Ono
• © Copyright 2008 American Mathematical Society. The copyright for this article reverts to public domain 28 years after publication.
• Journal: Proc. Amer. Math. Soc. 137 (2009), 1569-1577
• MSC (2000): Primary 11F37
• DOI: https://doi.org/10.1090/S0002-9939-08-09681-0
• MathSciNet review: 2470814
Good departments for topology/geometry

I'm starting to do some research into what grad schools I should apply to this fall for a pure math PhD. I'll be finishing my undergrad in May 2015 with a major in math and minor in physics. I'm not 100% sure what area I want to go in to, but I'm leaning more towards the topology and geometry side of things. I'm currently doing some work in topology (knot theory) and really enjoy it. I am also interested in geometry/topology research that is related to physics. As an example, I think UCSB may be a good fit because they have a research training group in geometry, topology, and physics and they have that connection with Microsoft Research Station Q. Stony Brook is another school I've been looking at. Here are my stats in case it is relevant:

School: Unknown Midwest state school - doesn't grant math PhDs
Major: Math
Minor: Physics
Overall GPA: 3.75
Major GPA: 4.0
GRE Verbal: 168, Quant: 162, Writing: 4.0
MGRE: ??
Research: Currently working on project in knot theory.
Courses: No grad courses since we don't offer PhD. Typical undergrad prep (e.g. blue Rudin, Hoffman & Kunze Linear Algebra, etc).

I'd like to know if any of you have some suggestions of math departments to check out. I'm having a hard time finding places that are a good fit, mainly because I don't really know how to tell if a dept's faculty in this area are very strong or not. Thanks.

Re: Good departments for topology/geometry

UC Irvine will be a good fit for you.

Re: Good departments for topology/geometry

You might want to look at these schools.
http://grad-schools.usnews.rankingsandr ... y-rankings
http://grad-schools.usnews.rankingsandr ... y-rankings

Re: Good departments for topology/geometry

Given your interests, I'd say definitely apply to Stony Brook. They are very strong in differential geometry.
They recently hired Simon Donaldson, who along with Xiuxiong Chen and Song Sun (also both at SB) recently proved a big result relating existence of Kähler-Einstein metrics on Fano manifolds to K-stability (known in the field as the YTD conjecture). They also have a lot of interaction between their math and physics departments, if that interests you. Most of their topology has to do with stuff like 4-manifolds as well as symplectic topology; not sure if they have many people in knot theory.

Last edited by dh363 on Wed Jul 09, 2014 8:04 pm, edited 1 time in total.

Re: Good departments for topology/geometry

dh363 that is awesome. Exactly what I was looking for. It is too bad that it is so difficult to get accepted at Stony Brook! I'm still going to apply though, you never know what might happen. Do you know of any other schools that are similar to SB in terms of their research interests?

Re: Good departments for topology/geometry

University of Texas also has lots of good topology, especially in low dimensional topology. There are also quite a few professors who work close to physics, especially string theory. Also dh363... do you mean symplectic topology?
Y-Intercept - Meaning, Examples

As a student, you are continually working to keep up in school so that you don't get overwhelmed by new topics. As a guardian, you are continually looking for ways to help your children succeed in academics and beyond. It's especially important to keep up in math, because its concepts continually build on one another. If you don't understand a particular lesson, it can haunt you in future lessons. Understanding y-intercepts is a perfect example of a concept that you will use in math over and over again.

Let's look at the foundational ideas behind the y-intercept and walk through some tips for working with it. Whether you're a math whiz or a novice, this introduction gives you everything you need to know, and the tools you need, to tackle linear equations. Let's get into it!

What Is the Y-intercept?

To fully grasp the y-intercept, let's picture a coordinate plane. In a coordinate plane, two perpendicular lines intersect at a point called the origin. This point is where the x-axis and y-axis meet, so both the x value and the y value are 0. The coordinates are written like this: (0,0). The x-axis is the horizontal line running across, and the y-axis is the vertical line running up and down. Each axis is numbered so that we can specify points on the plane. The numbers on the x-axis increase as we move to the right of the origin, and the values on the y-axis increase as we move up from the origin. Now that we have reviewed the coordinate plane, we can define the y-intercept.

Meaning of the Y-Intercept

The y-intercept can be thought of as the starting point of a linear equation. It is the y-coordinate at which the graph of the equation crosses the y-axis. In other words, it is the value that y takes when x equals zero.
Next, we will illustrate with a real-life example.

Example of the Y-Intercept

Let's suppose you are driving on a straight road with one lane going in each direction. If you start at point 0, the spot where you are sitting in your car right now, then your y-intercept is 0, since you haven't moved yet! As you drive down the road and pick up speed, your y-intercept rises until it reaches some larger value when you arrive at the end of the road or stop to make a turn. So, while the y-intercept might not seem particularly important at first glance, it can offer insight into how things change over time and space as we move through our world. If you're ever puzzled by this concept, remember that almost everything starts somewhere, even your journey down that long stretch of road!

How to Locate the y-intercept of a Line

Let's think about how we can find this value. To help with the procedure, we will outline a few steps, then work through some examples to illustrate the process.

Steps to Locate the y-intercept

The steps to find where a line crosses the y-axis are as follows:

1. Write the equation of the line in slope-intercept form (we will expand on this later in this tutorial), so it looks like this: y = mx + b
2. Substitute 0 for x
3. Solve for y

Now that we have gone over the steps, let's see how this method works with an example equation.

Example 1

Find the y-intercept of the line described by the equation: y = 2x + 3

Here we can substitute 0 for x and solve for y to find that the y-intercept is 3. Therefore, the line crosses the y-axis at the point (0,3).

Example 2

As another example, consider the equation y = -5x + 2. If we again substitute 0 for x and solve for y, we find that the y-intercept is 2.
Consequently, the line crosses the y-axis at the point (0,2).

What Is the Slope-Intercept Form?

The slope-intercept form is a way of writing linear equations. It is the most common form used to represent a straight line in mathematical and scientific applications. The slope-intercept form of a line is y = mx + b. In this equation, m is the slope of the line and b is the y-intercept. As we saw in the previous section, the y-intercept is the point where the line crosses the y-axis. The slope is a measure of how steep the line is: the change in y for each unit change in x.

Now that we have reviewed the slope-intercept form, let's see how we can use it to find the y-intercept of a line or a graph.

Find the y-intercept of the line described by the equation: y = -2x + 5

Here we can see that m = -2 and b = 5. Thus, the y-intercept is 5, and the line crosses the y-axis at the point (0,5). We can take it a step further and illustrate the slope of the line. From the equation, we know the slope is -2. Substitute 1 for x and evaluate:

y = (-2*1) + 5
y = 3

This tells us that the next point on the line is (1,3): whenever x changes by 1 unit, y changes by -2 units.

Grade Potential Can Support You with the y-intercept

You will encounter the XY axis time and time again during your math and science studies. The concepts get more difficult as you advance from solving linear equations to working with quadratic functions, so the time to solidify your grasp of y-intercepts is now, before you fall behind. Grade Potential provides expert tutors who will help you practice finding the y-intercept. Their personalized explanations and practice problems will make a real difference in your test results. Whenever you feel stuck or lost, Grade Potential is here to help!
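The two procedures above (substituting x = 0 into the equation, or reading b directly from slope-intercept form) can be sketched in a couple of Python helpers; the function names are just illustrative:

```python
def y_at(x, m, b):
    """Evaluate y = m*x + b for a line given in slope-intercept form."""
    return m * x + b

def y_intercept(m, b):
    """The y-intercept is the value of y when x = 0 (which is just b)."""
    return y_at(0, m, b)
```

For the worked example above, `y_intercept(-2, 5)` gives 5 and `y_at(1, -2, 5)` gives 3, the next point on the line.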
Read the following text and answer the following questions on the basis of the same:

Rohan went to a cricket stadium which is rectangular in shape. A circular green ground for playing cricket is inscribed in a rectangular stadium of breadth 100 m and length 250 m. Find the perimeter of the circular cricket field (π = 22/7).
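A quick sketch of the arithmetic, under the assumption (implied by "inscribed") that the circle's diameter is limited by the shorter side, the 100 m breadth:

```python
from fractions import Fraction

pi = Fraction(22, 7)       # value of pi specified in the problem
diameter = 100             # metres, limited by the rectangle's breadth
perimeter = pi * diameter  # circumference = pi * d = 2200/7 m
```

This gives a perimeter of 2200/7 m, roughly 314.29 m.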
Mastering the Art of Finding Acceleration with Two Velocities: A Comprehensive Guide Acceleration is a fundamental concept in physics that describes the rate of change in an object’s velocity over time. To determine the acceleration of an object, you can use the relationship between the initial and final velocities, as well as the time interval during which the change in velocity occurred. In this comprehensive guide, we will delve into the intricacies of calculating acceleration using two velocities, providing you with a thorough understanding of the underlying principles and practical applications. Understanding the Acceleration Formula The primary formula used to calculate acceleration with two velocities is: a = Δv / Δt – a is the acceleration (in units of m/s^2) – Δv is the change in velocity (in units of m/s) – Δt is the change in time (in units of s) This formula represents the average acceleration over a given time interval, calculated by dividing the change in velocity by the change in time. It’s important to ensure that the units of velocity and time are consistent when using this formula. Calculating Acceleration Using the Δv/Δt Formula Let’s consider a practical example to illustrate the application of the Δv/Δt formula: Suppose an object has an initial velocity of 5 m/s and a final velocity of 15 m/s, and the time interval between these two velocities is 10 seconds. To calculate the acceleration, we can use the following steps: 1. Determine the change in velocity: Δv = v_final - v_initial Δv = 15 m/s - 5 m/s = 10 m/s 2. Determine the change in time: Δt = t_final - t_initial Δt = 10 s - 0 s = 10 s 3. Calculate the acceleration using the Δv/Δt formula: a = Δv / Δt a = 10 m/s / 10 s = 1 m/s^2 Therefore, the object is accelerating at a rate of 1 m/s^2. 
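As a quick check of the worked example above (initial velocity 5 m/s, final velocity 15 m/s, over a 10-second interval), the same arithmetic in Python:

```python
delta_v = 15.0 - 5.0    # change in velocity, m/s
delta_t = 10.0 - 0.0    # change in time, s
a = delta_v / delta_t   # average acceleration, m/s^2
```

This reproduces the result of 1 m/s^2 computed step by step above.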
Alternative Acceleration Formula: [v(f) – v(i)] / [t(f) – t(i)] Another way to calculate acceleration with two velocities is by using the formula: a = [v(f) - v(i)] / [t(f) - t(i)] – v(f) is the final velocity – v(i) is the initial velocity – t(f) is the final time – t(i) is the initial time This formula calculates the acceleration by finding the difference between the final and initial velocities and dividing it by the difference between the final and initial times. Let’s apply this formula to the same example: 1. Determine the final and initial velocities: v(f) = 15 m/s v(i) = 5 m/s 2. Determine the final and initial times: t(f) = 10 s t(i) = 0 s 3. Calculate the acceleration using the formula: a = [v(f) - v(i)] / [t(f) - t(i)] a = [15 m/s - 5 m/s] / [10 s - 0 s] a = 10 m/s / 10 s = 1 m/s^2 The result is the same as the previous example, confirming that the object is accelerating at a rate of 1 m/s^2. Considering the Direction of Acceleration It’s important to note that the acceleration formulas discussed so far do not specify the direction of the acceleration. Acceleration is a vector quantity, meaning it has both magnitude and direction. If the direction of acceleration is relevant to your analysis, you must consider it separately. For example, if an object is moving in the positive x-direction and its velocity increases, the acceleration would be in the positive x-direction. Conversely, if the velocity decreases, the acceleration would be in the negative x-direction. To incorporate the direction of acceleration, you can use the appropriate sign convention (positive or negative) when calculating the change in velocity (Δv) or the difference between the final and initial velocities [v(f) – v(i)]. Practical Applications and Examples Calculating acceleration with two velocities has numerous practical applications in various fields, including: 1. 
Kinematics: Analyzing the motion of objects, such as the acceleration of a car during a race or the acceleration of a falling object due to gravity. 2. Dynamics: Studying the forces acting on an object and their relationship to the object’s acceleration, as in the case of Newton’s second law of motion. 3. Engineering: Designing and optimizing the performance of mechanical systems, such as the acceleration of a rocket or the braking system of a vehicle. 4. Sports Science: Evaluating the performance of athletes, such as the acceleration of a sprinter or the deceleration of a basketball player during a jump shot. Here’s an example problem to illustrate the practical application of the acceleration formulas: Problem: A car accelerates from a stop (0 m/s) to a speed of 20 m/s in 5 seconds. Calculate the acceleration of the car. 1. Determine the initial and final velocities: v_initial = 0 m/s v_final = 20 m/s 2. Determine the change in time: Δt = t_final - t_initial Δt = 5 s - 0 s = 5 s 3. Calculate the acceleration using the Δv/Δt formula: a = Δv / Δt a = (20 m/s - 0 m/s) / 5 s a = 4 m/s^2 Therefore, the car is accelerating at a rate of 4 m/s^2. Mastering the art of finding acceleration with two velocities is a crucial skill in physics and various engineering disciplines. By understanding the underlying principles and applying the appropriate formulas, you can accurately determine the acceleration of an object and gain valuable insights into its motion and the forces acting upon it. Remember, the key to success in this topic is to practice solving a variety of problems, familiarize yourself with the different formulas, and develop a strong conceptual understanding of the relationship between velocity, time, and acceleration.
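The car example, together with the sign convention discussed earlier, can be checked the same way (the function name is hypothetical):

```python
def acceleration(v_i, v_f, t_i, t_f):
    """a = [v(f) - v(i)] / [t(f) - t(i)]; the sign of the result
    carries the direction along the chosen axis."""
    dt = t_f - t_i
    if dt <= 0:
        raise ValueError("t_f must be later than t_i")
    return (v_f - v_i) / dt

# Worked example from the text: 0 -> 20 m/s in 5 s
car = acceleration(0.0, 20.0, 0.0, 5.0)    # 4.0 m/s^2 (speeding up)
# Braking over 4 s: the negative sign indicates deceleration
brake = acceleration(20.0, 0.0, 0.0, 4.0)  # -5.0 m/s^2 (slowing down)
```

Keeping velocities signed means the same function handles both acceleration and deceleration without a special case.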
Devising Numerical Methods for Simulation, Identification and Control of Fractional Order Processes D., Seshu Kumar (2018) Devising Numerical Methods for Simulation, Identification and Control of Fractional Order Processes. PhD thesis. This thesis is devoted to developing novel numerical methods based on triangular functions for the simulation, identification and control of processes demonstrating fractional order dynamics. Fractional calculus (FC) is an active branch of mathematical analysis that deals with the theory of differentiation and integration of arbitrary order. It has been emerging as an indubitably crucial subject for applied mathematicians, applied scientists and engineers seeking to better comprehend numerous physical processes in diverse applied areas of science and engineering. FC has broad applications in modelling and control. Abel's integral equation, one of the very first integral equations to be seriously studied (investigated by Niels Henrik Abel in 1823 and by Liouville in 1832 as a fractional power of the operator of anti-derivation), is encountered in inversion of seismic travel times, stereology of spherical particles, spectroscopy of gas discharges (more generally: "tomography" of cylindrically or spherically symmetric objects such as globular clusters of stars), determination of the refractive index of optical fibers, and electrochemistry. The physical process models involving Abel's integral equations do not have closed form solutions; thus, their numerical solution has been a subject of study for pure and applied mathematicians. In this thesis, we devise a simple and efficient numerical algorithm using triangular functions, which are the foundation of most of the numerical methods developed in the thesis. The proposed method is rigorously tested on examples as well as applications of Abel's integral equations, and the obtained results are compared with the results of existing methods.
It is found that the proposed method exhibits superior performance over those existing methods. Some physical processes, such as magnetic field induction in dielectric media, anomalous diffusion, micro- and nanotechnology, velocity fluctuation of a hard core Brownian particle, transfer of dust and fog particles in the atmosphere, viscoelasticity, optimal control, heat transfer, and thermodynamic and electrical conduction of polymers, need to be described by fractional order integro-differential equations (FIDEs). Abel's integral equations cannot describe the above mentioned processes (which require modelling by fractional order integro-differential equations); hence, FIDEs have to be treated independently. One chapter is devoted to the study of the existence of a unique solution to FIDEs, as well as their numerical solution. Similar to Abel's integral equations, most FIDEs have no known solutions, and the process models representing the above mentioned processes possess no exact solutions. Without the availability of exact or numerical solutions, it is very problematic to gain insight into those real processes exhibiting fractional order dynamics. Based on the triangular functions, an efficient numerical method is developed and tested on a wide variety of FIDEs and applications of FIDEs. A comparison study is carried out to highlight that the proposed method works better than some of the existing methods used for the same purpose. It has been proved that numerous physical processes in various applied areas of science and engineering, such as electrochemistry, physics, geology, astrophysics, seismic wave analysis, sound wave propagation, psychology and life sciences, biology, etc., can be better described by mathematical models involving stiff or non-stiff fractional differential equations (FDEs) or fractional order differential-algebraic equations (FDAEs).
It is surprisingly noticed that the most extensively employed semi-analytical techniques, such as the Adomian decomposition method, the homotopy analysis method, the fractional differential transform method, etc., are not able to provide stable approximations to stiff FDEs or FDAEs, at least in the neighborhood of the initial time point 0. To fill this gap, a triangular functions based numerical method, which owns a larger convergence region compared to the semi-analytical methods, is proposed in this thesis. The proposed method is found to be far superior to most numerical methods reported in the literature. Formulating mathematical models involving operators of fractional calculus for real world problems is not an easy task. The geometric and physical interpretation of the operators of fractional calculus is not as distinct as that of integer calculus; thus, it is difficult to model real systems as fractional order systems directly based on mechanism analysis. Therefore, system identification is a practical way to model a fractional order system. However, the existing system identification methods (developed for integer order system identification) cannot be directly applied to estimate parameters of fractional order mathematical models from experimental or simulated data. Therefore, an arbitrary order system identification method is formulated to estimate parameters of linear and nonlinear fractional as well as integer order mathematical models from simulated data. The obtained results are compared with system identification methods devised based on piecewise constant basis functions, such as Haar wavelets and block pulse functions, and on orthogonal polynomials. The proposed method yields better results. In addition to modelling, FC has significant applications in control theory.
Theoretical and experimental results have shown that the fractional order PID controller can control fractional order systems, as well as some integer order systems, better than the classical PID controller. From this perspective, a robust fractional order PID controller tuning method is proposed using triangular strip operational matrices. The proposed method of control system design is implemented in heating furnace temperature control, an automatic voltage regulator system and some integer and fractional order process models. Fractional PIλ, fractional PDμ, PIλDμDμ2, fractional PID with a fractional order filter and the series form of the fractional PID controller are designed as optimal controllers using the triangular strip operational matrix based control design method. The performance of the proposed fractional order controller tuning technique is found to be better than that of some fractional order controller tuning methodologies reported in the literature. Triangular strip operational matrices, proposed from the perspective of mathematics (for the solution of fractional differential and partial differential equations), find their elegant application in the presently proposed method of control system design.
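The thesis builds its methods on triangular functions, which are not reproduced here. As a minimal, generic illustration of numerical fractional differentiation (a common alternative scheme, not the thesis's method), the Grünwald–Letnikov approximation replaces the order-α derivative by a weighted sum over the function's history; the function name below is my own:

```python
import math

def gl_derivative(f, alpha, t, h=1e-3):
    """Grunwald-Letnikov approximation of the order-alpha derivative
    of f at t (lower terminal 0):
        D^alpha f(t) ~ h**(-alpha) * sum_k w_k * f(t - k*h),
    with weights w_0 = 1 and w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
    n = int(round(t / h))          # history points back to the lower terminal 0
    w, total = 1.0, f(t)
    for k in range(1, n + 1):
        w *= 1.0 - (alpha + 1.0) / k
        total += w * f(t - k * h)
    return total / h ** alpha

# alpha = 1 recovers the ordinary backward difference: d/dt [t] = 1
print(gl_derivative(lambda t: t, 1.0, 1.0))   # ~1.0
```

For integer α the weights terminate (e.g. for α = 1 only w_0 = 1 and w_1 = -1 are nonzero), so the scheme collapses to the classical finite difference; for fractional α the whole history contributes, which is exactly the nonlocality that makes fractional operators expensive to compute.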
What Is Star Math? Star Math is a customized math test for students to take on a computer. The test is designed for students in grades 1 through 12 (also for students in kindergarten who have basic reading and math skills; students must be assigned to a grade from K–12 in order to see Star Math when they log in). This computer-adaptive test chooses each question from a large pool of test items, making subsequent questions more or less difficult than the prior question depending on whether the student answered the prior question correctly. The pool of test items is large, and the software tracks which questions a student has already seen, so a specific question will not be repeated for a student within a 120-day window. Teachers can use Math Reports to determine the math level of each student and to measure growth. Students can finish a Star Math test in about 20 minutes.
Direct numerical simulation of a breaking inertia-gravity wave S. Remmler, M.D. Fruman, S. Hickel (2013) Journal of Fluid Mechanics 722: 424-436. doi: 10.1017/jfm.2013.108 We have performed fully resolved three-dimensional numerical simulations of a statically unstable monochromatic inertia-gravity wave using the Boussinesq equations on an f-plane with constant stratification. The chosen parameters represent a gravity wave with almost vertical direction of propagation and a wavelength of 3 km breaking in the middle atmosphere. We initialized the simulation with a statically unstable gravity wave perturbed by its leading transverse normal mode and the leading instability modes of the time-dependent wave breaking in a two-dimensional space. The wave was simulated for approximately 16 h, which is twice the wave period. After the first breaking triggered by the imposed perturbation, two secondary breaking events are observed. Similarities and differences between the three-dimensional and previous two-dimensional solutions of the problem and effects of domain size and initial perturbations are discussed. Left: Computational domain in the rotated coordinate system x,y,z. The earth coordinates are denoted as x',y',z'. c[p] and c[g] indicate the phase and group velocity. Right: Initial condition with secondary singular vector perturbation. Contours of buoyancy in red and blue, and an iso-surface at b = 0 showing the initial perturbation in green. Temporal evolution of the first three breaking events. Background: plane at y = 400 m coloured by buoyancy; foreground: iso-surface of Q = 0.004 s^−2, indicating turbulent vortices. Time series for the non-dimensional amplitude of the primary wave and total energy dissipation for different secondary perturbations and domain sizes.
Direct Fourier Transform# Functions used to compute the discretised direct Fourier transform (DFT) for an ideal interferometer. The DFT for an ideal interferometer is defined as \[V(u,v,w) = \int B(l,m) e^{-2\pi i \left( ul + vm + w(n-1)\right)} \frac{dl dm}{n}\] where \(u,v,w\) are the data space coordinates at which the visibilities \(V\) have been obtained. The \(l,m,n\) are signal space coordinates at which we wish to reconstruct the signal \(B\). Note that the signal corresponds to the brightness matrix and not the Stokes parameters. We adopt the convention where we absorb the fixed coordinate \(n\) in the denominator into the image. Note that the data space coordinates have an implicit dependence on frequency and time and that the image has an implicit dependence on frequency. The discretised form of the DFT can be written as \[V(u,v,w) = \sum_s e^{-2 \pi i (u l_s + v m_s + w (n_s - 1))} \cdot B_s\] where \(s\) labels the source (or pixel) location. If only a single correlation is present, \(B = I\), and this can be cast into a matrix equation as follows \[V = R I\] where \(R\) is the operator that maps an image to visibility space. This mapping is implemented by the im_to_vis() function. If multiple correlations are present then each one is mapped to its corresponding visibility. An imaging algorithm also requires the adjoint, denoted \(R^\dagger\), which is simply the complex conjugate transpose of \(R\). The dirty image is obtained by applying the adjoint operator to the visibilities \[I^D = R^\dagger V\] This is implemented by the vis_to_im() function. Note that an imaging algorithm using these operators will actually reconstruct \(\frac{I}{n}\) but that it is trivial to obtain \(I\) since \(n\) is known at each location in the image.
im_to_vis(image, uvw, lm, frequency[, ...]) Computes the discrete image to visibility mapping of an ideal interferometer: vis_to_im(vis, uvw, lm, frequency, flags[, ...]) Computes visibility to image mapping of an ideal interferometer: africanus.dft.im_to_vis(image, uvw, lm, frequency, convention='fourier', dtype=None)[source]# Computes the discrete image to visibility mapping of an ideal interferometer: \[{\Large \sum_s e^{-2 \pi i (u l_s + v m_s + w (n_s - 1))} \cdot I_s }\] image of shape (source, chan, corr) The brightness matrix in each pixel (flattened 2D array per channel and corr). Note: not Stokes terms. uvw coordinates of shape (row, 3) with u, v and w components in the last dimension. lm coordinates of shape (source, 2) with l and m components in the last dimension. frequencies of shape (chan,) convention {‘fourier’, ‘casa’} Uses the \(e^{-2 \pi \mathit{i}}\) sign convention if fourier and \(e^{2 \pi \mathit{i}}\) if casa. dtype np.dtype, optional Datatype of result. Should be either np.complex64 or np.complex128. If None, numpy.result_type() is used to infer the data type from the inputs. complex of shape (row, chan, corr) africanus.dft.vis_to_im(vis, uvw, lm, frequency, flags, convention='fourier', dtype=None)[source]# Computes visibility to image mapping of an ideal interferometer: \[{\Large \sum_k e^{ 2 \pi i (u_k l + v_k m + w_k (n - 1))} \cdot V_k}\] visibilities of shape (row, chan, corr) Visibilities corresponding to brightness terms. Note the dirty images produced do not necessarily correspond to Stokes terms and need to be converted. uvw coordinates of shape (row, 3) with u, v and w components in the last dimension. lm coordinates of shape (source, 2) with l and m components in the last dimension. frequencies of shape (chan,) flags Boolean array of shape (row, chan, corr) Note that if one correlation is flagged we discard all of them otherwise we end up irretrievably mixing Stokes terms.
convention {‘fourier’, ‘casa’} Uses the \(e^{-2 \pi \mathit{i}}\) sign convention if fourier and \(e^{2 \pi \mathit{i}}\) if casa. dtype np.dtype, optional Datatype of result. Should be either np.float32 or np.float64. If None, numpy.result_type() is used to infer the data type from the inputs. float of shape (source, chan, corr) im_to_vis(image, uvw, lm, frequency[, ...]) Computes the discrete image to visibility mapping of an ideal interferometer: vis_to_im(vis, uvw, lm, frequency, flags[, ...]) Computes visibility to image mapping of an ideal interferometer: africanus.dft.dask.im_to_vis(image, uvw, lm, frequency, convention='fourier', dtype=numpy.complex128)[source]# Computes the discrete image to visibility mapping of an ideal interferometer: \[{\Large \sum_s e^{-2 \pi i (u l_s + v m_s + w (n_s - 1))} \cdot I_s }\] image of shape (source, chan, corr) The brightness matrix in each pixel (flattened 2D array per channel and corr). Note: not Stokes terms. uvw coordinates of shape (row, 3) with u, v and w components in the last dimension. lm coordinates of shape (source, 2) with l and m components in the last dimension. frequencies of shape (chan,) convention {‘fourier’, ‘casa’} Uses the \(e^{-2 \pi \mathit{i}}\) sign convention if fourier and \(e^{2 \pi \mathit{i}}\) if casa. dtype np.dtype, optional Datatype of result. Should be either np.complex64 or np.complex128. If None, numpy.result_type() is used to infer the data type from the inputs. complex of shape (row, chan, corr) africanus.dft.dask.vis_to_im(vis, uvw, lm, frequency, flags, convention='fourier', dtype=numpy.float64)[source]# Computes visibility to image mapping of an ideal interferometer: \[{\Large \sum_k e^{ 2 \pi i (u_k l + v_k m + w_k (n - 1))} \cdot V_k}\] visibilities of shape (row, chan, corr) Visibilities corresponding to brightness terms.
Note the dirty images produced do not necessarily correspond to Stokes terms and need to be converted. uvw coordinates of shape (row, 3) with u, v and w components in the last dimension. lm coordinates of shape (source, 2) with l and m components in the last dimension. frequencies of shape (chan,) flags Boolean array of shape (row, chan, corr) Note that if one correlation is flagged we discard all of them otherwise we end up irretrievably mixing Stokes terms. convention {‘fourier’, ‘casa’} Uses the \(e^{-2 \pi \mathit{i}}\) sign convention if fourier and \(e^{2 \pi \mathit{i}}\) if casa. dtype np.dtype, optional Datatype of result. Should be either np.float32 or np.float64. If None, numpy.result_type() is used to infer the data type from the inputs. float of shape (source, chan, corr)
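As an illustration, the forward mapping and its adjoint can be sketched with plain NumPy for a single channel and correlation. This is a naive re-implementation for clarity, not the library's optimized code; u, v, w are assumed to already be in units of wavelengths, and the function names are my own:

```python
import numpy as np

def im_to_vis_dft(image, uvw, lm):
    """V_k = sum_s exp(-2*pi*i*(u_k*l_s + v_k*m_s + w_k*(n_s - 1))) * I_s."""
    l, m = lm[:, 0], lm[:, 1]
    n = np.sqrt(1.0 - l ** 2 - m ** 2)          # n coordinate on the sky sphere
    phase = (uvw[:, 0, None] * l + uvw[:, 1, None] * m
             + uvw[:, 2, None] * (n - 1.0))     # shape (row, source)
    return np.exp(-2j * np.pi * phase) @ image  # R applied to the image

def vis_to_im_dft(vis, uvw, lm):
    """Dirty image I_D = R^dagger V (conjugate transpose of the operator above)."""
    l, m = lm[:, 0], lm[:, 1]
    n = np.sqrt(1.0 - l ** 2 - m ** 2)
    phase = (uvw[:, 0, None] * l + uvw[:, 1, None] * m
             + uvw[:, 2, None] * (n - 1.0))
    return (np.exp(2j * np.pi * phase).T @ vis).real
```

A unit point source at the phase centre (l = m = 0, so n = 1) gives V = 1 on every baseline, and applying the adjoint then returns the number of visibilities at that pixel, which is why dirty images are normally normalised afterwards.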
How do you find the integral of int sin^5(x)cos^8(x) dx? | HIX Tutor
How do you find the integral of #int sin^5(x)cos^8(x) dx#?
Answer 1
$-\frac{\cos^9 x}{9} + \frac{2\cos^{11} x}{11} - \frac{\cos^{13} x}{13} + C$
$I = \int \sin^5 x \cos^8 x\,dx = \int \sin x \sin^4 x \cos^8 x\,dx$
$I = \int \sin x (\sin^2 x)^2 \cos^8 x\,dx = \int (1 - \cos^2 x)^2 \cos^8 x \sin x\,dx$
Substitute $\cos x = t \Rightarrow -\sin x\,dx = dt \Rightarrow \sin x\,dx = -dt$:
$I = \int (1 - t^2)^2 t^8 (-dt) = -\int (1 - 2t^2 + t^4) t^8\,dt$
$I = -\int (t^8 - 2t^{10} + t^{12})\,dt = -\frac{t^9}{9} + \frac{2t^{11}}{11} - \frac{t^{13}}{13} + C$
$I = -\frac{\cos^9 x}{9} + \frac{2\cos^{11} x}{11} - \frac{\cos^{13} x}{13} + C$
Answer 2
To find the integral $\int \sin^5(x)\cos^8(x)\,dx$, use a trigonometric identity followed by a substitution:
1. Since the power of $\sin(x)$ is odd, set one factor of $\sin(x)$ aside and use the identity $\sin^2(x) = 1 - \cos^2(x)$ to rewrite $\sin^5(x)$ as $(1 - \cos^2(x))^2 \sin(x)$.
2. Expand $(1 - \cos^2(x))^2 = 1 - 2\cos^2(x) + \cos^4(x)$.
3. Substitute $u = \cos(x)$, so $du = -\sin(x)\,dx$; the integral becomes $-\int (1 - 2u^2 + u^4) u^8\,du$.
4. Integrate term by term to get $-\frac{u^9}{9} + \frac{2u^{11}}{11} - \frac{u^{13}}{13} + C$.
5. Substitute back $u = \cos(x)$ to recover the result of Answer 1.
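Answer 1's antiderivative can be sanity-checked numerically in pure Python (the helper names below are mine):

```python
import math

def integrand(x):
    return math.sin(x) ** 5 * math.cos(x) ** 8

def antiderivative(x):
    # -cos^9(x)/9 + 2cos^11(x)/11 - cos^13(x)/13, from Answer 1
    c = math.cos(x)
    return -c ** 9 / 9 + 2 * c ** 11 / 11 - c ** 13 / 13

def trapezoid(g, a, b, steps=200_000):
    """Composite trapezoid rule for the definite integral of g over [a, b]."""
    h = (b - a) / steps
    total = 0.5 * (g(a) + g(b))
    for k in range(1, steps):
        total += g(a + k * h)
    return total * h

# Over [0, pi/2] the antiderivative gives 1/9 - 2/11 + 1/13 = 8/1287
print(antiderivative(math.pi / 2) - antiderivative(0.0))  # ~0.006216
```

The numerically integrated value of the integrand over the same interval agrees with the antiderivative difference to well below the trapezoid rule's discretisation error, which is strong evidence the antiderivative is correct.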