Through the Interface Following on from the previous post in this series, today’s post completes the implementation to create a full Apollonian gasket in AutoCAD using F#. In a comment on the original Common LISP implementation, someone contributed a more complete version, which allowed me to finish today’s F# version. Here’s the additional F# file for the project (which I’ll be providing in full at the end of the series): module CirclePackingFullFs open System.Numerics; // Use Descartes' theorem to calculate the radius/position // of the 4th circle // k4 = k1 + k2 + k3 +/- sqrt(k1k2 + k2k3 + k3k1)...
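As a rough illustration of the curvature half of that calculation, here is Descartes' theorem in Python (the blog's actual code is F# and also derives circle centers via the complex form of the theorem; note the factor of 2 under the square root, which the truncated comment above elides):

```python
import math

def fourth_curvature(k1, k2, k3):
    """Descartes' theorem: curvatures of the two circles tangent to
    three mutually tangent circles with curvatures k1, k2, k3."""
    s = k1 + k2 + k3
    r = 2 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    return s + r, s - r  # inner (small) and outer solutions

# Three mutually tangent unit circles (curvature 1 each):
inner, outer = fourth_curvature(1, 1, 1)
# inner = 3 + 2*sqrt(3) ~ 6.4641; outer = 3 - 2*sqrt(3) ~ -0.4641
# (a negative curvature means the fourth circle encloses the other three)
```

Iterating this relation on each new triple of tangent circles is what fills out the gasket.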
r.mapcalc: Allow rounding of floating numbers Reported by: pvanbosgeo Owned by: Priority: normal Milestone: 7.6.2 Component: Raster Version: unspecified Keywords: r.mapcalc Cc: CPU: Unspecified Platform: Unspecified The round() function in r.mapcalc always returns an integer, regardless of its argument types. Integers are always 32-bit, so the result is limited to the range +/- 2147483647 (2^31-1). Suggestion: extend the function to allow to round numbers outside the integer range, and to round with a specific number of decimal places, like e.g., the function round() in R. Various options have been discussed on the mailing list, see this email thread Change History (19) Component: Default → Raster Keywords: r.mapcalc added Summary: Allow rounding of floating numbers → r.mapcalc: Allow rounding of floating numbers Replying to pvanbosgeo: The round() function in r.mapcalc always returns an integer, regardless of its argument types. Integers are always 32-bit, so the result is limited to the range +/- 2147483647 (2^31-1). Suggestion: extend the function to allow to round numbers outside the integer range, and to round with a specific number of decimal places, like e.g., the function round() in R. Try trunk r56313. The output type of round() is now the same like the input type. Rounding to a given number of decimal places is supported with round(x, y) with y = number of decimal places. The new function round(x, y) supports a negative number of decimal places: for example, round(119, -1) results in 120, and round(119, -2) results in 100. Replying to mmetz: The output type of round() is now the same like the input type. This changes long-standing behaviour in a way which could break scripts. E.g. if k is an integer, round(x) / k would always evaluate to an integer, whereas now it may result in a fraction. Rounding to a given number of decimal places is supported with round(x, y) From the mailing list discussion ... 
Rounding to a given number of decimals is unnecessarily limiting. The algorithm for generalised rounding is essentially: roundTo(x, k) = round(x / k) * k. Rounding to N decimal places is just a case of using k=1/10^N. If you allow k to be specified directly, then you can round to arbitrary steps (e.g. k=5 would round to the nearest multiple of 5, etc). However: there's a slight problem with doing it that way: 0.1 isn't exactly representable in binary, so e.g. x/0.1 isn't equal to x*10; it would be more accurate to use: roundTo(x, k) = round(x * k) / k where k is the reciprocal of the step, so k=10^N to round to N decimal places (or k=2 to round to 1/2). The downside is that the interface is less useful if you want to round to something other than a fixed number of decimal places. E.g. if you wanted to round to the nearest multiple of 45 degrees, you'd need to use k=1.0/45, which isn't exactly representable. Unless someone has a better idea, I plan to change the round() function so that the second argument is the step value, and add an roundi() (round-inverse) function where the second argument is the reciprocal of the step value (to avoid the rounding error when using a step of 10^-N). Replying to glynn: Replying to mmetz: The output type of round() is now the same like the input type. This changes long-standing behaviour in a way which could break scripts. OK. Restoring the original behaviour for round(x) is easy. But it would be nice to have a round(x, y) function that preserves the data type of x in order to have a possibility to avoid integer Rounding to a given number of decimal places is supported with round(x, y) From the mailing list discussion ... Rounding to a given number of decimals is unnecessarily limiting. The algorithm for generalised rounding is essentially: roundTo(x, k) = round(x / k) * k. Rounding to N decimal places is just a case of using k=1/10^N. If you allow k to be specified directly, then you can round to arbitrary steps (e.g. 
k=5 would round to the nearest multiple of 5, I was just looking at the function round() in R which rounds to decimal places. Generalised rounding makes more sense. However: there's a slight problem with doing it that way: 0.1 isn't exactly representable in binary, so e.g. x/0.1 isn't equal to x*10; it would be more accurate to use: roundTo(x, k) = round(x * k) / k where k is the reciprocal of the step, so k=10^N to round to N decimal places (or k=2 to round to 1/2). Unless someone has a better idea, I plan to change the round() function so that the second argument is the step value, and add an roundi() (round-inverse) function where the second argument is the reciprocal of the step value (to avoid the rounding error when using a step of 10^-N). Sounds good to me. Replying to glynn: Unless someone has a better idea, I plan to change the round() function so that the second argument is the step value, Done in r56365. An optional third argument is the start value, so e.g. round(x,1.0,0.5) will round to the nearest something-point-five value. and add an roundi() (round-inverse) function where the second argument is the reciprocal of the step value (to avoid the rounding error when using a step of 10^-N). I haven't bothered with this. If the step value can't be represented exactly, then in the general case, neither can the rounded value. If it's desired, it would be better to clone xround.c and modify the i_round() function (swap the multiplication and division) than to try to get yet another case into that file. Rounding to a given number of decimal places (as opposed to rounding to a multiple of 10^-N) is something which really needs to be done during conversion to a string. Attempting to round a floating-point value to a given number of decimal places will inevitably add rounding errors which may be visible if the value is subsequently converted to a string using sufficient precision. Hi Glynn, this looks and works great. 
Initially I wasn't really clear about the implementation, but the round(x,y) seems to do exactly what I hoped for (including rounding numbers outside the integer range), great! The explanations for round(x,y) and round(x,y,z) in the r.mapcalc help file are perhaps not immediately clear (but that might be me). Maybe it is possible to get some examples in the help file? In any case, great work. Replying to pvanbosgeo: The explanations for round(x,y) and round(x,y,z) in the r.mapcalc help file are perhaps not immediately clear (but that might be me). Maybe it is possible to get some examples in the help file? An example for each round(x,y) and round(x,y,z) would be appreciated (if posted here, I am happy to add them to the manual). E.g. convert degree Celsius map/floating point to 10*degC (like BIOCLIM) as integer? What is the status of this issue? Ticket retargeted after milestone closed I also think that two simple examples in the manual would be useful in understanding the use of round(x,y) and round(x,y,z). I tested locally adding split_window_expression = '(round({swe}, 2, 0.5))'.format(swe=split_window_expression) in https://gitlab.com/NikosAlexandris/i.landsat8.swlst/blob/master/i.landsat8.swlst.py after a bit of trial and error. What is the state of the ticket? All enhancement tickets should be assigned to the 7.6 milestone. Ticket retargeted after milestone closed Ticket retargeted after milestone closed
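Glynn's generalised rounding can be sketched in a few lines. Below, `round_to` and `round_inv` are illustrative Python stand-ins for r.mapcalc's round(x, y, z) and the proposed roundi(), not GRASS's actual C implementation:

```python
def round_to(x, step=1.0, start=0.0):
    """round(x, y, z): round x to the nearest value start + n*step."""
    return round((x - start) / step) * step + start

def round_inv(x, k):
    """roundi(x, k): k is the RECIPROCAL of the step, so k = 10**N
    rounds to N decimal places without dividing by an inexact 0.1."""
    return round(x * k) / k

print(round_to(119, 10))        # 120.0 -- like round(119, -1) in the old proposal
print(round_to(119, 100))       # 100.0
print(round_to(1.3, 1.0, 0.5))  # 1.5  -- nearest something-point-five
print(round_inv(0.123, 100))    # 0.12 -- two decimal places via the reciprocal
```

This mirrors the ticket's algebra: round(x, y, z) = round((x - z) / y) * y + z, and the reciprocal form multiplies instead of dividing to sidestep steps like 0.1 that binary floats cannot represent exactly.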
Club Penguin January 2012 Furniture Catalog Cheats

Club Penguin has released their first furniture catalog of 2012, for the month of January. Here is what is new, as well as the cheats for the hidden furniture items. The main theme of the new items ties into this month’s underwater theme. They’ve given the catalog a softer look and changed the buy button. The name of the catalog has also been changed from Better Igloos Catalog to Furniture Catalog. Here are the cheats for the locations of the hidden items, new and old:

- Click the lock on the Treasure Chest for the Ancient Archway, which costs 600 coins.
- Click the top of the Barbecue for the Bamboo Torch, which costs 200 coins.
- Click the Gingerbread man’s button for the Candy Cane, which costs 300 coins each.
- Click the word “Swirly” in “Swirly Lollipop” for the Log Bench, which costs 500 coins each.
- Click the top of the Gumdrop Tree for the Log Chair, which costs 200 coins.
- Click the icing decoration for the Icicle Lights, which costs 75 coins each.
- Click the top of the Cozy Fireplace for the Holiday Lights, which costs 30 coins each.
- Click the Coins For Change Donation Station for the Log Drawers, which costs 250 coins each.
- Click the candle on the right side of the Festive Coffee Table for the Holiday Bells, which costs 100 coins each.
- Click the word “Holiday” in “Holiday Star Decoration” for the Holiday Tree Decoration, which costs 150 coins each.
- Click the coin under the Stockings for the Presents, which costs 170 coins each.
- Click the top of the mountain for the Holiday Tree, which costs 600 coins each.
- Click the hat on the Top Hat Snowman for the Wooden Reindeer, which costs 450 coins each.
- Click the hat of the Orange Scarf Snowman for the Leaning Tree, which costs 250 coins each.
- Click the blue bird on the Santa Hat Snowman for the Lamp Post, which costs 600 coins each.
- Click the Brown Penguin for the Ninja Cauldron, which costs 150 coins.
- Click the word “Ninja” for the Training Dummy, which costs 500 coins.
- Click the word “only” for the Modern Chair, which costs 700 coins.
- Click the Yellow Lantern for the Modern Couch, which costs 850 coins.
- Click the Window for the Wall Clock, which costs 450 coins.
- Click the center of the Scoop Chair for the Green Birdhouse, which costs 200 coins.
- Click the bottom shelf of the Burgundy Bookshelf for the Blue Birdhouse, which costs 170 coins.
- Click the red lollipop towards the left of the Trick-or-Treats for the Terrifying Tissue Ghost, which costs 150 coins.
- Click the 230 under the Swamp Slime for the Control Terminal, which costs 800 coins.
- Click the word “wall” for the Iron Chandelier, which costs 600 coins.
- Click the handle of the Wall Pumpkin for the Cauldron, which costs 630 coins.
- Click the mouth of the Wall Ghost for the Plasma Ball, which costs 550 coins.
- Click the word “spooky” for the Laboratory Desk, which costs 700 coins.
- Click the mouth of The Laughing Lantern for the Perched Puffle Statue, which costs 275 coins.
- Click the eyes of The Glowing Grin for the Candelabra, which costs 650 coins.
- Click the third section of the Stone Wall for the Crystal Ball, which costs 350 coins.
- Click the top of the Antique Clock for the Torn Carpet, which costs 100 coins.
- Click the first pumpkin in the Jack-O-Lights for the Tombstone, which costs 300 coins.
- Click the right window of the Haunted Mansion Cut-Out for the Pile o’ Goo, which costs 50 coins.
- Click the door of the Haunted Mansion Cut-Out for the Spider Web, which costs 75 coins.

Here is a video of all the cheats:

The following pages have been updated:

Fun Fact: The Suit you see on the cover, which is from a 2009 cave expedition, allows you to do the swimming dance that Ducky tubes and Lifeguard suits can do. A Green variation of this suit came out in the treasure book... yet it is missing the swimming dance. My question is... why?
Find The Slope Of Each Line Worksheet Answers

The slope of a line graph can be entered into a Microsoft Excel table by first right-clicking the plot and then going to Insert Table. Select the column and enter the name of the data you have placed in the cell. Click on the data, open the drop-down menu, and choose the entry that says slope of the line. Enter the name of the slope of the line and click OK. You should see a new column added to your Microsoft Excel table. Now you can plot your data using Microsoft Excel.

Using a Microsoft Excel table, you can plot the horizontal or vertical lines that represent your data. You can add a title to the cell and highlight it by choosing the color blue and entering the value 8 in the range formula. Click OK, and you should see your new plot displayed on your Microsoft Excel graph. You can change the color of the highlighted cell by selecting blue and then choosing light gray. There are other ways to plot a line graph in Microsoft Excel. In the Project menu, go to Page Layout and select Page Layout Wizard. Here you can choose the grid where your data plot is located. Use the drop-down menus to set the size and format of your data plot and then click OK.

Horizontal line graphs can be used to display sales price information for products. To draw a horizontal line from the high point of the highest-priced item to the low point of the lowest-priced item, enter a price of 200 in the range formula and drag the mouse to the lower-left corner of the plot. Click on the line and use the normal mouse controls to draw a horizontal line from the high point to the low point. Repeat this process to create a second line or angle for additional analysis. Be careful when creating angles or lines: if you accidentally move the mouse to one side, Excel will complain and you will have to redraw the portion that was erased.

There are times when it is not feasible to simply drag a line from point A to point B, so the plot requires more advanced tools. One of these is the histogram, a graphical representation of the data plot created in Microsoft Excel. When you hover your cursor over the x-axis, you will see a histogram of the data, which visually shows the variation in prices over time. Another useful tool is logistic regression, which plots a line graph of the predicted path of the selected price variable over time; this tool is available only with Microsoft Excel 2021.

To draw the line graphs, select the desired points on the chart and enter a value for the points on the x-axis. Each point must be on its own line; otherwise the data will not be consistent. Click on one of the points on the chart and watch it change as the price varies. To add more data, you can add more points, and if you don’t like the way the plot looks at the beginning, you can erase some of the points.

You might notice that there is no slope associated with the data plot. Don’t be alarmed! This is very common; most graphing packages do not include slope commands. The reason there is no slope for an Excel line graph is that the data is always plotted as a mean value over a zero interval. When a point is plotted as a mean, it will become very insignificant as the value begins to saturate (i.e., the slope of the line tends to zero).

Worksheet images from worksheets-library.com, amsi.org.au, mathcrush.com, khanacademy.org, purf.us, saylordotorg.github.io, goybparenting.com, grahapada.com, and terpconnect.umd.edu.
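For the skill the worksheets actually drill, the two-point slope formula m = (y2 - y1) / (x2 - x1), here is a quick check in Python (the function name is mine, for illustration only):

```python
def slope(p1, p2):
    """Slope of the line through points p1 = (x1, y1) and p2 = (x2, y2)."""
    (x1, y1), (x2, y2) = p1, p2
    if x2 == x1:
        raise ValueError("vertical line: slope is undefined")
    return (y2 - y1) / (x2 - x1)

m = slope((1, 3), (4, 12))  # rise of 9 over a run of 3 -> 3.0
```

A negative result means the line falls from left to right, e.g. slope((0, 0), (2, -4)) gives -2.0.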
PLUS Factorization of Matrices PLUS factorization was proposed as a new framework of matrix factorization, A = PLUS, where the matrices P, L and U are almost the same as in LU factorization (a permutation matrix and unit lower and upper triangular matrices, respectively), and S is a very special matrix: unit lower triangular, but with only a small number of non-zeros. Different from LU factorization, all the diagonal elements of U in PLUS factorization are customizable, i.e., they can be assigned by users almost freely. With PLUS factorization, the matrix A is easily factorized further into a series of special matrices similar to S. The computational machinery of PLUS factorization has a few elegant and promising properties that other factorizations do not have, such as in-place computation and simple inversion. PLUS factorization also allows transforming integers reversibly and losslessly if the diagonal elements of U are all designated as 1, -1, i, or -i. The theory was mainly published in the following two papers: * Pengwei Hao, "Customizable Triangular Factorizations of Matrices", Linear Algebra and Its Applications, Vol. 382, pp. 135-154, May 2004. * Pengwei Hao and Qingyun Shi, "Matrix Factorization for Reversible Integer Mapping", IEEE Transactions on Signal Processing, Vol. 49, No. 10, pp. 2314-2324, Oct. 2001. It has applications in lossless source coding, fast image registration and fast volumetric data rendering. All our publications are downloadable at http://www.dcs.qmul.ac.uk/~phao/Papers/. Our programs in C (and its EXE version) and MATLAB are available for research ONLY; please cite the above papers in your publications. For any commercial applications, a written permission MUST be obtained from the authors. email: phao@cis.pku.edu.cn, phao@dcs.qmul.ac.uk
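For background, the P, L, U part of A = PLUS is ordinary LU factorization with partial pivoting. A pure-Python Doolittle sketch follows (this is the generic textbook algorithm, not the authors' PLUS code; PLUS additionally extracts the sparse factor S and lets the user choose U's diagonal):

```python
def lu_decompose(A):
    """Doolittle LU with partial pivoting on a list-of-lists matrix.
    Returns (perm, L, U) such that row perm[i] of A equals row i of L*U."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    perm = list(range(n))
    for k in range(n):
        # Pivot: bring the largest |U[i][k]| (i >= k) up to row k.
        p = max(range(k, n), key=lambda i: abs(U[i][k]))
        if p != k:
            U[k], U[p] = U[p], U[k]
            perm[k], perm[p] = perm[p], perm[k]
            for j in range(k):
                L[k][j], L[p][j] = L[p][j], L[k][j]
        # Eliminate below the pivot, storing multipliers in L.
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return perm, L, U

perm, L, U = lu_decompose([[2.0, 1.0], [4.0, 3.0]])
# perm == [1, 0]; L == [[1, 0], [0.5, 1]]; U == [[4, 3], [0, -0.5]]
```

In PLUS the upper factor's diagonal is constrained to user-chosen values (e.g. all 1), which is what enables the reversible integer mapping described above.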
Parallel Breadth-First Search and Exact Shortest Paths and Stronger Notions for Approximate Distances We introduce stronger notions for approximate single-source shortest-path distances, show how to efficiently compute them from weaker standard notions, and demonstrate the algorithmic power of these new notions and transformations. One application is the first work-efficient parallel algorithm for computing exact single-source shortest paths, resolving a major open problem in parallel computing. Given a source vertex in a directed graph with polynomially-bounded nonnegative integer lengths, the algorithm computes an exact shortest path tree in m log^O(1) n work and n^(1/2+o(1)) depth. Previously, no parallel algorithm improving on the trivial linear depth of Dijkstra's algorithm without significantly increasing the work was known, even for the case of undirected and unweighted graphs (i.e., for computing a BFS-tree). Our main result is a black-box transformation that uses log^O(1) n standard approximate distance computations to produce approximate distances which also satisfy the subtractive triangle inequality (up to a (1+ε) factor) and even induce an exact shortest path tree in a graph with only slightly perturbed edge lengths. These strengthened approximations are algorithmically significantly more powerful and overcome well-known and often-encountered barriers for using approximate distances. In directed graphs they can even be boosted to exact distances. This yields a black-box transformation of any (parallel or distributed) algorithm for approximate shortest paths in directed graphs into an algorithm computing exact distances at essentially no cost. Applying this to the recent breakthroughs of Fineman et al. for computing approximate SSSP distances via approximate hopsets gives new parallel and distributed algorithms for exact shortest paths.
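For reference, the trivial sequential baseline the abstract alludes to — a BFS computing unweighted single-source distances in O(m) work but with depth proportional to the distance — can be sketched as follows (an illustrative toy, not the paper's parallel algorithm):

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted single-source shortest-path distances via BFS.
    adj maps a vertex to its list of out-neighbors."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in dist:          # first visit = shortest distance
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

adj = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
d = bfs_distances(adj, 0)
# d == {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```

Each BFS level depends on the previous one, which is exactly the sequential dependency chain the paper's n^(1/2+o(1))-depth algorithm breaks.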
Full-tank - math word problem (83857) A full tank of petrol lasts a car 10 days. If the driver starts using 25% more petrol every day, how many days will the full tank last? Correct answer: Did you find an error or inaccuracy? Feel free to write us. Thank you! Tips for related online calculators: our percentage calculator will help you quickly solve typical tasks with percentages, and our volume units converter will help you convert volume units. Do you want to convert time units like minutes to seconds?
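Under the usual reading of the problem (the driver's daily consumption jumps to a constant 125% of the old rate), the tank holds 10 days' worth of petrol at the old rate, so it lasts 10 / 1.25 = 8 days. A one-line check:

```python
tank = 10 * 1.0        # capacity in "old daily units" (old rate r = 1.0)
new_rate = 1.25 * 1.0  # 25% more petrol used per day
days = tank / new_rate
print(days)  # 8.0
```

If instead the consumption were read as compounding by 25% each successive day, the answer would differ; the fixed-rate reading above is the standard one for this percentage exercise.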
RS Aggarwal Solutions Class 8 Chapter-6 Operations on Algebraic Expressions (Ex 6C) Exercise 6.3 - Free PDF To score well in mathematics, students of class 8 need to be fully prepared before the mathematics exam. Vedantu is the best source of study material for class 8 mathematics, including RS Aggarwal Solutions Class 8 Chapter-6 Operations on Algebraic Expressions (Ex 6C) Exercise 6.3 - Free PDF. Free PDF download of RS Aggarwal Solutions Class 8 Chapter-6 Operations on Algebraic Expressions, solved by expert mathematics teachers on Vedantu. All exercises with solutions for Class 8 Math RS Aggarwal help you revise the complete syllabus and score more marks. Register for online coaching for IIT JEE (Mains & Advanced) and other engineering entrance exams. FAQs on RS Aggarwal Solutions Class 8 Chapter-6 Operations on Algebraic Expressions (Ex 6C) Exercise 6.3 1. How to prepare for the Class 8 math exam with RS Aggarwal Solutions Class 8 Chapter-6 Operations on Algebraic Expressions (Ex 6C) Exercise 6.3 - Free PDF? The following are the steps for students of class 8 to prepare with RS Aggarwal Solutions Class 8 Chapter-6 Operations on Algebraic Expressions (Ex 6C) Exercise 6.3 - Free PDF. • First, students are advised to learn all the basic concepts of Chapter-6 Operations on Algebraic Expressions and be thorough with all important topics. • Then, students can start solving RS Aggarwal Solutions. If any doubt arises, students can check the solutions provided in RS Aggarwal Solutions Class 8 - Free PDF. 2. How to get the most out of the RS Aggarwal Solutions Class 8 Chapter-6 Operations on Algebraic Expressions (Ex 6C) Exercise 6.3 - Free PDF?
The solutions of Operations on Algebraic Expressions given here on Vedantu are completely free, and any student can clear their doubts about the exercises in the RS Aggarwal book with the greatest ease. After learning all the basic concepts, students of class 8 can start solving RS Aggarwal Solutions. Students should first attempt the questions by themselves before checking the solutions. 3. What are the key benefits of RS Aggarwal Solutions Class 8 Chapter-6 Operations on Algebraic Expressions (Ex 6C) Exercise 6.3? The following are the key benefits of RS Aggarwal Solutions Class 8 Chapter-6 Operations on Algebraic Expressions (Ex 6C) Exercise 6.3. • Easy-to-understand explanations of Chapter-6 Operations on Algebraic Expressions (Ex 6C) Exercise 6.3 • Detailed and stepwise solutions for all the exercise questions • The best methods to solve the questions • Diagrams provided for a better understanding • The complete solutions are free of cost for students of class 8 • Short and crisp methods of solving questions 4. What are the advantages of using RS Aggarwal Solutions Class 8 Chapter-6 Operations on Algebraic Expressions (Ex 6C) Exercise 6.3 - Free PDF? The RS Aggarwal Solutions Class 8 Chapter-6 Operations on Algebraic Expressions (Ex 6C) Exercise 6.3 - Free PDF contains illustrative examples related to each concept. With the help of well-explained examples, students can relate to the situations efficiently. There is an abundance of questions available in RS Aggarwal Solutions - Free PDF, which helps students practice thoroughly and gain conceptual clarity. Therefore, students of class 8 should refer to RS Aggarwal Solutions while preparing for the math exam. 5. How can RS Aggarwal Solutions Class 8 Chapter-6 Operations on Algebraic Expressions (Ex 6C) Exercise 6.3 - Free PDF help students score good marks?
Many questions in RS Aggarwal Solutions are higher-order thinking skills questions, which require students to think out of the box to solve them. Practicing the RS Aggarwal - Free PDF regularly will help students of class 8 refine their approach to reach the correct answer quickly. They will be able to develop problem-solving abilities and a deeper understanding of the concepts in math Chapter-6 Operations on Algebraic Expressions.
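Chapter 6's topic, operations on algebraic expressions (adding and multiplying polynomials), is easy to sanity-check numerically. A small sketch using coefficient lists, lowest power first (the representation and function names are illustrative, not from the RS Aggarwal book):

```python
def poly_add(p, q):
    """Add two polynomials given as coefficient lists, lowest degree first."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    """Multiply coefficient-list polynomials: distribute every term."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (x + 2) + (3x - 1) = 4x + 1, and (x + 1)(x - 1) = x^2 - 1
s = poly_add([2, 1], [-1, 3])   # [1, 4]
m = poly_mul([1, 1], [-1, 1])   # [-1, 0, 1]
```

Checking a hand-worked exercise against such a helper is a quick way to verify an answer before looking it up in the solutions PDF.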
PPT - College Algebra Prerequisite Topics Review PowerPoint Presentation - ID:1034491 1. College Algebra Prerequisite Topics Review • Quick review of basic algebra skills that you should have developed before taking this class • 18 problems that are typical of things you should already know how to do 2. Review of Like Terms • Recall that a term is a constant, a variable, or a product of a constant and variables • Like Terms: terms are called “like terms” if they have exactly the same variables with exactly the same exponents, but may have different coefficients • Example of Like Terms: 3. Adding and Subtracting Like Terms • When “like terms” are added or subtracted, the result is a like term and its coefficient is the sum or difference of the coefficients of the other terms • 4. Polynomial • Polynomial – a finite sum of terms • Examples: 5. Adding and Subtracting Polynomials • To add or subtract polynomials: • Distribute to get rid of parentheses • Combine like terms • Example: 6. Problem 1 • Perform the indicated operation: • Answer: 7. Multiplying Polynomials • To multiply polynomials: • Get rid of parentheses by multiplying every term of the first by every term of the second using the rules of exponents • Combine like terms • 8. Problem 2 • Perform the indicated operation: • Answer: 9. Squaring a Binomial • To square a binomial means to multiply it by itself • Although a binomial can be squared by foiling it by itself, it is best to memorize a shortcut for squaring a binomial: 10. Problem 3 • Perform the indicated operation: • Answer: 11. Dividing a Polynomial by a Polynomial • First write each polynomial in descending powers • If a term of some power is missing, write that term with a zero coefficient • Complete the problem exactly like a long division problem in basic math 13. Problem 4 • Perform the indicated operation: • Answer: 14.
Factoring Polynomials • To factor a polynomial is to write it as a product of two or more other polynomials, each of which is called a factor • In a sense, factoring is the opposite of multiplying polynomials: We have learned that: (2x – 3)(3x + 5) = 6x2 + x – 15 If we were asked to factor 6x2 + x – 15 we would write it as: (2x – 3)(3x + 5) So we would say that (2x – 3) and (3x + 5) are factors of 6x2 + x – 15 15. Prime Polynomials • A polynomial is called prime if it is not 1 and its only factors are itself and 1 • Just like we learn to identify certain numbers as being prime, we will learn to identify certain polynomials as being prime • We will also completely factor polynomials by writing them as a product of prime polynomials 16. Importance of Factoring • If you don’t learn to factor polynomials you can’t pass college algebra or more advanced math classes • It is essential that you memorize the following procedures and become proficient in using them 17. 5 Steps in Completely Factoring a Polynomial (1) Write the polynomial in descending powers of one variable (if there is more than one variable, pick any one you wish) (2) Look at each term of the polynomial to see if every term contains a common factor other than 1; if so, use the distributive property in reverse to place the greatest common factor outside parentheses and other terms inside parentheses that give a product equal to the original polynomial (3) After factoring out the greatest common factor, look at the new polynomial factors to determine how many terms each one contains (4) Use the method appropriate to the number of terms in the polynomial: 4 or more terms: “Factor by Grouping” 3 terms: PRIME UNLESS they are of the form “ax2 + bx + c”.
If of this form, use “Trial and Error FOIL” or “abc Grouping” 2 terms: Always PRIME UNLESS they are: “difference of squares”: a2 – b2 “difference of cubes”: a3 – b3 “sum of cubes”: a3 + b3 In each of these cases factor by a formula (5) Cycle through step 4 as many times as necessary until all factors are “prime” 18. Factoring the Greatest Common Factor from Polynomials (Already in descending powers of a variable) 9y5 + y2 = y2( ) = y2(9y3 + 1) 6x2t + 8xt + 12t = 2t( ) = 2t(3x2 + 4x + 6) 19. Factor by Grouping (Used for 4 or more terms) (1) Group the terms by underlining: If there are exactly 4 terms try: 2 & 2 grouping, 3 & 1 grouping, or 1 & 3 grouping If there are exactly 5 terms try: 3 & 2 grouping, or 2 & 3 grouping 20. Factoring by Grouping (2) Factor each underlined group as if it were a factoring problem by itself (3) Now determine if the underlined and factored groups contain a common factor; if they contain a common factor, factor it out; if they don’t contain a common factor, try other groupings; if none work, the polynomial is prime (4) Once again count the terms in each of the new polynomial factors and return to step 4. 21. Example of Factoring by Grouping Factor: ax + ay + 6x + 6y (1) Group the terms by underlining (start with 2 and 2 grouping): ax + ay + 6x + 6y (2) Factor each underlined group as if it were a factoring problem by itself: a(x + y) + 6(x + y) [notice sign between groups gets carried down] 22. Factoring by Grouping Example Continued (3) Now determine if the underlined and factored groups contain a common factor; if they do, factor it out: a(x + y) + 6(x + y) = (x + y)(a + 6), so ax + ay + 6x + 6y = (x + y)(a + 6) (4) Once again count the terms in each of the new polynomial factors and return to step 4. Each of these polynomial factors contains two terms; return to step 4 to see if these will factor (SINCE WE HAVE NOT YET DISCUSSED FACTORING POLYNOMIALS WITH TWO TERMS WE WILL NOT CONTINUE AT THIS TIME) 23.
Example of Factoring by Grouping
Factor: [expression shown on slide, not captured in this extraction]
(1) Group the terms by underlining (try 2 and 2 grouping).
(2) Factor each underlined group as if it were a factoring problem by itself [notice the sign between groups gets carried down and you have to be careful with this sign].
(3) Now determine if the underlined and factored groups contain a common factor; if they do, factor it out.
(4) Once again count the terms in each of the new polynomial factors and return to step 4. Each of these polynomial factors contains two terms, so return to step 4 to see if these will factor. (AGAIN, WE HAVE NOT YET LEARNED TO FACTOR BINOMIALS, SO WE WON'T CONTINUE ON THIS EXAMPLE)

Note on Factoring by Grouping
• It was noted in step 3 of the factor-by-grouping steps that sometimes the first grouping, or the first arrangement of terms, might not result in a common factor in each term; in that case other groupings, or other arrangements of terms, must be tried.
• Only after we have tried all groupings and all arrangements of terms can we determine whether the polynomial is factorable or prime.

Try Factoring by Grouping Without First Rearranging
Factor: [expression shown on slide, not captured]
(1) Group the terms by underlining (try 2 and 2).
(2) Factor each underlined group as if it were a factoring problem by itself.

Now Try the Same Problem by Rearranging
Factor and rearrange: [expressions shown on slide, not captured]
(1) Group the terms by underlining.
(2) Factor each underlined group as if it were a factoring problem by itself.
(3) Now factor out the common factor.
(4) Once again count the terms in each of the new polynomial factors and return to step 4. Each of these polynomial factors contains two terms, so return to step 4 to see if these will factor. (AGAIN, WE HAVE TO WAIT UNTIL WE LEARN TO FACTOR BINOMIALS BEFORE WE CAN CONTINUE)
Factoring Trinomials by Trial and Error FOIL (used for 3 terms of form ax² + bx + c)
• Given a trinomial of this form, experiment to try to find two binomials that could multiply to give that trinomial.
• Remember that when two binomials are multiplied:
    First times First = First Term of Trinomial
    Outside times Outside + Inside times Inside = Middle Term of Trinomial
    Last times Last = Last Term of Trinomial

Steps in Using Trial and Error FOIL
• Given a trinomial of the form ax² + bx + c:
• Write two blank parentheses that will each eventually contain a binomial.
• Use the idea that "first times first = first" to get possible answers for the first term of each binomial.
• Next use the idea that "last times last = last" to get possible answers for the last term of each binomial.
• Finally use the idea that "Outside times Outside + Inside times Inside = Middle Term of Trinomial" to get the final answer for two binomials that multiply to give the trinomial.

Prime Trinomials
• A trinomial is automatically prime if it is not of the form ax² + bx + c.
• However, a trinomial of this form is also prime if all possible combinations of "trial and error FOIL" have been tried, and none have yielded the correct middle term.
• Example: [trinomial shown on slide, not captured] Why is this prime? The only possible combinations that give the correct first and last terms are: [shown on slide]. Neither gives the correct middle term: [shown on slide].
Example of Factoring by Trial and Error FOIL
• Factor: 12x² + 11x – 5
• Using the steps on the previous slides, we see all the possibilities that give the correct first and last terms on the left, and the result of multiplying them on the right (we are looking for the one that gives the correct middle term):

(12x + 1)(x – 5) = 12x² – 59x – 5
(12x – 1)(x + 5) = 12x² + 59x – 5
(12x + 5)(x – 1) = 12x² – 7x – 5
(12x – 5)(x + 1) = 12x² + 7x – 5
(6x + 1)(2x – 5) = 12x² – 28x – 5
(6x – 1)(2x + 5) = 12x² + 28x – 5
(6x + 5)(2x – 1) = 12x² + 4x – 5
(6x – 5)(2x + 1) = 12x² – 4x – 5
(4x + 1)(3x – 5) = 12x² – 17x – 5
(4x – 1)(3x + 5) = 12x² + 17x – 5
(4x + 5)(3x – 1) = 12x² + 11x – 5 (the correct middle term)
(4x – 5)(3x + 1) = 12x² – 11x – 5

A Second Method of Factoring Trinomials
• While the "Trial and Error FOIL" method can always be used in attempting to factor trinomials, and is usually best when first and last terms have "small" coefficients, there is a second method that is usually best to use when first and last coefficients are "larger".
• We call the second method "abc grouping".

Factoring Trinomials by abc Grouping (used for 3 terms of form ax² + bx + c)
• When a polynomial is of the form ax² + bx + c:
(1) Identify "a", "b", and "c".
(2) Multiply "a" and "c".
(3) Find two numbers "m" and "n" that multiply to give "ac" and add to give "b" (if this cannot be done, the polynomial is already prime).
(4) Rewrite the polynomial as: ax² + mx + nx + c.
(5) Factor these four terms by 2 and 2 grouping.

Example of Factoring by abc Grouping
• Factor: 12x² + 11x – 5
(1) Identify "a", "b", and "c": a = 12, b = 11, c = –5
(2) Multiply "a" and "c": ac = –60
(3) Find two numbers "m" and "n" that multiply to give "ac" and add to give "b": m = 15 and n = –4, because mn = –60 and m + n = 11
(4) Rewrite as four terms: 12x² + 15x – 4x – 5
(5) Factor by grouping: 12x² + 15x – 4x – 5 = 3x(4x + 5) – 1(4x + 5) = (4x + 5)(3x – 1)
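Step (3) of abc grouping, finding m and n with mn = ac and m + n = b, is a purely mechanical search, so it is easy to sketch in code. The brute-force helper below is our own illustration, not part of the slides:

```python
def abc_grouping_split(a: int, b: int, c: int):
    """For ax^2 + bx + c with ac != 0, find integers m, n with
    m * n == a * c and m + n == b. Returns None if no pair exists,
    in which case this method cannot factor the trinomial."""
    ac = a * c
    # Every candidate m must be a divisor of ac, so scan that range.
    for m in range(-abs(ac), abs(ac) + 1):
        if m != 0 and ac % m == 0 and m + (ac // m) == b:
            return m, ac // m
    return None

# The slides' example: 12x^2 + 11x - 5 has ac = -60, split as 11 = 15 + (-4).
assert abc_grouping_split(12, 11, -5) in {(15, -4), (-4, 15)}
# x^2 + x + 1 admits no such split, so it is prime over the integers.
assert abc_grouping_split(1, 1, 1) is None
```

Once m and n are found, rewriting and grouping by hand proceeds exactly as in the slide's example.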
Example of Factoring by abc Grouping (with two variables)
• Factor: 35x² – 12y² – 13xy, rewritten as 35x² – 13xy – 12y² (descending powers of x)
(1) Identify "a", "b", and "c" (ignore the y variable): a = 35, b = –13, c = –12
(2) Multiply "a" and "c": ac = –420
(3) Find two numbers "m" and "n" that multiply to give "ac" and add to give "b" (if this cannot be done, the polynomial is already prime): m = 15 and n = –28, because mn = –420 and m + n = –13
(4) Rewrite as four terms: 35x² + 15xy – 28xy – 12y²
(5) Factor by grouping: 35x² + 15xy – 28xy – 12y² = 5x(7x + 3y) – 4y(7x + 3y) = (7x + 3y)(5x – 4y)

Factoring Binomials by Formula
• Factor by using the formula appropriate for the binomial:
    "difference of squares": a² – b² = (a – b)(a + b)
    "difference of cubes": a³ – b³ = (a – b)(a² + ab + b²) [the trinomial factor is prime]
    "sum of cubes": a³ + b³ = (a + b)(a² – ab + b²) [the trinomial factor is prime]
• If none of the formulas apply, the binomial is prime. BINOMIALS ARE PRIME UNLESS THEY ARE ONE OF THESE.

Example of Factoring Binomials
• Factor: 25x² – 9y²
• Note that this binomial is a difference of squares: (5x)² – (3y)²
• Using the formula gives: (5x – 3y)(5x + 3y)

Example of Factoring Binomials
• Factor: 8x³ – 27
• Note that this is a difference of cubes: (2x)³ – (3)³
• Using the formula gives: (2x – 3)(4x² + 6x + 9)

Example of Factoring Binomials
• Factor: 4x² + 9
• Note that this is not a difference of squares, a difference of cubes, or a sum of cubes; therefore it is prime: (4x² + 9)
• To show factoring of a polynomial that is prime, put it inside parentheses.

Problems 5–9 • Factor completely: [expressions and answers shown on slides, not captured in this extraction]

Rational Expression
• A ratio of two polynomials where the denominator is not zero (an "ugly fraction" with a variable in a denominator)
• Example: [shown on slide]
Reducing Rational Expressions to Lowest Terms
• Completely factor both numerator and denominator.
• Apply the fundamental principle of fractions: divide out common factors that are found in both the numerator and the denominator.

Example of Reducing Rational Expressions to Lowest Terms
• Reduce to lowest terms: [expression shown on slide, not captured]
• Factor top and bottom: [shown on slide]
• Divide out common factors to get: [shown on slide]
Which one of the following accurately describes the three parts of the DuPont identity?

A. Operating efficiency, equity multiplier, and profitability ratio.
B. Financial leverage, operating efficiency, and profitability ratio.
C. Equity multiplier, profit margin, and total asset turnover.
D. Debt-equity ratio, capital intensity ratio, and profit margin.
E. Return on assets, profit margin, and equity multiplier.

The Correct Answer Is: C. Equity multiplier, profit margin, and total asset turnover.

Correct Answer Explanation:

The DuPont identity, also known as the DuPont analysis, is a fundamental tool used in financial analysis to break down the return on equity (ROE) into its three key components: equity multiplier, profit margin, and total asset turnover.

C. Equity multiplier, profit margin, and total asset turnover is the correct answer because it accurately represents the three components of the DuPont identity.

i. Equity Multiplier: This ratio measures the financial leverage employed by a company, indicating how much debt a company uses to finance its assets relative to shareholders' equity. It is calculated as total assets divided by shareholders' equity. A higher equity multiplier suggests a higher level of financial risk due to increased reliance on debt financing.

ii. Profit Margin: This ratio shows the company's profitability by measuring how much profit a company generates for every dollar of sales. It's calculated by dividing net income by total revenue. A higher profit margin indicates that the company is more efficient in managing its expenses relative to its revenue.

iii. Total Asset Turnover: This ratio measures a company's efficiency in using its assets to generate sales.
It’s calculated by dividing total revenue by average total assets. A higher total asset turnover signifies that the company is generating more revenue per dollar of assets employed. Now, let’s dissect why the other options are incorrect: A. Operating efficiency, equity multiplier, and profitability ratio. While this option correctly includes the equity multiplier and the profitability ratio, “operating efficiency” is not a direct component of the DuPont identity. The DuPont identity specifically breaks down the return on equity (ROE) into three components: equity multiplier, profit margin, and total asset turnover. Operating efficiency is a broad concept that may involve various efficiency measures but is not a standard part of the DuPont analysis. B. Financial leverage, operating efficiency, and profitability ratio. This option, like option A, includes “operating efficiency,” which is not a component of the DuPont identity. The term “financial leverage” is typically associated with the equity multiplier, which measures the financial leverage employed by a company. The profitability ratio is relevant, but the correct breakdown in the DuPont analysis is profit margin rather than a more generic profitability ratio. D. Debt-equity ratio, capital intensity ratio, and profit margin. This option combines different financial ratios, none of which precisely match the three components of the DuPont identity. The debt-equity ratio is related to financial leverage but is not the same as the equity multiplier. The capital intensity ratio is not a direct component of the DuPont analysis. The correct breakdown for DuPont is profit margin, equity multiplier, and total asset turnover. E. Return on assets, profit margin, and equity multiplier. While this option includes profit margin and equity multiplier, “return on assets” is not a part of the original three components of the DuPont identity. 
Return on assets is a distinct ratio that measures the efficiency of utilizing assets to generate profits but is not one of the standard components of DuPont. The correct breakdown for DuPont is profit margin, equity multiplier, and total asset turnover.

In summary, the DuPont identity is a specific financial analysis tool that breaks down return on equity into three components: equity multiplier, profit margin, and total asset turnover. None of the other options accurately capture this breakdown, either including extraneous elements or missing key components of the DuPont analysis. The precise terminology and definitions matter in financial analysis, and understanding the correct components is crucial for an accurate assessment of a company's financial performance.
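The identity itself is simply ROE = profit margin × total asset turnover × equity multiplier, which collapses algebraically to net income divided by equity. A minimal Python sketch with made-up figures (the numbers below are illustrative assumptions, not from the text):

```python
# Hypothetical financial statement figures, for illustration only.
net_income = 120.0
revenue = 1500.0
total_assets = 2000.0
equity = 800.0

profit_margin = net_income / revenue        # profitability
asset_turnover = revenue / total_assets     # operating efficiency
equity_multiplier = total_assets / equity   # financial leverage

# DuPont identity: the three factors multiply out to ROE.
roe = profit_margin * asset_turnover * equity_multiplier

# Revenue and total assets cancel, leaving net income / equity.
assert abs(roe - net_income / equity) < 1e-12
print(f"ROE = {roe:.2%}")
```

Because revenue and total assets cancel in the product, the decomposition never changes the ROE value itself; it only attributes it to the three drivers named in answer C.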
A Class of Well Posed Damped PDEs

A large class of well posed PDEs is given by [45]: [equation shown in the original, not captured in this extraction]. Thus, to the ideal string wave equation [equation not captured] we may add any number of even-order partial derivatives [terms not captured] and remain well posed, as we now show.

To show Eq. (D.5) is well posed [45], we must show that the roots of the characteristic polynomial equation (§D.3) have negative real parts, i.e., that they correspond to decaying exponentials instead of growing exponentials. To do this, we may insert the general eigensolution [not captured] into the PDE, just as we did in §[reference not captured], to obtain the so-called characteristic polynomial equation [not captured]. Let's now restrict the spatial Laplace transform variable to j times the spatial frequency (called the "wavenumber" in acoustics), which of course converts the spatial Laplace transform to a spatial Fourier transform. Since there are only even powers of the spatial Laplace transform variable, the corresponding terms remain real. Therefore, the roots of the characteristic polynomial equation (the natural frequencies of the time response of the system) are given by [formula not captured].

Next Section: Proof that the Third-Order Time Derivative is Ill Posed
Previous Section: Poles at
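To make the stability argument concrete, here is a worked sketch for the simplest damped case, a special case chosen for illustration and not necessarily the exact form of Eq. (D.5):

```latex
% Take the damped wave equation  u_{tt} = c^2 u_{xx} - 2\mu u_t,  \mu \ge 0.
% Inserting the eigensolution  u(t,x) = e^{st} e^{jkx}  gives the
% characteristic polynomial in s:
s^2 + 2\mu s + c^2 k^2 = 0
\qquad\Longrightarrow\qquad
s = -\mu \pm \sqrt{\mu^2 - c^2 k^2}.
% If  \mu^2 \ge c^2 k^2,  the square root is real and no larger than \mu,
% so both roots are real and nonpositive; otherwise the roots are a complex
% conjugate pair with  \mathrm{Re}(s) = -\mu \le 0.  Either way every
% eigensolution decays (or at worst does not grow), which is the
% well-posedness condition described above.
```

The same pattern, real coefficients and roots confined to the left half plane, is what the even-order derivative terms preserve in the general case.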
2. Data Structure For Testers – Reverse Linked List in Java

As a part of the Data Structure for Testers series, in this post we will learn to reverse a Linked List data structure in Java.

Prerequisite post: Data Structure For Testers – Implement Linked List In Java

Logic to reverse a Linked List
1. A node in a Linked List points to the next node, but in a reversed Linked List a node will point to the previous node.
2. The head will become the tail in a reversed Linked List.

To reverse a Linked List, every node will go through four steps. Let's have three references:
a. current = refers to the current node
b. next = refers to the next node of the current node
c. previous = refers to the previous node of the current node (null in the beginning)

Step 1: Store the reference of next of the current node
Currently, next will point to the next node, or NULL if there is no further node in the list. To reverse the Linked List, the current node needs to point to the previous node. If we directly change next of the current node to the previous node, then we will lose the reference to the next node in the list. So the first step is to store the next node.

next = current -> next

Step 2: Change the reference of next of the current node to the previous node
This is the major step, where the current node will point to the previous node instead of the next node. We already stored the next node reference. But which node is the previous node, and how will we get it? There is no previous node for the head node; as said above, the head node will become the tail, so the head's next will be null. For any other node in the list, there will be a previous node.

prev = null (at the beginning)
current -> next = prev

Step 3: Assign the reference of current to previous
We updated next of the current node in step 2. Now the current node will behave like the previous node for the next node in the list, so let's store the current node reference in previous. After this step, current will not refer to any node.
previous = current

Step 4: Assign the reference of next to current to repeat the process
Now we need to repeat steps 1 to 3 for the next node in the Linked List. We already stored the next reference in step 1; that next node now becomes the current node.

current = next

Once all nodes have gone through the above four steps, we need to set the head of the Linked List to the "previous" reference, as at the end of the iteration "previous" will be pointing to the last node in the Linked List. In short, head and tail will be interchanged now.

Let's understand the above flow with pictures. Suppose we have a Linked List as below; after reversing, it should be as below. [Illustrations shown in the original post, not captured in this extraction.]

Java Program

package DataStructure;

/*
 * A node consists of two components. The first one is data, the other one is a pointer to the next node.
 * If a node does not point to any other node, its pointer will be null.
 */
public class Node {
    // Data component
    int data;
    // Pointer component
    Node next;

    // A constructor to create a node
    Node(int d) {
        data = d;
        // Since while creating a node we can not say in advance about the next node, the pointer will be null
        next = null;
    }
}

package DataStructure;

/*
 * A singly linked list consists of zero or more nodes.
 * The first node in a LinkedList is called the head and the last node
 * in a LinkedList is called the tail.
 */
public class LinkedList {
    private Node headNode; // start of list
    private Node tailNode; // end of list

    // To add a new node to the LinkedList
    public LinkedList addNode(LinkedList list, int data) {
        // Create a new node with given data
        Node newNode = new Node(data);
        newNode.next = null;

        /*
         * Before adding a new node, check if the list is empty. If the list is empty,
         * head and tail will be the same.
         */
        if (list.headNode == null) {
            list.headNode = newNode;
            list.tailNode = newNode;
        }
        // If the list is not empty, then set next of the last node to the new node
        // (the new node already has a null next from the constructor)
        else {
            list.tailNode.next = newNode;
            list.tailNode = newNode;
        }
        // Return the list by head
        return list;
    }

    // To print a LinkedList
    public void printList(LinkedList list) {
        // Get hold of the starting node
        Node currNode = list.headNode;
        System.out.print("Nodes in LinkedList are: ");
        // Traverse through the LinkedList till the current node becomes null
        while (currNode != null) {
            // Print the data at the current node
            System.out.print(currNode.data + " ");
            // Go to the next node
            currNode = currNode.next;
        }
        System.out.println();
    }

    // To reverse a Linked List
    public LinkedList reverseList(LinkedList list) {
        Node current;
        Node previous = null;
        Node next;
        current = list.headNode;
        // The old head becomes the new tail
        list.tailNode = list.headNode;
        while (current != null) {
            next = current.next;
            current.next = previous;
            previous = current;
            current = next;
        }
        list.headNode = previous;
        return list;
    }
}

Reverse LinkedList

package DataStructure;

public class ReverseLinkedList {
    public static void main(String[] args) {
        // Create Linked list
        LinkedList linkedList = new LinkedList();
        // Add nodes 1 to 5
        for (int i = 1; i <= 5; i++) {
            linkedList = linkedList.addNode(linkedList, i);
        }
        // Print the original list
        linkedList.printList(linkedList);
        // Reverse LinkedList
        linkedList = linkedList.reverseList(linkedList);
        // Print the reversed list
        linkedList.printList(linkedList);
    }
}

Output:
Nodes in LinkedList are: 1 2 3 4 5
Nodes in LinkedList are: 5 4 3 2 1

You can download/clone the above sample project from here. You can subscribe to my YouTube channel RetargetCommon to learn from video tutorials. If you have any doubt, feel free to comment below. If you like my posts, please like, comment, share and subscribe. Find all Selenium related posts here, all API manual and automation related posts here, and frequently asked Java Programs here.
Visual Multiplication - The Robertson Program for Inquiry-based Teaching in Mathematics and Science

Visual Multiplication Further Explained
In this video from tecmath, they break down each step required to multiply two-digit by two-digit numbers using this method. They also break down how to multiply three-digit by two-digit numbers.

"Chinese Stick Multiplication": Additional resource
This article from the Mathematics Education program at the University of Georgia breaks down the steps when multiplying one-digit by one-digit numbers, two-digit by two-digit numbers, and then three-digit by three-digit numbers using this method. The author has included visual examples to show you exactly how to group the intersections of sticks to find your products.
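The arithmetic behind the sticks generalizes cleanly: each crossing of a stick group from one factor with a stick group from the other contributes a digit product, and crossings on the same diagonal share a place value. A small Python sketch of that bookkeeping (our own illustration, not code from either linked resource):

```python
def stick_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers the way the stick method does:
    count intersections per digit pair, then carry along the diagonals."""
    da = [int(d) for d in str(a)]  # stick groups for a, most significant first
    db = [int(d) for d in str(b)]  # stick groups for b
    # diagonals[i + j] collects intersections of group i of a with group j of b
    diagonals = [0] * (len(da) + len(db) - 1)
    for i, x in enumerate(da):
        for j, y in enumerate(db):
            diagonals[i + j] += x * y
    # Carry between diagonals, least-significant diagonal first.
    result, place, carry = 0, 1, 0
    for s in reversed(diagonals):
        s += carry
        result += (s % 10) * place
        carry = s // 10
        place *= 10
    return result + carry * place

assert stick_multiply(21, 13) == 273     # the classic introductory example
assert stick_multiply(123, 45) == 5535   # carries across diagonals
```

Digits of 5 or more produce diagonal sums above 9, which is exactly where the carry step, often glossed over in the videos, comes in.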
Percentage Increase Calculator (2024)

The percentage increase calculator is a useful tool if you need to calculate the increase from one value to another in terms of a percentage of the original amount. Before using this calculator, it may be beneficial for you to understand how to calculate the percent increase by using the percent increase formula. The upcoming sections will explain these concepts in further detail.

How to calculate percent increase

The concept of percent increase is basically the amount of increase from the original number to the final number in terms of 100 parts of the original. An increase of 5 percent would indicate that, if you split the original value into 100 parts, that value has increased by an additional 5 parts. So if the original value increased by 14 percent, the value would increase by 14 for every 100 units, 28 for every 200 units, and so on. To make this even more clear, we will get into an example using the percent increase formula in the next section.

🙋 While the percentage increase calculator is important in mathematics, it is also useful in science, such as calculating the percent increase in mass of a chemical element in a compound.

Percent increase formula

The percent increase formula is as follows:

% increase = 100 × (final − initial) / |initial|

An example using the formula is as follows. Suppose a $1,250 investment increased in value to $1,445 in one year. What is the percent increase of the investment? To answer this, use the following steps:

1. Identify the initial value and the final value.
2. Input the values into the formula.
3. Subtract the initial value from the final value, then divide the result by the absolute value of the initial value.
4. Multiply the result by 100. The answer is the percent increase.
5.
Check your answer using the percentage increase calculator.

Working out the problem by hand, we get:

1. [(1,445 − 1,250)/1,250] × 100
2. (195/1,250) × 100
3. 0.156 × 100
4. 15.6 percent increase

The percentage growth calculator is a great tool to check simple problems. It can even be used to solve more complex problems that involve a percent increase. You may also find the percentage calculator useful in this type of problem.

Calculating percent decrease

If you want to know how to calculate percent decrease, we follow a very similar process to percent increase. Notice the slight modification of the formula:

% decrease = 100 × (initial − final) / |initial|

Suppose we have the same investment value after one year of $1,445. A year later the value decreased to $1,300. The percent decrease would be calculated as follows:

1. [(1,445 − 1,300)/1,445] × 100
2. (145/1,445) × 100
3. ≈ 0.10 × 100 = 10 percent decrease (to the nearest percent)

Closely related topics

Although we have just covered how to calculate percent increase and percent decrease, sometimes we are just interested in the change in percent, regardless of whether it is an increase or a decrease. If that is the case, you can use the percent change calculator or the percentage difference calculator. A situation in which this may be useful would be an opinion poll to see if the percentage of people who favor a particular political candidate differs from 50 percent. If you want to learn how to express the relative error between the observed and true values in any measurement, check our percent error calculator.

Where is percentage increase useful?

Percentage increase is useful when you want to analyze how a value has changed with time. Although percentage increase is very similar to absolute increase, the former is more useful when comparing multiple data sets.
For example, a change from 1 to 51 and from 50 to 100 both have an absolute change of 50, but the percentage increase for the first is 5000%, while for the second it is 100%, so the first change grew relatively a lot more. This is why percentage increase is the most common way of measuring growth.

How do I calculate percentage increase over time?

Here are the steps to calculate a percentage increase over time:

1. Divide the larger number by the original number. If you have already calculated the percentage change, go to step 4.
2. Subtract one from the result of the division.
3. Multiply this new number by 100. You now have the percentage change.
4. Divide the percentage change by the period of time between the two numbers.
5. You now have the percentage increase over time. Remember that the units will be % / [time], where time is the unit you divided by, e.g., s for seconds, min for minutes, etc.
6. For linear plots, multiply this number by any time difference to get the percentage change between the two times.
7. For non-linear plots, just replace the larger number with your equation and solve algebraically. This will only find the percentage change between a number you input and the original number.

How do I add a percentage increase to a number?

If you want to increase a number by a certain percentage, follow these steps:

1. Divide the number you wish to increase by 100 to find 1% of it.
2. Multiply 1% by your chosen percentage.
3. Add this number to your original number.
4. There you go. You have just added a percentage increase to a number!

How do I add 5% to a number?

To add 5% to a number:

1. Divide the number you wish to add 5% to by 100.
2. Multiply this new number by 5.
3. Add the product of the multiplication to your original number.
4. Enjoy working at 105%!

How do I add two percentages?

To add two percentages together, follow these steps:

1. Calculate the first percentage by dividing the number you wish to find the percentage of by 100.
2.
Multiply the result by the percentage in its percentage form (e.g., 50 for 50%) to get the percentage of the original number.
3. Repeat steps 1 & 2 for the other number.
4. Add these two numbers together to get the addition of two percentages.
5. If the number you wish to find the percentage of is the same for both percentages, you can just add the two percentages together and use this new percentage to get the result of the addition.

How do I calculate a 10% increase?

1. Divide the number you are adding the increase to by 10.
2. Alternatively, multiply the value by 0.1.
3. Add the product of the previous step to your original number.
4. Be proud of your mathematical ability!

How do I make a percentage?

1. Decide two things: the number of which you want to find the percentage, and your chosen percentage.
2. Divide the chosen number by 100.
3. Multiply this new number by your chosen percentage.
4. There you go. You've just made a percentage!

What is a 50% increase?

A 50% increase is where you increase your current value by an additional half. You can find this value by finding half of your current value and adding this to the value. For example, if you wanted to find what a 50% increase to 80 was, you'd divide by 2 to get 40, and add the two values together to get 120. A 50% increase is different from a 100% increase, which is double the original value.

How do I calculate percentage increase in Excel?

While it's easier to use Omni Calculator's Percentage Increase Calculator, here are the steps to calculate the percentage increase in Excel:

1. Input the original number (for example, into cell A1).
2. Input the increased number (for example, into cell B1).
3. Subtract the original number from the increased number (in C1, input =B1-A1) and label it 'difference'.
4. Divide the difference by the original number and multiply it by 100 (in D1, input =(C1/A1)*100) and label it 'percentage increase'.
5. Right-click on the final cell and select Format Cells.
6.
In the Format Cells box, under Number, select Percentage and specify your desired number of decimal places.

How do I add 20% to a number?

1. Divide the original number by 100 to get 1% of it.
2. Multiply 1% by your desired percentage, in this case, 20.
3. Add the product of the previous step to your original number.
4. Congratulate yourself on adding 20% to your number!
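The percent-change and "add a percentage" recipes above are easy to express in code. A small Python sketch (the function names are our own, not from the article):

```python
def percent_change(initial: float, final: float) -> float:
    """Percent change from initial to final; positive means an increase."""
    if initial == 0:
        raise ValueError("percent change from zero is undefined")
    return 100.0 * (final - initial) / abs(initial)

def add_percent(value: float, percent: float) -> float:
    """Increase value by the given percent (e.g. percent=20 adds 20%)."""
    return value + (value / 100.0) * percent

# The article's investment example: $1,250 grows to $1,445 -> 15.6% increase.
assert abs(percent_change(1250, 1445) - 15.6) < 1e-9
# A decrease shows up as a negative change: $1,445 down to $1,300.
assert percent_change(1445, 1300) < 0
# Adding 20% to a number, as in the last FAQ entry above.
assert abs(add_percent(50, 20) - 60) < 1e-9
```

Note that `percent_change` is signed, so one function covers both the percent-increase and percent-decrease formulas from the text.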
Permutation group

Symmetric group
In abstract algebra, the symmetric group defined over any set is the group whose elements are all the bijections from the set to itself, and whose group operation is the composition of functions. In particular, the finite symmetric group defined over a finite set of n symbols consists of the permutations that can be performed on the n symbols. Since there are n! (n factorial) such permutation operations, the order (number of elements) of the symmetric group is n!.

Group theory
In abstract algebra, group theory studies the algebraic structures known as groups. The concept of a group is central to abstract algebra: other well-known algebraic structures, such as rings, fields, and vector spaces, can all be seen as groups endowed with additional operations and axioms. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra. Linear algebraic groups and Lie groups are two branches of group theory that have experienced advances and have become subject areas in their own right.

Permutation
In mathematics, a permutation of a set is, loosely speaking, an arrangement of its members into a sequence or linear order, or if the set is already ordered, a rearrangement of its elements. The word "permutation" also refers to the act or process of changing the linear order of an ordered set. Permutations differ from combinations, which are selections of some members of a set regardless of order. For example, written as tuples, there are six permutations of the set {1, 2, 3}, namely (1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), and (3, 2, 1).
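Since the finite symmetric group is just all orderings of n symbols under composition, its elements and its order n! are easy to enumerate; a quick Python illustration:

```python
from itertools import permutations
from math import factorial

# All permutations of the set {1, 2, 3}, written as tuples.
perms = list(permutations([1, 2, 3]))
for p in perms:
    print(p)

# The symmetric group S_n has order n!; here n = 3, so 6 elements.
assert len(perms) == factorial(3) == 6
assert (3, 1, 2) in perms
```

The six tuples printed are exactly the six permutations listed at the end of the Permutation entry above.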
Boris Valerianovich Chirikov

From Scholarpedia
Dima Shepelyansky (2008), Scholarpedia, 3(10):6628. doi:10.4249/scholarpedia.6628, revision #199457

Boris Valerianovich Chirikov (Russian: Борис Валерианович Чириков), born 6 June 1928 in Oryol, Russia, USSR, died 12 February 2008 in Akademgorodok, Novosibirsk, was an outstanding Russian physicist. He was the founder of the physical theory of Hamiltonian chaos and made pioneering contributions to the theory of quantum chaos. In 1959, he invented the Chirikov criterion, an analytical method to determine the conditions for the emergence of deterministic chaos in Hamiltonian dynamical systems.

Life and Physics

B.V. Chirikov's mother was Lidia Vasilievna Chirikova, who worked as a teacher, pedagogue, and librarian. His father, Valerian Nikolaevich Leronskii, left the family and Boris did not remember him. Their small family lived in Oryol approximately until 1936, when they both fled famine and went to Leningrad, where one of his mother's sisters helped them to settle. They lived there until the war. Around 1942, together with other children in his mother's care, they were evacuated from Leningrad to the southern region of Russia around Krasnodar. About four months later this region was occupied by the German army, and they lived under occupation until liberated by the Soviet Army in 1944. Soon after that, his mother died from leukemia. Fortunately, Boris was helped by a teacher at his school, who took him into her home. After finishing school in 1945, Boris went to Moscow to continue his studies.
He did his undergraduate and master's studies there, and continued his experimental studies at the Thermotechnical Laboratory (TTL), which later evolved into the Institute for Theoretical and Experimental Physics (ITEP). After graduating from the Moscow Institute of Physics and Technology in 1952, Chirikov was for a few years involved in the study of meson physics at the TTL. In 1954 he accepted the offer of Gersh Budker, at that time Head of the Laboratory of Novel Acceleration Methods, to join his group at LIPAN (currently the Kurchatov Institute) and to start working on problems of accelerator and plasma physics. In 1958 Budker founded the Institute of Nuclear Physics (INP) in Akademgorodok, Novosibirsk (now the Budker Institute of Nuclear Physics). Chirikov became a member of the INP on April 15, 1958, the same day as Budker. He moved to Siberia in September 1959. From then until his last days he worked at the INP, first as an experimentalist, and then gradually evolving into a world-class theoretician. He became a corresponding member of the Russian Academy of Sciences in 1983, and a full member in 1992. Chirikov contributed much to the teaching of physics at Novosibirsk State University, where he began to give lectures immediately after the university's foundation in September 1959. His lectures attracted hundreds of students to physics. He is survived by his wife Olga Bashina and daughter Galya Chirikova. The name of Boris Chirikov is associated with an impressive list of fundamental results in the field of dynamical chaos and the foundations of statistical mechanics. As early as 1959, in a seminal article, Chirikov proposed a criterion for the emergence of classical chaos in Hamiltonian systems, now known as the Chirikov criterion (Atom. Energ. 6: 630 (1959)).
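In its simplest form, the resonance-overlap criterion can be stated as follows; this is the standard textbook formulation rather than the notation of the 1959 paper. Global chaos sets in when the sum of the half-widths of two neighboring nonlinear resonances exceeds the distance between them:

```latex
% Overlap parameter S for two neighboring resonances with
% half-widths \Delta\omega_{1,2} and frequency separation |\Omega_1 - \Omega_2|.
S \;=\; \frac{\Delta\omega_1 + \Delta\omega_2}{\left|\Omega_1 - \Omega_2\right|}
\;\gtrsim\; 1
\quad\Longrightarrow\quad \text{onset of global chaos.}
```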
In the same paper, he applied this criterion to explain some puzzling experimental results on plasma confinement in open mirror traps, or magnetic bottles, that had just been obtained at the Kurchatov Institute. As in an old oriental tale, Chirikov opened such a bottle and freed the genie of Chaos, which spread the world over. In fact, this was the very first physical theory of chaos, which succeeded in explaining a concrete experiment, and which was developed long before computers made the icons of chaos familiar to everyone. In the early 1960s Chirikov understood the importance of numerical experiments for the research of chaotic dynamics and used computers intensively to gain a deep understanding of the properties of chaos. Other results obtained by him with his group at the INP include: the determination of the strong chaos border and the explanation of the Fermi-Pasta-Ulam problem; the derivation of the chaos border for the Fermi acceleration model; the numerical computation of the Kolmogorov-Sinai entropy in area-preserving maps; the answer to the question of Poincaré about the width of the chaotic separatrix layer; the investigation of chaotic attractors in dissipative dynamical systems; the investigation of weak instabilities in many-dimensional Hamiltonian systems (Arnold diffusion and modulational diffusion); the demonstration that the homogeneous models of the classical Yang-Mills field have positive Kolmogorov-Sinai entropy, and are therefore generally not integrable; the discovery of the power-law decay of Poincaré recurrences in Hamiltonian systems with divided phase space; and the demonstration that the dynamics of Halley's comet is chaotic and is described by a simple map. He (essentially) invented the Chirikov standard map, described its chaos properties, established its universality, and found a variety of applications. In 1977, he initiated the investigation of the quantum version of this map, also known as the kicked rotator.
This led to the discovery of the phenomenon of dynamical localization of quantum chaos, which can be considered a dynamical, deterministic version of the Anderson localization appearing in disordered solid-state systems. The research performed by his group established the grounds of the correspondence principle for the dynamics of quantum chaos systems, and showed that classical chaos survives only on the logarithmically short Ehrenfest time scale. The predictions of the theory of dynamical localization have been observed in experiments with hydrogen and Rydberg atoms in a microwave field and with cold atoms in optical lattices. The quantum Chirikov standard map has been experimentally implemented with cold atoms and Bose-Einstein condensates in kicked optical lattices. The main results of Chirikov, known as the X Chirikov Chaos Commandments (see Fig.3), are described in more detail in the Special Volume dedicated to his 70th birthday, published by Physica D. The influence of Chirikov's ideas on the field of chaos can also be gauged by the abundance of terms in common use which were originally coined by him: the Kolmogorov-Arnold-Moser (KAM) theory (Ref.4), the Kolmogorov-Sinai entropy (Refs.4,5), the Arnold diffusion (Ref.4), the standard map (Ref.5), the kicked rotator (Ref.6), dynamical localization, the Ehrenfest time (Ref.12). Chirikov was a rare example of a physicist who was able to discuss science with mathematicians and philosophers, and to publish articles in philosophy (Ref.15). The physical theory of deterministic chaos developed by Boris Chirikov finds applications in the dynamics of the solar system, particle dynamics in accelerators, magnetic plasma traps, complex quantum dynamics, and various other systems.

Chaotic Stories

• Chirikov presented his criterion at a seminar in the Kurchatov Institute in 1958. Soon after that he had a meeting with Kolmogorov at Kolmogorov's home.
After listening to Chirikov's explanation of the criterion, Kolmogorov said: "one should be a very brave young man to claim such things!". Indeed, even now a mathematical proof of the criterion is still lacking, and there are even known counterexamples of nonlinear systems with a hidden symmetry, where the dynamics remains integrable even in the case of strongly overlapped resonances. A typical example is the Toda lattice. However, such systems with a hidden symmetry are quite rare and specific, while for generic Hamiltonian systems this physical criterion works nicely and determines very well the border for the onset of chaos. The paper with the criterion was published in 1959, since research on plasma physics became public only after the London agreement in 1958.

• During the Mathematical Congress in Moscow in 1966 Chirikov met Stanislaw Ulam, and they spent all the days of the conference in heated discussions about chaotic dynamics, nonlinear chains, and the simple chaotic maps they were both interested in. In fact Ulam came to Moscow via France, where he got a Soviet visa. Ulam was due to visit Novosibirsk in the following year (see documents from the archive of Boris Chirikov), but all invitations from Novosibirsk remained without any reply: indeed Ulam was too involved in the thermonuclear weapons project, and these letters were stopped by the American side. However, in 1966 the visit of Ulam to the Mathematical Congress in Moscow was completely ignored by the intelligence services of the USA and USSR. Chirikov and Ulam met briefly again in 1979, during Chirikov's visit to the USA, but at that time Ulam was already well protected.

• To give an idea of the importance of chaos research in Siberia, it is useful to quote a passage from the letter of Joe Ford (Georgia Tech) addressed to Chirikov on December 18, 1986 and found in the archive of Boris Chirikov: "...
your fame has grown and the number of your visits to the West have remained small, you have become something of a "cult" figure. When your name comes up in a crowd of chaos workers (as it inevitably does), the majority who do not know you personally begin to show signs of envy towards those few who do. Stories about Chirikov are told and retold, eventually by people who only claim to know you but do not. I noticed a couple of years ago that even Michael Berry showed signs of succumbing to the Chirikov mania when he somewhat wistfully mentioned to me that he might like to visit Novosibirsk someday. Of course, now he has met you, so he is one of the "in crowd". You have become a legend in your own time by simply not being ..."

• Other stories can be found in the reminiscences of Boris Chirikov (see more at http://www.quantware.ups-tlse.fr/chirikov/publications.html)

References

1. B.V.Chirikov, "Resonance processes in magnetic traps", At. Energ. 6: 630 (1959) (in Russian; Engl. Transl., J. Nucl. Energy Part C: Plasma Phys. 1: 253 (1960))
2. B.V.Chirikov, G.M.Zaslavsky, "On the mechanism of one-dimensional Fermi acceleration", Dokl. Akad. Nauk SSSR 159: 306 (1964)
3. B.V.Chirikov, F.M.Izrailev, "Statistical properties of a non-linear string", Dokl. Akad. Nauk SSSR 166: 57 (1966)
4. B.V.Chirikov, "Research concerning the theory of nonlinear resonance and stochasticity", Preprint N 267, Institute of Nuclear Physics, Novosibirsk (1969) (Engl. Transl., CERN Trans. 71-40 (1971))
5. B.V.Chirikov, "A universal instability of many-dimensional oscillator systems", Phys. Rep. 52: 263 (1979)
6. G.Casati, B.V.Chirikov, F.M.Izrailev, J.Ford, "Stochastic behavior of a quantum pendulum under a periodic perturbation", Lecture Notes in Physics, Springer, Berlin, 93: 334 (1979)
7. B.V.Chirikov, F.M.Izrailev, D.L.Shepelyansky, "Dynamical stochasticity in classical and quantum mechanics", Sov. Scient. Rev. C 2: 209 (1981) (Section C - Mathematical Physics Reviews, Ed. S.P.Novikov, vol. 2, Harwood Acad. Publ., Chur, Switzerland (1981))
8. B.V.Chirikov, D.L.Shepelyanskii, "Stochastic oscillations of classical Yang-Mills fields", JETP Lett. 34: 163 (1981)
9. B.V.Chirikov, F.M.Izrailev, "Degeneration of turbulence in simple systems", Physica D 2: 30 (1981)
10. B.V.Chirikov, D.L.Shepelyansky, "Correlation properties of dynamical chaos in Hamiltonian systems", Physica D 13: 395 (1984)
11. G.Casati, B.V.Chirikov, D.L.Shepelyansky, I.Guarneri, "Relevance of classical chaos in quantum mechanics: the hydrogen atom in a monochromatic field", Phys. Rep. 154: 77 (1987)
12. B.V.Chirikov, F.M.Izrailev, D.L.Shepelyansky, "Quantum chaos: localization vs. ergodicity", Physica D 33: 77 (1988)
13. B.V.Chirikov, V.V.Vecheslavov, "Chaotic dynamics of comet Halley", Astron. Astrophys. 221: 146 (1989)
14. B.V.Chirikov, "Time-dependent quantum systems", in "Chaos and quantum physics", Eds. M-J.Giannoni, A.Voros and J.Zinn-Justin, Les Houches, Session LII (1989), Elsevier Sci. Publ. B.V., p. 443 (1991)
15. B.V.Chirikov, "Natural laws and human prediction", in "Law and prediction in the light of chaos research", Eds. P.Weingartner, G.Schurz, Springer, Berlin, Lecture Notes in Physics 473: 10 (1996)

Internal references

• John W. Milnor (2006) Attractor. Scholarpedia, 1(11):1815.
• David H. Terman and Eugene M. Izhikevich (2008) State space. Scholarpedia, 3(3):1924.

External links

See also: Chirikov standard map, Chirikov criterion, Hamiltonian systems, Mapping, Chaos, Kolmogorov-Arnold-Moser Theory, Kolmogorov-Sinai entropy, Aubry-Mather theory, Quantum chaos
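The Chirikov standard map at the center of this work is simple enough to iterate in a few lines. A minimal sketch (the function name is our own label; K is the usual stochasticity parameter, with global chaos setting in near K ≈ 1):

```python
import math

def standard_map(theta, p, K, n):
    """Iterate the Chirikov standard map n times:
         p'     = p + K * sin(theta)
         theta' = (theta + p') mod 2*pi
    For K = 0 the motion is integrable (p is conserved); chaos spreads
    over the whole phase space once K grows past roughly 1.
    """
    for _ in range(n):
        p = p + K * math.sin(theta)       # kick the momentum
        theta = (theta + p) % (2 * math.pi)  # free rotation
    return theta, p
```

For K = 0 the momentum is an exact invariant, which gives a quick sanity check on any implementation.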
Hitachi AMS 200, AMS 500, and AMS 1000, WMS100 You can attach Hitachi Data Systems (HDS) Thunder, Hitachi AMS 200, AMS 500, and AMS 1000, WMS100, and HDS TagmaStore Workgroup Modular Storage (WMS) systems to the system. Note: In Japan, the HDS Thunder 9200 is referred to as the HDS SANrise 1200. Therefore, the information that refers to the HDS Thunder 9200 also applies to the HDS SANrise 1200.
ib Calculus math • Expert Tutors for Online Tuition in Pakistan & Saudi Arabia

Calculus | Pre-Calculus | Advanced Calculus Tutors USA

Find expert calculus tutors in the USA at ASVA for top-notch tuition in Pre-Calculus, college-level, and advanced calculus. Highly qualified and experienced tutors available.

Select the Best Calculus Tutors in the USA

Mastering Calculus with the Best Tutors in the USA: When it comes to conquering the intricacies of calculus, having the right guidance can make all the difference. Whether you're navigating the challenges of Pre-Calculus, delving into college-level calculus, or exploring the realms of advanced calculus, ASVA is your destination for the finest calculus tutors in the USA. Our handpicked team of highly qualified and experienced tutors is dedicated to helping you not only understand but excel in the world of calculus.

Pre-Calculus Made Simple: For those laying the foundation for their calculus journey, Pre-Calculus can sometimes feel like a daunting task. Our experienced tutors are well-versed in simplifying complex concepts, breaking them down into digestible chunks, and providing personalized guidance that caters to your learning style. With their support, you'll build a solid understanding that will set you up for success in more advanced calculus studies.

College-Level Calculus Excellence: Transitioning from high school to college-level calculus might feel overwhelming, but ASVA's tutors possess the tools to guide you through this transition effortlessly. Leveraging their profound subject expertise and years of experience, our tutors will assist you in mastering the intricacies of college-level calculus. This preparation ensures you're ready to confidently overcome the challenges that await.

Unveiling Advanced Calculus: Our experts are ready to accompany those who dare to venture into advanced calculus on this intellectual odyssey.
Having mastered the intricacies of advanced calculus themselves, they provide the insights and strategies necessary to conquer complex integrals, differential equations, and more.

Personalized Approach, Tangible Results: At ASVA, we understand that every student's learning journey is unique. That's why our calculus tutors tailor their teaching methods to suit your individual needs. Whether you're an auditory, visual, or kinesthetic learner, they will tailor their approach to provide the guidance you need to excel in calculus. Don't let calculus become an obstacle in your academic path. ASVA's team of dedicated tutors is here to transform your calculus experience, making it engaging, understandable, and even enjoyable. Whether you're a beginner in Pre-Calculus or delving into advanced calculus, our expert tutors are here to guide you. They'll support you every step of the way, ensuring mastery of the subject and helping you reach your academic aspirations.

Calculus Tutor Saudi Arabia

Top-notch calculus tutor in Saudi Arabia for students of all levels. Expert and experienced teachers for Pre-Calculus, college-level, and advanced calculus.

Get the Best Calculus Expert Online Tutor in Saudi Arabia

Enhance Your Calculus Skills with Expert Tutors in Saudi Arabia: When it comes to mastering calculus, having a skilled tutor by your side can make all the difference. Whether you're navigating the complexities of Pre-Calculus, delving into college-level calculus, or tackling advanced calculus concepts, our team of highly qualified tutors in Saudi Arabia is here to provide the guidance you need.

Pre-Calculus Excellence with Expert Saudi Arabia Calculus Tutors: Pre-Calculus acts as the cornerstone for a triumphant calculus expedition. Our Saudi Arabia calculus tutors excel at dissecting intricate pre-calculus concepts, guaranteeing you establish a strong understanding of the basics before tackling more advanced subjects.
Excel in College-Level Calculus: Stepping into college-level calculus may seem challenging, yet under the guidance of our expert Saudi Arabia calculus tutors you'll emerge well-prepared and self-assured. Employing a student-centric methodology, our tutors adjust their teaching techniques to harmonize with your unique learning rhythm and preferences.

Unveiling Advanced Calculus: If you yearn for a more profound grasp of calculus, our tutors possess the expertise to lead you through advanced calculus intricacies. Covering subjects like multivariable calculus and differential equations, they will demystify intricate theories and assist you in their practical application.

Why Choose Our Tutors? Our tutors are not only highly qualified but also bring a wealth of experience to the table. With a proven track record of success, they have honed their skills in assisting students of varying abilities and learning preferences.

Creating a Personalized Learning Journey: We understand that every student is unique. That's why our tutors craft personalized lesson plans that cater to your strengths, address your weaknesses, and ultimately foster substantial progress in your calculus proficiency.

A Supportive Learning Environment: Our tutors foster a supportive and engaging learning environment, where asking questions is encouraged and no concept is too challenging to conquer. By promoting active dialogue, we ensure you grasp each concept thoroughly.

Elevate your calculus skills with the finest tutors in Saudi Arabia. Whether you're embarking on Pre-Calculus, overcoming college-level hurdles, or delving into advanced calculus, our experts support you at every step. Empower yourself with a solid foundation and a comprehensive understanding of calculus, setting the stage for a successful academic journey.
Don’t miss out on the opportunity to excel – join us and embark on a calculus-learning adventure like no other.
Given a set $\mathcal S$ and a class (collection of sets) $\mathcal H$, a subset $s$ of $\mathcal S$ is said to be picked out by an element $h$ of $\mathcal H$ if $$ h \cap \mathcal S = s. $$ Since the power set of $\mathcal S$, $P(\mathcal S)$, contains all the possible subsets of $\mathcal S$, we can also rephrase the concept using the power set: if every element of $P(\mathcal S)$ can be obtained as the intersection of some element $h \in \mathcal H$ with $\mathcal S$, then we say $\mathcal H$ shatters $\mathcal S$. In short, the set $\mathcal S$ is shattered by the class $\mathcal H$ if we can generate all possible subsets of $\mathcal S$ (the power set of $\mathcal S$) in this way. L Ma (2021). 'Shatter', Datumorphism, 10 April. Available at: https://datumorphism.leima.is/cards/machine-learning/learning-theories/set-shatter/.
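For finite sets this definition can be checked directly by brute force. A small sketch (function and variable names are our own):

```python
from itertools import combinations

def shatters(H, S):
    """Return True if the class H (an iterable of sets) shatters the set S,
    i.e. every subset s of S arises as h & S for some h in H."""
    S = frozenset(S)
    # All subsets of S that H actually picks out via intersection.
    realized = {frozenset(set(h) & S) for h in H}
    # The full power set of S.
    subsets = {frozenset(c) for r in range(len(S) + 1)
               for c in combinations(S, r)}
    return subsets <= realized
```

For example, the class [{}, {1}, {2}, {1, 2, 3}] shatters {1, 2}, since intersecting its members with {1, 2} yields all four subsets, while [{1}, {1, 2}] does not (the empty set and {2} are never picked out).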
[ 445VCOETFDK23 ] KV Thermofluiddynamics

Workload: 3 ECTS
Education level: M1 - Master's programme 1. year
Study areas: (*)Maschinenbau
Responsible person: Philipp Gittler
Hours per week: 2 hpw
Coordinating university: Johannes Kepler University Linz

Detailed information
Original study plan: Master's programme Mechanical Engineering 2023W

Objectives: After successfully engaging with the topics of this course, students will be able to
• describe the physical meaning of the individual terms of the Navier-Stokes equations,
• comprehend the solution approach of classical examples of exact solutions of the Navier-Stokes equations and provide a physical interpretation of the results,
• comprehend the derivation of the boundary layer equations and describe basic solutions of laminar boundary layer problems,
• explain the fundamentals of turbulent boundary layers,
• solve basic steady and transient heat conduction problems,
• estimate heat transfer due to forced and natural convection,
• conduct analytical calculations of flow and heat transfer problems.
The level of mathematical modeling and analysis of the mentioned topics is comparable to the level of the textbooks White, Viscous Fluid Flow, 1991 and Baehr, Stephan, Wärme- und Stoffübertragung, 2013.

Subject:
• basic equations (Navier-Stokes equations)
• exact solutions of the Navier-Stokes equations
• boundary-layer theory
• heat conduction
• convective heat transfer

Criteria for evaluation: Written assignment, written and/or oral exam
Methods: Lecture by means of a script, development of practical examples
Language: German

Study material:
• H. Schlichting, K. Gersten: Grenzschicht-Theorie, Springer Verlag, 1997.
• F. M. White: Viscous Fluid Flow, McGraw-Hill, 1991.
• M. Jischa: Konvektiver Impuls-, Wärme- und Stoffaustausch, Vieweg, 1982.
• H. D. Baehr, K. Stephan: Wärme- und Stoffübertragung, Springer, 2013.

Changing subject? No
Further information: none
Earlier variants (they also cover the requirements of the curriculum, from - to):
• 481VMSSTFDK22: KV Thermo-Fluid Mechanics (2022W-2023S)
• MEMPBKVTHFD: KV Thermo-Fluid Mechanics (2010W-2022S)
On-site course
Maximum number of participants: -
Assignment procedure: Assignment according to sequence
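As a small illustration of the "exact solutions of the Navier-Stokes equations" topic listed above: plane Couette flow, fluid sheared between a fixed plate at y = 0 and a plate moving with speed U at y = h, has the exact linear velocity profile u(y) = U·y/h. A minimal sketch, with names of our own choosing:

```python
def couette_velocity(y, U, h):
    """Exact steady Navier-Stokes solution for plane Couette flow:
    no-slip at the fixed wall (y = 0) and at the moving wall (y = h)
    gives the linear profile u(y) = U * y / h."""
    if not 0.0 <= y <= h:
        raise ValueError("y must lie between the plates")
    return U * y / h
```

The no-slip conditions at both walls are satisfied by construction, and the velocity varies linearly in between.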
Determination of the density of a solid by using a spring balance and a measuring cylinder

Aim: To determine the density of a solid (denser than water) by using a spring balance and a measuring cylinder.

Materials Required: Spring balance, measuring cylinder, water, solid body (denser than water)

Theory: The density of a substance is defined as its mass per unit volume, i.e., Density = Mass/Volume. The S.I. unit of density is kg/m^3.

Procedure:
1. First find the least count and zero error of the spring balance.
2. Suspend the given solid body from the hook of the spring balance and find the true weight of the solid body. Let it be m gf; the mass of the given solid body is then m g.
3. Take the measuring cylinder, half fill it with water, and note the initial reading of the cylinder.
4. Gently immerse the given solid body completely in water and note the final reading of the cylinder.
5. Repeat steps 3 and 4 three times and calculate the mean of the three observations.

Observations:
Least count of spring balance = 5 gf
Zero error of spring balance = ± 0 gf
Weight of solid body in air = 20 gf
Mass of solid body in air = 20 g

S. No. | Initial volume of water (V1) | Final volume of water (V2) | Volume of solid (V2 - V1)
1.     | 155 ml                       | 162.5 ml                    | 7.5 ml
2.     | 210 ml                       | 218 ml                      | 8 ml
3.     | 220 ml                       | 227.5 ml                    | 7.5 ml

Calculations:
Mean volume of the solid body = (7.5 + 7.5 + 8)/3 ≈ 7.67 ml
Density = Mass/Volume = 20/7.67 ≈ 2.6 g/ml

Result: The density of the given solid = 2.6 g/ml = 2.6 g/cm^3

Precautions:
1. The reading of the concave surface (meniscus) of the liquid in the measuring cylinder should be taken with the eye level parallel to it.
2. The solid should not touch the sides or bottom of the cylinder.
3. The solid should be completely immersed in the liquid.
4. While taking the reading of the measuring cylinder, keep your eye in the horizontal plane with the liquid level.
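The arithmetic of the experiment is easily reproduced in a few lines (values taken from the observation table; variable names are ours):

```python
mass_g = 20.0                 # mass from the spring balance (20 gf -> 20 g)
volumes_ml = [7.5, 8.0, 7.5]  # V2 - V1 for the three trials

mean_volume_ml = sum(volumes_ml) / len(volumes_ml)  # mean displaced volume
density = mass_g / mean_volume_ml                   # g/ml, i.e. g/cm^3
```

Since 1 ml = 1 cm^3, the density in g/ml is numerically the same as in g/cm^3, which is why the result can be quoted either way.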
How do you find #sec x = (-2sqrt3)/3#?

Answer from HIX Tutor

Since sec x = 1/cos x, the equation sec x = -2sqrt3/3 = -2/sqrt3 gives cos x = -sqrt3/2. On the unit circle, the cosine equals -sqrt3/2 at x = 5π/6 (150°) and x = 7π/6 (210°), so the general solution is x = 5π/6 + 2kπ or x = 7π/6 + 2kπ for any integer k.
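Since sec x = 1/cos x, any candidate angle can be checked numerically; x = 5π/6 (150°) is one angle whose secant equals -2√3/3 (a quick sketch):

```python
import math

x = 5 * math.pi / 6            # 150 degrees, a candidate solution
sec_x = 1 / math.cos(x)        # sec x = 1 / cos x
target = -2 * math.sqrt(3) / 3  # the required value, -2*sqrt(3)/3
```

The same check passes for x = 7π/6, since cosine takes the value -√3/2 at both angles.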
Data Science Interview Questions

If you're looking for Data Science Interview Questions & Answers for Experienced or Freshers, you are at the right place. There are a lot of opportunities from many reputed companies in the world. According to research, the Data Science market is expected to reach $128.21 billion with a 36.5% CAGR forecast to 2022. So, you still have the opportunity to move ahead in your career in Data Science Analytics.

Q. What Is Data Science?

Data Science is a new area of specialization being developed by the Department of Mathematics and Statistics at Bowling Green State University. This field integrates math, statistics, and computer science to prepare for the rapidly expanding need for data scientists. Students seeking to pursue studies in Data Science should declare a mathematics major as entering freshmen, in anticipation of completing the specialization in years three and four.

Q. What are the requirements of a Data Science program?

The Data Science specialization requires three semesters of calculus (MATH 1310, MATH 2320, and MATH 2330 or MATH 2350), linear algebra (MATH 3320), introduction to programming (CS 2010), probability and statistics I (MATH 4410), and regression analysis (STAT 4020). In addition, the requirements include:
a) MATH 2950 Introduction to Data Science. This one-hour seminar would introduce freshman students to a variety of data-science applications and give them an introduction to programming.
b) MATH 3430 Computing with Data. This course will focus on data wrangling and data exploration computational skills in the context of a modern computing language such as Python or R.
c) MATH 3440 Statistical Programming. This course will focus on writing scripts and functions using a modern statistical language such as R.
d) MATH 4440 Statistical Learning. This course deals with modern methods for modeling data, including a variety of supervised and unsupervised methods.
In addition, the student will be asked to choose two of the following seven courses:
a) MATH 4320 Linear Algebra with Applications
b) MATH 4420 Probability and Statistics II
c) MATH 4470 Exploratory Data Analysis
d) CS 2020 Object-Oriented Programming
e) STAT 4440 Data Mining in Business Analytics
f) CS 4400 Optimization Techniques
g) CS 4620 Database Management Systems

Q. Compare SAS, R and Python.

a) SAS is commercial software. It is expensive and still beyond the reach of most professionals (in an individual capacity). However, it holds the highest market share in private organizations. So, unless you are in an organization which has invested in SAS, it might be difficult to access it. R and Python, on the other hand, are free and can be downloaded by anyone.
b) SAS is easy to learn and provides an easy option (PROC SQL) for people who already know SQL. Even otherwise, it has a good stable GUI interface in its repository. In terms of resources, there are tutorials available on the websites of various universities, and SAS has comprehensive documentation. There are certifications from SAS training institutes, but they again come at a cost. R has the steepest learning curve among the three languages listed here. It requires you to learn and understand coding. R is a low-level programming language, and hence simple procedures can require longer code. Python is known for its simplicity in the programming world. This remains true for data analysis as well.
c) SAS has decent functional graphical capabilities. However, it is just functional. Any customization of plots is difficult and requires you to understand the intricacies of the SAS Graph package. R has the most advanced graphical capabilities among the three. There are numerous packages which provide advanced graphical capabilities. Python's capabilities lie somewhere in between, with options to use native libraries (matplotlib) or derived libraries (allowing calling of R functions).
d) All three ecosystems have all the basic and most-needed functions available. This feature only matters if you are working on the latest technologies and algorithms. Due to their open nature, R and Python get the latest features quickly (R more so than Python). SAS, on the other hand, updates its capabilities in new version roll-outs. Since R has been used widely in academia in the past, development of new techniques in it is fast.

Q. Mention features of Teradata.

a) Parallel architecture: The Teradata Database provides exceptional performance, using parallelism to achieve a single answer faster than a non-parallel system. Parallelism uses multiple processors working together to accomplish a task quickly.
b) Single data store: The Teradata Database acts as a single data store instead of replicating a database for different purposes; with the Teradata Database we can store the data once and use it for all applications. The Teradata Database provides the same connectivity for all systems.
c) Scalability: Scalability means that components can be added to the system and performance increases linearly. Scalability enables the system to grow to support more users/data/queries/complexity of queries without experiencing performance degradation.

Q. What does a data scientist do?

A data scientist represents an evolution from the business or data analyst role. The formal training is similar, with a solid foundation typically in computer science and applications, modeling, statistics, analytics and math. What sets the data scientist apart is strong business acumen, coupled with the ability to communicate findings to both business and IT leaders in a way that can influence how an organization approaches a business challenge. Good data scientists will not just address business problems, they will pick the right problems that have the most value to the organization.
A traditional data analyst may look only at data from a single source – a CRM system, for example – whereas a data scientist will most likely explore and examine data from multiple disparate sources. The data scientist will sift through all incoming data with the goal of discovering a previously hidden insight, which in turn can provide a competitive advantage or address a pressing business problem. A data scientist does not simply collect and report on data, but also looks at it from many angles, determines what it means, then recommends ways to apply the data.

Q. How do Data Scientists Code in R?

R is a popular open source programming environment for statistics and data mining. The good news is that it is easily integrated into ML Studio. I have a lot of friends using functional languages for machine learning, such as F#. It's pretty clear, however, that R is dominant in this space: polls and surveys of data miners show R's popularity has increased substantially in recent years. R was created by Ross Ihaka and Robert Gentleman at the University of Auckland, New Zealand, and is currently developed by the R Development Core Team, of which Chambers is a member. R is named partly after the first names of the first two R authors. R is a GNU project and is written primarily in C and Fortran.

Q. What is Machine Learning?

Machine learning represents the logical extension of simple data retrieval and storage. It is about developing building blocks that make computers learn and behave more intelligently. Machine learning makes it possible to mine historical data and make predictions about future trends. Search engine results, online recommendations, ad targeting, fraud detection, and spam filtering are all examples of what is possible with machine learning. Machine learning is about making data-driven decisions. While instinct might be important, it is difficult to beat empirical data.

Q. What is the use of Machine Learning?
Machine Learning is found in things we use every day such as Internet search engines, email and online music and book recommendation systems. Credit card companies use machine learning to protect against fraud. Using adaptive technology, computers recognize patterns and anticipate actions. Machine Learning is used in more complex applications such as:
1. Self-parking cars
2. Guiding robots
3. Airplane navigation systems (manned and unmanned)
4. Space exploration
5. Medicine

Q. What is Machine Learning best suited for?

Machine Learning is good at replacing labor-intensive decision-making systems that are predicated on hand-coded decision rules or manual analysis. Five types of analysis that Machine Learning is well suited for are:
1. classification (predicting the class/group membership of items)
2. regression (predicting real-valued attributes)
3. clustering (finding natural groupings in data)
4. multi-label classification (tagging items with labels)
5. recommendation engines (connecting users to items)

Q. Define Boxplot?

In descriptive statistics, a boxplot, also known as a box-and-whisker diagram or plot, is a convenient way of graphically depicting groups of numerical data through their five-number summaries (the smallest observation, lower quartile (Q1), median (Q2), upper quartile (Q3), and largest observation). A boxplot may also indicate which observations, if any, might be considered outliers.

Q. What are the most important machine learning techniques?

In Association rule learning computers are presented with a large set of observations, all being made up of multiple variables. The task is then to learn relations between variables such as A ∧ B ⇒ C (if A and B happen, then C will also happen).

In Clustering computers learn how to partition observations into various subsets, so that each partition will be made up of similar observations according to some well-defined metric. Algorithms like K-Means and DBSCAN belong to this class.
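To make the clustering idea concrete, here is a minimal, self-contained sketch of Lloyd's algorithm (the core of K-Means) on a toy one-dimensional dataset. The data values, initial centroids and function name are invented for illustration; real use would rely on a library such as scikit-learn.

```python
# Minimal 1-D K-Means (Lloyd's algorithm) sketch on invented toy data.
# Assumes both clusters stay non-empty during the iterations.

def kmeans_1d(points, c0, c1, iters=10):
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid.
        cluster0 = [p for p in points if abs(p - c0) <= abs(p - c1)]
        cluster1 = [p for p in points if abs(p - c0) > abs(p - c1)]
        # Update step: each centroid moves to the mean of its cluster.
        c0 = sum(cluster0) / len(cluster0)
        c1 = sum(cluster1) / len(cluster1)
    return c0, c1

data = [1.0, 1.5, 2.0, 10.0, 10.5, 11.0]
print(kmeans_1d(data, min(data), max(data)))  # → (1.5, 10.5)
```

The two centroids converge to the means of the two natural groupings in the data, which is exactly the "partition into similar observations" described above.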
In Density estimation computers learn how to find statistical values that describe data. Algorithms like Expectation Maximization belong to this class.

Q. Why is it important to have a robust set of metrics for machine learning?

Any machine learning technique should be evaluated by using metrics for analytically assessing the quality of results. For instance: if we need to categorize objects such as people, movies or songs into different classes, precision and recall might be suitable metrics.

Precision is the ratio tp / (tp + fp), where tp is the number of true positives and fp is the number of false positives. Recall is the ratio tp / (tp + fn), where tp is the number of true positives and fn is the number of false negatives. True and false are attributes derived by using manually created data. Precision and Recall are typically reported in a 2-d graph known as a P/R Curve, where different algorithms can be compared by reporting the achieved Precision at fixed values of Recall. In addition, F1 is another frequently used metric, which combines Precision and Recall into a single value:

F1 = (2 * precision * recall) / (precision + recall)

Scikit-learn provides a comprehensive set of metrics for classification, clustering, regression, ranking and pairwise judgment. As an example, the code below computes Precision and Recall.

import numpy as np
from sklearn.metrics import precision_recall_curve

y_true = np.array([0, 1, 1, 0, 1])
y_scores = np.array([0.5, 0.6, 0.38, 0.9, 1])
precision, recall, thresholds = precision_recall_curve(y_true, y_scores)
print(precision)
print(recall)

Q. Why are Features extraction and engineering so important in machine learning?

The Features are the selected variables for making predictions.
For instance, suppose you'd like to forecast whether tomorrow will be a sunny day; then you will probably pick features like humidity (a numerical value), speed of wind (another numerical value), some historical information (what happened during the last few years), whether or not it is sunny today (a categorical value yes/no) and a few other features. Your choice can dramatically impact your model for the same algorithm, and you need to run multiple experiments in order to find the right amount of data and the right features to forecast with minimal error. It is not unusual to have problems represented by thousands of features and combinations of them, and a good feature engineer will use tools for stack-ranking features according to their contribution in reducing the error of prediction.

Different authors use different names for features, including attributes, variables and predictors. In this book we consistently use features. Features can be categorical, such as marital status, gender, state of residence or place of birth, or numerical, such as age, income, height and weight. This distinction is important because certain algorithms such as linear regression work only with numerical attributes, and if categorical features are present, they need to be somehow encoded into numerical values.

In other words, feature engineering is the art of extracting, selecting and transforming essential characteristics representing data. It is sometimes considered less glamorous than machine learning algorithms, but in reality any experienced Data Scientist knows that a simple algorithm on a well-chosen set of features performs better than a sophisticated algorithm on a not-so-good set of features. Also, simple algorithms are frequently easier to implement in a distributed way and therefore they scale well with large datasets. So the rule of thumb is what Galileo already said many centuries ago: "Simplicity is the ultimate sophistication".
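The categorical-to-numerical encoding mentioned above (needed, for example, before linear regression can use a feature such as marital status) can be sketched by hand with one-hot encoding. The category values below are invented for illustration; in practice a library encoder would be used.

```python
# One-hot encode a categorical feature using only the standard library.
# The category values are invented for illustration.

def one_hot(values):
    categories = sorted(set(values))             # fixed, reproducible ordering
    index = {c: i for i, c in enumerate(categories)}
    # Each value becomes a 0/1 vector with a single 1 at its category's index.
    return [[1 if index[v] == i else 0 for i in range(len(categories))]
            for v in values]

marital_status = ["single", "married", "single", "divorced"]
print(one_hot(marital_status))
# → [[0, 0, 1], [0, 1, 0], [0, 0, 1], [1, 0, 0]]
```

Each category gets its own column, so an algorithm that only understands numbers can still use the information without imposing an artificial ordering on the categories.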
Pick your algorithm carefully and spend a lot of time investigating your data and creating meaningful summaries with appropriate features.

Real world objects are complex and features are used to analytically represent those objects. On the one hand this representation has an inherent error, which can be reduced by carefully selecting the right set of representatives. On the other hand we might not want to create too complex a representation, because it might be computationally expensive for the machine to learn a sophisticated model; indeed such a model could possibly not generalize well to unseen data. Real world data is noisy. We might have very few instances (outliers) which show a sensible difference from the majority of the remaining data, while the selected algorithm should be resilient enough to outliers. Real world data might have redundant information. When we extract features, we might be interested in optimizing the simplicity of learned models and discarding new features which show a high correlation with the already observed ones.

ETL is the process of Extraction, Transformation and Loading of features from real data for creating various learning sets. Transformation in particular refers to operations such as feature weighting, discarding highly correlated features, the creation of synthetic features derived from the ones observed in the data, and the reduction of a high-dimensional feature space into a lower one by using either hashing or rather sophisticated space projection techniques.

For example in this book we discuss:
1. TFxIDF, an example of feature weighting used in text classification
2. ChiSquare, an example of filtering of highly correlated features
3. Kernel Trick, an example of creation of derivative features
4. Hashing, a simple technique to reduce feature space dimensions
5. Binning, an example of transformation of continuous features into discrete ones. New synthetic features might be created in order to represent the bins.

Q.
Can you provide an example of features extraction?

Let's suppose that we want to perform machine learning on textual files. The first step is to extract meaningful feature vectors from a text. A typical representation is the so-called bag of words:
1. Each word w in the text collection is associated with a unique integer id = wordId(w) assigned to it.
2. For each document i, the number of occurrences of each word w is computed and this value is stored in a matrix M(i, wordId(w)). Please note that M is typically a sparse matrix, because when a word is not present in a document its count will be zero. Numpy, Scikit-learn and Spark all support sparse vectors.

Let's see an example where we start by loading a dataset made up of Usenet articles, where the alt.atheism category is considered and the collection of text documents is converted into a matrix of token counts. We then print wordId('man').

from sklearn.datasets import fetch_20newsgroups

Two additional observations can be highlighted here: first, the training set, the validation set and the test set are all sampled from the same gold set, but those samples are independent. Second, it has been assumed that the learned model can be described by means of two different functions f and h combined by using a set of hyper-parameters. Unsupervised machine learning consists of test and application phases only, because there is no model to be learned a priori. In fact unsupervised algorithms adapt dynamically to the observed data.

Q. What is a Bias – Variance tradeoff?

Bias and Variance are two independent sources of errors for machine learning which prevent algorithms from generalizing the models learned beyond the training set.

a) Bias is the error representing missing relations between features and outputs. In machine learning this phenomenon is called underfitting.

b) Variance is the error representing sensitiveness to small training data fluctuations. In machine learning this phenomenon is called overfitting.
A good learning algorithm should capture patterns in the training data (low bias), but it should also generalize well to unseen application data. In general a complex model can show low bias because it captures many relations in the training data and, at the same time, it can show high variance because it will not necessarily generalize well. The opposite happens with models with high bias and low variance. In many algorithms the error can be analytically decomposed into three components: bias, variance and the irreducible error, the latter representing a lower bound on the expected error for unseen sample data. One way to reduce the variance is to try to get more data or to decrease the complexity of a model. One way to reduce the bias is to add more features or to make the model more complex, as adding more data will not help in this case. Finding the right balance between Bias and Variance is an art that every Data Scientist must be able to manage.

Q. What is cross-validation and what is overfitting?

Learning a model on a set of examples and testing it on the same set is a logical mistake, because the model would have no errors on the test set but would almost certainly have poor performance on the real application data. This problem is called overfitting, and it is the reason why the gold set is typically split into independent sets for training, validation and test. An example of a random split is reported in the code section, where a toy dataset with diabetics' data has been randomly split into two parts: the training set and the test set. As discussed in the previous question, given a family of learned models, the validation set is used for estimating the best hyper-parameters. However, by adopting this strategy there is still the risk that the hyper-parameters overfit a particular validation set. The solution to this problem is called cross-validation.
The idea is simple: the training set is split into k smaller sets called folds and the model is then learned on k - 1 folds, while the remaining data is used for validation. This process is repeated in a loop and the metrics achieved for each iteration are averaged. An example of cross-validation is reported in the section below, where our toy dataset is classified via SVM and accuracy is computed via cross-validation. SVM is a classification technique and "accuracy" is a quality measurement, which will be discussed later in the book. StratifiedKFold is a variation of k-fold where each set contains approximately the same balanced percentage of samples for each target class as the complete set.

import numpy as np
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split, cross_val_score

diabetes = datasets.load_diabetes()
X_train, X_test, y_train, y_test = train_test_split(
    diabetes.data, diabetes.target, test_size=0.2, random_state=0)
print(X_train.shape, y_train.shape)  # training split, 80%
print(X_test.shape, y_test.shape)    # test split, 20%
clf = svm.SVC(kernel='linear', C=1)
scores = cross_val_score(clf, diabetes.data, diabetes.target, cv=4)  # 4 folds
print(scores)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std()))

Q. Why are vectors and norms used in machine learning?

Objects such as movies, songs and documents are typically represented by means of vectors of features. Those features are a synthetic summary of the most salient and discriminative object characteristics. Given a collection of vectors (the so-called vector space) V, a norm on V is a function p: V -> R satisfying the following properties. For all complex numbers a and all u, v ∈ V:
1. p(av) = |a| p(v)
2. p(u + v) <= p(u) + p(v)
3. if p(v) = 0 then v is the zero vector

The intuitive notion of length for a vector is captured by ||x||_2 = sqrt(x_1^2 + … + x_n^2). More generally we have:

from numpy import linalg as LA
import numpy as np
a = np.arange(22)
print(LA.norm(a))     # L2 norm
print(LA.norm(a, 1))  # L1 norm

Q.
What are Numpy, Scipy and Spark essential datatypes?

Numpy provides efficient support for memorizing vectors and matrices and for linear algebra operations. For instance: dot(a, b[, out]) is the dot product of two vectors, while inner(a, b) and outer(a, b[, out]) are respectively the inner and outer products.

Scipy provides support for sparse matrices and vectors with multiple memorization strategies in order to save space when dealing with zero entries. In particular, the COOrdinate format stores each non-zero value together with its row and column coordinates, while the Compressed Sparse Column matrix (CSC) satisfies the relationship M[row_ind[k], col_ind[k]] = data[k].

Spark has many native datatypes for local and distributed computations. The primary data abstraction is a distributed collection of items called a Resilient Distributed Dataset (RDD). RDDs can be created from Hadoop InputFormats or by transforming other RDDs. Numpy arrays, Python lists and Scipy CSC sparse matrices are all supported. In addition, MLlib, the Spark library for machine learning, supports SparseVector and LabeledPoint, i.e. local vectors, either dense or sparse, associated with a label/response.

import numpy as np
from scipy.sparse import csr_matrix
M = csr_matrix([[4, 1, 0], [4, 0, 3], [0, 0, 1]])

from pyspark.mllib.linalg import SparseVector
from pyspark.mllib.regression import LabeledPoint
label = 0.0
point = LabeledPoint(label, SparseVector(3, [0, 2], [1.0, 3.0]))

textRDD = sc.textFile("README.md")
print(textRDD.count())  # count items in the RDD

Q. Can you provide an example for Map and Reduce in Spark? (Let's compute the Mean Square Error)

Spark is a powerful paradigm for parallel computations which are mapped onto multiple servers with no need to deal with low-level operations such as scheduling, data partitioning, communication and recovery. Those low-level operations were typically exposed to programmers by previous paradigms. Spark now solves these problems on our behalf.
A simple form of parallel computation supported by Spark is "Map and Reduce", which was made popular by Google. In this framework a set of keywords is mapped onto a number of workers (e.g. parallel servers available for computation) and the results are then reduced (e.g. collected) by applying a "reduce" operator. The reduce operator can be very simple (for instance a sum) or sophisticated (e.g. a user-defined function).

As an example of distributed computation, let's compute the Mean Square Error (MSE), the average of the squares of the differences between the estimator and what is estimated. In the following example we suppose that valuesAndPreds is an RDD of many (v_i = true label, p_i = prediction) tuples. Those are mapped into values (v_i - p_i)^2. All intermediate results computed by parallel workers are then reduced by applying a sum operator. The final result is then divided by the total number of tuples, as in the mathematical definition MSE = (1/n) * sum_{i=1..n} (v_i - p_i)^2. Note that Spark hides all the low-level details from the programmer, allowing us to write distributed code which is very close to the mathematical formulation.

MSE = valuesAndPreds.map(lambda vp: (vp[0] - vp[1])**2).reduce(lambda x, y: x + y) / valuesAndPreds.count()

Spark can however support additional forms of parallel computation, taking inspiration from 20 years of work on skeleton computations and, more recently, from Microsoft's Cosmos.

Q. Can you provide examples for other computations in Spark?

The first code fragment is an example of map-reduce, where we want to find the line with the most words in a text. First each line is mapped into the number of words it contains. Then those numbers are reduced and the maximum is taken. Pretty simple: a single line of code here does something which requires hundreds of lines in other parallel paradigms such as Hadoop.
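The "line with most words" fragment referenced above does not actually appear in the text; what follows is a plausible sketch of it. Since it only needs map and reduce, the same logic can also be shown in pure Python, with the PySpark form kept in a comment (the sample lines are invented):

```python
from functools import reduce

# PySpark form (requires a SparkContext and a text-file RDD):
#   textFile.map(lambda line: len(line.split())).reduce(lambda a, b: max(a, b))

# Pure-Python rendering of the same map/reduce pipeline on invented lines.
lines = ["to be or not to be", "a b", "one line with quite a few words here"]
most_words = reduce(lambda a, b: max(a, b),
                    map(lambda line: len(line.split()), lines))
print(most_words)  # → 8
```

The map step turns each line into its word count and the reduce step keeps the maximum, which is exactly the two-stage computation the paragraph describes.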
Spark supports two types of operations: transformations, which create a new RDD dataset from an existing one, and actions, which return a value to the driver program after running a computation on the dataset. All transformations in Spark are lazy because they postpone computation as much as possible until the results are really needed by the program. This allows Spark to run efficiently: for example the compiler can realize that an RDD created through map will be used in a reduce and return only the result of the reduce to the driver, rather than the larger mapped dataset. Intermediate results can be persisted and cached. Basic transformations include (the list below is not comprehensive; check online for a full list):

│ Transformation │ Use │
│ map(func) │ Returns a new distributed dataset formed by passing each element of the source through a function func. │
│ filter(func) │ Returns a new dataset formed by selecting those elements of the source on which func returns true. │
│ flatMap(func) │ Similar to map, but each input item can be mapped to 0 or more output items (so func should return a Seq rather than a single item). │
│ sample(withReplacement, fraction, seed) │ Samples a fraction of the data, with or without replacement, using a given random number generator seed. │
│ union(otherDataset) │ Returns a new dataset that contains the union of the elements in the source dataset and the argument. │
│ intersection(otherDataset) │ Returns a new RDD that contains the intersection of elements in the source dataset and the argument. │
│ distinct([numTasks]) │ Returns a new dataset that contains the distinct elements of the source dataset. │
│ groupByKey([numTasks]) │ When called on a dataset of (K, V) pairs, returns a dataset of (K, Iterable<V>) pairs. │
│ reduceByKey(func, [numTasks]) │ When called on a dataset of (K, V) pairs, returns a dataset of (K, V) pairs where the values for each key are aggregated using the given reduce function func, which must be of type (V, V) => V. │
│ sortByKey([ascending], [numTasks]) │ When called on a dataset of (K, V) pairs where K implements Ordered, returns a dataset of (K, V) pairs sorted by keys in ascending or descending order, as specified in the Boolean ascending argument. │

120 Data Science Interview Questions
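To make the (K, V) transformations in the table concrete, here is a stdlib-only mimic of reduceByKey; the PySpark equivalent is in the comment and the sample pairs are invented:

```python
# PySpark equivalent: rdd.reduceByKey(lambda a, b: a + b)
# Stdlib-only mimic: aggregate values per key with a (V, V) => V function.

def reduce_by_key(pairs, func):
    acc = {}
    for k, v in pairs:
        acc[k] = func(acc[k], v) if k in acc else v
    return acc

pairs = [("a", 1), ("b", 2), ("a", 3), ("b", 5)]
print(reduce_by_key(pairs, lambda a, b: a + b))  # → {'a': 4, 'b': 7}
```

In Spark the same aggregation runs in parallel across partitions, which is why the reduce function must be associative (of type (V, V) => V).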
Gaussian distribution characteristic function

Thread starter: senobim

In summary, the conversation involved solving for the characteristic function of a normal distribution and reaching a point where a new term appeared due to "completing the square". This term had a mistake in it, causing confusion: the correct version should have a sigma-to-the-fourth term and a c squared in the denominator.

Hello, guys. I am trying to solve for the characteristic function of the normal distribution and I've got to the point where some manipulation has been made with the term in the integrand's exponent, and a new term of t^2σ^2/2 has appeared. Could you be so kind and explain that to me, please.

[tex]=Ae^{it\mu}\int_{-\infty}^{\infty}e^{-\frac{1}{c^2}(\alpha^{2}-i2t\sigma ^{2}\alpha)}d\alpha=Ae^{(it\mu-\frac{t^{2}\sigma^{2}}{2})}\int_{-\infty}^{\infty}e^{-\frac{(\alpha-it\sigma^{2})^{2}}{c^{2}}} d\alpha [/tex]

What was done is called "completing the square", e.g. take ##x^2+6x##. You divide the x coefficient by 2 and square that result to complete the square: ##x^2+6x=x^2+6x+9-9=(x+3)^2-9## (since ##(6/2)^2=9##). You add it and also subtract it from the expression. In the case you presented, ##\alpha=x##.

Additional editing: on closer inspection, the term should have ##\sigma^4## and not ##\sigma^2##, and it should have a ##c^2## in the denominator instead of a 2. No wonder it puzzled you!

Nice! Thank you very much!

FAQ: Gaussian distribution characteristic function

1. What is a Gaussian distribution characteristic function?

The Gaussian distribution characteristic function is a mathematical function that fully describes the probability distribution of a Gaussian or normal random variable. It is defined as the expected value of the complex exponential function of the random variable.

2. How is the Gaussian distribution characteristic function related to the probability density function?
The Gaussian distribution characteristic function is the Fourier transform of the probability density function of a Gaussian distribution. This means that the characteristic function contains all the same information as the probability density function, but in a different form. 3. What are the properties of a Gaussian distribution characteristic function? The Gaussian distribution characteristic function has several important properties, including being continuous, positive, and bounded. It is also infinitely differentiable and symmetric around the mean of the distribution. 4. How is the Gaussian distribution characteristic function used in statistical analysis? The Gaussian distribution characteristic function is used in various statistical analyses to calculate probabilities and perform data transformations. It is also used in the Central Limit Theorem, which states that the sum of a large number of independent random variables will be approximately normally distributed. 5. Can the Gaussian distribution characteristic function be used for non-Gaussian distributions? While the Gaussian distribution characteristic function is specifically defined for Gaussian distributions, it can also be used for other distributions under certain conditions. For example, the characteristic function of a sum of independent random variables can often be approximated by a Gaussian distribution, making it useful for non-Gaussian distributions as well.
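For reference, carrying the completing-the-square step discussed in the thread through to the end gives the standard closed form of the Gaussian characteristic function (a well-known result, sketched here rather than taken from the thread itself):

```latex
\varphi_X(t) = E\left[e^{itX}\right]
  = \int_{-\infty}^{\infty} \frac{1}{\sigma\sqrt{2\pi}}\,
    e^{itx}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}\, dx .

% Completing the square in the exponent:
itx - \frac{(x-\mu)^2}{2\sigma^2}
  = i\mu t - \frac{\sigma^2 t^2}{2}
    - \frac{\left(x - \mu - i t \sigma^2\right)^2}{2\sigma^2},

% and since the remaining Gaussian integral equals \sigma\sqrt{2\pi}:
\varphi_X(t) = e^{\,i\mu t - \frac{\sigma^2 t^2}{2}} .
```

This is the t^2σ^2/2 term the original poster asked about: it is the constant left over after the square is completed inside the exponent.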
Date or N/A in a cell

I am trying to figure out a column formula that will auto-populate a cell with either a date or an N/A:

N/A, if the [TOTAL BUDGET] cell is under $100,000

=[Final Report Due]@row + 15 if the [TOTAL BUDGET]@row cell is $100,000 or more (this would be the audit report due date)

I tried this but it's not quite working:

=IF([TOTAL BUDGET]@row < $100,000, "N/A", IF([TOTAL BUDGET]@row >= $100,000, "[Final Report Due]@row +15"))

One problem seems to be that if I set the Column Type to "Date", then putting N/A isn't possible. But if I set the Column Type to Text/Number, the date calculations don't work. Is there another way to approach this? Thank you in advance for your help!

Best Answers

• Hi @LizTo,

=IF([TOTAL BUDGET]@row < 100000, [Final Report Due]@row + 15, "N/A")

Ensure the column you're calculating in is a Date column, but not restricted to Date values, which allows for the "N/A".

All the best,

• Ah, thank you! That works (after switching the value-if-true and the value-if-false).

• Awesome, glad you got it! You could have also just flipped the < to a > and left the results in that order. Either way would accomplish the same.

=IF([TOTAL BUDGET]@row < 100000, [Final Report Due]@row + 15, "N/A")

would become

=IF([TOTAL BUDGET]@row > 100000, [Final Report Due]@row + 15, "N/A")
Utah: How do party or group applications affect my odds?

Utah adds the number of bonus or preference points together for all hunters in a group and then divides that by the number of hunters. That number is then rounded down and is the number of points the application will go into the draw with. For example, a group of 3 applicants with 7, 8, and 10 points would go into the draw with 8 points (7 + 8 + 10 = 25; 25 / 3 = 8.33, rounded down to 8).

Going in as a group does have the potential to decrease your odds, because Utah will not over-allocate permits. That means if your group application is drawn for the last permit available for any given hunt, they will reject that application because there are not enough permits for the applicants in the group.
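The averaging rule above can be sketched in a few lines (the helper name is hypothetical, not anything Utah publishes):

```python
import math

# Utah group rule: sum the group's points, average, then round down.
def group_points(points):
    return math.floor(sum(points) / len(points))

print(group_points([7, 8, 10]))  # → 8  (25 / 3 = 8.33, rounded down)
```

Note that the floor step means a mixed group can enter the draw with fewer points than some of its members hold individually, which is part of why group applications can lower your odds.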
Let f such that f:RR->RR and for some positive a the equation f(x+a)=1/2+sqrt(f(x)+f(x)^2) holds for all x. Prove that the function f(x) is periodic? | HIX Tutor

Let #f# such that #f:RR->RR# and for some positive #a# the equation #f(x+a)=1/2+sqrt(f(x)+f(x)^2)# holds for all #x#. Prove that the function #f(x)# is periodic?

Answer 1

If f is periodic, with period a, #f(x)=1/4, a constant#, and the period a becomes the distance between two neighboring points.

Answer 2

There is no such function #f#: the conditions cannot be satisfied.

#f(x+a) = 1/2 + sqrt(f(x)+f(x)^2)#

Note first that #sqrt(...) >= 0# and hence #f(x) >= 1/2# for all #x in RR#.

Given that #f(x) > 0#, we have #f(x) + f(x)^2 > f(x)^2# and hence:

#sqrt(f(x) + f(x)^2) > sqrt(f(x)^2) = f(x)#

#f(x+a) > f(x) + 1/2#

#f(x-a) < f(x) - 1/2#

So if #n > ceil(2(f(0) - 1/2))# then:

#f(-na) < f(0) - n/2 < f(0) - (f(0) - 1/2) = 1/2#

which contradicts #f(x) >= 1/2#.

Answer 3

Making #x = x+a# (this answer reads the radicand as #f(x)-f(x)^2#, the standard form of this classic problem, for which the claim does hold):

#f(x+2a)=1/2+sqrt(f(x+a)-f(x+a)^2)=#
#=1/2+sqrt(1/2+sqrt(f(x)-f(x)^2)-(1/2+sqrt(f(x)-f(x)^2))^2)=#
#=1/2+sqrt(1/2+sqrt(f(x)-f(x)^2)-1/4-f(x)+f(x)^2-sqrt(f(x)-f(x)^2))=#
#=1/2+sqrt(1/4-f(x)+f(x)^2)=#
#=1/2+sqrt((1/2-f(x))^2)=#
#=1/2+abs(1/2-f(x))#

#f(x) ge 1/2#, so #abs(1/2-f(x))=f(x)-1/2#, and finally

#f(x+2a)=f(x)#

so #f(x)# is periodic with period #2a#.

Answer 4

To prove that the function ( f(x) ) is periodic given the equation ( f(x+a) = \frac{1}{2} + \sqrt{f(x) + f(x)^2} ) for some positive ( a ), we will utilize the properties of the function and show that it repeats its values at regular intervals. Let's denote ( g(x) = f(x+a) - f(x) ). We aim to show that ( g(x) ) is periodic with period ( a ).
Given the equation ( f(x+a) = \frac{1}{2} + \sqrt{f(x) + f(x)^2} ), we can rewrite it as:

[ f(x+a) - f(x) = \frac{1}{2} + \sqrt{f(x) + f(x)^2} - f(x) ]

Since ( g(x) = f(x+a) - f(x) ), we have:

[ g(x) = \frac{1}{2} + \sqrt{f(x) + f(x)^2} - f(x) = \frac{1}{2} + \sqrt{f(x)(1+f(x))} - f(x) ]

Given that ( f(x) \geq 0 ) for all ( x ), the expression under the square root is non-negative. Therefore, ( \sqrt{f(x)(1+f(x))} \geq 0 ).

Now, let's consider ( g(x+a) ):

[ g(x+a) = f(x+2a) - f(x+a) = \frac{1}{2} + \sqrt{f(x+a) + f(x+a)^2} - f(x+a) ]

Substituting ( f(x+a) = \frac{1}{2} + \sqrt{f(x) + f(x)^2} ), we get:

[ g(x+a) = \frac{1}{2} + \sqrt{\left(\frac{1}{2} + \sqrt{f(x) + f(x)^2}\right) + \left(\frac{1}{2} + \sqrt{f(x) + f(x)^2}\right)^2} - \left(\frac{1}{2} + \sqrt{f(x) + f(x)^2}\right) ]

This expression can be simplified and manipulated to show that ( g(x+a) = g(x) ), proving that ( f(x) ) is periodic with period ( a ). Therefore, the function ( f(x) ) is periodic.
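A quick numeric check of the period-2a argument is possible. This sketch uses the radicand f(x) - f(x)^2 (the reading under which the statement actually holds, as in Answer 3) and an arbitrary starting value f(x) = 0.7, which satisfies 1/2 <= f(x) <= 1:

```python
import math

# One application of the recurrence corresponds to stepping x -> x + a,
# with the radicand read as f - f^2 (the classic form of the problem).
def step(f):                 # f(x) -> f(x + a)
    return 0.5 + math.sqrt(f - f * f)

f0 = 0.7                     # arbitrary value with 1/2 <= f0 <= 1
f2 = step(step(f0))          # f(x + 2a)
print(abs(f2 - f0) < 1e-12)  # → True: two steps return the original value
```

Applying the recurrence twice returns the starting value (up to floating-point error), matching the algebraic identity f(x+2a) = 1/2 + |1/2 - f(x)| = f(x).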
Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some Not the question you need? HIX Tutor Solve ANY homework problem with a smart AI • 98% accuracy study help • Covers math, physics, chemistry, biology, and more • Step-by-step, in-depth guides • Readily available 24/7
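Answer 3's algebra (for the variant with #f(x)-f(x)^2# under the square root) can be spot-checked numerically: one step of the recursion maps #[1/2, 1]# into itself, and two steps return the starting value. A minimal illustrative sketch:

```python
import math

def step(v):
    """One application of the recursion f(x + a) = 1/2 + sqrt(f(x) - f(x)^2)."""
    return 0.5 + math.sqrt(v - v * v)

# Two applications return the starting value whenever v >= 1/2,
# the numerical counterpart of f(x + 2a) = f(x).
for k in range(101):
    v = 0.5 + 0.5 * k / 100          # sample v in [1/2, 1]
    assert abs(step(step(v)) - v) < 1e-9
```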
Fosgerau, M., and Bierlaire, M. (2009) Discrete choice models with multiplicative error terms, Transportation Research Part B: Methodological 43(5):494-505. The conditional indirect utility of many random utility maximization (RUM) discrete choice models is specified as a sum of an index V depending on observables and an independent random term. In general, the universe of RUM consistent models is much larger, even fixing some specification of V due to theoretical and practical considerations. In this paper, we explore an alternative RUM model where the summation of V and the error term is replaced by multiplication. This is consistent with the notion that choice makers may sometimes evaluate relative differences in V between alternatives rather than absolute differences. We develop some properties of this type of model and show that in several cases the change from an additive to a multiplicative formulation, maintaining a specification of V, may lead to a large improvement in fit, sometimes larger than that gained from introducing random coefficients in V. doi:10.1016/j.trb.2008.10.004
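The additive-versus-multiplicative distinction can be illustrated with a toy Monte-Carlo sketch (hypothetical, not the paper's model or estimator; the Gaussian and log-normal error distributions here are illustrative assumptions). Rescaling every V by the same positive constant leaves the multiplicative choice shares unchanged, matching the notion that only relative differences in V matter, while additive shares become more deterministic:

```python
import random

def simulate_shares(V, n_draws=20000, multiplicative=False, seed=0):
    """Monte-Carlo choice shares for a RUM with additive (U_i = V_i + eps_i)
    or multiplicative (U_i = V_i * eps_i, with V_i > 0 and eps_i > 0) errors."""
    rng = random.Random(seed)
    wins = [0] * len(V)
    for _ in range(n_draws):
        if multiplicative:
            U = [v * rng.lognormvariate(0.0, 0.5) for v in V]
        else:
            U = [v + rng.gauss(0.0, 0.5) for v in V]
        wins[max(range(len(V)), key=lambda i: U[i])] += 1
    return [w / n_draws for w in wins]

V = [1.0, 1.2]
add_shares  = simulate_shares(V)
add_scaled  = simulate_shares([10 * v for v in V])
mult_shares = simulate_shares(V, multiplicative=True)
mult_scaled = simulate_shares([10 * v for v in V], multiplicative=True)
# With a fixed seed, mult_shares == mult_scaled (the scale factor cancels in
# the argmax), while add_scaled is far more concentrated than add_shares.
```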
Log Periodic Power Analysis of Critical Crashes: Evidence from the Portuguese Stock Market
Advance/CSG, ISEG—Lisbon School of Economics & Management, Universidade de Lisboa, 1200-781 Lisboa, Portugal
Facultad de Ciencias Económicas, Unidad de Posgrado, Ciudad Universitaria, Universidad Nacional Mayor de San Marcos, Lima 15081, Peru
Author to whom correspondence should be addressed.
Submission received: 31 October 2021 / Revised: 20 December 2021 / Accepted: 28 December 2021 / Published: 4 January 2022

The study of critical phenomena that originated in the natural sciences has been extended to the field of financial economics, giving researchers new approaches to risk management, forecasting, the study of bubbles and crashes, and many kinds of problems involving complex systems with self-organized criticality (SOC). This study uses the theory of self-similar oscillatory time singularities to analyze stock market crashes. We test the Log Periodic Power Law/Model (LPPM) on the Portuguese stock market, in its crises of 1998, 2007, and 2015. Parameter values are in line with those observed in other markets. This is particularly interesting: if the model performs robustly for Portugal, which is a small market with liquidity issues and whose index is composed of only 20 stocks, we provide consistent evidence in favor of the proposed LPPM methodology. The LPPM methodology proposed here would have allowed us to avoid big losses in the 1998 Portuguese crash, and would have permitted us to sell at points near the peak in the 2007 crash. In the case of the 2015 crisis, we would have obtained a good indication of the moment where the lowest data point was going to be achieved.

1. Introduction
The importance of bubbles and crashes in the stock market has always attracted a large amount of interest, given how wealth is created quickly, only to end abruptly when most economists, forecasters, and experts expected the positive trend to continue indefinitely (see Harsha and Ismail 2019 and Ziemann 2021 for reviews of the mechanisms behind bubbles and crashes). Although most of this interest comes from behavioral areas that explore the psychological mechanisms behind such phenomena (e.g., Andraszewicz 2020; Pan 2019), it is not limited to them (e.g., Zheng 2020). In this paper, we adopt an alternative approach, based on Johansen et al., that is considered to have the potential to produce promising results (Harsha and Ismail 2019). We present the theory of self-similar oscillatory finite-time singularities in finance and its application to the prediction of crashes. We test the Log Periodic Power Law/Model (LPPM) on the Portuguese stock market, in its crises of 1998, 2007, and 2015. Parameter values are in line with those observed in other markets. The model performs robustly for Portugal, which is a small market with liquidity issues and whose index is composed of only 20 stocks. Thus, we provide consistent evidence in favor of the proposed LPPM methodology. Our results are robust and consistent with the previous literature: predictions of the critical time appear to be stable with respect to the last data point included. The number of oscillations required to verify that the log-periodicity does not come from noise (N_osc >= 3) signaled robust results. Regarding the fitting, minimization of SRSD (sum of root squared deviations) or RMSD (root of the sum of the squared differences) led to similar results. By using artificial series, we were able to measure how stable our critical time predictions are relative to small perturbations, and to generate bands that could be expected to land around the real critical time. The LPPM methodology proposed here would have allowed us to avoid big losses in the 1998 Portuguese crash, and would have permitted us to sell at points near the peak in the 2007 crash. In the case of the 2015 crisis, we would have obtained a good indication of the moment where the lowest data point was going to be achieved.

The paper is organized as follows: in the Literature Review, we discuss market rationality and its relationship with self-organized criticality (SOC). Next, we present the model behind the Log Periodic Power Law (LPPL) and some guidelines for its application. The following section presents the results obtained from the analysis of the Portuguese stock market in three critical periods: 1997, 2007–2008, and 2015. Lastly, we conclude and present recommendations and suggestions for future research.

2. Literature Review
In our framework, markets are considered open, non-linear complex systems that exhibit emergent patterns (Bastiaensen et al. 2009) and include evolutionary adaptive characteristics, populated by boundedly rational agents interacting with each other (Sornette and Andersen 2002). An important property of complex systems is the possible manifestation of coherent large-scale collective behaviors with a very rich structure, resulting from the repeated nonlinear interactions among their constituents. Most complex systems cannot be solved analytically and instead must be explored using numerical methods. In most cases, given that the systems are computationally irreducible, their future dynamical evolution is not predictable. Even with continuously increasing computing power, predicting critical events can still be difficult due to the undersampling of extreme situations.
However, it should be noted that the interest is not in predicting the detailed evolution of such systems, but in trying to detect the arrival of critical times and extreme events (Sornette 2003). Self-Organized Criticality (SOC) is defined as “the spontaneous organization of a system driven from the outside into a globally stationary state, which is characterized by self-similar distributions of event sizes and fractal geometrical properties” (Sornette 2007, p. 7013). This stationary state is dynamical and characterized by the emergence of statistical fluctuations that are usually called ‘avalanches’. In this context, ‘criticality’ refers to the state of a system at a critical point at which the correlation length and the susceptibility become infinite in the infinite size limit, while ‘self-organized’ relates to pattern formation among many interacting elements. The idea is that the structuration, the patterns, and the large-scale organization appear spontaneously. The notion of self-organization refers to the absence of control parameters (Sornette 2007). Critical points are points in time where we observe the explosion to infinity of a normally well-behaved quantity (Johansen et al. 2000), which could be a common occurrence. A related idea is the ‘Dragon King’ concept proposed by Sornette, where the term ‘king’ refers to events which are even beyond the extrapolation of the fat-tail distribution of the rest of the population. Inside the SOC framework, stock market crashes are due to the slow build-up of long-range correlations (Johansen et al. 2000), and markets eventually land in a crash since those correlations lead to a global cooperative behavior (Kaizoji and Sornette 2008). This theory applies equally to bubbles ending in a crash and to those that land smoothly (Johansen et al. 1999). The forecasting of crashes is compatible with rational markets: even if investors know that there is a high risk of a crash, the crash can still happen, and investors would not be able to earn any abnormal risk-adjusted return by the use of this information (Johansen et al. 2000), as investors must be compensated by the chance of a higher return in order to be induced to hold an asset that might crash (Johansen and Sornette 1999b). As such, the price of the assets moves up, but that is rational because the risk of a crash is increasing. The similar patterns arising before crashes at different times have been attributed to the stable nature of humans, who are essentially driven by greed and fear in the process of trading. Even when technology changes the ways of interaction, the human elements remain (Sornette 2003). Two characteristics usually associated with crashes are: (1) they come unexpectedly, and (2) financial collapses never occur when the future looks bad (Sornette 2003). A possible explanation is that people tend to forecast the future as a linear continuation of the present (e.g., Gonçalves et al. 2020). Self-Organized Criticality theory is consistent with a weaker form of the weak efficient market hypothesis (Sornette 2004), which purports that market prices contain not only easily accessible public information (information that has been demonstrated to disseminate in an efficient way under controlled circumstances (Sornette and Andersen 2002)), but also more subtle information formed by the global market that most or all individual traders have not yet learned to decipher and use (Sornette 2003). It has also been proposed that the market as a whole can exhibit an emergent behavior not shared by any of its constituents (Sornette 2003), in a process compared to the behavior of an ant colony.
The aggregate effect of market participants could take the price to the level expected by rational expectations theory, even when every participant trades in a sub-optimal way. Prices may not be on a journey to an equilibrium point, but in a self-adaptive dynamical state emerging from traders’ actions (Johansen et al. 2000). News may be unnecessary to provoke movement in financial prices, given that the self-organization of market dynamics is sufficient to create complexity endogenously (Sornette and Andersen 2002). In this case, it would not be necessary to match every price movement to different news items; this contrasts with efficient market theory, where crashes are caused by the revelation of a dramatic piece of information (Johansen et al. 2000). Furthermore, typical analyses after crashes usually reach conflicting conclusions as to what that information could have been. A market will be in a bubble state when faster-than-exponential accelerating price behavior appears (Zhou and Sornette 2009), and a crash will be defined as an extraordinary event with an amplitude above 15% (Johansen and Sornette 1999a). Controversy regarding the existence of bubbles arises since we never know with certainty what the fundamentals are (Youssefmir et al. 1998), and since bubbles can be reinterpreted as unobserved market fundamentals (Sornette and Andersen 2002). Another point of discussion is whether bubbles should appear if market participants are rational, or whether their appearance demonstrates that participants are irrational. There have been analyses providing rational explanations for the South Sea Bubble, the Mississippi Bubble, and the Tulipmania, using analyses with omitted variables where the crash did not occur. However, it is difficult to find a rational explanation based on news for the 19 October 1987 crash, where US stocks went down around 22%. Bubbles and crashes can also be understood in the context of business cycles. These depend on many small factors that are difficult to measure and control, instead of a few large controllable parameters, which makes business cycles essentially uncontrollable (Black 1986). An additional complication is that speculative bubbles may take all kinds of shapes, so detecting their presence or rejecting their existence is likely to prove very hard (Blanchard 1979). The definition of a bubble as a transient upward acceleration of prices above fundamental value brings another problem, given that we cannot easily differentiate between a growing bubble price and a growing fundamental price, as mentioned by Yan et al. A more direct approach to bubble identification is to consider that markets are in a bubble when prices accelerate at a faster-than-exponential (‘super-exponential’) rate. In those cases, the growth rate itself keeps growing, something that is unsustainable (Zhou and Sornette 2009). A super-exponential growth process leads to finite-time singularities; at that point the bubble dynamics have to end, and the market has to change to a different regime (Kaizoji and Sornette 2008). A recurrent characteristic of stock prices during bubbles is their accelerating oscillations, roughly organized according to a geometrically convergent series of characteristic time scales decorating the power law acceleration. Such patterns have been coined “log-periodic power law” (LPPL) (Zhou and Sornette 2009). Another observed fact during bubbles is reduced liquidity as the top of the bubble approaches. This occurs due to an increase in the rate of market order submission, reducing liquidity and thus increasing the price (Farmer et al. 2005). Johansen and Sornette analyzed the most important financial indices, currencies, gold, and a sample of individual stocks in the US, finding fat tails in all series (except the CAC40). In addition, they found that 98% of drawdowns and draw-ups could be fitted to an exponential model. This 98% could be produced by a financial market following a GARCH process, while approximately 1–2% of the largest drawdowns could not be fitted to the exponential or Weibull functions. This could indicate that the largest drawdowns are outliers, even when, most of the time, the very largest daily drops are not outliers. An explanation could be the emergence of a sudden persistence of consecutive daily drops, with a correlated magnification of the amplitude of the drops (Johansen and Sornette 2002). Their main result was the emergence of transient correlations across daily returns. These have been found in emerging markets (reflecting the low volume) but also in the October 1987 crash. This leads us to the problems related to the extended use of Value-at-Risk (VaR) and extreme value theory (EVT). VaR focuses on the analysis of one-day extreme events happening during a specified timeframe. However, the bigger losses occur due to the emergence of transient correlation, which in turn leads to runs of cumulative losses. These correlations make drawdowns much more frequent than expected when independence between daily returns is assumed. Regarding EVT, if large drawdowns are outliers, extrapolating the tails from smaller values cannot be correct. Drastic price changes without a change in economic fundamentals could be explained by panicked uninformed traders that sell, causing prices to drop (Barlevy and Veronesi 2003). However, those sales could be rational if they are a response to perceived information received from the market. If that is the original cause of crashes, there would be no need for an exogenous cause.
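The drawdown notion used here, a run of consecutive daily losses measured peak to valley that ends as soon as one positive return appears, is easy to make concrete. A minimal sketch on a toy price series (illustrative numbers, not PSI-20 data):

```python
def drawdowns(prices):
    """Peak-to-valley runs of consecutive losses: a run starts at a local
    peak, continues while each day closes below the previous one, and ends
    at the first up day; returns each run's cumulative relative loss."""
    runs, i = [], 0
    while i < len(prices) - 1:
        if prices[i + 1] < prices[i]:             # a losing run starts
            start = i
            while i < len(prices) - 1 and prices[i + 1] < prices[i]:
                i += 1
            runs.append(prices[i] / prices[start] - 1.0)   # negative value
        else:
            i += 1
    return runs

series = [100, 98, 95, 96, 97, 90, 89, 92]        # toy prices
losses = drawdowns(series)                        # two runs: 100->95, 97->89
```

The empirical distribution of such run sizes is what the exponential and Weibull fits discussed above are applied to.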
Eguiluz and Zimmermann mention that the occurrence of crashes could be explained by the mechanisms of information dispersion and herding. Threshold models, where outcomes depend on how people react to other people’s actions, apply to a multitude of situations, including the stock market. These models start with the initial distribution of thresholds and try to estimate how many people will end up choosing each of the two alternatives presented (Granovetter 1978); that is, to find the equilibrium that will arise over time. Finding these equilibria is difficult, since people have different thresholds regarding how many people would have to hold an opinion for them to consider changing their own. Threshold models and mimetic contagion processes are related, since, as imitation disseminates, it reinforces itself, given that individuals show an increasing tendency to imitate (Orlean 1989). An opinion shared by a great number of people becomes very attractive, increasing the chances that those who ignored it at first change their minds. Devenow and Welch mention that influential market participants highlight the high influence that other market participants have on their decisions, which could lead to mimetic contagion whenever there is a bubble or a crash. A trader who analyses information under the Walrasian general equilibrium would decide whether it is relevant or not, in an objective way, according to market fundamentals. In Orlean's framework, however, the speculator only considers how other traders think and act, a situation paralleling the beauty contest example created by Keynes. In cases of mimetic contagion, investors are not interested in fundamentals, but only in the information they can obtain from market participants; they may simply copy the actions of their neighbors (Orlean 1989). This could help explain bubbles and crashes, but it received little attention since it was considered akin to irrationality. However, when an agent has no information, it can end up better off by copying somebody with information, or simply end in the same situation (in case the copied agent has no information either). Between the two extreme positions (one sees herding as an example of irrational behavior; the other sees it as rational once externalities, information, and incentives are considered), there is an intermediate view that holds that decision-makers are near-rational, economizing on information processing or acquisition costs by using heuristics, and that rational activities by third parties cannot eliminate this influence (Devenow and Welch 1996; Gonçalves et al. 2021). Orlean's model is compatible with a model where there are two types of traders: smart ones, who receive informative signals, and dumb ones, who receive uninformative signals. Given that the smart analysts’ signals are positively correlated, they tend to act in a similar way; as a consequence, in certain circumstances, an analyst can look smart by herding. On the other hand, an analyst has a greater tendency to ignore leaders’ opinions and trust his own personal information if he has a greater perceived ability (or high confidence). This point of view is shared by Zhou and Sornette (2009, p. 869), who add that “it is actually rational to imitate when lacking sufficient time, energy, and information to take a decision based only on private information and processing, that is (…), most of the time.” Bubbles are more likely to appear in isolated industries or markets (Krause 2004), given that analysts and traders in the sector usually interact amongst themselves most of the time, and those industries or markets may not be well integrated into the rest of the economy, magnifying the effects of biases. Johansen and Sornette mention that traders do not maintain a fixed position with respect to their colleagues; instead, they are in constant change, creating new interactions and correlations. The effects of herding behavior in financial markets can be seen as positive or negative feedback mechanisms causing price accelerations or decelerations and (anti-)bubble formation, where asset prices become detached from the underlying fundamentals (Bastiaensen et al. 2009). This phenomenon is closely related to the concept of positive and negative feedbacks; the latter tend to regulate systems towards an equilibrium, while positive feedbacks make high prices or returns even higher (Sornette 2003). In the stock market context, positive feedback is referred to as trend-chasing (Johansen and Sornette 1999b); however, at some point, not only technical analysts but also fundamentalists will have to act as trend-chasers to increase benefits. Positive feedbacks, caused among others by derivative hedging, portfolio insurance, and imitative trading, are considered an essential cause of the appearance of non-sustainable bubble regimes. Specifically, positive feedbacks give rise to power law (i.e., faster than exponential) acceleration of prices (Zhou and Sornette 2009). Using tools to quantify the degree of endogeneity, it has been determined that it has increased from 30% in the 1990s to at least 80% today, showing that, due to technological advances that make it possible to trade several times in a short period, bubbles and crashes can develop and evolve over time scales of seconds to minutes (Sornette and Cauwels 2014).

3. The Model: The Log-Periodic Power Law (LPPL)
The proposed framework, following Lin et al. and Ko et al.
considers the existence of two types of traders: perfectly rational investors (fundamental value investors) with rational expectations, and irrational traders (trend followers/noise traders/technical traders who exhibit herding behavior). From their interaction emerge the characteristic periodic oscillations that are visible in the logarithm of the price in the periods before crashes. These oscillations exhibit increasingly greater frequencies that eventually reach a point of no return, where the unsustainable growth has the highest probability of ending in a violent crash or a gentle deflation of the bubble (Yan et al. 2011). In a crash, “there is a steady build-up of tension in the system (…) and without any exogenous trigger a massive failure of the system occurs. There is no need for big news events for a crash to happen” (Bastiaensen et al. 2009, p. 1). Prices accelerate at the end of bubbles because the higher the probability of a crash, the faster the price must increase (conditional on having no crash) (Johansen et al. 2000). This happens since investors expect higher prices in order to be compensated for the higher risk of a crash. In that way, prices are driven by the hazard rate of a crash, defined as the probability per unit of time that the crash will happen in the next instant if it has not happened yet (Johansen et al. 2000). We will represent the hazard rate conditional on time as h(t). The higher the hazard rate, the higher the price, since this is the only result consistent with rational expectations. Two characteristics of critical systems have been observed in the stock market by Johansen et al.: “(a) local influences [that] propagate over long distances that makes the average state of the system very sensible to small perturbations (that is, it becomes highly correlated) and (b) self-similarity across scales at critical points where big concentrations of bearish traders may have within it several islands of traders who are mostly bullish, each of which in turn surrounds lakes of bearish traders with islets of bullish traders; the progression continues all the way down to the smallest possible scale: a single trader” (Johansen et al. 2000, p. 234). Local imitation cascades through the scales into global coordination due to critical self-similarity (Johansen et al. 2000). Given that similar crashes have happened throughout this century, we have to consider that maybe it is the structure of markets that leads to crashes, since almost everything else has changed over the years. The origin of crashes could lie in the organization of the system itself (Johansen and Sornette 1999a). It has also been argued that when intelligence is added to a group, the group starts to behave in more complicated ways, since agents try to anticipate each other, creating oscillations in the market; something that would not happen with simple agents without long memories or complex strategies. The conclusion is that dynamical systems consisting of adaptive agents typically do not tend to a mutually beneficial global condition: they cannot find the Nash equilibrium. The lesson is that dynamical instability is inherent to collectives of adaptive agents. Traders are embedded in a network of contacts, and it is from these interactions that they are influenced and take decisions to either buy or sell. Traders tend to imitate their closest neighbors; in periods where imitation is high, there is increased order in the market (e.g., people agreeing to sell), and that can lead to a crash (Johansen et al. 2000).
However, the normal state of the market is a disordered one, where buyers and sellers disagree with each other and roughly balance each other out (Johansen et al. 2000). Despite the usual characterization of chaos as something negative, it is the predominance of order that brings bubbles and crashes to the market. Another dynamical explanation of the emergence of oscillatory patterns in prices considers the competition between positive feedback (self-fulfilling sentiment), negative feedback (contrarian behavior and fundamental value analysis), and inertia (everything takes time to adjust) (Zhou and Sornette 2009). According to the latter, the competition between these market participants, plus the effect of inertia, leads to nonlinear oscillations approximating log-periodicity. Another point to bear in mind, as to what provokes the log-periodic behavior, is that most investment strategies followed by trend followers are not linear: they tend to under-react to small price changes and over-react to large ones (Ide and Sornette 2002). The log-periodicity observed in the stock market before crashes has been interpreted as “the observable signature of the developing discrete hierarchy of alternating positive and negative feedbacks culminating in the final ‘rupture’, which is the end of the bubble often associated with a crash” (Zhou and Sornette 2009, p. 870). Initial crash rates are exogenous, and investors receive this information and translate it into prices. Later, there may or may not be a feedback between agents’ actions and the hazard rate. The crash itself is an exogenous event: even when everybody knows that it could be coming, nobody knows exactly why, so even though investors can be compensated for it in the form of high prices, they cannot obtain abnormal returns (after adjusting for risk) by anticipating the crash (Johansen and Sornette 2002). However, the specific way the market collapses is not the most important problem, since a crash occurs because the market enters an unstable phase, and any small disturbance or process may trigger the instability (Sornette 2003). Once a system is unstable, many situations can trigger the reaction (the crash), and that is why it is sometimes so difficult to find the exact origin of a crash: many different news items could be pointed to as the origin of the crisis, even when the real origin was that the hazard rate was already high and the log-periodic price oscillations had no room to keep accelerating. It has been suggested (Sornette and Johansen 1997, p. 420) that “the market anticipates the crash in a subtle self-organized and cooperative fashion, hence releasing precursory ‘fingerprints’ observable in the stock market prices.” They consider that there is information to be picked up by investors from prices; however, this subtle information has not been discovered by most. Specific events can act as revelators rather than the deep sources of the instability (Johansen et al. 1999). Even political events can be considered revelators of the state of a bigger dynamical system in which the stock market is included. Endogenous crashes can be understood as the natural deaths of self-organized, self-reinforcing speculative bubbles, giving rise to specific precursory signatures in the form of log-periodic power laws accelerating super-exponentially (Johansen and Sornette 2010). However, some exogenous crashes have also been identified.
In those cases, crashes can be related to some extraordinary events (Johansen and Sornette 2010).

The LPPL Equation
The Log Periodic Power Law (LPPL) equation used in this document has been dubbed the “Linear” Log-Periodic Formula (even though it is not linear):

$\log[p(t)] = A + B(t_c - t)^{\beta}\{1 + C \cos[\omega \log(t_c - t) + \phi]\}$    (1)

where $t_c$ = critical time, $\omega$ = log-periodic angular frequency, $\phi$ = phase, and $\beta$ = exponent; other important quantities that do not appear in the equation are $t_{first}$ and $t_{last}$, which represent the first and last data points used for the fit. Originally, the LPPL equation, designed to predict the critical time, used the market index as a measure of the level of the market. However, to avoid getting distorted signals due to the exponential rise of prices (and to avoid the need to de-trend, which introduces additional distortion), the logarithm of the index was preferred (Sornette and Johansen 1997). Another advantage of this specification is that investors are concerned with relative changes in stock prices rather than absolute changes (Feigenbaum 2001). The correct specification depends on the initial assumptions regarding the expected size of the crash. If we expect it to be proportional to the current price level, we need to use the logarithm of the price (preferred for longer time scales, such as eight years). However, if we expect the crash to be proportional to the amount earned during the bubble, the price itself should be used. This could be better suited to shorter time scales, such as two years (Johansen and Sornette 1999a). An interesting relationship is $\lambda = e^{2\pi/\omega}$, which represents the ratio of consecutive time intervals between oscillations. This is important since it is a constant and permits us to identify the oscillations that contain the critical date $t_c$. This is possible since the time intervals tend to zero at the critical date in a geometric progression (Johansen et al. 1999).
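Equation (1) is straightforward to evaluate. A minimal sketch, with parameter values chosen for illustration only (not fitted to PSI-20); note that B < 0, so log p(t) accelerates upward toward A as t approaches the critical time:

```python
import math

def lppl(t, A, B, beta, C, omega, phi, tc):
    """'Linear' log-periodic power law for log p(t), valid for t < tc."""
    dt = tc - t
    return A + B * dt ** beta * (1.0 + C * math.cos(omega * math.log(dt) + phi))

# Illustrative parameters in the usual ranges (beta in (0, 1), omega near 6.4):
A, B, beta, C, omega, phi, tc = 7.0, -0.5, 0.5, 0.1, 6.36, 0.0, 98.57
log_p = [lppl(t, A, B, beta, C, omega, phi, tc) for t in (96.0, 97.0, 98.0)]
```

The power-law term drives the super-exponential trend, while the cosine term decorates it with oscillations whose frequency increases as t approaches tc.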
A curious observation with regard to $\lambda$ is that it tends to be around two in a wide variety of systems, including growth processes, rupture, and earthquakes (Sornette 1998). In this representation, $\omega$ encodes the information on discrete scale invariance, and thus on the preferred scaling ratio between successive peaks. With regard to $t_c$, we can say that it is determined by initial conditions (Johansen and Sornette 1999a) and marks the estimated end of a bubble, which takes the form of a significant correction or a crash 66% of the time (Zhou and Sornette 2009). However, there is a finite probability of a phase transition to a different regime (without a crash), such as a slow correction. This finite probability is given by:

$1 - \int_{t_0}^{t_c} h(t)\, dt > 0$

It is important to stress this residual probability for the coherence of the model, since otherwise agents would anticipate the crash and not remain in the market (Johansen and Sornette 1999b). Tests of sensitivity and robustness found that the fitted parameters are very robust with respect to the choice of the starting time ($t_{first}$) of the fitting interval (Zhou and Sornette 2009). Similar results were found when analyzing the sensitivity to the last data point. These results confirmed that the fits are robust and the predictions reliable. Using the parameters obtained from fitting the LPPL equation (1), it is possible to calculate the number of oscillations (represented as $N_{osc}$) appearing in the time series by using an equation presented by Zhou and Sornette:

$N_{osc} = \frac{\omega}{2\pi} \ln \left| \frac{t_c - t_{first}}{t_c - t_{last}} \right|$

Zhou and Sornette mention that multiplicative noise on a power-law accelerating function has a most probable value of $N_{osc} \approx 1.5$, and that, if $N_{osc} \geq 3$, we can reject with 95% confidence that the observed log-periodicity comes from noise.
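The oscillation count and the scaling ratio are quick to compute from a fit. A sketch with illustrative inputs (omega = 6.36 and a 1996-1998 fitting window; the resulting numbers are for demonstration, not the paper's fitted results):

```python
import math

def scaling_ratio(omega):
    """Preferred scaling ratio lambda = exp(2*pi / omega) between successive peaks."""
    return math.exp(2.0 * math.pi / omega)

def n_osc(omega, tc, t_first, t_last):
    """Number of log-periodic oscillations visible in the window [t_first, t_last]."""
    return omega / (2.0 * math.pi) * abs(math.log(abs((tc - t_first) / (tc - t_last))))

lam = scaling_ratio(6.36)              # ~2.7, inside the typical range
N = n_osc(6.36, 98.57, 96.01, 98.08)
# N >= 3 would reject (at 95% confidence) that the log-periodicity is noise.
```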
To model the LPPL equation, we will use the usual restrictions on the parameters (the exponent β needs to be between 0 and 1, in order for the acceleration to occur and remain finite, but we will use a more stringent suggested range): 0.2 ≤ β ≤ 0.8 and 5 ≤ ω ≤ 15 (this corresponds to 1.5 < λ < 3.5). After fitting the Portuguese stock market index to the LPPL equation, it could be expected to obtain reasonable fits with low errors, as well as post-dictions of critical dates close to the real observed dates. However, it would be unrealistic to expect that the predicted tc coincides exactly with the time of the crash, given the not fully deterministic nature of crashes (Johansen et al. 2000). Another point considered by the latter is that false alarms could be unavoidable, but most endogenous crashes will be predicted. As a way to calculate the significance of values of β and ω in the usual range, Johansen and Sornette analyzed 400-week random intervals from 1910 to 1996 of the Dow Jones average and tried to fit the log-periodic equation. They were only able to find six data sets in the usual ranges, and all corresponded to periods before crashes: 1929, 1962, and 1987. These results strengthened the case for the reliability of this method of analysis. Regarding the log-periodic angular frequency, a fundamental value has been identified in a different analysis in the range ω1 ≈ 6.4 ± 1.5, with other peaks having been found at its harmonics (ωn ≈ n·ω1). However, even when the importance of the harmonics is expected to decrease exponentially, ω2 and ω3 have been observed to be very significant, this being more prevalent in individual stocks than in aggregate indexes, due to the additional noise of the data (Zhou and Sornette 2009).

4. Data and Methodology

Data comprises the prices of the PSI-20 Index from the Portuguese stock market retrieved from DataStream, starting at 31 December 1993.
We use the linear log-periodic model defined in Equation (1), and we created post-dictions of the 1998, 2007, and 2015 crashes using data from the start of the new trend until 8 months before the crash. Then, we created a new sample with the same starting point, advancing the last data point by 2 weeks. We repeated the procedure until 2 months before the crash point. In this way, we analyzed whether the results were robust to the sample changes, and whether they tended to converge to the same critical date tc (the date with the highest probability of a crash), or whether they continued changing, making the results unreliable. We restricted parameters following the usual conventions mentioned in Section 3, and fitted the model by minimizing the sum of root squared deviations (SRSD), using the Generalized Reduced Gradient (GRG2) Algorithm for optimizing nonlinear problems. We allowed for a search with high precision (small convergence value: 0.00001) that converged in probability to globally optimal solutions. As an additional security step, we ran the minimization from 400 different random starting points (different values for the multiple variables) to avoid being trapped in local minima.

5. Results and Discussion

5.1. Analysis of the 1998 Crash

For the 1998 Crash, we use as our first data point tfirst = 96.01 (2 January 1996), since it is the moment the upward trend began, and our earliest last data point considered for the fitting process will be tlast = 98.08 (29 January 1998). The global maximum for the period is located at 98.57 (22 April 1998). There is another peak before the big crash at 98.55 (20 July 1998). The minimum point after the crash is located at 98.76 (2 October 1998), as shown in Figure 1.

The data series for this period can be fitted to a GARCH (2, 2) model, where the main model is an autoregressive distributed lagged model with lag 1. The model was fitted to the differences of the log of the PSI-20 index.
Using these parameters, all the ARCH effects and correlations were included in the model, while one differentiation of the data was enough to achieve stationarity in the series. However, the histogram of the residuals was leptokurtic. Figure 2 presents the histogram of the Standardized Residuals for the GARCH (2, 2) model. In Table 1, we present the results of all the minimization processes, where SRSD is the sum of root squared deviations, NDP is the number of data points, and tlast is the last data point. All other parameters are defined in the model section. In all cases, the analysis started at 96.01 (2 January 1996). Highlighted in the table is the post-diction realized 3 months before the lowest point, which means that the tlast was 98.50 (the lowest data point in the period was 98.75, and there was a tenuous local peak at 98.55). We can observe that, when our analysis has samples ending from 98.08 to 98.17, the critical time is calculated as 98.18, well before the time of the crash (and almost immediately after the last data point); that seems to be a usual occurrence. It seems as if the log-periodic fit is coupling to some structure in the data that does not represent the complete series. A similar situation can be observed when the last data point is 98.21. However, a totally different result is obtained when the last data point is between 98.25 and 98.50, when the critical time starts being predicted at a period between 98.64 and 98.70 (the lowest data point is at 98.75), which is consistent with the fact that usually the predicted critical point is before the real date of the crash. For samples ending in 98.54 and 98.59, we end again in a situation where the critical time is predicted immediately after the last data point.
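The constrained multi-start minimization described in Section 4 can be sketched as follows. This is only an illustration on synthetic data: a seeded random search over the conventional bounds (0.2 ≤ β ≤ 0.8, 5 ≤ ω ≤ 15) stands in for the GRG2 solver, and the 400 random starting points of the paper are reduced here to a few thousand random candidates.

```python
import numpy as np

rng = np.random.default_rng(0)

def lppl(t, A, B, C, beta, omega, phi, tc):
    dt = tc - t
    return A + B * dt**beta * (1.0 + C * np.cos(omega * np.log(dt) + phi))

# Synthetic "log-price" series with known (illustrative) parameters
t = np.linspace(96.0, 98.4, 600)
y = lppl(t, 9.5, -0.7, 0.1, 0.5, 10.0, 1.0, 98.7) + rng.normal(0.0, 0.01, t.size)

def srsd(p):
    """Sum of root squared deviations, i.e. the sum of absolute residuals."""
    return np.sqrt((y - lppl(t, *p)) ** 2).sum()

# Bounds for A, B, C, beta, omega, phi, tc (tc must lie beyond the last data point)
lo = np.array([8.0, -2.0, -0.5, 0.2, 5.0, -np.pi, 98.45])
hi = np.array([11.0, 0.0, 0.5, 0.8, 15.0, np.pi, 99.50])

candidates = [(lo + hi) / 2.0] + [rng.uniform(lo, hi) for _ in range(5000)]
best = min(candidates, key=srsd)
```

A real implementation would polish each random start with a gradient-based solver; the point here is only the structure — a bounded search restarted from many points and scored by SRSD.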
Considering that the lowest level of the stock market was at 98.75, we would have been able to anticipate the upcoming crash, considering that we were able to obtain reasonable predictions while restricting the parameters to the conventional ranges for the method used (namely β between 0.2 and 0.8 and ω between 5 and 15). The value of λ ranged from 1.52 to 2.35, showing values around 2, as expected. As an additional point, we can mention that Nosc > 3, which would show with 95% of confidence that the log-periodic oscillations are not coming from noise. We will see similar results regarding oscillations in the following results tables.

Sensitivity Analysis of the Critical Times (tc) for 1996–1998: The Impact of Tlast

The fitting was realized with multiple tlast points to test the stability of the solutions: if the fit correctly captures the log-periodic frequency of the stock market, the solutions should tend to converge to some value. In this case, it is possible to observe in Figure 3 how the forecasts for tc tend to be grouped between 98.64 and 98.70, while a peak was registered at 98.55 and the lowest point at 98.76. It appears that the results are consistent with the real data and among themselves. Despite seven changes of the last data point, the forecasts stayed around the same values.

5.2. Analysis of the 2007 Crash

For the analysis of the 2007 crash in the PSI-20, we take tfirst = 103.15 (24 February 2003) as the first data point of the sample, since it is the point where the bullish trend started. Our earliest last data point is tlast = 106.96 (15 December 2006). A global peak can be observed at 107.54 (17 July 2007), a local minimum after the first downward movement at 107.74 (26 September 2007), and another lower minimum at 108.06 (23 January 2008), as shown in Figure 4.

As in the previous case, we will first run a time series analysis of the data, before proceeding with the Log-Periodic Analysis of the data series.
In the sample from 24 March 2003 to 17 May 2007, we were able to incorporate all the significant ARCH effects and correlation using a GARCH (1, 1) to model the variance and an ARIMA (1, 1, 1) model with intercept as the main equation. As in the previous analysis, our residuals are leptokurtic. Figure 5 presents the histogram of the Standardized Residuals for the GARCH (1, 1) model for the 2003–2007 period. In the case of the 2007 crash, we stress that the peak before the crash was registered at 107.54, while the lowest point was registered at 108.06. We highlighted in Table 2 the post-diction whose last data point, 107.29, is 3 months before the peak. We can observe that when tlast ranges from 106.96 to 107.09, the predicted critical time is shortly after the last data point. However, from 107.13 to 107.38, we observe how the predicted critical time starts ranging from 107.44 to 107.60. Those results would have allowed us to be aware of the upcoming crash, and to retreat from the market at near-peak prices. In this group of forecasts, it can be seen that λ ranges from 1.52 to 1.73, similar to the 1998 values, and closer to 2, the usual expected value.

Sensitivity Analysis of the Critical Times (tc) for 2007: The Impact of Tlast

Eleven forecasts were included in Figure 6. We can identify four data points where the forecast would end shortly after tlast; later forecasts start to become grouped in the range 107.44–107.51. Considering that the peak before the crash was at 107.54, this range would have allowed us to exit the market in a safe way. It is common that predicted dates are earlier than the realized crash.

5.3. Analysis of the 2015 Crash

For the analysis of the 2015 crash, the data starts at tfirst = 112.45 (13 June 2012) and our earliest last data point will be tlast = 115.09 (2 February 2015).
In Figure 7, we focus on the data after 115.09, where we can observe the existence of local valleys at 115.02 (7 January 2015), 115.52 (7 July 2015), and 115.65 (24 August 2015), while there are local peaks at 115.27 (9 April 2015) and 115.54 (14 July 2015). Later, the peaks and valleys from the real data will be compared with the results obtained from the LPPL fit. We fitted an Autoregressive Distributed Lag Model with one lag, to remove correlation, and one level of differentiation, to achieve stationarity. We do not include an intercept, since it was not statistically significant. The modelling of the variance was a GARCH (2, 2) that removed all the significant ARCH effects from the data. As in the previous two cases, the residuals showed excess kurtosis. Figure 8 shows the histogram of the Standardized Residuals for the GARCH (2, 2) model for the 2012–2015 period. The 2014–2015 period is interesting since it shows different peak points, where we obtain critical times not long after the last data point, but not immediately after either. There is a local maximum in the stock market at 115.30 (25 April 2015), and the lowest point after that peak is achieved at 115.65 (24 August 2015). Table 3 presents the results of our fittings. When the last data point ranges from 115.09 to 115.25, the critical time ranges from 115.20 to 115.29, around the local peak at 115.30. Nevertheless, when the last data point is around 3 months before the lowest data point but after the local peak (115.42 and 115.46), the critical time is indicated as 115.65, the point of the lowest data point. In this group of tests, λ ranges from 1.91 to 3.31, and the Nosc is greater than three starting with tc = 115.28.

Sensitivity Analysis of the Critical Times (tc) for 2015: The Impact of Tlast

In the case of the 2015 crisis, results seemed to vary more frequently.
However, the stock market behavior showed many curves, and the tc seem to converge to two different local peaks, as shown in Figure 9. Considering that the peak at 115.27 was not a global maximum, it is possible that the bubble had already started to deflate, which caused the fits to be less stable. An analysis considering a sample ending before the global peak in the 2012–2015 period could shed additional light. However, in this case, we intended to include as much information as possible, to see how well the fits adjusted to the latest real data.

5.4. Robustness Analysis

5.4.1. Log-Periodic Analysis of the Data Minimizing the Root Mean Squared Deviation

In the previous optimization processes, we obtained the best fit by minimizing the sum of the roots of the squared deviations between the log-linear model and the log of the data. However, the root mean squared deviation measure requires the minimization of the root of the sum of the squared differences divided by the number of data points:

$RMSD = \sqrt{\frac{\sum_{t=1}^{n}(\hat{y}_t - y_t)^2}{n}}$

where $\hat{y}_t$ are the predicted values, $y_t$ are the market values, and $n$ is the number of data points included in the sample. In this case, the number to minimize is going to be normalized around zero (and not around a number dependent on the number of data points). However, the results of both processes must be similar and equivalent. To test this approach, we conducted the minimization procedure on three selected data points for the stock market in 1998, 2007, and 2015. The selected tlast are 98.50, 107.29, and 115.42, respectively. Table 4 presents our results. For the 1998 crisis, we obtained tc = 98.75, while in our original analysis we obtained 98.70 (the lowest data point in the period was 98.75). In the case of the 2007 crisis, we obtained tc = 107.51, the same as in our original result (in this case, the peak before the crash occurred at 107.54).
For the 2015 crisis, we obtained tc = 115.54, while the original result was 115.65 (the lowest point after the local peak occurred at 115.65). Results were robustly similar. In the 1998 crisis, the minimization of the RMSD gave us a result closer to the lowest data point, while for 2007 and 2015, the results were closer with our original method of analysis.

5.4.2. Artificial Series and Critical Time Sensitivity

In order to test how sensitive the calculated tc was to small variations in the series, we created new artificial series based on real prices, and refitted the LPPL equation for the three time periods analyzed in this work. The procedure to generate the new series was partly based on suggestions by Sornette et al. We used the residuals obtained from the fitting of the equations with tlast = 98.50, 107.29, and 115.42. In each case, we reshuffled the residuals in blocks of 27 days (almost a month), in order to generate variability while preserving the local transient correlations that emerge near critical times. We then added each reshuffled residual to the log-price of that day (New Log-Price = Log-Price + Reshuffled Residual). We generated 10 series for each time period and refitted the LPPL equation. Our results, not tabulated, are as follows: For the 1998 Crash, the tc for the artificial series goes from 98.65 to 98.70. In the real series, the estimated tc was 98.70, showing that even when there was some variation in the results, these were not that different, and the obtained result was inside the band of the values obtained for the artificial series. For the 2007 Crash, the tc ranges from 107.50 to 107.51. In the real time series, the tc was 107.51. In this case, the band was very narrow, showing little alteration of the tc after small variations to the real data. For the 2015 Crash, the tc ranges from 115.54 to 115.55. In the real time series, the tc was 115.65. However, there is a local peak at 115.54 (14 July 2015) that the equation could be predicting correctly.
The difference, in this case, could arise from the fact that we were already predicting the later drawdowns, and not the most critical crash that had already happened. In fact, we were already in a downward trend, and a different data set (starting in 115.02, the beginning of the new mini-trend) could be more appropriate to use. Recent studies proposing similar approaches add additional support to our model. For example, Contessi and De Pace identified some instability in the stock markets of 18 countries during the first wave of COVID-19. Their evidence suggested a contagion from the crash of the Chinese stock market, associated in the literature with the COVID-19 pandemic (Liu et al. 2021). Song et al. further explored this instability with the Log-Periodic Power Law Singularity, concluding that, of the 18 markets studied, the crashes identified in London, Tokyo, and Hong Kong were the only ones that might have been caused by COVID-19. Shu et al. conducted a study based on LPPM but focused only on the U.S. stock market. Interestingly, the authors found that COVID-19 was not the main cause of the crash in 2020, with the data supporting the pre-existence of a bubble. However, the rise of the COVID-19 pandemic might have been the trigger that unleashed the stock crash. The evidence clearly supported LPPM as a high-value technique to identify bubbles and anticipate crashes, even if it hardly helps to understand the true nature of the trigger: just a trigger, a fundamental reason for the crash, or just one of many interrelated factors that explain the stock market behavior. Dai et al. reported that the uncertainty in U.S. economic policy associated with COVID-19 increased the risk of a crash. Their results also suggested that many other reasons contributed to the unexpected stock price jumps. In a sense, these results support the random walk hypothesis.
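The artificial-series recipe of Section 5.4.2 can be sketched as below. The block length (27 trading days) follows the text; the reshuffling keeps the multiset of residuals while permuting whole blocks, preserving local transient correlations. The data in the usage example are illustrative, not the PSI-20 series.

```python
import random
import numpy as np

def block_reshuffle(residuals, block_len=27, seed=0):
    """Permute the residual series in contiguous blocks of block_len days."""
    blocks = [residuals[i:i + block_len] for i in range(0, len(residuals), block_len)]
    random.Random(seed).shuffle(blocks)
    return np.concatenate(blocks)

def artificial_series(log_price, fitted, block_len=27, seed=0):
    """Paper's recipe: New Log-Price = Log-Price + Reshuffled Residual,
    with residuals taken from the LPPL fit."""
    log_price = np.asarray(log_price, dtype=float)
    residuals = log_price - np.asarray(fitted, dtype=float)
    return log_price + block_reshuffle(residuals, block_len, seed)
```

Refitting the LPPL equation on several such series (the paper uses 10 per period) then yields a band of tc values around the original estimate.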
Regardless of the natural conditions for a disaster (which LPPM successfully identifies), its occurrence depended on the new information arriving to the stock markets.

6. Conclusions

Bubbles, and the way they develop, have presented common features in different markets during different eras, despite technological changes and differences among countries. In all cases, when the bubble ends, analysts look for what caused the crash, and they will usually identify, with hindsight, some piece of news. However, it seems that financial markets are unstable by design, and that the oscillations of prices before crashes could be caused by the way markets are organized. It is possible that the causes of the crashes are the bubbles themselves, and the news is just a trigger, but the instability must exist to begin with. In this work, we present the theory of self-similar oscillatory finite-time singularities in finance and its application to the prediction of crashes. First, we discussed some terms and concepts, followed by a brief introduction to the theory, to analyze its different aspects and the challenges concerning the prediction aspect. We test the Log Periodic Power Model (LPPM) to analyze the Portuguese stock market in its crises of 1998, 2007, and 2015. Parameter values were in line with those observed in other markets. The model performs robustly for Portugal, which was a small market with liquidity issues and whose index was only composed of 20 stocks. Thus, we provided consistent evidence in favor of the proposed LPPM methodology. The LPPM methodology proposed here would have allowed us to avoid big losses in the 1998 Portuguese crash, and would have permitted us to sell at points near the peak in the 2007 crash. In the case of the 2015 crisis, we would have obtained a good indication of the moment when the lowest data point was going to be reached. Predictions of the critical time appeared to be stable with regard to the last data point included.
We obtained a string of tc values surrounding specific numbers consistent with the real data. The number of oscillations required to verify that the log-periodicity was not coming from noise (Nosc ≥ 3) signaled robust results that could be taken into consideration. Parameter values were in line with those observed in other markets. Regarding the fitting procedure, the minimization of SRSD or RMSD led to similar results. By using artificial series, we were able to provide a measure of how stable our critical time predictions are relative to small perturbations, and to generate bands that could be expected to land around the real critical time. Altogether, these results support the potential of our approach to study bubbles and crashes. If the model performed robustly for Portugal, which was a small market with liquidity issues and whose index was only composed of 20 stocks, then our evidence provides consistent support in favor of the proposed LPPM methodology. The use of the LPPM methodology could be helpful in order to prevent major losses in diversified stock portfolios whenever we are in a bubble situation. LPPM can help to identify in advance the signs of a bubble that is becoming increasingly unstable and that can blow up at any unexpected trigger. This is of relevance for policy makers and market supervisors, in the sense that it can help to implement risk-mitigating actions. Presumably, we can also apply LPPM to smaller crashes. Baker et al. used small ranges of variation (+/−2.5%) to identify small but out-of-the-ordinary jumps. It would be relevant to test LPPM on smaller crashes and investigate possible differences. LPPM can also be extended to other phenomena. For example, it can be applied to individual stocks. In this case, it is necessary to remember that the series are going to be much noisier, and it could be necessary to allow more degrees of freedom for the parameters to achieve a good fit.
Given that Log-Periodic Analysis can also be applied to the Gross National Product, a joint analysis of both conditions can be undertaken. Our research is limited to the study of critical crashes in the Portuguese stock market during the last three crises. Our results could be extended to study the impact of COVID-19-induced stock returns. The data is still difficult to analyze, in the case of the Portuguese market, since it is unclear whether we have reached the critical point. Further research can look at the predictive ability of LPPM in this context.

Author Contributions

Conceptualization: all authors; methodology: J.V.Q.B. and P.R.V.; software: J.V.Q.B.; validation: all authors; formal analysis: T.C.G., J.V.Q.B. and P.R.V.; investigation: all authors; resources: T.C.G., P.R.V. and P.V.M.; data curation: J.V.Q.B., T.C.G. and P.R.V.; writing—original draft preparation: T.C.G. and J.V.Q.B.; writing—review and editing: all authors; visualization: all authors; supervision: P.R.V. and P.V.M.; project administration: T.C.G.; funding acquisition: T.C.G., P.R.V. and P.V.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by FCT—Fundação para a Ciência e Tecnologia (Portugal), grant number UID/SOC/04521/2020, via Advance/CSG.

Data Availability Statement

Restrictions apply to the availability of these data. Data was obtained from DataStream (Thomson Reuters) and are available from the authors with their permission.

Conflicts of Interest

The authors declare no conflict of interest.

Table 1. Fitting of the Log-Periodic Linear Model for the 1996–1998 period, for samples with different ending points (tlast).
#    A      B      C      β     φ        tc     ω      λ     SRSD   SRSD/NDP  Tlast  Tlast Natural     Nosc   NDP
1    9.41   −0.68  0.14   0.51  199.91   98.18  7.35   2.35  12.57  0.02      98.08  29 January 1998   3.60   542
2    9.41   −0.68  0.14   0.51  199.91   98.18  7.35   2.35  12.74  0.02      98.12  13 February 1998  4.23   553
3    9.41   −0.69  0.14   0.51  199.91   98.18  7.42   2.33  12.88  0.02      98.17  2 March 1998      6.05   564
4    9.69   −0.93  0.09   0.42  199.42   98.28  8.91   2.02  13.53  0.02      98.21  17 March 1998     4.84   575
5    11.78  −2.86  −0.02  0.20  74.86    98.64  14.09  1.56  13.69  0.02      98.25  1 April 1998      4.27   586
6    11.79  −2.86  −0.02  0.20  74.91    98.64  13.96  1.57  14.09  0.02      98.29  16 April 1998     4.49   597
7    10.94  −1.99  0.03   0.30  109.37   98.66  15.00  1.52  14.83  0.02      98.33  1 May 1998        4.95   608
8    10.94  −1.99  0.03   0.30  109.36   98.66  15.00  1.52  15.36  0.02      98.38  18 May 1998       5.33   619
9    10.87  −1.92  0.03   0.30  109.35   98.66  15.00  1.52  16.12  0.03      98.42  2 June 1998       5.70   630
10   10.61  −1.65  0.04   0.35  109.26   98.68  15.00  1.52  17.93  0.03      98.46  17 June 1998      5.97   641
11   10.57  −1.60  0.04   0.36  109.13   98.70  15.00  1.52  20.63  0.03      98.50  2 July 1998       6.21   652
12   9.61   −0.71  −0.08  0.74  194.99   98.55  15.00  1.52  22.95  0.03      98.54  17 July 1998      12.93  663
13   9.61   −0.68  −0.08  0.77  194.77   98.60  15.00  1.52  24.27  0.04      98.59  3 August 1998     13.05  674

Table 2. Fitting of the Log-Periodic Linear Model for the 2003–2007 period, for samples with different ending points (tlast).
#    A     B      C     β     φ       tc      ω      λ     SRSD   SRSD/NDP  Tlast   Tlast Natural     Nosc   NDP
1    9.33  −0.25  0.16  0.76  −20.35  107.02  11.59  1.72  23.00  0.02      106.96  15 December 2006  7.65   994
2    9.33  −0.25  0.16  0.75  −20.29  107.01  11.50  1.73  23.09  0.02      107.00  1 January 2007    11.11  1005
3    9.35  −0.27  0.16  0.72  −20.64  107.05  12.23  1.67  23.32  0.02      107.04  16 January 2007   12.55  1016
4    9.37  −0.28  0.15  0.70  −20.90  107.10  12.57  1.65  23.87  0.02      107.09  1 February 2007   11.54  1028
5    9.52  −0.35  0.11  0.64  −22.52  107.44  14.48  1.54  25.43  0.02      107.13  15 February 2007  6.02   1038
6    9.54  −0.36  0.11  0.63  −22.71  107.47  14.72  1.53  25.79  0.02      107.16  1 March 2007      6.18   1048
7    9.54  −0.36  0.11  0.63  −22.72  107.47  14.75  1.53  25.89  0.02      107.21  16 March 2007     6.52   1059
8    9.54  −0.35  0.11  0.63  −22.88  107.50  14.97  1.52  26.00  0.02      107.25  2 April 2007      6.85   1070
9    9.54  −0.35  0.11  0.64  −22.92  107.51  15.00  1.52  26.25  0.02      107.29  17 April 2007     7.20   1081
10   9.54  −0.35  0.11  0.64  −22.95  107.51  15.00  1.52  26.33  0.02      107.33  2 May 2007        7.62   1092
11   9.53  −0.35  0.11  0.64  −22.94  107.51  15.00  1.52  26.47  0.02      107.38  17 May 2007       8.32   1103

Table 3. Fitting of the Log-Periodic Linear Model for the 2012–2015 period, for samples with different ending points (tlast).
#    A     B     C       β     φ       tc      ω     λ     SRSD   SRSD/NDP  Tlast   Tlast Natural     Nosc  NDP
1    8.37  0.31  0.56    0.20  31.84   115.20  5.64  3.05  34.15  0.05      115.09  2 February 2015   2.88  685
2    8.37  0.31  0.55    0.20  31.67   115.25  5.92  2.89  34.61  0.05      115.13  16 February 2015  2.99  695
3    8.37  0.31  0.55    0.20  31.54   115.28  6.19  2.76  34.90  0.05      115.17  2 March 2015      3.13  705
4    8.40  0.28  0.60    0.25  0.17    115.28  6.23  2.74  35.05  0.05      115.21  16 March 2015     3.63  715
5    8.63  0.01  −21.10  0.23  53.27   115.29  5.25  3.31  36.56  0.05      115.25  2 April 2015      3.57  728
6    8.30  0.39  0.40    0.20  −13.07  115.47  8.41  2.11  36.86  0.05      115.29  16 April 2015     3.81  736
7    8.65  0.01  −20.07  0.23  52.95   115.39  6.10  2.80  37.54  0.05      115.34  4 May 2015        3.95  747
8    8.42  0.26  0.57    0.20  −13.45  115.54  8.53  2.09  37.86  0.05      115.38  18 May 2015       4.03  757
9    8.42  0.27  0.53    0.20  30.04   115.65  9.72  1.91  38.77  0.05      115.42  2 June 2015       4.09  768
10   8.45  0.24  0.59    0.20  −13.96  115.65  9.55  1.93  39.31  0.05      115.46  16 June 2015      4.29  778
11   8.66  0.01  −19.07  0.27  52.50   115.51  7.21  2.39  40.29  0.05      115.50  2 July 2015       6.70  790

Table 4. Fitting of the Log-Periodic Linear Model by minimizing the RMSD, for the selected samples from the 1998, 2007, and 2015 analyses.

Parameters     1998         2007           2015
A              10.58        9.55           8.55
B              −1.58        −0.37          0.13
C              0.04         −0.11          1.20
β              0.36         0.61           0.20
φ              108.75       11.45          30.48
tc             98.75        107.51         115.54
ω              15.00        15.43          7.76
RMSD           0.04         0.03           0.06
tlast          98.50        107.29         115.42
tlast natural  2 July 1998  17 April 2007  2 June 2015
NDP            652          1081           768

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.

Gonçalves, T.C.; Borda, J.V.Q.; Vieira, P.R.; Matos, P.V. Log Periodic Power Analysis of Critical Crashes: Evidence from the Portuguese Stock Market. Economies 2022, 10, 14. https://doi.org/10.3390/economies10010014
Pythagoras' Theorem: a^2 + b^2 = c^2

Historical Reference: Attributed to Pythagoras, an ancient Greek mathematician.

Meaning: Describes the relationship between the lengths of the sides of a right triangle, where a and b are the lengths of the two shorter sides, and c is the length of the hypotenuse.

Implication: Fundamental in geometry and trigonometry, providing a method to calculate unknown side lengths in right triangles.

Newton's Law of Universal Gravitation: F = G(m[1]m[2])/r^2

Historical Reference: Proposed by Sir Isaac Newton in his work "Philosophiæ Naturalis Principia Mathematica" in 1687.

Meaning: States that every point mass attracts every other point mass with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance r between their centers.

Implication: Fundamental in classical mechanics, providing a quantitative description of the gravitational force between two objects.

Variation of Gravity with Height: g[h] = g(R/(R+h))^2

Historical Reference: Derived from Newton's law of universal gravitation.

Meaning: The acceleration due to gravity at a height h above the earth's surface is g[h], where R is the radius of the earth. This form matters when h is not negligible compared to R.

Implication: Provides insight into the decrease in gravitational acceleration as altitude increases, influencing phenomena such as weightlessness in space travel and atmospheric dynamics.

Coulomb's Law: F = ke(q[1]q[2])/r^2

Historical Reference: Developed by Charles-Augustin de Coulomb in the 18th century, Coulomb's law describes the electrostatic force between two charged particles.

Meaning: Organizes energy into inverse square concentrations, illustrating how the strength of the electrostatic force decreases with the square of the separation distance.
Implication: Fundamental in understanding the behavior of electric charges and the interactions between them, forming the basis for electrostatics and contributing to the development of electromagnetic theory.

Gauss's Law for Electric Fields: ∇⋅E = ρ/ε[0]

Historical Reference: Formulated by Carl Friedrich Gauss in the early 19th century, Gauss's law for electric fields relates the electric flux through a closed surface to the charge enclosed within the surface.

Meaning: In vector calculus, this law describes the flow of electric field lines through a closed surface, providing insights into the distribution of electric charge.

Implication: Essential in analyzing the behavior of electric fields and charges, Gauss's law helps in solving electrostatic problems and understanding the principles of electric field behavior in various contexts.

Gauss's Law for Magnetic Fields: ∇⋅B = 0

Historical Reference: Also attributed to Gauss. The name "Gauss's law for magnetism" is not universally used; the law is also called "Absence of free magnetic poles."

Meaning: It states that the magnetic field B has divergence equal to zero. It is equivalent to the statement that magnetic monopoles do not exist.

Implication: The law in this form states that for each volume element in space, there are exactly the same number of "magnetic field lines" entering and exiting the volume. No total "magnetic charge" can build up at any point in space.

Gauss's Law for Gravitation: ∮ g·dA = −4πG·M[enc]

Meaning: States that the gravitational flux through any closed surface is proportional to the enclosed mass M[enc], providing a mathematical representation of gravitational interactions.
Implication: Fundamental in gravitational theory, Gauss's law for gravitation helps in understanding the distribution of gravitational fields and predicting gravitational effects based on enclosed mass.

Faraday's Law of Electromagnetic Induction: E = −d[ΦB]/d[t]

Historical Reference: Discovered by Michael Faraday in the 19th century, Faraday's law of electromagnetic induction describes how a changing magnetic field induces an electromotive force (EMF) or electric field.

Meaning: States that a changing magnetic flux ΦB induces an electromotive force (the minus sign, Lenz's law, gives the direction of the induced EMF), demonstrating the connection between magnetic and electric phenomena.

Implication: Crucial in the development of electromagnetism and electrical engineering, Faraday's law explains the principles behind generators, transformers, and various electrical devices.

Resonance: f[0] = 1/(2π√(LC))

Resonance: In a circuit, resonance occurs when the frequency of an applied alternating current (AC) matches the natural frequency of the circuit.

LC Circuit: In an LC circuit, the inductor stores energy in a magnetic field, while the capacitor stores energy in an electric field. When an AC voltage is applied to the circuit, these components interact to produce oscillations.

Meaning: At this frequency, the impedance of the (series) circuit is at a minimum, and the current flowing through it is at a maximum.

Natural Frequency: The natural frequency of an LC circuit is determined by the values of its inductance and capacitance. It is the frequency at which the circuit would oscillate if left to itself after being disturbed.

Resonant Frequency: The resonant frequency of an LC circuit is the frequency at which it exhibits resonance. At this frequency, the inductive reactance (X[L]) of the inductor and the capacitive reactance (X[C]) of the capacitor are equal in magnitude but opposite in sign, so they cancel each other out, resulting in a purely resistive circuit.
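The resonant-frequency formula can be checked numerically. The component values below (L = 1 mH, C = 1 µF) are arbitrary examples, not taken from the text:

```python
import math

def resonant_frequency(L, C):
    """f0 = 1 / (2*pi*sqrt(L*C)) for an ideal (lossless) LC circuit."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def reactances(f, L, C):
    """Inductive and capacitive reactance magnitudes at frequency f."""
    return 2.0 * math.pi * f * L, 1.0 / (2.0 * math.pi * f * C)

f0 = resonant_frequency(1e-3, 1e-6)   # ~5.03 kHz for L = 1 mH, C = 1 uF
xl, xc = reactances(f0, 1e-3, 1e-6)   # equal at resonance, so they cancel
```

At f0 both reactances reduce to √(L/C), which is why the circuit looks purely resistive at resonance.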
Ohm’s Law: E = IR
Historical Reference: Named after Georg Simon Ohm, Ohm’s law defines the relationship between electric potential (voltage), current, and resistance in an electrical circuit.
Meaning: States that the electric potential (voltage) across a resistor is directly proportional to the current flowing through it and the resistance of the resistor.
Implication: Fundamental in circuit analysis and electrical engineering, Ohm’s law governs the behavior of electrical circuits and is used extensively in the design and analysis of electrical systems.

Stefan-Boltzmann Law: E = σT^4
Historical Reference: Developed by Josef Stefan and Ludwig Boltzmann in the late 19th century, the Stefan-Boltzmann law relates the radiant power emitted by a surface to its temperature.
Meaning: States that the total radiant power emitted by a black body per unit surface area is directly proportional to the fourth power of its absolute temperature.
Implication: Essential in thermodynamics and astrophysics, the Stefan-Boltzmann law helps in understanding the energy radiation from various objects, including stars and planets.

Maxwell’s Equations
Historical Reference: Formulated by James Clerk Maxwell in the 19th century, Maxwell’s equations describe the behavior of electric and magnetic fields in classical electromagnetism.
First: Gauss’s law for static electric fields – Electric fields emanate from electric charges: wherever there is an electrical charge, there is an electrical field emanating from it. The strength of this field is inversely proportional to ε[0]. Static charges only affect other charges, not magnets:
∇·E = ρ/ε[0]
Where ∇·E is the divergence of the electric field, ε[0] is the vacuum permittivity and ρ is the total volume charge density (charge per unit volume).
Second: Gauss’s law for static magnetic fields – There are no magnetic charges (monopoles) in the universe. There is always as much field pointed in as there is pointed out.
Static magnets will affect only other magnets, not charges:
∇·B = 0
Where ∇·B is the divergence of the magnetic field.
Third: Faraday’s law states that a changing (in time) magnetic field produces an electric field. A moving magnet will create an electrical field, and it will affect a charge, creating a current of electricity.
∇×E = −∂B/∂t
Where E is the electric field and ∂B/∂t is the rate of change of the magnetic field; the induced electromotive force (emf) equals the negative rate of change of the magnetic flux.
Fourth: The Ampère-Maxwell law states that a changing electric field produces a magnetic field. The first term describes how a moving charge generates a magnetic field: two wires will be attracted to each other if they have a current flowing through them. The second term says a magnetic field is also created by changing electrical fields (i.e., an electromagnetic wave is propagated). This term recognizes that the current flow through capacitors (the displacement current) can create magnetic fields. The idea that magnetic fields are created by changing electrical fields was Maxwell’s addition to Ampère’s law. This fourth equation is a wave equation.
∇×B = μ[0]J + μ[0]ε[0] ∂E/∂t
Where: E is the electric field, B is the magnetic field, ρ is the charge density, J is the current density, ε[0] is the permittivity of free space, and μ[0] is the permeability of free space.

Equations involving the speed and density of energy
Speed of energy: c^2 = 1/(μ[0]ε[0]), c = 1/√(μ[0]ε[0]), η[space] = 1/√(μ[0]ε[0]) where μ[0] = Z[0]^2ε[0]
Impedance of space: Z[0] = √(μ[0]/ε[0])
Admittance of space (the rate at which energy is accepted by time): Y[0] = √(ε[0]/μ[0])
These equations form the bedrock of classical electromagnetism and are crucial for understanding the behavior of electric and magnetic fields.
Where: Y[0] represents the Admittance of a Field to a Change in Energy Density, ε[0] represents Permittivity, and μ[0] represents Permeability.
This equation unveils the dynamic interplay between energy and the electromagnetic (EM) field.
It offers insights into how time accommodates and interacts with energy concentrations, shaping the gravitational phenomena we observe.

The Energy-Momentum Equation: E = √(m^2c^4 + p^2c^2)
Where: E represents energy, m represents mass, c represents the speed of light in a vacuum, and p is the momentum of the object.
Then, translated to energy at rest (p = 0), this gives us the famous equation below.

Einstein’s Mass-Energy Equivalence in Quantum Admittance: E = m/(Z[0]^2ε[0]^2)
This equation expresses the relationship between mass (m) and energy (E), proposing that the energy of an object is equal to its mass multiplied by a new constant, where the phase angle of the wave (ϕ) serves as a fundamental constant, rather than the speed of light (c).

Einstein’s Mass-Energy Equivalence: E = mc^2
Historical Reference: Proposed by Albert Einstein in his theory of special relativity in 1905, the mass-energy equivalence principle states that mass and energy are equivalent and interchangeable.

Einstein’s Field Equations: R[μν] − (1/2)g[μν]R + Λg[μν] = (8πG/c^4)T[μν]
The Einstein Field Equations are ten equations, contained in the tensor equation shown above, which describe gravity as a result of spacetime being curved by mass and energy. The Einstein tensor G[μν] is determined by the curvature of space and time at a particular point in spacetime, and is equated with the energy and momentum at that point. The solutions to these equations are the components of the metric tensor g[μν], which specifies the spacetime geometry. The inertial trajectories of particles can then be found using the geodesic equation.
Despite extensive experimental validation, these equations face fundamental clashes with quantum mechanics. They assume precise localization of energy and momentum for particles, conflicting with the uncertainty principle of quantum theory, which limits our ability to know both properties simultaneously. This contradiction necessitates resolution. The equations seem to have a fundamental flaw.
Even though both the theory and its equations have been repeatedly confirmed through experiments, they clash with the well-established principles of quantum mechanics. This clash isn’t just theoretical – it applies to any experiment in a lab. The equations assume we can pinpoint the energy and momentum of a particle at any exact point in space and time. However, the uncertainty principle, a cornerstone of quantum theory, tells us this is impossible. There’s a fundamental limit to how precisely we can know both a particle’s energy and momentum.

Planck’s Equation: E = hf
Historical Reference: Introduced by Max Planck in 1900.
Meaning: Relates the energy of a photon to its frequency, where h is Planck’s constant.
Implication: Foundational in quantum mechanics, providing a key relationship between the energy and frequency of electromagnetic radiation.

Schwarzschild Radius: R[s] = 2GM/c^2
Historical Reference: The Schwarzschild radius was named after the German astronomer Karl Schwarzschild, who calculated this exact solution for the theory of general relativity in 1916.
Meaning: The Schwarzschild radius or the gravitational radius is a physical parameter in the Schwarzschild solution to Einstein’s field equations that corresponds to the radius defining the event horizon of a Schwarzschild black hole.
Implication: It is a characteristic radius associated with any quantity of mass.

Lorentz Factor: γ = 1/√(1 − v^2/c^2)
Historical Reference: Introduced in the context of special relativity by Hendrik Lorentz and confirmed by Albert Einstein.
Meaning: The Lorentz factor (γ) quantifies the effect of time dilation and length contraction on moving objects relative to a stationary observer, increasing with velocity according to the formula γ = 1/√(1 − v^2/c^2), where v is the velocity and c is the speed of light.
Implication: The Lorentz factor plays a crucial role in relativistic mechanics, accounting for the observed phenomena of time dilation and length contraction at speeds approaching the speed of light. It’s essential for understanding the behavior of particles in particle accelerators, the stability of high-speed spacecraft, and the fundamentals of cosmology.

Lorentz Force Law: F = qE + qv×B
Historical Reference: Historians suggest that the law is implicit in a paper by James Clerk Maxwell, published in 1865. Hendrik Lorentz arrived at a complete derivation in 1895, identifying the contribution of the electric force a few years after Oliver Heaviside correctly identified the contribution of the magnetic force.
Meaning: Lorentz’s force law gives the mathematical form, along with the physical meaning, of the forces acting on charged particles that are traveling through a region containing electric as well as magnetic fields.
Implication: It says that the electromagnetic force on a charge q is a combination of (1) a force in the direction of the electric field E (proportional to the magnitude of the field and the quantity of charge), and (2) a force at right angles to both the magnetic field B and the velocity v of the charge (proportional to the magnitude of the field, the charge, and the velocity).

Lorentz Magnetic Force (Torque) Equation: τ = q(r×B)
Historical Reference: This concept is derived from the Lorentz force law and is fundamental in understanding the rotational motion of charged particles in magnetic fields.
Meaning: This equation describes the additional rotational force experienced by a charged particle moving through a magnetic field. It accounts for the interaction between the magnetic field and the motion of the particle, resulting in a twisting or rotational motion.
Implication: The magnetic Lorentz force (torque) is essential for understanding phenomena such as the behavior of charged particles in cyclotrons, where particles are accelerated in circular paths by magnetic fields, as well as the dynamics of magnetic materials and electromagnetic devices.

Lorentz Time Dilation Equation: Δt′ = Δt/√(1 − v^2/c^2)
Historical Reference: Developed by Hendrik Lorentz as part of his transformations to account for the effects of relative motion between inertial frames of reference, contributing to the foundation of special relativity.
Meaning: The Lorentz time dilation equation describes how time intervals appear to be dilated or stretched when observed from a frame of reference moving at a significant fraction of the speed of light relative to a stationary frame.
Implication: This equation has profound implications for the nature of time and motion, leading to phenomena such as time dilation in high-speed travel and relativistic effects in particle accelerators and astrophysical phenomena.

Schwarzschild Metric: r = 2Gm/c^2
Historical Reference: Developed by Karl Schwarzschild in 1916 as a solution to Einstein’s field equations of general relativity.
Meaning: Represents the Schwarzschild radius (r), which defines the size of the event horizon of a non-rotating black hole. It relates the mass (m) of an object to its Schwarzschild radius, gravitational constant (G), and the speed of light (c).
Implication: Fundamental in the study of black holes and gravitational phenomena, providing a theoretical framework for understanding the curvature of spacetime around massive objects.

Schrödinger Equation: HΨ = iℏ(∂/∂t)Ψ
Historical Reference: Proposed by Erwin Schrödinger in 1926 as part of the development of quantum mechanics.
Meaning: Represents the fundamental wave equation of quantum mechanics, where H is the Hamiltonian operator, Ψ is the wave function, i is the imaginary unit, ℏ is the reduced Planck constant, and ∂/∂t represents the partial derivative with respect to time.
Implication: Describes the behavior of quantum systems, including the time evolution of wave functions, and serves as a cornerstone in quantum mechanics, facilitating the prediction of particle behavior and the interpretation of experimental results.

Redshift: z = (λ[obs] − λ[emit]) / λ[emit]
Where: λ[obs] is the wavelength of the radiation as observed by the observer, and λ[emit] is the wavelength of the radiation as emitted by the source.
Meaning: A positive value of z indicates a redshift, meaning the source is moving away from the observer. A negative value indicates a blueshift, meaning the source is moving towards the observer.
Doppler Effect: Redshift is a manifestation of the Doppler effect, a phenomenon where the perceived frequency of a wave changes due to relative motion between the source and the observer. In the case of light, a moving source causes a shift in wavelength.
Expansion of the Universe: One of the most significant applications of redshift is in cosmology. Astronomers have observed that the light from distant galaxies is redshifted, indicating that these galaxies are moving away from us. This observation led to the development of the Big Bang theory, which posits that the universe began as a hot, dense point and has been expanding ever since.
Measuring Distance: Redshift can also be used to estimate the distance to astronomical objects. The more redshifted the light from a galaxy, the farther away it is. This relationship is based on Hubble’s Law, which states that the recessional velocity of a galaxy is proportional to its distance.
By measuring the redshift of light from distant objects, astronomers can gain insights into the universe’s age, its rate of expansion, and the distribution of matter and energy within it.
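The redshift definition and Hubble’s law above can be combined in a short numerical sketch. The wavelengths and the H0 value below are illustrative assumptions, not figures from the text:

```python
def redshift(lambda_obs, lambda_emit):
    """z = (lambda_obs - lambda_emit) / lambda_emit; z > 0 means the source recedes."""
    return (lambda_obs - lambda_emit) / lambda_emit

# Hypothetical example: H-alpha line, 656.3 nm at rest, observed at 662.9 nm.
z = redshift(662.9, 656.3)
print(f"z = {z:.5f}")   # positive, so this source is moving away

# For small z the Doppler recession velocity is roughly v = z * c,
# and Hubble's law v = H0 * d then gives a rough distance estimate.
c = 299_792.458          # speed of light, km/s
H0 = 70.0                # assumed Hubble constant, km/s per Mpc (illustrative)
v = z * c
d = v / H0
print(f"v ≈ {v:.0f} km/s, d ≈ {d:.1f} Mpc")
```

The linear v = z·c approximation only holds for small z; at high redshift a full relativistic or cosmological treatment is needed.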
How do you explain time relativity?
In the Special Theory of Relativity, Einstein determined that time is relative—in other words, the rate at which time passes depends on your frame of reference.

How do you solve relativity problems?
How to Solve Special Relativity Problems:
1. Draw a Picture. In Special Relativity problems, you relate the observations made by two observers in different reference frames measuring the same thing.
2. Select the Relation. There are three key relations for Special Relativity.
3. Solve the Problem.
4. Understand the Results.

What are the two basic principles of relativity?
The theory is based on two fundamental principles:
Relativity – The laws of physics do not change, even for objects moving in inertial (constant-speed) frames of reference.
The speed of light – It is the same for all observers regardless of their relative motion to the source of light.

Are there any real problems with special relativity?
Special Relativity Questions & Problems (Answers):
1. If you were on a spaceship travelling at 0.50c away from a star, at what speed would the starlight pass you? (The speed of light: 3.00 x 10^8 m/s)
2. Does time dilation mean that time actually passes more slowly in moving reference frames or that it only seems to pass more slowly?

Which postulate of special relativity does not fit with classical physics?
1. Which of Einstein’s postulates of special relativity includes a concept that does not fit with the ideas of classical physics? Explain.
2. Is Earth an inertial frame of reference? Is the Sun? Justify your response.
3. What should the relativistic effect of γ be? If relativistic effects are to be less than 3%, then γ must be less than 1.03. At what relative velocity is γ = 1.03?
31. (a) At what relative velocity is γ = 1.50? (b) At what relative velocity is γ = 100?
32. (a) At what relative velocity is γ = 2.00? (b) At what relative velocity is γ = 10.0?
33. Unreasonable Results.

Why are relativistic effects present in cars and airplanes?
Relativistic effects such as time dilation and length contraction are present for cars and airplanes. Why do these effects seem strange to us?
9. Suppose an astronaut is moving relative to the Earth at a significant fraction of the speed of light.
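Several of the problems above ask for the speed at which γ reaches a given value. Inverting γ = 1/√(1 − v²/c²) gives v/c = √(1 − 1/γ²); a small Python sketch:

```python
import math

def gamma(beta):
    """Lorentz factor for beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

def beta_for_gamma(g):
    """Invert gamma = 1/sqrt(1 - beta^2):  beta = sqrt(1 - 1/gamma^2)."""
    return math.sqrt(1.0 - 1.0 / g ** 2)

# Speeds at which gamma reaches the values asked about in the problems above:
for g in (1.03, 1.50, 2.00, 10.0, 100.0):
    print(f"gamma = {g:6.2f}  ->  v = {beta_for_gamma(g):.5f} c")
```

For γ = 1.03 this gives v ≈ 0.24c: relativistic corrections stay below about 3% until roughly a quarter of light speed, which is why they are negligible for cars and airplanes.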
Home Loan Calculation Formula

Interest on your home loan is usually calculated daily and then charged to you at the end of each month. Your bank will take the outstanding loan amount at the end of each day to work out that day's interest.

Use our free mortgage calculator to get an estimate of your monthly mortgage payments, including principal and interest, taxes and insurance, PMI, and HOA.

Mortgage Formulas: P = L[c(1 + c)^n]/[(1 + c)^n - 1], where L is the loan amount, c is the monthly interest rate, and n is the number of payments. A companion formula is used to calculate the remaining loan balance (B) of a fixed-payment loan after p months.

The formula to determine the home loan EMI amount uses:
- E – EMI amount
- P – Principal amount
- R – Rate of interest
- N – Loan tenure

A good rule of thumb is that banks will loan you up to 30% of your gross income annually. For instance, let's say your annual income is RM50, 30% of that.

Open Excel and select a cell to display the EMI. Use the formula "=PMT(interest rate/12, tenure in months, loan amount)". Replace "interest rate" with your own figure.

The major variables in a mortgage calculation include loan principal, balance, periodic compound interest rate, number of payments per year, and total number of payments.

Calculate home loan payments for Purchase or Refinance. Take the guesswork out of your mortgage payments. Use this mortgage loan calculator to generate an estimate.

What is the EMI for a 20 lakhs home loan? The EMI amount for 20 lakhs for year tenure is Rs. 17, You can use the home loan EMI calculator given in IDFC.

The EMI calculator uses the formula EMI = [P x R x (1+R)^N]/[(1+R)^N - 1] to compute the EMI amount. Enter the principal loan amount you need, a reasonable interest rate, and the tenure.

What Is a Fixed-Rate Loan? How Do I Calculate It?
- Number of periodic payments (n) = payments per year times number of years
- Periodic interest rate (i)

Calculate your monthly VA mortgage payments with taxes, insurance and the VA funding fee with this VA loan calculator from Veterans United Home Loans.

Decoding the home loan calculator formula: here, E is the EMI amount, P is the principal, R is the interest rate, and N is the loan term.

Formula to determine the Home Loan EMI amount:
- P is the principal loan amount
- r is the monthly interest rate (annual rate divided by 12)
- n is the number of monthly payments

Determine what you could pay each month by using this mortgage calculator to calculate estimated monthly payments and rate options for a variety of loan terms.

Formula for EMI Calculation: EMI = P x R x (1+R)^N / [(1+R)^N - 1], where:
- P = Principal loan amount
- N = Loan tenure in months
- R = Monthly interest rate

The home loan EMI calculator considers three variables: the loan amount, the interest rate, and the tenure. The interest rate could either be fixed or floating.

To calculate your DTI, add all your monthly debt payments, such as credit card debt, student loans, alimony or child support, auto loans and your projected mortgage payment.

Calculate Home Loan EMI: Use our Home Loan Calculator to get insights on your loan plan! Just select an amount, set an approximate interest rate and loan tenure.

You can figure out how much equity you have in your home by subtracting the amount you owe on all loans secured by your house from its appraised value.

Interest Rate Calculation Formula: This calculation is based on the textbook interest rate formula. You can use this simple formula to calculate Home Loan interest.

Payments: Multiply the years of your loan by 12 months to calculate the total number of payments. A 30-year term is 360 payments (30 years x 12 months = 360 payments).

Free loan calculator to find the repayment plan, interest cost, and amortization schedule of conventional amortized loans and deferred payment loans.

Lenders multiply your outstanding balance by your annual interest rate and divide by 12 to determine how much interest you pay each month.

Your monthly mortgage payment depends on a number of factors, like purchase price, down payment, interest rate, loan term, property taxes and insurance.

Calculate your monthly USDA home loan payment to get a breakdown of estimated USDA mortgage fees, taxes, and insurance costs.

EMI is computed based on an unequal distribution of the principal amount and interest. In the initial phase of a home loan, the majority of the EMI constitutes the interest component.

Calculate Your Monthly Mortgage Payment: Use our mortgage calculator to determine the monthly principal and interest payment based on the information you provide.
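The EMI formula quoted above, EMI = [P × R × (1+R)^N]/[(1+R)^N − 1], can be implemented directly. The loan figures below are assumptions for illustration, not values from the page:

```python
def emi(principal, annual_rate_pct, years):
    """EMI = P * r * (1 + r)**n / ((1 + r)**n - 1), r = monthly rate, n = months."""
    r = annual_rate_pct / 100.0 / 12.0    # monthly interest rate
    n = years * 12                        # total number of monthly payments
    factor = (1.0 + r) ** n
    return principal * r * factor / (factor - 1.0)

# Hypothetical loan (assumed figures): Rs 20,00,000 principal
# at 8.5% per annum for a 20-year tenure.
payment = emi(2_000_000, 8.5, 20)
print(f"Monthly EMI: Rs {payment:,.2f}")
```

Early in the schedule most of each payment is interest: the first month's interest alone is P × r, and only the remainder of the EMI reduces the principal, which matches the "unequal distribution" point above.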
The cylindrical container - math word problem (15433)

The cylindrical container has a base area of 300 cm^2 and a height of 10 cm. It is 90% filled with water. We gradually insert metal balls into the water, each with a volume of 20 cm^3. After inserting how many balls for the first time does water flow over the edge of the container?

Correct answer:
Question Video: Solving Proportion Equations to Find the Value of an Unknown
Mathematics • Third Year of Preparatory School

If we subtract three times a certain number from each of the two terms in the ratio 33 : 19, the ratio becomes 3 : 10. What is the number?

Video Transcript

If we subtract three times a certain number from each of the two terms in the ratio 33 to 19, the ratio becomes three to 10. What is the number?

So we’re told we need to subtract three times a certain number from the terms in the ratio 33 to 19. We’re going to use an algebraic method here. So let’s begin by defining our number to be equal to 𝑥. Then, we know we’re going to be subtracting three times this number, where three times is simply three 𝑥. Subtracting this from each of our two terms and the ratio becomes 33 minus three 𝑥 to 19 minus three 𝑥. But we’re told that when this happens, the ratio becomes three to 10. So 33 minus three 𝑥 to 19 minus three 𝑥 must be equal to three to 10. And this is really useful because whilst we have ratio symbols in here at the moment, we can form and solve an equation in 𝑥 by dividing one side of the ratio by the other. In other words, if we divide 33 minus three 𝑥 by 19 minus three 𝑥, this is equivalent to dividing three by 10. So we form an equation. It’s 33 minus three 𝑥 over 19 minus three 𝑥 equals three over 10. And now we notice that these fractions are making our life a little bit difficult. So we’re going to multiply both sides of the equation by 19 minus three 𝑥 and by 10. When we do, our left-hand side becomes 10 times 33 minus three 𝑥. And our right-hand side is three times 19 minus three 𝑥. We’re now going to distribute our parentheses. Multiplying 10 by each term in the expression 33 minus three 𝑥 gives us 330 minus 30𝑥. Then, multiplying three by each term in the expression 19 minus three 𝑥 and we get 57 minus nine 𝑥. To solve for 𝑥, let’s add 30𝑥 to both sides.
When we do, our equation becomes 330 equals 57 plus 21𝑥. To isolate the term containing the 𝑥-variable, we’ll subtract 57, and that gives us 273 equals 21𝑥. Finally, since 21 is multiplying the 𝑥, we need to divide both sides of our equation by 21. So 𝑥 is 273 divided by 21, and this in fact is equal to 13. So if we subtract three times a certain number from each of the two terms in the ratio 33 to 19 and get a ratio of three to 10, that number must be 13.
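The transcript's algebra can be checked with exact rational arithmetic; a quick Python sketch:

```python
from fractions import Fraction

# Follow the transcript's steps exactly:
# (33 - 3x)/(19 - 3x) = 3/10  ->  10(33 - 3x) = 3(19 - 3x)
# ->  330 - 30x = 57 - 9x  ->  330 - 57 = 30x - 9x  ->  273 = 21x
x = Fraction(330 - 57, 30 - 9)
print(x)  # 13

# Check: both terms go negative (-6 and -20), but the ratio is still 3 : 10.
a, b = 33 - 3 * x, 19 - 3 * x
assert a / b == Fraction(3, 10)
```

Note that with x = 13 both new terms are negative; the ratio of −6 to −20 still simplifies to 3 to 10, which the cross-multiplication check confirms.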
Why is Velocity Squared?
PSI Blog 20120317

From henk: Glenn, thanks for your answers. I am still thinking about kinetic energy. In the 17th century, experiments by Willem Jacob 's Gravesande showed that the striking of a ball in clay was proportional to squared velocity, and later on a French physicist showed it was proportional to 'mass times squared velocity'. What is it about, is my question? How to get the formulae not by math manipulation, but from experimental considerations? By the way, I explained to a friend the idea of matter-motion by using examples, and even though he is a dummy in math and physics, the matter-motion explanation seems to be more appropriate for grasping physics, even at his age of 75.

Thanks again for the comment. Before the Gravesande experiment, one would have thought that doubling the velocity of a microcosm would result in double the impact. Not so. As you mentioned, the doubling of velocity causes four times the impact. In other words, a microcosm impacting a soft, wet clay at velocity 2v will create a crater 4 times as deep as a microcosm impacting the clay at velocity v; an auto crashing into a wall at 40 mph will suffer 4 times the damage as one crashing into a wall at 20 mph. Your question: What gives?

I am afraid that this is one occasion in which I will not be able to avoid math, as much as I would like to. The simple reason that velocity is squared is the fact that there is a macrocosm.

Motion Without a Macrocosm

This was explained by Newton’s First Law of Motion (The velocity of a body remains constant unless the body is acted upon by an external force). This observation, the law of the universe, makes Newton the most brilliant scientist who ever lived. Thus, a microcosm moving through “empty space” at 2 m/s will move two meters in one second. At a velocity of 1 m/s it will move one meter in one second.
In other words, if Gravesande’s wet clay was really “empty space,” doubling the velocity would have produced a “hole” twice as deep. Of course, there would be no “hole” and the microcosm would not stop either, because empty space offers no resistance.

Motion With a Macrocosm

This was explained by Newton’s Second Law of Motion (The acceleration a of a body is parallel and directly proportional to the net force F and inversely proportional to the mass m, i.e., F = ma.). While the First Law is just an astute observation concerning inertia, this Second Law describes causality. A cause produces acceleration, that is, a change in velocity. Here, Newton is describing a cause as a “force.” Of course, “forces” do not exist, only microcosms exist. The force concept is a handy, necessary mathematization. It is especially useful when we really do not know the actual cause. The true cause of the acceleration of any particular microcosm must be at least one other microcosm that collides with it. Incidentally, this is why determinists deny the possibility of ESP. “Extra Sensory Perception” is the indeterministic hypothesis that something or some motion might be perceived without microcosms colliding with a sensory organ.

Back to velocity squared…

As generally explained, the Second Law is all about increasing the velocity of microcosms (acceleration). The collider hits the collidee. The same equations explain the opposite result (deceleration) when the situation is reversed and the collidee becomes the collider and the collider becomes the collidee. In either case, we are recognizing the effect of the macrocosm—the presence of something other than empty space. The explanation below, from http://hyperphysics.phy-astr.gsu.edu/hbase/ke.html#c3, gives the standard mathematics showing how a microcosm gets its motion:

So what does all this mean? First, remember that, in neomechanics, there is no such thing as “energy” or “energy of motion.” Energy is neither matter nor motion.
Instead, we define energy as a calculation: the multiplication of a term for matter times a term for motion. Nevertheless, the calculation or “energy concept” admitted above is a handy way to help us understand matter in motion. The illustration above shows how the microcosm gets its motion even before it collides with the clay. The somewhat mysterious “force” in the illustration is simply the push provided by some other microcosm. Incidentally, indeterminists who believe in finity often speculate about where the “first” push came from. Of course, this question becomes moot for an infinite universe—there is always yet another microcosm to do the pushing.

The gist of the KE explanation is the equation: Work = KE = Fd = mad = ½mv^2. If you have ever pushed a car out of the ditch, you will have some practical feel for this. It takes a lot of work (Fd, force over a distance) to get a vehicle from zero velocity to any velocity at all. The heavier the vehicle (m), the harder it is; the farther you have to push it (d), the harder it is. Once the vehicle is moving, it is just as hard to stop it (as our daughter learned when her friend’s formerly stuck car rolled driverless and brakeless into the neighbor’s garage door).

From the car example, we learn that the velocity of a microcosm cannot be increased or decreased instantaneously. The increase or decrease must occur over some distance and take some time. In the KE = ½mv^2 equation, we get the distance by multiplying the average velocity (½v) times the time it takes to reach the final velocity, v[f]. For instance, to accelerate a car from 0 to 60 mph in 10 seconds would take a distance of 440 feet (30 mph x 10 s, i.e., 44 ft/s x 10 s). That gives us the “d” in the Fd = mad equation. The mass, m, is assumed to be constant. Acceleration is the change in velocity. A change in velocity from 0 to 60 mph over 10 seconds is an acceleration of (60 − 0)/10 = 6 mph/s.
That is, we increased the velocity by 6 mph for each second that we held the pedal to the metal.

Stopping a microcosm in wet clay involves the same process in reverse. Velocity must drop to zero as the microcosm transfers its motion to the wet clay over the period in which it decelerates. As shown by the math manipulation done in the KE illustration above, time cancels out and velocity appears twice. I find the KE = mad equation to be a bit more intuitive. Kinetic energy then becomes what happens to a mass as it accelerates or decelerates over distance.

For more no-nonsense physics and cosmology, see:

4 comments:

I’m really stuck on V squared. Your explanation helps but I think you are missing the forest for the trees though... On an intuitive level one would expect that if one drove a car into a wall at twice the speed it would hit twice as hard, not four times as hard. This counter-intuitive impact force doesn’t happen simply because there are two v’s in the equation when you rearrange it; rather, it indicates something strange and counter-intuitive is going on. Gravity is indistinguishable from acceleration. Motion in constant gravity is constantly accelerating.
— god particle

Comment 20140812

god particle: So glad to hear from you. According to my assumptions, you were not supposed to exist. Oh well, at least you have lower case humility… About that v squared stuff…

Remember that for describing the motion of a microcosm travelling through “empty space” we use only one v in the matter-motion term for momentum: P = mv. Doubling the velocity would double the momentum. Your intuition about hitting a wall “twice as hard” would be correct only if the “wall” was “empty space.” Once the macrocosm contains something to hit, everything changes. The velocity of the microcosm has to decrease from v to 0. During that interval, the average velocity will be: (v + 0)/2 = ½v.
We then must describe the motion of the microcosm and its rapid decrease to zero with a different matter-motion term: kinetic energy: E = mv(½v) = ½mv^2. The microcosm travelling through “empty space” thus has motion we can describe either as momentum or energy. In either case, we must hold fast to the Fifth Assumption of Science, conservation (Matter and the motion of matter can be neither created nor destroyed) and the Fourth Assumption of Science, inseparability (Just as there is no motion without matter, so there is no matter without motion). In the momentum situation, it is clear that both assumptions hold as the microcosm passes from left to right through “empty space.” The momentum and kinetic energy of the microcosm remain unchanged.

When the macrocosm presents an impenetrable barrier, the motion of the microcosm stops. Whether calculated as momentum, force, or kinetic energy, all will be reduced to zero during the collision when the velocity drops to zero. According to conservation, of course, that motion must go somewhere, perhaps as internal submicrocosmic motion of the microcosm and vibratory motion of the macrocosm. According to Newton’s Third Law of Motion, a decrease in velocity of the collider requires an equal and opposite increase in velocity of the collidee. If the microcosm was a vehicle, we would use the brakes to decrease its velocity to zero. The motion of the vehicle is transmitted to the brakes, tires, and pavement, generally appearing as the vibratory motion we call heat. Again, the second v shows up in the equation because we use it to calculate the displacement produced by the collision.

Ah, so why is Kinetic Energy proportional to the square of velocity? Well, I am not sure I can answer that, but I have had an interesting insight that shows if anything measures energy of motion "intuitively" it must be proportional to v^2.
Now by "intuitively" I mean that if I put 1 "oomph" of energy into generating movement in a Newtonian system, the system measures 1, and if I put another N "Oomphs" it measures N+1. All while conserving momentum (which is CRITICAL). Consider the Oomph to be that energy of motion generated by splitting a unit mass into 2 halves, each travelling at 1 unit of speed. Let us make the further intuitive assumption that the amount of kinetic energy generated by an "Oomph" is proportional to the mass of the particle (this can be intuited by considering a big particle to be made of N small ones and assuming KE is additive).

Then at rest, we say (reasonably) that Kinetic Energy is 0. After 1 Oomph (splitting of unit mass into 2 particles of 1/2u mass going at 1u velocity), let us define the Kinetic energy of this system to be 1, and since things are additive and symmetric, each particle has KE of 0.5. Then let the right hand particle undergo a "Half Oomph", splitting it into 2 1/4-mass units with unit velocity (relative to its inertial frame). Now what is the energy of the system? 0.5 (left particle) + 0 (left of right particle) + energy of the right right particle. BUT this must equal 1.5 if it is to conserve "Oomphs". So the energy of the right right particle MUST be 1. Assuming it is proportional to mass, we can say that IF it was mass 1, it would have KE of 4.

And we can carry on subdividing the right hand particle and easily prove by induction that for all N, the KE of a unit mass particle travelling at speed N is N^2. So if there IS an additive measure which measures "Oomphs" faithfully, it must increase on a particle as the square of its velocity, at least for integral values. Once we have it for integers it is easy to extend it to rationals by considering smaller Oomphs than unit Oomphs.
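The commenter's induction argument can be checked numerically. The sketch below (plain Python; the names are illustrative) repeatedly splits the rightmost particle in its own rest frame, exactly as in the argument. With the commenter's normalization (one "Oomph" for the first split of a unit mass), the additive measure that tracks the invested energy turns out to be m·v², i.e., twice the conventional ½mv², which differs only by a constant factor:

```python
# Each particle is (mass, velocity). Start with a unit mass at rest.
particles = [(1.0, 0.0)]
oomphs_invested = 0.0

def measured_energy(ps):
    # The candidate additive measure: proportional to m * v^2
    return sum(m * v * v for m, v in ps)

for _ in range(6):
    m, v = particles.pop()              # take the rightmost (fastest) particle
    # Split it in its own rest frame into two halves at unit relative speed.
    particles.append((m / 2, v - 1.0))  # lab-frame velocities v -/+ 1
    particles.append((m / 2, v + 1.0))
    oomphs_invested += m                # energy delivered scales with mass

    # Momentum is conserved by every split (the halves recoil symmetrically).
    momentum = sum(mi * vi for mi, vi in particles)
    assert abs(momentum) < 1e-12

    # The m*v^2 measure exactly tracks the invested "Oomphs".
    assert abs(measured_energy(particles) - oomphs_invested) < 1e-12

# After 6 splits the rightmost particle has mass 2**-6 and velocity 6;
# scaled to unit mass, its m*v^2 energy is 6**2 = 36.
m_last, v_last = particles[-1]
print(v_last, m_last * v_last**2 / m_last)  # 6.0 36.0
```

Each split of a particle of mass m adds exactly m units to both the invested "Oomphs" and the m·v² total, which is the induction step of the argument.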
On the Virtualization of Audio Transducers

Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, Piazza L. Da Vinci 32, 20133 Milano, Italy
Author to whom correspondence should be addressed.
Submission received: 22 March 2023 / Revised: 21 May 2023 / Accepted: 24 May 2023 / Published: 1 June 2023

In audio transduction applications, virtualization can be defined as the task of digitally altering the acoustic behavior of an audio sensor or actuator with the aim of mimicking that of a target transducer. Recently, a digital signal preprocessing method for the virtualization of loudspeakers based on inverse equivalent circuit modeling has been proposed. The method applies Leuciuc’s inversion theorem to obtain the inverse circuital model of the physical actuator, which is then exploited to impose a target behavior through the so-called Direct–Inverse–Direct Chain. The inverse model is designed by properly augmenting the direct model with a theoretical two-port circuit element called the nullor. Drawing on these promising results, in this manuscript, we aim at describing the virtualization task in a broader sense, including both actuator and sensor virtualization. We provide ready-to-use schemes and block diagrams which apply to all the possible combinations of input and output variables. We then analyze and formalize different versions of the Direct–Inverse–Direct Chain, describing how the method changes when applied to sensors and actuators. Finally, we provide examples of applications considering the virtualization of a capacitive microphone and a nonlinear compression driver.

1. Introduction

Audio transducers are devices that convert electrical signals into acoustic waves or vice versa [ ]. In the first case, they are called audio actuators (e.g., loudspeakers), while in the second case audio sensors (e.g., microphones).
The transduction process that characterizes such devices involves different physical domains (such as mechanical, acoustic, electrical, magnetic, etc.), which are not only affected by different nonlinear behaviors but also interact in a nonlinear fashion. For instance, piezoelectric loudspeakers are impaired by hysteretic phenomena which increase the Total Harmonic Distortion (THD) [ ], electrodynamic loudspeakers are characterized by a nonlinear force factor and compliance [ ], while clipping is the major source of distortion in microphones [ ]. Audio transducers are pervasive devices that have become, over the years, of fundamental importance across many markets. It is thus desirable to come up with solutions that enable control of the nonlinear behavior of acoustic transducers, and thus of the amount of distortion that they introduce, such that better acoustic performance can be obtained.

Since the rise of the audio industry, different techniques have been proposed for improving the sonic response of audio transducers. Apart from solutions based on a more refined analog design, many techniques exploit digital audio signal processing for accomplishing such a task. For the case of audio actuators, the simplest solutions make use of filters to equalize the acoustic response over the frequency spectrum [ ]. Other solutions, instead, pre-distort the electrical signal with the aim of reducing the impact of nonlinearities [ ]; others involve feedback loops for accomplishing linearization and compensation [ ], while more recent virtual bass enhancement techniques exploit psychoacoustic effects for deceiving the human perception of sound [ ]. On the other side, similar approaches have been proposed for digitally enhancing the performance of audio sensors [ ].

In this work, we introduce and analyze a novel class of digital signal processing algorithms which we refer to as virtualization algorithms.
We define virtualization as the task of digitally altering and conditioning the acoustic behavior of an audio transducer with the aim of mimicking the sound of a (virtual) target transducer. Such algorithms are based on general signal processing chains which can be exploited to perform all the traditional tasks envisaged by the algorithms mentioned in the previous paragraph, e.g., linearization and equalization. Recently, loudspeaker virtualization has been tackled by using a digital signal processing approach based on physical modeling [ ], which exploits the inverse model of the loudspeaker equivalent circuit. The design of the inverse system relies on Leuciuc’s theorem [ ], reworded in [ ], and it is derived by duly adding to the direct circuital system a theoretical two-port element, known in circuit theory as a nullor. The digital inverse system can then be used to compensate for the behavior of the physical loudspeaker and, hence, impose the behavior of a digital target system. This is achieved by implementing the so-called Direct–Inverse–Direct Chain [ ], composed of a target direct system, which is a digital filter characterized by the desired transduction behavior to be imposed; the inverse loudspeaker system, which is a digital filter whose response is the inverse of that of the physical transducer; and the (direct) physical loudspeaker. While in the approach of [ ] inversion is digitally attained, other methods to design inverse circuital systems, which rely on analog filters or integrated circuits, such as operational transconductance amplifiers, current conveyors, current differencing buffered amplifiers, etc. [ ], have been proposed. However, for the sake of simplicity, in this manuscript, we will only consider inverse design approaches based on digital filters. In this regard, nullors can be efficiently implemented in the discrete-time domain making use of the Wave Digital Filter (WDF) paradigm. WDF theory was originally introduced by A.
Fettweis in the late 1970s for designing stable digital filters through the discretization of linear passive analog filters [ ], and was later extended to also efficiently implement active and nonlinear circuits in the discrete-time domain [ ]. In the WDF framework, port voltages and port currents are substituted with linear combinations of them, known as incident and reflected waves, introducing a free parameter per port called the port resistance. In the Wave Digital (WD) domain, circuit elements are modeled in a modular fashion as input–output blocks characterized by scattering relations, while topological interconnections or, more generally, connection networks are described by multi-port junctions characterized by scattering matrices [ ]. The introduced free parameters can be properly set to eliminate some delay-free loops (i.e., implicit relations between circuit variables) appearing in the digital structure composed of input–output elements and junctions. Circuits with up to one nonlinear element (described by an explicit mapping) can be digitally implemented in the WD domain in a fully explicit fashion, i.e., without making use of any iterative solver [ ], while using stable discretization methods (e.g., Backward Euler, trapezoidal rule, etc.) to approximate time derivatives.

As far as the implementation of nullors is concerned, different techniques have been proposed in the literature of WDFs. In [ ], stamps are provided for encompassing nullors into scattering junctions by means of the Modified Nodal Analysis (MNA) formalism. The same result is reached in a more efficient fashion considering a double digraph decomposition of the connection network, as pointed out in [ ]. In [ ], vector waves are used to derive a vectorial scattering relation that allows one to implement a nullor as a two-port input–output element in the WD domain.
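To make the WD workflow above concrete, the following sketch simulates a simple RC lowpass (a resistive voltage source feeding a capacitor) with voltage waves a = v + Ri, b = v − Ri. The capacitor is discretized with the trapezoidal rule, which gives it port resistance T/(2C) and turns its scattering relation into a unit delay; the two-port junction scattering is derived from v1 = v2, i1 = −i2. This is a minimal textbook-style illustration, not one of the transducer models of the cited works:

```python
# WD simulation of an RC lowpass: voltage source e with series resistance Rs
# driving a capacitor C, connected through a 2-port junction.
Fs = 48000.0
T = 1.0 / Fs
Rs = 1e3            # source resistance (ohm)
C = 100e-9          # capacitance (farad)

R1 = Rs             # port resistance at the source side (adapted source)
R2 = T / (2 * C)    # port resistance of the trapezoidal-rule WD capacitor

b2_prev = 0.0       # capacitor state (wave previously sent to the capacitor)
v_C = 0.0
for n in range(2000):
    e = 1.0                     # unit-step input
    a1 = e                      # adapted resistive source reflects a1 = e
    a2 = b2_prev                # WD capacitor: reflected wave = delayed incident
    # 2-port junction scattering (from v1 = v2, i1 = -i2):
    b2 = (2 * R2 * a1 + (R1 - R2) * a2) / (R1 + R2)
    b2_prev = b2
    v_C = (a2 + b2) / 2         # port voltage across the capacitor

print(v_C)  # ≈ 1.0: the capacitor charges to the source voltage
```

Note there is no delay-free loop here: the source reflects a constant wave and the capacitor's reflected wave depends only on past samples, so each time step is fully explicit, which is the property the text describes.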
Moreover, in [ ], it has already been shown that WDFs are suitable to efficiently emulate direct and inverse models of nonlinear loudspeakers in the discrete-time domain with no need for iterative solvers.

In this paper, we discuss the task of audio transducer virtualization from a general theoretical perspective, by analyzing different scenarios and combinations of input/output signals. Our aim is to provide a general framework for the design of inverse circuital models of audio transducers in different virtualization scenarios. In fact, we describe both the case of actuator virtualization and of sensor virtualization, making appropriate adjustments to the employed Direct–Inverse–Direct Chain [ ]. In doing so, we will consider electrical equivalent models of audio transducers which are derived by exploiting the electro-mechano-acoustical analogy [ ]. Finally, we present two case studies showing how the proposed methodology can be exploited to alter the acoustic response of different audio devices.

The manuscript is organized as follows. Section 2 provides, first, background knowledge on nullor-based inversion of circuital systems, and, then, introduces ready-to-use schemes and block diagrams for the inversion of electrical equivalent models of audio systems taking into account all possible combinations of input and output variables. Virtualization algorithms that exploit nullor-based inversion of circuits are, instead, presented in Section 3. Examples of application of such algorithms are provided in Section 4 and Section 5, where the virtualization of a capacitive microphone and a nonlinear compression driver are presented. Conclusions are drawn in Section 6.

2. Nullor-Based Inversion of Circuital Systems

In this section, we first provide background knowledge on nullors, and, then, we present the four major classes of inversion scenarios, supplementing the overview that is available in the literature, which only comprises two cases out of four [ ].

2.1.
Nullors

Nullors are theoretical two-port elements composed of two other theoretical one-ports: a nullator, which has both port voltage and port current equal to zero, and a norator, which is characterized by unconstrained port variables [ ]. Figure 1 shows the circuital symbol of a nullor, where the nullator (on the left) is represented by means of an ellipse, and the norator (on the right) by means of two circles. The constitutive equation of such a two-port can thus be written as

$\begin{bmatrix} v_1 \\ i_1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} v_2 \\ i_2 \end{bmatrix},$

where $v_1$ and $i_1$ are the voltage across and the current through the nullator, whereas $v_2$ and $i_2$ are the voltage across and the current through the norator. It is worth mentioning that, while nullator and norator do not correspond to physical elements, if properly used, nullors do describe physical devices. In fact, nullors are typically employed in circuit theory to model the ideal behavior of some multi-ports (active and passive), such as Operational Amplifiers (opamps), transistors operating in linear regime, gyrators, transconductance amplifiers, etc. [ ].

The nullor is the fundamental element for carrying out the inversion of circuits according to Leuciuc’s theorem [ ]. Although the main application of the aforementioned circuit inversion method originally was the synchronization of non-autonomous chaotic circuits (i.e., chaotic circuits with exogenous input) in analog secure communication systems [ ], the method can be generally applied to design the inverse, if this exists, of whatever linear or nonlinear non-autonomous circuit. In the following subsections, we will reword the original theorem considering the four possible combinations of input and output signals of the system to be inverted, providing proofs and a complete overview of the method.

2.2. Inversion Theorem

Before presenting the nullor-based inversion theorem, let us consider the two linear time-invariant (LTI) non-autonomous circuits shown in Figure 2.
The input and output of the circuit shown in Figure 2a, which we call Direct System, are $i_1$ and $v_3$, whereas the input and output of the circuit shown in Figure 2b, which we call Inverse System, are $\hat{v}_3$ and $\hat{i}_1$, respectively. The only difference between the two networks is the position of the three depicted one-port elements, meaning that the remaining part of the circuit is the same for both systems. Naming $Z$ the impedance matrix of the Direct System, we can write down the following system of equations

$\begin{cases} v_1 = z_{11} i_1 + z_{12} i_2 + z_{13} i_3 \\ v_2 = z_{21} i_1 + z_{22} i_2 + z_{23} i_3 \\ v_3 = z_{31} i_1 + z_{32} i_2 + z_{33} i_3 , \end{cases}$

where $z_{11}, \ldots, z_{33}$ are entries of matrix $Z$, and the time dependence is removed for the sake of clarity. Then, the nullator sets $v_2 = 0$ and $i_2 = 0$, which allow us to derive the transfer function $F(s)$ and the transfer impedance $H(s)$ in the Laplace domain as follows

$F(s) = \frac{I_3(s)}{I_1(s)} = -\frac{z_{21}}{z_{23}},$

$H(s) = \frac{V_3(s)}{I_1(s)} = \frac{z_{23} z_{31} - z_{21} z_{33}}{z_{23}}.$

In Equation ( ), $v_x$ and $i_x$ are the voltage and current at port $x$ of the network shown in Figure 2a. We can repeat the same procedure for the circuit shown in Figure 2b, yielding

$\begin{cases} \hat{v}_1 = z_{11} \hat{i}_1 + z_{12} \hat{i}_2 + z_{13} \hat{i}_3 \\ \hat{v}_2 = z_{21} \hat{i}_1 + z_{22} \hat{i}_2 + z_{23} \hat{i}_3 \\ \hat{v}_3 = z_{31} \hat{i}_1 + z_{32} \hat{i}_2 + z_{33} \hat{i}_3 , \end{cases}$

and then, by recalling that $\hat{v}_2 = 0$ and $\hat{i}_2 = 0$ hold true, we can obtain the transfer function $\hat{F}(s)$ and transfer impedance $\hat{H}(s)$ in the Laplace domain as follows

$\hat{F}(s) = \frac{\hat{I}_1(s)}{\hat{I}_3(s)} = -\frac{z_{23}}{z_{21}} = F^{-1}(s),$

$\hat{H}(s) = \frac{\hat{I}_1(s)}{\hat{V}_3(s)} = \frac{z_{23}}{z_{23} z_{31} - z_{21} z_{33}} = H^{-1}(s).$

In Equation ( ), $\hat{v}_x$ and $\hat{i}_x$ are the voltage and current at port $x$ of the network shown in Figure 2b. Equation ( ) proves the circuit in Figure 2b to have a transfer impedance equal to $H^{-1}(s)$. We thus conclude that the circuit shown in Figure 2b is the inverse of the circuit shown in Figure 2a.
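The reciprocity between the direct and inverse responses can be sanity-checked numerically; the snippet below evaluates the four expressions above at arbitrary complex values standing in for the $z_{ij}$ entries evaluated at some fixed $s$:

```python
import random

random.seed(1)

def rand_c():
    # Arbitrary complex value standing in for a z-parameter at some s
    return complex(random.uniform(1, 10), random.uniform(-5, 5))

z21, z23, z31, z33 = (rand_c() for _ in range(4))

# Direct system (nullator forces v2 = i2 = 0):
F = -z21 / z23                          # F(s)    = I3/I1
H = (z23 * z31 - z21 * z33) / z23       # H(s)    = V3/I1

# Inverse system obtained by the nullor-based construction:
F_hat = -z23 / z21                      # Fhat(s) = Ihat1/Ihat3
H_hat = z23 / (z23 * z31 - z21 * z33)   # Hhat(s) = Ihat1/Vhat3

assert abs(F * F_hat - 1) < 1e-12
assert abs(H * H_hat - 1) < 1e-12
print("F_hat = 1/F and H_hat = 1/H")
```

The check is purely algebraic; it does not replace the minimum-phase condition on the direct transfer function, which governs the stability of the inverse.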
Finally, it is worth pointing out that, in order for this to hold true, the transfer function of the direct system must be minimum phase [ ].

Hereafter, we first introduce a reworded version of Leuciuc’s theorem which also generalizes the above approach for the inversion of linear circuits to the case of nonlinear circuits, and then we go through possible inversion scenarios characterized by pairs of input/output variables of different kinds.

Theorem 1. Let us consider a nonlinear non-autonomous circuit containing at least one nullor, as the one shown in Figure 3a. Let us also consider the circuit shown in Figure 3b, where the input generator is replaced by a norator, and the norator by a proper controlled source. If for any input signal $u(t)$ and output signal $y(t)$, where $t$ is the continuous-time variable in seconds, such systems have unique bounded solutions, and if, defined the state vectors of the two systems as $x(t)$ and $\hat{x}(t)$, the equation $x(0) = \hat{x}(0)$ holds true, then the circuit in Figure 3b is the inverse of the circuit in Figure 3a.

Proof. Let us remove, for the moment, the nullator from the circuit shown in Figure 3a, and let us replace the norator with a voltage source $u_1$. The circuit that we obtain is shown in Figure 3c. By considering voltage $v$ as the output, we can describe such a system according to the state-space formalism as follows

$\dot{x} = f(x, u, u_1), \quad f : \mathbb{R}^n \times U \times U_1 \to \mathbb{R}^n,$
$v = g(x, u, u_1), \quad g : \mathbb{R}^n \times U \times U_1 \to \mathbb{R},$

where $U$ and $U_1$ are the sets of input signals $u$ and $u_1$ that are admissible for the considered application. Note that, for the sake of clarity, here and in the following, we omit the dependence on the continuous-time variable $t$. If we re-introduce the nullator, we constrain voltage $v$ to be zero, leading the system into an unphysical condition. In order to avoid such an unphysical state, one of the two sources, $u$ or $u_1$, must be substituted with a norator.
This will in turn assume voltage and current values such that the system of Equation ( ) presents real solutions. It follows that two possible circuits can be derived, namely, the circuits in Figure 3a,b. Given that these two systems are characterized by the same topology, we can write for the circuit shown in Figure 3a

$\dot{x} = f(x, u, y), \qquad 0 = g(x, u, y),$

while for the circuit shown in Figure 3b

$\dot{\hat{x}} = f(\hat{x}, \hat{u}, y), \qquad 0 = g(\hat{x}, \hat{u}, y).$

If the two circuits have a well-defined behavior, $y = h(x, u)$ is the only possible solution of equation $g(x, u, y) = 0$ for each $x \in \mathbb{R}^n$. Moreover, assuming $h(x, u)$ invertible with respect to $u$ on the entire state space, we can write $u = h^{-1}(x, y)$; it follows that $\hat{u} = h^{-1}(\hat{x}, y)$ is the only possible solution of equation $g(\hat{x}, \hat{u}, y) = 0$ for each $\hat{x} \in \mathbb{R}^n$. In the light of these considerations, Equation ( ) can be rewritten as

$\dot{x} = f(x, u, h(x, u)), \qquad y = h(x, u),$

and Equation ( ) as

$\dot{\hat{x}} = f(\hat{x}, h^{-1}(\hat{x}, y), y), \qquad \hat{u} = h^{-1}(\hat{x}, y).$

Therefore, if $y$ is a bounded solution for Equation ( ), and Equation ( ) has a bounded solution too, it follows that, for $x(0) = \hat{x}(0)$, $\hat{u} = u$ is a solution of Equation ( ). □

It is worth pointing out that, although the theorem and proof refer to the circuits shown in Figure 3, they can be applied to all four possible inversion scenarios characterized by pairs of input/output variables of different kinds. Table 1 provides an overview of the four cases, associating each type of Direct System to its corresponding Inverse System, according to input and output variables. In the next subsections, we will address each of the cases one at a time.

2.2.1. Voltage Input Voltage Output (VIVO)

Let us consider Network A of Table 1, where the Direct System features both as input and output a voltage signal, namely, $V_{in}$ and $V_{out}$.
This is one of the original examples taken into account by Leuciuc for deriving the inversion theorem [ ], and later employed in [ ] for deriving the loudspeaker virtualization algorithm. Then, let us consider the case in which a nullor is already present in the Direct System. For this particular case, the Inverse System is obtained by replacing the input voltage source of the Direct System with the norator, and the norator with a Voltage-Controlled Voltage Source (VCVS) driven by the output voltage of the Direct System. The inverse of Network A is Network B of Table 1.

In the case in which no nullors are present in the Direct System (Figure 4a), this can be augmented with the series connection of a nullator and a norator, as shown in Figure 4b. In fact, this adjunct does not modify the behavior of the circuit since the series of a nullator and a norator is equivalent to an open circuit [ ]. Such a series connection must be inserted between the very same nodes where the output voltage is taken [ ]. Then, the Inverse System is obtained following the same procedure described for the case of circuits already containing nullors. Finally, Figure 4c shows the inverse of the circuit in Figure 4b.

2.2.2. Voltage Input Current Output (VICO)

Let us consider Network C of Table 1, where the Direct System features as input a voltage signal and as output a current signal, namely, $V_{in}$ and $I_{out}$. Even this scenario is part of the original examples taken into account by Leuciuc in [ ]. Let us first consider the case in which a nullor is already present in the Direct System. For this particular case, the Inverse System is obtained by replacing the input voltage source of the Direct System with the norator, and the norator with a Current-Controlled Current Source (CCCS) driven by the output current of the Direct System.
The inverse of Network C is Network D of Table 1.

In the case in which no nullors are present in the Direct System, this can be augmented with the parallel connection of a nullator and a norator. In fact, this adjunct does not modify the behavior of the circuit given that a nullator and a norator in parallel are equivalent to a short circuit [ ]. Such a parallel connection must be inserted in series with the very same branch through which the output current flows [ ]. The result will be a circuit similar to the one shown in Figure 5b but with a voltage input, while the Inverse System will be similar to the one shown in Figure 5c but considering the voltage across the norator instead of the current through it.

2.2.3. Current Input Voltage Output (CIVO)

Let us consider Network E of Table 1, where the Direct System features as input a current signal and as output a voltage signal, namely, $I_{in}$ and $V_{out}$. Let us first consider the case in which a nullor is already present in the Direct System. For this particular case, the Inverse System is obtained by replacing the input current source of the Direct System with the norator, and the norator with a VCVS driven by the output voltage of the Direct System. The inverse of Network E is Network F of Table 1.

In the case in which no nullors are present in the Direct System, this can be augmented with the series connection of a nullator and a norator. As for the VIVO case, such a series connection must be inserted between the very same nodes where the output voltage is taken. The result will be a circuit similar to the one shown in Figure 4b but with a current input, while the inverse system will be similar to the one shown in Figure 4c, but considering the current flowing through the norator instead of the voltage across it.

2.2.4. Current Input Current Output (CICO)

Let us consider Network G of Table 1, where the Direct System features both as input and output a current signal, namely, $I_{in}$ and $I_{out}$.
Let us first consider the case in which a nullor is already present in the Direct System. For this particular case, the Inverse System is obtained by replacing the input current source of the Direct System with the norator, and the norator with a CCCS driven by the output current of the Direct System. The inverse of Network G is Network H of Table 1.

In the case in which no nullors are present in the Direct System (Figure 5a), this can be augmented with the parallel connection of a nullator and a norator. As for the VICO case, such a parallel connection must be inserted in series with the very same branch through which the output current flows [ ], as shown in Figure 5b. Then, the Inverse System is obtained following the same procedure described for the case of circuits already containing nullors. Finally, Figure 5c shows the inverse of the circuit in Figure 5b.

2.3. Adjoint Networks

In this subsection we make some considerations on homogeneous inversion scenarios, where the input and the output variables do have the same units of measurement, i.e., the VIVO and CICO scenarios. In these cases, adjoint networks can be considered for transforming a voltage-voltage transfer function into a current-current transfer function and vice versa. In fact, the nature of the transfer function will have implications as far as implementation is concerned. For instance, voltages and currents might be characterized by different orders of magnitude and, thus, working with the former may be more convenient than working with the latter or vice versa, especially when the Inverse System is implemented in the digital domain.

Entering more in detail, two $N$-port networks are called adjoint if the following equation holds true [ ]:

$\sum_{n=1}^{N} \left( v_{\alpha,n} i_{\beta,n} - i_{\alpha,n} v_{\beta,n} \right) = 0,$

where $v_{\alpha,1}, \ldots, v_{\alpha,N}$ and $i_{\alpha,1}, \ldots, i_{\alpha,N}$ are the port voltages and port currents of network $\alpha$, whereas $v_{\beta,1}, \ldots, v_{\beta,N}$ and $i_{\beta,1}, \ldots, i_{\beta,N}$ are the port voltages and port currents of network $\beta$, respectively.
According to Equation ( ), for example, we can expect the adjoint of an ideal voltage amplifier to maintain the same topology but act as an ideal current amplifier. The procedure for deriving the adjoint of a given circuit, whose input is a voltage signal, can be summarized as follows:
• Passive elements are kept without any changes.
• Nullators are replaced with norators, and norators with nullators.
• The input voltage source is replaced with a short circuit (i.e., a current sink). The output of the adjoint circuit will then be the current flowing through such a short, where the positive direction follows the element convention, i.e., from the positive to the negative terminal.
• A current source is connected to the output port. This will be the input of the adjoint circuit. In this case, the direction of the current follows the source convention, i.e., from the negative to the positive terminal.
• Controlled sources are replaced with their duals (e.g., VCVSs are replaced with CCCSs).

Similar considerations can be drawn for current inputs. The interested reader is referred to [ ] for a more in-depth analysis of adjoint equivalent networks. The adjoint transformation can be carried out either on the Direct System or directly on the Inverse System. For example, Figure 6 shows the adjoint network of the Inverse System of Figure 4c, where the nullator is replaced with a norator and vice versa, the input VCVS is substituted with a short circuit, while a CCCS is connected to the output port. The current flowing through the short circuit will be equal to $\hat{V}_{in}$, whereas the input current will be equal to $V_{out}$, i.e., the output voltage of the Direct System shown in Figure 4b.
An equivalent inverse network can be obtained starting from the adjoint of the Direct System and then applying the rules presented in the previous subsections for deriving the Inverse System.

In the next section, we will present the inversion-based virtualization algorithm, providing details on how to apply it to both the cases of actuator and sensor virtualization.

3. Direct–Inverse–Direct Chains

We now present a general block chain to perform virtualization of transducers. Such a chain, shown in Figure 7, is called Direct–Inverse–Direct Chain (DIDC) and was proposed in [ ] for addressing the virtualization of loudspeakers. It is composed of three main blocks: two Direct Systems and one Inverse System. The Inverse System is always implemented in the digital domain, whereas, according to the considered actuator or sensor application, only the first or the last Direct System is implemented in the digital domain, since the other is the actual physical transducer. Moreover, in real scenarios, amplifiers could be present in-between blocks. Hence, gains should be considered at different stages of the processing chain for the algorithm to properly work.

The DIDC working principle is based on the assumption that the cascade of the Inverse System and the Physical Direct System is equivalent to the identity. This means that the digital processing chain allows us to cancel out the behavior of the transducer such that the target behavior, i.e., the behavior of the digital Direct System, can be imposed. Hence, the proposed processing chain can be employed to accomplish the task of transducer virtualization (i.e., digitally altering the acoustic behavior of an audio transducer with the aim of mimicking the sound of a target transducer). In the following two subsections, we will present application-specific DIDCs targeting both the cases of actuator and sensor virtualization.
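The DIDC principle is easy to demonstrate in discrete time with toy filters. In the sketch below, hypothetical first-order IIR filters stand in for the physical and target transducers (the helper `filt1` is purely illustrative); since the toy "physical" filter is minimum phase, its exact inverse is obtained by swapping numerator and denominator, and the cascade then imposes the target response:

```python
import math

def filt1(b0, b1, a1, x):
    # First-order IIR: y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1]
    y, x_prev, y_prev = [], 0.0, 0.0
    for xn in x:
        yn = b0 * xn + b1 * x_prev - a1 * y_prev
        y.append(yn)
        x_prev, y_prev = xn, yn
    return y

x = [math.sin(0.1 * n) + 0.3 * math.sin(0.37 * n) for n in range(1024)]

# Toy "physical" transducer: H_p(z) = (1 + 0.5 z^-1) / (1 - 0.3 z^-1)
physical = lambda s: filt1(1.0, 0.5, -0.3, s)
# Its exact inverse (valid since H_p is minimum phase): 1 / H_p(z)
inverse  = lambda s: filt1(1.0, -0.3, 0.5, s)
# Toy "target" transducer: H_t(z) = (0.7 + 0.2 z^-1) / (1 - 0.5 z^-1)
target   = lambda s: filt1(0.7, 0.2, -0.5, s)

# Chain ordering for actuation: Target -> Inverse -> Physical
y_chain  = physical(inverse(target(x)))
y_wanted = target(x)

err = max(abs(a - b) for a, b in zip(y_chain, y_wanted))
print(err)  # round-off only: the chain behaves as the target system
```

The inverse–physical pair cancels sample by sample, so the overall chain reduces to the target filter alone, which is exactly the identity assumption stated above.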
3.1. Target-Inverse-Physical Chain (TIPC)

Let us consider the particular DIDC shown in Figure 8. Such a DIDC is specifically tailored to the task of actuator virtualization, and was first proposed in [ ] for deriving the loudspeaker virtualization algorithm. The green blocks are to be implemented in the digital domain, while the red block represents the actual physical transducer. In particular, the Target Direct System is the digital implementation of the actuator circuital model which we would like to obtain, whereas the Inverse System is the inverse circuital model of the Physical Direct System, which is the transducer itself. Hence, given that we are considering actuation, we may call this chain the Target-Inverse-Physical Chain (TIPC), since the target behavior must be imposed in a pre-processing phase, and thus before driving the actuator, i.e., the Physical Direct System.

For the case of loudspeaker virtualization, the input $u_{in}$ is the electrical signal driving the loudspeaker, while the output signal $\tilde{y}_{out}$ may be the output pressure or the velocity of the speaker diaphragm (usually represented in electrical equivalent models as a voltage signal and a current signal, respectively [ ]). Then, in principle, the TIPC allows us to make a speaker A sound like a speaker B, where such a speaker B (i.e., the Target Direct System) can be a linearized or equalized version of A, or a different speaker. It follows that the more accurate the considered electrical models, the higher the performance of the algorithm. Nonetheless, in [ ], it is shown that the algorithm is robust to parameter uncertainty, increasing the number of real scenarios in which it can be applied.
In Section 5 , we will show how to apply the TIPC for the virtualization of a nonlinear compression driver, and how this can be accomplished in a simple and efficient fashion making use of Wave Digital Filters. It is worth adding that variables other than currents and voltages could be taken into account to obtain the inverse of a given circuit. For example, for the case of loudspeakers, it might be convenient to consider the displacement of the diaphragm (i.e., the integral of the velocity of the diaphragm) as the output variable. In this case, another stage should be inserted into the processing chain for performing the integral of the velocity (which is a current variable in the electrical equivalent circuit) in the Direct System, and the derivative of the displacement in the Inverse System. Finally, note that, according to the type of virtualization, the blocks could be either linear or nonlinear. For example, all three blocks could be nonlinear if the chain is exploited for imposing the nonlinear sonic behavior of a target transducer. Instead, if linearization is envisaged, one out of the three blocks will be linear (i.e., the Target Direct System). In this case, the purpose of virtualization is to improve the performance of the transducer by reducing the Total Harmonic Distortion (THD), imposing the acoustic behavior of an ideal version of the transducer under consideration. Such a discussion is also valid for the specific DIDC that we introduce in the next subsection. 3.2. Physical-Inverse-Target Chain (PITC) Let us now consider the DIDC shown in Figure 9 , which is specifically designed to address the task of sensor virtualization. Such a DIDC can be considered as a flipped version of the TIPC, since the target behavior is imposed in a post-processing phase instead of a pre-processing phase. Once again, the green blocks are implemented in the digital domain, whereas the red block represents the actual physical transducer.
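The integrator/derivative stages mentioned above, needed when displacement rather than velocity is the chosen output variable, can be sketched as follows. This is a minimal illustration (not from the paper), assuming trapezoidal (bilinear) discretization so that the derivative stage exactly undoes the integration; the differentiator additionally needs the initial velocity as a state.

```python
import numpy as np

fs = 96000.0
Ts = 1.0 / fs

def integrate_trapz(v, Ts):
    # Trapezoidal (bilinear) integrator: x[n] = x[n-1] + (Ts/2)(v[n] + v[n-1])
    x = np.zeros_like(v)
    for n in range(1, len(v)):
        x[n] = x[n - 1] + 0.5 * Ts * (v[n] + v[n - 1])
    return x

def differentiate_bilinear(x, Ts, v0=0.0):
    # Exact inverse of the integrator above:
    # v[n] = (2/Ts)(x[n] - x[n-1]) - v[n-1]
    v = np.zeros_like(x)
    v[0] = v0
    for n in range(1, len(x)):
        v[n] = (2.0 / Ts) * (x[n] - x[n - 1]) - v[n - 1]
    return v

t = np.arange(2048) * Ts
vel = np.sin(2 * np.pi * 500 * t)              # toy diaphragm velocity
disp = integrate_trapz(vel, Ts)                # integrator stage (Direct System side)
vel_rec = differentiate_bilinear(disp, Ts, v0=vel[0])  # derivative stage (Inverse System side)
assert np.allclose(vel, vel_rec)
```

Using matched discretizations for the two stages is what keeps the overall chain an identity; mixing, e.g., a trapezoidal integrator with a simple first difference would not cancel exactly.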
The Target Direct System is the digital implementation of the sensor circuital model whose behavior we would like to impose, while the Inverse System is the inverse circuital model of the Physical Direct System , i.e., the sensor itself. Given that we are considering sensing, we call this version of the chain Physical-Inverse-Target Chain (PITC), since first the audio signal is acquired by means of the sensor, and then, after compensating for the physical behavior by means of the Inverse System , the signal is processed to impose the target acoustic response. For the case of microphone virtualization, the input signal $u_{in}$ might be the acoustic pressure (i.e., a voltage) acquired by the sensor, while the output signal is an electrical signal (e.g., a voltage) that is usually fed to an audio interface. In this scenario, therefore, the aim of the PITC is to modify the electrical signal as if it were acquired by another sensor, which can be a linearized or equalized version of the sensor under consideration or a different sensor. In Section 4 , we will apply such a virtualization algorithm for altering the acoustic response of a condenser microphone. Once again, input and output variables different from voltages and currents (e.g., displacement signals) can be considered by introducing integrators and derivators into the green blocks of the processing chain. 4. Sensor Virtualization: Application to Capacitive Microphones In this section, we provide an example of sensor virtualization by taking into account a capacitive microphone as a case study. For this application, we employ a linear model, while an example of transducer virtualization based on nonlinear models will be provided in the next section. The microphone is described by means of the circuit shown in Figure 10 a. The circuit is similar to the one presented in [ ] and is composed of three subcircuits which represent, from left to right, the acoustic, mechanical, and electrical domains.
In particular, $C_{ag}$ is the acoustic compliance of the air gap, $R_{as}$ and $M_{as}$ model the acoustic resistance and mass of the back plate slots, $R_{ah}$ and $M_{ah}$ model the acoustic resistance and mass of the back plate holes, while the acoustic compliance of the back chamber is represented by $C_{ab}$ . Moreover, $R_{a1}$ , $R_{a2}$ , $M_{a1}$ , and $C_{a1}$ model the free-field acoustic impedance. As far as the mechanical domain is concerned, $M_{md}$ represents the mass of the diaphragm, whereas $C_{md}$ is the mechanical compliance. Regarding the electrical domain, $C_{e0}$ is the electrical capacitance of the microphone, and $R_L$ models the input resistance of the Junction gate Field-Effect Transistor (JFET) to which the microphone capsule is usually connected. The two transformers model the transduction between the physical domains, where $S_d$ is the diaphragm area and $\alpha$ is the electromechanical transduction factor. Finally, the input signal $P_{in}$ is the acoustic pressure acquired by the microphone, while the output signal is the electrical voltage across resistor $R_L$ . Table 2 reports the values of all the parameters for modeling both the Brüel & Kjær 4134 (hereafter referred to as BK4134) and the Brüel & Kjær 4146 (hereafter referred to as BK4146) electrostatic microphones [ ]. The implementation of the microphone circuital model in the digital domain can be carried out by employing different techniques [ ]. In this work, we use Wave Digital Filters (WDFs) [ ] since they proved to be suitable for an efficient implementation of both direct and inverse models of loudspeakers [ ]. In particular, elements and topological interconnections can be realized as explained in [ ], whereas nullors are encompassed into the scattering junction exploiting the double digraph decomposition of connection networks presented in [ ]. Then, the circuit being linear, the Wave Digital (WD) structure can be solved with no iterative solvers by means of traditional techniques [ ].
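To give a flavor of the WD approach, the sketch below discretizes a one-capacitor toy circuit (a series RC driven by a resistive voltage source), not the paper's multi-port microphone model. Under the trapezoidal rule, the capacitor becomes a one-sample delay on its port waves with port resistance $T_s/(2C)$, and the incident wave follows in closed form from Kirchhoff's voltage law, so no iterative solver is needed; component values here are arbitrary.

```python
import numpy as np

# Hypothetical component values (not from the paper's models).
fs = 96000.0
Ts = 1.0 / fs
R, C = 1e3, 100e-9            # 1 kOhm, 100 nF -> tau = RC = 0.1 ms
Rc = Ts / (2.0 * C)           # WD port resistance of the capacitor (trapezoidal rule)

N = 200                        # about 20 time constants
E = np.ones(N)                 # unit voltage step
v = np.zeros(N)                # capacitor voltage
b = 0.0                        # wave reflected by the capacitor (its one-sample memory)
for n in range(N):
    # Incident wave on the capacitor, solved from E = R i + v written in
    # wave variables a = v + Rc i, b = v - Rc i:
    a = (2.0 * Rc * E[n] - (Rc - R) * b) / (R + Rc)
    v[n] = 0.5 * (a + b)
    b = a                      # capacitor reflection: b[n+1] = a[n]

# The step response must settle to E (here 1 V) as in the analog RC circuit.
assert abs(v[-1] - 1.0) < 1e-3
```

The recursion is exactly the trapezoidal discretization of the RC circuit; the paper's structures generalize this idea to multi-port scattering junctions with nullors embedded.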
A possible WD realization of the circuit in Figure 10 a is shown in Figure 11 . In order to test the accuracy of the WD implementation, we compare the Discrete Fourier Transform (DFT) of the impulse response obtained by simulating the reference circuit in the WD domain with the DFT of the impulse response obtained by simulating the same circuit in Mathworks Simscape (SSC), for both BK4134 and BK4146 microphones. The curves are then normalized with respect to the pressure at 1 kHz, as is typically done to describe the microphone sensitivity. The results are shown in Figure 12 , where the overlap between the continuous blue (WD) and the dashed red (SSC) curves confirms the accuracy of the representation. Looking at Figure 12 b, we can appreciate that microphone BK4146 is characterized by a lower resonance frequency with respect to BK4134's, which is due to a larger diaphragm mounted in the mic capsule. 4.1. Inverse Model Validation In this subsection, we refer to the digital implementation of the microphone BK4134 equivalent circuit as Direct System . By applying the theorem presented in Section 2 , it is possible to derive the Inverse System circuit shown in Figure 10 b. In particular, since we are in a VIVO scenario, in order to design the Inverse System , once the Direct System has been augmented with a parallel connection of a nullator and a norator as explained in Section 2.2.1 , we substitute the norator with a VCVS driven by $V_{out}$ and the input source with the norator. The Inverse System can be implemented in the WD domain in a fully explicit fashion. In order to validate the Inverse System implementation, we consider the processing chain in Figure 13 , which is composed of the cascade of the Direct System and the Inverse System , and we verify that the output of the cascade is equal to the input of the same cascade, i.e., $\hat{P}_{in} = P_{in}$ . Figure 14 shows the results of such a test.
We consider the input signal $P_{in}$ of the Direct System to be an impulse and we compute the response of the microphone, which is shown in Figure 14 a. Then, we feed the BK4134 Inverse System with the obtained voltage signal and we compare the output $\hat{P}_{in}$ with the input $P_{in}$ . Looking at Figure 14 b, we can notice that $P_{in}$ and $\hat{P}_{in}$ match perfectly, since the output of the Inverse System is indeed an impulse. To further confirm the accuracy of the Inverse System implementation, we compute the Root Mean Square Error (RMSE) between the input and output of the processing chain, obtaining a result below the machine precision. Finally, a similar test is carried out considering the circuit equivalent parameters of microphone BK4146; even in this case, the RMSE is numerically zero. 4.2. Sensor Virtualization Test In this subsection, we provide an example of sensor virtualization. In particular, we employ the PITC-based algorithm presented in Section 3.2 for imposing the acoustic behavior of a target microphone. Let us suppose that the Physical Direct System is microphone BK4134 and that we would like to obtain a voltage signal $V_{out}$ as if it were acquired by the microphone BK4146, i.e., the Target Direct System . It follows that the Inverse System is the circuital inverse model of microphone BK4134. The circuit parameters of both BK4134 and BK4146 are, once again, those listed in Table 2 . Direct, inverse, and target systems are implemented in the WD domain as explained in the previous subsections. As input signal, we consider an exponential sine sweep defined as $P_{in}[k] = \sin\left(2 \pi f_1 L \left(e^{k/(f_s L)} - 1\right)\right),$ where $f_s = 96$ kHz is the sampling frequency, $k$ is the sample index, and $L = T/\log(f_2/f_1)$ , with $f_1 = 20$ Hz as the starting frequency, $f_2 = 20$ kHz as the final frequency, and $T = 1$ s as the total duration of the sweep. Figure 15 shows the result of such a test.
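Assuming the natural logarithm in the definition of $L$ (as in the standard exponential sine sweep), the excitation can be generated and sanity-checked as follows: the instantaneous frequency, i.e., the derivative of the phase, should glide from $f_1$ to $f_2$ over the sweep duration.

```python
import numpy as np

fs = 96000.0
f1, f2, T = 20.0, 20000.0, 1.0
L = T / np.log(f2 / f1)               # sweep rate constant (natural log assumed)

k = np.arange(int(T * fs))
phase = 2.0 * np.pi * f1 * L * (np.exp(k / (fs * L)) - 1.0)
p_in = np.sin(phase)                  # exponential sine sweep

# Instantaneous frequency = (1/2pi) dphase/dt; it must glide from f1 to f2.
f_inst = np.diff(phase) * fs / (2.0 * np.pi)
assert abs(f_inst[0] - f1) / f1 < 0.01
assert abs(f_inst[-1] - f2) / f2 < 0.01
```

This kind of sweep exercises the whole audio band while concentrating energy at one frequency per instant, which is why it is a common choice for characterizing (possibly nonlinear) transducers.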
The continuous yellow curve represents the output of the Physical Direct System when the PITC-based algorithm is not active, the dashed red curve represents the target behavior that we would like to obtain, while the continuous blue curve represents the output of the PITC, i.e., the output of the system when the algorithm is active. The overlap between the blue and red curves is perfect, given that the RMSE is below machine precision. The algorithm is thus able to impose the response of microphone BK4146 even if the pressure signal is acquired by means of BK4134, while being characterized at the same time by real-time capabilities. In fact, the algorithm implemented in a MATLAB script is able to process, on average, one sample in µs, which is lower than $T_s = 1/f_s = 10.42$ µs. Note that, for these tests, the Physical Direct System is simulated by means of the WDF shown in Figure 11 , but, in applications of interest, it represents the actual physical transducer. Moreover, the considered WDF is composed of just one topological junction to which all the elements are connected, but other solutions can be obtained. For example, 3-port topological adaptors can be employed when possible in order to create a WDF composed of multiple junctions and reduce the size of the scattering matrices. Finally, we would like to stress the fact that the circuital model shown in Figure 10 a does not take into account the nonlinear behavior introduced by the JFET, which is typically connected to the microphone capsule, since it is not directly involved in the transduction process. It follows that, depending on the application, the electrical subcircuit might be modified by introducing the circuital elements downstream in order to accomplish a proper virtualization. 5. Actuator Virtualization: Application to Compression Drivers We now consider a compression driver as an example of audio actuator.
Such a transducer can be described by means of the circuit shown in Figure 16 a, adapted from [ ], where we can again distinguish three subsystems: the leftmost subcircuit modeling the electrical domain, the subcircuit in the middle modeling the mechanical domain, and the rightmost subcircuit modeling the acoustic domain. In particular, $L_e$ and $R_e$ are the electrical inductance and resistance of the voice coil, while $R_{md}$ , $M_{md}$ , and $C_{md}$ are mechanical parameters: $R_{md}$ models both the mechanical resistance and the resistance of the enclosure; $M_{md}$ models the mass, also taking into account the voice coil, and $C_{md}$ models both the compliance of the diaphragm and the compliance of the air in the enclosure. Moreover, $C_{af}$ is the acoustic compliance of the front cavity. In addition, $R_{a1}$ , $R_{a2}$ , $M_{a1}$ , and $C_{a1}$ model the free-field acoustic impedance at the driver throat. The gyrator represents the electromechanical transduction and is characterized by a nonlinear force factor $Bl$ that can be modeled as follows [ ]: $Bl(x) = Bl_0 + Bl_1 x + Bl_2 x^2 + Bl_3 x^3 + Bl_4 x^4 ,$ where $x$ is the displacement of the diaphragm in millimeters, obtained by integrating the velocity $v_{out}$ , and $Bl_0, \ldots, Bl_4$ are real polynomial coefficients. The ideal transformer, instead, models the acoustic transduction as a function of the area of the diaphragm $S_d$ . The input signal $V_{in}$ is the electrical signal driving the loudspeaker, whereas as output signal we select the velocity $v_{out}$ , even though other signals can be chosen, e.g., the output pressure. Table 3 shows the values of the circuital parameters of the SEAS type 27TFF (H0831) compression driver model [ ] (hereafter referred to as SEAS). Notably, the force factor coefficients are determined considering the typical $Bl$ curve shown in Figure 17 . We implement the circuital model of the nonlinear compression driver using WDF principles.
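Using the $Bl$ coefficients reported for the SEAS driver in Table 3 (with the displacement $x$ expressed in millimeters), the force-factor polynomial above can be evaluated directly:

```python
import numpy as np

# Force-factor polynomial coefficients Bl0 ... Bl4 for the SEAS driver
# (values from Table 3), with the displacement x in millimeters.
Bl_coeffs = [3.14, 2.7e-2, 1e-2, 1.2e-3, 2.2e-4]

def Bl(x_mm):
    # Bl(x) = Bl0 + Bl1*x + Bl2*x^2 + Bl3*x^3 + Bl4*x^4
    return sum(c * x_mm**i for i, c in enumerate(Bl_coeffs))

assert Bl(0.0) == 3.14      # at rest, Bl reduces to Bl0
assert Bl(1.0) > Bl(0.0)    # with these coefficients, Bl grows for positive x
```

Since $Bl$ multiplies the coil current in the gyrator equations, its dependence on $x$ is precisely the mechanism that injects harmonic distortion into the driver response.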
In particular, we encompass the gyrator into the scattering matrix by considering the method presented in [ ], whereas the nonlinear force factor is modeled in the digital domain as explained in [ ], leading to a fully explicit WDF structure that does not need iterative solvers to be implemented. The used WDF realization of the circuit in Figure 16 a is shown in Figure 18 . The accuracy of the WDF modeling the Direct System has been validated by means of a comparison with Mathworks Simscape. 5.1. Inverse Model Validation In this subsection, we validate the designed inverse of the loudspeaker circuital model shown in Figure 16 a. We derive the Inverse System by applying the theorem presented in Section 2 and considering the VICO case. We do this by first augmenting the Direct System with a connection of a nullator and a norator in series to the same branch through which the output current flows, as explained in Section 2.2.2 ; then, we substitute the norator with a CCCS driven by $v_{out}$ and the input source with the norator. We finally implement the Inverse System in the WD domain in a fully explicit fashion, similarly to what was done with the Direct System . In order to encompass both the nullor and the gyrator into the WD multi-port scattering junction, we again employ the method presented in [ ]. With the purpose of validating the WD implementation, we consider a processing chain similar to that shown in Figure 13 , where now the Direct System is driven by voltage $V_{in}$ and the Inverse System by the velocity signal $v_{out}$ , which, in turn, is the output current of the Direct System . If the implementation of the Inverse System is exact, we obtain $\hat{V}_{in} = V_{in}$ . Figure 19 and Figure 20 show the results of such a test, the former in the time domain and the latter in the frequency domain.
In particular, we consider as input signal of the cascade of Direct System and Inverse System the following exponential sine sweep: $V_{in}[k] = A \sin\left(2 \pi f_1 L \left(e^{k/(f_s L)} - 1\right)\right),$ where $A = 9$ V is the amplitude, while all the other parameters are set as in Section 4.2 . Figure 19 shows the signals obtained in the first s of simulation: the dashed red curve represents the input of the Direct System , i.e., $V_{in}$ , and is perfectly overlapped with the continuous blue curve, which represents the output of the Inverse System , i.e., $\hat{V}_{in}$ . To further analyze the performance of the Inverse System , in Figure 20 we also provide the spectrograms of the three signals involved in the validation chain. In particular, Figure 20 a shows the spectrogram of $V_{in}$ , Figure 20 b the spectrogram of $v_{out}$ , while Figure 20 c the spectrogram of $\hat{V}_{in}$ . In the second plot, we can appreciate the nonlinear behavior of the loudspeaker circuital model, since different harmonics appear over the frequency spectrum. Looking at the third plot, instead, we can further verify the action of the Inverse System , since the harmonics characterizing $v_{out}$ (i.e., the input of the Inverse System ) nicely disappear, leading to a perfect match between $V_{in}$ and $\hat{V}_{in}$ . Finally, both for the time- and frequency-domain studies, we compute the RMSE between $V_{in}$ and $\hat{V}_{in}$ , obtaining, once again, values below machine precision. 5.2. Actuator Linearization Test In a last application scenario, we show how the transducer linearization task can be accomplished as a particular case of the proposed virtualization algorithms. We aim, in fact, at eliminating the distortion effect introduced by the nonlinear behavior of the loudspeaker SEAS. In order to reach this goal, we employ the TIPC-based algorithm presented in Section 3.1 to impose the acoustic response of a target loudspeaker.
In this application scenario, the Target Direct System is the linear version of the circuit shown in Figure 16 a, which can be obtained by simply setting $Bl = Bl_0$ [ ]. The parameters listed in Table 3 are again used for the WD implementations of the Target Direct System , the Inverse System , and the Physical Direct System . Contrary to what was done in the microphone case, the desired behavior is imposed at the beginning of the processing chain, since the physical transducer is an actuator. In order to test the chain, we set the input $V_{in} = A \sin(2 \pi f_0 k / f_s)$ , where $A$ is the amplitude, $k$ is the sample index, $f_0 = 500$ Hz is the fundamental frequency, and $f_s = 96$ kHz is the sampling frequency. Moreover, in order to test the nonlinear system in different operating conditions, we consider two different amplitudes. Figure 21 shows the results of such a test. In particular, the figure shows the power spectra of the Direct System output (“Non Compensated”) and of the TIPC output (“Compensated”), together with the values of the THD. Figure 21 a,b are obtained by setting $A = 5$ V, whereas Figure 21 c,d are obtained by setting $A = 9$ V. In both cases, the TIPC-based algorithm is able to suppress the harmonics introduced by the nonlinearity affecting the compression driver, while maintaining the content at the fundamental $f_0$ . This can be quantified by looking at the THD reduction, which for both tests is over 220 dB. As far as efficiency is concerned, instead, the algorithm, implemented in a MATLAB script, is able to run in real time, processing on average one sample in µs. Note that, even in this case, the Physical Direct System is simulated but, in real scenarios, it represents the actual physical transducer. The TIPC-based algorithm can thus be promising for improving the acoustic response of the loudspeaker on the fly by pre-processing the electrical signal driving the loudspeaker itself.
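A rough sketch of how THD can be estimated from a power spectrum is given below. This is a generic FFT-based estimate over the first few harmonics, not the paper's measurement procedure, and the cubic distortion is a hypothetical stand-in for the $Bl(x)$ nonlinearity; the signal length is chosen to contain an integer number of cycles so the fundamental falls on an exact DFT bin.

```python
import numpy as np

fs, f0, N = 96000, 500, 9600          # 9600 samples = 0.1 s = 50 full cycles
k = np.arange(N)
x = np.sin(2 * np.pi * f0 * k / fs)   # ideal (linearized) output
y = x + 0.1 * x**3                    # toy odd-order distortion, stand-in for Bl(x)

def thd(sig, f0, fs):
    # Ratio of the RMS of harmonics 2..5 to the fundamental, from the DFT.
    X = np.abs(np.fft.rfft(sig))
    fund_bin = int(round(f0 * len(sig) / fs))
    harm = np.sqrt(sum(X[h * fund_bin] ** 2 for h in range(2, 6)))
    return harm / X[fund_bin]

assert thd(x, f0, fs) < 1e-9          # pure sine: essentially no distortion
assert thd(y, f0, fs) > 1e-2          # distorted signal: clear harmonic content
```

In a linearization test such as the one above, the "Compensated" chain should bring the THD of the distorted output back toward that of the pure sine.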
Finally, it is worth stressing that the tested virtualization algorithm can be exploited not only to accomplish linearization but also to impose a desired nonlinear behavior, similarly to what was already shown in [ ]. 6. Conclusions In this paper, we described a general approach for the virtualization of audio transducers applicable to both sensors, like microphones, and actuators, like loudspeakers. We defined virtualization as the task of altering the acquired/reproduced signal by making it sound as if acquired/reproduced by another ideal or real audio sensor/actuator. In order to accomplish such a task, we started by reformulating Leuciuc's theorem and proof for circuit inversion, providing case-specific guidelines on how to derive the inverse circuital model for all combinations of input and output variables. In particular, circuital inversion is achieved by augmenting the Direct System with a theoretical two-port called nullor, exploiting nullor equivalent models of short and open circuits [ ]. We then presented two versions of the Direct–Inverse–Direct Chain which allow us to address virtualization for both sensors (Physical-Inverse-Target Chain) and actuators (Target-Inverse-Physical Chain). The chains are composed of three blocks: a Physical Direct System , which is responsible for the actual transduction process, an Inverse System , which is the circuital inverse of the Physical Direct System , and a Target Direct System , which is the transducer characterized by the behavior that we would like to obtain. We exploited WDF principles to implement the digital blocks of such processing chains in a fully explicit fashion, i.e., without resorting to iterative solvers. Finally, we tested both the PITC-based and the TIPC-based algorithms for addressing microphone virtualization and linearization of a loudspeaker system with a nonlinear compression driver.
Future work may concern first the extension of circuital inversion theory to the Multiple Input Multiple Output (MIMO) case, and then, by exploiting this new theory, the development of refined DIDC-based algorithms for addressing the case of array virtualization, both in sensing and actuation scenarios. Author Contributions Conceptualization, R.G. and A.B.; methodology, R.G. and A.B.; software, R.G., A.B. and O.M.; validation, R.G., A.B. and O.M.; formal analysis, R.G. and A.B.; investigation, R.G., A.B., O.M. and A.S.; resources, R.G., A.B., O.M. and A.S.; data curation, R.G., A.B. and O.M.; writing—original draft preparation, R.G.; writing—review and editing, R.G., A.B., O.M. and A.S.; visualization, R.G., A.B. and O.M.; supervision, A.B. and A.S.; project administration, A.B. and A.S.; funding acquisition, A.B. and A.S. All authors have read and agreed to the published version of the manuscript. This research received no external funding. Institutional Review Board Statement Not applicable. Informed Consent Statement Not applicable. Data Availability Statement Not applicable. Conflicts of Interest The authors declare no conflict of interest. Figure 1. Nullor circuit symbol. The nullator (port 1) is represented with an ellipse, while the norator (port 2) with two circles. Figure 3. (a) Direct System containing one nullor; (b) Inverse System; (c) circuit employed for the Proof of Theorem 1. Figure 4. Augmenting a circuit with a series connection of nullator and norator. (a) Circuit presenting no nullor; (b) circuit with nullor equivalent to the circuit in (a); (c) inverse of the circuit in (b) and, in turn, of the circuit in (a). Figure 5. Augmenting a circuit with a parallel connection of nullator and norator. (a) Circuit presenting no nullor; (b) circuit with nullor equivalent to the circuit in (a); (c) inverse of the circuit in (b) and, in turn, of the circuit in (a). Figure 6. Adjoint network of the Inverse System shown in Figure 4 Figure 11. 
Possible WD implementation of the circuit shown in Figure 10 a. Figure 12. Comparison between the DFTs of the impulse responses. The blue curves represent the WD implementation of the circuit in Figure 10 a, while the red curves the Mathworks Simscape (SSC) implementation of the same circuit. ( a ) DFT of the impulse response for microphone BK4134; ( b ) DFT of the impulse response for microphone BK4146. Figure 14. Validation of the Inverse System for microphone BK4134. ( a ) Output voltage of the Direct System ; ( b ) comparison between $P_{in}$ (dashed red curve) and $\hat{P}_{in}$ (continuous blue curve) taking into account the processing chain shown in Figure 13 . Figure 15. PITC-based virtualization algorithm. Output voltage signals of: the Direct System, i.e., BK4134, when no virtualization algorithm is present (“No Post-processing”), the Target Direct System, i.e., BK4146 (“Target”), and the PITC (“Virtualized”). Figure 16. (a) Direct circuital model of the compression driver; (b) Inverse circuital model of the compression driver. Figure 18. Possible WD implementation of the circuit shown in Figure 16 a. Figure 19. Validation of the Inverse System for loudspeaker SEAS. ( a ) Output voltage of the Direct System ; ( b ) comparison between $V_{in}$ (dashed red curve) and $\hat{V}_{in}$ (continuous blue curve) taking into account a processing chain similar to that shown in Figure 13 . Figure 20. Validation of the Inverse System. (a) Input voltage $V_{in}$ of the Direct System; (b) output velocity $v_{out}$ (i.e., a current) of the Direct System; (c) output voltage $\hat{V}_{in}$ of the Inverse System. Figure 21. TIPC-based linearization algorithm. The first two plots are obtained considering $A = 5$ V: (a) power spectrum of the Physical Direct System output (“Non Compensated”); (b) power spectrum of the TIPC output (“Compensated”).
Instead, the remaining rows are obtained considering $A = 9$ V: (c) power spectrum of the Physical Direct System output (“Non Compensated”); (d) power spectrum of the TIPC output (“Compensated”).

Table 1. Direct and Inverse System networks for each combination of input and output signals.

Input Signal | Output Signal | Direct System | Inverse System
Voltage | Voltage | Network A | Network B
Voltage | Current | Network C | Network D
Current | Voltage | Network E | Network F
Current | Current | Network G | Network H

Table 2. Circuit parameters of the BK4134 and BK4146 microphone models.

Parameter | BK4134 | BK4146
$R_{a1}$ [kg/(m⁴s)] | $4.12 \times 10^6$ | $1.03 \times 10^6$
$R_{a2}$ [kg/(m⁴s)] | $6.54 \times 10^6$ | $1.66 \times 10^6$
$M_{a1}$ [kg/(m⁴s²)] | $54.83$ | $27.44$
$C_{a1}$ [m⁴s²/kg] | $1.95 \times 10^{-12}$ | $15.58 \times 10^{-12}$
$C_{ag}$ [m⁴s²/kg] | $9.12 \times 10^{-15}$ | $46.54 \times 10^{-15}$
$R_{as}$ [kg/(m⁴s)] | $4.13 \times 10^3$ | $444.58$
$M_{as}$ [kg/(m⁴s²)] | $18.8$ | $6.24$
$R_{ah}$ [kg/(m⁴s)] | $99.93 \times 10^3$ | $86.45 \times 10^3$
$M_{ah}$ [kg/(m⁴s²)] | $278.2$ | $209.52$
$C_{ab}$ [m⁴s²/kg] | $0.89 \times 10^{-12}$ | $4.76 \times 10^{-12}$
$M_{md}$ [kg] | $3.69 \times 10^{-6}$ | $14.73 \times 10^{-6}$
$C_{md}$ [m/N] | $12.58 \times 10^{-6}$ | $26.55 \times 10^{-6}$
$C_{e0}$ [F] | $27.36 \times 10^{-12}$ | $90.72 \times 10^{-12}$
$R_L$ [Ω] | $100 \times 10^6$ | $100 \times 10^6$
$S_d$ [m²] | $62.2 \times 10^{-6}$ | $248.3 \times 10^{-6}$
$\alpha$ [N/V] | $121.17$ | $140$

Table 3. Circuit parameters of the SEAS compression driver model.

Parameter | SEAS
$R_{a1}$ [kg/(m⁴s)] | $0.72 \times 10^6$
$R_{a2}$ [kg/(m⁴s)] | $1.64 \times 10^6$
$M_{a1}$ [kg/(m⁴s²)] | $36.32$
$C_{a1}$ [m⁴s²/kg] | $30.11 \times 10^{-12}$
$C_{af}$ [m⁴s²/kg] | $9.88 \times 10^{-12}$
$R_{md}$ [kg/m] | $0.92$
$M_{md}$ [kg] | $298.64 \times 10^{-6}$
$C_{md}$ [m/N] | $14.1 \times 10^{-6}$
$R_e$ [Ω] | $4.9$
$L_e$ [H] | $50 \times 10^{-6}$
$Bl_0$ [N/A] | $3.14$
$Bl_1$ [N/(A·mm)] | $2.7 \times 10^{-2}$
$Bl_2$ [N/(A·mm²)] | $1 \times 10^{-2}$
$Bl_3$ [N/(A·mm³)] | $1.2 \times 10^{-3}$
$Bl_4$ [N/(A·mm⁴)] | $2.2 \times 10^{-4}$
$S_d$ [m²] | $0.7 \times 10^{-3}$

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https: Share and Cite Giampiccolo, R.; Bernardini, A.; Massi, O.; Sarti, A. On the Virtualization of Audio Transducers. Sensors 2023, 23, 5258. https://doi.org/10.3390/s23115258
CBSE Class 4 Mathematics Question Papers - Free Maths
Topics to be covered for Class 4
Term I
• Revision: Number sense and word problems
• Revision: Geometry
• Unit 1 Building with Bricks: Floor patterns, different patterns, solving problems based on brick cost, numbers etc.
• Unit 10 Play with Patterns: Block patterns and designs, pattern with letters and numbers, completion of pattern of series, magic patterns, magic triangle, building of number patterns, patterns with addition of different numbers, decoding of different numbers and letters, floor patterns
• Unit 2 Long and Short: Distance between two points, concept of long and short, guessing the distance, comparison of length, units of measuring length - cm, m, km
• Unit 3 A Trip to Bhopal: The buses, estimation of buses, beginning of the journey (estimation of time), to Bhimbetka (estimation of length, distance), lunch time estimation of time, boating problems of cost and time
• Unit 4 Tick-Tick-Tick: Understanding the clock and time reading, different hands of clock, making a clock and reading time, relation between different hands of clock (second, minute and hour hand), drawing a timeline to show different events, conversion of seconds into minutes and hours and vice versa, reading and writing dates, format comparison of time, manufacturing and expiry dates of products, time format (12 hour, 24 hour clock, Railway clock)
• Unit 6 The Junk Seller: Calculation of cost and value of things by multiplication, concept of loan, finding cost by verbal or mental calculation, different ways of multiplication, counting of currency coins and notes
Term II
• Revisit – Number sense and word problems
• Revisit – Measurement
• Unit 5 The Way The World Looks: View of different objects from different directions, spatial understanding of direction and space, mapping your way, formation of dice by paper folding
• Unit 7 Jugs and Mugs: Measurement of liquids in l, ml, kl etc., conversion of ml into litre and vice versa, estimation of
quantity according to size of different liquids used at home
• Unit 8 Carts and Wheels: Circular shapes around us - wheels and bangles etc., drawing circles with different circular shapes, radius and diameter of circle, comparison of sizes of circles on the basis of radius, drawing a circle using a compass, centre of the circle
• Unit 9 Halves and Quarters: Half and quarter parts (whole to part), dividing different shapes into halves and quarters, simple word and money problems on fractions, completion of whole part with given fraction (part of whole), 1/2, 1/4, 3/4 parts of metre, litre and kilogram and 1 Rupee
• Unit 11 Tables and Shares: Tables, formation of tables, making a table of a higher number by adding tables of 2 numbers, how many cats: finding the number of legs, Gangu’s sweets: finding number of boxes, cost etc. with division and multiplication
• Unit 12 How Heavy? How Light?: Compare weight of different things, unit of measurement of weight, weighing balance, weight of elephant - comparative understanding, broken stones, post office - postage stamps, sending parcel according to weight
• Unit 13 Field and Fences: Concept of perimeter, how to find perimeter of a given figure, simple word problems of perimeter, puzzles and squares
• Unit 14 Smart Charts: Concept of data handling, making frequency tables for time used in different activities, favourite food: filling the tables
• Unit 6 The Junk Seller: This chapter of SA-1 will be repeated and evaluated in SA-2 as per departmental guidelines
For preparation of exams, students can also check out other resource material:
• CBSE Class 4 Maths Sample Papers
• CBSE Class 4 Maths Test Papers
• CBSE Class 4 Maths Important Questions
In order to assess the level of preparation done by any particular student, he or she needs to solve Previous Year Question Papers. These papers act as perfect tools to practise for the final board exam.
If one wants to get a clear look and feel of how final exam papers are framed in terms of level of difficulty, time and other aspects, then all students must make sure that they attempt these papers once their course revision is finished. A few benefits of solving Previous Year Question Papers are given below:
• Revising the subject is a very good practice, but unless one solves the past question papers in an environment similar to the board exam or final classroom exam, it is quite likely that the student may not be able to identify and check whether his or her understanding of all concepts of the subject is complete or not. It is only after students attempt the question paper in the same time frame that they are able to judge their capability of solving the paper in the stipulated time. It highlights the weak areas, if any, and gives students ample time to work on those areas and be better prepared before exams.
• Knowing everything is great, but it is of no use unless the implementation and results match. There is always a risk that, in spite of knowing everything, a student falls short of time to complete the entire question paper and thus loses marks. CBSE board papers and previous year question papers are generally of 3 hour duration. So while practising such papers it is imperative to create a final exam or board-like environment at home, ensure that the question paper is attempted only in 3 hours, and then check whether it was possible to complete the paper in the desired amount of time. Often at first students take longer than expected, and thus they get an early warning to practise more and increase their speed.
• Students with anxiety issues need previous year papers more than anyone for overcoming such issues.
Since they do not know what questions will be asked in the CBSE board exam, they panic out of fear of the unknown and get scared by the idea that they might not do well in the exams. Such students should complete at least 7-10 question papers before the exams to gain confidence and get into a better frame of mind.
{"url":"https://www.ribblu.com/cbse-class-4-mathematics-question-papers","timestamp":"2024-11-07T17:21:55Z","content_type":"text/html","content_length":"481383","record_id":"<urn:uuid:0adc6ea6-405e-4ea2-bfc5-66f6ae01c072>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00158.warc.gz"}
Higher Engineering Mathematics BS Grewal Pdf - Maths

Higher Engineering Mathematics BS Grewal Pdf: latest edition of the mathematics book by BS Grewal. Dr. BS Grewal Higher Engineering Mathematics Pdf free download, BS Grewal Mathematics Pdf, Engineering Mathematics BS Grewal Pdf Download. For students who are looking for a mathematics book for higher engineering, this book is very important for engineering maths.

Contents in Higher Engineering Mathematics BS Grewal Pdf

In this book there are 8 units with a total of 38 chapters. These are the following:

Unit I: Algebra, Vectors, and Geometry
1. Solution of Equations
2. Linear Algebra: Determinants, Matrices
3. Vector Algebra and Solid Geometry

Unit II: Calculus
4. Differential Calculus & Its Applications
5. Partial Differentiation & Its Applications
6. Integral Calculus & Its Applications
7. Multiple Integrals & Beta, Gamma Functions
8. Vector Calculus & Its Applications

Unit III: Series
9. Infinite Series
10. Fourier Series & Harmonic Analysis

Unit IV: Differential Equations
11. Differential Equations of First Order
12. Applications of Differential Equations of First Order
13. Linear Differential Equations
14. Applications of Linear Differential Equations
15. Differential Equations of Other Types
16. Series Solution of Differential Equations and Special Functions
17. Partial Differential Equations
18. Applications of Partial Differential Equations

Unit V: Complex Analysis
19. Complex Numbers and Functions
20. Calculus of Complex Functions

Unit VI: Transforms
21. Laplace Transforms
22. Fourier Transforms
23. Z-Transforms

Unit VII: Numerical Techniques
24. Empirical Laws and Curve-fitting
25. Statistical Methods
26. Probability and Distributions
27. Sampling and Inference
28. Numerical Solution of Equations
29. Finite Differences and Interpolation
30. Numerical Differentiation and Integration
31. Difference Equations
32.
Numerical Solution of Ordinary Differential Equations
33. Numerical Solution of Partial Differential Equations
34. Linear Programming

Unit VIII: Special Topics
35. Calculus of Variations
36. Integral Equations
37. Discrete Mathematics
38. Tensor Analysis

Important Questions of Engineering Mathematics

What is the derivative of the function f(x) = 3x^2 + 2x - 1?
a) 6x + 2  b) 6x + 1  c) 3x^2 + 2  d) 6x - 1
Answer: a) 6x + 2

What is the value of the integral ∫(2x + 3) dx?
a) x^2 + 3x + C  b) x^2 + 3x  c) x^2 + 3  d) 2x^2 + 3x
Answer: a) x^2 + 3x + C (where C is the constant of integration)

What is the value of sin(π/2)?
a) 0  b) 1  c) -1  d) π/2
Answer: b) 1

What is the value of log10(100)?
a) 0  b) 1  c) 2  d) 10
Answer: c) 2

Which of the following is an example of a vector quantity?
a) Mass  b) Temperature  c) Speed  d) Displacement
Answer: d) Displacement

What is the value of cos(0)?
a) 0  b) 1  c) -1  d) π/2
Answer: b) 1

What is the value of e^0?
a) 0  b) 1  c) e  d) ∞
Answer: b) 1

Which trigonometric identity represents the Pythagorean theorem?
a) sin^2θ + cos^2θ = 1  b) tan^2θ + 1 = sec^2θ  c) 1 + cot^2θ = cosec^2θ  d) cos^2θ - sin^2θ = 1
Answer: a) sin^2θ + cos^2θ = 1

What is the value of √(-4)?
a) -2  b) 2  c) √2  d) Undefined (imaginary)
Answer: d) Undefined (imaginary)

What is the value of ∫(3x^2 + 2x - 1) dx within the limits [0, 2]?
a) 6  b) 8  c) 10  d) 12
Answer: c) 10

Book Details of Higher Engineering Mathematics BS Grewal Pdf

Book Name: Higher Engineering Mathematics
Author Name: B.S. Grewal
Pdf Language: English
Pdf Size: 146 MB
Total Pages: 1327

Download PDF of the BS Grewal Mathematics Book

To download the PDF of the Engineering Mathematics book by BS Grewal, use the link given below, or buy this book from Amazon.

Disclaimer: We have neither copied nor scanned this book; we are only sharing links already available on the internet for the purpose of education.
If any person/organization has any objection related to these notes/books, please contact us, and we will remove these links as soon as possible. Email – rrbexampdf@gmail.com
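As a study aid, the two calculus answers above (the derivative in the first question and the definite integral in the last one) can be verified numerically with a short plain-Python check. The helper names below are ours, for illustration only:

```python
# Numerically verify two of the calculus answers above:
#  - d/dx (3x^2 + 2x - 1) = 6x + 2
#  - integral of 3x^2 + 2x - 1 over [0, 2] = 10

def f(x):
    return 3 * x**2 + 2 * x - 1

def derivative(g, x, h=1e-6):
    # central finite-difference approximation of g'(x)
    return (g(x + h) - g(x - h)) / (2 * h)

def simpson(g, a, b, n=1000):
    # composite Simpson's rule with n (even) subintervals;
    # exact (up to rounding) for polynomials of degree <= 3
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

for x in (0.0, 1.0, 2.5):
    assert abs(derivative(f, x) - (6 * x + 2)) < 1e-4

assert abs(simpson(f, 0.0, 2.0) - 10.0) < 1e-9
print("derivative and integral answers check out")
```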
{"url":"https://rrbexampdf.com/higher-engineering-mathematics-bs-grewal-pdf/","timestamp":"2024-11-14T11:14:54Z","content_type":"text/html","content_length":"98676","record_id":"<urn:uuid:10df3f0b-5606-4f92-a903-1c30ce25f0d2>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00830.warc.gz"}
Market Basket Analysis and Association Rules from Scratch

We have provided a tutorial of Market Basket Analysis in Python working with the mlxtend library. Today, we will provide an example of how you can get the association rules from scratch. Let's recall the 3 most common association rules:

Association Rules

Given a set of transactions, find rules that will predict the occurrence of an item based on the occurrences of other items in the transaction. For example, we can extract information on purchasing behavior like "If someone buys beer and sausage, then they are likely to buy mustard with high probability".

Let's define the main association rules:

Support

It measures how often the product is purchased and is given by the formulas:

\(Support(X) = \frac{Frequency(X)}{N (\#of \;Transactions)}\)

\(Support(X \rightarrow Y) = \frac{Frequency(X \bigcap Y)}{N (\#of \;Transactions)}\)

Confidence

It measures how often items in Y appear in transactions that contain X and is given by the formula:

\(Confidence(X \rightarrow Y ) = \frac{ Support(X \rightarrow Y )}{ Support(X) }\)

Lift

It is the value that tells us how likely item Y is bought together with item X. Values greater than one indicate that the items are likely to be purchased together. It tells us how much better a rule is at predicting the result than just assuming the result in the first place. When lift > 1, the rule is better at predicting the result than guessing. When lift < 1, the rule is doing worse than informed guessing. It is given by the formula:

\(Lift(X \rightarrow Y ) = \frac{ Support(X \rightarrow Y )}{ Support(X)\times Support(Y) }\)

Coding Part

By 2 Products

Assume that we are dealing with the following groceries.xlsx file. We want to transform the data into order id and product id.
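Before turning to the pandas implementation below, here is the bare arithmetic of the three metrics on a small five-transaction dataset (the transactions are made up for illustration; they are not the groceries.xlsx data):

```python
# Toy illustration of support, confidence and lift.
# The five transactions below are invented for this example.
transactions = [
    {"beer", "sausage", "mustard"},
    {"beer", "sausage"},
    {"beer", "bread"},
    {"sausage", "mustard"},
    {"bread", "milk"},
]
N = len(transactions)

def support(*items):
    # fraction of transactions containing every item in `items`
    s = set(items)
    return sum(s <= t for t in transactions) / N

sup_x = support("beer", "sausage")                  # support(X), X = {beer, sausage}
sup_xy = support("beer", "sausage", "mustard")      # support(X -> Y), Y = {mustard}
confidence = sup_xy / sup_x
lift = sup_xy / (sup_x * support("mustard"))

print(sup_x, sup_xy, confidence, lift)
# support(X) = 2/5, support(X->Y) = 1/5,
# confidence = (1/5)/(2/5) = 0.5, lift = (1/5)/((2/5)*(2/5)) = 1.25
```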
import pandas as pd

df = pd.read_excel("groceries.xlsx")
df['items'] = df['items'].apply(lambda x: x.split(","))
df = df.explode('items')
df.columns = ['oid', 'pid']
df.reset_index(drop=True, inplace=True)

Write the function which returns the three association rules, i.e. support, confidence and lift, for every possible pair. The my_pid is the antecedent and the y is the consequent.

def all_x_y(df, my_pid, y):
    df = df.copy()
    N = len(df.oid.unique())
    tmp = pd.DataFrame({'XY': [my_pid, y]})
    tmp = df.merge(tmp, how='inner', left_on='pid', right_on='XY')
    # an order contains both X and Y exactly when it appears twice in the merge
    numerator = sum(tmp.groupby('oid').size() == 2) / N
    a = len(df.loc[df.pid == my_pid].oid.unique()) / N
    b = len(df.loc[df.pid == y].oid.unique()) / N
    denominator = a * b
    lift = numerator / denominator
    confidence = numerator / a
    support = numerator
    return (support, confidence, lift)

Let's see some examples by considering the (milk, bread) and (orange, coffee) pairs. You can confirm that we get the same results as with the mlxtend module:

onehot = df.pivot_table(index='oid', columns='pid', aggfunc=len, fill_value=0)
onehot = onehot > 0

from mlxtend.frequent_patterns import association_rules, apriori

# compute frequent items using the Apriori algorithm
frequent_itemsets = apriori(onehot, min_support=0.01, max_len=2, use_colnames=True)
# compute all association rules for frequent_itemsets
rules = association_rules(frequent_itemsets, min_threshold=0.01)

Now, let's see how we can get all the possible pairs:

unique_products = df.pid.unique()
output = []
for i in unique_products:
    for j in unique_products:
        if i != j:
            tmp = all_x_y(df, i, j)
            # collect the metrics for this pair
            output.append((i, j) + tmp)

output = pd.DataFrame(output)

By 3 Products

Market Basket Analysis and the association rules become more complicated when we examine more combinations. Let's say that we want to get all the association rules when there are 2 antecedents and 1 consequent, i.e. we already have two items in the basket and we want the association rules for the extra item.
The first thing that we will need to do is to generate all the possible combinations of 3 products (or even of 2, and then add the right-hand side). For example:

import itertools

x = list(itertools.combinations(unique_products, 3))

In another tutorial, we will show you how you can generate the association rules for more than two items. Stay tuned!
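Building on the same idea, a sketch of the 2-antecedent case is below. The helper `rule_metrics` is our name, not from the original post; it works on baskets represented as sets of products (for example `df.groupby('oid')['pid'].apply(set)`), and guards against item pairs that never co-occur:

```python
import itertools

def rule_metrics(baskets, antecedent, consequent):
    # baskets: iterable of item sets, one per order
    # antecedent: set of items already in the basket; consequent: a single item
    baskets = list(baskets)
    N = len(baskets)
    ante = set(antecedent)
    sup_x = sum(ante <= b for b in baskets) / N
    sup_y = sum(consequent in b for b in baskets) / N
    sup_xy = sum(ante | {consequent} <= b for b in baskets) / N
    # guard: an antecedent that never occurs would give a zero division
    confidence = sup_xy / sup_x if sup_x else 0.0
    lift = sup_xy / (sup_x * sup_y) if sup_x and sup_y else 0.0
    return (sup_xy, confidence, lift)

# Example wiring with the names used in the post (not run here):
# baskets = df.groupby('oid')['pid'].apply(set)
# rules = [(a, b, c) + rule_metrics(baskets, {a, b}, c)
#          for a, b, c in itertools.combinations(unique_products, 3)]
```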
{"url":"https://predictivehacks.com/market-basket-analysis-and-association-rules-from-scratch/","timestamp":"2024-11-11T07:24:43Z","content_type":"text/html","content_length":"907242","record_id":"<urn:uuid:83ef8255-b22c-4e82-81ec-c00b68595cb7>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00730.warc.gz"}
Exploratory Experimentation and Computation

David H. Bailey and Jonathan M. Borwein

The authors' thesis—once controversial, but now a commonplace—is that computers can be a useful, even essential, aid to mathematical research. —Jeff Shallit

Jeff Shallit wrote this in his recent review (MR2427663) of [10]. As we hope to make clear, Shallit was entirely right in that many, if not most, research mathematicians now use the computer in a variety of ways to draw pictures, inspect numerical data, manipulate expressions symbolically, and run simulations. However, it seems to us that there has not yet been substantial and intellectually rigorous progress in the way mathematics is presented in research papers, textbooks, and classroom instruction or in how the mathematical discovery process is organized.

Mathematicians Are Humans

We share with George Pólya (1887–1985) the view [25, vol. 2, p. 128] that, while learned, intuition comes to us much earlier and with much less outside influence than formal arguments.

David H. Bailey is Chief Technologist of the Computational Research Department at Lawrence Berkeley National Laboratory. His email is [email protected]. This work was supported by the director, Office of Computational and Technology Research, Division of Mathematical, Information, and Computational Sciences of the U.S. Department of Energy, under contract number DE-AC02-05CH11231. Jonathan M. Borwein is Laureate Professor at the Centre for Computer Assisted Research Mathematics and its Applications (CARMA) at the University of Newcastle, Australia. His email address is jonathan.borwein@newcastle.edu.au.

Pólya went on to reaffirm, nonetheless, that proof should certainly be taught in school.
We turn to observations, many of which have been fleshed out in coauthored books such as Mathematics by Experiment [10] and Experimental Mathematics in Action [3], in which we have noted the changing nature of mathematical knowledge and in consequence ask questions such as "How do we teach what and why to students?", "How do we come to believe and trust pieces of mathematics?", and "Why do we wish to prove things?" An answer to the last question is "That depends." Sometimes we wish insight and sometimes, especially with subsidiary results, we are more than happy with a certificate. The computer has significant capacities to assist with both. Smail [27, p. 113] writes: the large human brain evolved over the past 1.7 million years to allow individuals to negotiate the growing complexities posed by human social living. As a result, humans find various modes of argument more palatable than others and are more prone to make certain kinds of errors than others. Likewise, the well-known evolutionary psychologist Steve Pinker observes that language [24, p. 83] is founded on the ethereal notions of space, time, causation, possession, and goals that appear to make up a language of thought. This remains so within mathematics. The computer offers scaffolding both to enhance mathematical reasoning, as with the recent computation connected to the Lie group E8 (see http://www.aimath.org/E8/computerdetails.html), and to restrain mathematical error.

Notices of the AMS, Volume 58, Number 10

Experimental Mathodology

Justice Potter Stewart's famous 1964 comment, "I know it when I see it," is the quote with which
A bit less informally, by experimental mathematics we intend [10]: (a) gaining insight and intuition; (b) visualizing math principles; (c) discovering new relationships; (d) testing and especially falsifying conjectures; (e) exploring a possible result to see if it merits formal proof; (f) suggesting approaches for formal proof; (g) computing replacing lengthy hand derivations; (h) confirming analytically derived results. Of these items, (a) through (e) play a central role, and (f) also plays a significant role for us but connotes computer-assisted or computer-directed proof and thus is quite distinct from formal proof as the topic of a special issue of the Notices in December 2008; see, e.g., [20]. Digital Integrity: I. For us, (g) has become ubiquitous, and we have found (h) to be particularly effective in ensuring the integrity of published mathematics. For example, we frequently check and correct identities in mathematical manuscripts by computing particular values on the LHS and RHS to high precision and comparing results—and then if necessary use software to repair defects. As a first example, in a current study of “character sums” we wished to use the following result derived in [14]: ∞ X ∞ X (−1)m+n−1 (1) (2m − 1)(m + n − 1)3 m=1 n=1 1 1 51 ? 2 = 4 Li4 π 4 − π 2 log (2) − 2 2880 6 1 7 4 + log (2) + log(2)ζ(3). 6 2 Here Li4 (1/2) is a polylogarithmic value. However, a subsequent computation to check results disclosed that, whereas the LHS evaluates to −0.872929289 . . ., the RHS evaluates to 2.509330815 . . .. Puzzled, we computed the sum, as well as each of the terms on the RHS (sans their coefficients), to 500-digit precision, then applied the “PSLQ” algorithm, which searches for integer relations among a set of constants [16]. PSLQ quickly found the following: ∞ X ∞ X (−1)m+n−1 (2m − 1)(m + n − 1)3 m=1 n=1 1 151 4 1 2 2 = 4 Li4 − π − π log (2) 2 2880 6 1 7 4 + log (2) + log(2)ζ(3). 
6 2 In other words, in the process of transcribing (1) into the original manuscript, “151” had become “51”. It is quite possible that this error would have gone undetected and uncorrected had we not been (2) November 2011 able to computationally check and correct such results. This may not always matter, but it can be crucial. With a current research assistant, Alex Kaiser at Berkeley, we have started to design software to refine and automate this process and to run it before submission of any equation-rich paper. This semiautomated integrity checking becomes pressing when verifiable output from a symbolic manipulation might be the length of a Salinger novel. For instance, recently while studying expected radii of points in a hypercube [12], it was necessary to show the existence of a “closed form” for Z log(t + x2 + y 2 ) (3) J(t) := dx dy. 2 2 [0,1]2 (1 + x )(1 + y ) The computer verification of [12, Thm. 5.1] quickly returned a 100, 000-character “answer” that could be numerically validated very rapidly to hundreds of places. A highly interactive process stunningly reduced a basic instance of this expression to the concise formula π2 7 11 π log 2 − ζ(3) + π Cl2 (4) J(2) = 8 48 24 6 5π 29 π Cl2 − , 24 6 where Cl2 is the Clausen function Cl2 (θ) := P 2 n≥1 sin(nθ)/n (Cl2 is the simplest nonelementary Fourier series). Automating such reductions will require a sophisticated simplification scheme with a very large and extensible knowledge base. Discovering a Truth Giaquinto’s [18, p. 50] attractive encapsulation— “In short, discovering a truth is coming to believe it in an independent, reliable, and rational way”— has the satisfactory consequence that a student can legitimately discover things already “known” to the teacher. Nor is it necessary to demand that each dissertation be absolutely original— only that it be independently discovered. 
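Returning to the integrity check of (1) and (2): even double-precision arithmetic exposes the 51 versus 151 misprint. The sketch below is our illustration, not the authors' 500-digit PSLQ computation; it collapses the double sum along the diagonals m + n − 1 = k to get a fast alternating series:

```python
import math

# LHS of (2): collect terms with m + n - 1 = k, giving the alternating series
#   sum_{k>=1} (-1)^k * (sum_{m=1}^{k} 1/(2m-1)) / k^3
lhs = 0.0
odd_harmonic = 0.0
for k in range(1, 4000):
    odd_harmonic += 1.0 / (2 * k - 1)
    lhs += (-1) ** k * odd_harmonic / k ** 3

li4_half = sum(0.5 ** n / n ** 4 for n in range(1, 60))   # Li_4(1/2)
zeta3 = sum(1.0 / n ** 3 for n in range(1, 200000))        # zeta(3), crude tail cut
log2 = math.log(2)

def rhs(coeff):
    # right-hand side of (1)/(2) with the pi^4 coefficient coeff/2880
    return (4 * li4_half - coeff / 2880 * math.pi ** 4
            - math.pi ** 2 * log2 ** 2 / 6 + log2 ** 4 / 6
            + 3.5 * log2 * zeta3)

print(lhs, rhs(151), rhs(51))
# lhs and rhs(151) agree to ~9 digits; rhs(51) is off by 100*pi^4/2880 ~ 3.38
```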
For instance, a differential equation thesis is no less meritorious if the main results are subsequently found to have been accepted, unbeknownst to the student, in a control theory journal a month earlier—provided they were independently discovered. Near-simultaneous independent discovery has occurred frequently in science, and such instances are likely to occur more and more frequently as the earth's "new nervous system" (Hillary Clinton's term in a recent policy address) continues to pervade research. Despite the conventional identification of mathematics with deductive reasoning, Kurt Gödel (1906–1978) in his 1951 Gibbs lecture said: If mathematics describes an objective world just like physics, there is no reason why inductive methods should not be applied in mathematics just the same as in physics.
As 2006 Abel Prize Laureate Lennart Carleson describes in his 1966 ICM speech on his positive resolution of Luzin’s 1913 conjecture (that the Fourier series of square-summable functions converge pointwise a.e. to the function), after many years of seeking a counterexample, he finally decided none could exist. He expressed the importance of this confidence as follows: The most important aspect in solving a mathematical problem is the conviction of what is the true result. Then it took 2 or 3 years using the techniques that had been developed during the past 20 years or so. Jeff Weeks’s Topological Games, or Euclid in Java.2 (e) Internet databases and facilities, including Google, MathSciNet, arXiv, Wikipedia, MathWorld, MacTutor, Amazon, Amazon Kindle, and many more that are not always so viewed. All entail data mining in various forms. The capacity to consult the Oxford dictionary and Wikipedia instantly within Kindle dramatically changes the nature of the reading process. Franklin [17] argues that Steinle’s “exploratory experimentation” facilitated by “widening technology” and “wide instrumentation”, as routinely done in fields such as pharmacology, astrophysics, medicine, and biotechnology, is leading to a reassessment of what legitimates experiment, in that a “local model” is not now a prerequisite. Thus a pharmaceutical company can rapidly examine and discard tens of thousands of potentially active agents and then focus resources on the ones that survive, rather than needing to determine in advance which are likely to work well. Similarly, aeronautical engineers can, by means of computer simulations, discard thousands of potential designs and submit only the best prospects to full-fledged development and testing. 
Hendrik Sørenson [28] concisely asserts that experimental mathematics—as defined above—is following similar tracks with software such as Mathematica, Maple, and Matlab playing the role of wide instrumentation: These aspects of exploratory experimentation and wide instrumentation originate from the philosophy of (natural) science and have not been much developed in the context of experimental mathematics. However, I claim that, e.g., the importance of wide instrumentation for an exploratory approach to experiments that includes concept formation also pertains to mathematics. Digital Assistance By digital assistance, we mean the use of: (a) integrated mathematical software such as Maple and Mathematica, or indeed Matlab and their open-source variants. (b) specialized packages such as CPLEX, PARI, SnapPea, Cinderella, and MAGMA. (c) general-purpose programming languages such as C, C++, and Fortran-2000. (d) Internet-based applications such as Sloane’s Encyclopedia of Integer Sequences, the Inverse Symbolic Calculator,1 Fractal Explorer, 1 Most of the functionality of the ISC, which is now housed at http://isc.carma.newcastle.edu.au/, is now built into the “identify” function of Maple starting with version 9.5. For example, the Maple √ command identify(4.45033263602792) returns 3 + e, meaning√that the decimal value given is simply approximated by 3 + e. In consequence, boundaries between mathematics and the natural sciences and between inductive and deductive reasoning are blurred and becoming more so. (See also [2].) This convergence also promises some relief from the frustration many mathematicians experience when attempting to describe their proposed methodology on grant applications to the satisfaction of traditional hard scientists. We leave unanswered the philosophically vexing if mathematically minor question as to whether genuine mathematical experiments (as discussed in [10]) truly exist, even if one embraces a fully idealist notion of mathematical existence. 
It surely seems to us that they do.

[2] A cross-section of Internet-based mathematical resources is available at http://carma.newcastle.edu.au/portal/ and http://www.experimentalmath.info.

Pi, Partitions, and Primes

The present authors cannot now imagine doing mathematics without a computer nearby. For example, characteristic and minimal polynomials, which were entirely abstract for us as students, now are members of a rapidly growing box of concrete symbolic tools. One's eyes may glaze over trying to determine structure in an infinite family of matrices, including the members M4 and M6, but a command-line instruction in a computer algebra system will reveal that both M4^3 − 3M4 − 2I = 0 and M6^3 − 3M6 − 2I = 0. Likewise, more and more matrix manipulations are profitably, even necessarily, viewed graphically. As is now well known in numerical linear algebra, graphical tools are essential when trying to discern qualitative information such as the block structure of very large matrices. See, for instance, Figure 1.

Figure 1. Plots of a 25 x 25 Hilbert matrix (L) and a matrix with 50% sparsity and random [0,1] entries (R).

Equally accessible are many matrix decompositions, the use of Groebner bases, Risch's decision algorithm (to decide when an elementary function has an elementary indefinite integral), graph and group catalogues, and others. Many algorithmic components of a computer algebra system are today extraordinarily effective compared with two decades ago, when they were more like toys. This is equally true of extreme-precision calculation—a prerequisite for much of our own work [8, 11, 9]. As we will illustrate, during the three decades that we have seriously tried to integrate computational experiments into research, we have experienced at least twelve Moore's law doublings of computer power and memory capacity [10, 13], which, when combined with the utilization of highly parallel clusters (with thousands of processing cores) and fiber-optic networking, has resulted in six to seven orders of magnitude speedup for many operations.

The Partition Function

Consider the number of additive partitions, p(n), of a natural number, where we ignore order and zeroes. For instance, 5 = 4 + 1 = 3 + 2 = 3 + 1 + 1 = 2 + 2 + 1 = 2 + 1 + 1 + 1 = 1 + 1 + 1 + 1 + 1, so p(5) = 7. The ordinary generating function (5) discovered by Euler is

(5)   \sum_{n=0}^{\infty} p(n)\, q^n = \prod_{k=1}^{\infty} \left(1 - q^k\right)^{-1}.

(This can be proven by using the geometric formula for 1/(1 − q^k) to expand each term and observing how powers of q^n occur.) The famous computation by MacMahon of p(200) = 3972999029388 at the beginning of the twentieth century, done symbolically and entirely naively from (5) on a reasonable laptop, took 20 minutes in 1991 but only 0.17 seconds today, while the many times more demanding p(2000) = 4720819175619413888601432406799959512200344166 took just two minutes in 2009. Moreover, in December 2008, Crandall was able to calculate p(10^9) in three seconds on his laptop, using the Hardy-Ramanujan-Rademacher "finite" series for p(n) along with FFT methods. Using these techniques, Crandall was also able to calculate the probable primes p(1000046356) and p(1000007396), each of which has roughly 35,000 decimal digits. Such results make one wonder when easy access to computation discourages innovation: Would Hardy and Ramanujan have still discovered their marvelous formula for p(n) if they had powerful computers at hand?

Quartic Algorithm for π

Likewise, the record for computation of π has gone from 29.37 million decimal digits in 1986 to over 5 trillion digits in 2010. Since the algorithm below was used as part of each computation, it is interesting to compare the performance in each case. Set a_0 := 6 − 4√2 and y_0 := √2 − 1, then iterate

(6)   y_{k+1} = \frac{1 - (1 - y_k^4)^{1/4}}{1 + (1 - y_k^4)^{1/4}}, \qquad a_{k+1} = a_k (1 + y_{k+1})^4 - 2^{2k+3}\, y_{k+1} (1 + y_{k+1} + y_{k+1}^2).
Since the algorithm below was used as part of each computation, it is interesting to compare in each √ the performance √ case: Set a0 := 6 − 4 2 and y0 := 2 − 1, then iterate yk+1 = 1 − (1 − yk4 )1/4 1 + (1 − yk4 )1/4 2 ak+1 = ak (1 + yk+1 ) − 22k+3 yk+1 (1 + yk+1 + yk+1 ). 4 Then ak converges quartically to 1/π —each iteration approximately quadruples the number of correct digits. Twenty-one full-precision iterations of (6), which were discovered on a 16K Radio Shack portable in 1983, produce an algebraic number that coincides with π to well Notices of the AMS and Plouffe. Additional details are given at http://www.numberworld.org/misc_ runs/pi-5t/announce_en.html. See also [6]. These digits appear to be “very normal”. Figure 2. Plot of π calculations, in digits (dots), compared with the long-term slope of Moore’s law (line). more than 6 trillion places. This scheme and the 1976 Salamin-Brent scheme [10, Ch. 3] have been employed frequently over the past quarter century. Here is a highly abbreviated chronology (based on http://en.wikipedia.org/wiki/ Chronology_of_computation_of_pi) : • 1986: Computing 29.4 million digits required 28 hours on one CPU of the new Cray-2 at NASA Ames Research Center, using (6). Confirmation using another algorithm took 40 hours. This computation uncovered hardware and software errors on the Cray-2. Success required developing faster FFTs [10, Ch. 3]. • January 2009: Computing 1.649 trillion digits using (6 ) required 73.5 hours on 1024 cores (and 6.348 Tbyte memory) of a Appro Xtreme-X3 system. This was checked with a computation via the Salamin-Brent scheme that took 64.2 hours and 6.732 Tbyte of main memory. The two computations differed only in the last 139 places. • April 2009: Takahashi increased his record to an amazing 2.576 trillion digits. • December 2009: Bellard computed nearly 2.7 trillion decimal digits of π (first in binary), using the Chudnovsky series given below. 
This took 131 days, but he later used only a single four-core workstation with lots of disk storage and even more human intelligence! • August 2010: Kondo and Yee computed 5 trillion decimal digits using the same formula (14) due to the Chudnovskys. This was first done in binary, then converted to decimal. The binary digits were confirmed by computing 32 hexadecimal digits of π ending with position 4,152,410,118,610, using BBP-type formulas for π due to Daniel Shanks, who in 1961 computed π to over 100,000 digits, once told Phil Davis that a billion-digit computation would be “forever impossible”. But both Kanada and the Chudnovskys achieved that in 1989. Similarly, the intuitionists Brouwer and Heyting asserted the “impossibility” of ever knowing whether the sequence 0123456789 appears in the decimal expansion of π , yet it was found in 1997 by Kanada, beginning at position 17387594880. As late as 1989, Roger Penrose ventured in the first edition of his book The Emperor’s New Mind that we likely will never know if a string of ten consecutive sevens occurs in the decimal expansion of π . This string was found in 1997 by Kanada, beginning at position 22869046249. Figure 2 shows the progress of π calculations since 1970, superimposed with a line that charts the long-term trend of Moore’s law. It is worth noting that whereas progress in computing π exceeded Moore’s law in the 1990s, it has lagged behind Moore’s law in the past decade. This may be due in part to the fact that π programs can no longer employ system-wide fast Fourier transforms for multiplication (since most state-ofthe-art supercomputers have insufficient network bandwidth), and so less efficient hybrid schemes must be used instead. Digital Integrity: II. There are many possible sources of errors in these and other large-scale computations: • The underlying formulas used might conceivably be in error. 
• Computer programs implementing these algorithms, which employ sophisticated algorithms such as fast Fourier transforms to accelerate multiplication, are prone to human programming errors. • These computations usually are performed on highly parallel computer systems, which require error-prone programming constructs to control parallel processing. • Hardware errors may occur. This was a factor in the 1986 computation of π , as noted above. So why would anyone believe the results of such calculations? The answer is that such calculations are always double-checked with an independent calculation done using some other algorithm, sometimes in more than one way. For instance, Kanada’s 2002 computation of π to 1.3 trillion decimal digits involved first computing slightly over one trillion hexadecimal (base-16) digits. He Notices of the AMS Volume 58, Number 10 found that the 20 hex digits of π beginning at position 1012 + 1 are B4466E8D21 5388C4E014. Kanada then calculated these hex digits using the “BBP” algorithm [7]. The BBP algorithm for π is based on the formula ∞ X 4 1 2 1 1 − − − , (7) π = 16i 8i + 1 8i + 4 8i + 5 8i + 6 i=0 which was discovered using the “PSLQ” integer relation algorithm [16]. Integer relation methods find or exclude potential rational relations between vectors of real numbers. At the start of this millennium, they were named one of the top ten algorithms of the twentieth century by Computing in Science and Engineering. The most effective is Helaman Ferguson’s PSLQ algorithm [10, 3]. Eventually PSLQ produced the formula ! 1 1 1, 14 −1 (8) π = 4 2 F1 −log 5, 5 − 4 +2 tan 2 4 ! 1, 41 1 where 2 F1 − 5 4 = 0.955933837 . . . is a 4 Gaussian hypergeometric function. From (8), the series (7) almost immediately follows. 
The BBP algorithm, which is based on (7), permits one to calculate binary or hexadecimal digits of π beginning at an arbitrary starting point, without needing to calculate any of the preceding digits, by means of a simple scheme that does not require very high precision arithmetic. The result of the BBP calculation was B4466E8D21 5388C4E014. Needless to say, in spite of the many potential sources of error in both computations, the final results dramatically agree, thus confirming (in a convincing but heuristic sense) that both results are almost certainly correct. Although one cannot rigorously assign a "probability" to this event, note that the chance that two random strings of 20 hex digits perfectly agree is one in 16^20 ≈ 1.2089 × 10^24. This raises the following question: What is more securely established, the assertion that the hex digits of π in positions 10^12 + 1 through 10^12 + 20 are B4466E8D21 5388C4E014, or the final result of some very difficult work of mathematics that required hundreds or thousands of pages, that relied on many results quoted from other sources, and that (as is frequently the case) only a relative handful of mathematicians besides the author can or have carefully read in detail? In the most recent computation using the BBP formula, Tse-Wo Zse of Yahoo! Cloud Computing calculated 256 binary digits of π starting at the two quadrillionth bit [30]. He then checked his result using the following variant of the BBP formula due
Euler's Totient Function φ

As another measure of what changes over time and what does not, consider two conjectures regarding φ(n), which counts the number of positive numbers less than and relatively prime to n:

Giuga's Conjecture (1950). An integer n > 1 is a prime if and only if

G_n := \sum_{k=1}^{n-1} k^{n-1} \equiv n - 1 \pmod{n}.

Counterexamples are necessarily Carmichael numbers (rare birds only proven infinite in 1994) and much more. In [11, p. 227] we exploited the fact that if a number n = p_1 · · · p_m with m > 1 prime factors p_i is a counterexample to Giuga's conjecture (that is, satisfies G_n ≡ n − 1 mod n), then for i ≠ j we have p_i ≠ p_j, the reciprocals satisfy \sum_{i=1}^{m} 1/p_i > 1, and the p_i form a normal sequence: p_i ≢ 1 (mod p_j) for i ≠ j. Thus the presence of 3 excludes 7, 13, 19, 31, 37, ..., and of 5 excludes 11, 31, 41, ....

This theorem yielded enough structure, using some predictive experimentally discovered heuristics, to build an efficient algorithm to show, over several months in 1995, that any counterexample had at least 3459 prime factors and so exceeded 10^13886, extended a few years later to 10^14164 in a five-day desktop computation. The heuristic is self-validating every time that the program runs successfully. But this method necessarily fails after 8135 primes; someday we hope to exhaust its use.

While writing this piece, one of us was able to obtain almost as good a bound of 3050 primes in under 110 minutes on a laptop computer, and a bound of 3486 primes and 14000 digits in less than fourteen hours; this was extended to 3678 primes and 17168 digits in ninety-three CPU-hours on a Macintosh Pro, using Maple rather than C++, which is often orders of magnitude faster but requires much more arduous coding.

An equally hard related conjecture for which much less progress can be recorded is:

Lehmer's Conjecture (1932). φ(n) | (n − 1) if and only if n is prime.
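Giuga's criterion is cheap to test on a small range. By Fermat's little theorem every prime p satisfies G_p ≡ p − 1 (mod p), since each of the p − 1 summands is ≡ 1; no composite with this property is known below the enormous bounds above. A minimal sketch (names ours):

```python
# Sanity check of Giuga's criterion on a small range (our own illustration).
# By Fermat's little theorem, G_p = sum_{k<p} k^(p-1) = (p-1)*1 = p-1 (mod p)
# for every prime p; no composite n with G_n = n-1 (mod n) is known.
def giuga_sum(n):
    return sum(pow(k, n - 1, n) for k in range(1, n)) % n

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# For every n in [2, 400), the criterion agrees exactly with primality.
criterion_matches = all((giuga_sum(n) == n - 1) == is_prime(n) for n in range(2, 400))
```

The three-argument pow performs modular exponentiation, which is what makes the sum feasible even when n − 1 is large.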
He called this "as hard as the existence of odd perfect numbers." Again, prime factors of counterexamples form a normal sequence, but now there is little extra structure. In a 1997 Simon Fraser M.Sc. thesis, Erick Wong verified the conjecture for fourteen primes, using normality and a mix of PARI, C++, and Maple to press the bounds of the "curse of exponentiality". This very clever computation subsumed the entire scattered literature in one computation but could extend the prior bound only from thirteen primes to fourteen. For Lehmer's related 1932 question, when does φ(n) | (n + 1)?, Wong showed that there are eight solutions with no more than seven factors (six-factor solutions are due to Lehmer). Let

L_m := \prod_{k=0}^{m-1} F_k,

with F_n := 2^{2^n} + 1 denoting the Fermat primes. The solutions are 2, L_1, L_2, ..., L_5, and the rogue pair 4919055 and 6992962672132095, but analyzing just eight factors seems out of sight. Thus in seventy years the computer allowed the exclusion bound to grow by only one prime.

Lehmer could not factor 6992962672132097 in 1932. If it had been prime, a ninth solution would exist: since φ(n) | (n + 1) with n + 2 prime implies that N := n(n + 2) satisfies φ(N) | (N + 1). We say could not because the number is divisible by 73, which Lehmer (a father of much factorization literature) could certainly have discovered had he anticipated a small factor. Today discovering that 6992962672132097 = 73 · 95794009207289 is nearly instantaneous, while fully resolving Lehmer's original question remains as hard as ever.

Inverse Computation and Apéry-like Series

Three intriguing formulae for the Riemann zeta function are

(10)  (a) ζ(2) = 3 \sum_{k=1}^{\infty} \frac{1}{k^2 \binom{2k}{k}},  (b) ζ(3) = \frac{5}{2} \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k^3 \binom{2k}{k}},  (c) ζ(4) = \frac{36}{17} \sum_{k=1}^{\infty} \frac{1}{k^4 \binom{2k}{k}}.

Binomial identity (10)(a) has been known for two centuries, whereas (b), exploited by Apéry in his 1978 proof of the irrationality of ζ(3), was discovered as early as 1890 by Markov, and (c) was noted by Comtet [3].

Using integer relation algorithms, bootstrapping, and the "Pade" function (Mathematica and Maple both produce rational approximations well), in 1996 David Bradley and one of us [3, 11] found the following unanticipated generating function for ζ(4n + 3):

(11)  \sum_{k=0}^{\infty} ζ(4k+3)\, x^{4k} = \frac{5}{2} \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k^3 \binom{2k}{k}} \, \frac{1}{1 - x^4/k^4} \prod_{m=1}^{k-1} \frac{1 + 4x^4/m^4}{1 - x^4/m^4}.

Note that this formula permits one to read off an infinity of formulas for ζ(4n + 3), n > 0, beginning with (10)(b), by comparing coefficients of x^{4k} on the LHS and the RHS.

A decade later, following a quite analogous but much more deliberate experimental procedure, as detailed in [3], we were able to discover a similar general formula for ζ(2n + 2) that is pleasingly parallel to (11):

(12)  \sum_{k=0}^{\infty} ζ(2k+2)\, x^{2k} = 3 \sum_{k=1}^{\infty} \frac{1}{k^2 \binom{2k}{k}} \, \frac{1}{1 - x^2/k^2} \prod_{m=1}^{k-1} \frac{1 - 4x^2/m^2}{1 - x^2/m^2}.

As with (11), one can now read off an infinity of formulas, beginning with (10)(a). In 1996 the authors could reduce (11) to a finite form that they could not prove, but Almkvist and Granville did a year later. A decade later, the Wilf-Zeilberger algorithm [29, 23], for which the inventors were awarded the Steele Prize, directly (as implemented in Maple) certified (12) [10, 3]. In other words, (12) was both discovered and proven by computer. We found a comparable generating function for ζ(2n + 4), giving (10)(c) when x = 0, but one for ζ(4n + 1) still eludes us.

Reciprocal Series for π

Truly novel series for 1/π, based on elliptic integrals, were discovered by Ramanujan around 1910 [3, 10, 31]. One is:

(13)  \frac{1}{\pi} = \frac{2\sqrt{2}}{9801} \sum_{k=0}^{\infty} \frac{(4k)!\,(1103 + 26390k)}{(k!)^4 \, 396^{4k}}.

Each term of (13) adds eight correct digits. Gosper used (13) for the computation of a then-record 17 million digits of π in 1985, thereby completing the first proof of (13) [10, Ch. 3]. Shortly thereafter, David and Gregory Chudnovsky found the following variant, which lies in the quadratic number field Q(√−163) rather than Q(√58):

(14)  \frac{1}{\pi} = 12 \sum_{k=0}^{\infty} \frac{(-1)^k (6k)!\,(13591409 + 545140134k)}{(3k)!\,(k!)^3 \, 640320^{3k+3/2}}.

Each term of (14) adds fourteen correct digits. The brothers used this formula several times, culminating in a 1994 calculation of π to over four billion decimal digits. Their remarkable story was told in a prizewinning New Yorker article [26]. Remarkably, as we already noted earlier, (14) was used again in late 2009 for the current record computation of π.

Wilf-Zeilberger at Work. A few years ago Jesús Guillera found various Ramanujan-like identities for π, using integer relation methods. The three most basic, and entirely rational, identities are:

(15)  \frac{4}{\pi^2} = \sum_{n=0}^{\infty} (-1)^n r(n)^5 \,(13 + 180n + 820n^2) \left(\frac{1}{32}\right)^{2n+1},

(16)  \frac{4}{\pi^2} = \sum_{n=0}^{\infty} (-1)^n r(n)^5 \,(1 + 8n + 20n^2) \left(\frac{1}{2}\right)^{2n+1},

(17)  \frac{4}{\pi^3} \stackrel{?}{=} \sum_{n=0}^{\infty} r(n)^7 \,(1 + 14n + 76n^2 + 168n^3) \left(\frac{1}{8}\right)^{2n+1},

where r(n) := (1/2 · 3/2 · · · · · (2n − 1)/2)/n!.

Guillera proved (15) and (16) in tandem, by very ingeniously using the Wilf-Zeilberger algorithm [29, 23] for formally proving hypergeometric-like identities [10, 3, 19, 31]. No other proof is known, and there seem to be no like formulae for 1/π^N with N ≥ 4. The third, (17), is almost certainly true. Guillera ascribes (17) to Gourevich, who used integer relation methods to find it. We were able to "discover" (17) using thirty-digit arithmetic, and we checked it to five hundred digits in 10 seconds, to twelve hundred digits in 6.25 minutes, and to fifteen hundred digits in 25 minutes, all with naive command-line instructions in Maple. But it has no proof, nor does anyone have an inkling of how to prove it; especially, as experiment suggests, since it has no "mate" in analogy to (15) and (16) [3]. Our intuition is that if a proof exists, it is more a verification than an explication, and so we stopped looking. We are happy just to "know" that the beautiful identity is true (although it would be more remarkable were it eventually to fail). It may be true for no good reason; it might just have no proof and be a very concrete Gödel-like statement.

In 2008 Guillera [19] produced another lovely pair of third-millennium identities, discovered with integer relation methods and proved with creative telescoping, this time for π^2 rather than its reciprocal. They are

(18)  \sum_{n=0}^{\infty} \frac{(x + 1/2)_n^3}{2^{2n}\,(x+1)_n^3} \,(6(n+x) + 1) = 8x \sum_{n=0}^{\infty} \frac{(1/2)_n^2}{(x+1)_n^2}

and

(19)  \sum_{n=0}^{\infty} \frac{(x + 1/2)_n^3}{2^{6n}\,(x+1)_n^3} \,(42(n+x) + 5) = 32x \sum_{n=0}^{\infty} \frac{(x + 1/2)_n^2}{(2x+1)_n^2}.

Here (a)_n = a(a + 1) · · · (a + n − 1) is the rising factorial. Substituting x = 1/2 in (18) and (19), he obtained, respectively, the formulae

\sum_{n=0}^{\infty} \frac{(1)_n^3}{2^{2n}\,(3/2)_n^3} \,(3n + 2) = \frac{\pi^2}{4},

\sum_{n=0}^{\infty} \frac{(1)_n^3}{2^{6n}\,(3/2)_n^3} \,(21n + 13) = \frac{4\pi^2}{3}.

Formal Verification of Proof

In 1611 Kepler described the stacking of equal-sized spheres into the familiar arrangement we see for oranges in the grocery store. He asserted that this packing is the tightest possible. This assertion is now known as the Kepler conjecture and has persisted for centuries without rigorous proof. Hilbert implicitly included the irregular case of the Kepler conjecture in problem 18 of his famous list of unsolved problems in 1900 (whether there exist nonregular space-filling polyhedra?), the regular case having been disposed of by Gauss in 1831.

In 1994 Thomas Hales, now at the University of Pittsburgh, proposed a five-step program that would result in a proof: (a) treat maps that only have triangular faces; (b) show that the face-centered cubic and hexagonal-close packings are local maxima in the strong sense that they have a higher score than any Delaunay star with the same graph; (c) treat maps that contain only triangular and quadrilateral faces (except the pentagonal prism); (d) treat maps that contain something other than a triangular or quadrilateral face; and (e) treat pentagonal prisms.
In 1998 Hales announced that the program was now complete, with Samuel Ferguson (son of mathematician-sculptor Helaman Ferguson) completing the crucial fifth step. This project involved extensive computation, using an interval arithmetic package, a graph generator, and Mathematica. The computer files containing the source code and computational results occupy more than three Gbytes of disk space. Additional details, including papers, are available at http://www.math.pitt.edu/~thales/kepler98. For a mixture of reasons, some more defensible than others, the Annals of Mathematics initially decided to publish Hales's paper with a cautionary note, but this disclaimer was deleted before final publication.

Hales [20] has now embarked on a multiyear program to certify the proof by means of computer-based formal methods, a project he has named the "Flyspeck" project. As these techniques become better understood, we can envision a large number of mathematical results eventually being confirmed by computer, as instanced by other articles in the same issue of the Notices as Hales's article.

Limits of Computation

A remarkable example is the following:

(20)  \int_0^{\infty} \cos(2x) \prod_{n=1}^{\infty} \cos(x/n)\, dx = 0.392699081698724154807830422909937860524645434187231595926...

The computation of this integral to high precision can be performed using a scheme described in [5]. When we first did this computation, we thought that the result was π/8, but upon careful checking with the numerical value

π/8 = 0.392699081698724154807830422909937860524646174921888227621...,

it is clear that the two values disagree beginning with the forty-third digit!

Richard Crandall [15, §7.3] later explained this mystery. Via a physically motivated analysis of running out of fuel random walks, he showed that π/8 is given by the following very rapidly convergent series expansion, of which formula (20) above is merely the first term:

(21)  \frac{\pi}{8} = \sum_{m=0}^{\infty} \int_0^{\infty} \cos[2(2m+1)x] \prod_{n=1}^{\infty} \cos(x/n)\, dx.

Two terms of the series above suffice for 500-digit agreement.

As a final sobering example, we offer the following "sophomore's dream" identity

(22)  σ_29 := \sum_{n=-\infty}^{\infty} \mathrm{sinc}(n)\,\mathrm{sinc}(n/3)\,\mathrm{sinc}(n/5) \cdots \mathrm{sinc}(n/23)\,\mathrm{sinc}(n/29)

(23)  = \int_{-\infty}^{\infty} \mathrm{sinc}(x)\,\mathrm{sinc}(x/3)\,\mathrm{sinc}(x/5) \cdots \mathrm{sinc}(x/23)\,\mathrm{sinc}(x/29)\, dx,

where the denominators range over the odd primes, which was first discovered empirically. More generally, consider

(24)  σ_p := \sum_{n=-\infty}^{\infty} \mathrm{sinc}(n)\,\mathrm{sinc}(n/3)\,\mathrm{sinc}(n/5)\,\mathrm{sinc}(n/7) \cdots \mathrm{sinc}(n/p) \stackrel{?}{=} \int_{-\infty}^{\infty} \mathrm{sinc}(x)\,\mathrm{sinc}(x/3)\,\mathrm{sinc}(x/5)\,\mathrm{sinc}(x/7) \cdots \mathrm{sinc}(x/p)\, dx.

Provably, the following is true: The "sum equals integral" identity for σ_p remains valid at least for p among the first 10^176 primes but stops holding after some larger prime, and thereafter the "sum less the integral" is strictly positive, but they always differ by much less than one part in a googolplex = 10^{10^{100}}. An even stronger estimate is possible assuming the generalized Riemann hypothesis (see [15, §7] and [8]).

Concluding Remarks

The central issues of how to view experimentally discovered results have been discussed before. In 1993 Arthur Jaffe and Frank Quinn warned of the proliferation of not-fully-rigorous mathematical results and proposed a framework for a "healthy and positive" role for "speculative" mathematics [21]. Numerous well-known mathematicians responded [1]. Morris Hirsch, for instance, countered that even Gauss published incomplete proofs, and the fifteen thousand combined pages of the proof of the classification of finite groups raises questions as to when we should certify a result. He suggested that we attach a label to each proof, e.g., "computer-aided", "mass collaboration", "constructive", etc. Saunders Mac Lane quipped that "we are not saved by faith alone, but by faith and works," meaning that we need both intuitive work and precision.

At the same time, computational tools now offer remarkable facilities to confirm analytically established results, as in the tools in development to check identities in equation-rich manuscripts, and in Hales's project to establish the Kepler conjecture by formal methods.

The flood of information and tools in our information-soaked world is unlikely to abate. We have to learn and teach judgment when it comes to using what is possible digitally. This means mastering the sorts of techniques we have illustrated and having some idea why a software system does what it does. It requires knowing when a computation is or can, in principle or practice, be made into a rigorous proof and when it is only compelling evidence or is entirely misleading. For instance, even the best commercial linear programming packages of the sort used by Hales will not certify any solution, though the codes are almost assuredly correct. It requires rearranging hierarchies of what we view as hard and as easy. It also requires developing a curriculum that carefully teaches experimental computer-assisted mathematics. Some efforts along this line are already under way by individuals including Marc Chamberland at Grinnell (http://www.math.grin.edu/~chamberl/courses/MAT444/syllabus.html), Victor Moll at Tulane, Jan de Gier in Melbourne, and Ole Warnaar at the University of Queensland.

Judith Grabiner has noted that a large impetus for the development of modern rigor in mathematics came with the Napoleonic introduction of regular courses: lectures and textbooks force a precision and a codification that apprenticeship obviates. But it will never be the case that quasi-inductive mathematics supplants proof. We need to find a new equilibrium. That said, we are only beginning to tap new ways to enrich mathematics. As Jacques Hadamard said [25]: "The object of mathematical rigor is to sanction and legitimize the conquests of intuition, and there was never any other object for it." Never have we had such a cornucopia of ways to generate intuition.
The challenge is to learn how to harness them, how to develop and how to transmit the necessary theory and practice. The Priority Research Centre for Computer Assisted Research Mathematics and its Applications (CARMA), http://carma.newcastle.edu.au/, which one of us directs, hopes to play a lead role in this endeavor: an endeavor which in our view encompasses an exciting mix of exploratory experimentation and rigorous proof.

References

[1] M. Atiyah, et al., Responses to "Theoretical mathematics: Toward a cultural synthesis of mathematics and theoretical physics", by A. Jaffe and F. Quinn, Bulletin of the American Mathematical Society 30, no. 2 (1994), 178-207.
[2] J. Avigad, Computers in mathematical inquiry, in The Philosophy of Mathematical Practice (P. Mancuso, ed.), Oxford University Press, 302-316.
[3] D. Bailey, J. Borwein, N. Calkin, R. Girgensohn, R. Luke, and V. Moll, Experimental Mathematics in Action, A K Peters, Natick, MA, 2007.
[4] D. H. Bailey and J. M. Borwein, Computer-assisted discovery and proof, Tapas in Experimental Mathematics, 21-52, in Contemporary Mathematics, vol. 457, American Mathematical Society, Providence, RI, 2008.
[5] D. H. Bailey, J. M. Borwein, V. Kapoor, and E. Weisstein, Ten problems in experimental mathematics, American Mathematical Monthly 113, no. 6 (2006), 481-509.
[6] D. H. Bailey, J. M. Borwein, A. Mattingly, and G. Wightwick, The computation of previously inaccessible digits of π² and Catalan's constant, Notices of the American Mathematical Society, to appear.
[7] D. H. Bailey, P. B. Borwein, and S. Plouffe, On the rapid computation of various polylogarithmic constants, Mathematics of Computation 66, no. 218 (1997), 903-913.
[8] R. Baillie, D. Borwein, and J. Borwein, Some sinc sums and integrals, American Math. Monthly 115 (2008), no. 10, 888-901.
[9] J. M. Borwein, The SIAM 100 digits challenge, Extended review in the Mathematical Intelligencer 27 (2005), 40-48.
[10] J. M. Borwein and D. H. Bailey, Mathematics by Experiment: Plausible Reasoning in the 21st Century, extended second edition, A K Peters, Natick, MA, 2008.
[11] J. M. Borwein, D. H. Bailey, and R. Girgensohn, Experimentation in Mathematics: Computational Roads to Discovery, A K Peters, Natick, MA, 2004.
[12] J. M. Borwein, O-Yeat Chan, and R. E. Crandall, Higher-dimensional box integrals, Experimental Mathematics, January 2010.
[13] J. M. Borwein and K. Devlin, The Computer as Crucible, A K Peters, Natick, MA, 2008.
[14] J. M. Borwein, I. J. Zucker, and J. Boersma, The evaluation of character Euler double sums, Ramanujan Journal 15 (2008), 377-405.
[15] R. E. Crandall, Theory of ROOF walks, 2007, available at http://www.perfscipress.com/papers/ROOF11_psipress.pdf.
[16] H. R. P. Ferguson, D. H. Bailey, and S. Arno, Analysis of PSLQ, an integer relation finding algorithm, Mathematics of Computation 68, no. 225 (1999), 351-369.
[17] L. R. Franklin, Exploratory experiments, Philosophy of Science 72 (2005), 888-899.
[18] M. Giaquinto, Visual Thinking in Mathematics: An Epistemological Study, Oxford University Press, New York, 2007.
[19] J. Guillera, Hypergeometric identities for 10 extended Ramanujan-type series, Ramanujan Journal 15 (2008), 219-234.
[20] T. C. Hales, Formal proof, Notices of the AMS 55, no. 11 (2008), 1370-1380.
[21] A. Jaffe and F. Quinn, Theoretical mathematics: Toward a cultural synthesis of mathematics and theoretical physics, Bulletin of the American Mathematical Society 29, no. 1 (1993), 1-13.
[22] M. Livio, Is God a Mathematician?, Simon and Schuster, New York, 2009.
[23] M. Petkovsek, H. S. Wilf, and D. Zeilberger, A = B, A K Peters, Natick, MA, 1996.
[24] S. Pinker, The Stuff of Thought: Language as a Window into Human Nature, Allen Lane, New York, 2007.
[25] G. Pólya, Mathematical Discovery: On Understanding, Learning, and Teaching Problem Solving (combined edition), John Wiley and Sons, New York, 1981.
[26] R. Preston, The mountains of pi, New Yorker, 2 Mar 1992, http://www.newyorker.com/archive/content/articles/050411fr_archive01.
[27] D. L. Smail, On Deep History and the Brain, Caravan Books, University of California Press, Berkeley, CA, 2008.
[28] H. K. Sørenson, Exploratory experimentation in experimental mathematics: A glimpse at the PSLQ algorithm, Philosophy of Mathematics: Sociological Aspects and Mathematical Practice, in press.
[29] H. S. Wilf and D. Zeilberger, Rational functions certify combinatorial identities, Journal of the American Mathematical Society 3 (1990), 147-158.
[30] Tse-Wo Zse, personal communication to the authors, July 2010.
[31] W. Zudilin, Ramanujan-type formulae for 1/π: A second wind, 19 May 2008, available at http://
Computer Science

Academic Year 2018/2019 - 1st Year

Teaching Staff: Maria Serafina MADONIA
Credit Value:
Taught classes: 36 hours
Term / Semester:

Learning Objectives

Knowledge and understanding: students will acquire knowledge of some of the most important formal theories that are fundamental to Informatics. They will understand how all aspects of applied Informatics have been realized or influenced by knowledge developed at a theoretical level.

Applying knowledge and understanding: students will acquire the ability to apply theoretical notions in applicative contexts.

Making judgements: students will be encouraged to work out independently which aspects of theoretical computer science are used in the topics covered by the more applied courses they follow in the same year. They will also be encouraged to see how topics from other courses could be formalized in mathematical logic.

Communication skills: students will acquire the communication skills and expressive ability needed to present scientific arguments in a formal, unambiguous way.

Learning skills: students will acquire the competence to tackle independently the study of theoretical topics when they are formally described.

Course Structure

Each lesson is divided into two parts. The first (about one third of the time) is devoted to solving exercises and clarifying unclear topics from previous lessons. The second part is devoted to the explanation of new topics.

Detailed Course Content

Elements of the theory of formal languages:
• Alphabet, string, language. Operations on languages. Regular expressions. Cardinality of languages.
• Chomsky grammars. Type 0, 1, 2, 3 grammars. Chomsky hierarchy. Backus normal form.
• What does it mean "to compute"?
• Recognition and decision of languages. Automata.
• Finite state automata, deterministic and nondeterministic.
• Pumping lemma for FSA.
• Context-free languages: a hint.
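As a small illustration of the finite-automata material listed above (the encoding, names, and example are our own, not part of the course): a deterministic finite automaton over the alphabet {0, 1} accepting exactly the strings with an even number of 1s, a classic regular language.

```python
# A DFA as a transition table: states "even"/"odd" track the parity of 1s seen.
DFA = {
    "start": "even",
    "accept": {"even"},
    "delta": {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    },
}

def dfa_accepts(dfa, word):
    # Run the automaton over the word and test membership in the accept set.
    state = dfa["start"]
    for symbol in word:
        state = dfa["delta"][(state, symbol)]
    return state in dfa["accept"]
```

The empty string is accepted (zero is even), which is exactly what running the loop zero times gives: the start state is already accepting.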
Computational models and computability theory:
• Turing machines and the universal Turing machine.
• Introduction to functional programming and the lambda-calculus.
• Free and bound variables, alpha-conversion, substitutions, beta-reduction. Definition of formal system. Church numerals. Lambda-definable functions.
• Lambda-definability of recursive functions. Uniqueness of normal form. Consistency of the beta-conversion theory.
• The formalism of primitive recursive functions and partial functions.
• Informal introduction to recursion theory and some fundamental results.
• A logic-based computational model: a sketchy introduction to logic programming.

Codes and representation of numerical information:
• Codes and two's-complement representation of integers.
• Strings vs numbers.

Abstract machines:
• Abstract machine definition.
• Implementation of abstract machines; layered organization of computation systems.

• Formal systems. Admissible and derivable rules. Some properties of formal systems. Consistency.
• Propositional logic: definition and main properties. Deduction theorem.
• Semantics of propositional logic. Soundness and completeness.
• Natural deduction for propositional logic.
• The proofs-as-programs correspondence.
• First-order logic: language and semantics.
• Substitutions, natural deduction, axiomatic systems.
• Statements of fundamental theorems.
• Formalizations of arithmetic and group theory. Statements of some fundamental theorems.
• Induction-recursion correspondence: a hint.

Programming-language semantics:
• Structured operational semantics.

The work of computer scientists in a globalized world.

Textbook Information
Lyapunov's second method for nonautonomous differential equations

Grüne, Lars; Kloeden, Peter E.; Siegmund, Stefan; Wirth, Fabian: Lyapunov's second method for nonautonomous differential equations. In: Discrete and Continuous Dynamical Systems, Vol. 18 (2007), Issue 2-3, pp. 375-403. ISSN 1553-5231. DOI: https://doi.org/10.3934/dcds.2007.18.375

Abstract: Converse Lyapunov theorems are presented for nonautonomous systems modelled as skew product flows. These characterize various types of stability of invariant sets and of pullback, forward and uniform attractors in such nonautonomous systems.
Top tuition reviews from the Mathematically Enlightened (Testimonials) - Jφss Sticks Tuition

Gratitudes & Testimonials

More testimonials in full-colour pics at Miss Loi's Flickr page »

We all know that a good tutor is difficult to come by. That's why I'm writing this testimonial, so parents who are feeling helpless and desperate know where to find help. When I saw my girl's mid-year results, I realized I had to find her a math tutor asap. In my desperate search, I stumbled upon Miss Loi. It was definitely the right choice to engage Miss Loi as my girl's tutor. Competent and dedicated, she was a large factor in the tremendous improvement in my girl's math grades.

Mid-Year Results Before Tuition: E-Math: D7, A-Math: E8
Year-End Results After Tuition For Three Months: E-Math: B3, A-Math: C6
GCE O-Level Results After Tuition For Five Months: E-Math: A2, A-Math: A2

So Hurry! Get Help! Don't Wait! The sooner you do so, the higher the chance of good grades!

Mrs Rose Lee, Evelyn Lee's Mom (CHIJ St. Joseph's Convent)

Hi Miss Loi. I would like to thank you for your help and the effort you have given Shi Hui. She did very well for her A-Maths class test. I'm very happy for her. I can see that she is more confident now, especially when she has previously been suffering from a lack of confidence and kept on telling me that she "can't do it". Now she says "Yes! I can do it!". Thanks and appreciate it!

Emily Chiu, Shi Hui's Mom (St. Nicholas Girls' School)

I am not a well-behaved student, because I either skipped lessons or didn't do my homework, and was late... Sorry, Teacher! But I remember that when I first signed up for tuition here in January, I was a student who understood nothing. Your patient teaching, though, taught me how to do certain kinds of questions. Thank you, Teacher!! (Truly thanks; I'm not good at expressing myself, but everything in this letter is true!) On top of that, your encouragement kept me from giving up (even though I'm a student who really knows how to make careless mistakes!), and your sense of humour made lessons fun, and so on...

Clarissa Yeo, First Toa Payoh Secondary School

I am extremely glad to have gotten tuition from Jφss Sticks. My math grades improved from D7 to A1 over the past 2 years.
I enjoyed the worksheets as they are challenging and exposed me to a variety of questions, enabling me to be more aware of and prepared for the possible questions that may be tested. It is also amazing how Ms Loi can mark my answers at a rate of about 10 seconds at most for each question! 😛

Lim Chern Ern ★, Bishan Park Secondary School

Thank you so much Ms Loi. I'll never forget this one-year experience of learning not only how to do maths but also how to understand everything very well. I learnt that you took individual care and concern over every one of your students. I was able to challenge myself on discipline whenever I did your worksheets too. Those worksheets are very precious to me. I always think that it was a 'good' thing that I failed my maths last year, otherwise I would not have met you. Thanks a lot Ms Loi once again. I feel 'gifted' to get a place in your class. It is really Miss Loi's Temple of Salvation!

Priscilla Isabel, St. Margaret's Secondary School

Time flies ~ It's been about 3 years. I still remember that I was only a young and naive secondary one girl 3 years back. Maths was practically the subject I was weakest in and, of course, I hated it. But thanks to you, it became one of the subjects I love ♥ If it wasn't because of your teachings and enlightenment, I wouldn't be here today, I wouldn't be taking A-Maths, and Maths does not seem THAT difficult anymore. I ♥ your lessons because they help me to score better grades! I ♥ your sour sweets! Thank you, Miss Loi ~ you're someone I can never forget!

Yeo Hui Qing, Manjusri Secondary School
You’re awesome (but less awesome than me 🙂 ) Julian SeahSt. Joseph's Institution It has been a very fruitful and arduous journey during these 4 years I at The Temple. I would like to sincerely thank Miss Loi for all the time and effort she has spent on us students, as well as all the best questions she painstakenly selected for us to do. My standard of Mathematics would not be where it is today if not for you, Miss Loi. I hope that you will continue to nurture, educate and nag [Miss Loi: -.-] at students, because you only do what is the best for them. THANK YOU! Alejandro Zikin FokMaris Stella High School Grades have been improving after tuitions, from C to A for EMaths, and a 4-grade improvement in AMaths, a subject I never passed!!! Miss Loi is like my savior HAHAHA. I’m clearly indebted to you, owe you one! Very grateful and thankful that I have you as my tutor. Thanks so much! XOXO♥ Sofia PutriChij St. Theresa's Convent MISS LOI is the queen of all math tuition! She has saved me from the evils of F9s which I have been getting since I started secondary school. For the 3 months that I have attended her class, I have done more than I thought I was capable of – LIKE PASSING MY PRELIMS. Her hilarious comments that she makes randomly in class, as well as her awesome teaching methods make her lessons something to look forward to. I ♥LOVE♥ MATHS Stephanie TanSt. Nicholas Girls' School Pls be glad that I’m writing this the day before my Lit & Amath exam (joking!) so I am not some ungrateful person. Thank you for spending so much effort & papers on me. You’re the first tuition teacher I ever went to & I’m glad that you accepted me because I’m awed by your ability to handle so many individual tuition at once & your resourcefulness & awesome notes which my friends always snatch away before an exam. I’m not very mathematically inclined (as evident from my 1 x 10^12 careless mistakes), but I’m still very appreciative of your patience in teaching me. 
I think I have definitely improved in my math & Amath (oh ya, I forgot to thank you for teaching me double Math even though I only requested Amath in the beginning, ty!). From the first time I came to the Temple, where I failed the fundamental test horribly, I noticed you are always coughing & rushing to the toilet, so please take care of your health & not overwork so much. Despite how weird these presents might seem, I think they are very beneficial for your health (except the gummy bear, of course). I wish you good luck in your future endeavours! P.S. I have already tried my best to give my most beautiful handwriting so a human can read it. It's 11.30 a.m. now & I have only 15 mins to prepare for my last session!

Li Zhi Tao, Anglican High School

Ms Loi's math tuition classes have been beneficial, without a doubt; evident from the fact that my perpetual U-grade / F9 Math scores improved to a decent pass within a few mere weeks of lessons during my Prelims. Her lessons are very well-structured, and she has a strange ability to make you remember complex formulae within minutes. Her lessons are very effective as she manages to cover all the fundamental principles and crucial stuff WITHOUT FAIL within 2 hours! Her lessons are a virtual savior to the Math-handicapped. MISS LOI ROCKS! "Life is like a coin, you only get to spend it once." 🙂

Sarah Khan, Singapore Chinese Girls' School

Miss Loi has taught me how to love math again. She has exposed me to different types of math questions and widened my knowledge in math. Over the short span of 3 months, my math has improved tremendously and I am capable of doing more questions with much ease. Miss Loi is a great teacher and she is highly regarded and respected among her students. Miss Loi has also sacrificed her daily meals and personal time for the benefit of her students, and she is always willing to go the extra mile to help the weaker students. Miss Loi is indeed a fantastic teacher and friend. She has truly blessed me richly with her brilliant teachings. I'd have to admit, for a math teacher, she's really a gem.

Richa Gill, Paya Lebar Methodist Girls' School (Secondary)

I really have to thank Miss Loi for helping me out in my math and making me realise that I have potential in doing math. I've seen so many math teachers, but the most perfect and positive math teacher of all is Miss Loi. She is very patient and positive. When I don't know how to answer a sum, she will explain a thousand times till I understand. The way Miss Loi teaches makes life easier as she uses many many shortcut methods and smart techniques to understand math topics fast. Ms Loi has been more of a great friend than a teacher as she is understanding and makes sure that I don't give up on math. At the same time, her practice papers and notes have been really helpful. After attending her lessons my interest in maths has increased by a zillion times compared to the rest of my P1-Sec 5 life. Thank You Ms Loi for being a great friend, helping me to a great extent to understand math and being very patient with me. I will really miss you so much after my 'O's. Thanks! Love you … =) All of the credit goes to you Miss Loi!!!
She has truly blessed me richly with her brilliant teachings. I’d have to admit, for a math teacher, she’s really a gem. Richa Gill, Paya Lebar Methodist Girls' School (secondary) I really have to thank Miss Loi for helping me out in my math and making me realise that I have potential in doing math. I’ve seen so many math teachers, but the most perfect and positive math teacher of them all is Miss Loi. She is very patient and positive. When I don’t know how to answer a sum, she will explain a thousand times till I understand. The way Miss Loi teaches makes life easier as she uses many many [DEL:shortcut:DEL] methods and smart techniques to understand math topics fast. Ms Loi has been more of a great friend than a teacher as she is understanding and makes sure that I don’t give up on math. At the same time, her practice papers and notes have been really helpful. After attending her lessons, my interest in maths has increased by a zillion times compared to the rest of my P1-Sec 5 life. Thank You Ms Loi for being a great friend, helping me to a great extent to understand math and being very patient with me. I will really miss you so much after my ‘O’s. Thanks! Love you … =) All of the credit goes to you Miss Loi!!! Nasrin, O-level Private Candidate Dear Ms Loi, Thanks for being so patient with us! It’s so cool how you can spot mistakes immediately & enlighten us! HAHA. Thanks to you my math improved by 5 awesome grades. Yay! Claudie Tan, St. Nicholas Girls' School “He has shown significant improvement in Maths” – this was written by my son’s teacher in his report card for the recently completed end-of-the-year Sec 1 exam. We were fortunate to have Miss Loi take my son under her wings in June this year, and in a matter of four short months, his grade jumped to an A2. His achievement is due, in no small part, to Miss Loi’s excellent teaching efforts and patience.
Apart from her wide experience in teaching the subject, I have also found Miss Loi to be a pleasant, responsible and flexible teacher, and have no qualms in recommending her to parents seeking a good and effective Maths tutor for their children. Thanks, Miss Loi, for working with Daniel. And although Daniel may not say it, I am sure he is grateful to have you as his tutor. Thank you for being a super awesome Math tutor. Without you, I would not have done so well for Math. Thanks for always being patient and caring. Also, thanks for your awesome sense of humour (on your blog) haha, you are funny. I didn’t know you like Korea. I love going there too!! haha! You are really cool^^ Thanks for being patient & caring for all your students (: You are indeed an awesome tutor 😀 and you have helped many people 😀 Thanks for helping my Math improve! from like C5 to A1! I never thought I will do well, but thanks to you, I did! 😀 Hehe continue being awesome ^^. Oh do you listen to kpop? Hehe my husband is in Super Junior! Go listen! Thanks for being the best teacher ever 😀 Love ya 😀 Cheryl the red QK (its a ninja. look the other way :D) Cheryl Tan, Crescent Girls' School Ms Loi is a very good teacher. She makes sure she helps you especially in your weak areas and teaches you fast methods to overcome these maths problems. Ms Loi is also very spot-on (experienced), in that she chooses the correct questions or topics to go through such that you are well-prepared for any test or exam. Her lessons are always enjoyable as she tries to make the lesson interactive so you enjoy what you are learning. Although I used to dislike maths as I never did well in that subject, Ms Loi helped me overcome that dislike and gave me lots of practice that made me more confident and eventually helped me do well in my O’ levels. It’s been a pleasure having her as my tutor. Jessica Aow, Ang Mo Kio Secondary School I first came to Ms Loi barely a month before my preliminary examinations.
I needed a tuition teacher badly, for my Amath and Emath results were unsatisfactory, and I was intending to count these subjects in my L1R5. Initially, I was hesitant about having a tuition teacher as my previous 2 experiences with private tutors had been bad ones. I might as well be honest in saying that I suffer from an inferiority complex, and more often than not, I seek to please others too hard, for I am afraid of what others might say if I don’t live up to their expectations. Because of this, when it comes to private tutors, I find myself hesitant to answer questions or do sums during lessons, for fear of getting the wrong answer and incurring the disapproval of the teacher. With Ms Loi, however, everything was so different. From the very first lesson, she strived to make me feel at ease and even confident with my own ability to do math. Both my Amath and Emath results had dropped to a C6 then, and during lessons, Ms Loi painstakingly went through everything again, sometimes even returning to the basics and working from there. I find her an excellent teacher who knows the syllabus well, for her expertise is evident from the way she is able to accurately spot exam questions and provide students with sufficient practice on popular exam topics. Most importantly, during lessons, any fear I ever had about getting sums wrong was dispelled, for Ms Loi made it so “okay” to make mistakes. I never feared to say the words “I don’t know”. Not once did she launch into a reprimand of “I thought I taught you this before!” or “You should have known this already!”. Without question, she would once again re-teach and give me related questions to do such that I would know the topic intimately. Even when I felt that I did horribly during tests, her encouragements to persevere always served to motivate me. Needless to say, under Ms Loi, I progressed tremendously and in my preliminary examinations a month later, I actually scored A1s for both my Amaths and Emaths.
With these results, my L1R5 was 6 points and I managed to get into Raffles Junior College. I must say that I really have Ms Loi to thank for such excellent results and I do hope I won’t let her down when the O level results are released. Once again, thanks so much Ms Loi! Jacqueline Sia, Paya Lebar Methodist Girls' School (secondary) Before coming to Joss Sticks, Maths was one of my least favourite subjects and one that I constantly failed (for AMath at least). But after being here for many months, I learnt that math is actually a fun subject and that you’re really a great teacher (: I remember the days when I was getting 20+/100 for AMath in my secondary three end-of-year exam last year LOL. But now I improved by like more than 40 marks!! So magical yah! And now whenever I do math problems I will feel quite SHIOK when I get the right answer! So thanks for teaching me Miss Loi, and helping me overcome Math (: Sharmaine Tan, Nan Hua High School When you told me it’s not so easy to get A1, my heart sank. But giving it a second thought, I wouldn’t be where I am today if not for you – I may not even be able to make it to B3! Thank you for being so patient in coaching me and giving me chances to “repent”. You’ve made me realise it’s actually all about effort. I’ve never thought of passing maths in my secondary school life but when I knew I passed the feeling was “shiok!”. Thank you for everything. Lessons spent at The Temple were very relaxing with the free flow of sweets around and solving Maths questions one after another! I’ll definitely miss coming here I swear! And know what? Every time I go home I will feel extremely good, like having undergone some enlightenment! Hehe 😀 Chai Pei En, Compassvale Secondary School My time at Joss Sticks has been extremely productive and enjoyable. Before joining Miss Loi’s Emath and Amath lessons, I was scoring D7-F9 for Amath and around a B4 for Emath. I never passed a single Amath test for the whole of Sec 3.
I hated Amath like crazy and even had thoughts of dropping the subject completely. However, soon after joining tuition, my grades for both Math and Amath started picking up tremendously. I finally felt there was hope for Amath when I passed my first test. Towards the end of the year, I started scoring As for Emath and no longer failed Amath! Thank you Miss Loi for your patience, for convincing me not to drop Amath (phew!), and lastly, for all the help you’ve offered me in terms of salvaging my math grades! 🙂 🙂 🙂 Renee Foo, Singapore Chinese Girls' School Hey Ms Loi, thanks for all your help with my maths 🙂 Before I came, I seriously wanted to give up on my Amaths. Thanks so much for never giving up on me and my maths. You’ve been a great teacher, and even friend 🙂 It was really nice being in your class and having you as my teacher. Once again, thanks for everything Ms Loi, and I wish you all the best in everything you do. I LOVE MATHS! Elena Ong, Nan Hua High School Miss Loi is AWESOME! Before I joined her tuition class, practising maths problems always seemed to be a chore. I would rather study Chinese than study Maths. But after I joined Miss Loi’s tuition, under her guidance, I found my love for maths. Maybe it’s coz of her enthusiasm, or maybe it’s coz her passion for teaching maths is ‘contagious’, that led me to finally change my opinion of Maths. Sometimes when I feel down and start to hate maths again, Miss Loi is always there to help me. My wish would be that Miss Loi will teach maths at the JC level too, so that I won’t have to worry about my maths then! =D *hints to Miss Loi* =D THANK YOU V. V. V. MUCH! Alex Cheong, St. Joseph's Institution Well, it is that time of the year again and we will be back in Singapore for the summer holidays. I am wondering if you could spare time in your busy schedule to coach my son in Math again? He told me last year that you are one of the best teachers he had and he had really learnt a lot from you.
Even though the time was short last year, it did give him a good foundation in his school Math. I am really pleased and glad that we found you even though we were in Beijing at that time – the wonders of the Internet!! Mrs June Foo, June's Son's Mom (Beijing) When I was informed by my sister that I had been enrolled for maths tuition I was really annoyed. Although I knew that I required extra help for my Mathematics, I never liked tuition, as my friends have always complained about their experiences with their freaky tuition teachers and the piles of extra assessment papers which had overloaded them with stress. However I soon started to show much eagerness towards my Maths after I joined the tuition. I gradually started to improve in my Maths, as the tutor, Miss Loi, showed much care and patience towards me and was often kind enough to understand my time constraints at times and granted me extra time to complete my work. I would really like to thank Miss Loi for helping to create in me an interest in Maths! Cheers =) Sharala Gopal, CHIJ St. Joseph's Convent Miss Loi!!! Thanks for being such a good teacher and for helping me get my As! 🙂 Although you always refused to tell us the answer and made us re-do countless questions, I know it’s all for our own good! Haha and thanks for always fetching me to the MRT/home 🙂 Continue to act cool with your convertible HAHA Sharmaine Chan, St. Anthony's Canossian Secondary School Dear Miss Loi, I think you have been a very patient tuition teacher. You have been tolerating my nonsense since Sec 3! You are really really good in Maths (DUH) but I’m going to beat you one day! Your worksheets are really difficult but thank you for them cause it helped me to get my As! Thank you for being such a great tutor! Ivy Quah, St. Anthony's Canossian Secondary School (Miss Loi: Spellcheck doesn’t seem to be working for this one …) Hi Ms Loi! I can’t believe that it’s been 4 years since I stepped into the Temple and became your disciple, a Mecks Monk.
From a perpetually “last-in-class-for-P5P6-Mecks” to a “lol-I-passed Amath!!” student, and occasionally shocking myself with surprisingly decent Mecks grades. 😮 I still recall struggling with Mecks in Sec 1 and Sec 2 so much that I felt like I could just 😥 there and then from the Mecks phobia. Of course I still do experience occasional bouts of anti-mecks feelings, but they are less frequent now 🙂 I took Amath cause I felt that, well, I already had mecks tuition, so might as well 一举两得 (kill two birds with one stone), one tuition covering 2 subjects. I never thought I could pass Amath after hearing the horror stories from seniors, about Amath being a universally failed subject, too arcane, too ☠ etc etc. And it was ZOMG shocking when I was regarded as ‘good at Mecks’ in class by my classmates and teachers. Nevertheless, I am vvvvvv grateful for your help/tutoring for these 4 years or so, you’ve helped me keep afloat amidst the intimidating mecks tsunamis that threaten to engulf me. Even if I go out of the exam hall feeling like I would flunk the paper, I would at least errrr… get back my paper with a ‘decent’ score, and this was quite a nice surprise actually. I am not your most prodigious disciple, and I will never reach mecks nirvana like you have. I might always regard Amath as ‘Apocalyptical Mecks’, Math topics to be ‘Glaph’, ‘Trickonometry’, ‘Madsuration’ and draw angry ⚜ on my math exam scripts (taboo yes), but I will always recall the shining, glorious moments when I aced random mecks tests and etc., all attributable to you. All this being said … here’s a HUGE THANK YOU to you, and an apology for making you read my extremely illegible handwriting (did I blind you?) Okay. I should end this extremely incoherent ‘essay’ now 😛 Lisabelle Tan, CHIJ St.
Theresa's Convent Ms Loi has helped me a lot during my 4 years here 🙂 I like her teaching style ’cause it helps me remember my formulas … Although I started off weak and constantly failing, in Sec 4 she pushed me a lot so I got above the passing mark and this made me so happy hahh … The environment is nice 🙂 I like the green halls although it would be better in black (Miss Loi: hmmm … ) 🙂 Thank You Ms Loi! Diane Lim, Tampines Secondary School Even though I have had tuition with you for less than four months, during this period I have realised a lot of things. Your words of encouragement give me hope, your patience gives me the determination to work hard and your beautiful smile gives me the confidence. And I think that is also why I could improve tremendously within such a short period of time. Therefore I’m really very grateful for all your guidance. Thank you very much! Charles, Balestier Hill Secondary School We all know the number of A1s we obtain is directly proportional to the hard work we put in. However, “Efforts and Courage are not enough without purpose and direction” – John F. Kennedy Miss Loi has been guiding me with my math since I was in Secondary Two and every second I spent at The Temple has truly enriched my soul and touched my heart with the love of math ♥ [Miss Loi: Awww … sweetest quote of the year!] Miss Loi’s awesomeness is like an exponential graph. Her love and care for her students GMH. Thank you, Miss Loi, for being the bestest tuition teacher, ever! iie lubsxcz euu! <3 Toh Kiat Sheng, St. Joseph's Institution Out of the 12 math teachers I’ve had in my whole life, I’ve hated all 12 of them. Ms Loi is the only math teacher that I like!! After attending these tuition sessions at Jφss Sticks, I realized that I can actually do math! The formulae / equations / [DEL:cheating:DEL] or secret techniques that Ms Loi had taught me were really helpful.
The only bad thing is her face and her voice [DEL: haunts me:DEL] appear in my mind at night, especially the 2 nights before my Paper 1; I totally couldn’t sleep because Ms Loi’s voice kept repeating “when intercept at Y, X equals WHAT?!!?” It’s really scary but nevertheless thank you Ms Loi!!! Jasmine Lim, O-level Private Candidate This is Kelvin’s mummy. I am very happy to inform you that Kelvin scored A1 for Add Maths and an A2 for his E-Maths. My heartfelt appreciation for your excellent coaching for the whole of last year, without which he would not have made such amazing progress!! I still remember desperately searching for a maths tutor to help him prepare for his O level. He was faring so poorly (failing his A-Maths and just managing to pass his E-Maths) in spite of tuition. I am so glad that I saw the article in The Straits Times on “Super Tutors” one day and you were one of the tutors featured. What impressed me was that it was written that you will fire any student who is lazy!! “Wow” I thought, that is one serious tutor! Thank God, under you, he began to make steady progress and also did not make any excuse not to go for tuition, unlike in the past! Thank you once again from the bottom of my heart, Miss Loi, for helping my son to achieve his potential. Keep up your excellent work!! Your students and my son are blessed to have a ‘Super Tutor’ like you. With deepest appreciation, Betty Koh, Kelvin Koh's Mom (Maris Stella High School) Ms Loi is someone who knows the math syllabus (both additional and elementary) like the back of her hand. Trigonometry, logarithms, probability, et cetera – the ‘O’ level syllabus is practically at the tips of her fingers. She is able to convey the crux of the topic to any student in the simplest manner ever. I was struggling with my maths; the turning point came for me when Ms Loi became my tuition teacher, after which my maths grades improved significantly.
Although the ‘O’ levels are over, and come what may the results, I would just like to say thank you to Ms Loi for being such a wonderful teacher! The journey is what matters most, and you have been absolutely sensational! Koh Shao Yu, Cedar Girls' Secondary School Dear Ms Loi, Thank you for helping me with my math for these past two years, guiding me and helping me improve. Thank you for always being so patient with me 🙂 Thank you for helping me achieve what I thought was impossible – 2 A1s for both E math and A math 🙂 I arrived at the end of Sec 2, where I was always either failing or just getting a pass. But your worksheets and guidance have proven to be extremely helpful. Thank you for the advice that you had given to me just the night before the exam and for always believing in me. Thank you very very very much, Ms Loi! 🙂 Gwyneth Kong, St. Nicholas Girls' School Miss Loi has definitely made me understand E/A Math concepts a lot easier. Although I would not say Math is my favourite subject, Miss Loi has made me love ♥ these subjects a lot more. Her explanations are clear, detailed and she’s really nice too! The “massive” amount of homework (papers etc.) she gives every week has indeed helped me to reinforce certain chapters. She identifies the chapters I am weak in and gives me more practice on questions related to these chapters. The booklets of E/A Math notes are also really helpful as they help me in last-minute studying (just kidding :P) These booklets provide condensed, step-by-step solutions on how to solve certain questions. I always read these booklets before tests/exams and before each ‘O’ level Math paper. In all, Miss Loi is a wonderful teacher who cares for her students. Thank you for helping me clarify my doubts and never scolding me no matter how bad I was! In fact, I have benefited a lot from you besides Math/A Math. 🙂 頑張って!! <= 加油!! (Keep it up!!)
Jeanne Mah, Singapore Chinese Girls' School I still remember the first time I had lessons with Miss Loi and how it was back at the old tuition centre. Through these four years, I saw a lot of changes – change in venue, me growing up etc. Anyways, through all these years, even when I kept talking and was being annoying, she still didn’t give up on me. And because of that, I usually got A’s in exams and I feel confident walking into the exam hall for the O levels. Miss Loi is a great teacher. She cares for her students, is patient and understanding. My math standard would never have gotten to where it is now without her. Miss Loi has a special way of connecting with the students and helping them understand the hardest of topics in like 5 minutes. Her worksheets are really helpful and well put together. Thank you Miss Loi for everything. I wouldn’t be able to do math without you. And, I am really going to miss you loads. You have made a great impact in my life, and I will never forget you. ♥ LOVE YOU ♥ Miss Loi — BEST tutor in the world. Crystal Tay Hui Shi, Fuchun Secondary School Your tutoring has benefited me much more than almost any other teacher’s. The way you teach is very effective and ‘labour intensive’. I don’t know how good my Math subjects would be if I hadn’t found you (probably worse). Plus, I really, really, really like the atmosphere and environment here. Very conducive and also homely. Everyone here is so helpful and friendly. I’m sure that no other tuition centre would be able to compare to Joss Sticks Tuition Centre. Thank you so much for aiding me in my improvement! One of the biggest contributing factors to my current good grades is your help, and I couldn’t be more grateful for this. I hope to come back in the future, possibly for A level tuition, so I’ll see you then! Angela Li, St. Nicholas Girls' School I still remember when I was in Sec 3 and my A-Math teacher pulled 3 students out of the class after they scored between 2-4 marks for a test.
One of them was me. She made us sit outside the class and reflect on our grades. All 3 of us were among the worst students (academically) in the class, and maybe in the whole level. At that time, I really thought I was just doomed forever, not being able to do math. I always thought I was just naturally bad at math (probably because I hated algebra since Sec 1 and could hardly pass math ever since algebra was introduced). So you must be wondering what happened to the 3 of us. One got retained; the other didn’t get promoted but nevertheless got advanced eventually and thus dropped A-Math, so it was only me left standing to face A-Math in the Os. But just one fine day during the holidays I stumbled upon your website and I think I was very, very lucky to get a slot in your busy schedule, and even luckier to still be able to attend beyond the school holidays even though you said I might not have a slot after the holidays. Ever since I started attending Jφss Sticks, algebra and everything else seems so much easier! I really want to thank you for re-teaching everything all over again and for specially making time for me – it must be really torturous. Thank you for all the late night lessons just before the O levels and the wonderful notes! Thanks for all your awesome explanations and for putting up with all my careless mistakes. And thanks for the sour sweets that kept me awake even when I felt sleepy (I’m addicted to them right now, bought many many of them in Malaysia) and for rushing out the answers for the O level scripts! I really owe it to you for my grades right now and I enjoyed all my lessons at Joss Sticks, it was very fun! Btw, everyone was pretty shocked when they found out I got an A1 for both A-Math and E-Math. So, THANK YOU! Fiona Leong, Methodist Girls' School (secondary) I really, really, really, really, really hated Math. No, really. I couldn’t understand why I was doing Math at all. If I was drowning in the ocean, was the R-formula going to save my life?
Probably not, but partially because I know how to swim. That’s not exactly my point but you see? Knowing how to swim would mean that I probably can save myself if I ever fall into the sea or something. But Math? Math just hurts my head. So after Primary School was over and I got scolded in Secondary 1 for using models to attempt to solve algebra, I think that was when I started not caring for Math. Naturally, my grades plummeted, but somehow I pulled through (kind of) though I was just floating about, I think. And then, yeah, got scolded by my parents and teachers and parents again and again and again. Finally, my mother discovered a place called Joss Sticks and somebody called Miss Loi and suddenly I was on my way to Novena on a Saturday afternoon. Thank you Miss Loi, Thank you thank you thank you THANK YOU THANK YOU THANK YOU ♥♥♥ THANK YOU FOR BEING SUCH A GENIUS. Also, for pulling me (no, dragging me) out of my blur state and making me realise that Math isn’t so terrible after all. I might kind of like it now. THANKS FOR BELIEVING IN ME, I was so happy —- so so so so happy —- when you assured me saying that I can get an A1. ##$&’&’’&$%$&&(’! <—— random scribbles to show how happy I am. THANK YOU MISS LOI! CAN YOU FEEL MY GRATITUDE THROUGH THE PAPER AND INK AND TERRIBLE HANDWRITING? ♥♥♥♥♥♥♥♥ Last lesson 🙁 Now my Saturday afternoons will be bleak and boring and not Math-ey 🙁 I’m almost out of paper, so I’ll end here now! I’LL MISS YOU !!!! Cheang Chu Ying, Nan Hua High School I started tuition with Miss Loi back in 2010 when I was in Secondary 1 and took Normal Academic Math. I vividly remember how she used to repeat the same question almost ten times but I still made the same mistake on the tenth try. However, Miss Loi did not give up and continued teaching me until I made no mistakes. In Secondary 3, with Miss Loi’s encouragement, I was able to clinch the mark that allowed me to take Elementary Math instead of Normal Academic Math. I was then overjoyed.
Lessons with her were great as she explained the questions clearly and enabled me to have a better understanding of Math, thus making me enjoy math more. In Secondary 4, when I received my ‘N’ level certificate, I was extremely shocked as I got A1 for Math, and all this happened because of Miss Loi. I’m Secondary 5 this year and thanks to Miss Loi’s uncountable homework every week, I am sure I’ll do her proud for ‘O’ levels. 5 years with Miss Loi and I’m going to miss her a lot! Thank you Miss Loi for your inspiration and encouragement, you’re indeed a SUPER TUTOR! 🙂 Teresa Nazareth, St. Anthony's Canossian Secondary School Although I have only been under you for a short period of about a year, I have already attended over 60 of your lessons and, speaking from the bottom of my ♥, I have really enjoyed every single one of them because they are very enriching and fruitful. I enjoy coming for tuition not only because of your joviality, but also because every time I leave your temple, I always leave with something new (for math of course!). Thank you for making Math a whole lot simpler and thank you for sharing all your shortcuts with me. Although you refused to prompt me for the math problems occasionally, I know you did all those for my own good because you want me to learn to be independent. Once again, I really want to show my gratitude by officially calling you my idol. 谢谢 (Thank you)! I will be back soon, promise!! PS. You make me happy when the skies are grey. Lim Wei Wei, St. Nicholas Girls' School I’ve always hated math and tuition because my whole life was about doing math and going for tuition. But after coming to Joss Sticks, I actually enjoyed going for tuition … and math wasn’t so bad. Even though I only joined last year, I feel like I’ve been coming to Joss Sticks since Secondary 1. I made many friends here and I’m thankful for that!! I’m very happy that with your help, I improved. I know secretly inside your heart, you’ll miss Wei Wei and me a lot, but we’ll miss you more.
Thank you for being a friend and a teacher. 🙂 I enjoyed gossiping and talking about losing weight. I think you’re one of those special people in my life whom I will never forget, and there are not many okay! You hold a special place in my heart, Miss Loi 🙂 Thank you again Miss Loi and I hope we’ll still keep in touch in the future 🙂 我爱你 (I love you) Miss Loi! ♥ Myra Ng, St. Nicholas Girls' School I started tuition at Joss Sticks at the end of Secondary 3 during the school holidays because I did not do well for my A Math end-of-year examination. As Miss Loi is a well-known tuition teacher, I decided to try out her tuition. At first, I was actually quite intimidated by Miss Loi, but after a few lessons, I found that Miss Loi is very easy to talk to and is very approachable. Miss Loi is also very dedicated in explaining the concepts and is very professional when she conducts her lessons. After 50 lessons with Miss Loi, I have improved a lot for A Math and I finally got an A1 in Secondary 4. I also believe that I can get A1s for both A Math and E Math for O levels. The results I got are all thanks to Miss Loi’s numerous worksheets and papers given week after week, and of course, her constant guidance. Miss Loi is truly an amazing tutor who can guide one to achieve A1s. Thank you so much Miss Loi and I really appreciate all your efforts! Melanie Ding, Singapore Chinese Girls' School Thank you for being such a patient teacher and thank you for tolerating me during my “super jialat” period! Most importantly, thank you for giving me my first A1 in my secondary school life ^^ Frankly, I was amazed by your ability to make an F9 student like me become an A1 student within months. Impressive!~ Sometimes, I think besides being a tutor, you’re like my friend too. hehe^^ anyway!! I think I’m very lucky to have found you as a tutor. Mindy Ang, Deyi Secondary School After I failed Amath in Sec 3, my parents were desperate (as was I) to get me help.
The first tutor they turned to was Ms Loi because she was always known to be an outstanding tutor. To my surprise, I enjoyed attending classes and actually felt like I was learning and improving. Ms Loi was more than able to provide sufficient resources for both Add. and E. Math. I had improved so much that during the first common test in Sec 4 I nailed an A1 grade. Ms Loi’s dedication can be seen from the numerous lessons scheduled even on the day of the O level exam to make sure we are well prepared and fully confident. Thanks to her, I made a drastic and almost unbelievable jump from a D7 in Sec 3 to an A1 in Additional and Elementary Mathematics (O levels). My parents and I cannot thank you enough, Ms Loi! Not only did I have excellent results in the O levels, I have also gained confidence (and even a slight interest haha) in mathematics! 🙂 Ang Li En, Methodist Girls' School (secondary) Dear Miss Loi, thank you so much for being so patient with me for the past four years. Before attending tuition classes at Joss Sticks, I was horrible at Math. I absolutely detested the subject and found myself lost during lessons in school (hence my atrocious grades back then). However, when I started attending your lessons, I realised that Math was a lot easier than I had expected. Getting the correct answer each time I did a question was extremely rewarding. You have ignited my passion for Mathematics and I am grateful for your constant guidance and endless encouragement. You have helped me build up my confidence in Math and opened up my eyes to see things from different perspectives. Thank you for being such an inspiration and for helping me clear my doubts. You are so much more than a super tutor. You are an inspiration and a constant reminder to myself to never give up. I can’t thank you enough for your patience and passion towards teaching. Truly, I have been saved by you and I am so thankful for having such an amazing teacher like you. Thank you so much Miss Loi!
I won’t let you down! Kimberley Ng, CHIJ Secondary (Toa Payoh) I want to give a BIG THANK YOU to Miss Loi for her constant help and guidance. My EMath and AMath grades were slipping, but thankfully, I could find such a GREAT teacher! Miss Loi is a dedicated teacher and she helped me whenever I needed her to explain a question. I really appreciate her support and encouragement. I started out with little confidence in Math, feeling nervous before every Math test. However, I was confident for the ‘O’ Levels as I knew that Miss Loi had prepared me very well. I ended up achieving A1 for both EMath and AMath. This is all thanks to Miss Loi for giving me so much practice and allowing me to come for many extra classes even though sometimes her class was already full. I’m really happy to get such great results so THANK YOU MISS LOI 🙂 Brenda Kong, Singapore Chinese Girls' School A teacher’s task is to educate students. But every student learns differently, and to achieve twice the results with half the effort, one must first understand the student, so that the student can learn more effectively. Ms Loi is just such a teacher. She is willing to mix freely with her students and understand their personalities and learning styles, and only then does she teach them how to “tackle” the difficult problems they may not have understood in the past. Although there are no shortcuts in learning, by consolidating students’ mathematical foundations in this way, the pace of learning will undoubtedly quicken and become more efficient. From my perspective, this is the teaching approach Ms Loi takes. What students fear most is probably the idea that “the more you do, the more mistakes you make”, which gives them an excuse to be lazy. However, Ms Loi insists on overturning such fallacies. To her students, she plays the role of a motivator. At the appropriate moments, she gives encouragement for our strengths, suggestions for our weaknesses, and extra resources to help us. In Sec 3, I had already lost heart in mathematics, thinking it an unfathomable subject. But under Ms Loi’s patient guidance, I faced the Sec 4 school-leaving examinations full of confidence in both math subjects, and in the earlier prelims I scored A1 for both. Maple Tay, Cedar Girls' Secondary School Maths is a taboo topic for me and has been the bane of my life since Primary 5. It was a struggle for me to attain anything above 60 marks, and when the crucial Secondary 3 year dawned on me, my mother decided to take quick action. I have had other tutors before but I needed a Good One desperately. She met Miss Loi at a business group meeting and made arrangements for her to tutor me for A Maths. Till today, I am still astounded by my first results just after 5 lessons with Miss Loi. I scored the highest in the test! Well, as the saying goes … the rest is history. Miss Loi is both resourceful and comprehensive. Her explanations are clear and easy to understand.
In many instances, she was able to highlight probable questions and pinpoint areas which most students tend to have problems with. This is culled, I believe, from years of experience and being focused on the subject. With Miss Loi's help, I have been able to score top marks ever since.

Gail Lim, Crescent Girls' School

Ms Loi is a focused teacher who drives her students to their maximum limits. Personally, she has helped to raise my math grades to above average, with frequent As. Her teaching methods are exam orientated and precise and include giving out practice papers, issuing tests and making sure that concepts are learnt properly before moving on. She also has an easy going nature that makes it easy for students to interact and form friendships with her. Thus, it is not an overstatement when I say that she has so far been the best tuition teacher I have ever had!!

Christiana Lim, Zhonghua Secondary School

Ms Loi is the best Maths tutor I ever encountered, and both my children had the privilege to be under her tutelage. She not only knows her work well but is also very dedicated, exam focused in her approach and most importantly, produces results. For parents and students in search of a good math tutor, Ms Loi's services come highly recommended.

Sharon Lim, Christiana Lim's Mom (Zhonghua Secondary School)

Miss Loi is smart, efficient, helpful, friendly, pretty(!) and easy going! What more can you ask of a tuition teacher? I was a F9, E8 person for maths ... But after attending lessons for a month (4 sessions, 8 hours!), I jumped to A1! 😮 OMG! Unbelievable! Compared to my friends who have attended tuition for 1 year and still fail! Miss Loi is definitely so much more efficient and effective! I'm glad to have found her! PS: There're still sweets provided! What more can you ask for? 🙂 [Note: Riot of colours inherent from source document.]

Charlene Siah, Manjusri Secondary School

Thank you for all your help this year.
My Math grades have improved ever since you started teaching me. All the math questions you gave are related to the ones given in school. I just love the challenging math problems you gave me. It really gets my brain thinking haha :D Thank you for repeating the method to solve the math question countless times! 😛 Oh, I really admire how fast you can solve every single math question! They are all at your fingertips! So once again, thank you!

Kway Wei Tong, Dunman Secondary School

Miss Loi, thank you for helping me improve my Math since Secondary 2. I still remember that when I first came here I was failing my Math. Since then, my math has been improving slowly and now I can get A for both Maths. I know that sometimes my CCA in school is very demanding and you have helped me replace a lot of lessons. Thank you for everything and most importantly improving my results and helping me catch up on the concepts that I didn't understand!

Schuyler Tay, Catholic High School

Miss Loi is a dedicated and focused teacher, who not only does her job well, but is also a friend to her students. She often encouraged me, telling me not to be defeated by the questions so quickly, and to continue to persevere in the face of failure. Amazingly, when she speeds through the various topics, I can fully understand the concepts at once. One thing that I like about her teaching method is that she will note all the important concepts and common questions likely to come out in the exam in a notebook, allowing me to revise with ease and convenience. She is also apt at pinpointing my weak areas, constantly working on them until I could even attempt the challenging questions! And what she mentions about her ability to spot exam questions, it's true! Through her guidance, I became more confident, and even interested in maths. My grades improved consistently, from F9s to Bs during prelims. Her pleasant and likeable personality also provided a cheery environment for learning.
When I received my O Levels results, I achieved A1 for E Maths and A2 for A Maths. Thanks Miss Loi! You're the best maths tutor I've had!

Chua Yong Zhen, St. Nicholas Girls' School

Exciting. Enjoyable. Enriching. These are probably three words I would use to describe Miss Loi's lessons. Her methods of teaching are concise as well as organized, following the O Level syllabus very closely. This has allowed me to grasp the many mathematical concepts very quickly and also apply them to my daily work at school. Lessons with her are never dull or boring because of her cheerful disposition when teaching. This allows me to feel very comfortable in terms of interacting with her or just asking questions when I do not understand the concept fully. Miss Loi's patience and clear explanations truly benefited me in terms of both my grades and also character development.

Chong Ke Qing Cherie, St. Nicholas Girls' School

Let me first say that I'm a man of few words, so this might be pretty short, but full of sentiments 🙂 Although I've been in your class for less than 2 years, it's really been nice to have you as a tuition teacher!! Your maths is awesoome and you're really kind and patient in explaining! Under your math tuition, my math and amath results have improved too, so when the news reporters interview me for 10 A1s next year, I'll attribute my results to you! (Miss Loi: will hold you to that!) You're an AWESOME MATHS TEACHER!

Chester Chua Jiemin, Anderson Secondary School

Dearest Miss Loi, thank you for all your time and patience in helping me improve on my EMaths and AMaths. Before I first came to Joss Sticks, I used to think I could "study" Maths. You have made me realise that only practicing can make me improve my Math grades. The notes you printed out for me were extremely useful and I have used them for tests and exams and the O Levels. I'm sorry if I gave you white hair during my stay. Finally thanks for your effort in helping me conquer my fear of Maths. STAY AWESOME!
Meldrick Wee, Nan Hua High School

Ms Loi was my first tuition teacher. Before going for tuition, I was getting C5 for AMath and B3 for EMath. However, upon entering tuition, my grades eventually went up to A1 for both AMath and EMath. I find that Ms Loi's teaching methods are extremely effective as she comes up with ingenious methods that allow the students to solve the question with maximum success and minimal effort. She also constantly challenges me to solve difficult problem sums to ensure that I understand the basic concepts of the particular chapter. She's also equipped with over 10 years' worth of O Level papers 😀 On the whole, I feel that Ms Loi is a teacher who is extremely addicted to teaching. She's also very funny and thanks to her, Math is actually my favourite subject. Thank You Ms Loi!

Denise Tong, Singapore Chinese Girls' School

Okay ... I'm so sorry it's so last minute and for probably freaking you out today. (Everyone seeing me write a card outside United Square is officially staring at me) ... But anyway, I have nothing but thanks Ms Loi. Rather expectedly, I obviously wouldn't have been able to get the A1 without you. Really! In January, when my CA1s were out, I was shocked. Never, ever had I come so close to failing a core subject. Nothing can describe how scared I was ... I have God to thank for Jacqueline recommending you. I couldn't have asked for a better teacher to have helped me than you. I couldn't have asked for a more devoted, constantly-eager-to-strangle-me teacher to help me get the marks I wanted, because you knew I could and believed that I could still do better, however good the marks may seem. Thanks for everything and yes ... I shall finish my homework.

Stephenie Ong, Methodist Girls' School (Secondary)

A year with you has definitely been a fruitful one. I am so glad to have found you and I always look forward to your lessons. I find you different from other tutors: Pro, Jovial & Creative.
Your dedication and patience to ensure every student and myself achieve As is beyond imagination. Furthermore, lessons have always been sweetened by your Sour Sweets. I entered The Temple without carrying much hope, just like a candle without its light. As lessons went on, the candle lighted up and the fire in my heart ignited as well. This caused the electric impulses in my body to travel through the synapse into the relay neuron of my brain. [Miss Loi: this sentence is a bit off-topic.] From weeks to months, I realize that I have been brightened up by your systematic lessons. You have caught my weaknesses and transformed them into my strengths. With that, I just wanna say that I really appreciate your help and guidance and I hope that one day, I will be able to help others the way you do. Thank you sooooo much. (and oh yes I will miss you!)

Lee Cai Xia Lyn, Kranji Secondary School

Hello Miss Loi! Thank you for helping me with my math, which improved after coming here. All thanks to you! 🙂 I love coming for tuition, cos there's sweets! SOUR SWEETS! 🙂 And also you! So pretty & chio! <3 If not for you, I believe my math will still be a F9 grade. To me, you're just like Albus Dumbledore, and I'm Harry Potter. If you read Harry Potter books, you should know what I mean – it means you're damn freakin' nice lah! Thanks for being such a pretty, chio, caring & cute teacher. I will always remember you! So you must remember me too! Just miss me, remember me and think of me! 🙂 ♥♥♥ Miss Loi, you ℝULEZ!

Emily Toh, Hougang Secondary School

I joined this tuition two months before the 'O' Level examination. I was a F9 student all along but a miracle worked on me after a few weeks of tuition. My mathematics grade improved and I am no longer the last in any mathematics test and examination. Because of this, I now have more confidence than before. Ms Loi, Thank You! I am really blessed to know you as my 'miracle healer'. Haha 😀 I love you! 😀 :* Muack!
[Note: Again, riot of colours inherent from source document.]

Joyce Lim, St. Margaret's Secondary School

MISS LOI has been an awesome teacher and has taught me math concepts over and over again without any grouse. She's very patient and her lessons are fun with her seemingly endless supply of sweets to perk us up when we feel tired! 😀 From her, I learnt a lot of math concepts I previously did not understand at all. Now that lessons are over, I kinda miss them! Anyway, Miss Loi is one of the best teachers out there and I'm forever grateful to her for helping me. Thanks a lot, Miss Loi! 🙂

Clare Lee, St. Anthony's Canossian Secondary School

The United Colors of The Temple. Join The Temple's United Nations of students! [DEL:Devotees:DEL] Students from over 160 schools have passed through The Temple Gates on the way to their Mathematical Nirvana. Can you spot your alma mater, or perhaps contact us to add yours to the growing list below? JUNIOR COLLEGES, IP & IB SCHOOLS. Updated: December 2011
A system is made up of two subsystems, A and B, connected in parallel. Subsystem A is made up of 5 components connected in parallel. Subsystem B is made up of 5 components connected in series. All components function independently. The probability that a component is operational is 0.7. Let P(S) denote the probability that the system is operational.

a. Find P(S).
b. A component from subsystem A is tested and found to be operational. Find P(S).
c. A component from subsystem B is tested and found to be operational. Find P(S).
d. A component is randomly selected (from A or B, equally) and found to be operational. Find P(S).
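No worked answer appears on the page, so as a sketch only: under the stated independence assumptions, the four parts could be computed like this (the variable names and structure are mine, not from the site):

```python
# Reliability of the network: A = 5 components in parallel, B = 5 components
# in series, and subsystems A and B themselves in parallel; each component
# works independently with probability p = 0.7.

p = 0.7

def system_prob(p_a, p_b):
    # A and B in parallel: the system fails only if both subsystems fail.
    return 1 - (1 - p_a) * (1 - p_b)

# (a) Unconditional reliability.
p_a = 1 - (1 - p) ** 5       # parallel subsystem fails only if all 5 fail
p_b = p ** 5                 # series subsystem works only if all 5 work
ps = system_prob(p_a, p_b)

# (b) A component of A is known operational, so A (parallel) surely works.
ps_b = system_prob(1.0, p_b)           # = 1 exactly

# (c) A component of B is known operational; B still needs the other 4.
ps_c = system_prob(p_a, p ** 4)

# (d) Component chosen uniformly from the 10 and found operational.  Every
# component works with the same probability, so by Bayes' rule the chance
# it came from A is 1/2, and we average the two conditional answers.
ps_d = 0.5 * ps_b + 0.5 * ps_c

print(ps, ps_b, ps_c, ps_d)
```

A quick sanity check: part (a) comes out just under 1 because subsystem A alone is already very reliable (it fails only when all five of its components fail).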
Double Sided Equations Worksheet - Equations Worksheets

Double Sided Equations Worksheet – The objective of Expressions and Equations Worksheets is to help your child learn more effectively and efficiently. They include interactive activities and challenges based on the sequence of operations. These worksheets make it simple for children to grasp both complex and simple concepts quickly. The PDF resources are free to download and can be used by your child to practice math problems. They are helpful for students in the 5th through 8th grades.

Get Free Double Sided Equations Worksheet

The two-step word problems are made with fractions and decimals, and each worksheet contains ten problems. The worksheets are available online and in print, and they are an excellent opportunity to practice rearranging equations. In addition to letting students practice rearranging equations, they also help your student understand the basic properties of equality and inverse operations. They are also suitable for students struggling to compute percentages. You can choose from three different kinds of problems: single-step problems that involve decimal or whole numbers, or word-based approaches to solving decimals or fractions. Each page contains ten equations. These worksheets are an excellent source for practicing fractions as well as other concepts related to algebra. Some of the worksheets let you select between three types of challenges: word-based, numerical, or a mixture of both. It is important to choose the right type of problem because each problem will be different.
Each page has ten challenges, making the worksheets a great aid for students in 5th–8th grade. They teach students about the relationships between variables and numbers, allow students to solve polynomial equations, and show how to apply equations in everyday life. They are a great opportunity to gain knowledge about equations and expressions, and they introduce various kinds of mathematical problems as well as the many symbols used to express them. These worksheets can also be beneficial for students in earlier grades. They help students learn how to solve equations as well as graph them, and they are great for practicing polynomial variables, including how to factor and simplify them. You can find a good set of equations and expressions worksheets for kids at any grade level. Working the problems yourself is the most efficient way to learn equations. There are a variety of worksheets for teaching quadratic equations, with different levels of equation worksheets for each stage. Some are designed to help you practice solving problems of the fourth degree. Once you've finished a level, you can begin to work on solving other types of equations, and then focus on solving problems of a similar level.
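As a concrete illustration of the "variables on both sides" skill these worksheets drill, here is a tiny made-up example (the numbers are mine, not taken from any worksheet):

```python
# Solve 5x + 7 = 2x - 8, a typical "double sided" item: collect the x-terms
# on one side and the constants on the other, then divide by the
# x-coefficient (the inverse-operations idea mentioned above).
a1, b1 = 5, 7      # left side:  a1*x + b1
a2, b2 = 2, -8     # right side: a2*x + b2

x = (b2 - b1) / (a1 - a2)
print(x)           # -5.0

# Check by substituting back into both sides.
assert a1 * x + b1 == a2 * x + b2
```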
Synthetic Division, Beyond the Basics

To go "Beyond the Basics", I suppose it is necessary to illustrate a quick review of "the basics" in the beginning. The image above is taken from a website that was active in January of 2020. This is how synthetic division is taught in thousands of classrooms across America (millions more ignore the topic because they see it as without value). And in the disclaimer about "we will see an example....", they say, "We followed the synthetic division process, but arrived at a wrong answer." But they didn't show any method of adapting to this problem....

I grew up being told that this was the extent of what synthetic division could do, and like any reasonable student, decided that it wasn't really a useful tool to worry about. But when I went into teaching, it was still in textbooks occasionally, and it came with the same set of warnings. One day in a boring staff meeting, I decided to test that hypothesis. The results of that exploration are in the sections below.

The most basic form of synthetic division is to have a quadratic or higher polynomial divided by a linear term with a unit coefficient of the variable; something like x + 3, or x - 5. While to many people this is "synthetic division", it is better known as Ruffini's rule, and was first described by Paolo Ruffini in 1804 according to Florian Cajori. This work by Ruffini actually anticipated the method of Horner. More on this historical note at the bottom.

So let's create a simple problem and start with the most basic approach, polynomial long division; show what the algebra really means; and then show why synthetic division was created as "an advantage". Then we will break all the rules and divide by a quadratic, and by a term with a leading coefficient different than one. IF you oppose synthetic division, then thoroughly teach polynomial long division in a way that develops understanding.
THE BASE OF THE "BASIC" METHOD:

Let's start with a simple problem of dividing a quadratic expression by a linear term, and then explain what it really says. We will divide 2x^2 - 2x + 5 by the linear term x + 3.

          ______________
    x+3 ) 2x^2 - 2x + 5

We ask what we multiply the x in the divisor by to get 2x^2. The answer of course is 2x, so we write that in over the bar, and multiply by all terms in the divisor.

            2x
          ______________
    x+3 ) 2x^2 - 2x + 5
          2x^2 + 6x

At this point, like any long division problem, we subtract the bottom line from the original dividend. (Subtraction is harder, and more prone to trivial mistakes than addition, and this step will explain a major difference in synthetic division.)

            2x
          ______________
    x+3 ) 2x^2 - 2x + 5
          2x^2 + 6x
          ---------
                -8x + 5

So now, we divide x into -8x, with the obvious result of -8, and we multiply by the divisor again.

            2x  -  8
          ______________
    x+3 ) 2x^2 - 2x + 5
          2x^2 + 6x
          ---------
                -8x +  5
                -8x - 24
                --------
                      29

and we subtract again to get a remainder of 29 (yep, 5 minus -24 is 29... and teachers with experience KNOW that kids mess this up incredibly often). At this point the algebra teacher often has the student resort to polynomial multiplication of (x+3)(2x-8), then add 29 (the remainder), and sure enough they get 2x^2 - 2x - 24 + 29, which simplifies to 2x^2 - 2x + 5, and hurrah, it all checks. The student, not really sure of either the new division process or the meaning of the multiplication process he may have learned only a few weeks ago, silently accepts the result, but really doesn't understand anything about the operation he just performed.

BUT, suppose we tell the students, "Let's pretend that x = 30. (We want to pick a first number greater than our remainder.) Then this problem would say divide 2(30^2) - 2(30) + 5 by 30 + 3.... or 1745 divided by 33. Go on, do it, you can even use your calculator." They will tell you that the answer is 52 with a remainder of 29, or (2*30 - 8) with a remainder of 29. Now do it with another number, 50 or 100 or more more more.....
this is the long division they "sort of" knew from middle school.. it's just a more useful way of finding an answer whatever value of x you choose. Just for the excitement, have them write out a division problem like 425 divided by 32, anything like that. Now tell them if x = 10, this is just 4x^2 + 2x + 5 divided by 3x + 2, and then have them do it both ways. Very few students who understand this arithmetic method of checking polynomial division will ever miss a question on a test.

NOW, they may be ready to approach a shortcut to something they are closer to understanding: synthetic division by a polynomial. Important Instruction number 1: the approach of synthetic division simply avoids writing out all the variables by use of positioning, and reduces the common mistakes of subtraction by changing the steps to an addition process.

We write the same problem, dividing 2x^2 - 2x + 5 by the linear term x + 3, in synthetic division. To change the subtraction to addition, we will use only the opposite of the constant term: change the 3 to -3. Now we write the problem in the same way as the long division, but without listing the exponents (though if any terms are missing, we need to put in the missing zeros). For this problem we begin

    -3 )  2   -2    5

Yeah, lots less writing. Now we begin by bringing down the first number in the dividend, the 2, to the third row, leaving a space under the 2, -2, 5 (we'll use that later):

    -3 )  2   -2    5

          2

From here on out it goes quickly: we multiply the bottom number by -3, place that in the second row of the next column, and then we ADD (hurrah, no subtraction of a negative from a negative). Since 2 times (-3) is -6, we put that below the -2 and add, putting the answer in the bottom row.
    -3 )  2   -2    5
              -6
        -------------
          2   -8

And now we just keep repeating those two steps: multiply the last number on the bottom row (-8) by the -3 and add to 5.

    -3 )  2   -2    5
              -6   24
        -------------
          2   -8   29

Now look at the bottom line: 29 is the remainder, so the -8 is a constant term and the 2 is the coefficient of x. Yep, 2x - 8 with a remainder of 29 (and we stress this a lot at early levels) NO MATTER WHAT WE DESIGNATE THE x TO BE.

(After they are comfortable with this, try showing them that even if x is less than the remainder, the arithmetic still works out. For example, if we said x was 5 in this problem, then 2x^2 - 2x + 5 = 2(25) - 10 + 5 = 45, divided by x + 3 = 8. 45 divided by 8 is 5 with a remainder of 5... that means that 5 times 8 plus the remainder of 5 is the dividend, 45. But if we look at our answer, 2x - 8 R 29: 2(5) - 8 is 2. When we multiply 2 (the quotient) by 8 (the divisor), we get 16. But the dividend is 45... oh wait, we add back the remainder of 29 and indeed we get 45.) Maybe a table of values stresses this aspect:

     x    dividend        =  quotient x divisor + remainder
          2x^2 - 2x + 5   =  (2x - 8) x (x + 3) + 29
     5        45          =      2    x    8    + 29
    10       185          =     12    x   13    + 29
    15       425          =     22    x   18    + 29

And now on to the fun stuff for the teachers.

Synthetic Division by a Quadratic

I was sitting in a school improvement meeting paying less attention than I ought to, and started trying to work out the topic above. What is below is my way. I tried to find something about this on the internet, but only found sites that deal with linear factors, and some of them said that what I have done was not possible (I hope they are wrong)... Then when I came home I found a post on the AP Calculus discussion site that asked how to do this very thing, with a citation for an article in the Mathematics Teacher (in the March, 1980 journal (hopefully the archives go back that far), there is an article called "Synthetic Division for Nonlinear Factors" ...thanks to Lisa Lewis) that describes the process.
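Before the quadratic case, the linear procedure worked through above can be captured in a few lines of code (a sketch of my own with invented names, not part of the original posts):

```python
def synth_div_linear(coeffs, c):
    """Divide a polynomial (coefficients, highest power first) by x - c.

    "Multiply the last bottom number by c and add the next coefficient"
    is exactly the bring-down/multiply/add loop of the tableau; the last
    bottom entry is the remainder.  Returns (quotient, remainder).
    """
    bottom = [coeffs[0]]               # bring down the leading coefficient
    for a in coeffs[1:]:
        bottom.append(a + c * bottom[-1])
    return bottom[:-1], bottom[-1]

# 2x^2 - 2x + 5 divided by x + 3  ->  use c = -3
q, r = synth_div_linear([2, -2, 5], -3)
print(q, r)   # [2, -8] 29  ->  quotient 2x - 8, remainder 29
```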
Unfortunately I can't get that on-line. So here is how I did it. I made up a problem involving a fourth power polynomial and a quadratic divisor (a clever student could break this divisor into two linear terms and do the same in two divisions... IF he is careful about remainders, but pretend I had been clever enough to use an irreducible quadratic for the divisor).

Divide x^4 + 8x^3 + 15x^2 + 4x + 1 by x^2 + 3x + 2, using synthetic division.

The dividend is written out in the usual order with the coefficients (including any missing zeros), and to the left are two rows labeled with the negatives of the B term (3) and the C term (2). Notice that I put the -B term above the -C term, and as usual the bottom row will be the quotient:

    -3 |  1    8   15    4    1
       |      -3  -15    6
    -2 |           -2  -10    4
       -------------------------
          1    5   -2    0    5

The first step is still to bring down the leading coefficient. From there, there are two multiplications: the product of the -B term and the last bottom number goes in the -B row, one column to the right of that bottom number, and the product of the -C term and the same bottom number goes one column further right, in the -C row. Note the locations of the -3 under the 8 and the -2 under the 15. Then we add up all the values in each column to get a new bottom value, and we continue this process, always keeping the -C product one column to the right of the -B product. Since we are dividing a fourth degree polynomial by a second degree polynomial, the answer will be of the second degree, and the last two cells on the bottom represent a linear remainder. In this case the quotient is x^2 + 5x - 2 and the remainder is 0x + 5. You can check the solution by multiplication.

Big Idea..... Gauss proved that all polynomials with real valued coefficients can be factored into a product of linear factors (like x - a) and irreducible quadratic factors (quadratics that don't have real roots, and hence can't be factored over the reals).
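The two-row bookkeeping just described is easy to check with a short script (again a sketch of my own, not from the 1980 article):

```python
def synth_div_quadratic(coeffs, b, c):
    """Divide (coefficients, highest power first) by the monic x^2 + b*x + c.

    The negated b-products land one column right of the current bottom
    entry and the negated c-products two columns right, exactly as in the
    two-row tableau.  Returns (quotient, [x-coeff, constant] remainder).
    """
    work = list(coeffs)
    for i in range(len(coeffs) - 2):   # last two columns hold the remainder
        work[i + 1] += -b * work[i]    # -B product, one column over
        work[i + 2] += -c * work[i]    # -C product, two columns over
    return work[:-2], work[-2:]

# x^4 + 8x^3 + 15x^2 + 4x + 1 divided by x^2 + 3x + 2
q, r = synth_div_quadratic([1, 8, 15, 4, 1], 3, 2)
print(q, r)   # [1, 5, -2] [0, 5]  ->  quotient x^2 + 5x - 2, remainder 0x + 5
```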
So we can factor any real valued polynomial with nothing bigger than quadratic division.... OK, I don't know, but if I get a free moment in the next 24 hours, I think I will try extending this method to a third term and see if I get lucky... Ok, so the remark about "breaking the cubic down" was not clear, so I will try to cover that in a day or two..... HEY, I have five classes to prep.. give a guy a break...

A Little More on Synthetic Division

I wanted to follow up a few things I didn't make as clear as I might have yesterday: first the question about the leading coefficient, then the extension to dividing by a polynomial of higher than the second degree, and the remark about dividing by "breaking up a cubic".

First the question about the leading coefficient.... It doesn't matter what the leading coefficient of the dividend is, but the leading term of the divisor must have a coefficient of one... That is the same as with dividing by a linear factor...... but you can reduce the problem to an equivalent division by dividing out that coefficient... (wow, that was a mouthful, so here is an example, using the easier linear factor).

Suppose we wanted to divide 6x^2 + 16x + 10 by 3x + 5. We need to turn the 3 in 3x + 5 into a one, so we divide to get x + 5/3. But this is like simplifying fractions (think of 6x^2 + 16x + 10 as the numerator and 3x + 5 as the denominator): we have to divide both the top and bottom by three to keep the equality, so we also divide 6x^2 + 16x + 10 by three, and our new problem is to divide 2x^2 + 16/3 x + 10/3 by x + 5/3.

Keep in mind that if there is a remainder, it will not be the same in the simplified problem as in the original. If we add 6 to the constant term of the previous problem, we get 6x^2 + 16x + 16 divided by 3x + 5. Eliminating the leading coefficient of the divisor, we get 2x^2 + 16/3 x + 16/3 divided by x + 5/3, and now when we do the synthetic division we get a remainder of 2, instead of six....
ahhh, but we divided both terms by three, so our true remainder is 3 x 2 = 6... Keep in mind that one of the things synthetic division does is give us the value of the dividend at the zero of the divisor; if we divide f(x) by three, the value at any point (and hence the remainder on division) is going to be 1/3 of what it would have been otherwise.

And now... the higher degree issue, which seems to work out quite nicely... If we wanted to divide by a cubic we just add three lines, and continue to move the products over one column as we go down. Here is an example dividing x^5 + 6x^4 + 4x^3 - 39x^2 - 122x - 120 by the cubic x^3 + 3x^2 - 10x - 24... note that we still take the negative of each coefficient of the divisor:

    -3 |  1    6    4  -39  -122  -120
       |      -3   -9  -15
    10 |           10   30    50
    24 |                24    72   120
       --------------------------------
          1    3    5    0     0     0

As we continue we see that the quotient is x^2 + 3x + 5 with no remainder...

Finally, for the cryptic remark about "breaking up a cubic"... In the last problem, if we had wanted to, and if we recognized that x^3 + 3x^2 - 10x - 24 could be factored into (x - 3)(x^2 + 6x + 8), we could have done the last problem in steps. If we divide by one factor and then by the other, we will still get the same result (although there is still need to pay special attention to remainders). In arithmetic we can divide 24 by 6, or we can divide it by 3 and then divide the answer by 2... In the same way we can do the synthetic division in two steps. I decided (no special reason) to divide by the linear term first, then the quadratic: dividing by x - 3 gives x^4 + 9x^3 + 31x^2 + 54x + 40 with no remainder, and dividing that by x^2 + 6x + 8 returns the same x^2 + 3x + 5.

Synthetic Division when the Leading Coefficient is NOT One

I was hoping this would be the last one on synthetic division; I have a problem I promised Al Harmon from Misawa, Japan I would answer, and I wanted to talk about the Law of Sines (and how much I hate when people write it upside down)... So I will defer the responses for those of you who asked about the history of synthetic division to a later day. Today I wanted to answer the challenge to post the way to divide by ax + b without factoring anything out first.
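Before that, the cubic tableau and the two-steps-through-the-factors claim above can both be verified with a little code (a sketch of my own for any monic divisor; the names are invented):

```python
def synth_div(coeffs, divisor):
    """Synthetic division by a MONIC divisor (both lists, highest power first).

    Each bottom-row entry spawns products with the negated lower divisor
    coefficients, each shifted one more column to the right -- the
    multi-row tableau scheme.  Returns (quotient, remainder) lists.
    """
    d = len(divisor) - 1                  # degree of the divisor
    work = list(coeffs)
    for i in range(len(coeffs) - d):      # one pass per quotient coefficient
        for j in range(1, d + 1):
            work[i + j] += -divisor[j] * work[i]
    return work[:-d], work[-d:]

# One shot with the cubic divisor x^3 + 3x^2 - 10x - 24 ...
q1, r1 = synth_div([1, 6, 4, -39, -122, -120], [1, 3, -10, -24])

# ... versus two steps through its factors x - 3 and x^2 + 6x + 8
# (safe here because the first division leaves no remainder).
step1, rem1 = synth_div([1, 6, 4, -39, -122, -120], [1, -3])
q2, r2 = synth_div(step1, [1, 6, 8])

print(q1, r1)   # [1, 3, 5] [0, 0, 0]  ->  x^2 + 3x + 5, no remainder
print(q2, r2)   # [1, 3, 5] [0, 0]     ->  the same quotient, in two steps
```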
Apparently my ex-students never throw away their notes, hoping to catch me if I ever do something a different way, as I did a couple of days ago with dividing both divisor and dividend before I did the division so as to eliminate the leading coefficient of the divisor..... so in case there are other teachers who teach this out there... here is a modified synthetic division approach that will work for 3x + 5 and such, and still give the right remainder.

I will use the same example, 6x^2 + 16x + 16 divided by 3x + 5. To accommodate the 3 in the leading coefficient, a new line is added below the usual bottom line. After each column is summed (except for the remainder), the result is divided by the linear coefficient, in this case 3:

    -5 |  6   16   16
       |      -10  -10
       ---------------
          6    6    6
    /3    2    2

Notice that after we add the numbers in a column, we divide that sum by three to get the actual value of the quotient for that term, and it is this divided value that we multiply by -5 for the next column.... but we do NOT divide the remainder. The quotient is 2x + 2 with a remainder of 6.

I also wondered (I had never tried it) if you could do the same, or something similar, with the higher degree divisors. I had previously divided x^5 + 6x^4 + 4x^3 - 39x^2 - 122x - 120 by the cubic x^3 + 3x^2 - 10x - 24, so I thought I would try 6x^5 + 6x^4 + 4x^3 - 39x^2 - 122x - 120 by the cubic 2x^3 + 3x^2 - 10x - 24, which should give a quotient of 3x^2 - (3/2)x + 77/4 with a remainder of -(159/4)x^2 + (69/2)x + 342, and see what happens.....

Once more we create an extra line to divide by the coefficient of the term with the highest degree (in this case 2) and proceed as we do in the non-linear cases by shifting over one column as we go down the list of coefficients (or the negatives of them, actually). And once more, the result emerges rather effortlessly... Ok, we have some way ugly fractions in there... but you would get them even with all the mess you create with long division... Call that a wrap...

Synthetic Division to find the Derivative of a Polynomial

Questions, questions, questions...
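(Looping back for a moment: the divide-each-column-sum scheme from the previous section can be checked with a short script; this is my own sketch, using exact fractions to tame the "way ugly" numbers.)

```python
from fractions import Fraction

def synth_div_nonmonic(coeffs, divisor):
    """Synthetic division when the divisor's leading coefficient is not 1.

    Implements the extra row described above: each column sum is divided
    by the leading coefficient as it is produced, and that divided value
    is what gets multiplied into later columns.  The remainder columns
    are NOT divided.  Returns (quotient, remainder) as Fraction lists.
    """
    lead = Fraction(divisor[0])
    d = len(divisor) - 1                   # degree of the divisor
    work = [Fraction(x) for x in coeffs]
    for i in range(len(coeffs) - d):       # one pass per quotient coefficient
        work[i] /= lead                    # the added "divide by a" row
        for j in range(1, d + 1):          # negated products, shifted right
            work[i + j] += -divisor[j] * work[i]
    return work[:-d], work[-d:]

# 6x^2 + 16x + 16 divided by 3x + 5  ->  quotient 2x + 2, remainder 6
q1, r1 = synth_div_nonmonic([6, 16, 16], [3, 5])

# The quintic example divided by 2x^3 + 3x^2 - 10x - 24.
q2, r2 = synth_div_nonmonic([6, 6, 4, -39, -122, -120], [2, 3, -10, -24])
print(q1, r1)
print(q2, r2)
```

Running this gives quotient 3x^2 - (3/2)x + 77/4 with remainder -(159/4)x^2 + (69/2)x + 342 for the quintic; it is worth checking the sign of that x^2 remainder term by multiplying back out.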
It seems like synthetic division has struck a chord... and I get responses asking or telling me that:

a) No one should teach synthetic division;
b) I didn't show you the way to divide by linear terms like 3x-5 directly, and I had shown someone in my class a few years ago (and they remembered... "thank you, math Spirits");
c) I should have shown you how to find the derivative like I showed their calculus class (not sure how far back that was... I've done it a couple of times).

My answer to a) is, "If you don't want to teach synthetic division, don't; but please give me the freedom to make that choice for myself."

That leaves c), so today I will illustrate that it is almost as easy to find the value of the derivative at a given value using synthetic division as it is to find the value of a function. I will use the simple f(x) = x^3 - 3x^2 + 6x - 7, and we wish to evaluate f(2) and f '(2).

From the factor-remainder theorem we know that if a polynomial f(x) is divided by x - b, the remainder is f(b)... so we can simply find the value by synthetic division as below. From this we see that f(2) = 1... but how does that help us find the derivative? Calculus teachers know that f '(x) = 3x^2 - 6x + 6, and nothing like that seems to pop out of what we see above... and if we evaluate f '(2) we get 6... but where is it in the synthetic division?

Patience... one more run... now evaluate the quotient of the above problem at 2 again... and Behold, as Brahmagupta supposedly wrote... the remainder of this second division is the value of the first derivative at that point... and I know your heart is thumping in your chest, wondering, wishing... what if we did it again, would it be... could it be? Oh, Yeah... and now the descent through the derivatives in similar form is probably clear...

SO WHY does that work?
Let's walk through the second part using some pronumeral value x = a instead of a number. Notice that in the place where we expect the remainder, we see that we get the evaluation f(a)... and when we continue down, you can see the derivative terms accumulate as the division works across to finally reveal that f '(a) does indeed become 3a^2 - 6a + 6... I love math... You just don't have clever things like that pop out in any other discipline...

For those who would like a slightly more technical explanation, here is one from William Rose:

Write P(x) = (x-a)Q(x) + r. By the Remainder Theorem, this is:

P(x) = (x-a)Q(x) + P(a)

P'(a) = lim as x --> a of [P(x) - P(a)]/(x-a)
      = lim as x --> a of [(x-a)Q(x) + P(a) - P(a)]/(x-a)
      = lim as x --> a of Q(x)

So the theorem says that the quotient after dividing P by (x-a), evaluated at a, is the derivative of P at a. This seems totally transparent when you look at P(x) = (x-a)Q(x) + P(a) in that form. Q(a) is the rate at which P is changing near x = a.

Some Notes on the History

Ruffini seems to have been the first person in Western mathematics to introduce this shortcut for finding the roots of a polynomial. The Italian Scientific Society offered a prize in 1802 for the best method for determining the roots of a polynomial equation of any degree. Ruffini was awarded the gold medal in 1804. He also wrote simpler explanations of his method in 1807 and 1813. Horner would publish his paper in 1819 in the Philosophical Transactions. Fortunately for Horner, he had two well known ambassadors for his method in Augustus De Morgan and J. R. Young. Through their influence, the work by Horner swept through European mathematical circles. Almost a century expired before Florian Cajori credited Ruffini in 1910.

Of course there was a very similar approach used by the Chinese as early as the classic "Nine Chapters on the Mathematical Arts" before the 1st Century BC. Both D. E. Smith (1925) and Cajori (1924) recognize this as the first "synthetic division".
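The repeated-division trick above is easy to verify in code. In this Python sketch (names mine), each pass of synthetic division by (x - a) peels off one Taylor coefficient; note that from the second derivative on, a factorial correction is needed, since the k-th remainder is f^(k)(a)/k!. Checked on f(x) = x^3 - 3x^2 + 6x - 7 at a = 2:

```python
from math import factorial

def derivatives_at(coeffs, a):
    """Repeated synthetic division by (x - a).

    coeffs lists the polynomial from the highest degree down. The k-th
    remainder is the Taylor coefficient f^(k)(a)/k!, so multiplying by
    k! recovers f(a), f'(a), f''(a), ...
    """
    work = list(coeffs)
    derivs = []
    for k in range(len(coeffs)):
        # one pass of synthetic division (Horner's scheme)
        for i in range(1, len(work)):
            work[i] += a * work[i - 1]
        derivs.append(factorial(k) * work[-1])  # remainder of this pass
        work = work[:-1]                        # keep the quotient for the next pass
    return derivs

# f(x) = x^3 - 3x^2 + 6x - 7 at a = 2: f(2) = 1, f'(2) = 6, f''(2) = 6
vals = derivatives_at([1, -3, 6, -7], 2)
```

The first pass gives f(2) = 1 and the second gives f '(2) = 6, exactly as in the tableaus above.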
(For students, we can point out that this ancient text also had the Pythagorean Theorem (but not by that name) and Gaussian Elimination (also not by that name).)

1 comment:

Unknown said...
Your blog has an error in it. The part where you say, "Let's pretend x = 30," you accidentally changed the second term in the polynomial from -2x to +2x. If you make it -2x, then you get 1745/33 = 52 R29; now it checks out.
Statistical Techniques to Simplify Oil Analysis Data

In oil analysis, we often must interpret a change in one parameter relative to a change in another to reach a meaningful conclusion. For example, an increase in wear levels combined with a corresponding decrease in zinc might tell us that wear is increasing due to the loss of antiwear additive protection. Further "reading between the lines" using viscosity, acid number (AN) and infrared analysis data might tell us whether the reduced antiwear protection is the result of the addition of the wrong oil or additive depletion, and so forth.

Unfortunately, while these parameters are all important to our analysis, they use widely varying units and each has its own degree of random variation (data fragility). That makes charting the values over time on a common graph difficult. Also, different alarming logic is applied to the different parameters. One way around this problem is to trend statistically derived percentile ratings instead of actual parameter values.

Figure 1

This simple technique yields the following benefits to the oil analyst:

• All parameters can be reviewed on a common trend graph using common units (the percentile). This makes it easy to see what is rising and what is falling simultaneously, facilitating the process of reading between the lines. In Figure 1, it is apparent that viscosity is decreasing while particle count and iron have increased dramatically, suggesting that the wrong oil might have been added to the machine, and that has induced high wear rates.
• Common alarms can be set for all parameters and displayed on a common graph. For example, one standard deviation might represent a caution, while two might suggest a critical situation (Table 1).
• The noise effects of normal variation are factored out because each parameter's percentile calculations are based upon its own standard deviation.
• Percentiles can be understood by anyone, including management.
Particle count, mg KOH/g of oil, etc. are not so obvious to the untrained observer.
• Non-oil analysis parameters ranging from vibration overalls to skirt length can easily be incorporated into the graphics and, thus, into the analysis and decision processes.
• The technique is fast and easy.

Table 1

Transfer Data into Percentiles

1. Using historical data, determine the average value (mean) for each parameter (Equation 1).
2. Calculate the standard deviation for each parameter using the same data set used to calculate the mean (Equation 2).
3. Generate a Z-Score by subtracting the mean value from the current reading, then divide by the standard deviation (Equation 3). This number tells how many standard deviations you are over or under the mean value.
4. Use cumulative normal distribution tables. Most commercially available spreadsheet programs generate a cumulative distribution value for a given Z-Score.
5. Present the normal distribution value as a percentile value.

For example, suppose a machine has a mean iron level of 15 ppm and a standard deviation of 3 ppm. An observed value of 18 ppm would yield a Z-Score of 1, or one standard deviation greater than the mean. The 18 ppm value would occur at the 84th percentile. If our observation occurred at the mean (15 ppm), our value would be the 50th percentile. Table 1 illustrates where various Z-Scores occur on the cumulative normal distribution curve.

This and other techniques can be effectively applied to simplify oil analysis data and ease the diagnostic process. Try variations on the percentile theme, like using a 10-sample moving average and standard deviation in place of the fixed values where it is appropriate. Such simplification is important for oil analysis to gain acceptance into the mainstream of the decision-making process.
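The five-step recipe above can be carried out in a few lines. A sketch in Python using the standard library's NormalDist (the iron-ppm numbers are the article's own example):

```python
from statistics import NormalDist

def percentile_rating(value, mean, std):
    """Convert a raw reading to a percentile rating.

    z is the number of standard deviations above or below the mean
    (Equation 3); the cumulative normal distribution turns it into a
    percentile on a common 0-100 scale.
    """
    z = (value - mean) / std
    return 100 * NormalDist().cdf(z)

# Iron: mean 15 ppm, standard deviation 3 ppm, current reading 18 ppm
rating = percentile_rating(18, 15, 3)   # z = 1 -> about the 84th percentile
```

Because every parameter is mapped through its own mean and standard deviation, wildly different units (ppm, cSt, mg KOH/g) all land on the same 0-100 scale and can share one trend graph.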
Making Statistics Work for You

When attempting to schedule maintenance actions based upon oil analysis data, simple statistics can be a powerful tool to simplify data, identify relationships between oil analysis parameters and increase confidence in conclusions. Statistical techniques like correlation analysis can help ensure that we are making the right decision. They can also help focus our efforts to uncover the root cause of the abnormal condition.

Review of the oil analysis data from nine identical hydraulic machines performing the same function in the same environment reveals substantial variation in zinc levels as a function of the time the oil is in service. Further investigation leads us to conclude that acid numbers (AN) also decline as a function of time. Upon calculating the correlation, we see that zinc and AN values are highly correlated (Figure 2).

Figure 2

The Role of ZDDP

We know that the zinc dialkyldithiophosphate (ZDDP) used in most antiwear oils reacts with the potassium hydroxide (KOH) reagent used to measure AN, elevating the numbers when the oil is new. The AN decreases as the additive is depleted. Once the ZDDP antiwear/antioxidant additive is depleted, it leaves the base stock with reduced protection from oxidation, and acid numbers will begin to increase from their minimum point as the base stock degrades. Also, once the ZDDP is depleted, the machine is subject to increased wear due to lost antiwear protection from the fluid.

Zinc and AN tend to correlate well in most oils equipped with a ZDDP antiwear additive. It is important to quantify this correlation with test data specific to an application. The analysis indicates that one of the machines is running with low zinc levels and low AN. Because both zinc and the acid numbers have depleted, and knowing that the correlation between these two parameters is strong in this application, we have high confidence that our ZDDP additive is depleted, perhaps to the point of exhaustion.
This is a situation that warrants maintenance action. It is likely that the oil has simply reached the end of its life. Alternatively, abnormal stress might have expedited degradation. Additional oil analysis and inspection of the machine should identify whether the degradation is normal or abnormal. If abnormal, the process should reveal the specific root cause of the problem. Once the root cause is identified, a maintenance action can be scheduled to correct the situation. If the rate of degradation is deemed normal, we simply change or reconstitute the oil without further investigation.

By understanding how various oil parameters correlate, we can investigate abnormal symptoms and make decisions with a strong sense of confidence that we are addressing real maintenance problems, not just chasing false alarms.
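The correlation the article relies on is a plain Pearson coefficient, which is simple enough to compute by hand. A self-contained sketch (the zinc/AN readings below are invented for illustration, not real test data):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical readings over time: zinc (ppm) and acid number (mg KOH/g)
zinc = [1100, 1050, 980, 900, 820, 750]
an = [1.9, 1.8, 1.6, 1.5, 1.3, 1.2]
r = pearson(zinc, an)   # strongly positive: both deplete together
```

A coefficient near +1, as here, is what justifies the "both deplete together" inference: if zinc and AN are both low and historically move together, confidence that the ZDDP additive is exhausted is high.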
Algebraic and Geometric Topology (Lecture Notes in Mathematics 1126): proceedings of a conference held at Rutgers University, New Brunswick, USA, July 6-13, 1983. Edited by A. Ranicki (1948-), N. Levitt (1943-), and F. Quinn (1946-). Berlin; New York: Springer-Verlag, c1985. 423 p., ill., 25 cm. Includes bibliographies per chapter. Subjects: Topology, Congresses; Algebraic topology, Congresses. Dewey: 510 s; 514. ISBN 0387152350 (U.S.: pbk.)
Oregon Tag Draw Percentages

The Mythical 100%

You have 13 200-series elk points. You find a hunt that looks interesting, 286Q East Hebo. Here are the draw odds:

286Q East Hebo
Preference Points: 0 - 9 | 10 | 11 | 12 | 13+
2019 Resident Actual Odds: 3% | 50% | 100% | 100% | 100%

Let's say this hunt matches your parameters: season dates, weapon, bag limit, and the location is all good. With 13 points and the information listed above, how certain are you that you'll draw this tag?

The Mythical 100%

You choose a hunt where your odds of drawing are 100% at your point level. You apply. You plan and forego other hunting opportunities to prepare for this particular hunt because you are certain you will draw. Your fall schedule is based on drawing this tag; it has 100% draw odds, you are all in, and then you do not draw. This is what happened to me 20 years ago.

The truth is, people see 99% and 100% so differently. People see 100% as a guarantee, and they want a guarantee when it comes to drawing a tag. However, when it comes to the Oregon controlled draw, there will always be a level of unpredictability and uncertainty, and there are no guarantees. Sometimes 100% draw odds is not a sure bet.

Last fall I applied for an Oregon antelope tag. My listed odds were 100% at my point level. While I knew the math was right, I also knew that I likely would not draw... and I did not draw. An aspect of the hunt I had seen before indicated that a small change in application behavior might result in not drawing the tag. You can call it "human intuition" but it's more of an educated guess.

I cannot inject "intuition" or observation into the predictions. What I can do is share some of the factors and traits that make listed odds, particularly 100%, vulnerable to change. I'd like to share some of the things you need to consider when you see 100%. If you are aware of other traits that should be considered, please send them to me and I will add them to this list.
The number of tags

Looking again at East Hebo:

286Q East Hebo (Tags: 3)
Preference Points: 0 - 9 | 10 | 11 | 12 | 13+
2019 Resident Actual Odds: 3% | 50% | 100% | 100% | 100%

I added a new piece of information: the number of tags. The number of tags allocated for a hunt is by far the single biggest factor to consider when it comes to 100% draw confidence. East Hebo has 3 tags, which means two tags in the 75% pool and 1 in the 25% pool.

When you see 100%, always ask: How many people would it take to impact my odds? I'll phrase this another way: how many people jumping into this hunt at 14 points will it take to ruin your chances of drawing? With 13 points and East Hebo, it would take just 2 applicants with 14 or more points.

The second question to ask: How many people are capable of impacting my odds? How many people in Oregon have 14 or more elk points? There are hundreds of people with 14 or more points. Given that there are a lot of people capable of ruining your odds and that it will only take two of them to achieve this, the prediction for this hunt should be viewed with low confidence. The fact is, even with 17 points you cannot be confident of drawing. Two people could jump in at 18 or more points.

Here is a fair question to ask: if the listed odds are too high (100%), why don't I fix them? The odds are based on what has happened in the past. It is the best predictor of what will happen next. If the listed odds are 100% with 11 points, as in this example, it means that people have not been jumping into this hunt with 12 or more points, whether from the 299 pool or from another hunt. However, this could change at any moment, and it just takes two people. The math says 100% at 13 points; human intuition says maybe, maybe not.

Your distance from the 25% pool

I am going to change a few attributes on 286Q.
Let’s say there are 300 tags, you have 11 points and here are the odds: 86Q East Hebo Preference Points: 0 – 9 10 11 12 13+ 2019 Resident Actual Odds 3% 5% 100% 100% 100% When the draw occurs, its starts with the 75% pool draw. It starts at the highest point level and looks at first choice applicants, if any, at that level. It gives out tags and once everyone at the highest level has received a tag, it moves to the next lower point level. The draw continues to work its way down the point levels until it runs out of 75% pool tags. I call the point level where the 75% tags run out “the break point”. Every hunt has a break point (unless its 100% to draw with zero points), every point level above the break point is 100% odds, every point level below the break point is the 25% pool odds. In the 286Q example above, the break point odds (5%) is very close to the 25% pool odds. This is telling you something important. It means that while there are 75% pool tags predicted for the 10 point level, there will not be very many. This also means that if just a few people decide to apply with 11 or more points, the 11 point level will become the break point. If you have 11 points and that level becomes the break point, then you may not draw. 286Q East Hebo Tags: 300 Preference Points: 0 – 10 11 12 13 14+ 2019 Resident Actual Odds 3% 87% 100% 100% 100% This is what happened with my hunt last year. I was listed as 100%, but the actual draw showed it became 87% and I fell into the 13% that didn’t draw. Draw Odd Consistency When you see a hunt like this, run in the other direction… 2014 Resident Actual Odds 3% 38% 100% 100% 100% 100% 2015 Resident Actual Odds 2% 2% 2% 2% 87% 100% 2016 Resident Actual Odds 3% 89% 100% 100% 100% 100% 2017 Resident Actual Odds 4% 4% 71% 100% 100% 100% 2018 Resident Actual Odds 14% 100% 100% 100% 100% 100% 2019 Resident Actual Odds 1% 1% 51% 100% 100% 100% … unless you know why, in which case, use that inside knowledge to your advantage. 
Oregon has hunts where the draw odds bounce around. This makes a mathematical prediction for the coming years an impossible task. It's like predicting the next winning lottery ticket based on what numbers have come up in the past. My algorithm is going to produce a prediction, but for hunts that bounce around without a numerical pattern, that prediction is likely going to be wrong.

A change by the ODFW

Every year the ODFW changes hunts. Sometimes the ODFW changes the hunt id but the hunt parameters remain the same. Sometimes the ODFW keeps the hunt id but changes the parameters of the hunt. When a hunt has a change, you need to decide if it is really the same hunt and will draw the same applicants or whether it is now a new hunt. Ask this: Has the hunt changed so much that whatever happened in the past with this hunt has no bearing on what will happen next?

Here is a (hypothetical) example: Hunt 286V has been the Starkey Exp. Forest cow tag for the past 5 years. This year the ODFW makes a small change, just to the bag limit: it changes from "cow elk" to "any elk". A change like this makes this a completely different hunt, and past applicant numbers are meaningless.

The Odds Mess with the Odds

I recently received a question on this real hunt:

Preference Points: 0 | 1 | 2+
2013 Resident Actual Odds: 35% | 100% | 100%
2014 Resident Actual Odds: 25% | 91% | 100%
2015 Resident Actual Odds: 25% | 86% | 100%
2016 Resident Actual Odds: 25% | 78% | 100%
2017 Resident Actual Odds: 24% | 74% | 100%
2018 Resident Actual Odds: 27% | 81% | 100%
2019 Resident Actual Odds: 39% | 99% | 100%
2020 Resident Prediction: 77% | 100% | 100%

The question was simple: do you really believe the odds will be 77% with zero points? My answer was simple as well: no, the odds will be lower, something in the 20-30% range. If you look at the numbers, you can see the hunt is getting easier to draw with 1 point. The math is right; if that trend continued, the prediction would be spot on.
However, the trend will not continue, because folks will see these odds, particularly applicants with 1 point, and focus on the 100%. It will pull them in.

Here is another actual hunt (there are lots of hunts like this):

Preference Points: 0 | 1+
2014 Resident Actual Odds: 100% | 100%
2015 Resident Actual Odds: 62% | 100%
2016 Resident Actual Odds: 39% | 100%
2017 Resident Actual Odds: 71% | 100%
2018 Resident Actual Odds: 61% | 100%
2019 Resident Actual Odds: 27% | 100%
2020 Resident Prediction: 100% | 100%

This hunt is listed as 100% with zero points. I am certain that this hunt will not be 100% with zero points in the actual draw. If you look at the numbers, you can see how the math arrives at 100%. Yet these odds will catch the attention of people with no points, applicants who are looking for something to apply for. They will see this hunt, and it will pull enough of them in that the odds will drop.

The point level is important; keep in mind that every applicant has at least zero points and can impact the odds of a hunt at the zero point level. When you see that mythical 100% draw prediction, please look a little closer.
Analytic signals of discrete-time inputs

The dsp.AnalyticSignal System object™ computes analytic signals of discrete-time inputs. The real part of the analytic signal in each channel is a replica of the real input in that channel, and the imaginary part is the Hilbert transform of the input. In the frequency domain, the analytic signal doubles the positive frequency content of the original signal while zeroing-out negative frequencies and retaining the DC component. The object computes the Hilbert transform using an equiripple FIR filter.

To compute the analytic signal of a discrete-time input:

1. Create the dsp.AnalyticSignal object and set its properties.
2. Call the object with arguments, as if it were a function.

To learn more about how System objects work, see What Are System Objects? This object supports C/C++ code generation and SIMD code generation under certain conditions. For more information, see Code Generation.

anaSig = dsp.AnalyticSignal returns an analytic signal object, anaSig, that computes the complex analytic signal corresponding to each channel of a real M-by-N input matrix.

anaSig = dsp.AnalyticSignal(order) returns an analytic signal object, anaSig, with the FilterOrder property set to order.

anaSig = dsp.AnalyticSignal(Name,Value) returns an analytic signal object, anaSig, with each specified property set to the specified value.

Unless otherwise indicated, properties are nontunable, which means you cannot change their values after calling the object. Objects lock when you call them, and the release function unlocks them. If a property is tunable, you can change its value at any time. For more information on changing property values, see System Design in MATLAB Using System Objects.

FilterOrder — Filter order used to compute Hilbert transform
100 (default) | scalar integer
Order of the equiripple FIR filter used in computing the Hilbert transform, specified as an even integer scalar greater than 3.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

y = anaSig(x) computes the analytic signal, y, of the M-by-N input matrix x, according to the equation

y = x + jH{x}

where j is the imaginary unit and H{x} denotes the Hilbert transform. Each of the N columns in x contains M sequential time samples from an independent channel. The method computes the analytic signal for each channel.

Input Arguments

x — Data input
vector | matrix
Data input, specified as a vector or a matrix.
Data Types: single | double
Complex Number Support: Yes

Output Arguments

y — Analytic signal output
vector | matrix
Analytic signal output, returned as a vector or a matrix.
Data Types: single | double
Complex Number Support: Yes

Object Functions

To use an object function, specify the System object as the first input argument. For example, to release system resources of a System object named obj, use this syntax: release(obj)

Common to All System Objects
step — Run System object algorithm
release — Release resources and allow changes to System object property values and input characteristics
reset — Reset internal states of System object

Compute the Analytic Signal

Compute the analytic signal of a sinusoidal input.

t = (-1:0.01:1)';
x = sin(4*pi*t);
anaSig = dsp.AnalyticSignal(200);
y = anaSig(x);

View the analytic signal.

subplot(2,1,1), plot(t, x)
title('Original Signal');
subplot(2,1,2), plot(t, [real(y) imag(y)]);
title('Analytic signal of the input')
legend('Real signal','Imaginary signal',...
The transform includes phase information that depends on the phase of the original. The Hilbert transform is useful in calculating instantaneous attributes of a time series, especially the amplitude and the frequency. The instantaneous amplitude is the amplitude of the complex Hilbert transform. The instantaneous frequency is the time rate of change of the instantaneous phase angle. For a pure sinusoid, the instantaneous amplitude and frequency are constant. The instantaneous phase, however, is a sawtooth, reflecting how the local phase angle varies linearly over a single cycle.

The algorithm computes the Hilbert transform using an equiripple FIR filter of the specified order n. The linear phase filter is designed using the Remez exchange algorithm and imposes a delay of n/2 on the input samples.

Extended Capabilities

C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Usage notes and limitations: See System Objects in MATLAB Code Generation (MATLAB Coder).

This object also supports SIMD code generation using the Intel® AVX2 code replacement library under these conditions:

• Input signal is real-valued.
• Input signal has a data type of single or double.

The SIMD technology significantly improves the performance of the generated code. For more information, see SIMD Code Generation. To generate SIMD code from this object, see Use Intel AVX2 Code Replacement Library to Generate SIMD Code from MATLAB Algorithms.

Version History
Introduced in R2012a
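The frequency-domain definition quoted in the description (double the positive frequencies, zero the negative ones, keep DC) can be demonstrated without the FIR filter. The following is a pure-Python, DFT-based sketch, not the object's equiripple implementation, and it is O(n^2), so it is only for illustration:

```python
import math, cmath

def dft(x, inverse=False):
    """Naive DFT / inverse DFT (O(n^2), for demonstration only)."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(s * 2j * math.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def analytic(x):
    """Analytic signal: zero negative frequencies, double positive ones."""
    n = len(x)
    spec = dft(x)
    gain = [0.0] * n
    gain[0] = 1.0                          # keep the DC component
    if n % 2 == 0:
        gain[n // 2] = 1.0                 # keep the Nyquist bin
    for k in range(1, (n + 1) // 2):
        gain[k] = 2.0                      # double positive frequencies
    return dft([s * g for s, g in zip(spec, gain)], inverse=True)

# A cosine's analytic signal is the complex exponential cos + j*sin
x = [math.cos(2 * math.pi * k / 8) for k in range(8)]
y = analytic(x)
```

As the documentation states, the real part of y reproduces the original input, and the imaginary part is the Hilbert transform (here, the matching sine).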
Rule of replacement

In logic, a rule of replacement^[1]^[2]^[3] is a transformation rule that may be applied to only a particular segment of an expression. A logical system may be constructed so that it uses either axioms, rules of inference, or both as transformation rules for logical expressions in the system. Whereas a rule of inference is always applied to a whole logical expression, a rule of replacement may be applied to only a particular segment. Within the context of a logical proof, logically equivalent expressions may replace each other. Rules of replacement are used in propositional logic to manipulate propositions.

Common rules of replacement include de Morgan's laws, commutation, association, distribution, double negation,^[4] transposition, material implication, material equivalence, exportation, and tautology.
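A rule of replacement is sound exactly when its two patterns are logically equivalent, and for propositional logic that can be checked mechanically with a truth table. A small Python sketch verifying de Morgan's law and double negation:

```python
from itertools import product

def equivalent(f, g, nvars):
    """True if f and g agree on every row of the truth table."""
    return all(f(*row) == g(*row)
               for row in product((False, True), repeat=nvars))

# De Morgan: not(p and q)  may replace  (not p) or (not q)
de_morgan = equivalent(lambda p, q: not (p and q),
                       lambda p, q: (not p) or (not q), 2)

# Double negation: not(not p)  may replace  p
double_neg = equivalent(lambda p: not (not p), lambda p: p, 1)
```

Because the two sides of each rule evaluate identically under every valuation, either side may be substituted for the other inside any larger expression without changing its truth value, which is what distinguishes replacement from an ordinary rule of inference.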
Math Is Fun Forum Registered: 2005-06-28 Posts: 48,282 Re: Introduction Hi Ebere, Welcome to the forum! From: Bumpkinland Registered: 2009-04-12 Posts: 109,606 Re: Introduction Welcome to the forum. In mathematics, you don't understand things. You just get used to them. If it ain't broke, fix it until it is. Always satisfy the Prime Directive of getting the right answer above all else. Registered: 2017-05-04 Posts: 2 Re: Introduction zetafunc wrote: Thank you Registered: 2014-05-21 Posts: 2,436 Re: Introduction Registered: 2017-05-04 Posts: 2 Hello, I'm new here It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel. Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Frequency domain transform

F/D/T transforms the time domain array data into spectral frequency domain data using the Fast Fourier Transform (FFT). As a default setting, the full length impulse responses are transformed with a blocksize `NFFT=2^(nextpow2(lengthofimpulseresponses))`. The blocksize can be raised by using FFToversize, which is a multiplier of NFFT (Default = 2).

A limited time section can be picked out by defining `firstSample` and `lastSample`. The length of this section is `(lastSample-firstSample)`. This way unnecessary data can be removed, or a running window can be realized to obtain time slices and resolve the temporal information within the captured sound field.

To save processing power and calculation time, the SOFiA chain works on the half-sided FFT spectrum only (NFFT/2+1). Therefore F/D/T produces half-sided spectrum output signals (fftData). Later the makeIR function automatically reconstructs the double-sided spectrum to compute impulse responses.

[fftData, kr, f] = sofia_fdtVSA(timeData, FFToversize,...
    firstSample, lastSample)

fftData: Frequency domain data ready for the Spatial Fourier Transform (SOFiA S/T/C Core)! To save processing power, the SOFiA chain always works on the half-sided FFT spectrum only (NFFT/2+1).

kr: kr-values of the delivered data (required e.g. for the modal radial filters SOFiA M/F).

f: Absolute frequency scale. Not really required, but good for control purposes or scaling a spectrum plot.

timeData: Struct with minimum fields:
    .impulseResponses [Channel X Samples]

FFToversize: FFToversize raises the FFT blocksize. [default = 2] An FFT of the blocksize (FFToversize*NFFT) is applied to the time domain data, where NFFT is determined as the next power of two of the signalSize, which is signalSize = (lastSample-firstSample). The function will pick a window of (lastSample-firstSample) for the FFT.

firstSample: First time domain sample to be included.

lastSample: Last time domain sample to be included.
If firstSample and lastSample are not defined, the full IR will be transformed: [default: firstSample = 1; lastSample = size(timeData.impulseResponses, 2);]

Call this function with a running window (firstSample->lastSample+td), increasing td, to obtain time slices. This way you resolve the temporal information within the captured sound field.
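The blocksize and scale bookkeeping described above is straightforward to reproduce. Here is a hedged Python sketch (the function name, sampling rate, and array radius are my own; kr = 2*pi*f*r/c is the usual definition for a spherical array of radius r, which is an assumption and not taken from this page):

```python
import math

def fdt_params(window_len, fs, radius, fft_oversize=2, c=343.0):
    """Blocksize, half-sided frequency scale and kr values.

    NFFT is the next power of two of the window length times
    FFToversize; the half-sided spectrum has NFFT/2 + 1 bins.
    """
    nfft = fft_oversize * (1 << (window_len - 1).bit_length())
    bins = nfft // 2 + 1
    f = [k * fs / nfft for k in range(bins)]          # absolute frequencies
    kr = [2 * math.pi * fk * radius / c for fk in f]  # kr per bin
    return nfft, f, kr

# 1000-sample window at 48 kHz, 5 cm array radius, FFToversize = 2
nfft, f, kr = fdt_params(1000, 48000, 0.05)
# nfft = 2 * 1024 = 2048, giving 1025 half-sided bins
```

With these scales in hand, a running window (shifting firstSample and lastSample) would call the same bookkeeping once per time slice.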
Fixed Income OAS questions

I've seen this same question or some variation pop up on nearly every mock. It asks us to pick the best bond for our client, and it seems like the answer every time is to pick the one with the highest OAS, regardless of z-spread or nominal spread. Is this generally the bible rule or are there exceptions?

Usually you want to pick the bond with the highest OAS cause it means it's underpriced and lowest option cost. Depending on what the problem says, you might have to look at the interest rate. Higher OAS with LOWER option cost.

What about for callable bonds, Z-spread >= OAS? So, selecting the highest OAS is not always right? Also, I recall that you compare the OAS against the spread on an option-free bond, in which case you will choose the other option-free bond if it has a higher spread >>> still have not fully wrapped my head around these concepts.

If Z-spread > OAS in a callable bond, you are OK. Since the option cost is positive, it all makes sense: you pick HIGH OAS, LOW OPTION COST. If Z-spread < OAS in a callable bond, this is impossible, since a negative option cost means a PUT, which is not in a callable bond. If a callable bond has this pattern, then buy it until you run out of money. If OAS is negative, and your benchmark is treasuries, then you know you are undervalued since you have less risk than the risk-free curve. You never compare OAS on an option-free bond since OAS = Z-spread.

OAS negative should be overvalued, am I right?

If OAS is negative, then the option cost is through the roof. Then yes.

If there are two bonds, one is option-free with a nominal spread of 8.5%, and the other has an OAS = 8%, how do you choose between the two?

It depends on the option, is it a put or a call… if it's a call it's very hard to say since one is estimated on the yield and the other one on spot, so if you don't have more comparables you cannot say… if it's a put, then I would go for the option-free one since the put should give you more OAS than nominal in most cases. But as I said, comparing OAS and nominal is very disturbing since it's yield and spot. Z-spread is evaluated on spot, just like the OAS, so it's easier to compare them.

If I then find that the option-free bond has a z-spread of 7.5%, what do I conclude?

If OAS = 8%, then if the bond is callable it's undervalued since the option cost is negative, because you assume that a callable bond will have a positive option cost. If it's putable, you can't tell if you can't compare it with other similar putable bonds.

Maybe I need to review FI again, but my understanding is that if you are valuing a bond that has no options using a z-spread, then you have determined the proper spread for the bond. Now if you check another bond, and you adjust for the option, regardless of whether it's callable or puttable, you have determined its OAS, so the option is no longer relevant. Since the OAS has been calculated using spot rates, and the z-spread is also calculated based on spot rates, you have two bonds with two spreads that can be compared directly. I don't get why I need to worry about the option cost… I have already factored that in and got the OAS. Again, I may be off on this one.

Dreary, what SS is saying is that there are three types of spreads: nominal, Z, and OAS. Don't confuse nominal with Z or use them interchangeably (which it seems like you did).

Nominal: nobody really pays attention to it in this case. Nominal is just YTM - benchmark yield. It doesn't tell us much; YTM just gives us a single yield rate.

Z spread: we must assume that each cash flow is discounted at its appropriate spot rate. If we use a binomial tree, we are assuming some volatility around those benchmark rates.
However, if we discount a bond at the benchmark spot rates (or using the binomial tree with forward rates), then we will probably come upon a price that is different than the current market price (i.e., if treasury spot rates are the appropriate benchmark rates, these are most likely lower than the required rates given that the risk of the bond > risk of the treasury bond). That is, we will come up with a higher price. We know that these rates can't be appropriate, however, because the risk of a bond (before assuming options here) from Ford is much different than a Treasury bond. So, we need to add some amount to each of those rates to make sure that the market price is equal to the price that our binomial tree spits out. How much do we add to each rate? The z-spread.

Imagine FORD issues a noncallable bond, and we discount each cash flow at the benchmark rates to come up with a price of 120. But the market price is more like 102, so we should probably adjust those benchmark rates up to a level that will produce the market price. The z-spread is a constant spread added to all of the benchmark rates so that our model spits out 102 instead of 120. Higher rate = lower price.

OAS: with OAS, we are accounting for the fact that the CASH FLOW will change due to the embedded option. I.e., if in our tree rates fall to a certain point with a callable bond, the issuer will probably call the bond, in which case the bond price is limited by the par value (the issuer will pay par to redeem). So, if the falling interest rate in our binomial tree is such that the price at a node becomes 115 when the rate falls below the call rate, we don't discount the value of 115 anymore because we expect the bond to be called at 100. So, the CASH flow will be lower at that time period (remember, the embedded option has no bearing on the benchmark rates) - so now if we discount the cash flows using the benchmark rates, we will get a lower valuation, right?

Assume a similar example as above, except now FORD has a callable bond that is otherwise identical to the one they issued above. (Assume the volatility in this example is set high enough that the bond will be called in the down scenario.) We can expect the cash flows to be lower if the bond is called, because we are expecting the price to be 115 in a down scenario, but in reality we will only get 100. So, the value will be lower than the first example, so now we don't need to add as much spread to the benchmark rates for valuation because the decrease in cash flows is already reducing the value closer to the market price.

The option cost is relevant when you are comparing two bonds with embedded options. If two bonds have the same OAS but one has a lower option cost (meaning value of noncallable bond = value of callable + option cost), take the one with the lower option cost. OAS vs z-spread is relevant when comparing a bond without an embedded option to a bond with an embedded option (I'm 95% sure).

That's very helpful… but one confusing part is that sometimes we say the z-spread = 8%, for example, while at other times we say the z-spread is only the number of basis points we add to the benchmark spot rates… however, the z-spread is a constant number of points while the spot rates are many, so do I say the z-spread is, for example, 75 bps, or add it to the spot rate? But there are many spot rates! Add it to which one?

Let us look at some scenario: Callable Bond A has a nominal spread (NS) = 150 bps, i.e., NS = 1.5%. Option-free Bond B has NS = 125 bps. A's Z-spread = 125 bps. A's OAS = 100 bps. So, Bond A appears to be undervalued because it has a higher NS (150 bps versus 125 bps). But when we look at its OAS of 100 bps, we see that Bond B is better (it has a NS of 125 bps). To make a long story short, I need to go back and read the darn thing!

A z-spread is the spread added to ALL of the spot rates. You can't really quote the rate with it, because if you add it to all of the spot rates it will be a different rate at each point in time. Unfortunately this section of the book is laid out poorly IMO (i.e., goes over a concept and explains it a few pages later).

For example, suppose the treasury curve from 1 year to 3 years is given in yearly increments. If we use these rates to discount the cash flows of a Ford bond, we might get some really high price that doesn't make sense, because we know the risk of Ford isn't similar to that of the US Treasury. We could add a different amount to each of the rates in this curve, but that doesn't help us much because it's tough to compare bonds this way. Say we use these rates and get a price of 120, where the market price is 102. Enter the z-spread: what is the constant amount we can add to all of these rates to bring the value from the model to 102? For the sake of argument, let's add 75 bps to each rate. Now, using these rates, our bond price is 102, which is the market price. How much did we have to add to each rate to prevent arbitrage? 75 bps = z-spread.

You don't compare nominal spread to z-spread or OAS. Basically both z-spread and OAS are just the bps that make your discounted cash flows equal to market value. Like people were saying before, nominal spread is just YTM - treasury rate. It tells no info about bond return.

For your question about spot rates, it's basically like this. Say you have 20 cash flows for your bond. Each cash flow has a different spot rate. The z-spread is basically the bps added to all the spot rates so your discounted cash flows equal the market price. The discount rate on each cash payment is different (the spot rate of that period plus the spread), not just because of compounding, but also because of changing rates. That's why higher z-spread = higher return, all else equal. A higher spread means less money paid for the bond: the cash flows will be the same in the future regardless of spot rate (compared to other bonds with the same coupon, maturity and risk), it's just that you're paying less for them.

That's clear… how about some problems on OAS/z-spread stuff… anyone?

2 am in Hong Kong, but if you can wait, I think there were 3 hard questions in Schweser exam book 2. I can post them to you on this thread tomorrow. In 14 hours maybe.

The OAS is what you will earn on average (above the spot). You make 10,000 scenarios, you calculate the yield you earn on each scenario, then you average it. So it's basically saying that you will earn that spread over spot on average. The z-spread is the spread over the spot of 1 scenario (the actual spot rate curve). In fact, that scenario is the base case (the actual) scenario of your Monte Carlo simulation. So if you are saying that OAS is smaller than z-spread, it's like saying that the 1 scenario you are evaluating is an above-average scenario.

If I tell you: 1) you can buy a bond with a z-spread of 3% and an OAS of 2%, or 2) you can buy a bond with a z-spread of 3% and an OAS of 1% - similar credit risk/maturity - if I were you, I would prefer to earn 2% on average than 1% on average, even if the base case scenario (the current yield curve) is telling me that if everything stays the same I will earn 3%.

Can someone explain in layman's terms how the OAS considers the option cost when in fact the spread excludes the cost of the option… seems counterintuitive?
Go Figure

Place the numbers on the Go Figure grid so that the four calculations are correct. The two digits on the leftmost yellow spaces should add up to the number on the red space below. In a similar way the other three calculations should be correct. A good strategy is to start by indicating which numbers could not possibly go on certain spaces. To help you record this information, small clickable tags will appear when you click the button below.
How to count Cells with Date Range in Excel

We are going to use COUNTIFS to count cells within a date range. In this tutorial, we will learn how we can count cells within a specific date range, using the COUNTIFS formula, which applies multiple criteria to count the cells. Microsoft Excel 2021 is used. We will be using the following data for our tutorial:

Table for counting date ranges

In the above image, we see a table with dates ranging from 2020 to 2022. Note that the format of the date will depend on your operating system settings. For this tutorial, we are using the date format Day/Month/Year. We will be counting the number of employees who have joined the company within a specific date range. We will need the following parameters:

• Start Date
• End Date
• Cell Range for which we will count if a cell meets the date criteria.

In this example, we are going to count how many cells fall between July 2022 and December 2022. For this, we will use the COUNTIFS formula.

Enter =COUNTIFS( in any cell where you want the output.

Enter the cell range. In this case, there is a date for every employee, so we can select that column as the date range. The formula becomes: =COUNTIFS(B14:B33

Now, we enter the start date from which we want to count. In our case, it is 01/07/2022 (07/01/2022 in American format).

Note that we enter a comma after the cell range, and we enter the criteria within quotation marks. Before the start date, we enter the operator >=, so it will count the entered date and the dates that come after it. So, the formula becomes: =COUNTIFS(B14:B33,">=01/07/2022"

Do not put spaces in the criteria that are written within quotation marks.

Now, we need to enter the end date, but before entering the end date, we will need to enter the cell range again. We will use the same cell range we used earlier for the start date. So the formula becomes: =COUNTIFS(B14:B33,">=01/07/2022",B14:B33

And finally, after putting a comma, we enter the end date with the operator <=: 31/12/2022 (12/31/2022 in American format). So, the final formula becomes: =COUNTIFS(B14:B33,">=01/07/2022",B14:B33,"<=31/12/2022")

Now, we can see that we have counted the number of cells within the date range. Similarly, we can use other dates.
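For comparison, the same inclusive date-range count can be sketched in pandas. The employee table above is not reproduced here, so the joining dates below are made up for illustration:

```python
import pandas as pd

# Hypothetical joining dates standing in for the B14:B33 column above
joined = pd.Series(pd.to_datetime([
    "2022-03-15", "2022-07-01", "2022-09-20", "2022-12-31", "2023-01-05",
]))

start = pd.Timestamp("2022-07-01")   # ">=01/07/2022" criterion
end = pd.Timestamp("2022-12-31")     # "<=31/12/2022" criterion

# Both comparisons are inclusive, matching the >= and <= operators in COUNTIFS
count = int(((joined >= start) & (joined <= end)).sum())
```

Here `count` is 3: the two boundary dates are included, just as COUNTIFS includes them.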
Financial Economics

Whilst many undergraduate finance textbooks are largely descriptive in nature, the economic analysis in most graduate texts is too advanced for final year undergraduates. This book bridges the gap between these two extremes, offering a textbook that studies economic activity in financial markets, focusing on how consumers determine future consumption and on the role of financial securities. Areas covered in the book include:

• An examination of the role of finance in the economy using basic economic principles, eventually progressing to introductory graduate analysis.
• A microeconomic study of capital asset pricing when there is risk, inflation, taxes and asymmetric information.
• An emphasis on economic intuition using geometry to explain formal analysis.
• An extended treatment of corporate finance and the evaluation of public policy.

Written by an experienced teacher of financial economics and microeconomics at both graduate and postgraduate level, this book is essential reading for students seeking to study the links between economics and finance and those with a special interest in capital asset pricing, corporate finance, derivative securities, insurance, policy evaluation and discount rates.

Chris Jones is Senior Lecturer at the School of Economics at The Australian National University.

Financial Economics
Chris Jones

First published 2008 by Routledge, 2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN. Simultaneously published in the USA and Canada by Routledge, 270 Madison Avenue, New York, NY 10016. Routledge is an imprint of the Taylor & Francis Group, an informa business. This edition published in the Taylor & Francis e-Library, 2008. "To purchase your own copy of this or any of Taylor & Francis or Routledge's collection of thousands of eBooks please go to www.eBookstore.tandf.co.uk."
© 2008 Chris Jones. All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.
British Library Cataloguing in Publication Data: A catalogue record for this book is available from the British Library.
Library of Congress Cataloging-in-Publication Data: Jones, Chris, 1953- Financial economics/Chris Jones. p. cm. Includes bibliographical references and index. 1. Finance. 2. Economics. I. Title. HG173.J657 2008 332–dc22 2007032310.
ISBN 0-203-93202-1 Master e-book ISBN
ISBN10: 0-415-37584-3 (hbk); ISBN10: 0-415-37585-1 (pbk); ISBN10: 0-203-93202-1 (ebk)
ISBN13: 978-0-415-37584-9 (hbk); ISBN13: 978-0-415-37585-6 (pbk); ISBN13: 978-0-203-93202-5 (ebk)

Contents

List of figures
List of numbered boxes
List of tables

1 Introduction
  1.1 Chapter summaries 3
  1.2 Concluding remarks 12

2 Investment decisions under certainty
  2.1 Intertemporal consumption in autarky 16
    2.1.1 Endowments without storage 16
    2.1.2 Endowments with storage 18
    2.1.3 Other private investment opportunities 20
  2.2 Intertemporal consumption in a market economy 22
    2.2.1 Endowments with atemporal trade 22
    2.2.2 Endowments with atemporal trade and fiat money 23
    2.2.3 Endowments with full trade 25
    2.2.4 Asset economy with private investment opportunities 31
    2.2.5 Asset economy with investment by firms 34
    2.2.6 Asset economy with investment by firms and fiat money 37
  2.3 Asset prices and inflation 40
    2.3.1 The Fisher effect 41
    2.3.2 Wealth effects in the money market 44
  2.4 Valuing financial assets 48
    2.4.1 Term structure of interest rates 49
    2.4.2 Fundamental equation of yield 52
    2.4.3 Convenient pricing models 54
    2.4.4 Compound interest 55
    2.4.5 Bond prices 57
    2.4.6 Share prices 58
    2.4.7 Price–earnings ratios 60
    2.4.8 Firm valuations and the cost of capital 63
  Problems 65

3 Uncertainty and risk 71
  3.1 State-preference theory 73
    3.1.1 The (finite) state space 73
    3.1.2 Debreu economy with contingent claims 75
    3.1.3 Arrow–Debreu asset economy 77
  3.2 Consumer preferences 83
    3.2.1 Von Neumann–Morgenstern expected utility 86
    3.2.2 Measuring risk aversion 87
    3.2.3 Mean–variance preferences 89
    3.2.4 Martingale prices 90
  3.3 Asset pricing in a two-period setting 92
    3.3.1 Asset prices with expected utility 92
    3.3.2 The mutuality principle 96
    3.3.3 Asset prices with mean–variance preferences 101
  3.4 Term structure of interest rates 103
  Problems 105

4 Asset pricing models
  4.1 Capital asset pricing model 109
    4.1.1 Consumption space and preferences 109
    4.1.2 Financial investment opportunity set 111
    4.1.3 Security market line – the CAPM equation 122
    4.1.4 Relaxing the assumptions in the CAPM 125
  4.2 Arbitrage pricing theory 129
    4.2.1 No arbitrage condition 131
  4.3 Consumption-based pricing models 133
    4.3.1 Capital asset pricing model 134
    4.3.2 Intertemporal capital asset pricing model 136
    4.3.3 Arbitrage pricing theory 137
    4.3.4 Consumption-beta capital asset pricing model 139
  4.4 A comparison of the consumption-based pricing models 142
  4.5 Empirical tests of the consumption-based pricing models 143
    4.5.1 Empirical tests and the Roll critique 144
    4.5.2 Asset pricing puzzles 145
    4.5.3 Explanations for the asset pricing puzzles 147
  4.6 Present value calculations with risky discount factors 151
    4.6.1 Different consumption risk in the revenues and costs 151
    4.6.2 Net cash flows over multiple time periods 153
  Problems 157

5 Private insurance with asymmetric information
  5.1 Insurance with common information 163
    5.1.1 No administrative costs 163
    5.1.2 Trading costs 167
  5.2 Insurance with asymmetric information 169
    5.2.1 Moral hazard 169
    5.2.2 Adverse selection 171
  5.3 Concluding remarks 179
  Problems 180

6 Derivative securities
  6.1 Option contracts 184
    6.1.1 Option payouts 185
    6.1.2 Option values 188
    6.1.3 Black–Scholes option pricing model 192
    6.1.4 Empirical evidence on the Black–Scholes model 196
  6.2 Forward contracts 197
    6.2.1 Pricing futures contracts 198
    6.2.2 Empirical evidence on the relationship between futures and expected spot prices 202
  Problems 202

7 Corporate finance
  7.1 How firms finance investment 205
  7.2 Capital structure choice 205
    7.2.1 Certainty with no taxes 207
    7.2.2 Uncertainty with common information and no taxes 212
    7.2.3 Corporate and personal taxes, leverage-related costs and the Miller equilibrium 218
    7.2.4 The user cost of capital 233
  7.3 Dividend policy 237
    7.3.1 Dividend policy irrelevance 238
    7.3.2 The dividend puzzle 239
    7.3.3 Dividend imputation 242
  Problems 245

8 Project evaluation and the social discount rate
  8.1 Project evaluation 253
    8.1.1 A conventional welfare equation 254
    8.1.2 Optimal provision of public goods 256
    8.1.3 Changes in real income (efficiency effects) 265
    8.1.4 The role of income effects 267
  8.2 The social discount rate 269
    8.2.1 Weighted average formula 270
    8.2.2 Multiple time periods and capital depreciation 275
    8.2.3 Market frictions and risk 276
  Problems 277

Notes
References
Author index
Subject index

Figures

1.1 Income and consumption profiles
2.1 Intertemporal consumption in autarky
2.2 Costless storage in autarky
2.3 Private investment opportunities in autarky
2.4 Consumption opportunities with income endowments and atemporal trade
2.5 Consumption opportunities with income endowments, atemporal trade and a competitive capital market
2.6 The relationship between saving and the interest rate
2.7 The relationship between borrowing and the interest rate
2.8 Consumption opportunities in the asset economy with private investment
2.9 Optimal private investment with a competitive capital market
2.10 The Fisher separation theorem with firms
2.11 Investment when the Fisher separation theorem fails to hold
2.12 The Fisher effect
2.13 Different inflationary expectations
2.14 Welfare losses in the money market
2.15 Welfare losses from higher expected inflation
2.16 Yield curves for long-term government bonds
2.17 An asset with a continuous consumption stream
3.1 An event tree with three time periods
3.2 Commodity and financial flows in the Arrow-Debreu economy
3.3 The no arbitrage condition
3.4 Consumer preferences with uncertainty and risk
3.5 Consumption with expected utility and objective probabilities
3.6 The mutuality principle
3.7 Trading costs
3.8 State-dependent preferences
3.9 Normally distributed asset return
3.10 Mean–variance preferences
4.1 Investment opportunities with two risky securities
4.2 Perfectly positively correlated returns
4.3 Efficient mean–variance frontier with ρAB = +1
4.4 Perfectly negatively correlated returns
4.5 Efficient mean–variance frontier with ρAB = –1
4.6 Partially correlated returns
4.7 Efficient mean–variance frontier with –1 < ρAB < +1
4.8 Portfolios with two risky securities
4.9 Portfolios with a risk-free security (F)
4.10 Efficient mean–variance frontier with risky security A and risk-free security F
4.11 Portfolio risk and number of securities
4.12 Efficient mean–variance frontier with many (N) risky securities
4.13 Portfolios with many risky securities
4.14 Capital market line
4.15 Security market line
4.16 Risk-neutral investors
4.17 Heterogeneous expectations
4.18 No borrowing
4.19 Zero beta securities
4.20 Income taxes
4.21 Arbitrage profits
4.22 Main assumptions in the consumption-based asset pricing models
5.1 Aggregate uncertainty and individual risk
5.2 Consumption without insurance
5.3 Full insurance
5.4 Partial insurance with processing costs
5.5 Insurance with fixed administrative costs
5.6 Insurance with complete information
5.7 Pooling equilibrium
5.8 Separating equilibrium
5.9 Non-existence of separating equilibrium
6.1 Payouts on options contracts at expiration date (T)
6.2 Payouts at time T on shares and risk-free bonds
6.3 Replicating payouts on a call option
6.4 Payouts to a straddle
6.5 Payouts to a butterfly
6.6 Bounds on call option values
6.7 Constructing a risk-free hedge portfolio
7.1 Default without leverage-related costs
7.2 Default with leverage-related costs
8.1 Welfare effects from marginally raising the trade tax in the first period
8.2 The Samuelson condition in the first period
8.3 The revised Samuelson condition in the first period
8.4 MCF for the trade tax in the first period
8.5 Weighted average formula
8.6 Fixed saving
8.7 Fixed investment demand

Boxes

2.1 Storage: a numerical example
2.2 Costly storage: a numerical example
2.3 Private investment opportunities: a numerical example
2.4 Trade in a competitive capital market: a numerical example
2.5 Private investment and trade: a numerical example
2.6 Seigniorage in selected countries
2.7 Differences in geometric and arithmetic means: numerical examples
2.8 The equation of yield: a numerical example
2.9 Examples of compound interest
2.10 Measured P/E ratios for shares traded on the Australian Securities Exchange
2.11 Examples of large P/E ratios
2.12 The market valuation of a firm: a numerical example
3.1 Obtaining primitive (Arrow) prices from traded security prices
3.2 Anecdotal evidence of state-dependent preferences
3.3 Obtaining martingale prices from traded security prices
3.4 Using the CBPM to isolate the discount factors in Arrow prices
3.5 Using the CBPM in (3.18) to compute expected security returns
3.6 Using the CBPM in (3.19) to compute expected security returns
3.7 Consumption with log utility: a numerical example
3.8 The mutuality principle: a numerical example
4.1 Average annual returns on securities with different risk
4.2 The CAPM pricing equation (SML): a numerical example
4.3 Numerical estimates of beta coefficients by sector
4.4 The CAPM as a special case of the APT
4.5 The APT pricing equation: a numerical example
4.6 The CAPM has a linear stochastic discount factor
4.7 The ICAPM pricing equation: a numerical example
4.8 The CCAPM pricing equation: a numerical example
4.9 Valuing an asset with different risk in its revenues and costs
4.10 Using the ICAPM to compute the present value of a share
5.1 Full insurance: a numerical example
5.2 Administrative costs and insurance: a numerical example
5.3 Self-protection with costless monitoring: a numerical example
5.4 Self-insurance without market insurance
5.5 Self-insurance with competitive market insurance
5.6 A separating equilibrium
5.7 A pooling equilibrium
5.8 A constrained separating equilibrium
6.1 Valuing options with Arrow prices: a numerical example
6.2 The Black–Scholes option pricing model: a numerical example
6.3 Prices of share futures: a numerical example
7.1 Debt–equity ratios by sector
7.2 A geometric analysis of the demand condition
7.3 A geometric analysis of the supply condition
7.4 Modigliani–Miller leverage irrelevance: a geometric analysis
7.5 The market value of an all-equity firm: a numerical example
7.6 Leverage policy with risk-free debt: a numerical example
7.7 Leverage policy with risky debt: a numerical example
7.8 Leverage irrelevance in the Arrow–Debreu economy: a geometric analysis
7.9 The capital market with a classical corporate tax: a geometric analysis
7.10 Optimal capital structure choices with leverage-related costs
7.11 Marginal income tax rates in Australia
7.12 Tax preferences of high-tax investors in Australia
7.13 The Miller equilibrium: a geometric analysis
7.14 The Miller equilibrium without marginal investors
7.15 The Miller equilibrium with a lower corporate tax rate
7.16 The dividend puzzle
7.17 The dividend puzzle and trading costs
7.18 The dividend puzzle and share repurchase constraints
7.19 The new view of dividends with inter-corporate equity
8.1 An equilibrium outcome in the public good economy
8.2 Estimates of the shadow profits from public good production
8.3 Estimates of the marginal social cost of public funds (MCF)
8.4 Estimates of the revised shadow profits from public good production
8.5 The shadow value of government revenue in the public good economy
8.6 The weighted average formula in the public good economy

Tables

2.1 Revenue collected by the government as seigniorage
3.1 Lottery choices: the Allais paradox
4.1 Random returns on securities A and B
4.2 Means and variances on securities A and F
4.3 The asset pricing puzzles in US data
4.4 Equity premium puzzle
4.5 Low risk-free interest rate puzzle
7.1 Payouts in the absence of tax refunds on losses
7.2 Income taxes on the returns to debt and equity

1 Introduction

Individuals regularly make decisions to determine their consumption in future time periods, and most have income that varies over their lives. They initially consume from parental income before commencing work, whereupon their income normally increases until it peaks toward the end of their working life and then declines at retirement. An example of the income profile (It) for a consumer who lives until time T is shown by the solid line in Figure 1.1. When resources can be transferred between time periods the consumer can choose to smooth consumption expenditure (Xt) to make it look like the dashed line in the diagram. Almost all consumption choices have intertemporal effects when individuals can transfer resources between time periods. Any good that provides (or funds) future consumption is referred to as a capital asset, and consumers trade these assets to determine the shape of the consumption profile. In Figure 1.1 the individual initially sells capital assets (borrows) to raise consumption above income, and later purchases capital assets (saves) to repay debt and save for retirement and the payment of bequests. These trades smooth consumption expenditure relative to income, where consumption profiles are determined by consumer preferences, resource endowments and investment opportunities.
[Figure 1.1 Income and consumption profiles: income (It) and smoothed consumption (Xt) plotted against time (t).]

There are physical and financial capital assets: physical assets such as houses and cars generate real consumption flows plus capital gains or losses, and financial assets have monetary payouts plus any capital gains or losses that can be converted into consumption goods. There are important links between them as many financial assets are used to fund investment in physical assets, where this gives them property right claims to their payouts. In frictionless competitive markets, asset values are a signal of the marginal benefits to sellers and marginal costs to buyers from trading future consumption. In effect, buyers and sellers are valuing the same payouts to capital assets when they make decisions to trade them, which is why so much effort is devoted to the derivation of capital asset pricing models in financial economics, particularly in the presence of uncertainty. Consumers will not pay a positive price for any asset unless it is expected to generate a net consumption flow for them in the future. In many cases these benefits might be reductions in consumption risk rather than increases in expected consumption. In fact, a large variety of financial securities trade in financial markets to facilitate trades in consumption risk. While much of the material covered in this book examines trade in financial markets and the pricing of financial securities, there are important links between the real and financial variables in the economy. After all, financial markets function to facilitate the trades in real consumption, where financial securities reduce trading costs, particularly when consumption is transferred across time. Their prices provide important signals of the marginal valuations and costs of future consumption flows. To identify interactions between the real and
financial variables in the economy, we examine the way capital asset prices change over time, and how they are affected by taxes, leverage, risk, new information and inflation. In particular, we look at how financial decisions affect real consumption opportunities. A useful starting point for the analysis is the classical finance model with frictionless and competitive markets where traders have common information. In this setting the financial policy irrelevance theorems of Modigliani and Miller (1958, 1961) hold, where financial securities are a veil over the real economy. That is not to say these securities are irrelevant to the real economy, but rather, the types of financial securities used and the way they make payouts, whether as consumption, cash or capital gains, are irrelevant. This is an important proposition because it reminds us that the values of financial securities are ultimately determined by the net consumption flows they provide – in other words, by their fundamentals. While this model appears at odds with reality, it provides an important benchmark for gradually extending the analysis to a more realistic setting with trading costs, taxes and asymmetric information to explain the interactions we observe between the real and financial variables in the economy. Considerable progress has been made in deriving asset pricing models in recent years by linking prices back to consumption, which is the ultimate source of value because it determines the utility of consumers. Most of this work is undertaken in classical finance models, where departures from it attempt to make the pricing models perform better empirically. This book aims to bridge the material covered in most undergraduate finance courses with material covered in a first-year graduate finance course. Thus, it can be used as a textbook for third-year undergraduate and honours courses in finance and financial economics. 
Another aim is to provide policy analysts with an accessible reference for evaluating policy changes with risky benefits and costs that extend into future time periods. The most challenging material is presented at the end of Chapter 4, where four popular consumption-based pricing models are derived, and in Chapter 8 on project evaluation and the social discount rate. I benefited enormously from reading many of the works listed in the References section, but two books were particularly helpful. The book by John Cochrane (2001) provides nice insights into the economics of asset pricing, and is well supported by the book by Yvan Lengwiler (2004) that carefully establishes the properties of the consumption-based pricing model where the analysis in Cochrane starts. In this book I have expanded the material on corporate finance and included material on project evaluation. Corporate finance is an ideal application in financial economics because a large portion of aggregate investment is undertaken by corporate firms. It provides us with an opportunity to examine the role of taxes and the effects of firm financial policies on their market valuations. Welfare analysis is used in the evaluation of public sector projects, and to identify the efficiency and equity effects of resource allocations by trades in private markets. In distorted markets policy analysts use different rules than private traders for evaluating capital investment decisions. These differences are examined and we extend a compensated welfare analysis to identify the welfare effects of changes in consumption risk. For that reason the book may also be useful as a reference for courses in cost–benefit analysis, public economics and the economics of taxation. We now summarize the material covered in each of the following chapters.
1.1 Chapter summaries

Intertemporal decisions under uncertainty

Uncertainty obviously impacts on intertemporal consumption choices, where consumers, when valuing capital assets, apply discount factors to their future net consumption flows as compensation for the opportunity cost of time and risk. Rather than include both time and risk from the outset, we follow Hirshleifer (1965) in Chapter 2 by using certainty analysis to identify the opportunity cost of time. This conveniently extends standard atemporal economic analysis to multiple time periods without the complication of also including uncertainty. Uncertainty is included later in Chapter 3 using a two-period Arrow–Debreu state-preference model, which is a natural extension of the certainty analysis in Chapter 2. By proceeding in this manner we establish a solid foundation for the more advanced material covered in later chapters. Some graduate finance books treat uncertainty analysis, and in some cases, state-preference theory, as assumed knowledge. The certainty analysis commences in an autarky economy where individuals effectively live on islands. We do this to identify actions consumers can take in isolation from each other to transfer consumption to future time periods through private investment in capital assets. For example, they can store commodities, plant trees and other crops as well as build houses to provide direct consumption benefits in the future. While this is a simplistic description of the choices available to most consumers, it establishes useful properties that will carry over to a more realistic setting. In particular, it identifies potential gains from trade, where the nature of these gains is identified by gradually introducing trading opportunities to the autarky economy. We initially extend the analysis by allowing consumers to exchange goods within each time period (atemporal trade) where transactions costs are introduced to provide a role for (fiat) money and financial securities.
It is quite easy to overlook some of the important roles of money and financial securities in a more general setting with risk, taxes, externalities and asymmetric information. In a certainty setting without taxes and other distortions consumers use them to reduce the costs of moving goods around the exchange economy. Money and financial securities will coexist as a medium of exchange if they provide different cost reductions for different transactions. Since money is highly divisible and universally accepted as a medium of exchange, it reduces trading costs on relatively low-valued transactions. In contrast, financial securities are used for larger-valued transactions and trades with more complex property right transfers which are less easily verified at the time the exchanges occur.1 If commodities are perfectly divisible, costless to transfer between locations, and traders have complete information about their quality and other important characteristics, the absence of trading costs will make money and financial securities redundant. Money is frequently not included in finance models due to the absence of trading costs on the grounds they are too small to play a significant role in the analysis. That also eliminates any transactions cost role for financial securities. When money is included in these circumstances it becomes a veil over the real economy so that nominal prices are determined by the supply of money.2 Once trading costs are included, however, money and financial securities can have real effects on equilibrium outcomes. When consumers can trade atemporally in frictionless competitive markets they equate their marginal utility from allocating income to each good consumed. This allows us to simplify the analysis considerably by defining consumer preferences over income on the basis that consumption bundles are being chosen optimally in the background to maximize utility. 
This continues to be the case in the presence of uncertainty when there is a single consumption good. However, with multiple goods, risk-averse consumers care not only about changes in their (expected) money income in future time periods but also about changes in relative commodity prices as both determine the changes in their real income.3 This observation makes it easier to understand why in some pricing models the risk premiums are determined by changes in relative commodity prices. The next extension to the autarky economy introduces full trade where consumers can trade within each period and across time (intertemporally) in a market economy. Initially we consider an exchange model where consumers swap goods in each time period and use forward commodity contracts to trade goods over time. The analysis is then extended to an asset economy by allowing consumers to trade financial securities. As noted by Arrow (1953), financial securities can significantly reduce the number of transactions. Instead of trading a separate forward contract for each good consumed in the future, consumers can trade money and financial securities with future payouts that can be converted into goods. Thus, money and financial securities can be used as a store of value to reduce the costs of trading intertemporally. But this introduces a wealth effect in the money market due to the non-payment of interest on currency. Whenever consumers hold currency as a store of value, they forgo interest payments on bonds; this acts as an implicit tax when the nominal interest rate exceeds the marginal social cost of supplying currency. Any anticipated expansion in the supply of fiat currency that raises the rate of price inflation and the nominal interest rate will increase the welfare loss from the non-payment of interest by further reducing the demand for currency. 
There are other important interactions between financial and real variables in the economy when we introduce risk and asymmetric information. By trading intertemporally in frictionless competitive markets, consumers equate their marginal rates of substitution between future and current consumption to the market rate of interest, and therefore use the same discount factors to value capital assets. After extending the asset economy to allow investment by firms, we then examine the Fisher separation theorem. This gives price-taking firms the familiar objective of maximizing profit. Sometimes this objective is inappropriate. For example, shareholders are unlikely to be unanimous in supporting profit maximization when the investment choices of firms also affect the relative prices of the goods they consume. The Fisher separation theorem holds when these investment choices only have income effects on the budget constraints of shareholders. We then examine the effects of fully anticipated inflation in a classical finance model where the real economy is unaffected by changes in the rate of general price inflation. This establishes the Fisher effect where nominal interest rates change endogenously to keep the real interest rate constant, so that current asset prices are unaffected by changes in inflation. The real effects of inflation are obtained by relaxing assumptions in the classical finance model, including homogeneous expectations and flexible nominal prices. Finally, the certainty analysis is completed by deriving asset prices for different types of securities such as perpetuities, annuities, shares and bonds. In general terms, capital asset prices are determined by the size and timing of their net cash flows and the term structure of interest rates used to discount them. While this may seem a relatively straightforward exercise, it can become quite complicated in practice.
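The Fisher effect described above has a simple algebraic form, (1 + i) = (1 + r)(1 + π): the nominal rate adjusts endogenously so the real rate is unchanged by fully anticipated inflation. A minimal sketch, where the numeric rates are illustrative assumptions rather than figures from the text:

```python
# Fisher effect sketch: nominal rates adjust one-for-one with anticipated
# inflation so that the real interest rate is constant.
# The rates below are illustrative assumptions, not data from the text.

def nominal_rate(real: float, inflation: float) -> float:
    """Exact Fisher relation: (1 + i) = (1 + r)(1 + pi)."""
    return (1 + real) * (1 + inflation) - 1

def real_rate(nominal: float, inflation: float) -> float:
    """Invert the Fisher relation to recover the real rate."""
    return (1 + nominal) / (1 + inflation) - 1

r = 0.03  # assumed constant real rate
for pi in (0.00, 0.02, 0.10):
    i = nominal_rate(r, pi)
    # The recovered real rate is the same at every inflation rate.
    assert abs(real_rate(i, pi) - r) < 1e-12
```

The common approximation i ≈ r + π drops the cross-term rπ, which matters at high inflation rates.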
There are many factors that can impact on the net cash flows and their discount factors, including storage, investment opportunities, trading costs, inflation and taxes. After identifying the term structure of interest rates, we establish the fundamental equation of yield in a certainty setting. The term structure establishes the relationship between short- and long-term interest rates. This is important for pricing assets when their net cash flows are spread across a number of future time periods because the discount factors need to reflect the differences in their timing. Risk premiums are added to the short-term interest rates using an asset pricing model when the net cash flows are risky. These adjustments are derived later in Chapters 3 and 4. The equation of yield measures the economic return to capital invested in assets in each period of their lives. It identifies economic income as cash and consumption plus any capital gains or losses. Some asset prices rise over time, some fall and others stay constant. It depends on the size and timing of the cash flows they generate. Assets that delay paying net cash flows until later time periods must pay capital gains in subsequent years to compensate capital providers for the opportunity cost of time. In contrast, the prices of assets with larger immediate cash flows are much more likely to fall in some periods of their lives. In a frictionless competitive capital market every asset must pay the same economic rate of return as every other asset (in the same risk class). This is the no arbitrage condition which eliminates profit from security returns and makes them equal to the opportunity cost of time (and risk). It is an important relationship that appears time and again throughout the analysis in this book, and it provides extremely useful economic insights for predicting asset price changes and identifying the economic returns on assets.
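The equation of yield described above can be checked numerically. In the sketch below the cash flows and interest rate are hypothetical; the point is that in every period of the asset's life, cash flow plus capital gain equals the required return on the opening price, and an asset with deferred payouts earns that return as capital gains early on.

```python
# Equation of yield sketch: price the asset each year as the present value
# of its remaining cash flows, then verify that economic income (cash flow
# plus capital gain) equals the required return on the opening price.
# Cash flows and interest rate are illustrative assumptions.

def present_value(cash_flows, rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

rate = 0.05
cash_flows = [0.0, 0.0, 30.0]  # a "growth" asset: its payout is deferred to year 3

# prices[t] is the asset's value at the start of year t+1 (prices[-1] = 0).
prices = [present_value(cash_flows[t:], rate) for t in range(len(cash_flows) + 1)]

for t, cf in enumerate(cash_flows):
    economic_income = cf + (prices[t + 1] - prices[t])  # cash + capital gain
    assert abs(economic_income - rate * prices[t]) < 1e-9

# With payouts deferred, the price must rise in the early years.
assert prices[1] > prices[0]
```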
The role of arbitrage can be demonstrated by computing the price of a financial asset with a net cash flow in the next period of X1 dollars when the nominal rate of interest over the period is i1. It has a present value of PV0 = X1/(1 + i1), where the discount factor 1/(1 + i1) converts future dollars into fewer current dollars to compensate the asset holder for the opportunity cost of delaying consumption expenditure. Whenever the current asset price (p0) falls below PV0 there is surplus with a net present value of NPV0 = X1/(1 + i1) − p0. PV0 is the most the buyer would pay for this asset because it is the amount that would need to be invested in other assets (in the same risk class) to generate the same net cash flow, with PV0(1 + i1) = X1. In a frictionless competitive capital market arbitrage drives the market price of the asset (p0) to its present value (PV0). If the asset price results in p0(1 + i1) > X1 investors move into substitute assets which pay higher economic returns, while the reverse applies when p0(1 + i1) < X1. When the no arbitrage condition holds, the asset price is equated to the present value of its net cash flows, so that NPV0 = 0. In these circumstances the discount rate (i1) is the return every other asset (in the same risk class) pays over the same period of time. Despite the simplicity of this example, it can be used to establish a number of very important properties that should apply to asset values. First, their net cash flows are payouts made to asset holders, and they are computed as gross revenue accruing to underlying real assets minus any non-capital costs of production. Second, the discount rate should in every way reflect the characteristics of the net cash flows being discounted. It should be the rate of return paid on all other assets in the same risk class over the same time period.
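A minimal numeric sketch of the no-arbitrage example above, using the same PV0 and NPV0 definitions (the cash flow and interest rate are illustrative):

```python
# No-arbitrage pricing of a one-period claim: PV0 = X1 / (1 + i1).
# If the market price p0 deviates from PV0, NPV0 = PV0 - p0 is the
# arbitrage surplus; competition drives it to zero.

def present_value(x1: float, i1: float) -> float:
    return x1 / (1 + i1)

def npv(p0: float, x1: float, i1: float) -> float:
    return present_value(x1, i1) - p0

x1, i1 = 110.0, 0.10
pv0 = present_value(x1, i1)
# Investing PV0 elsewhere at i1 replicates the claim's payout X1.
assert abs(pv0 * (1 + i1) - x1) < 1e-9

# An underpriced asset has positive NPV, attracting buyers until p0 = PV0.
assert npv(95.0, x1, i1) > 0
# At the no-arbitrage price the NPV is exactly zero.
assert abs(npv(pv0, x1, i1)) < 1e-9
```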
If the payouts are made in six months’ time the discount rate is the interest rate over that six-month period, while assets that make a continuous payout through time should be evaluated using a continuous discount factor. When the payouts are measured in nominal terms we use a nominal discount rate, and for those measured in real terms a real discount rate. In the presence of taxes we discount after-tax payouts using an after-tax discount rate. Finally, when the net cash flows are risky a premium is included in the discount rate to compensate asset holders for changes in their consumption risk. While these seem obvious points to make, they can nonetheless be easily overlooked in more complex present value calculations.

Uncertainty and risk

A key role of financial securities is to spread and diversify risk, and these issues are examined in Chapter 3. Many different types of securities trade in capital markets, including shares, bonds, options, futures, warrants and convertible notes. Traders use them to trade and diversify consumption risk and to obtain any profits through arbitrage. In a competitive capital market there is a perfect substitute for every traded security, so that no one can provide new risk trading opportunities by bringing a new security to the capital market. In other words, every new security can be replicated by creating a derivative security from existing traded assets. In this setting, traders have no market power because other traders can combine options, bonds and shares to create perfect substitutes for their securities. This activity is important for invoking the no arbitrage condition on security returns when there is uncertainty and plays an important role in making the capital market efficient in the sense that asset prices reflect all available information. Chapter 3 extends the analysis in the previous chapter by including uncertainty using the Arrow–Debreu state-preference model.
This establishes the classical finance model in an uncertainty setting where consumers have conditional perfect foresight, there are no trading costs and markets are competitive. It is equivalent to a certainty analysis where the characteristics of goods are expanded to make them state-contingent. The states of nature completely summarize all possible outcomes of the world in the future, and everyone in the economy agrees on the state space and can solve the equilibrium outcomes in the economy in every state. The only remaining uncertainty is over the state that will actually eventuate. Most of the economic intuitions for the equilibrium allocations in the certainty setting will carry over to this setting, except that consumers use stochastic discount factors to assess the values of capital assets.4 If the capital market is complete, so that consumers can trade in every state of the world, they use the same state-contingent discount factors and have the same marginal valuations for risky capital assets. Risk-averse consumers include a risk premium in their discount factors when valuing net cash flows on capital assets. This premium compensates them for risk imparted to their future consumption by the net cash flows. But while every consumer includes the same risk premium in their discount factors in the Arrow–Debreu model, they may not measure and price risk in the same way. One of the main objectives of finance research is to obtain an asset pricing model where consumers measure and price risk identically so that financial analysts can predict the market valuations of capital assets, and policy analysts can include a risk premium in the discount factors used to evaluate the net benefits on public sector projects. The first important step in this direction is to adopt von Neumann–Morgenstern expected utility functions to separate the probabilities consumers assign to states of nature from the utility they derive in each state.
Since these preferences are time-separable with a state-independent utility function, they transform the Arrow–Debreu pricing model into the consumption-based pricing model where consumers face the same consumption risk and therefore measure and price risk identically.

Asset pricing models

Further assumptions are required, however, to make the consumption-based pricing model a simple linear function of a few (ideally one) factors that isolate market risk in the net cash flows to securities. We derive four popular pricing models as special cases of the consumption-based pricing model in Chapter 4. They include the capital asset pricing model (CAPM) derived by Sharpe (1964) and Lintner (1965), the intertemporal capital asset pricing model (ICAPM) by Merton (1973a), the arbitrage pricing theory (APT) by Ross (1976) and the consumption-beta capital asset pricing model (CCAPM) by Breeden and Litzenberger (1978) and Breeden (1979). All of them adopt assumptions that make the common stochastic discount factors of consumers linear in a set of factors that isolate aggregate consumption risk. And since these factors are variables reported in aggregate data, the models are relatively straightforward for analysts to use when estimating the current values of capital assets. In all of these models there is no risk premium for diversifiable risk in security returns because it can be costlessly eliminated by bundling risky securities in portfolios. Only the non-diversifiable (market) risk attracts a risk premium because it is risk that consumers must ultimately bear. Since this material is more difficult analytically, we follow standard practice by initially deriving the CAPM as the solution to the portfolio problem of consumers. In this two-period model consumers fund all their future consumption from payouts to securities where consumption risk is determined by the risk in their portfolios.
Since they have common information they combine the same bundle of risky securities with a risk-free security, where market risk is determined by the risk in their common risky bundle (known as the market portfolio). Thus, they measure risk in the returns to securities by their covariance with the return on the risky market portfolio. This is a widely used model in practice because of its simplicity. There is a single measure of market risk in the economy that all consumers price in the same way, where the market portfolio is normally constructed as a value-weighted index of the traded risky securities on the stock exchange. The problem with this model lies in the simplifying assumptions, in particular, that of common information, no transactions costs and joint normally distributed returns. When security returns are joint normally distributed the returns on security portfolios are completely described by their mean and variance. This is why the CAPM is based on a mean–variance analysis. The APT model is more general because it does not require security returns to be normally distributed. Instead, it is a linear factor analysis that isolates market risk empirically by identifying the common component in security returns. While the factors used are macroeconomic variables, they are not necessarily the source of the market risk in security returns. They are simply used to isolate it. We derive the APT model in a similar fashion to the derivation of the CAPM to demonstrate the role of arbitrage in eliminating diversifiable risk, and the role of mimicking portfolios to price the market risk isolated by the macro factors. The main weakness of this model is its failure to identify the set of common factors used by consumers. In the last three sections we derive the CAPM and the APT, as well as the ICAPM and the CCAPM, as special cases of the consumption-based pricing model. 
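The CAPM's covariance measure of market risk described above can be sketched with state-by-state returns. Beta is the covariance of a security's return with the market portfolio's return, scaled by the market variance, and the security's expected return is the risk-free rate plus beta times the market risk premium. The returns below are hypothetical equally likely outcomes, not data from the text:

```python
# CAPM sketch: beta_j = cov(r_j, r_m) / var(r_m), and the security's
# required return is r_f + beta_j * (E[r_m] - r_f).
# State-by-state returns below are illustrative assumptions.

def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

r_f = 0.02
r_m = [0.15, -0.05, 0.08, 0.02]   # market portfolio return in each state
r_j = [0.20, -0.10, 0.10, 0.04]   # a security that amplifies market moves

beta = cov(r_j, r_m) / cov(r_m, r_m)
required_rj = r_f + beta * (mean(r_m) - r_f)

# beta > 1: the security carries more market risk than the index,
# so it must offer a larger risk premium than the market does.
assert beta > 1
assert required_rj > mean(r_m)
```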
Even though deriving them in this way is slightly more complex, it provides much greater insight into the economics underlying these pricing models. In particular, it links the risk in securities directly back to the risk in consumption expenditure. Since consumers derive utility from consumption and face the same consumption risk, they assess the risk in capital assets by measuring the covariance of their returns with changes in aggregate consumption. Additional factors are required when aggregate consumption risk also changes over time. Each model has its strengths and weaknesses, and by deriving them as special cases of the consumption-based pricing model, they can be compared more effectively. Early empirical tests of these models focused on their ability to explain the risk premiums in expected security returns without considering how much risk was being transferred into real consumption expenditure. When testing the CCAPM, Mehra and Prescott (1985) looked beyond its ability to explain the risk in asset prices and examined whether the implied values of the (constant) coefficient of relative risk aversion and the (constant) rate of time preference were consistent with the risk in aggregate real consumption. Using US data, they discovered the equity premium and low risk-free real interest rate puzzles. The premium puzzle finds that explaining the observed equity premium requires a coefficient of relative risk aversion approximately five times larger than its estimated value in experimental work, while the low risk-free rate puzzle finds the observed real interest rate much lower than the CCAPM would predict when the coefficient of relative risk aversion is set at its estimated value. Once it is set at the higher values required to explain the observed equity risk premium in security returns using the CCAPM, the predicted real interest rate is even higher.
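A back-of-envelope version of the premium puzzle: under the CCAPM with power utility the equity premium is approximately γ times the covariance of equity returns with consumption growth. The moments below are rough illustrative magnitudes, not the exact figures in Mehra and Prescott:

```python
# Equity premium puzzle, back-of-envelope. Under the CCAPM with power
# utility: premium ~= gamma * cov(equity return, consumption growth).
# The moments below are rough illustrative magnitudes, not exact data.

premium = 0.06    # average equity premium over the risk-free rate (~6%)
sigma_c = 0.01    # std dev of aggregate consumption growth (~1%)
sigma_e = 0.15    # std dev of equity returns (~15%)
corr = 0.4        # correlation of equity returns with consumption growth

cov_ec = corr * sigma_e * sigma_c
implied_gamma = premium / cov_ec

# The implied coefficient of relative risk aversion is far above the
# single-digit values suggested by experimental evidence: the puzzle.
assert implied_gamma > 50
```

Because aggregate consumption growth is so smooth, covariance with it is tiny, and only an implausibly large γ can generate the observed premium.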
After summarizing these pricing puzzles we then look at subsequent attempts to explain them by modifying preferences and including market frictions.

Insurance with asymmetric information

As noted earlier, no risk premium is included in security returns for diversifiable risk in the consumption-based pricing models. This is referred to as the mutuality principle, and when it holds, we cannot assess the risk in security returns by looking solely at their variance. Instead, we need to measure that part of their variance that cannot be costlessly eliminated by bundling financial securities together or purchasing insurance. The diversification effect from bundling securities is examined in Chapters 3 and 4, while insurance is examined in Chapter 5. Insurance markets allow consumers to pool individual risks, which are diversifiable across the population. When insurance trades at actuarially fair prices (that is, at prices equal to the probability of their losses), consumers with von Neumann–Morgenstern preferences fully insure. They purchase less insurance and do not eliminate all the diversifiable risk from their consumption when there are marginal trading costs. Governments and international aid agencies often justify stabilization policies on the grounds that private insurance markets are distorted by moral hazard and adverse selection problems. These are problems that arise when traders have asymmetric information – in particular, when insurers cannot costlessly observe the effort taken by consumers to reduce their probability of incurring losses, or distinguish between consumers with different risk. Dixit (1987, 1989) makes the important observation that stabilization policies can only be assessed properly when they are evaluated in the presence of the moral hazard and adverse selection problems. We provide a basis for doing this by formalizing equilibrium outcomes in the market for private insurance when traders have asymmetric information.
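The full-insurance result at actuarially fair prices can be checked numerically; the wealth, loss size and log-utility specification below are illustrative assumptions:

```python
import math

# At an actuarially fair premium (price per dollar of cover equal to the
# loss probability), a risk-averse expected-utility maximizer fully
# insures, equalizing consumption across states.
# Wealth, loss size and log utility are illustrative assumptions.

W, L, p = 100.0, 50.0, 0.2   # initial wealth, loss size, loss probability
q = p                        # actuarially fair premium per dollar of cover

def expected_utility(cover: float) -> float:
    wealth_no_loss = W - q * cover
    wealth_loss = W - q * cover - L + cover
    return (1 - p) * math.log(wealth_no_loss) + p * math.log(wealth_loss)

# Full coverage (cover = L) beats every partial level on a coarse grid.
best = max(range(0, 51), key=lambda c: expected_utility(float(c)))
assert best == 50   # i.e. cover = L, so consumption is the same in both states
```

With a loading on the premium (q > p), the same calculation yields an interior optimum with less than full coverage.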
The effects of asymmetric information are identified by comparing these outcomes to the equilibrium outcomes when traders have common information.

Derivative securities

There are frequently circumstances where individuals take actions now so they can delay making future consumption choices when uncertainty is partially resolved by the passing of time. Alternatively, they can eliminate some of the uncertainty in future consumption now by securing prices for future trades. Options contracts give holders the right but not the obligation to buy and sell commodities and financial assets at specified prices at (or before) specified times, while forward contracts are commitments to trade commodities and financial assets at specified prices and times. These derivative securities play the important role of facilitating trades in aggregate risk and allowing investors to diversify individual risk by completing the capital market. They also provide valuable information about the expectations of investors for future values of underlying assets. Strictly speaking, derivatives are financial securities whose values derive from other financial securities, but the term is used more widely to include options and forward contracts for commodities.5 Micu and Upper (2006) report very large increases in the combined turnover in fixed income, equity index and currency contracts (including both options and futures) on international derivatives exchanges in recent years. Most of the financial contracts were for interest rates, government bonds, foreign exchange and stock indexes, while the main commodity contracts were for metals (particularly gold), agricultural goods and energy (particularly oil). After summarizing the payouts to these contracts, we then look at how they are priced in Chapter 6. An economic model could be used to solve the stochastic discount factors in the consumption-based pricing model, but that involves solving the underlying asset prices.
A preferable approach obtains a pricing model for derivatives that are functions of the current values of the underlying asset prices together with the restrictions specified by the contracts. Since the assets already trade we can use their current prices as inputs to the pricing model without trying to compute them. In effect, the approach works from the premise that markets price assets efficiently and all we need to do is work out how the derivatives relate to the assets themselves. This is the approach adopted by Black and Scholes (1973) whose option pricing model values share options using five variables – the current share price, its variance, the expiry date, exercise price and the risk-free interest rate. It is a popular and widely used model because this information is readily available, but it does rely on a number of important assumptions, including that they are European options with fixed exercise dates, the underlying shares pay no dividends and they have a constant variance. We do not derive the Black–Scholes option pricing model formally, preferring instead to provide an intuitive explanation for its separate components. Forward contracts are also valued using the current price of the underlying asset, the settlement date, margin requirements, price limits and storage costs when the asset is a storable commodity.

Corporate finance

In most economies a significant portion of aggregate investment is undertaken by corporate firms who can raise large amounts of risky capital by trading shares, bonds and other securities. In particular, they can issue limited liability shares that restrict the liability of shareholders to the value of their invested capital. In return, they are subject to statutory regulations that, among other things, specify information that must be reported to shareholders at specified times, and bankruptcy provisions to protect bondholders from undue risk.
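Returning to the derivative pricing discussion above: the five Black–Scholes inputs map into a closed-form price for a European call. A sketch of the standard formula, with hypothetical numeric inputs:

```python
import math

# Black-Scholes price of a European call on a non-dividend-paying share,
# from its five inputs: spot S, strike K, time to expiry T (years),
# risk-free rate r, and volatility sigma. Numeric inputs are illustrative.

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

price = bs_call(S=100.0, K=100.0, T=1.0, r=0.05, sigma=0.20)

# The price respects the no-arbitrage lower bound S - K * exp(-rT).
assert price >= 100.0 - 100.0 * math.exp(-0.05)
```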
A significant fraction of the value of financial securities that trade in capital markets originates in the corporate sector. There are primary securities, such as debt and equity, as well as the numerous derivative securities written on them. In recent years a larger proportion of consumers hold these corporate securities, if not directly, then at least indirectly through their superannuation and pension funds. We examine the role of risk and taxes on corporate securities and on the market valuations of the firms who issue them in Chapter 7. In particular, we look at the effects of their capital structure and dividend policy choices. For expository purposes the classical finance model is an ideal starting point for the analysis because it establishes fundamental asset pricing relationships that can be extended to accommodate more realistic assumptions. In this setting, where consumers have common information in frictionless competitive capital markets, we obtain the Modigliani–Miller financial policy irrelevance theorems. They are generalized where possible by including risk and taxes before introducing leverage-related costs and asymmetric information. Most countries have a classical corporate tax that taxes the income corporate firms pay their shareholders but not interest payments on debt. This tax bias against equity encourages corporate firms to increase their leverage. Early studies looked for leverage-related costs to explain the presence of equity in a classical finance model, including bankruptcy costs, and lost corporate tax shields due to the asymmetric treatment of profits and losses, which both lead to optimal leverage policy choices.
However, empirical studies could not find leverage costs large enough to offset the tax bias against equity. Miller (1977) therefore examined the combined effects of corporate and personal taxes and found that favourable tax treatment of capital gains could make equity preferable for investors in high tax brackets – that is, investors with marginal personal tax rates on cash distributions that exceed the corporate tax rate by more than their personal tax rates on capital gains. Most countries have progressive personal tax rates, so that low-tax investors can have a tax preference for debt while high-tax investors have a tax preference for capital gains. Once both securities trade, Modigliani–Miller leverage irrelevance will hold when consumers have common information. But this analysis by Miller produced the dividend puzzle, where no fully taxable consumers have a tax preference for dividends over capital gains. Thus, shares pay no dividends in the Miller equilibrium. We examine a number of different explanations for this puzzle, including differential transactions costs, share repurchase constraints that restrict the payment of capital gains, and dividend signalling under asymmetric information. In the last section of this chapter we examine the imputation tax system used in Australia and New Zealand. This removes the double tax on dividends by crediting shareholders with corporate tax paid, where the corporate tax is used as a withholding tax to discourage shareholders from realizing their income as capital gains in the future. Since capital gains are taxed at realization, rather than when they accrue inside firms, shareholders can reduce their effective tax rate on them by delaying realization. The corporate tax considerably reduces these benefits from retention by taxing income as it accrues inside firms.
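The comparison behind Miller's argument above can be made concrete with some back-of-the-envelope arithmetic. The sketch below uses hypothetical tax rates (they are not from the text) to show when a fully taxable investor prefers a pre-tax corporate dollar paid out as interest rather than realized as a capital gain.

```python
def after_tax_debt(tp):
    # One pre-tax corporate dollar paid out as interest:
    # deductible at the corporate level, taxed personally at rate tp
    return 1 - tp

def after_tax_equity(tc, tg):
    # One pre-tax corporate dollar returned as equity income:
    # taxed at the corporate rate tc, then at the capital gains rate tg
    return (1 - tc) * (1 - tg)

# Hypothetical rates: corporate 30 per cent, capital gains 10 per cent
tc, tg = 0.30, 0.10
low_tax  = after_tax_debt(tp=0.20)   # low-bracket investor keeps 0.80 via debt
high_tax = after_tax_debt(tp=0.45)   # high-bracket investor keeps 0.55 via debt
equity   = after_tax_equity(tc, tg)  # either investor keeps 0.63 via equity
```

The low-bracket investor prefers debt (0.80 > 0.63) while the high-bracket investor prefers equity (0.55 < 0.63), which is the clientele pattern Miller describes.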
Project evaluation and the social discount rate

Governments also undertake a large portion of the aggregate investment in most economies, where public sector agencies generally use different evaluation rules than those employed by private investors when markets are subject to distortions arising from taxes, externalities, non-competitive behaviour and the private underprovision of public goods. Private investors make investment choices to maximize their own welfare, while governments make investment choices to maximize social welfare. These objectives coincide in economies where resources are allocated in competitive markets without distortions (setting aside distributional concerns). However, when markets are subject to distortions private investors evaluate projects using distorted prices, while governments look beyond these distortions and evaluate projects by measuring their impact on social welfare. These differences are demonstrated in Chapter 8 by evaluating public projects that provide pure public goods in a tax-distorted economy with aggregate uncertainty. The analysis is undertaken in a two-period setting where consumers have common information and von Neumann–Morgenstern preferences. Initially we obtain optimality conditions for the provision of pure public goods in the absence of taxes and other distortions to provide a benchmark for identifying the effects of distorting taxes. This extends the original Samuelson (1954) condition to an intertemporal setting with uncertainty where the current value of the summed marginal consumption benefits from the public good (MRS) is equated to the current value of the marginal resource cost (MRT). When these costs and benefits occur in the second period they are discounted using a stochastic discount factor, which, in the absence of taxes and other distortions, is the same as the discount factor used by private investors.
However, in the presence of trade taxes (and other distortions) there are additional welfare effects when the projects impact on taxed activities. Any reduction in tax revenue is a welfare loss that increases the marginal cost of government spending, while the reverse applies when tax revenue rises. As a consequence of these welfare changes, projects in one period can have welfare effects that spill over into other time periods. A conventional Harberger (1971) analysis is used to separate the welfare effects of each component of the projects, where this allows us to isolate the social benefits from extra public goods and the social costs of the tax changes made to fund their production costs.6 By doing so we obtain measures of the marginal social cost of public funds for each tax; these are used as scaling coefficients on revenue transfers made by the government to balance its budget. For a distorting tax, each dollar of revenue raised will reduce private surplus by more than a dollar due to the excess burden of taxation, where the marginal social cost of public funds exceeds unity. When taxes are Ramsey optimal they have the same marginal social cost of public funds, where the welfare effects of the projects are independent of the tax used. Compensated welfare measures are then used to isolate the changes in real income from each project, where a compensated gain is surplus real income generated at unchanged expected utility for every consumer. They are efficiency effects that ultimately determine the final changes in expected utility. We demonstrate this by generalizing the Hatta (1977) decomposition to allow variable producer prices and uncertainty. It solves actual changes in expected utility as compensated welfare changes multiplied by the shadow value of government revenue, where the shadow value of government revenue measures the aggregate change in expected utility from endowing a unit of real income on the economy. 
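The marginal social cost of public funds introduced above can be illustrated with a simple linear-demand sketch in the spirit of the Harberger analysis; the demand parameters and tax rate below are hypothetical.

```python
def mcf_linear(a, b, t):
    """Marginal social cost of public funds for a linear demand
    q = a - b*(p + t), with the fixed producer price normalized into a
    and a per-unit tax t. Illustrative numbers only, not from the text."""
    q = a - b * t      # quantity demanded at the current tax
    dR = q - b * t     # marginal revenue from raising t: q*dt + t*dq
    dW = q             # marginal loss of consumer surplus: q*dt
    return dW / dR

# With a = 100, b = 1 and a tax of 20, each extra dollar of revenue
# costs consumers about 1.33 dollars of surplus
mcf = mcf_linear(a=100, b=1, t=20)
```

Because the tax shrinks the taxed activity at the margin, the ratio exceeds unity whenever t > 0, which is the excess burden of taxation described in the text.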
Since all the income effects are included in this scaling coefficient they play no role in project evaluation when consumers have the same distributional weights, and when they have different weights the distributional effects are conveniently isolated by the shadow value of government revenue. Most public sector projects impact on consumption risk, where some projects are undertaken because they provide risk benefits, while for other projects the changes in risk are side effects. For example, governments in developing countries have frequently used commodity price stabilization schemes to reduce consumption risk, like the rice price stabilization scheme in Indonesia and the wool price stabilization scheme in Australia.7 We measure risk benefits from projects by deducting the expected compensating variation (CV) from the ex-ante CV. The expected CV holds constant the utility of every consumer in every time period and every state of nature. Thus, it completely undoes the impact of each project on consumers, including changes in their consumption risk. In contrast, the ex-ante CV holds constant the expected utility of every consumer but without holding their utility constant in every state of nature. It is the amount of income we can take from consumers now without reversing the changes in their consumption risk from the project. When the expected CV is larger than the ex-ante CV consumers benefit from changes in consumption risk, while the reverse applies when the ex-ante CV is larger. One of the most contentious issues in project evaluation involves the choice of social discount rate for public projects in economies with distorted markets. Harberger (1969) and Sandmo and Dréze (1971) find the social discount rate is a weighted average of the pre-and post-tax interest rates in the presence of a tax on capital income in a two-period certainty setting. 
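The weighted average formula referred to in the last sentence above is straightforward to state: the social discount rate blends the pre-tax return on displaced private investment with the post-tax return to savers. A minimal sketch, with hypothetical returns and funding shares:

```python
def weighted_average_discount_rate(r_pre, r_post, alpha):
    """Harberger / Sandmo-Dreze weighted average: alpha is the share of
    project funds drawn from displaced private investment (valued at the
    pre-tax return), the remainder from extra saving (valued post-tax)."""
    return alpha * r_pre + (1 - alpha) * r_post

# Hypothetical: 10% pre-tax return, 6% post-tax return,
# 40% of funds drawn from displaced investment
r_s = weighted_average_discount_rate(r_pre=0.10, r_post=0.06, alpha=0.4)
```

The result always lies between the two interest rates, collapsing to the post-tax rate when the project is funded entirely from extra saving (alpha = 0).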
By including additional time periods, Marglin (1963a, 1963b) finds it should be higher than the weighted average formula, while Bradford (1975) finds it should be approximately equal to the after-tax interest rate. Sjaastad and Wisecarver (1977) show how these claims can be reconciled by their different treatment of capital depreciation. When private saving rises to replace depreciation of public capital the discount rate becomes the weighted average formula in a multi-period setting. Others argue there are differences between private and social discount rates when project net cash flows are uncertain. Samuelson (1964), Vickrey (1964) and Arrow and Lind (1970) argue the social discount rate should be lower because the government can raise funds at lower risk. Bailey and Jensen (1972) argue these claims are based on the public sector being able to overcome distortions in private markets for trading risk. We derive the social discount rate by including a tax on capital income in the public good economy. This extends the analysis of Harberger and of Sandmo and Dréze where, in the absence of trade taxes, the weighted average formula holds in each state of nature. Once trade taxes are included, the social discount rate deviates from this formula when public investment impacts on trade tax revenue. The derivations of the discount rate by Marglin and Bradford are reconciled to the weighted average formula using the analysis in Sjaastad and Wisecarver.

1.2 Concluding remarks

Financial economics is a challenging subject because it draws together analysis from a number of fields in economics. Indeed, modern macroeconomic analysis uses general equilibrium models with money and financial securities in a multi-period setting with uncertainty. Time and risk are fundamental characteristics of the environment every consumer faces.
In recent years activity in capital markets has expanded dramatically to provide consumers with opportunities to trade risk and choose their intertemporal consumption. More and more people have become shareholders in private firms as they set aside funds for consumption in retirement. Professional traders in the capital market perform a variety of important services. Some gather information to find profitable investment opportunities, where this imposes constraints on firm managers and aligns their interests more closely to those of their investors. And by reducing trading costs they expand the aggregate consumption opportunities for the economy. Others specialize in trading insurance so that consumers can reduce individual risk from their consumption. While most finance courses focus on private activity, which is understandable given the desire students have to work either in private firms or as policy analysts with an understanding of how private markets function, there are nonetheless a number of important issues that are peculiar to the evaluation of public policy in economies with distorted markets. This book attempts to identify fundamental principles that underpin activity in financial markets. Starting in a certainty setting the analysis is extended gradually so that readers can develop a framework for understanding how time and risk impact on the allocation of resources, both in the private and public sectors of the economy. By exposing the fundamental economic principles in financial markets, financial economics provides a clearer understanding of the material covered in the field of finance. For example, the capital asset pricing model makes much more sense, and can be used in a more informed way, when it is derived using standard demand–supply analysis. It helps us understand why consumers all hold the same risky bundles and why they price risk identically, as well as exposing the important role of the key assumptions made in the model.
Investment decisions under certainty

A lot of important insights are obtained from the standard consumer problem of an individual who maximizes utility by allocating a given amount of money income to a bundle of goods with benefits confined to the current period. In practice, however, most goods generate future consumption flows, and consumers regularly make choices to determine their future consumption by trading capital goods. Houses and cars are obvious examples of goods with future consumption flows, as are jars of honey or packets of biscuits. These goods are capital assets which, broadly defined, are goods that embody future consumption flows. They can be purchased from current income as a form of saving, or by borrowing against future income. Either way, they allow consumers to trade intertemporally.1 There are different ways of shifting consumption through time: some result from actions consumers take in isolation, such as storage and other private investment activities, while others arise from trading in the capital market. In Section 2.1 we follow the analysis in Hirshleifer (1965) by examining storage and other private investment opportunities in an autarky economy where consumers live in isolation on (imaginary) island economies. This conveniently separates capital investment undertaken directly by individuals themselves from investment made on their behalf by firms. Individuals can store goods, such as rice and apples, for future consumption, and they can also plant rice and apple trees to produce future consumption. Both are examples of private investment in capital assets. Additional opportunities arise when they can trade capital assets with each other in the capital market.2 The role of trade is examined in Section 2.2 by introducing it in stages. Atemporal trade is introduced to the autarky economy where consumers exchange goods in each time period but not over time.
This first step conveniently allows us to summarize intertemporal consumption choices using dollar values of expenditure in each period. It is the basis for the standard Fisher (1930) analysis of intertemporal consumption choices over current and future expenditure. Since consumers equate the marginal utility from spending (real) income on each good in their bundle, the composition of the consumption bundle can be suppressed in the analysis. Fiat money (currency) is then included to identify its role as a medium of exchange, and we do this by initially ruling out currency as a store of value, where the demand for currency is determined by its ability to reduce trading costs in each time period. The final extension allows consumers to also trade across time periods in a market economy, where some save while others borrow due to differences in their preferences, income flows and /or the rate of interest (which equates aggregate borrowing and saving in a competitive capital market). These intertemporal transfers can be made without affecting aggregate consumption. It only requires consumers to have different marginal valuations for future relative to current consumption. And there is even greater scope to trade intertemporally when aggregate consumption can be transferred into the future through storage and other forms of investment. Financial securities play a number of important roles in market economies, one of which is to reduce the costs of trading private property rights over resources. Most finance models ignore these costs because they are relatively small, but that diminishes their importance, particularly when there is uncertainty and asymmetric information between traders where property rights are more costly to trade. In a certainty setting with complete information, no transactions costs, and perfectly divisible capital goods, there is no role for financial securities.
In reality, however, goods are not perfectly divisible and they are costly to move about, and financial securities, and in particular fiat money, can dramatically lower these costs by reducing the number of physical exchanges of goods and services. Without financial assets consumers would exchange goods numerous times before finally converting them into their preferred consumption bundle. Since these assets provide holders with claims to underlying real resources, they reduce the number of times goods are transferred between consumers. When financial securities trade in a market economy we refer to it as an asset economy. Trading costs are introduced to the asset economy to illustrate what determines the optimal demand for financial securities in a certainty setting. These costs arise on atemporal and intertemporal trades, where different securities play different roles in reducing them. Fiat money is a liquid security used for relatively low-valued transactions, and is more widely accepted by traders. Its role as a store of value is undermined somewhat by the non-payment of interest on currency held for a period of time, so its primary role is as a medium of exchange. Consumers do carry currency between periods as a form of insurance when there is uncertainty, but most intertemporal trade is facilitated using financial securities. For example, firms issue bonds and shares to fund investment, particularly larger investments with economies of scale. These securities specify the terms and conditions that govern the resource transfers through time. In practice the most significant difference between bonds and shares, and the many other financial securities that trade, is the risk in their payouts. Indeed, a key role of financial securities is to facilitate trades over risky resource transfers, and we examine this in much greater detail in Chapters 3 and 4. 
In a certainty setting, however, financial securities summarize property right transfers between savers and borrowers, where savers forgo current consumption in return for future consumption, while the reverse applies for borrowers. In effect, the security is a contract that specifies the terms and conditions that govern these intertemporal resource transfers. They also provide a mechanism for aligning (at least partially) the incentives of firm managers (as agents) to the interests of their investors (the principals). The task is greatly simplified when investors all have the same objective function for firms. Irving Fisher (1930) made the important observation that consumers make their investment and intertemporal consumption choices separately when they are price-takers. In particular, they choose investment to maximize wealth and then choose intertemporal consumption to maximize utility. This is referred to as the Fisher separation theorem, and it provides price-taking firms with the simple and unanimous objective by its investors to maximize profit. We demonstrate this theorem and consider how economic analysis is affected when it fails to hold. The firm’s objective function is much more complicated when Fisher separation breaks down because investment choices depend on the intertemporal consumption preferences of its investors. Financial securities play an important role in aligning the interests of managers with those of their investors when there is uncertainty and asymmetric information. For example, ordinary shareholders generally have voting rights over the decisions taken by firm managers, where a shareholder, or group of shareholders, can have a controlling interest in a firm when they hold or can influence more than 50 per cent of its shares. Also, specialist traders in the financial market gather information to identify profitable opportunities when share prices deviate from their fundamental determinants.
On some occasions they purchase enough shares to change the way a firm operates by reorganizing or replacing its existing management, by merging it with another firm, or by liquidating its assets and closing it down entirely. That is why share prices provide important signals, not just about conditions that affect the underlying value of goods and services that firms produce, but also about the performance of their managers. Share prices fall when traders believe managers are performing poorly, and this acts as a discipline on them. Conversely, managers who perform well benefit their shareholders by driving up share prices. Indeed, share prices, and changes in them, provide important information to traders in capital markets. Expected inflation in the general price level can affect capital asset values by changing their real economic returns. These real effects originate in a number of different ways so we start by initially demonstrating the Fisher effect in Section 2.3. It is where nominal asset returns move with fully anticipated inflation to preserve their real returns, and it arises in a classical finance model where all nominal variables adjust freely in frictionless competitive markets and traders have common information. This establishes an important benchmark for identifying the real effects of inflation when the key assumptions in the model are relaxed. In particular, we consider heterogeneous expectations and the wealth effects from the non-payment of interest on currency. Arbitrage in competitive (frictionless) markets underpins all of the popular asset pricing models in finance. Indeed, it makes every security (in the same risk class) pay the same expected economic return in every time period, and makes any sequence of short rates of return consistent with the corresponding long rate of return over the same period. 
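The consistency between short and long rates mentioned above follows from compounding under no-arbitrage: the long rate over a period must reproduce the growth from rolling over the sequence of short rates. A minimal sketch (the rates are hypothetical):

```python
def long_rate_from_shorts(short_rates):
    """Per-period long rate implied by a sequence of one-period
    short rates under no-arbitrage: compound them, then annualize."""
    growth = 1.0
    for r in short_rates:
        growth *= 1 + r
    n = len(short_rates)
    return growth ** (1 / n) - 1

# Two successive one-period rates of 4% and 6% pin down the
# two-period (annualized) long rate: (1 + R)**2 = 1.04 * 1.06
R = long_rate_from_shorts([0.04, 0.06])
```

If the quoted long rate deviated from this value, borrowing at one rate and lending at the other would yield a riskless profit, which is precisely the arbitrage argument the text relies on.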
We demonstrate these propositions in Section 2.4 by pricing bonds and shares, and then use these prices to derive the Modigliani–Miller financial policy irrelevance theorems in the presence of taxes in a two-period certainty model. This analysis is extended later in Chapters 3 and 4 to accommodate uncertainty.

2.1 Intertemporal consumption in autarky

Fisher (1930) initially showed that price-taking agents use the net present value (NPV) rule to value capital assets when they can trade intertemporally in a competitive capital market. Before demonstrating this we start the analysis in an autarky economy to identify where the potential gains from trade come from in market economies. Storage and other private investment opportunities are also examined in this section.

2.1.1 Endowments without storage

Consider an autarky economy where each consumer (h = 1, ..., H) is endowed with N non-storable consumption goods $\bar{x}_t^h := \{\bar{x}_t^h(1), \ldots, \bar{x}_t^h(N)\}$ in each time period $t \in \{0, 1\}$. They consume bundles of these goods $x_t^h := \{x_t^h(1), \ldots, x_t^h(N)\}$ to maximize utility $u^h(x_0^h, x_1^h)$.3 Since each consumer effectively lives on an island their optimization problem, in the absence of storage and private investment opportunities (and with superscript h omitted), can be summarized as

\[
\max_{x_0, x_1} \left\{ u(x_0, x_1) \;\middle|\; x_0 - \bar{x}_0 \le 0,\; x_1 - \bar{x}_1 \le 0 \right\}. \tag{2.1}
\]

With non-satiation the constraints in (2.1) bind and the equilibrium outcome is degenerate in the sense that everyone consumes their endowments.5 Clearly, each consumer is like a shipwrecked Robinson Crusoe (but without Friday) living on a remote island where the consumption opportunities are as illustrated in Figure 2.1 for one of the commodities.
At the endowment point $\bar{x}$ the marginal rate of substitution between consumption of each good (n) tomorrow (t = 1) and today (t = 0) is the inverse of the slope of the indifference schedule, with

\[
MRS_{1,0}(n) = \frac{u_1'(n)}{u_0'(n)} = \frac{\lambda_1(n)}{\lambda_0(n)} \quad \forall n,6
\]

where $u_t'(n) = \partial u / \partial x_t(n)$ for $t \in \{0, 1\}$, and $\lambda_0(n)$ and $\lambda_1(n)$ are the Lagrange multipliers for the endowments that constrain consumption of each good in each time period. In the absence of trade these multipliers are equal to the marginal utility from consuming a good in each period, where $\lambda_1(n)/\lambda_0(n) = 1/[1 + \rho(n)]$ is the personal discount factor the consumer uses to compute the current value of good n in the second period. Without trade the discount rate $\rho(n)$ can differ across goods and across consumers. For example, the personal discount factor for good b in the second period (measured in units of good n) is $\lambda_1(b)/\lambda_0(n) = 1/[1 + \rho(b, n)]$, where it is possible that $\rho(b, n) \ne \rho(n)$. These differences signal potential gains from trading goods within each period and across time, as consumers have different valuations for future consumption flows. And when they do, they have different valuations for capital assets in the autarky economy. We can determine a consumer’s rate of time preference for each good by measuring the personal discount rate for the constant consumption bundles along a 45 degree line through the origin in Figure 2.1. Any deviation in the discount factor from unity along this line is solely due to the timing of consumption, where a positive rate of time preference indicates the consumer’s impatience for consuming the good, while the reverse applies when it is negative.7

Figure 2.1 Intertemporal consumption in autarky.

Thus, along the 45° line the discount rate is equal to the rate of time preference, but not otherwise (for strictly convex indifference curves).
2.1.2 Endowments with storage

Before introducing trade we look at private investment opportunities in the autarky economy, of which storage is an obvious example. Suppose one or more of the goods can be stored so that consumers can transfer current endowments to the second period; the problem for each consumer becomes

\[
\max_{x_0, x_1} \left\{ u(x_0, x_1) \;\middle|\; x_0 - \bar{x}_0 \le 0,\; x_1 - \bar{x}_1 - (\bar{x}_0 - x_0) \le 0 \right\}, \tag{2.2}
\]

with $\bar{x}_0(n) - x_0(n) > 0$ being the quantity of each good $n \in N$ stored. When consumers have standard preferences (to rule out corner solutions and non-satiation) the constraint on future consumption in (2.2) binds, but the constraint on current consumption may not. This is confirmed by using the first-order conditions to compute the discount factor, for each good (n), as

\[
MRS_{1,0}(n) = \frac{\lambda_1(n)}{\lambda_0(n) + \lambda_1(n)} \equiv \frac{1}{1 + \rho(n)}.
\]

Notice how both constraint multipliers appear in the denominator when goods are storable. In effect, second-period consumption is constrained by the endowments in both periods, where consumers may choose to store every storable good, some of them, or none at all. The decision is determined by their marginal valuations for these goods at the endowment point, where two possibilities arise:

i   Costless storage occurs, with $\bar{x}_0(n) - x_0(n) > 0$, if at the endowment point $MRS_{1,0}(n) > 1$. Since the constraint on current consumption in (2.2) is non-binding, we must have $\lambda_0(n) = 0$, so that $MRS_{1,0}(n) = 1$ at an interior solution, with $x_0(n) > 0$ and $x_1(n) > 0$. This equates the marginal utility from consuming the good in each period, as well as the marginal valuation of future consumption of all other goods measured in units of good n in the first period, where $MRS_{1,0}(k, n) = 1$ for all n, k.
ii  No storage occurs, with $\bar{x}_0(n) - x_0(n) = 0$, if at the endowment point $MRS_{1,0}(n) \le 1$.
This is where consumers have a higher marginal valuation for the good (n) in the first period, and would therefore prefer to transfer some of the endowment from the second period (or consume their endowment when $MRS_{1,0}(n) = 1$). The consumption opportunities for a storable good (n) are illustrated in Figure 2.2 by the frontier DEF. Storage allows the consumer to trade from the endowment point E along the segment DE of the frontier, which has slope −1 for costless storage. The good is stored when the slope of the indifference curve at the endowment point is flatter than line DE, reflecting a higher marginal valuation for it in the second period. Thus, storage allows consumers to exploit some of the potential gains from trade in the autarky economy when they have higher relative marginal valuations for consuming goods in the second period. But they cannot exploit potential gains from trade when they have higher relative marginal valuations for goods in the first period or when their marginal valuations for goods in each period differ from those of other consumers.

Figure 2.2 Costless storage in autarky.

Box 2.1 Storage: a numerical example

Brad Johnson has 400 kg of rice which he can consume today (x0) or store (z0) and consume in 12 months’ time (x1). He has no other income in each of the two periods and there are no storage costs. When he chooses consumption to maximize the utility function $\ln x_0 + 0.98 \ln x_1$, Brad will consume less rice next year than today, where his optimal consumption satisfies $x_1^* = 0.98 x_0^*$. Using the budget constraint when it binds, with $x_0 = 400 - x_1$, he chooses $x_0^* \approx 202$ kg and $x_1^* \approx 198$ kg, where the difference is due to his marginal impatience for current consumption captured in the coefficient 0.98 in the utility function.
When Brad consumes on the 45° line his marginal rate of substitution between consumption today and next year is $MRS_{0,1} = 0.98 = 1/(1 + \rho)$, where $\rho \approx 0.02$ is his marginal rate of time preference. Since he has a higher marginal valuation for current consumption his optimal consumption choice, which is illustrated in the diagram below at point A, lies to the right of the 45° line. Clearly, if Brad’s marginal rate of time preference was zero he would consume on the 45° line.

[Box 2.1 diagram: point A on indifference curve uA, to the right of the 45° line, with slope −1.02 on the 45° line and slope −1 along the budget line from the 400 kg endowment; x0 = 202.]

In reality, storage is costly due to wastage and the costs of providing storage facilities, and these costs contract segment DE of the consumption frontier in Figure 2.2. If they are constant marginal costs the segment DE of the consumption frontier gets flatter around point E, while fixed costs shift line DE to the left.

Box 2.2 Costly storage: a numerical example

When marginal storage costs are introduced into the optimization problem for Brad Johnson in Box 2.1 above, his budget constraint contracts around the endowment point and he consumes even less rice next year. Recall that with costless storage he consumes less rice in the second period due to his positive rate of time preference. The effects of storage costs are illustrated by introducing 2 per cent wastage, so that his optimal consumption choice now satisfies $x_1^* = 0.9604 x_0^*$. Using the budget constraint in the presence of these costs when it binds, with $x_0 = 400 - x_1/0.98$, he consumes $x_0^* \approx 202$ kg and $x_1^* \approx 194$ kg. This outcome is illustrated in the diagram below at point B which is vertically below point A where he consumed previously when there were no storage costs. There is no change in current consumption here because the income effect offsets the substitution effect, where the change in real income falls solely on consumption next period, which falls by the storage costs of 202 × 0.02 ≈ 4 kg.
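The closed-form solutions in Boxes 2.1 and 2.2 follow directly from the log utility function, and can be checked numerically. The sketch below reproduces both sets of figures with a single function that nests costless storage as zero wastage.

```python
def storage_choice(endowment, beta, wastage=0.0):
    """Brad's problem from Boxes 2.1-2.2: max ln(x0) + beta*ln(x1)
    subject to x0 + x1/(1 - wastage) = endowment, since stored rice
    shrinks by the wastage fraction. Log utility gives a closed form."""
    keep = 1 - wastage               # fraction of stored rice that survives
    # FOC: x1 = beta * keep * x0; budget then gives x0*(1 + beta) = endowment
    x0 = endowment / (1 + beta)
    x1 = beta * keep * x0
    return x0, x1

x0a, x1a = storage_choice(400, beta=0.98)                # Box 2.1: ~202, ~198
x0b, x1b = storage_choice(400, beta=0.98, wastage=0.02)  # Box 2.2: ~202, ~194
```

Note that x0 is unchanged by the wastage, reproducing the box's observation that the income effect offsets the substitution effect so the real income loss falls entirely on next year's consumption.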
[Box 2.2 diagram: point B vertically below point A at x0 ≈ 202, on indifference curve uB below uA; the budget line from the 400 kg endowment now has slope −0.98.]

2.1.3 Other private investment opportunities

Consumers have other ways of converting current endowments into future consumption goods, and they differ from storage by providing the possibility of growth. In other words, they have private investment opportunities that convert a given quantity of current consumption goods into a larger quantity of future consumption goods. To accommodate them we define the second-period outputs for each consumer (h) as $y_1^h(z_0^h) := \{y_1^h(1), \ldots, y_1^h(N)\}$, which are produced by inputs of current goods, $z_0^h := \{z_0^h(1), \ldots, z_0^h(N)\}$, with $z_0^h(n) = \bar{x}_0^h(n) - x_0^h(n)$ being the input of each good n at time 0. This production technology is general enough to allow multiple outputs from single inputs and vice versa, as well as single outputs from single inputs. But as a way to bound equilibrium outcomes, and to make them unique for each consumer, we follow standard practice and assume the production possibility sets are strictly convex. In other words, there is a diminishing marginal productivity of investment and no fixed costs, where the problem for each consumer becomes

\[
\max_{x_0, x_1} \left\{ u(x_0, x_1) \;\middle|\; x_0 + z_0 - \bar{x}_0 \le 0,\; x_1 - \bar{x}_1 - y_1 \le 0 \right\}. \tag{2.3}
\]

For optimally chosen consumption (with standard preferences) the personal discount factors for each good (n) are

\[
MRS_{1,0}(n) = \frac{\lambda_1(n)}{\lambda_0(n) + \sum_j \lambda_1(j)\, MP(j, n)} \equiv \frac{1}{1 + \rho(n)},
\]

where $MP(j, n) = \partial y_1(j)/\partial z_0(n)$ is the marginal increase in the output of good j from investing another unit of good n.9 This expression is similar to the discount factor with costless storage, where once again private investment opportunities raise the marginal utility of the current endowments when they are consumed in the second period. It is captured here by the term in the denominator that measures the marginal valuation of the goods used as inputs, $\sum_j \lambda_1(j)\, MP(j, n)/\lambda_1(n)$.
Consumers will invest when, at the endowment point, they have a higher marginal valuation for future consumption, with MRS1,0(n) > λ1(n)/λ0(n). The consumption opportunity set in the autarky economy with private investment is illustrated in Figure 2.3 when good n is the only input used to produce itself in the second period. You could think of it as corn planted now and harvested in the future. The non-linear segment DE of the consumption opportunity frontier maps the extra future consumption from private investment (with diminishing marginal productivity) onto the endowment point E.

Box 2.3 Private investment opportunities: a numerical example
Suppose we reconsider the consumption choices made by Brad Johnson in Box 2.1 by replacing storage with private investment, where he can plant z0 kg of rice today and harvest y1 = 30√z0 kg in 12 months' time. This technology has a positive marginal product of dy1/dz0 = 15/√z0 kg, which diminishes with investment. Now his optimal consumption choices satisfy 0.98(x0*/x1*) = √z0/15, and are solved using the budget constraints on current and future consumption when they bind, with x0 = 400 − z0 and x1 = 30√z0, where x0* ≈ 269 kg and x1* ≈ 344 kg. This outcome is illustrated in the diagram below at point C where his indifference curve uC has the same slope as his investment opportunity set when he invests z0* ≈ 131 kg. Since there is a positive marginal product from initial investment, the extra real income raises Brad's utility above the levels achieved through storage earlier in Boxes 2.1 and 2.2, where we have uC > uA > uB.

[Diagram: Brad invests z0* ≈ 131 kg and consumes x0* ≈ 269 kg at point C on indifference curve uC, which is tangent to the investment opportunity frontier with slope ≈ −1.31.]

Even though there is no role for financial securities in the autarky economy the analysis has identified intertemporal consumption opportunities for individual consumers prior to trade and isolated the source of any potential gains from trade when consumers have different marginal valuations for goods in and between each time period.
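Box 2.3 can be verified the same way, again under the assumed log utility u = ln(x0) + 0.98 ln(x1). Substituting the binding constraints into the tangency condition reduces the problem to one equation in z0.

```python
import math

# Same assumed log utility as before: u = ln(x0) + b*ln(x1), b = 0.98.
b = 0.98
endow = 400.0

# Technology: plant z0 kg today, harvest y1 = 30*sqrt(z0) next year.
# Tangency b*x0/x1 = sqrt(z0)/15 with x0 = endow - z0 and x1 = 30*sqrt(z0)
# reduces to b*(endow - z0) = 2*z0, so:
z0 = endow * b / (2 + b)            # ~131.5 kg planted
x0 = endow - z0                     # ~268.5 kg consumed today
x1 = 30 * math.sqrt(z0)             # ~344 kg consumed next year
slope = 15 / math.sqrt(z0)          # marginal product at the optimum, ~1.31
```

The results match the box's z0* ≈ 131 kg, x0* ≈ 269 kg and x1* ≈ 344 kg to within a kilogram, and the marginal product at the optimum reproduces the slope of ≈ 1.31 in the diagram.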
Figure 2.3 Private investment opportunities in autarky.

2.2 Intertemporal consumption in a market economy

We introduce trade into the autarky economy in this section by allowing consumers to exchange goods in a market economy. Initially they only trade their endowments within each time period, before eventually trading over time as well. This allows us to separate the roles of fiat currency (notes and coins) and financial securities as mediums of exchange and stores of value, where financial assets are used to reduce the costs of exchanging goods and to transfer expenditure between time periods.

2.2.1 Endowments with atemporal trade

Consumers who can barter and exchange goods in frictionless competitive markets have the same marginal valuations for each good in each time period for optimally chosen consumption. But without intertemporal trade they can have different marginal valuations for goods between time periods, in which case they will use different discount rates on future consumption flows when valuing capital assets. We assume all trades within each period are at competitively determined equilibrium commodity prices, where pt := {1, …, pt(N)} is the set of relative prices for the N goods at time t ∈ {0, 1}, with good 1 chosen as numeraire. In this economy the consumer problem can be summarized as

\[ \max \left\{ u(x_0, x_1) \;\middle|\; X_0 \le I_0,\;\; X_1 \le I_1 \right\}, \tag{2.4} \]

where the market values of consumption expenditure and income are Xt ≡ pt xt and It ≡ X̄t ≡ pt x̄t, respectively. Any combination of the N goods can be purchased in each time period, subject to consumption expenditure being no greater than the market value of the endowments. And with atemporal trade these constraints apply to total expenditure and not the endowment of each good.
Thus, there is a single constraint multiplier on the market value of income in each time period, rather than a separate multiplier for each good in each period, as was the case previously in the autarky economy.

Definition 2.1 The endowment economy with atemporal trade is described by (u, x̄), where u is the set of consumer utility functions and x̄ the set of current and future endowments for all H consumers. A competitive equilibrium in this economy is characterized by the relative commodity prices p0* and p1* such that:
i x0h*(n) and x1h*(n), for all n, solve the consumer problem in (2.4) for all h;
ii the goods markets clear in each period t ∈ {0, 1}, with Σh x̄th(n) = Σh xth*(n) for all n.

It is a unique equilibrium outcome when consumers have strictly convex indifference sets over bundles of consumption goods, but is unlikely to be Pareto efficient. All consumers have the same marginal rates of substitution between goods in the same time period when they can trade atemporally in frictionless competitive markets, with

\[ MRS_0^h(n,k) = \frac{p_0(n)}{p_0(k)} \quad \forall n, k, h, \]

but they can have different marginal rates of substitution for goods in different time periods when they cannot trade intertemporally, with

\[ MRS_{1,0}^h(n) = \frac{\lambda_1^h\, p_1(n)}{\lambda_0^h\, p_0(n)} \quad \forall n, \]

where λ1h/λ0h are personalized discount factors on future income.¹⁰ Once consumers exhaust the gains from atemporal trade we can reformulate the consumer problem over a single representative commodity (income), where the indirect utility function can be solved as¹¹

\[ v(I) := \max \left\{ u(x_0, x_1) \;\middle|\; X_0 \le I_0,\;\; X_1 \le I_1 \right\}, \tag{2.5} \]

with I = {I0, I1}. It is a maximum value function with income optimally allocated to bundles of goods in each period. In many applications we are not interested in the composition of these bundles, but rather in the value of consumption expenditure in each period, where the consumption opportunities are illustrated in Figure 2.4.
This is the familiar analysis used by Fisher (1930) and Hirshleifer (1970). Clearly, consumption must be at the endowment when goods cannot be transferred between the two periods through storage, investment opportunities or trade. At this equilibrium allocation the personal discount rate on future income is the inverse of the slope of the indifference curve at the endowment point I, and it can differ across consumers with different endowments and preferences. Before allowing consumers to trade intertemporally we consider the role of (fiat) money as a medium of exchange.

Figure 2.4 Consumption opportunities with income endowments and atemporal trade.

2.2.2 Endowments with atemporal trade and fiat money

Governments are monopoly suppliers of fiat money (notes and coins), which has two main roles − one is to reduce trading costs, while the other is to provide traders with a store of value (in nominal terms at least). By using currency, traders can significantly reduce the number of costly physical exchanges of goods and services in each time period, and we focus on this role by assuming currency cannot be used as a store of value. In other words, any currency the government supplies to the private economy by purchasing goods is redeemed in each period immediately after private trades are consummated. We avoid the need for any taxes by assuming currency is costless to supply.¹² Thus, there are no resource transfers through the government budget in this endowment economy with atemporal trade.

We define trading costs (measured in units of numeraire good 1) for each consumer as a constant proportion τh(n) of the market value of each good n ∈ N traded. They are the same in each time period t ∈ {0, 1}, with τh(n) > 0 for purchases when xth(n) > x̄th(n), and τh(n) < 0 for sales when xth(n) < x̄th(n).
It is assumed here that they are strictly decreasing functions of the currency used in each period (mth), where the problem for each consumer in the endowment economy with atemporal trade and fiat currency is

\[ \max \left\{ u(x_0, x_1) \;\middle|\; p_0 x_0 + \tau D_0 - p_0 \bar{x}_0 \le 0,\;\; p_1 x_1 + \tau D_1 - p_1 \bar{x}_1 \le 0 \right\}, \tag{2.6} \]

with τ := {τ(1), …, τ(N)} being the trading costs for each good and Dt := {Dt(1), …, Dt(N)} the net demands for them, where Dt(n) = pt(n)[xt(n) − x̄t(n)] for all n at each time t ∈ {0, 1}.

Definition 2.2 The endowment economy with atemporal trade and (fiat) money is described by (u, x̄, m), where m is the set of total currency supplied in each time period. A competitive equilibrium in this economy is characterized by the relative commodity prices p0* and p1* such that:
i x0h*(n) and x1h*(n), for all n, solve the consumer problem in (2.6) for all h;
ii the goods markets clear in each time period t ∈ {0, 1}, with Σh x̄th(n) = Σh xth*(n) for all n ≠ 1, and Σh x̄th(1) = Σh xth*(1) + Σh τDth.

As noted above, there is no government budget constraint in this economy as currency is costless to supply and no resources are transferred from consumers as seigniorage. That is the reason why currency does not appear directly in the budget constraints in (2.6), but indirectly through its impact on trading costs, where the final market-clearing condition in Definition 2.2 (ii) equates the sum of trading costs and consumption of good 1 to the aggregate endowment of good 1. If trading costs are minimum necessary costs of trade they do not distort equilibrium outcomes, even though consumers cannot equate their marginal rates of substitution for goods in the same time period. For optimally chosen consumption in each period, we have

\[ MRS_0^h(n,k) = \frac{p_0(n)[1 + \tau(n)]}{p_0(k)[1 + \tau(k)]} \quad \forall n, k, h, \]

where the trading costs do not cancel, even when they are the same for each good, as one may be purchased and the other sold.
But if we deduct these costs from the marginal rates of substitution by shifting them to the left-hand side of the expression above, consumers will have the same net marginal rates of substitution for goods in the same time period. This is a signal of efficiency in the conventional Paretian sense when they are minimum necessary costs of trade. However, consumers can have different marginal rates of substitution for goods in different time periods, even without trading costs, when they cannot trade intertemporally. This was confirmed in the previous section where, in the presence of trading costs, we now have

\[ MRS_{1,0}^h(n) = \frac{\lambda_1^h\, p_1(n)[1 + \tau(n)]}{\lambda_0^h\, p_0(n)[1 + \tau(n)]} \quad \forall n. \]

Optimally chosen currency demands in each time period satisfy

\[ -\lambda_t \sum_n \frac{\partial \tau(n)}{\partial m_t}\, p_t(n)\,[x_t(n) - \bar{x}_t(n)] = 0 \quad \forall t \in \{0,1\} \text{ and } \forall h. \]

Since there are no private costs to consumers from using currency in this setting they exhaust the benefits, with ∂τ(n)/∂mt = 0 for all n. But any quantity of (almost) perfectly divisible currency will satisfy consumers in a certainty setting where nominal prices can be costlessly adjusted to preserve the market-clearing relative commodity prices. Thus, there is a classical dichotomy between the real and nominal variables in the economy, where the reduction in trading costs is independent of the quantity of currency supplied. When currency is held as a store of value there is an implicit tax on currency holders from the non-payment of interest which transfers real resources as seigniorage to the government budget, where fully anticipated changes in the supply of currency will have real effects through their impact on the nominal rate of interest. These wealth effects are examined later in Section 2.5.
2.2.3 Endowments with full trade

In this section consumers can trade within each period (atemporally) and across time (intertemporally), where initially goods are traded intertemporally by exchanging forward contracts that are promises to deliver specified quantities of goods. The analysis is extended to an asset economy by introducing a financial security so that consumers can transfer income between the two time periods. This allows us to compare equilibrium outcomes in the exchange and asset economies when full trade is possible to confirm the observation by Arrow (1953) that financial securities significantly reduce the number of choice variables for consumers in the first period. In particular, they choose the market value of their future consumption bundle without choosing its composition until the second period when their securities are liquidated.

Consider the exchange economy when consumers can trade intertemporally by exchanging forward contracts f h(n) for each good n ∈ N. The buyer receives a unit of good n in the second period for each contract purchased, with f h(n) > 0 for the buyer and f h(n) < 0 for the seller. These contracts trade in the first period at relative prices pf := {pf(1), …, pf(N)}, where the consumer problem in the endowment economy with full trade and forward contracts is

\[ \max \left\{ u(x_0, x_1) \;\middle|\; p_0 x_0 - p_0 \bar{x}_0 + p_f f \le 0,\;\; p_1 x_1 - p_1(\bar{x}_1 + f) \le 0 \right\}, \tag{2.7} \]

with f h := {f h(1), …, f h(N)} being the forward contracts traded.¹³

Definition 2.3 The market economy with endowments and forward commodity contracts is the triplet (u, x̄, f), where f is the set of forward contracts for H consumers. A competitive equilibrium in this economy is characterized by the relative commodity prices p0* and p1* and relative forward contract prices pf* such that:
i x0h*(n), x1h*(n) and f h*(n), for all n, solve the consumer problem in (2.7) for all h = 1, ...
, H;
ii the goods markets clear in each time period t ∈ {0, 1}, with Σh x̄th(n) = Σh xth*(n) for all n, and the market for forward contracts clears, with Σh f h*(n) = 0 for all n.

Optimally chosen forward contracts (for an interior solution) satisfy the first-order conditions for each good n, with

\[ -\lambda_0^h\, p_f(n) + \lambda_1^h\, p_1(n) = 0 \quad \forall n, h. \]

In the absence of transactions costs and taxes consumers use the same discount factor (λ1h/λ0h) to value income in the second period, where the prices of forward contracts satisfy

\[ p_f(n) = p_1(n)\, \frac{\lambda_1}{\lambda_0} \quad \forall n. \]

Since they can now trade all goods intertemporally they have the same marginal rates of substitution for goods in different time periods, with

\[ MRS_{1,0}(n,k) = \frac{p_f(n)}{p_0(k)} \quad \forall n, k, h. \]

Thus, the equilibrium outcome is Pareto optimal. An implicit market rate of interest is embedded in these pricing relationships, and we confirm this by allowing consumers to trade ah units of a risk-free security in the first period at market price pa per unit, with ah > 0 for buyers and ah < 0 for sellers. The current value of the asset traded by each consumer is V0h = pa ah, and it has payouts in the future of ahR1 ≡ V0h(1 + i), where i is the risk-free interest rate and R1 = pa(1 + i) the gross payout on each unit of current income invested in the security. Now the problem for each consumer in the asset economy is summarized in (2.5) when income in each time period is defined as¹⁴

\[ I_0 \equiv \bar{X}_0 - V_0, \qquad I_1 \equiv \bar{X}_1 + a R_1. \tag{2.8} \]

In this setting consumers determine the market value of their consumption expenditure in each time period by trading the risk-free security.

Definition 2.4 An asset economy is a market economy with a financial security.

Definition 2.5 The asset economy with endowments is described by (u, x̄, a), where a is the set of asset holdings of all H consumers.
A competitive equilibrium in this economy is characterized by relative commodity prices p0* and p1*, a security price pa*, and an interest rate i* such that:
i x0h*(n) and x1h*(n), for all n, and ah* solve the consumer problem in (2.8) for all h;
ii the goods markets clear in each time period t ∈ {0, 1}, with Σh x̄th(n) = Σh xth*(n) for all n, and the capital market clears, with Σh ah* = 0.

Without providing an explicit reason for using a financial security rather than forward contracts, the two economies in Definitions 2.3 and 2.5 above are identical. Indeed, they have identical real equilibrium outcomes where consumers choose the same consumption bundles in each period and have the same utilities. A proper description of the economy would require the introduction of trading costs that are reduced by trading forward contracts and the financial security. Realistically, both could trade when they have different marginal effects on these costs. Thus, the asset economy in Definition 2.5 implicitly assumes trading costs can be costlessly eliminated by using the financial security without forward contracts. It is certainly plausible that the financial security will reduce these costs more than forward contracts in most circumstances. When using forward contracts consumers trade a separate one for each good to determine the composition of their future consumption bundle. However, when using the financial security they choose the market value of their future consumption bundle, while its composition is determined in the second period using the security payout. This potentially reduces the number of choice variables in the first period from 2N with forward contracts to N + 1 with the financial security. Later in Chapters 3 and 4 we extend the analysis to accommodate uncertainty where consumers choose portfolios of securities to spread risk.
Despite the additional security trades, however, they still have fewer choice variables in the asset economy, where the optimal security trade by each consumer solves the first-order condition

\[ -\lambda_0^h\, p_a + \lambda_1^h\, R_1 = 0 \quad \forall h, \]

where the discount factor on future income becomes λ1/λ0 = 1/(1 + i), which is the same for all consumers.¹⁵ When the constraints in (2.8) bind we obtain the familiar net present value rule for pricing capital assets, where wealth is the discounted present value of income, with

\[ W_0 = I_0 + \frac{I_1}{1+i} = \bar{X}_0 + \frac{\bar{X}_1}{1+i}. \]

This allows us to summarize the consumer problem in the endowment economy with frictionless competitive markets as

\[ \max_{\{a\}} \left\{ v(I) \;\middle|\; W_0 = I_0 + \frac{I_1}{1+i} \right\}. \tag{2.9} \]

The asset choice distributes income across the two periods, and is ultimately determined by consumer preferences for the goods purchased in each period. The consumption opportunities are illustrated in Figure 2.5, where the slope of the budget constraint determines the rate at which income can be transferred between the two periods by trading the security. Whenever consumers save a dollar of current income their future consumption expenditure rises by 1 + i dollars, while borrowing a dollar of future income raises their current consumption expenditure by 1/(1 + i) dollars. The budget constraint is linear as consumers are price-takers. When they are large in the capital market, and the interest rate rises with borrowing and falls with saving, the budget constraint is concave to the origin through the endowment point. Any bundle along (or inside) the budget constraint is feasible, where optimally chosen intertemporal consumption expenditure satisfies

\[ MRS_{1,0}(I) = \frac{v_1}{v_0} = \frac{1}{1+i}. \]

Figure 2.5 Consumption opportunities with income endowments, atemporal trade and a competitive capital market.
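The wealth identity above can be illustrated with a small sketch (the numbers are hypothetical, chosen only for illustration): saving s dollars today moves (1 + i)s dollars of consumption into the second period while leaving wealth unchanged.

```python
# Hypothetical income endowment and interest rate (illustrative only).
I0, I1, i = 100.0, 52.5, 0.05

W0 = I0 + I1 / (1 + i)              # wealth = present value of income, 150

# Any plan reached by trading the risk-free security satisfies the same
# wealth constraint; negative s means borrowing against future income.
for s in (-30.0, 0.0, 20.0):
    X0, X1 = I0 - s, I1 + (1 + i) * s
    assert abs(X0 + X1 / (1 + i) - W0) < 1e-9
```

The loop confirms that trading the security only slides consumption along the budget line with slope −(1 + i); it never changes W0.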
Three examples of the way consumers trade in the capital market are illustrated in Figure 2.5 with a saver at point S, a borrower at point B, and a non-trader who consumes the endowment at point E. The final equilibrium outcome for each consumer is determined by (i) the income endowments, (ii) the market rate of interest, and (iii) preferences. If the income endowments are skewed toward the first period, or interest rates are relatively high, consumers are more likely to save, and vice versa.

Box 2.4 Trade in a competitive capital market: a numerical example
Earlier in Box 2.1 we looked at the consumption choices made by Brad Johnson in a two-period setting. He has an endowment of 400 kg of rice today that could be transferred to the second period using storage and other private investment opportunities. Now we consider his intertemporal consumption choices when he can trade in a frictionless competitive capital market at a risk-free interest rate of 5 per cent over the year (but without storage and other private investment opportunities). By trading a risk-free security (a0) he can transfer rice into the second period, where the constraints on his rice consumption in each period are x0 ≤ 400 − a0 and x1 ≤ 1.05a0, respectively. Based on his preferences in Box 2.1, his optimal consumption choices satisfy 0.98x0* = x1*/1.05, and they are solved using the budget constraints when they bind, with x0* ≈ 202 kg and x1* ≈ 208 kg. Thus, his optimal demand for the risk-free security is a0* = 400 − x0* ≈ 198 kg of rice. This outcome is illustrated in the diagram below at point D where his indifference curve uD has a slope equal to −1.05, which is also the rate at which he can transfer rice between the two periods by trading the risk-free security. Notice how current consumption is the same as it was with costless storage in Box 2.1.
Based on his preferences all the extra real income from interest received on saving is allocated to future consumption, where his utility (uD) is higher than his utility in autarky (uA) at point A in Box 2.1.

[Diagram: Brad consumes at point D on indifference curve uD with x0* ≈ 202, where the budget line has slope −1.05 and vertical intercept 420.]

In a competitive capital market the interest rate equates aggregate saving and borrowing, with Σh Sh = Σh Bh, and since saving and borrowing decisions are determined by the distribution of income endowments over consumers in each time period and by their individual preferences, they also determine the interest rate in a closed economy. A stable equilibrium adjustment mechanism drives down the interest rate when Σh Sh > Σh Bh, while the reverse applies when Σh Sh < Σh Bh. It seems reasonable to expect higher interest rates will raise saving and reduce borrowing, which is what we normally observe in aggregate data, but it may not apply for every individual consumer due to the role of income effects.

To see this, consider the effects of raising the interest rate to i1 for a consumer with standard convex preferences who initially saves at point A in Figure 2.6. The substitution effect unambiguously raises saving in the move from A to C. But the income effect works against the substitution effect when current consumption is a normal good because a higher interest rate generates additional real income at each level of saving. This moves the consumption bundle to the right of point C onto the new budget line through E. If the income effect is smaller than the substitution effect in absolute value terms then saving rises above its initial level at S0, but if it is larger in absolute value terms saving falls below S0. This seemingly anomalous case is more likely at higher initial saving S0 because the income effect is larger.

Figure 2.6 The relationship between saving and the interest rate.
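Box 2.4 can be checked under the same assumed log utility as the earlier boxes. Solving the binding constraints with the tangency condition gives Brad's consumption in each period and his security demand a0* = 400 − x0*.

```python
# Same assumed log utility: u = ln(x0) + b*ln(x1), b = 0.98.
b, i, endow = 0.98, 0.05, 400.0

# Binding constraints: x0 = endow - a0 and x1 = (1 + i)*a0.
# Tangency 0.98*x0 = x1/1.05 gives x1/(1 + i) = b*x0, so a0 = b*x0
# and the budget collapses to x0*(1 + b) = endow.
x0 = endow / (1 + b)
x1 = (1 + i) * b * x0
a0 = endow - x0                 # demand for the risk-free security
print(round(x0), round(x1), round(a0))   # 202 208 198
```

Current consumption matches the costless-storage outcome of Box 2.1 exactly, while the 5 per cent interest lifts second-period consumption to about 208 kg.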
Clearly, saving always rises when current consumption expenditure is inferior, which seems unlikely. In this setting with income endowments the necessary condition for a higher interest rate to reduce saving is for current consumption expenditure to be normal, while the sufficient condition is that the income effect should be larger than the substitution effect in absolute value terms.

For a consumer who initially borrows, the higher interest rate will always reduce borrowing when current consumption expenditure is normal, as the income and substitution effects work in the same direction. Borrowing can only rise in this setting when current consumption expenditure is inferior. These cases can be illustrated using Figure 2.7. After the interest rate rises to i1 the consumer substitutes from A to C and borrowing declines. As real income falls the new consumption bundle must lie on the new budget constraint represented by the dotted line through the endowment point E. If it lies above the endowment point the consumer becomes a saver where borrowing unambiguously falls. But if current consumption expenditure is inferior the new bundle lies below point D on the new budget constraint. For borrowing to rise the income effect must be larger than the substitution effect. But this case seems improbable as current consumption is unlikely to be inferior.

Figure 2.7 The relationship between borrowing and the interest rate.

2.2.4 Asset economy with private investment opportunities

Most consumers can determine the size and timing of their income stream in future time periods through the labour−leisure choice and by investing in human capital. When older consumers leave the workforce, however, they have less scope to do this and the analysis with fixed income endowments in the previous section is perhaps more appropriate.
In contrast, younger consumers make private investment choices that will determine the type of labour they can supply in future years; an obvious example is education that is undertaken to increase labour productivity. As a consequence, they are making choices not only about the type of work they want to do, but also about the amount of wage and salary income they want to earn in future years. There are occasions where consumers invest in education to achieve higher job satisfaction rather than higher wages, but we will abstract from that issue here.

Using the production technology defined earlier in the autarky economy, the market value of output in the second period for each consumer is Y1h ≡ Σn p1(n)y1h(n), which is produced by inputs in the first period with a market value of Z0h ≡ Σn p0(n)z0h(n). Once again, we assume the production opportunity set is strictly convex, where the problem for each consumer in the asset economy with private investment is summarized in (2.5) when income in each period is defined as

\[ I_0 \equiv \bar{X}_0 - V_0 - Z_0, \qquad I_1 \equiv \bar{X}_1 + a R_1 + Y_1. \tag{2.10} \]

Definition 2.6 An asset economy with private investment opportunities is described by (u, x̄, y(H), a), where y(H) is the set of private production opportunities for the H consumers. A competitive equilibrium in this economy is characterized by relative commodity prices p0* and p1*, a security price pa* and an interest rate i* such that:
i x0h*(n), x1h*(n) and y1h*(n), for all n, and ah* solve the consumer problem in (2.10) for all h;
ii the goods markets clear in each time period, with Σh x̄0h(n) = Σh x0h*(n) + Σh z0h*(n) for all n in the first period, Σh x̄1h(n) + Σh y1h*(n) = Σh x1h*(n) for all n in the second period, and the capital market clears, with Σh ah* = 0.

For optimally chosen investment we have λ0 = λ1VMP, where VMP = 1 + iZ is the value (at market prices) of the marginal product of capital investment; it is 1 plus the marginal rate of return on investment (iZ). Since investors can equate their discount factors on second-period consumption by trading in the capital market, with λ1/λ0 = 1/(1 + i), we can write the
Since investors can equate their discount factors on secondperiod consumption by trading in the capital market, with λ1/λ0 = 1/(1 + i), we can write the Investment decisions under certainty I1 P* Slope = −VMPZ = −(1+iZ) Slope = −(1+i ) – x1 E Z *0 – x0 – W0 Figure 2.8 Consumption opportunities in the asset economy with private investment. condition for optimally chosen investment as VMP = 1 + i. This tells us that consumers maximize wealth by equating the marginal return on investment to the market rate of interest, with iZ = i. The consumption opportunities are illustrated in Figure 2.8. Wealth in the absence of investment is equal to the discounted present value of the endowment at W0 . Since the marginal return on investment exceeds the market interest rate at the endowment point E, with iZ > i, the consumer can raise wealth to W0 by investing Z *0 (units of the numeraire good). At point P * the marginal return from investment matches the market interest rate, which is the opportunity cost. Investing beyond point P * would lower wealth because the capital market pays a higher rate of return. The optimal consumption bundle in the income space lies along the budget line through the production point P *, where from the first-order conditions on the consumer problem in (2.10) we have MRS1,0 ( I ) = ν1 1 1 = = . ν 0 1 + i 1 + iz In this setting consumers separate their investment and consumption choices; they choose investment to maximize wealth which they then allocate to intertemporal consumption by trading in the capital market to maximize utility. Any other level of investment above or below Z 0* in Figure 2.9 reduces wealth by moving the budget constraint down in a parallel fashion. In other words, investment choices have pure income effects on price-taking consumers so that maximizing wealth will also maximize their utility. 
Examples of non-optimal investment choices are illustrated in Figure 2.9 by the large black dots, where the new budget constraint is the dotted line parallel to the budget line through P* which maximizes wealth. Investment only has income effects here because consumers are price-takers in the capital market. This is referred to as the Fisher separation theorem (Fisher 1930), and it has important implications for the objective functions of firms when they undertake investment on behalf of consumers.

Figure 2.9 Optimal private investment with a competitive capital market.

Box 2.5 Private investment and trade: a numerical example
We now extend the analysis in Box 2.4 by allowing Brad Johnson to transfer rice to the second period by planting it on his farm using the technology in Box 2.3 and trading the risk-free security in a competitive capital market at a 5 per cent interest rate over the year. This means he has a single budget constraint, which in present value terms is

\[ x_0 + \frac{x_1}{1.05} = 400 - z_0 + \frac{30\sqrt{z_0}}{1.05} = W_0. \]

Brad's wealth is maximized when private investment equates the marginal product of planting to 1 + i, with 15/√z0* = 1.05, where z0* ≈ 204 kg with W0* ≈ 604. Based on his preferences in Box 2.1, his rice consumption in each period is chosen optimally when it satisfies 0.98x0* = x1*/1.05. Using the budget constraint with maximized wealth, W0* = x0* + x1*/1.05 ≈ 604, we have x0* ≈ 305 kg and x1* ≈ 314 kg. These choices are illustrated in the diagram below, where wealth is maximized at point F on the investment frontier with consumption at point E, which is on a higher indifference curve uE than point D without private investment in Box 2.4.
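A quick check of Box 2.5 under the same assumed log utility as the earlier boxes. Wealth maximization requires the marginal product 15/√z0 to equal 1 + i = 1.05 (the condition VMP = 1 + i derived in the text), which pins down the investment level before any preference information is used.

```python
import math

# Assumed log utility u = ln(x0) + b*ln(x1) with b = 0.98; interest 5%.
b, i, endow = 0.98, 0.05, 400.0

def wealth(z0):
    # Present value of consumption financed by planting z0 kg today.
    return endow - z0 + 30 * math.sqrt(z0) / (1 + i)

# Wealth-maximizing investment: 15/sqrt(z0) = 1 + i.
z0 = (15 / (1 + i)) ** 2            # ~204 kg planted
W0 = wealth(z0)                     # ~604 kg in present value

# Fisher separation: consumption is then chosen along the wealth line,
# with x1 = (1 + i)*b*x0 and x0 + x1/(1 + i) = W0.
x0 = W0 / (1 + b)                   # ~305 kg
x1 = (1 + i) * b * x0               # ~314 kg

# z0 really is the wealth maximizer on a crude grid:
assert all(wealth(z0) >= wealth(z) for z in (50, 150, 250, 350))
```

The investment choice here depends only on the technology and the interest rate; preferences (b) enter only when the maximized wealth is split between the two periods.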
[Diagram: wealth is maximized at point F on the investment frontier and consumption is chosen at point E on indifference curve uE, along the budget line with slope −1.05 through F.]

2.2.5 Asset economy with investment by firms

Finally, we extend the analysis to an economy where consumers have endowments of goods which they can trade within and between each time period in competitive markets. There are also J firms that perform the important task of moving resources to future time periods, where they do so at lower cost through specialization and large-scale production. To simplify the analysis we will assume goods are non-storable (although storage can easily be accommodated as a part of production) and there is no private investment by consumers. All investment is undertaken by firms who finance it by selling securities to consumers in the first period. In the second period they sell their output and use the proceeds to repurchase their securities from consumers. There is no government in the economy at this stage, so the only traders in the capital and goods markets are private agents.

In many of the finance applications we examine in following chapters very little insight is gained by including production. For example, when deriving prices for assets with uncertain future returns we want to know how they are affected by risk-spreading opportunities provided in the capital market for risk-averse consumers. By including production we allow the supply of risky securities to change endogenously, but that adds little to the derivations of equilibrium asset pricing equations unless it provides new risk-spreading opportunities not already available to consumers using existing securities. Production is much more important in project evaluation where welfare changes depend on actual equilibrium outcomes.
For that reason we include production in the asset economy, where the consumer problem is summarized in (2.8) when income in each period is defined as

I0 ≡ X0 − V0 + η0,  I1 ≡ X1 + aR1,   (2.11)

where η0j, for j = 1, …, J, is the consumer's profit share in each firm j. Production by private firms is the only way resources can be transferred intertemporally in this economy without storage and private investment. In its absence, savers and borrowers would be confined to trading given resources with each other within each time period. Thus, in the asset economy with production consumers can transfer resources atemporally and intertemporally, where the problem for each firm j = 1, …, J, is given by

max η0j = V0j − Z0j  subject to  ajR1 − Y1j(Z0j) ≤ 0,   (2.12)

with V0j ≡ pa aj and Z0j ≡ Σn p0(n) z0j(n) being the market values of the securities supplied and inputs purchased in the first period, respectively, while ajR1 = V0j(1 + i) is the payout to the risk-free security in the second period, which is constrained by the market value of the output produced, Y1j ≡ Σn p1(n) y1j(n). We invoke the no arbitrage condition by allowing specialist firms (called financial intermediaries) to trade the risk-free security in a frictionless competitive capital market.16 As was the case for private investment in Sections 2.1.3 and 2.2.4, the production structure for firms is general enough to accommodate multiple inputs and outputs for each firm, and we maintain the assumption that their production opportunity sets are strictly convex. Optimally chosen investment in (2.12) equates the discounted value of the marginal product of investment to its opportunity cost, with λ1j·VMPj = 1.¹⁷ The multiplier λ1j is a personal discount factor used by each firm j to evaluate the current value of future net cash flows, and this is confirmed by the first-order condition for optimally supplying the risk-free security, where

pa − λ1j R1 = pa − λ1j pa(1 + i) = 0.
Thus, in the absence of taxes or transactions costs price-taking firms use the same discount factor on future cash flows, with λ1j = 1/(1 + i). And since it is also the same discount rate used by consumers, the competitive equilibrium in this asset economy with production is Pareto optimal.

Definition 2.7
An asset economy with production by firms is described by (u, x, y(J), a), where y(J) is the set of production outputs of the J firms. A competitive equilibrium in this economy is characterized by relative commodity prices p0* and p1*, a security price pa*, and an interest rate i* such that:
i  x0h*(n) and x1h*(n), for all n, and ah* solve the consumer problem in (2.11) for all h;
ii  z0j*(n) and y1j*(n), for all n, and aj* solve the producer problem in (2.12) for all j;
iii  the goods markets clear at each t ∈ {0, 1}, with Σh x̄0h(n) = Σh x0h*(n) + Σj z0j*(n) and Σh x̄1h(n) + Σj y1j*(n) = Σh x1h*(n) for all n, and the capital market clears, with Σh ah* = Σj aj*.

A formal derivation of the Fisher separation theorem is obtained by differentiating the consumer problem in (2.5) for a marginal increase in investment by firm j when income is defined in (2.11), with aR1 = V0(1 + i), where the welfare change using the optimality condition for the risk-free security is

dν/dZ0j = {−λ0 + λ1(1 + i)} dV0j/dZ0j = 0.¹⁸

In frictionless competitive markets the investment and consumption choices of individual consumers and firms have no effect on commodity prices or the interest rate. And once investment is optimally chosen to maximize profit, it maximizes consumer wealth and utility. Formally stated, the theorem is: 'The investment decisions by individual consumers are independent of their intertemporal consumption preferences.' The crucial assumption is that of price-taking by firms and consumers, but the absence of transactions costs and taxes is also important.
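The separation result can be illustrated numerically: under price-taking, investors with different time preferences all rank the firm's investment plans by profit alone. A minimal sketch, where the log preferences, the discount weights and the endowment value are illustrative assumptions rather than anything in the text:

```python
from math import sqrt, log

i = 0.05          # risk-free interest rate
endow = 100.0     # investor's first-period endowment (illustrative)

def firm_value(z0):
    # PV of output y1 = 30*sqrt(z0) less the input cost z0 (Box 2.3 technology)
    return 30 * sqrt(z0) / (1 + i) - z0

def utility(z0, delta):
    # Investor with illustrative log preferences u = ln(x0) + delta*ln(x1)
    # chooses consumption after receiving wealth = endowment + firm profit.
    W0 = endow + firm_value(z0)
    x0 = W0 / (1 + delta)
    x1 = delta * W0 * (1 + i) / (1 + delta)
    return log(x0) + delta * log(x1)

# Profit-maximizing investment: 15/sqrt(z0) = 1 + i
z_star = (15 / (1 + i)) ** 2
```

Whatever the value of delta, utility is increasing in wealth, so every investor prefers the profit-maximizing plan z_star to any other: the investment decision is independent of intertemporal preferences.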
Trading costs drive wedges between borrowing and lending rates, and this can in some circumstances cause the theorem to fail. The important practical implication of this theorem is that all shareholders are unanimous in wanting firms to maximize profit. Indeed, once this objective is assigned to firms it invokes the conditions required for Fisher separation on the economic analysis. It is much easier for the capital market to create incentives for firm managers to act in the interests of their shareholders when their objective is to maximize profit. Mergers and takeovers are a threat to managers who do not maximize profit because their share prices are lower than they could be with better management. Figure 2.10 is the familiar analysis in Hirshleifer (1970) that is used to illustrate the Fisher separation theorem in a two-period certainty model with production by firms. It is a natural extension of the standard two-period analysis in previous sections, where the representative firm j borrows capital from consumers by selling financial securities.19 These funds are invested to maximize profit (ηj), which investors receive when the firm repurchases its securities in the second period. The representative investor h allocates the initial endowment of income to current consumption and financial securities in a number of firms (to spread risk in an uncertainty setting). When the representative firm changes its investment decision it only has pure income effects on the investor's budget constraint, where the utility of every investor is maximized when it maximizes profit and, as a consequence, its market value.

Figure 2.10 The Fisher separation theorem with firms.
Perhaps the easiest way to see how complicated things become when the Fisher separation theorem fails is to consider a situation where investment decisions by individual firms affect the market rate of interest. In particular, suppose there is a positive relationship between them. Now the investment decision has both income and substitution effects on the budget constraints of investors, and it is no longer clear that profit maximization is the unanimous choice for firms. This conflict is illustrated in Figure 2.11, where additional investment changes both the slope and intercepts of the budget constraints of consumers by affecting the market rate of interest. Clearly, any consumer with consumption bundle B is made worse off by the investment decision, while the reverse applies to a consumer with bundle A.

Figure 2.11 Investment when the Fisher separation theorem fails to hold.

In circumstances like this the objective function of the firm cannot be solved independently of the preferences of its investors, where some type of voting mechanism is needed to trade off their competing gains and losses. This conflict also applies more generally. Whenever a firm is able to affect the prices of its inputs and/or outputs it can have income and substitution effects on investors when they also consume the firm's output or supply its inputs. By way of example, consider a single-price monopolist whose investors consume its output. When it restricts output to make profit by driving up the product price, its investors are made better off by the higher profit but worse off by the higher product price. The relative costs and benefits depend on how much capital they invest in the firm relative to the value of the good consumed.
Typically we assume investors in a single-price monopolist do not consume its product, or when they do they consume such a small amount that price changes have a negligible impact on their real income, where profit maximization is their unanimous objective function for the firm's managers. A number of private and public institutions play an important role in supporting the Fisher separation theorem. Publicly listed companies are threatened by mergers and takeovers that help to align the interests of firm managers with those of their shareholders. Also, companies write contracts with managers that provide them with incentives to act in the interests of shareholders. For example, managers are frequently required to hold a portion of their wealth in the firm's shares or to hold call options written on them. They also include penalties for managers who do not perform. There are public regulations which specify the minimum information that firm managers must provide investors with, and competition policy is used to restrict the market power of firms. All these problems arise because investors do not have complete information about the actions taken on their behalf by firm managers. Traders in financial markets specialize in monitoring firms and will exploit any potential profits from replacing managers who under-perform. As specialists they perform this monitoring role at lower cost than investors would incur by monitoring firm managers themselves.

2.2.6 Asset economy with investment by firms and fiat money

There is no role for currency as a store of value in a certainty setting without trading costs when there is a risk-free security that pays interest. Due to the non-payment of interest on currency, the opportunity cost of holding it over time is the forgone interest that could have been earned by holding the security instead.
Currency was introduced to the endowment economy in Section 2.2.2 when it could be used as a medium of exchange to reduce costs of atemporal trade in each time period, but not as a store of value. In practice, currency has properties that make it a more effective medium of exchange, particularly for some trades, than a financial security, and we captured this previously by assuming they had different impacts on trading costs. While this explains why consumers use currency in each period, it does not explain why they use it as a store of value when no interest is paid. Any income transferred into the future would generate a larger consumption flow by using the risk-free security. On that basis, consumers would use currency in each period as a medium of exchange but not hold it over time unless there are other benefits from doing so. In the following analysis we overcome this problem by assuming it is too costly for consumers to choose different currency holdings in each period. Instead, they choose their currency holding in the first period and carry it over to the second period. In the asset economy with fiat money and production the consumer problem is summarized in (2.5) when income in each period is defined as

I0 ≡ X0 − τ0D0 − V0 − m0 + η0 + G0,  I1 ≡ X1 − τ1D1 + aR1 + m0 + G1.²⁰

The trading costs are defined here as a constant proportion of the market value of the net demands for goods, and are measured in units of good 1. They can differ across goods but are the same for sales and purchases, and are assumed to be decreasing functions of currency demand (m0) in the first period. When consumers exchange goods for currency the government collects resources in the first period that it can spend, where Gth is the share of the value of government spending apportioned to each consumer h at each t ∈ {0, 1}. Total government spending in each period is Gt = Σh pt gth, with gth := {gth(1), ...
, gth(N)} being the goods allocated to consumer h at each t ∈ {0, 1}. Thus, the market value of the net demands for goods in this setting in each period is Dt = pt(xt − gt − x̄t). We also allow the government to trade in the capital market, where V0g = pa a0g is the value of the risk-free security it holds, with V0g > 0 when it saves and V0g < 0 when it borrows. In the first period the constraint on government spending is G0g + V0g = m0g, while in the second period it is G1g + m0g = V0g(1 + i). Using the second-period constraint we can solve the value of the security traded by the government, as V0g = (G1g + m0g)/(1 + i), where its budget constraint in present value terms becomes

G0g + G1g/(1 + i) = i m0g/(1 + i).

In the second period the government collects seigniorage of i m0g on the currency issued in the first period, and it is returned to consumers as government spending. In other words, the outputs the government produces are provided at no direct cost to consumers. They pay indirectly through the implicit tax on their currency holdings. We simplify the analysis by assuming firms do not use currency, where this leaves the problem for each firm j in (2.12) unchanged, and we make currency the numeraire good so that all prices are measured in money terms (which for convenience is referred to as dollars).

Definition 2.8
An asset economy with fiat money and production by firms is described by (u, x, y(J), a, m). A competitive equilibrium in this economy is characterized by relative commodity prices p0* and p1*, a security price pa*, and an interest rate i* such that:
i  x0h*(n) and x1h*(n), for all n, and ah* solve the consumer problem in (2.11) for all h;
ii  z0j*(n) and y1j*(n), for all n, and aj* solve the producer problem in (2.12) for all j;
iii  the goods markets clear at each t ∈ {0, 1}, with Σh x̄0h(n) = Σh x0h*(n) + Σj z0j*(n) and Σh x̄1h(n) + Σj y1j*(n) = Σh x1h*(n) for all n, and the capital market clears, with Σh ah* = Σj aj*.
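The government's two per-period constraints above collapse into a single present-value constraint, G0g + G1g/(1 + i) = i·m0g/(1 + i). A quick numerical check, with illustrative values for currency issued and first-period spending:

```python
i = 0.05        # risk-free interest rate
m0 = 100.0      # currency issued in the first period (illustrative)
G0 = 2.0        # first-period government spending (illustrative)

# First-period constraint: G0 + V0 = m0  ->  government saving in the security
V0 = m0 - G0

# Second-period constraint: G1 + m0 = V0*(1 + i)  ->  feasible spending at t=1
G1 = V0 * (1 + i) - m0

# Present-value budget constraint: G0 + G1/(1+i) = i*m0/(1+i), i.e. spending
# is financed entirely by seigniorage on the currency issued
lhs = G0 + G1 / (1 + i)
rhs = i * m0 / (1 + i)
```

The identity holds for any split of spending between the two periods, since both sides are linear in the constraints used to derive them.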
A number of interesting issues arise when a financial security and currency can both be used by consumers to reduce trading costs. We assume the trading costs in both periods are reduced by using currency, while the risk-free security only reduces trading costs in the second period. The optimality condition for the security trade by each consumer is obtained from (2.13) as

−λ0 + λ1[(1 + i) − D1 ∂τ1/∂V0] = 0,

with ∂τ0(n)/∂V0 = 0 and ∂τ1(n)/∂V0 < 0, for all n, while the optimal currency demand solves

−λ0[1 + D0 ∂τ0/∂m0] + λ1[1 − D1 ∂τ1/∂m0] = 0,

with ∂τ0(n)/∂m0 < 0 and ∂τ1(n)/∂m0 < 0 for all n.21 Notice that the condition for the optimal security trade above can be rearranged to provide a personal discount rate that will differ across consumers when they have different marginal changes in trading costs, with

λ1/λ0 = 1/[(1 + i) − D1(∂τ1/∂V0)].   (2.15)

An obvious implication of this is that consumers will not value capital assets in the same way in these circumstances. The widely used equilibrium asset pricing models, which we examine in Chapter 4, assume there are no trading costs. But they have to be included to explain the demand for currency and the separate but related role for using financial securities rather than forward contracts. Indeed, financial securities are a less costly way to summarize the property right transfers when consumers trade intertemporally. A financial security that transfers income between the two periods is preferable to forward contracts because only one asset is exchanged (in a certainty setting), whereas a number of forward contracts are exchanged for goods and they must specify the quality, time and quantity of each good to be traded in the future. Thus, there are transactions costs benefits from using the risk-free asset which are likely to drive wedges between the discount rates consumers use to evaluate future consumption flows.
As noted earlier in Section 2.2.2, however, the equilibrium allocation will still be Pareto efficient when the trading costs are minimum necessary costs of trade. It is possible to confirm the proposition made earlier that, in the absence of trading costs, consumers will not hold currency as a store of value by solving the discount rate in (2.15) using the first-order condition for the security trade, with ∂τ1/∂V0 = 0, as λ1/λ0 = 1/(1 + i), where the first-order condition for currency becomes λ1/λ0 < 1.22 Since no interest is paid on currency, consumers can always increase their utility by allocating resources to the risk-free security rather than currency. If we follow conventional analysis and assume trading costs are unaffected by the financial security, the discount factor in (2.15) becomes λ1/λ0 = 1/(1 + i), where the optimal currency demand solves

−D0 ∂τ0/∂m0 − [D1/(1 + i)] ∂τ1/∂m0 = i/(1 + i).²³

This expression has a familiar interpretation because the left-hand side measures the present value of the marginal benefits from using currency as a medium of exchange, while the right-hand side is the present value of the opportunity cost, which is the forgone interest on the risk-free security. Once interest is paid on currency it becomes a perfect substitute for the security in a certainty setting. In much of the analysis in later chapters we assume there are no trading costs, which necessarily eliminates money from the asset economy. We do this to focus on the effects of taxes and firm financial policies on the capital market. The analysis in this section provides us with a way to think about how results in following sections may change when trading costs are included.

2.3 Asset prices and inflation

Asset prices change when inflation affects their expected real returns. Governments determine nominal price inflation in their economies by controlling the rate of growth in the nominal money supply.
Money prices do not change over time if the money supply grows at the same rate as money demand.24 However, it is not a trivial task for governments to match these growth rates, particularly when there is uncertainty about future outcomes. Money demand can be quite difficult to determine, especially in periods when there are large real shocks in economic activity. For example, the effects of large increases in the price of oil will depend on whether they are expected to be persistent or transitory, and are difficult to predict because people adjust to them over time. Moreover, governments have direct control over fiat money (notes and coins), but not broadly measured money, which includes cheque and other interest-bearing deposit accounts issued by both public and private financial intermediaries. Most governments control the broad money base by adjusting the quantity of currency on issue and by regulating the liquidity ratios of the assets held by financial institutions who create non-fiat money. They also intervene in bond markets to change interest rates when, in the absence of capital controls, domestic and foreign bonds are not perfect substitutes. At the present time most developed countries have annual rates of general price inflation around 2−3 per cent. The rates of inflation are much higher in some developing countries, as they were in many developed countries during the 1970s and 1980s. There are costs and benefits of inflation. Some costs arise from the interaction between inflation and the tax system, while others arise from the income redistribution that takes place when inflation causes relative prices to change due to rigidities in nominal variables. For example, consumers with fixed money incomes lose from higher goods prices, while governments benefit from collecting revenue as seigniorage from the non-payment of interest on currency. This section examines the way general price inflation affects current asset prices. 
To motivate the following analysis, consider a risk-free security that pays a nominal net cash flow in the second period of R1 when the (expected) rate of inflation is π. In an economy with frictionless competitive markets its current price is

pa = R1/(1 + i),   (2.17)

where i is the nominal risk-free interest rate. Clearly, this asset price will not change with higher inflation when the nominal interest rate rises sufficiently to hold the real interest rate (r) constant. In other words, if R1 and 1 + i both rise at the expected rate of inflation there is no real change in the current value of the asset. If we assume, for the moment at least, that the net cash flows rise at the inflation rate, the change in the asset price will be determined by the way the nominal interest rate changes. This is confirmed by using the identity that defines the relationship between the nominal and real interest rates:25

1 + i ≡ (1 + r)(1 + π).   (2.18)

If the inflation rate is expected to rise we have the following possibilities:
i  the nominal interest rate can rise with an unchanged real interest rate, where this leaves the asset price in (2.17) unchanged;
ii  the nominal interest rate can stay constant and the real interest rate falls, where the asset price in (2.17) rises due to the lower opportunity cost of time;
iii  the nominal and real interest rates can both change when the nominal rate rises by less than the inflation rate, where the lower real rate causes the asset price in (2.17) to rise, but by less than it would have with an unchanged nominal interest rate.

2.3.1 The Fisher effect

Ultimately the relationship between the nominal interest rate and the expected rate of inflation will depend on the way the economy adjusts to expected inflation. Consider a partial equilibrium analysis of the effects of higher expected inflation in the capital market, illustrated in Figure 2.12.
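Before turning to Figure 2.12, possibilities (i) and (ii) above can be checked with illustrative numbers (R1 = 105, r = 5 per cent, expected inflation rising from zero to 10 per cent; the figures are assumptions for the sketch, not values from the text):

```python
R1 = 105.0      # nominal payoff with no inflation (illustrative)
r = 0.05        # real interest rate
pi = 0.10       # newly expected inflation

pa_base = R1 / (1 + r)             # price with no inflation: 100

# (i) Fisher effect holds: nominal payoff and nominal rate both rise with
# inflation, leaving the asset price unchanged
R1_infl = R1 * (1 + pi)
i_fisher = (1 + r) * (1 + pi) - 1  # nominal rate rises to 15.5%
pa_i = R1_infl / (1 + i_fisher)    # still 100

# (ii) nominal rate unchanged, so the real rate falls: asset price rises
pa_ii = R1_infl / (1 + r)          # 110
```

Possibility (iii) is intermediate: a nominal rate between r and i_fisher gives a price between pa_i and pa_ii.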
In a two-period certainty setting there is a single risk-free interest rate that is common to all financial securities in a frictionless competitive capital market. Aggregate saving (S) rises with the real interest rate because it is the opportunity cost of consuming now rather than later, while aggregate investment (Z) demand falls with the real interest rate because it is the cost of capital.

Figure 2.12 The Fisher effect.

In the absence of inflation, demand and supply are equated by the market-clearing nominal interest rate i0, which is equal to the real interest rate r0. Now suppose all borrowers (who sell financial securities) and savers (who purchase them) expect general price inflation over the next period at rate π1. If, by way of illustration, traders in the capital market do not revise the nominal interest rate (so that i0 stays constant), then from (2.18) the real interest rate declines to r1. This creates an excess demand for capital as borrowing rises and saving falls, thereby exerting upward pressure on the nominal interest rate, which continues to rise until the real interest rate returns to r0, where investment demand is once again equal to saving. On that basis, the nominal interest rate rises to keep the real rate constant and preserve capital market equilibrium. This important result is referred to as the Fisher effect, where, from (2.18), we have

di/dπ |dr=0 = 1 + r.   (2.19)

It holds in a classical finance model when the following conditions prevail:
i  All nominal variables in the economy (including money wages, prices and the nominal interest rate) adjust freely in competitive markets.
ii  Agents have homogeneous expectations about the rate of inflation.
iii  There are no wealth effects in the money market.
iv  There are no taxes.
In this setting correctly anticipated changes in the nominal money supply will have no real effects as all nominal variables adjust to preserve the real economy. It holds in the asset economy with fiat money and production in Section 2.2.6 when the government pays interest on currency. Consider the consumer problem in (2.5) where the budget constraints in these circumstances are defined as

I0 ≡ X0 − τ0D0 − V0 − m0 + η0 + G0,  I1 ≡ X1 − τ1D1 + aR1 + m0(1 + i) + G1.

When trading costs are unaffected by the financial security, we know from (2.15) that the constraint multiplier on consumption in the second period becomes λ1 = λ0/(1 + i), which allows us to rewrite the consumer problem as

max { ν(W0) | W0 = X0 + X1/(1 + i) − τ0D0 − τ1D1/(1 + i) + η0 + G0 + G1/(1 + i) }.²⁶

Whenever the government increases the supply of currency in the second period all nominal prices rise by the same proportion as the money supply, and nothing real happens because the Fisher effect in (2.19) holds.27 Thus, the present value of the second-period endowments X1/(1 + i), trading costs τ1D1/(1 + i) and government spending G1/(1 + i) are unaffected by the change in inflation. Neither is the profit share in each firm j:

η0j = p1y1j/(1 + i) − p0z0j,

where p1 and 1 + i both rise at the same rate. Since nothing real happens to the consumption opportunity sets of consumers, they choose the same bundle of goods in each period and get the same utility. It means anticipated changes in the money supply have no real effects in these circumstances. However, there is empirical evidence from some countries that nominal interest rates will rise with the rate of inflation over a long period of time.
And this happens in economies where nominal interest payments are subject to distorting taxes so that the tax-adjusted Fisher effect needs to be even higher.28 However, it is unlikely to hold in the short term or in economies where the conditions above do not apply. The most useful aspect of this analysis is that it provides a way of understanding what factors determine the real effects of expected inflation outside the classical finance model. We now consider what happens when the first three conditions in the classical model outlined above are relaxed. The role of taxes will be examined in more detail in later chapters. If there are rigidities in more than one nominal variable in the economy then inflation can have real effects that will cause the Fisher effect to fail. In a Keynesian macroeconomic model with rigid money wages, monetary policy has real effects by altering the real wage and employment. Suppose a minimum wage leads to involuntary unemployment in the economy where an increase in the rate of growth in the money supply can raise aggregate output by pushing up the nominal prices of goods and services and reducing real wages.29 Clearly, this stimulus in activity will be reversed when minimum wages are later adjusted to preserve them in real terms.30 Any resulting changes in capital asset prices are determined by equilibrium adjustments to the relative prices of goods and services and the real interest rate, which can be estimated by using a computable general equilibrium model of the economy. When agents form different expectations about the rate of inflation they expect different real interest rates, and this impacts on the capital market. By way of illustration, suppose borrowers expect a higher rate of inflation than do savers, with πB > πS. 
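Given a common nominal rate, these different beliefs imply different perceived real rates through the identity 1 + i = (1 + r)(1 + π). An illustration with hypothetical numbers:

```python
i = 0.10       # common nominal interest rate (hypothetical)
pi_B = 0.08    # inflation expected by borrowers (hypothetical)
pi_S = 0.04    # inflation expected by savers (hypothetical)

# Real rates implied by 1 + i = (1 + r)(1 + pi)
r_B = (1 + i) / (1 + pi_B) - 1     # ~1.9 per cent
r_S = (1 + i) / (1 + pi_S) - 1     # ~5.8 per cent

# With pi_B > pi_S, borrowers perceive a lower real cost of funds than the
# real return savers perceive, so borrowers will bid the nominal rate up by
# more than savers require to preserve their real return.
```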
Since both face a common nominal interest rate when negotiating security trades, they must have different real interest rates, which are solved using (2.18) as

1 + i = (1 + rB)(1 + πB) = (1 + rS)(1 + πS),

with rB < rS. This difference means borrowers are prepared to raise the nominal interest rate more than savers require to preserve their real return. The equilibrium nominal interest rate simultaneously raises the real return to savers and lowers the real cost of capital for investors, where the implicit interest rate subsidy is illustrated in Figure 2.13. The lower real interest rate for borrowers causes capital asset prices to rise in the first period. Clearly, the reverse applies when savers expect a higher rate of inflation because they would want the nominal interest rate to rise more to preserve their real return than borrowers could afford to pay. This would act like an implicit tax on the capital market by driving down the equilibrium level of investment and saving.31

Figure 2.13 Different inflationary expectations.

2.3.2 Wealth effects in the money market

Changes in the rate of inflation have wealth effects when no interest is paid on currency. The private cost of holding currency is the nominal interest rate forgone on interest-bearing assets (with the same risk as currency). There are two components to this opportunity cost: one is the real interest return on bonds, while the other is the loss of purchasing power of currency due to inflation. If, as is normally the case, the social marginal cost of supplying currency is less than the nominal interest rate, then the non-payment of interest will impose a welfare loss on currency holders. In effect, they face a tax equal to the difference between the nominal interest rate and the marginal production cost, which imposes a welfare loss on them.
And this loss increases when higher expected inflation pushes up the nominal interest rate and reduces currency demand even further. Thus, changes in expected inflation have real effects on consumers that undermine the Fisher effect. This welfare loss from the non-payment of interest on currency is illustrated in Figure 2.14, where the aggregate demand (md) for and supply (ms) of real money balances are defined here as the nominal value of the notes and coins held by consumers divided by the consumer price index (CPI). For illustrative purposes we assume the marginal social cost of supplying currency (mcs) is zero. In practice, however, it is positive but much smaller than the nominal interest rate. Initially the nominal interest rate i0 equates money demand and supply, where the CPI is expected to rise at the same rate as the nominal money supply (broadly defined) in the next period of time. Real money demand is determined by the marginal benefits consumers get from using currency, which for the most part is determined by the amount it reduces their trading costs as a medium of exchange, and is therefore an increasing function of real income (y). Consumers maximize utility by equating their marginal benefits from using currency to the nominal interest rate, where the welfare loss is the cross-lined triangular region in Figure 2.14; it is a dollar measure of the forgone benefits due to the non-payment of interest.

Figure 2.14 Welfare losses in the money market.

Currency holders are left with consumer surplus of i0ca, while the vertical-lined rectangle (i0 a m0 0) is revenue collected by the government as seigniorage; it is inflation tax revenue in i0 a b r0, plus revenue from not paying real interest on resources obtained with currency in r0 b m0 0. A simple example will illustrate how revenue is transferred to the government budget as seigniorage.
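The corn example that follows (Table 2.1, with i0 = 0.155 and π = 0.10) can be reproduced in a few lines:

```python
i = 0.155      # nominal interest rate
pi = 0.10      # expected inflation
r = (1 + i) / (1 + pi) - 1         # real rate from (2.18): 5 per cent

price1 = 1.10                      # money price of corn at time 1 ($/kg)
harvest = 100 * (1 + r)            # 100 kg planted grows to 105 kg

gross = harvest * price1           # $115.50 gross revenue at time 1
redeemed = 100.0                   # $100 of corn sold to redeem the bill
seigniorage = gross - redeemed     # $15.50
real_seigniorage = seigniorage / price1    # ~14.09 kg of corn

inflation_tax = pi * 100           # $10 inflation tax on the $100 bill
real_return = seigniorage - inflation_tax  # $5.50 real return on capital
```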
Suppose the nominal interest rate is 15.5 per cent (i0 = 0.155) when the expected inflation rate is 10 per cent (π = 0.10). From (2.18) we find the real interest rate is 5 per cent in the circumstances. Imagine the Central Bank prints a $100 bill that the government uses (at time 0) to purchase corn from private traders at a money price of $1 per kilo. It plants this corn at time 0 and uses the harvest at time 1 to redeem its liability to currency holders by selling them corn with a value of $100 when the money price of corn is expected to be $1.10 per kilo. The revenue transfers in the second period are summarized in Table 2.1.

Table 2.1 Revenue collected by the government as seigniorage

Gross revenue harvested by the government when the price of corn is $1.10 per kg:  $115.50 (105 kg)
Less corn sold by the government to redeem its $100 bill at time 1:                $100.00 (90.91 kg)
Seigniorage:                                                                        $15.50 (14.09 kg)

We assume there is a (constant) 5 per cent real return from planting corn, so that 100 kg grows into 105 kg over the period. In the absence of inflation the government would have to sell 100 kg of corn to redeem its $100 bill and would collect $5 of seigniorage as the real return on investment (5 kg). But with 10 per cent price inflation it only has to sell 90.91 kg of corn at time 1 to redeem its $100 bill, collecting $15.50 of seigniorage with a real value of 14.1 kg. This includes $10 of inflation tax revenue as well as the $5.50 real return on capital.

Box 2.6 Seigniorage in selected countries

Based on data reported by the International Monetary Fund (IMF) we obtain crude estimates of (gross) seigniorage as a proportion of GDP for the year ending December 2005 in the following countries. Notice how countries with relatively high nominal interest rates due to higher rates of inflation, such as Brazil, the Philippines and Zimbabwe, raise more seigniorage as a percentage of GDP.
In contrast, Japan raised (almost) no seigniorage in the calendar year 2005 because the nominal interest rate was zero. This is consistent with the Friedman rule that makes the optimal rate of inflation (ignoring distortions in other markets and the marginal cost of printing currency) negative and equal to the real interest. By driving the nominal interest rate to zero it eliminates the implicit tax on currency holders, and the government collects no revenue as seigniorage.

Country  Currency/GDP (%)a  Interest rate (%)b  Seigniorage/GDP (%)
Australia  4.76  5.5  0.26
Brazil  11.77  19.12  2.25
Canada  3.37  2.66  0.09
China – Mainland  34.46  3.33  1.15
China – Hong Kong  20.55  4.25  0.87
France  6.44  2.15  0.14
Germany  7.10  2.09  0.15
India  16.75  6.00  1.00
Indonesia  9.88  6.78  0.67
Japan  23.21  0.001  0.00
Malaysia  11.46  2.72  0.31
New Zealand  3.36  6.76  0.23
Philippines  10.64  7.314  0.78
Russian Federation  13.70  2.68  0.37
Singapore  12.04  2.28  0.27
Switzerland  10.93  0.63  0.07
Thailand  11.49  2.62  0.30
United Kingdom  3.49  4.70  0.16
United States  6.26  3.21  0.20
Zimbabwe  9.97  540.00  53.81

Source: International Financial Statistics On-line database, International Monetary Fund, for the year ending December 2005.
a Currency is measured using reserve money reported in data series (14) while GDP is measured using series (99b).
b The interest rate is the money market rate reported in series (60b) except in France and Zimbabwe, where we use a bank rate which is lower than the money market rate in other countries.

The welfare loss from higher expected inflation can be formally derived for the asset economy with currency and production in Section 2.2.5. We do this by aggregating consumer preferences using the individualistic social welfare function (W) of Bergson (1938) and Samuelson (1954), with W = W(v),32 where v = {v^1, ..., v^H} is the set of indirect utility functions for consumers.
By totally differentiating this welfare function, we have

dW = Σ_h β^h [dI_0^h + dI_1^h/(1 + i)],

with β^h = (∂W/∂v^h)λ_0^h being the distributional weight for each consumer h, which measures the change in social welfare from marginally raising their wealth. In a conventional Harberger (1971) analysis consumers are assigned the same weights, with β^h = β for all h, where a dollar measure of the change in social welfare is

dW/β = dI_0 + dI_1/(1 + i),

with dI_0 = Σ_h dI_0^h and dI_1 = Σ_h dI_1^h. The changes in aggregate income are obtained by summing consumer budget constraints in each time period, applying the first-order conditions for consumers and firms, and using the market-clearing conditions for the goods, currency and capital markets, where the dollar change in social welfare from marginally raising the rate of growth in the money supply becomes

(1/β) dW/dm_0^g = −D_0 (∂τ_0/∂m_0)(∂m_0/∂m_0^g) − [D_1/(1 + i)] (∂τ_1/∂m_0)(∂m_0/∂m_0^g) = [i/(1 + i)] ∂m_0/∂m_0^g < 0,33

with D_t = p_t(x_t − g_t − x̄_t) being the market value of the net demand for goods at each time t ∈ {0, 1}. There is good intuition for this welfare change. A marginal increase in the money supply raises the nominal interest rate and reduces the private demand for currency. This exacerbates the welfare loss from the non-payment of interest on currency by the present value of the tax burden i/(1 + i) multiplied by the change in the demand for currency ∂m_0/∂m_0^g.34 We are now in a position to illustrate the welfare effect from changes in the expected rate of inflation. Suppose the government announces it will increase the rate of growth in the money supply (relative to money demand) over the next year. When the private sector believes the announcement, there are economic effects in both time periods:

i At time 0. Once traders expect a higher inflation rate the nominal interest rate rises to maintain equilibrium in the capital market.
Currency holders respond to the higher nominal interest rate by reducing their demand for currency, where the excess supply of real money balances is eliminated by an immediate jump in the general price level. This exacerbates the welfare loss from the non-payment of interest on currency, which is spread across the real economy through resulting changes in private activity. This loss in wealth is illustrated as the cross-lined rectangle in Figure 2.15. It is larger for more interest-elastic money demand and a higher initial nominal interest rate. In any case, it will cause the Fisher effect to fail when the real interest changes.

ii At time 1. When the anticipated increase in the nominal money supply takes place it raises the general price level at the same rate. Thus, over the two periods, prices rise proportionately more than the nominal money supply due to the price jump in the first period.

Bailey (1962, pp. 49−53) formalized this wealth effect in a macroeconomic model of the economy. In its purest form, the classical model breaks down when no interest is paid on currency. A large literature looks at the feasibility of allowing private currencies to trade. Opponents raise concerns about the potential default problems that could cause bank runs and lead to financial crises, while those in favour argue there are incentives for private providers to coordinate them and to maintain the integrity of their currencies as a way to attract traders to use them.

Figure 2.15 Welfare losses from higher expected inflation.

In a competitive environment there is pressure to pay interest on currency. And this can be done by dating notes and coins when they are issued and agreeing to pay holders interest at specified time periods during the year. Between these times traders negotiate discounts on trades made with currency as compensation for accrued interest.
Indeed, this practice could be implemented by embedding computer chips in notes and (possibly) coins to record accrued interest and compute any discounts on trade between interest payments. Many supporters of private currency argue it removes the incentive for governments to use inflation as a hidden tax on consumers to finance their expenditure. But in recent times most governments have maintained low rates of inflation to minimize its adverse real effects on the economy, and this has mitigated, at least partially, the attraction of private currency.35 In summary, the way expected inflation affects capital asset prices depends on the real effects it has on the private economy. If the Fisher effect holds then changes in the expected rate of inflation will not affect current asset prices, and it does so in a classical macroeconomic model where financial variables are a veil over the real economy. Any anticipated changes in the money supply have no real effects in this setting. Even though it does not provide a realistic interpretation of what happens in practice, especially in the short term, it does establish the conditions for the Fisher effect to hold. Then, by relaxing them, we can determine how changes in expected inflation might impact on the real economy.

2.4 Valuing financial assets

Most financial securities have cash flows in a number of future time periods. To compute their values we need to know their size and timing, and then discount them for the opportunity cost (of time and risk). While the analysis in previous sections is undertaken in a certainty setting with two time periods, we will refer to expected future values here in preparation for the inclusion of uncertainty in the next chapter. The analysis is undertaken by extending the asset economy in Definition 2.7 to an infinite number of time periods and requiring the market-clearing conditions to hold in each of them.
The consumer problem in this infinite time horizon economy can be summarized as:

max v(I) subject to I_t = X_t − V_t + η_t for all t,37

with I = {I_0, I_1, ..., I_∞}. When consumers can trade in frictionless competitive markets they will use the same discount factors λ_t/λ_0 = 1/(1 + i_t)^t to evaluate future cash flows, where i_t is the interest rate on a long-term security that matures at time t. With standard preferences (to rule out non-satiation) we can write their budget constraints in (2.23) as

W_0 = I_0 + I_1/(1 + i_1) + I_2/(1 + i_2)^2 + ... ,

where the current price of any security k becomes

P_ak = Σ_{t=0}^∞ R_kt/(1 + i_t)^t,   (2.24)

with R_kt being the (expected) payout to the security at time t. The long-term interest rates used in the discount factors are geometric means of the (expected) short-term interest rates in each period. The relationship between them is examined in the next section.38

2.4.1 Term structure of interest rates

Consider security k when it has a single expected payout of R_k2 at the end of period 2, where its market value is

P_ak = R_k2/[(1 + i_1)(1 + _1i_2)],

with i_1 being the short-term interest rate for the first period, and _1i_2 the (expected) short-term interest rate for the second period. The term structure of interest rates describes the relationship between these spot rates and the long-term interest rate over the two-year period (i_2). Ideally, it would be the term structure for another security with the same risk as the payouts on security k, but since it is unlikely that such a security will trade with enough different maturity dates to extract a full set of spot rates (especially when there are more than two time periods), we use the term structure of interest rates for government bonds and adjust the spot rates in each period for the risk in asset k.
If the expectations hypothesis holds, we can use the long-term interest rate (i_2) in place of the two spot rates, where the value of security k becomes:

P_ak = R_k2/(1 + i_2)^2.

There are two ways to carry a dollar forward over the two periods − one is to purchase a long-term security with a single payout at maturity, while the other is to purchase a short-term security in the first period and then to roll the payouts over into another short-term security in the second period. These alternatives generate the cash flows of (1 + i_2)^2 in the case of one long-term security and (1 + i_1)(1 + _1i_2) in the case of two short securities. When they are perfect substitutes (with the same risk), arbitrage in frictionless competitive markets equates their payouts:

(1 + i_1)(1 + _1i_2) = (1 + i_2)^2.

This is the expectations hypothesis where expected returns on combinations of short-term securities are the same as the returns on the long-term securities over the same time period. The long-term interest rate is the geometric mean of the short-term interest rates,

i_2 = √[(1 + i_1)(1 + _1i_2)] − 1,

and it differs from the arithmetic mean of the short rates,

i_2^A = (i_1 + _1i_2)/2,

due to the compounding effect of interest paid on interest in the second period.

Box 2.7 Differences in geometric and arithmetic means: numerical examples

The difference between the geometric mean and its arithmetic approximation for a two-period bond is illustrated by the following numerical examples. With consecutive short-term interest rates of 6 per cent and 5 per cent, respectively, the geometric mean is approximately 0.00118 percentage points lower than the arithmetic mean. It is 0.00122 percentage points lower for the lower consecutive short rates of 3 per cent and 2 per cent, respectively.

i_1  _1i_2  i_2 (geometric)  i_2^A (arithmetic)
0.06  0.05  0.0549882  0.055
0.03  0.02  0.0249878  0.025

The yield curve reported in the financial press summarizes the term structure of interest rates for government bonds.
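The Box 2.7 comparison of the geometric mean and its arithmetic approximation can be reproduced directly; the function names are my own, and the rates are those used in the box.

```python
# Reproducing the Box 2.7 comparison: the two-period long rate as the
# geometric mean of consecutive short rates versus the arithmetic mean.
def geometric_long_rate(i1, i2):
    return ((1 + i1) * (1 + i2)) ** 0.5 - 1

def arithmetic_long_rate(i1, i2):
    return (i1 + i2) / 2

for i1, i2 in [(0.06, 0.05), (0.03, 0.02)]:
    print(f"{geometric_long_rate(i1, i2):.7f} vs {arithmetic_long_rate(i1, i2):.4f}")
```

The geometric mean is always the smaller of the two when the short rates differ, reflecting the compounding of interest on interest.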
Since long-term bond yields are approximately equal to the average of the expected spot rates in each period, the shape of the yield curve tells us how short-term rates are expected to change over time. This is illustrated by the three different yield curves in Figure 2.16.

Figure 2.16 Yield curves for long-term government bonds.

Spot rates are expected to decline along yield curve (a) and rise along yield curve (c), while they are constant for the flat term structure along yield curve (b). Exogenous shocks to the economy, such as changes in monetary policy and oil price shocks, shift the yield curve, which provides us with information about changes in the expectations of market traders. Since long-term bond yields are known at the time the bonds trade, they contain forward spot rates that solve

(1 + i_2)^2 = (1 + i_1)(1 + _1f_2),

with _1f_2 being the forward spot rate in the second year. Since we observe i_1 and i_2 we can compute _1f_2. Then, by taking the average annual yield for a three-year bond from the yield curve we can compute the forward spot rate in the third year, and so on until we obtain a complete set of forward rates. When the expectations hypothesis holds, these forward spot rates are equal to the expected spot rates in each period:

(1 + i_1)(1 + _1i_2) = (1 + i_2)^2 = (1 + i_1)(1 + _1f_2).

This justifies the use of long-term interest rates in the present value calculation in (2.24). When the net cash flows on security k contain more market risk than the net cash flows to government bonds, a risk premium is included in the discount factors using an asset pricing model similar to those considered later in Chapter 4.
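The bootstrapping of forward rates described above can be sketched as follows; the three bond yields are hypothetical, not taken from the text.

```python
# Hypothetical yield curve: yields[t-1] is the average annual yield on a
# t-year bond; forwards satisfy (1 + i_t)^t = (1 + i_{t-1})^(t-1) (1 + f_t).
def forward_rates(yields):
    forwards = [yields[0]]  # the first forward rate is the one-year spot rate
    for t in range(2, len(yields) + 1):
        f = (1 + yields[t - 1]) ** t / (1 + yields[t - 2]) ** (t - 1) - 1
        forwards.append(f)
    return forwards

print(forward_rates([0.05, 0.055, 0.06]))  # rising curve: forwards lie above the yields
```

Each forward rate is whatever one-period rate makes rolling over the shorter bond match the payout on the longer bond.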
If the expectations hypothesis fails to hold the current price of any security k is computed using the expected spot rates:

p_ak = Σ_{t=1}^T R_kt / Π_{s=1}^t (1 + _{s−1}i_s).   (2.25)

Empirical studies test the expectations hypothesis by regressing expected spot rates on the forward rates embedded in long-term bond yields:

_1i_2 = α + β _1f_2.

Typically, past spot rates are used as measures of expected future spot rates on the assumption that investors’ expectations are on average correct, where the hypothesis holds when α = 0 and β = 1. Tease (1988) finds support for the expectations hypothesis using Australian data, while there is little support for it in overseas data. Some argue the failure of the hypothesis is evidence of a liquidity (risk) premium in long-term bond rates as they are costly to trade in periods prior to maturity. When it does fail to hold we can use empirical estimates of the expected short rates from these studies as the discount rates in the pricing equation in (2.25).

2.4.2 Fundamental equation of yield

In a frictionless competitive capital market capital assets must be expected to pay the same economic rate of return as all other assets in the same risk class in every period of their lives. This important relationship underpins the present value calculations used to compute asset prices. Economic income in any period of time measures the change in wealth. It is a measure of the potential consumption flow the initial capital will generate for the asset holder, and it can be a cash or direct consumption flow plus any capital gain. We derive the equation of yield by computing the expected price of capital assets in each future time period. Consider an asset which pays a stream of expected net cash flows at the end of each year up to year T.
Its current price (at t = 0) can be decomposed (with subscripts a and k omitted) as

p_0 = (R_1 + p_1)/(1 + i_1),

where R_1 + p_1 represents the expected market value of the potential consumption flow the security would fund in the second period. The current price sells at a discount on this payout to compensate the asset holder for time (and risk). In a similar fashion we can write the expected price of the asset at the end of each subsequent period as

p_1 = (R_2 + p_2)/(1 + _1i_2),  p_2 = (R_3 + p_3)/(1 + _2i_3),  p_3 = (R_4 + p_4)/(1 + _3i_4),  ...,  p_{T−1} = (R_T + p_T)/(1 + _{T−1}i_T).

In the absence of any further net cash flows beyond time T, the asset is expected to have no value at that time, with p_T = 0. By substituting these prices back down the chain we obtain the asset price in (2.25), and this becomes the pricing equation in (2.24) when the expectations hypothesis holds. Thus, between all adjacent time periods {t − 1, t} we must have

p_{t−1} = (R_t + p_t)/(1 + _{t−1}i_t),

which can be rearranged as the equation of yield,

_{t−1}i_t = (R_t + Δp_t)/p_{t−1},   (2.26)

with Δp_t = p_t − p_{t−1} being the expected capital gain when the asset price rises. It is also referred to as the holding period yield, and is a very useful relationship for understanding how asset prices change over time, where some rise, others fall and others stay constant. In every time period the expected economic income (R_t + Δp_t) per dollar of capital invested in the asset (p_{t−1}) is equal to the expected rate of return on all other assets in the same risk class. Whenever _{t−1}i_t > (R_t + Δp_t)/p_{t−1}, investors sell security k and use the funds to purchase assets in the same risk class until p_{t−1} declines. Conversely, its price rises when investors expect _{t−1}i_t < (R_t + Δp_t)/p_{t−1} because it pays a higher expected economic rate of return than other assets in the same risk class.
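The backward substitution down the price chain can be checked numerically. The cash flows and the flat 10 per cent rate below are hypothetical; with a flat rate the recursion reproduces the direct present-value sum exactly.

```python
# Hypothetical cash flows and a flat 10% rate: substituting prices back
# down the chain from p_T = 0 matches discounting the cash flows directly.
def price_by_recursion(cash_flows, i):
    p = 0.0  # p_T = 0: no value after the final payout
    for R in reversed(cash_flows):
        p = (R + p) / (1 + i)
    return p

def price_by_discounting(cash_flows, i):
    return sum(R / (1 + i) ** t for t, R in enumerate(cash_flows, start=1))

cfs = [100, 100, 1100]
print(price_by_recursion(cfs, 0.10))  # equals price_by_discounting(cfs, 0.10)
```

For these cash flows the price is $1000, since the asset pays exactly the 10 per cent required return in every period.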
In a frictionless competitive equilibrium we must have _{t−1}i_t = (R_t + Δp_t)/p_{t−1}, which is the no arbitrage condition where all profits are eliminated from asset prices. To see how this relationship is useful in providing insight into the way asset prices change, consider four different payouts over the period from t−1 to t:

i R_t = 0. In periods when there are no net cash/consumption flows the asset price must rise at the risk-adjusted rate of return for all assets in the same risk class, with _{t−1}i_t = Δp_t/p_{t−1}. For example, shares that pay no dividends must be expected to pay capital gains at this rate to stop shareholders selling them. Also, the value of wine stored in an unused space must be expected to rise at the expected return on all other assets in the same risk class. The relationship determines when trees planted for commercial timber should be cut down or when to extract oil or other minerals from the ground. While the trees continue to grow at a rate that generates additional timber in the future with a market value greater than the opportunity cost of funds plus any opportunity cost from using the land they are growing on, they are left standing. Once the growth in the value of the extra timber falls below this hurdle the trees are cut down. The same rule determines the optimal time to extract oil and other minerals.39 There is a private incentive to delay current consumption when doing so raises future consumption by more than the opportunity cost of time and risk for assets in the same risk class.

ii Δp_t = 0. Assets must be expected to have net cash or direct consumption flows that yield an expected economic return sufficient to cover the opportunity cost of capital, with _{t−1}i_t = R_t/p_{t−1}. The most obvious example is perhaps a bank deposit which pays market interest in each time period.

iii R_t < 0. There are many investments that require cash outlays in the early years followed by expected revenues in future years.
Mining companies search for oil and other minerals for a number of years before discovering anything, while information technology firms allocate resources to research and development for long periods of time to develop computing software and other products. Sometimes they have negative net cash flows in these periods, but their share prices must be expected to rise at a greater rate than the return on all other assets in the same risk class, with _{t−1}i_t < Δp_t/p_{t−1}, to provide shareholders with the necessary economic return to hold their capital in these firms.

iv Δp_t < 0. Cars and white goods are common examples of depreciating assets with prices that fall over time. They must have large enough cash flows to offset these capital losses and pay the same economic return as all other assets, with _{t−1}i_t < R_t/p_{t−1}. This example provides an ideal opportunity to derive the user cost of capital for firms by rearranging the equation of yield in (2.26):

c_t = _{t−1}i_t − Φ_t = R_t/p_{t−1},

where Φ_t = Δp_t/p_{t−1} is the rate of change in the value of the asset over the period. It is the forgone expected return on all other assets in the same risk class (_{t−1}i_t) less the rate of capital gain (Φ_t). For depreciating assets Φ_t < 0 measures the rate of economic depreciation that must be recovered from the net cash flows to preserve each dollar of wealth invested in the asset. Most governments examine the way their policies impact on the user cost of capital in each sector of the economy to determine how they affect private investment. Some implement policies, including, for example, tax reform and accelerated depreciation allowances, to reduce the user cost of capital and raise investment. Tax reform that reduces the excess burden of taxation can lower the user cost of capital in every sector, while accelerated depreciation allowances are targeted at specific activities and are therefore likely to cause efficiency losses.
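The user cost formula can be illustrated with hypothetical numbers for a depreciating asset; the figures below (a $20,000 machine, a 6 per cent required return, 10 per cent depreciation) are my own, not from the text.

```python
# Hypothetical depreciating asset: a $20,000 machine expected to lose 10%
# of its value over the year when the required return is 6%, so the user
# cost is c = i - Phi = 16% of the capital invested.
p_prev = 20000.0          # value of the asset at the start of the period
i = 0.06                  # expected return on assets in the same risk class
phi = -0.10               # rate of economic depreciation (capital loss)
user_cost_rate = i - phi
required_cash_flow = user_cost_rate * p_prev
print(user_cost_rate, required_cash_flow)
```

The asset must generate $3,200 of net cash flow over the year: $1,200 to cover the opportunity cost of funds and $2,000 to make good the fall in its value.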
Box 2.8 The equation of yield: a numerical example

Sunscreen Ltd is a publicly listed company whose current share price is $15. It produces awnings, roller shutters and shade sails. If, in the absence of taxes and transactions costs, traders expect the economic earnings per share (EPS_k) over the next year to be $1.80, then by the equation of yield all other shares in the same risk class (k) must pay a rate of return of i_k = EPS_k/p_k = 0.12. Moreover, when they expect the dividend yield to be 8 per cent at the end of the next year they must also expect the share price to rise by 4 per cent:

Δp_k = EPS_k − DIV_k = $1.80 − $1.20 = $0.60,

with p̄_k = p_k + Δp_k = $15.60. Financial analysts use measured earnings per share and information about the revenue and costs of Sunscreen over the period to estimate economic earnings per share. Those with better information than the market can make profits by trading the shares.

2.4.3 Convenient pricing models

Two pricing models are frequently used for making simple rule-of-thumb calculations. They are perpetuities which pay a fixed annual net cash flow in perpetuity, and annuities which pay a fixed annual net cash flow over a defined number of years. When shares are expected to pay a stable stream of dividends in the future we can approximate their value by using the pricing equation for the perpetuity. The present value of a security that pays a constant nominal net cash flow of R_P at the end of each year in perpetuity is

p_P = R_P/i,

where i is the average annual yield on a perpetual government bond. If there are no government perpetuities we can use the average annual yield on a 50-year bond as a close approximation. This pricing relationship is confirmed by noting that p_P is the amount that would have to be invested for ever at interest rate i to generate a net cash flow of R_P at the end of every year.
Suppose the net cash flow is $100 and the average annual yield on the long-term government bond is 5 per cent. Then the price of the perpetuity is $2000. When the net cash flow is expected to grow at a constant rate g_P each year, the price of the perpetuity becomes

p_P = R_P/(i − g_P).

Annuities are more common because they provide a constant net cash flow over a specified number of years. They are popular securities for consumers wanting to fund a consumption flow over finite time periods. The current price of an annuity that pays net cash flows of R_P dollars at the end of each year for T years can be calculated as the combination of two perpetuities paying the same annual cash flow; one is purchased now and the other sold at the end of year T, so that, in present value terms, we have

p_A0 = p_P0 − p_P0/(1 + i)^T = (R_P/i)[1 − 1/(1 + i)^T].

This calculation assumes the average annual yield to maturity on the perpetuity is the same as the average annual yield on the T-year government bond. For an annual cash flow of $100 paid at the end of each year for 10 years, the price of the annuity is $772.17 when the interest rate is 5 per cent.

2.4.4 Compound interest

Assets often have net cash flows over time intervals that do not coincide with the timing of the available interest rate data. We can use the compound interest formula to compute the discount rates in these circumstances. Compound interest is where interest is paid on interest. In other words, interest is paid and then reinvested in the asset. This raises the effective interest rate above the simple interest rate over the period, which can be demonstrated by computing the amount one dollar will grow into over a year when interest is paid m times:

(1 + i/m)^m = 1 + i_e,

where i is the simple interest rate and i_e the effective rate of interest. If the dollar is compounded m times each year for t years it will grow to

(1 + i/m)^{mt} = (1 + i_e)^t.

With continuous compounding a dollar will grow in 1 year into:

lim_{m→∞} (1 + i/m)^m = e^i,

where e ≈ 2.718 is the base of the natural logarithm. Thus, the effective rate of interest is 172 per cent for the simple interest rate of 100 per cent. With continuous compounding over t years it grows to:

lim_{m→∞} (1 + i/m)^{mt} = e^{it}.

Box 2.9 Examples of compound interest

The benefits from compound interest can be illustrated for a simple interest rate of 5 per cent over 1 year for different values of m:

a for semi-annual interest (m = 2): (1 + 0.05/2)^2 = 1.0506;
b for quarterly interest (m = 4): (1 + 0.05/4)^4 = 1.0509;
c for continuous compounding (m → ∞): lim_{m→∞} (1 + 0.05/m)^m = e^{0.05} = 1.0513.

There are a number of applications where compounding is important. Consider an asset with constant net cash flow paid continuously for T years, illustrated in Figure 2.17. A light bulb is an example when it provides a constant stream of light (L). Frequently the net cash flows are continuous over blocks of time and have uncertain lives, but we abstract from those complications here. This certain stream of net cash flows has a present value of:

p_L = ∫_0^T R_L e^{−it} dt.

Figure 2.17 An asset with a continuous consumption stream.

The compound interest formula can be used to derive discount rates for cash flows that occur at times that do not coincide with the timing for reported interest rates. To illustrate this point, consider an asset k with net cash flows that occur eight times every 100 days from now. It has a present value of:

p_k = Σ_{t=1}^8 R_kt/(1 + i_100)^t,

where i_100 is the 100-day interest rate embedded in the annual interest rate i, with:

(1 + i_100)^{365/100} = 1 + i.
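The compounding formulas above can be evaluated directly; the 5 per cent and 10 per cent rates below are those used in the text.

```python
# Evaluating the compounding formulas with the rates used in the text:
# a 5% simple rate compounded m times, and the 100-day rate embedded in
# a 10% annual rate via (1 + i_100)^(365/100) = 1 + i.
import math

def effective_rate(i, m):
    return (1 + i / m) ** m - 1

semi = effective_rate(0.05, 2)           # semi-annual compounding
quarterly = effective_rate(0.05, 4)      # quarterly compounding
cont = math.exp(0.05) - 1                # the continuous-compounding limit
i_100 = (1 + 0.10) ** (100 / 365) - 1    # 100-day rate, roughly 0.0265
print(semi, quarterly, cont, i_100)
```

More frequent compounding always raises the effective rate, with continuous compounding as the upper limit.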
If the annual interest rate is 10 per cent the 100-day rate is 0.0265, which can be approximated as i(100/365) = 0.0274.40

2.4.5 Bond prices

A variety of different types of bonds are issued by private and public institutions. Government bonds are in general less risky than corporate bonds, as is evident from their lower returns, and they have different maturity dates ranging from 90 days to 20 (or more) years. There are three types of government bonds, which differ in their stream of future cash flows:

i The coupon bond pays coupon interest on the face value of the bond in each period up to and including the date of maturity when it also repays the principal. The coupon interest rate can differ from the market interest rate over the life of the bond. It is a commitment made at the time the bond is sold.

ii The consol is a coupon bond that pays coupon interest in perpetuity.

iii The discount bond is a coupon bond with zero coupon interest. Thus, it pays a specified cash flow (for example, one dollar or one unit of real purchasing power) at maturity, but nothing in preceding time periods. (In effect, the current market price of a discount bond represents the capital that would need to be invested in a risk-free asset with interest payments reinvested until maturity.)

When there are differences in the coupon and market interest rates, the market and face values of the bonds diverge. Consider a government bond that pays 5 per cent coupon interest for 5 years on its face value of $1000 that is redeemed at maturity. Its current market price is

p_B = Σ_{t=1}^4 50/Π_{j=1}^t (1 + _{j−1}i_j) + 1050/(1 + i_5)^5.41

When the coupon interest rate is less than the market rate the bond price sells at a discount on its face value, while the reverse applies when the market rate is lower than the coupon rate.
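A sketch of pricing the five-year bond with a flat market rate (a simplification of the spot-rate formula above, with every short rate equal): at a 5 per cent market rate the bond sells at its face value, below it when the market rate is higher.

```python
# Sketch of the five-year bond with $1000 face value and a 5% coupon,
# priced with a hypothetical flat market rate in every period.
def coupon_bond_price(face, coupon_rate, market_rate, years):
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + market_rate) ** years
    return pv_coupons + pv_face

print(coupon_bond_price(1000, 0.05, 0.05, 5))  # par: coupon rate = market rate
print(coupon_bond_price(1000, 0.05, 0.06, 5))  # below par: market rate higher
```

The discount (or premium) on face value is exactly what equates the bond's holding period yield to the market rate.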
This ensures the bond pays its holder a market rate of return, which is confirmed using the equation of yield, where, in each period of the bond’s life, we have:

i_5 p_{Bt−1} = 50 + Δp_Bt.

In most countries public and corporate bonds trade prior to maturity on secondary boards at stock exchanges, where this provides information about the yield curve and the term structure of interest rates. Corporate bonds can be long or short term and secured or unsecured, where secured bonds have a prior claim to the market values of specified assets when firms default on their interest payments. Even though bondholders have prior claims to the net cash flows they still face risk when shares have limited liability that restricts the losses of shareholders to the value of their invested capital. Once firms have losses that exceed the equity capital they must fall on the bondholders.42

2.4.6 Share prices

Since shares have residual claims to the net cash flows of firms they are typically more risky than debt, even when shares have limited liability. They also give shareholders valuable voting rights that can be used to influence the investment decisions of firms. The larger the proportion of shares any individual shareholder can influence, the more control they have over the firm. Share prices are ultimately determined by the value of their dividend payments, where

p_E = Σ_{t=1}^∞ DIV_t/(1 + i_Et)^t,

with DIV_t being the dividend per share, and i_Et the expected economic return on all other assets in the same risk class, at each time t. To reduce the amount of notation in this and following sections we omit the variable identifying the firm. Shareholders expect to receive income as cash dividends and/or capital gains in each period. Some shares pay variable dividends through time, while others pay a stable dividend stream. Either way they must be expected to pay the required economic return in each time period.
Dividends are funded from the net cash flows of firms after paying interest to bondholders and maintaining the initial market value of the invested capital. This can be demonstrated by writing the equation of yield for a share as:

$$i_{Et}\, p_{Et-1} = DIV_t + \Delta p_{Et} = EPS_t. \qquad (2.31)$$

Investment decisions under certainty

Here EPSt is expected economic earnings per share at time t. Investors must expect each share to pay economic earnings over the period as a cash dividend and capital gain which is at least equal to the economic income paid on all other shares in the same risk class (iEt pEt−1). This economic income is generated by the production activities of the firm issuing the share, with:

$$(1 + i_{Et})\, p_{Et-1} S_{t-1} = X_t - (1 + i_{Bt}) B_{t-1} + V_t. \qquad (2.32)$$

Here St−1 is the number of shares issued by the firm at the beginning of the period. The market value of this equity at the beginning of the period is pEt−1 St−1 = Et−1. Xt is the firm's net cash flow, which is equal to the gross revenue from selling output minus all the non-capital operating costs. It is the cash flow that can be distributed to capital providers in the firm. Bt−1 is the market value of debt issued by the firm at the beginning of the period. We assume debt pays market interest at the end of the period (iBt) so that its face value is also its market value. Finally, Vt = Et + Bt is the expected market value of the firm at the end of the period. After rearranging (2.32) we can write the expected economic return on equity over the period as

$$i_{Et} E_{t-1} = X_t - i_{Bt} B_{t-1} + V_t - V_{t-1}. \qquad (2.33)$$

Since shareholders have the residual claim on the firm's net cash flows, their income is measured after repaying principal and interest to bondholders and recovering any fall in the market value of capital invested in equity. It is convenient to define the rate of economic depreciation as the rate of change in the market value of the firm:

$$\Phi_t = \frac{V_t - V_{t-1}}{V_{t-1}}.$$

This is negative when the firm's market value is expected to fall over the period and positive when it is expected to rise. This allows us to write the expected economic income paid to equity as:

$$i_{Et} E_{t-1} = X_t - i_{Bt} B_{t-1} + \Phi_t V_{t-1}. \qquad (2.34)$$

When investors look at trading equity they compute its expected economic income, because that determines the change in their wealth. Economic income is what investors can consume over a period of time without reducing their initial wealth. In practice, however, economic income is quite difficult to measure because it includes capital gains or losses on assets held over the period. Many capital assets are purchased in prior periods, and unless there are active markets for identical assets the changes in their values must be estimated. Accounting rules and conventions are adopted to remove this subjectivity from reported income. Clearly, it would be possible for firms to manipulate reported income when capital gains or losses are subjectively determined. By reporting measured income, which is based on specified rules and conventions for computing depreciation in the values of capital assets, traders know how it is computed, and they can then make the necessary adjustments to convert it into economic income, which is the measure of income they care about because it isolates the true consumption gain.

2.4.7 Price-earnings ratios

Price-earnings (P/E) ratios for publicly listed shares are reported in the financial press in most countries. These ratios are used by traders and analysts in financial markets to assess the future profitability of shares, but they are based on measured rather than economic earnings. Thus, traders make adjustments to convert them into ideal price-earnings ratios that are obtained by rearranging (2.31):

$$\frac{p_{Et-1}}{EPS_t} = \frac{1}{i_{Et}}. \qquad (2.35)$$

Box 2.10 Measured P/E ratios for shares traded on the Australian Securities Exchange

The P/E ratios listed below were compiled from data provided by the Australian Securities Exchange at the close of business on Friday 13 April 2007. They are average values for firms in 12 sectors of the economy with P/E ratios less than or equal to 150. Twenty-three firms had P/E ratios above this number and they were not included because in most cases they were large outliers that did not reflect the values reported for most firms. The standard deviations are included to indicate how the P/E ratios differ across firms in each sector; they differed most for firms in the health care sector and least for firms in the telecommunications sector. For each sector the table reports the average P/E ratio, its standard deviation and the number of firms, for the sectors: consumer discretionary; consumer staples; energy; financials (excluding property); property trusts; health care; industrials; information technology; materials; telecommunication services; utilities; and unclassified firms, together with a summary row for all firms reporting a P/E ratio of 150 or less.

Source: The data was obtained from the Australian Financial Review website at http://www.afr.com/home/sharetables/weekly/2007/4/13/CCsswk070413.csv, on 17 April 2007.

This tells us the number of years it takes for the share to repay its capital as economic income, and when the no arbitrage condition holds these ratios are the same for all shares in the same risk class. Any differences are due to differences in their market risk. Traders make investment decisions using the information contained in these ratios because they are based on economic income. In practice, however, the measured P/E ratio for each share is based on accounting (A) income:

$$\frac{p_{Et-1}}{EPS_{t-1}^A} = \frac{1}{i_{Et}^A}. \qquad (2.36)$$

There are two reasons why this differs from the ideal ratio in (2.35):
i It uses the most recently reported earnings per share and is therefore backward-looking. In contrast, the ideal ratio is forward-looking because it uses expected future income.
Indeed, the current share price is determined by future income, and not income in previous periods (unless it provides information about future income).
ii It uses measured income rather than economic income, for reasons discussed above.

By defining ideal P/E ratios it is possible to understand the information reported in the measured ratios. Occasionally they have very high values that suggest the shares will pay very low returns. For example, information technology (IT) stocks had measured P/E ratios as high as 60 during the IT boom at the end of the twentieth century. But this is explained by the low measured earnings in periods of research and development, which do not include the expected capital gains included in economic income. Sometimes the reported P/E ratios are negative due to income losses. However, current share prices are determined by expected economic income, which cannot be negative. No investor pays a positive price for a share with expected economic losses. While losses are possible due to uncertainty, investors must still expect profits. Financial analysts make these adjustments when using the reported ratios.

Box 2.11 Examples of large P/E ratios

Twenty-three firms with P/E ratios greater than 150 were not included in the data reported in Box 2.10. Four of them are summarized below, each of which has a very low measured earnings per share relative to the share price.

Firm                     P/E ratio   Share price ($)   Earnings per share ($)
Victoria Pet               2100          0.21               0.0001
Mariner Bridge             6000          2.40               0.0004
Consolidated Minerals      8833.3        2.65               0.0003
Bakehouse Quarter         39100          3.91               0.0001

Source: The data was obtained from the Australian Financial Review website at http://www.afr.com/home/sharetables/weekly/2007/4/13/CCsswk070413.csv, on 17 April 2007.

The difference between measured and economic income can be illustrated by comparing economic earnings per share,

$$EPS_t = \frac{X_t - i_{Bt} B_{t-1} + \Phi_t V_{t-1}}{S_{t-1}},$$

with measured earnings per share,

$$EPS_t^A = \frac{X_t - i_{Bt} B_{t-1} + \Phi_t^A V_{t-1}}{S_{t-1}},$$

for the period from t−1 to t.
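The ratios in Box 2.11 can be verified directly, since a measured P/E ratio is simply the share price divided by measured earnings per share. A quick check using the Box 2.11 figures:

```python
# Check that each reported P/E ratio in Box 2.11 equals the share price
# divided by the (very low) measured earnings per share.

firms = {
    "Victoria Pet":          (0.21, 0.0001),   # (share price $, EPS $)
    "Mariner Bridge":        (2.40, 0.0004),
    "Consolidated Minerals": (2.65, 0.0003),
    "Bakehouse Quarter":     (3.91, 0.0001),
}

pe_ratios = {name: price / eps for name, (price, eps) in firms.items()}
# e.g. Victoria Pet: 0.21 / 0.0001 = 2100
```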
Most of the difference arises from the way depreciation is computed, where ΦtA is measured depreciation and Φt economic depreciation. There are two main reasons why ΦtA ≠ Φt.

First, measured depreciation is computed by applying decay factors to the written-down book values of the assets in the firm. There are basically two methods that can be used: straight line and diminishing value. But it is not uncommon for these asset prices to rise in some time periods. For example, firms with land and buildings in the central business districts of major cities frequently make capital gains on these assets even though they apply depreciation allowances to them when computing measured income. Moreover, there are circumstances where every physical asset depreciates in value while the overall market value of the firm appreciates. This occurs in periods when intangible assets such as goodwill are created, or when investments are made in activities that are expected to pay economic profits in the future. All these capital gains are included in economic depreciation (with Φt > 0), while they are only included in measured income at the time the intangible assets actually trade. For example, when firms sell land and buildings they can report any capital gains at the time of sale, and not in the period when they actually occur. This makes measured income less than economic income in periods when the gains occur, but greater than economic income in periods when the assets are traded and the gains realized.

Second, since measured depreciation is based on the initial prices paid for assets it is called historic cost depreciation, and it underestimates economic depreciation when there is general price inflation. The best way to illustrate this point is to consider the way economic depreciation in (2.34) is affected by anticipated inflation.
Consider a situation where all nominal variables rise at the expected rate of inflation (π) to preserve the real economy. By using (2.18) we can write the expected nominal economic income on equity in (2.32) as:

$$(1 + r_{Et})(1 + \pi) E_{t-1} = X_t - (1 + r_{Bt})(1 + \pi) B_{t-1} + V_t,$$

where the net cash flows and the value of the firm at time t rise by the expected rate of inflation. After rearranging this expression, the real economic return to equity becomes:

$$r_{Et} E_{t-1} = \frac{X_t}{1+\pi} - r_{Bt} B_{t-1} + \frac{V_t - V_{t-1}(1+\pi)}{1+\pi}.$$

From this we can see that economic depreciation is the change in the market value of the firm that preserves the purchasing power of the initial capital invested at t−1. If we assume the measured and economic depreciation rates are equal in the absence of inflation, the measured real return to equity, which uses historic cost-based depreciation allowances, would be

$$r_{Et}^A E_{t-1} = \frac{X_t}{1+\pi} - r_{Bt} B_{t-1} + \frac{V_t - V_{t-1}}{1+\pi}.$$

Notice how this understates the firm's expenses and causes measured income to exceed economic income (all other things being equal), with rEtA > rEt, where expected inflation increases the effective tax rate on economic income when measured income is subject to tax.⁴³ Another way of summarizing this effect is to compare the economic depreciation rate,

$$\Phi_t = \frac{V_t - V_{t-1}(1+\pi)}{V_{t-1}(1+\pi)},$$

with the measured depreciation rate,

$$\Phi_t^A = \frac{V_t - V_{t-1}}{V_{t-1}}.$$

When Vt rises at the expected rate of inflation, nothing real happens to the economic depreciation rate, whereas the measured rate rises. The appropriate way to preserve measured depreciation allowances in these circumstances would be to scale up all the written-down book values of the firm's assets by 1 + π before applying the decay factors to them. This adjustment would at least preserve the purchasing power of the initial capital invested in the firm and stop the effective tax on economic income from rising when there is expected inflation.
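The gap between the two real returns can be illustrated numerically. This is a hypothetical sketch (the figures are invented, not from the text): historic-cost depreciation deducts the change in value against Vt−1 rather than the inflated base Vt−1(1 + π), so measured real income exceeds economic real income.

```python
# Hypothetical numbers illustrating economic vs. historic-cost (measured)
# real returns to equity under expected inflation pi.

E0, B0 = 500.0, 500.0   # opening equity and debt
V0 = E0 + B0            # opening firm value
pi = 0.05               # expected inflation
X1 = 110.0              # nominal net cash flow at t = 1
V1 = 990.0              # nominal firm value at t = 1
r_B = 0.04              # real interest rate on debt

# Real economic return: depreciation measured against V0 * (1 + pi).
r_E = (X1 / (1 + pi) - r_B * B0 + (V1 - V0 * (1 + pi)) / (1 + pi)) / E0

# Measured real return: historic-cost depreciation ignores the inflated base.
r_E_A = (X1 / (1 + pi) - r_B * B0 + (V1 - V0) / (1 + pi)) / E0

# Measured income overstates economic income whenever pi > 0.
```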
2.4.8 Firm valuations and the cost of capital

Later, in Chapter 7, we examine the impact of firm financial policy choices on their market values. We do this by defining the proportion of capital financed by debt at time t−1 as bt−1 = Bt−1/Vt−1, and the remaining proportion, financed by equity, as (1 − bt−1) = Et−1/Vt−1. This allows us to solve for the market value of the firm at the beginning of the period using (2.34) as

$$V_{t-1} = \frac{X_t}{c_t}.$$

Here ct = (1 − bt−1) iEt + bt−1 iBt − Φt is the user cost of capital in period t, that is, the cost of tying up each dollar of capital in the firm over the period from t−1 to t, where (1 − bt−1) iEt + bt−1 iBt is the forgone return on all other assets in the same risk class, and Φt is the rate of capital depreciation. It is worth noting that while the user cost of capital is a weighted average of the costs of debt and equity, it is indeed the marginal cost of capital when the firm is a price-taker in all markets and changes in investment have no effect on the depreciation rate Φt. Thus, its investment choices cannot affect the market returns to debt and equity, or the market prices of capital assets. In the presence of uncertainty, however, changes in the debt-equity ratio (with investment held constant) can affect the returns to debt and equity by moving risk between them, but without affecting the overall value of the firm when there is common information. This is examined later in Chapter 7 by including uncertainty in the analysis. Once firms can affect the market risk or the underlying risk-free return there are additional terms in the user cost of capital.

Recall from Section 2.2 that the objective function of a competitive firm is to maximize profit, defined as

$$\eta_{t-1} = V_{t-1} - Z_{t-1},$$

where the net cash flows are an increasing function of capital investment at the beginning of the period (Zt−1), with ∂Xt/∂Zt−1 > 0 and ∂²Xt/∂Zt−1² < 0.
Leverage (bt−1) is chosen to minimize the user cost of capital, with

$$\left. \frac{\partial \eta_{t-1}}{\partial b_{t-1}} \right|_{dX_t = 0} = \frac{V_{t-1}}{c_t} \left( i_{Et} - i_{Bt} \right) = 0, \qquad (2.41)$$

while optimally chosen investment satisfies

$$\left. \frac{\partial \eta_{t-1}}{\partial Z_{t-1}} \right|_{db_{t-1} = 0} = \frac{1}{c_t} \frac{\partial X_t}{\partial Z_{t-1}} - \frac{1}{c_t} \frac{\partial c_t}{\partial Z_{t-1}} V_{t-1} - 1 = 0. \qquad (2.42)$$

When the no arbitrage condition holds in a certainty setting without taxes, debt and equity must pay the same rate of return, with iEt = iBt, where the cost of capital is independent of leverage and (2.41) holds for all bt−1. This is the Modigliani-Miller leverage irrelevance theorem in a certainty setting. The condition for optimally chosen investment in (2.42) equates the value of the marginal product of capital to its marginal user cost, with ∂Xt/∂Zt−1 = ct when ∂ct/∂Zt−1 = 0. Once leverage affects the user cost of capital, investment cannot be chosen independently of the debt-equity choice, and Modigliani-Miller leverage irrelevance fails.

Box 2.12 The market valuation of a firm: a numerical example

Homemaker Ltd is a publicly listed company with expected net cash flows (X) of $168 million in 12 months' time and a current market value (V) of $1200 million. In a frictionless competitive capital market where traders have common information the firm's expected user cost of capital (c) is obtained using the equation of yield:

$$c = \frac{X}{V} = \frac{168}{1200} = 0.14.$$

In a certainty setting where debt and equity pay the same risk-free return (i), the change in the value of the firm over the year (Φ) is obtained from the user cost of capital, which is c = i − Φ = 0.14. If the risk-free rate is 5 per cent the market value of the firm declines by 9 per cent (Φ = i − c = 0.05 − 0.14 = −0.09), so at the end of the year we have V₁ = V(1 + Φ) = 1200(1 − 0.09) = $1092 million.

Later, in Chapter 7, we introduce risk and taxes and find circumstances where the expected user cost of capital changes with leverage.
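The arithmetic in Box 2.12 can be reproduced in a few lines (the figures are from the box):

```python
# Box 2.12 valuation arithmetic for Homemaker Ltd: the user cost of capital
# comes from the equation of yield c = X / V, and in a certainty setting
# c = i - Phi pins down the expected change in the firm's value.

X = 168.0       # expected net cash flow ($m) in 12 months
V = 1200.0      # current market value ($m)
i = 0.05        # risk-free rate

c = X / V               # user cost of capital = 0.14
Phi = i - c             # rate of change in firm value = -0.09
V_next = V * (1 + Phi)  # value at year end: 1200 * 0.91 = $1092m
```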
Problems

1 James allocates endowments of income between consumption today and consumption tomorrow by trading in a competitive capital market at interest rate i.
i Illustrate his consumption opportunities in a diagram with dollars of consumption today on the x axis and dollars of consumption tomorrow on the y axis.
ii Under what circumstances will James just consume his income endowments in each period?
iii If James does not initially borrow or lend, will a rise in the market interest rate ever cause him to become a lender? Will he always be better off after the interest rate rises?
iv Explain what conditions on preferences are required for James to lend less when the interest rate rises if he is initially a lender.
v What happens to wealth measured in current dollars when the interest rate rises? What happens to wealth measured in future dollars when the interest rate rises?
vi Explain what determines whether individuals enter the capital markets as borrowers or lenders. What role do financial securities play?
vii Illustrate the change in James' consumption opportunities when transactions costs drive a wedge between the borrowing and lending rates of interest, with iB > iL.
viii Identify the benefits to James, measured in current dollars, from introducing a capital market. Are these benefits the same for all consumers?
ix What determines the market rate of interest?

2 Bill is endowed with current income (M0) and will receive a pension (P1) in the second period, which he allocates between consumption in the two periods in a competitive capital market at interest rate i. (Assume he has strictly convex indifference schedules over consumption expenditure in the two periods.)
i Use a consumption space diagram to illustrate the way a tax on interest income affects Bill's budget constraint when it drives a wedge between borrowing and lending rates. Will this tax always cause him to trade less (i.e.
reduce saving or borrowing) in the capital market when current consumption is a normal good?
ii Examine the way Bill's budget constraint is affected by a lump-sum tax (T0) in the current period which is returned to him in the second period (without interest). Will this cause him to save less when current and future consumption are normal goods?
iii Use a diagram to illustrate the welfare change from replacing the pension with a non-tradable voucher of V1 = P1 dollars when Bill initially borrows. Would this raise the amount he borrows if current consumption is an inferior good? In your diagram identify the change in the interest rate that would alter his consumption in exactly the same way as the voucher-subsidy switch.
iv Use a diagram to illustrate the welfare change from replacing the pension with a non-tradable voucher of V1 = P1 dollars when Bill initially saves. Does this cause his saving to fall when current consumption is a normal good? In your diagram identify the change in the interest rate that would alter his consumption in exactly the same way as the voucher-subsidy switch.

3 A farmer has 400 bushels of wheat, and he can convert wheat this year (x1) into wheat next year (x2) using the farming technology x2 = 50√x1. All prices are expected to remain constant between this year and next. Clearly, if the farmer plants his entire endowment he will earn a rate of return of 150 per cent on his investment.
i Illustrate the farmer's consumption opportunities in a commodity space {x1, x2} diagram when he cannot access a capital market (i.e. he cannot borrow or lend). Write down his optimization problem when he derives utility from consuming wheat now and next year, and then derive the condition for optimally chosen investment. Illustrate this outcome in your diagram.
ii Suppose the farmer can borrow and lend at 25 per cent per annum. How much wheat should he plant? What are his consumption opportunities now?
iii Why is his investment decision now independent of his tastes? The farmer tells his neighbour that if the market interest rate were to rise to 50 per cent per annum he would plant only 278 bushels of wheat even though he could earn a 150 per cent return on his investment by planting the whole 400 bushels. Is the farmer investing too little wheat?
iv How much wheat should the farmer plant when he can borrow and lend at a zero market rate of interest?
v With the interest rate at which he can borrow and lend standing at 25 per cent, the farmer learns that he can borrow the equivalent of 300 bushels of wheat at a zero interest rate from a primary industry bank recently established by the government to stimulate farm investment. What is his optimal response to this scheme? What are his consumption opportunities?

4 Consider two projects with the following net cash flows:

Project     C0        C1      C2
(a)        −5500
(b)        −5500              8500

The average annual yield on a two-period security is 10 per cent (i2 = 0.10). In answering the following questions, assume this rate stays constant.
i Would you invest in either of the projects when the term structure of interest rates is flat?
ii Would you invest in either of the projects if the current spot rate is 5 per cent? What is the expected spot rate for the second year, E(1i2)?
iii Would you invest in either of the projects if the expected spot rate for the second year is 5 per cent? What is the current spot rate, i1?
iv Find the term structure of interest rates that would make you indifferent to the projects.
v Using the answers to parts (i), (ii) and (iii), isolate the important factors for the appraisal of projects with multi-period cash flows. What problems are encountered in practice when appraising projects?
vi What determines the term structure of interest rates, and how can they be calculated in practice?
vii How will project evaluation be biased if a flat discount rate is used to discount the net cash flows when the yield curve rises over the life of the project?

5 Consider the returns on the following three traded securities.
• S01 pays a 5 per cent return at t = 1 on dollars invested at t = 0;
• S12 pays a 12 per cent return at t = 2 on dollars invested at t = 1;
• S02 pays an average return of 9 per cent over the two periods, paid at t = 2 on dollars invested at t = 0.
Construct a wealth-maximizing set of trades in these securities. Derive the maximization problem for an arbitrageur who starts with no initial wealth and sells one security to buy the other. Draw the budget constraint and the iso-profit lines in the security space and identify the profit from arbitrage. (Assume there are no taxes on security returns and the securities have equal risk over the two-year period.)

6 Derive and explain the fundamental equation of yield. (Carefully detail the assumptions you make.) Use it to explain how the price of a capital asset changes over periods of time when it generates no net cash flows. Why do people hold capital assets which generate no net cash flows, such as paintings, if their prices do not rise at the rate of interest?

7 A taxi cab company purchases its last car for $22,000 at the beginning of the year and it is expected to have a market value of $16,000 at the end of 12 months. Over the year the car is expected to generate $35,960 in cab fees. If the company expects to pay $17,000 in wages to a driver of the cab, $6,000 in fuel costs and $3,000 in other operating expenses (which do not include the cost of capital), what is the expected 12-month return on capital assets in the same risk class as this asset? (Assume all revenues are received, and operating costs paid, at the end of the period.) What is the risk premium if the riskless rate of return is 8 per cent?
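For the taxi-cab question, one way to organize the computation is to apply the equation of yield directly: the return on the asset is its net cash flow plus the change in its market value, per dollar invested. This is an illustrative sketch, not the only valid layout:

```python
# Equation-of-yield computation for the taxi-cab problem:
# i * p0 = X + (p1 - p0), so i = (X + dp) / p0.

p0 = 22_000.0                          # purchase price of the car
p1 = 16_000.0                          # expected resale value in 12 months
fees = 35_960.0                        # expected cab fees over the year
costs = 17_000.0 + 6_000.0 + 3_000.0   # wages + fuel + other operating costs

X = fees - costs                       # net cash flow = $9,960
i = (X + (p1 - p0)) / p0               # expected return on this risk class
risk_premium = i - 0.08                # premium over the 8% riskless rate
```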
8 Jordan owns a ride-on lawn mower that he expects will have a market value of $3,800 at the end of the next 12 months, when he receives a net cash flow of $600 for mowing his neighbours' lawns. (Assume these are the only services the mower provides over the period.) Will the expected user cost of capital be 15 per cent if the mower has a current market value of $4,000?

9 Consider the following cost-benefit analysis:

How about considering the use of energy-efficient compact fluorescent lamps in place of the traditional incandescent (filament) globe? Let's compare the cost of operating a 60 watt incandescent globe with a life of 1,000 hours and costing $1 against a compact fluorescent 11 watt lamp with a life of 8,000 hours and costing $25. Due to its greater efficiency the 11 watt lamp provides light equivalent to the 60 watt globe. Using a domestic rate of 7.32 cents per kilowatt-hour for electricity and operating the lights for 10 hours per day, we find that after 8,000 hours of use (about 2 years and 3 months) the cost of using a globe was $35.14 for power plus $8 for the globes, a total of $43.14. On the other hand, the lamp used $6.44 worth of power and cost $25, giving a total cost of $31.44. This means a potential saving of $11.70; obviously, more lights and increased usage will mean a greater saving. On the same basis a 15 watt lamp compared with a 75 watt globe shows a possible saving of $18.14!

These calculations make it hard to explain why people buy incandescent (filament) globes in preference to compact fluorescent lamps. Can you provide reasons why they do? Use a spreadsheet to compute the present values of the capital and recurrent costs of providing light from eight globes and one lamp. (Note that the electricity charge is 7.32 cents each hour when using a 1,000-watt appliance. Assume the electricity charges are paid every 62 days, and that the interest rate is 10 per cent per annum in each year.)
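The undiscounted figures in the quoted cost-benefit analysis can be checked directly. This is a minimal sketch of that arithmetic only; the discounted spreadsheet version the problem asks for (62-day billing, 10 per cent interest) is left as the exercise intends:

```python
# Undiscounted totals over 8,000 hours of use at 7.32 cents per kWh.
# Globes last 1,000 hours (so eight $1 globes); the $25 lamp lasts 8,000 hours.

rate = 0.0732  # dollars per kilowatt-hour

def running_cost(watts, hours):
    """Electricity cost of running an appliance of the given wattage."""
    return watts / 1000 * hours * rate

globe_total = running_cost(60, 8000) + 8 * 1.00   # power + eight $1 globes
lamp_total = running_cost(11, 8000) + 25.00       # power + one $25 lamp
saving = globe_total - lamp_total                 # roughly $11.70
```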
10 The price-earnings ratios for shares traded on the Australian Securities Exchange are reported on a regular basis to provide investors with information they can use to determine their security trades.
i Explain why these reported ratios differ across traded shares. What information do investors get from them? Should investors buy shares with high P/E ratios?
ii Examine the adjustments that investors would make to reported P/E ratios to convert them into ideal P/E ratios. Explain what the ideal ratios measure and why they differ across shares.
iii Consider reasons why the so-called new technology stocks have such high reported P/E ratios. Do stocks with more risk have higher or lower P/E ratios than stocks with less risk?
iv Explain why measured depreciation allowances differ from economic depreciation allowances. What are the factors that determine economic depreciation? How does expected inflation raise the effective tax on company income through measured depreciation allowances?

11 You have the following information. At t = 0 a firm issues $1,000 of debt with an interest cost of 20 per cent, and 1,000 shares with a market value of $1 (i.e. p0 = $1). At t = 1 there are expected to be net cash flows (before interest, dividends and depreciation) of $600, an ex-dividend share price (p1) of $1.20, and 5 per cent economic depreciation.
i Calculate and explain:
a the EPS, dividends per share and capital gains per share;
b the dividend yield;
c the earnings-price ratio; and
d the P/E ratio.
ii How is the P/E ratio measured in practice, and why does it differ from the ideal P/E ratio?

12 The following financial data was reported for banks that trade on the Australian Securities Exchange. It was compiled at the close of business on Friday 13 April 2007, and was obtained from the Australian Financial Review website at http://www.afr.com/home/sharetables/weekly/2007/4/13/CCsswk070413.csv, on 17 April 2007.
Security description      Share price ($), 13 April 2007      Net tangible   Dividend    Earnings per
                          Day high   Day low   Last sale      assets ($)     yield (%)   share (¢)
Adelaide Bank               15.71     15.26      15.45           6.01          3.95        84.92
ANZ Banking Group           30.40     30.11      30.33           8.53          4.12       200.00
Bank of Queensland          18.35     18.01      18.06           4.37          3.16        88.20
Bendigo Bank Ltd            17.08     16.99      17.01           5.06          3.17        81.20
C'wealth Bank of Aust.      51.99     51.57      51.80          10.23          4.58       320.70
Home Bld Soc.               15.40     15.30      15.32           4.72          2.87        54.00
Homeloans Ltd                0.98      0.97       0.97            —            5.15         5.09
Mackay Permanent             6.90      6.90       6.90           3.07          3.33        33.00
Mortgage Choice              3.21      3.17       3.20           0.40          4.06        16.10
National Aust. Bank         42.74     42.28      42.56          11.91          3.92       262.60
Rock Bld Perm                5.60      5.50       5.60           2.02          4.02        21.90
St George Bank              35.40     34.77      35.00           6.73          4.31       201.40
Westpac Banking             26.52     26.25      26.29           6.12          4.41       167.20
Wide Bay Aust. Ltd          12.40     12.35      12.35           3.37          4.57        60.52

Compute the measured P/E ratios for these banks and consider reasons why they might differ. Do they indicate that the Home Building Society and the Rock Building Permanent are the most risky banks, while the ANZ Banking Group and Westpac Banking are the least risky banks? Calculate the dividends per share for each bank. Find examples of companies with negative P/E ratios and positive dividend yields and explain why it happens. Construct a similar table for a small group of companies in another sector of the economy. Can the ideal P/E ratio ever be negative?

13 Consider a sewing machine which generates a certain $1,000 net cash flow at the end of each year for 5 years (with no residual value).
i Determine the price of this machine if the interest rate in each period is 10 per cent.
ii Calculate the rate of economic depreciation for this machine if it has no resale value at the end of its life.
iii Calculate the annual rate of depreciation allowed for tax purposes if the machine is depreciated on a straight-line basis over 5 years with zero residual value.
Compare these allowances to economic depreciation in each year and consider whether they raise or lower the effective tax rate on the economic income generated by the sewing machine.

14 Historic cost based depreciation allowances can cause measured income to differ from economic income when: (a) allowed rates of depreciation (ΦtA) differ from economic rates of depreciation (Φt); and (b) there is expected inflation. This question demonstrates these differences for the nominal returns paid to shareholders in a corporate firm from period t−1 to t, where the economic rate of return is

$$i_{Et} = \frac{X_t - i_{Bt} B_{t-1} + \Phi_t V_{t-1}}{E_{t-1}},$$

and the measured rate of return is

$$i_{Et}^A = \frac{X_t - i_{Bt} B_{t-1} + \Phi_t^A V_{t-1}}{E_{t-1}}.$$

Let Et−1 = Bt−1 = $500.
i In the absence of inflation Vt = $950, Xt = $80, and iBt = 0.03. Calculate and compare economic and measured income in the absence of inflation when:
a ΦtA = −0.03; and
b ΦtA = −0.05.
ii Suppose inflation is expected to be 10 per cent over the period from t−1 to t, where Vt = $1,045.
a Compute nominal economic income when the Fisher effect holds (in the absence of tax).
b Assume ΦtA = −0.05. Compute nominal measured income when iBt = 0.033. Use historic cost based depreciation to calculate measured income.
c What happens to the effective tax rate on economic income if the tax is applied to nominal measured income?

15 There are a number of ways that changes in expected inflation can impact on capital asset prices. One is through wealth effects in the money market due to the non-payment of interest on currency (notes and coins) held by the private sector. Answer the following questions when the demand for real currency balances (in billions of dollars) is md = 26 − 100i, where i is the nominal rate of interest. (Assume md is unaffected by changes in real income.)
i What is the supply of real currency when the equilibrium nominal interest rate is 6 per cent?
Calculate how much seigniorage there is when the rate of inflation in the general price level is expected to be 3 per cent. Explain how seigniorage transfers revenue to the Reserve Bank of Australia (RBA). Compute a dollar measure of the inefficiency when no interest is paid to currency holders and explain what this inefficiency measures. (Assume the RBA is a monopoly supplier that prints currency at a constant resource cost of 1 per cent of the quantity supplied.)
ii Now suppose currency holders expect an increase in the rate of inflation over the next year that raises the equilibrium nominal interest rate to 8 per cent. Compute the reduction in the demand for real currency balances and calculate a dollar measure (in millions) of the fall in the real wealth of currency holders. Carefully explain why this loss in wealth occurs and examine circumstances where it is larger for the same change in the nominal interest rate.
iii What would the real currency supply be if the RBA paid interest to currency holders? (Assume the RBA incurs no costs of paying interest, and interest is paid to eliminate inefficiency in the currency market.)

16 A gardening contractor buys a ride-on lawn mower which will generate a certain net cash flow of $5,000 at the end of each year for the next 2 years, at which time it has a certain residual value of $1,000. (Assume all markets are competitive.)
i Compute economic depreciation on the mower in each of the two productive years of its life and compare it with measured straight-line depreciation when the risk-free interest rate is 5 per cent per annum. (Note that straight-line depreciation apportions the purchase price of the asset less its residual value equally over the 2 years of its life. Assume there is no expected inflation.)
ii Compute and compare the depreciation measures in part (i) when expected inflation increases all nominal variables by 2 per cent each year, including the net cash flows, the residual value of the mower and the nominal interest rate. Use this to explain why there are differences in measured and economic income.

Identify circumstances where the Fisher effect holds and explain the forces that drive it. (Assume there is certainty.)

Explain why a change in the expected rate of inflation has real effects in the currency market when no interest is paid on notes and coins. Examine these real effects when there is a fall in the expected rate of inflation. How does the government raise revenue as seigniorage, and does this revenue fall when expected inflation declines?

Uncertainty and risk

Consumption goods in a certainty setting are characterized by their type (physical attributes), geographic location and location in time. For example, an apple in one location is different from an apple (with the same physical attributes) in another location, and it is also different from the same apple in different time periods. Indeed, consumers derive utility from the combination of characteristics that define them, which, for an apple, include sweetness, size, colour, firmness and moisture content. In fact, it is possible to estimate the price of any good as the summed value of its characteristics using hedonic prices, which are consumer marginal valuations for each characteristic.¹

Uncertainty introduces randomness into future consumption through exogenous variability in the environment within which consumers live. Its effects may be confined to the variance it imparts to their consumption or a combination of that and its direct impact on their utility.
Debreu (1959) captures uncertainty by expanding the characteristics used to define consumption goods by making them state-contingent, where all possible future states of the world are defined by unique combinations of a set of environmental variables. Consumers choose future consumption bundles that are contingent upon the realization of a final state of the world. In effect, they pre-commit to trades in specified states, where uncertainty is resolved when the true state eventuates.2 This is the state-preference approach to uncertainty that extends a standard certainty analysis by expanding the commodity space to include goods that are state-contingent. When consumers with common beliefs about the outcomes in each state of nature can trade goods in competitive frictionless markets in each time period, over time and between states of the world, the familiar Pareto optimality conditions apply. In the Debreu model consumers trade a full set of contingent commodity contracts which are commitments to exchange goods in specified states at agreed terms of trade. To make it a straightforward extension of the certainty model summarized in Definition 2.3, consumers have conditional perfect foresight and correctly anticipate equilibrium outcomes in each state of the world. In particular, every consumer correctly predicts their income and all the commodity prices. The only uncertainty is about which state becomes the actual (or true) state. In this setting the number of forward commodity contracts must increase to N times the number of possible states of the world so that consumers can trade all N commodities in every state.3 Arrow (1953) extends the Debreu model by including risky financial securities so that consumers can transfer income (and consumption) between states by bundling securities into portfolios. In the Arrow–Debreu economy there are no transactions costs, the capital market is complete, and traders are price-takers with conditional perfect foresight.
That makes it fully equivalent to the asset economy with certainty, summarized earlier in Definition 2.4. In a complete capital market consumers can trade every commodity in every state, where, in the absence of taxes and other market distortions, they equate their marginal valuations for goods.

Uncertainty provides an explanation for the large number of different types of securities that trade in capital markets. Consumers bundle them together in portfolios to choose patterns of consumption expenditure over uncertain states of nature. Indeed, in a complete capital market there are enough securities for them to trade in every state and spread risk according to their preferences. Financial securities play two important roles in spreading risk. The first is to eliminate diversifiable (individual) risk from consumption expenditure, while the second is to transfer non-diversifiable (market) risk across consumers. Whenever production activities in the economy are less than perfectly correlated, some of the variability in their net cash flows can be eliminated by bundling the securities used to finance them inside well-diversified portfolios. Consumers also face idiosyncratic (or individual) risk in their consumption expenditure that can be diversified across the population. For example, a given proportion of consumers will suffer a car accident and be harmed by adverse weather conditions. By purchasing insurance they create pools of funds for paying claims made by those incurring losses. Whenever individual risk trades at actuarially fair prices it is costlessly eliminated from consumption, where non-diversifiable risk is the only risk that will cause asset prices to sell at a discount in a frictionless competitive capital market.
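The distinction between diversifiable and non-diversifiable risk can be made concrete with a small numerical sketch. The variances below are hypothetical and the decomposition into a common market factor plus independent idiosyncratic shocks is an assumption, not a construction from the text.

```python
# Sketch (hypothetical numbers): the variance of an equally weighted portfolio
# of n projects with i.i.d. (diversifiable) risk falls as sigma^2 / n, while
# the common market component remains however many projects are pooled.

sigma2_i = 0.04   # assumed idiosyncratic variance of each project
sigma2_m = 0.01   # assumed variance of the common market factor

def portfolio_variance(n: int) -> float:
    # Var( m + (1/n) * sum_i e_i ) with independent e_i and market factor m.
    return sigma2_m + sigma2_i / n

for n in (1, 10, 100, 1000):
    print(n, portfolio_variance(n))
# Idiosyncratic risk vanishes as n grows; the market component does not.
```

As the number of pooled projects grows, portfolio variance converges to the market variance alone, which is why only non-diversifiable risk is priced in a frictionless competitive capital market.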
This is a fundamental property of all the popular asset pricing models we look at in Chapter 4.4 Financial securities facilitate the efficient transfer of market risk to consumers with lower relative risk aversion and/or better information. A large proportion of aggregate investment is financed by shares and bonds that consumers hold either directly in their own security portfolios or indirectly in mutual funds that are portfolios created by financial intermediaries. Indeed, a range of derivative securities are created to eliminate diversifiable risk from consumption and trade market risk at lower cost. For example, there are futures contracts for most major commodities that allow producers to reduce their exposure to price uncertainty on their outputs and inputs. Aluminium, crude oil, petroleum, wheat, wool, rice, sugar and coffee are all examples of commodities with futures contracts. Buyers give sellers a commitment to pay a set price for the delivery of a specified quantity and quality of a commodity at a specified point in time. Options contracts give holders the right, but not the obligation, to trade commodities and financial securities at specified prices on or before a specified time. They are used to replicate existing securities and to trade market risk. A major objective of finance research is to derive an asset pricing model where every consumer measures and prices risk in the same way. It is used by private traders to value risky projects and by agencies in the public sector to evaluate the effects of government policies. Many traders in financial markets are specialists who collect information about the net cash flows on capital assets to identify securities with prices that do not fully reflect their fundamentals. By selling securities with high prices (relative to their fundamentals), and buying securities with low prices, they make profits through arbitrage. 
When these profits are eliminated the no arbitrage condition holds so that security prices reflect all available information about their fundamentals. Pricing models are also used in project evaluation by private firms and public agencies. Private firms seek profitable investment opportunities, while government agencies examine policy changes and public projects that will raise social welfare. But these two objectives rarely coincide in economies with distorted markets due to, for example, taxes, externalities and non-competitive behaviour.5 In this chapter we examine the role of uncertainty and risk on equilibrium outcomes in private market economies – in particular, how it affects capital asset prices. Knight (1921) distinguished between risk and uncertainty by identifying risk as circumstances where consumers assign numerical probabilities to random events, and uncertainty as circumstances where they do not (or cannot) assign such probabilities. The analysis commences in Section 3.1 by looking at consumer preferences under uncertainty and risk, starting with the generalized state preferences employed in the Arrow–Debreu economy. We then consider the expected utility approach that separates probabilities from utility at each random event. This was initially formalized by von Neumann and Morgenstern (1944) using common (objective) probabilities with state-independent consumption preferences. A large literature generalizes their approach by allowing different (subjective) probabilities and/or state-dependent consumption preferences. Despite the appeal of these extensions, however, the von Neumann–Morgenstern expected utility (NMEU) function is much more widely used in economic analysis because of its simplicity. Finally we consider mean–variance analysis as a special case of the expected utility approach. This is used in the four asset pricing models examined later in Chapter 4.
In Section 3.1 we derive an asset pricing equation in the two-period state-preference model of Arrow and Debreu where consumers have conditional perfect foresight based on common beliefs about the state-contingent commodity prices. This is a certainty-equivalent analysis that naturally extends the asset pricing model derived in the two-period certainty economy with production in Section 2.2.4. The Arrow–Debreu pricing model accounts for uncertainty in asset prices without explicitly isolating the probabilities consumers assign to random events. We modify this model in Section 3.2 by adopting NMEU functions to separate probabilities assigned to random events from the utility consumers derive from their expenditure in each event. This allows us to derive the consumption-based pricing model (CBPM) where in equilibrium consumers have the same consumption risk in a frictionless competitive capital market. Thus, we can summarize it using the set of common factors that explain the risk in aggregate consumption. The four popular asset pricing models examined in Chapter 4 differ by the way they isolate these common factors.

3.1 State-preference theory

Debreu made a very important contribution to standard general equilibrium analysis under uncertainty by expanding the definition of commodities to make them event-contingent. It is generally referred to as the state-preference approach to uncertainty.

3.1.1 The (finite) state space

Savage (1954) provides widely accepted definitions for the basic concepts of the theory of choice under uncertainty in the state-preference model, where a state is a complete description of all relevant aspects of the world, a true state is the one that actually eventuates when the uncertainty is resolved, while an event is a set of states. We assume the set of possible states S := {1, …, S} and the number of time periods T are finite.6 At each time t = 0, 1, …
, T there is a partition of the state space S whose elements are the events that can occur at that time.7 Each event is a subset of the states in S and is outside the control of consumers. Consumers face most uncertainty in the first period, where the partition at time 0 has one event containing all the possible states of nature that can eventuate in the last time period T; this is the coarsest partition of S. In contrast, when the uncertainty is resolved at time T there are S events in the partition at time T; this is the finest partition of S. We can summarize the properties of the state space as follows:
i S is exhaustive – it contains all possible states of the world.
ii All s ∈ S are mutually exclusive – the occurrence of one state rules out the occurrence of any other state.
iii Every state s ∈ S is independent of the actions of consumers – both as individuals and as coalitions.
iv All consumers agree on S and classify every state in the same way.
v All consumers agree on the true state of the world in period T.
By conditions (i) and (ii) the state space identifies every possible description of the environment in the second period where each state is unique. Since consumers cannot influence the environment by property (iii), phenomena such as global warming are ruled out.8 Properties (iv) and (v) allow consumers to make binding agreements with each other: (iv) lets them make commitments that are conditional on specified contingencies, while (v) makes them enforceable. An example of an event tree for three time periods is illustrated in Figure 3.1. There is a single event in the first period that contains all eight possible states, so the partition at time 0 is {e_0}, with e_0 := S. In the second period the states are partitioned into three separate events, {e_1, e_2, e_3}, with e_1 := {s_1, s_2, s_3}, e_2 := {s_4, s_5} and e_3 := {s_6, s_7, s_8}.
When one of these events is realized as an actual outcome in the second period (t = 1) some of the uncertainty is resolved, as it contains the true state of the world in the final period (t = 2). In the true state all the uncertainty is resolved, and the partition at time 2 is {e_4, e_5, e_6, e_7, e_8, e_9, e_10, e_11}, with e_4 := {s_1}, e_5 := {s_2}, e_6 := {s_3}, e_7 := {s_4}, e_8 := {s_5}, e_9 := {s_6}, e_10 := {s_7}, and e_11 := {s_8}.

Figure 3.1 An event tree with three time periods.

Event-contingent goods are automatically located in time. Thus, in the presence of uncertainty they are defined by their physical attributes, their geographic location and by contingent events. In a two-period analysis where uncertainty is completely resolved in the second period it makes more sense to define goods as state-contingent, rather than event-contingent, as the single event in the first-period partition contains all the states, while there are as many events as states in the second-period partition. We adopt this terminology in the following analysis, which is undertaken in a two-period setting. Consumer beliefs must clearly play an important role in determining equilibrium outcomes when there is uncertainty. With incomplete information they can have subjective probabilities that deviate from the true underlying objective probabilities. Further, consumers with different information can have different subjective probabilities.9 It is clear from the event tree in Figure 3.1 that a considerable amount of information has to be processed to solve for the event-contingent prices of each good, especially in the first period where all states are possible outcomes in the final period. Each consumer must implicitly solve for all the event-contingent commodity prices along each branch of the tree by computing the demands and supplies of every good in each state. In the following analysis we characterize equilibrium outcomes in the state-preference model when consumers have conditional perfect foresight.
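The event tree of Figure 3.1 can be expressed directly as nested partitions of the state space. The sketch below (with hypothetical helper names) checks the two facts the text relies on: each collection of events is a genuine partition of S, and each later partition refines the earlier one.

```python
# Sketch of the Figure 3.1 event tree: partitions of an eight-state space for
# t = 0, 1, 2, with checks that each is a partition and that partitions refine
# over time as uncertainty is resolved.
S = frozenset(range(1, 9))  # states s1..s8

partitions = {
    0: [S],                                                              # e0
    1: [frozenset({1, 2, 3}), frozenset({4, 5}), frozenset({6, 7, 8})],  # e1-e3
    2: [frozenset({s}) for s in range(1, 9)],                            # e4-e11
}

def is_partition(events, states):
    # Events are non-empty, pairwise disjoint, and exhaust the state space.
    union = set()
    for e in events:
        if not e or union & e:
            return False
        union |= e
    return union == states

def refines(finer, coarser):
    # Every event in the finer partition lies inside one coarser event.
    return all(any(e <= c for c in coarser) for e in finer)

assert all(is_partition(p, set(S)) for p in partitions.values())
assert refines(partitions[1], partitions[0])
assert refines(partitions[2], partitions[1])
```

The coarsest partition at t = 0 contains one event (all of S), and the finest partition at t = 2 contains one singleton event per state, matching the description above.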
Definition 3.1 (Conditional perfect foresight) Consumers with conditional perfect foresight have common beliefs and correctly predict the commodity prices in each state of the world, with p_s^h = p_s for all h.

If, in these circumstances, they can trade event-contingent claims for every commodity in every future time period, the optimality conditions in a competitive equilibrium will have similar properties to those we are more familiar with in a certainty setting without taxes and trading costs. In particular, it will be a Pareto optimal allocation where consumers use the same event-contingent discount factors to value future consumption. In effect, they each make consumption choices for their entire life in the first period, and they do not expect to revise them in subsequent periods as the uncertainty is resolved.10

3.1.2 Debreu economy with contingent claims

The model of uncertainty in Chapter 7 of Debreu (1959) is for an economy where resources are privately owned and traded in competitive markets. There are no financial securities or money, so consumers and producers instead exchange event-contingent claims to goods. These are forward contracts that specify the delivery of a unit of a commodity at a given location contingent on the occurrence of a specified event, with current prices determined by the event-contingent commodity prices. We initially restrict the analysis to two time periods and adapt the endowment economy in Section 2.2.3 by including production and expanding the commodity space to make goods state-contingent in the second period. In the first period each consumer (h = 1, …, H) now chooses a bundle of future consumption goods x_s^h := {x_s^h(1), …, x_s^h(N)} that trade in competitive markets at expected future spot prices p_s := {p_s(1), …, p_s(N)} in each state s. They do this to maximize a generalized utility function u^h(x_0^h, x_1^h, …
, x_S^h), where x_0^h is the current consumption bundle.11 We assume consumers have conditional perfect foresight when they trade forward commodity contracts f_s^h := {f_s^h(1), …, f_s^h(N)} at the state-contingent prices p_fs := {p_fs(1), …, p_fs(N)}, where f_s(n) > 0 is the amount of good n delivered to the consumer, and f_s(n) < 0 the amount sold, in state s. Thus, the consumer problem in the Debreu economy with contingent claims (omitting superscript h) can be summarized as

max u(x_0, x_1, …, x_S) subject to p_0 x_0 − p_0 x̄_0 + Σ_s p_fs f_s − η_0 ≤ 0 and p_s x_s − p_s (x̄_s + f_s) ≤ 0, ∀s,   (3.1)

where η_0 := {η_0^1, …, η_0^J} are the profit shares in each of the J firms in the economy.12 In the previous chapter we saw how the consumer problem could be simplified when all goods are traded in frictionless markets in each time period. If consumers can also trade goods in every state, they will equalize their marginal utility of income in each time period and in each state.13 This allows us to rewrite the problem in (3.1) as

max v(I) subject to X_0 − I_0 ≤ 0 and X_s − I_s ≤ 0, ∀s,   (3.2)

where income I := {I_0, I_1, …, I_S} is defined by I_0 ≡ X̄_0 − F_0 + η_0, with F_0 = Σ_s p_fs f_s, in the first period, and I_s ≡ X̄_s + F_s, with F_s = p_s f_s, in each state in the second period. When forward contracts are traded optimally in frictionless competitive markets, they satisfy

p_fs = ϕ_s^h p_s, ∀s,   (3.3)

with ϕ_s^h = λ_s^h/λ_0^h being the state-contingent discount factor used to value income in state s. It is the ratio of the constraint multipliers on the budget constraints in (3.2), which measure the marginal utility of future income in each state (with ∂v^h/∂I_s^h = λ_s^h) relative to the marginal utility of current income (with ∂v^h/∂I_0^h = λ_0^h). Most asset pricing models in finance are derived in endowment economies where consumption risk is determined by endowment risk.
Later in Chapter 8 we want to use the pricing models in project evaluation where production plays an important role in equilibrium outcomes. For that reason we include production, but simplify the analysis by ruling out private investment opportunities. Thus, all investment in the economy is undertaken by (j = 1, …, J) firms who sell forward contracts f_s^j := {f_s^j(1), …, f_s^j(N)} in the first period to fund expenditure on their inputs z_0^j := {z_0^j(1), …, z_0^j(N)}, which are used to produce the state-contingent outputs y_s^j := {y_s^j(1), …, y_s^j(N)}. When they trade forward contracts, with f_s^j(n) > 0 for sales and f_s^j(n) < 0 for purchases of each good n in each state s, the problem for each firm in the two-period Debreu economy can be summarized (omitting superscript j) as

max η_0 = F_0 − Z_0 subject to F_s − Y_s(Z_0) ≤ 0, ∀s,   (3.4)

where F_0 = Σ_s p_fs f_s is revenue from selling forward contracts in the first period, Z_0 = p_0 z_0 expenditure on production inputs, F_s = p_s f_s the market value of goods delivered on forward contracts in each state, and Y_s = p_s y_s state-contingent sales revenue. While firms can produce multiple outputs using multiple inputs, we assume production sets are strictly convex. When forward contracts are traded optimally in frictionless competitive markets, they satisfy

p_fs = ϕ_s^j p_s, ∀s,   (3.5)

where ϕ_s^j = λ_s^j are the state-contingent discount factors used to value future income; they are the multipliers on the state-contingent payout constraints in (3.4).15 Both pricing models in (3.3) and (3.5) value forward contracts for each good by discounting their future spot prices, where all consumers and firms use the same discount factors in frictionless competitive markets, with ϕ_s^h = ϕ_s^j = ϕ_s for all h, j. Thus, the equation for pricing forward contracts in the Debreu economy is

p_fs = ϕ_s p_s, ∀s.
Definition 3.2 The Debreu economy with contingent claims is described by (u, x̄, y(J)), where u and x̄ are, respectively, the vectors of utility functions and endowments for the H consumers, and y(J) the vector of production technologies used by the J firms. In this economy, where consumers have conditional perfect foresight, a competitive equilibrium can be characterized by the vectors of relative commodity prices p_0* and p_s* for all s, and the vectors of forward contract prices p_fs* for all s, such that:
i x_0^h*, f_s^h* and x_s^h*, for all s, solve the consumer problem in (3.1) for all h;
ii z_0^j*, f_s^j* and y_s^j*, for all s, solve the producer problem in (3.4) for all j;
iii the goods markets clear at each t ∈ {0, 1}, with Σ_h x̄_0^h(n) = Σ_h x_0^h*(n) + Σ_j z_0^j*(n) for all n and Σ_h x̄_s^h(n) + Σ_j y_s^j*(n) = Σ_h x_s^h*(n) for all n, s, and the forward market clears, with Σ_h f_s^h*(n) = Σ_j f_s^j*(n) for all n, s.

Since consumers with conditional perfect foresight agree on the future spot prices for the commodities they also use the same discount factors. But this unanimity breaks down when they have different information and form different expectations about the future spot prices.

3.1.3 Arrow–Debreu asset economy

Arrow (1953) extended the analysis of Debreu by introducing financial securities, but without formalizing their role by including trading costs. Since they are used to reduce the number of choice variables for consumers in the first period they are implicitly included to lower trading costs. This was noted earlier in the certainty analysis in Section 2.2.3, where instead of choosing the composition of their consumption bundles in the second period consumers chose the value of their consumption expenditure by trading a risk-free security.
While there are more choice variables in the state-preference model, where consumers determine expenditure in each state, the financial securities reduce the number of choice variables in the first period from at most N(1 + S) in the Debreu economy to at most N + S in the Arrow–Debreu economy. The state-preference approach clarifies the role played by financial securities in spreading risk, where security demands are determined by preferences for patterns of consumption over the states of nature. As noted earlier, consumer preferences are determined by their beliefs about the likelihood of states, and the utility from consumption in them. These two components are separated later by using expected utility functions. Before doing so we derive an asset pricing equation in the Arrow–Debreu economy where consumers have the generalized state preferences in (3.1). This provides us with useful insights into the popular asset pricing models examined later in Chapter 4. In particular, it highlights the important role played by the restrictions they impose on consumer preferences and the distributions of the security returns. In the asset economy (k = 1, …, K) securities trade in a frictionless competitive capital market at prices p_a := {p_a1, …, p_aK}. Consumers hold them in portfolios a^h := {a_1^h, …, a_K^h} with a current market value of V_0^h = p_a a^h, where a_k^h > 0 for units they purchase and a_k^h < 0 for units they sell.16 These portfolios have state-contingent payouts, with R_s^h = Σ_k a_k^h R_ks, that determine the pattern of their future consumption expenditure, which is illustrated in Figure 3.2 when consumers have no endowments in the second period. Thus, all their future consumption expenditure is funded from the security payouts.
When consumers have endowments in both periods we can write the budget constraints for the consumer problem in (3.2) as

I_0 ≡ X̄_0 − V_0 + η_0 and I_s ≡ X̄_s + R_s, ∀s,   (3.7)

where η_0 = V_0 − Z_0 is profit in private firms which is paid to consumers as shareholders.17 In the absence of trading constraints, optimally chosen security trades satisfy

ϕ^h R = p_a,   (3.8)

with ϕ^h being the (1 × S) row vector of state-contingent discount factors, R the (S × K) payout matrix and p_a the (1 × K) row vector of security prices.18 The structure of the payout matrix R determines how much flexibility consumers have to choose their patterns of state-contingent consumption. In a complete capital market they can trade in every state of nature, which leads to the following definition.

Figure 3.2 Commodity and financial flows in the Arrow–Debreu economy.

Definition 3.3 The capital market is complete for consumers when there are:
i as many linearly independent securities (K) as states of nature (S), with rank[R] = S; and
ii no constraints on consumer security trades.

Consider the following payout matrix for a full set of conventional securities, which have payouts in more than one state, in a three-state world:

        R_1  R_2  R_3
s_1   [  1    0    1 ]
s_2   [  1    1    1 ]
s_3   [  0    1    1 ]

Securities 1 and 2 are risky, while security 3 is risk-free.19 In a complete capital market consumers can create a full set of primitive (Arrow) securities with payouts in a single state of nature, where the payout matrix becomes

        R_1^p  R_2^p  R_3^p
s_1   [  1      0      0  ]
s_2   [  0      1      0  ]
s_3   [  0      0      1  ]

They are created by bundling conventional securities into portfolios, where R_1^p is obtained by purchasing a unit of conventional asset 3 and selling a unit of conventional asset 2, R_2^p by purchasing a unit each of conventional assets 1 and 2 and selling a unit of conventional asset 3, and R_3^p by purchasing a unit of conventional asset 3 and selling a unit of conventional asset 1.20 Clearly, the two conditions in Definition 3.3 must hold for the capital market to be complete for consumers. They cannot trade in every state, even with a full set of linearly independent securities, when there are constraints on their security trades – in particular, short selling constraints that restrict borrowing. In a complete capital market (where R is non-singular) price-taking consumers equate their discount factors in (3.8), with

ϕ = ϕ^h = p_a R^(−1), ∀h,

where R^(−1) is the inverse of the payoff matrix. In an incomplete capital market they can have different state-contingent discount factors. Most formal analysis with incomplete markets provides no explicit reason for the absence of a full set of linearly independent securities. It is normally assumed, often implicitly, that there are trading costs or consumers face borrowing constraints. It is important to model the incompleteness endogenously because it affects equilibrium outcomes, particularly with respect to their welfare effects. And we cannot automatically conclude there is market failure when transactions costs make the capital market incomplete. If they are minimum necessary costs of trade the equilibrium outcome is (Pareto) efficient when traders are price-takers. Any traders with a transactions cost advantage can supply securities with new patterns of returns across the states, and this gives them market power that can violate the competition assumption. All investment in the Arrow–Debreu economy is undertaken by (j = 1, …
, J) private firms who trade portfolios of financial securities a^j := (a_1^j, …, a_K^j) in the first period with a market value of V_0^j = p_a a^j to fund their input purchases (Z_0^j), with a_k^j > 0 for units sold and a_k^j < 0 for units purchased. In the second period they make state-contingent payouts to securities (R_ks) from their net cash flows (Y_s^j), where the problem for each firm is

max η_0^j = V_0^j − Z_0^j subject to V_s^j − Y_s^j(Z_0^j) ≤ 0, ∀s,

using V_s^j = Σ_k a_k^j R_ks to denote the value of the security payouts in state s. In the absence of trading constraints their optimally chosen security trades satisfy

ϕ^j R = p_a,   (3.10)

with ϕ^j being the (1 × S) row vector of state-contingent discount rates. In the following analysis firms (or their agents, financial intermediaries) trade securities to exploit any expected profits. This activity is especially important for the no arbitrage condition in models with taxes on security returns where consumers face borrowing constraints to restrict tax arbitrage.21

Definition 3.4 The capital market is complete for firms when there are:
i as many linearly independent securities (K) as states of nature (S), with rank[R] = S; and
ii no constraints on firm security trades.

In a complete capital market price-taking firms equate their state-contingent discount rates in (3.10):

ϕ = ϕ^j = p_a R^(−1), ∀j.

When consumers and firms trade in frictionless competitive markets, we have the following definition:22

Definition 3.5 The Arrow–Debreu asset economy is described by (u, x̄, y(J), R), where R is the S × K payout matrix for a complete capital market (with rank(R) = S).
In this economy, where consumers have conditional perfect foresight, a competitive equilibrium can be characterized by vectors of security prices p_a*, commodity prices in the first period p_0* and state-contingent commodity prices p_s* such that:
i a^h*, x_0^h* and x_s^h*, for all s, solve the consumer problem in (3.2) with income defined in (3.7) for all h;
ii a^j*, z_0^j* and y_s^j*, for all s, solve the firm problem in (3.10) for all j;
iii the capital market clears, with Σ_h a_k^h* = Σ_j a_k^j* for all k, and the goods markets clear, with Σ_h x̄_0^h = Σ_h x_0^h* + Σ_j z_0^j* and Σ_h x̄_s^h + Σ_j y_s^j* = Σ_h x_s^h* for all s.

With more than two time periods the state-contingent variables are made event-contingent, where each event is a subset of the state space that identifies variables in a specified time period. This is the same as the real equilibrium outcome in the Debreu economy in Definition 2.3 due to the absence of trading costs. They are different when trading costs are included and financial securities and forward contracts have different impacts on them. Indeed, there are circumstances where forward contracts and financial securities will both trade. To simplify the analysis we follow standard practice and rule out that possibility by excluding trading costs and forward contracts. When consumers and firms face the same payoff matrix R and vector of security prices they use the same discount factors in their pricing models in (3.8) and (3.10), respectively, with

ϕ = ϕ^h = ϕ^j = p_a R^(−1), ∀h, j.   (3.11)

This leads to the Arrow–Debreu pricing model (ADPM),

ϕ R = p_a, ∀h, j,

where the vector of discount factors (ϕ) are the prices of the primitive (Arrow) securities. This is confirmed by using the payout matrix for a full set of primitive securities, where R^p is the identity matrix. For three states it is

        R_1^p  R_2^p  R_3^p
s_1   [  1      0      0  ]
s_2   [  0      1      0  ]
s_3   [  0      0      1  ]

Thus, by using (3.11), we have ϕ = p_a^p.
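These linear-algebra steps are easy to carry out numerically. The sketch below uses the three-state 0/1 payout matrix from the text to check completeness and recover the portfolios that replicate each primitive security, and then recovers state prices from the dollar payouts and prices implied by the equations in Box 3.1 below. The code layout and variable names are illustrative only.

```python
# Sketch: completeness check, primitive-security replication, and state-price
# recovery in a three-state Arrow-Debreu market (rows = states, cols = assets).
import numpy as np

# The text's conventional payout matrix: securities R1, R2 risky, R3 risk-free.
R = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])

# Complete market: as many linearly independent securities as states.
assert np.linalg.matrix_rank(R) == R.shape[0]

# Columns of R^{-1} are the portfolio weights replicating each primitive
# (Arrow) security, since R @ W = I. The first column is one unit of asset 3
# less one unit of asset 2, matching the verbal construction in the text.
W = np.linalg.inv(R)
print(W[:, 0])

# State prices from traded securities, using the payouts and prices implied
# by the pricing equations in Box 3.1 below: p_a = phi R  <=>  R.T phi = p_a.
R_box = np.array([[0.0, 60.0, 20.0],    # state 1 payouts
                  [15.0, 0.0, 20.0],    # state 2
                  [30.0, 20.0, 20.0]])  # state 3
p_a = np.array([12.0, 21.0, 19.0])
phi = np.linalg.solve(R_box.T, p_a)
assert np.all(phi > 0)   # strictly positive state prices: no arbitrage

p_B = phi.sum()          # price of a bond paying $1 in every state
i = 1.0 / p_B - 1.0      # implied risk-free rate, about 5.3 per cent
print(phi, p_B, i)
```

Pricing any new security is then a matter of multiplying its state payouts by the state-price vector ϕ, which is the content of the ADPM.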
Box 3.1 Obtaining primitive (Arrow) prices from traded security prices

Three securities have the following market prices for the payouts in three possible states of nature:

               Current price ($)   Payouts ($)
                                   State 1   State 2   State 3
ADL share            12               0        15        30
Intec share          21              60         0        20
Govt. bond           19              20        20        20

Since the payouts to these securities are linearly independent (which means none of the securities can be replicated by combining the other two in a portfolio) we can solve the primitive (Arrow) prices using the following system of equations:

12 = ϕ_2 15 + ϕ_3 30
21 = ϕ_1 60 + ϕ_3 20
19 = ϕ_1 20 + ϕ_2 20 + ϕ_3 20,

where ϕ = {0.3, 0.5, 0.15}. The risk-free interest rate can be obtained by pricing a risk-free bond that pays one dollar in every state, with p_B = Σ_s ϕ_s = 0.95 = 1/(1 + i), where i ≈ 0.053.

The ADPM is not used in practice because the number of states is potentially large, and that makes them difficult to identify, particularly in a multi-period setting. The popular pricing models in finance proceed by adopting expected utility functions to decompose the state probabilities and discount factors embedded in the Arrow prices. We do this in the following sections. Arbitrage plays an important role in this model because it equates the prices of assets with the same state-contingent payouts. Thus, there are no arbitrage profits in a frictionless competitive capital market equilibrium, which leads to the following theorem.

Theorem 3.1 The no arbitrage condition holds in a competitive capital market described by (R, p_a) if and only if ϕR = p_a for ϕ >> 0.

Proof. Any arbitrage portfolio a_A (containing non-trivial elements and requiring no initial net wealth) with Ra_A = 0 must have a market value of p_a a_A = (ϕR)a_A = ϕ(Ra_A) = 0.

It is important to understand what competition means in a state-preference model. Security trades by price-taking consumers and firms cannot change the risk-spreading opportunities available to the capital market.
In other words, they cannot supply new securities, as perfect substitutes can be created by bundling together existing traded securities. Formally, for any new security m, there exists a derivative security (d) such that

R_m = R_d = Σ_(k≠m) a_k R_k,

where R_m is the column vector of state-contingent payoffs in R for security m, and R_d the vector of state-contingent payoffs from combining other traded assets k ≠ m in R. It is straightforward to see how this condition holds in a frictionless complete capital market when consumers have common beliefs about the state-contingent commodity prices and state-contingent security returns. It is a much stronger requirement, however, when the capital market is incomplete. If trade is not possible in some states due to trading costs, no one will create new trading opportunities when traders face the same costs. Traders capable of creating new risk-spreading opportunities must have a cost advantage. For example, some firms may have production technologies that allow them to trade at lower cost in some states through their investment choices.23 Before we can properly solve an equilibrium with an incomplete capital market it is important to specify the reasons why the incompleteness occurs in the first place. And this is particularly important for making any assessment about the welfare properties of the equilibrium outcome. As a way to demonstrate the role of the no arbitrage condition, consider the arbitrage portfolio which combines security m with its perfect substitute (d), where the problem for the arbitrageur (A) is

max η_0^A = ϕ(a_m^A R_m + a_d^A R_d) subject to p_am a_m^A + p_ad a_d^A = 0.

Using the first-order conditions for this problem, we have R_m/R_d = p_am/p_ad. The role of arbitrage is illustrated in Figure 3.3, where the budget constraint passes through the origin because the portfolio is self-funding. The dashed iso-profit lines isolate combinations of security holdings with the same profit.
There are profits from going long in security m and short in security d when the relative payouts on security m exceed its relative price, with p_am/p_ad > R_m/R_d. The portfolio C generates profit η_AC, but it is not an equilibrium unless constraints stop further trades. In fact, the demand for security m is unbounded whenever p_am/p_ad > R_m/R_d, while the reverse applies when p_am/p_ad < R_m/R_d. This arbitrage activity eliminates any profit by mapping the iso-profit lines onto the budget constraint, where in equilibrium p_am/p_ad = R_m/R_d.

[Figure 3.3 The no arbitrage condition: the self-funding budget constraint p_am a_m + p_ad a_d = 0 passes through the origin, with iso-profit lines of slope −R_m/R_d and a budget constraint of slope −p_am/p_ad.]

3.2 Consumer preferences

The flow chart in Figure 3.4 provides a schematic summary of the relationship between the different preference mappings over uncertain consumption expenditure used in finance applications. In the following analysis we adopt the convention of placing a tilde (~) over random variables, where Ĩ takes values from the set of state-contingent incomes, with Ĩ = I_s for s ∈ S. When objective probabilities (π_s) are assigned to states of nature the expectations operator is denoted E( ), while it is denoted E^h( ) for subjective probabilities (π_s^h), with Σ_s π_s = 1 and Σ_s π_s^h = 1 for all h, respectively. Similarly, state-independent preferences are denoted by utility function U(Ĩ), and state-dependent preferences by the function U(Ĩ, s).24

In its most general form the Arrow–Debreu model employs generalized state preferences. These are preference mappings over the expanded commodity space where goods are characterised by type, location, time and event. But they do not separate probability distributions over states from the utility consumers derive in those states.
In many applications it is useful to isolate the probability assessments made by consumers, where the most familiar approach uses the von Neumann–Morgenstern expected utility function (von Neumann and Morgenstern 1944). It weights the utility consumers derive in each state by its probability and then sums them over states, where the objectively determined probabilities are common to consumers and the utility functional is state-independent. All the popular pricing models examined in Chapter 4 adopt these preferences. Savage (1954) extends the von Neumann–Morgenstern analysis by allowing consumers with different information to have different subjective probabilities. More recent work has moved away from the expected utility approach – for example, Machina (1982) relaxes the independence axiom – while others extend the expected utility approach – Mas-Colell et al. (1995) adopt state-dependent preferences with objective probabilities, while Karni (1985) adopts state-dependent preferences with subjective probabilities. Most of these extensions are subject to criticisms about the unrealistic nature of one or more of the axioms upon which they are based.25 This is a reflection of how difficult it is to isolate subjectively assigned probabilities, particularly when the utility functions are state-dependent.

[Figure 3.4 Consumer preferences with uncertainty and risk: generalized state preferences v(I_1, ..., I_S) are refined, via the independence axiom, into expected utility, which may be state-independent (NMEU, E{U(Ĩ)}; SEU, E^h{U(Ĩ)}) or state-dependent (SDEU, E{U(Ĩ, s)}; SDSEU, E^h{U(Ĩ, s)}); with normally distributed income the state-independent forms reduce to mean–variance preferences V(E(Ĩ), σ_I) and subjective mean–variance preferences V(E^h(Ĩ), σ_I^h).]

This separation is very useful because it identifies the important role of information when people form their probability beliefs, and different subjective probabilities result from consumers having different information.
A lot of economic activity in financial markets involves gathering and trading information, where traders with different costs of obtaining information are likely to have different probability beliefs.

As noted earlier, the generalized state-preference function v(Ĩ) does not separate probabilities from utility over consumption expenditure in each state. It is based on the same minimal restrictions imposed on preferences in a certainty setting, where rankings over consumption bundles should be complete, transitive and continuous. Any risk assessment is embedded inside the generalized function itself. By adopting the independence axiom, which leaves preferences over any two random events unaffected by combining each of them with a common third event, we can measure consumer preference rankings using expected utility functions that separate the probabilities from utility over consumption expenditure. Also, when consumers assign objective probabilities to states and use state-independent utility functions to measure consumption benefits they have NMEU functions, denoted E{U(Ĩ)}. Thus, any preferences they have for patterns of consumption expenditure over uncertain states are determined by the statistical properties of the probability density function.

While this utility function seems entirely appropriate for evaluating payoffs to lotteries which have objective probabilities that can be readily computed by consumers, like those associated with roulette-wheel type lotteries, it may not be suitable for evaluating consumption bundles that are contingent on random states of nature, like those associated with horse-race type lotteries.26 Since states of nature are determined by combinations of a potentially large number of exogenously determined environmental variables, it seems more appropriate that consumers will assign subjective probabilities to them.
In contrast, they can assign objective probabilities to random outcomes generated by roulette-wheel type lotteries because less information is needed and it is more accessible. It seems reasonably clear from the way the NMEU function is derived that it focuses on individual risk from roulette-wheel type lotteries without recognizing (at least explicitly) the aggregate uncertainty from states of nature. For most individual risk there is good information about the probabilities of payouts, and when it is the only source of income uncertainty NMEU seems appropriate. But income is also subject to aggregate uncertainty about the states of nature, where consumers are much more likely to assign different probabilities to states, particularly when they have different information because it is costly to obtain.27

Savage (1954) and Anscombe and Aumann (1963) recognize this by deriving the subjective expected utility (SEU) function E^h{U(Ĩ)}. This assigns subjective probabilities to states of nature and uses a state-independent utility function to assess the benefits from consumption expenditure. Additional behavioural postulates are required for this function, which include the sure-thing principle that extends the independence axiom to state-contingent outcomes, and conditions to describe the way consumers form their probability beliefs.28 This is a particularly useful extension because it isolates the role of costly information when consumers assign probabilities to states.

Subsequent work has argued that the SEU function should in fact be state-dependent because the same consumption bundle in one state may not generate the same benefits in another state. For example, consumers are unlikely to get the same benefits from a bundle of food in a state where they are healthy as they would get from the same bundle in a state where they are sick. Mas-Colell et al. (1995) respond to this problem by deriving a state-dependent expected utility function.
They extend the NMEU function with objective probabilities by allowing state-dependent preferences, while Drèze (1987), Fishburn (1974), Grant and Karni (2004), Karni (1993) and Karni et al. (1983) derive state-dependent subjective expected utility functions. Unfortunately they all have drawbacks that result from their different behavioural postulates. For example, Drèze gives consumers the ability to determine states, while Fishburn has consumers making comparisons between mutually exclusive outcomes. It is a very difficult task to separate subjective probabilities from state-dependent preferences, a point that is perhaps best appreciated by noting that the expected utility function in these circumstances is Σ_s π_s^h U(I_s, s). If consumers have the same probabilities we can identify the role of their state-dependent preferences by evaluating expected utility with constant consumption (which is where the consumption bundle is the same in every state). Alternatively, when they have state-independent preferences we can identify their subjective state probabilities by doing the same thing. But when they have subjective probabilities and state-dependent preferences it is not possible to separate them without placing additional restrictions on preferences.

A further extension to NMEU defines consumer preferences over the mean and variance in their consumption expenditure using the indirect utility function V( ). Many applications in economics adopt this approach to simplify the analysis. For example, all the asset pricing models examined in Chapter 4 use mean–variance analysis. Later in Section 3.3.3 we show there are two ways of justifying this approach – one assigns quadratic preferences to consumers, while the other requires consumption expenditure to be a normally distributed random variable.
If consumers also assign objective probabilities to states of nature they have the mean–variance function V(E(Ĩ), σ_I), where E(Ĩ) is expected consumption expenditure and σ_I its standard deviation, while they have the subjective mean–variance function V(E^h(Ĩ), σ_I^h) when they assign subjective probabilities to states.

Box 3.2 Anecdotal evidence of state-dependent preferences

Aumann provides Savage with the example of a man whose sick wife has a 50 per cent chance of surviving an operation she must undergo. He is offered a choice between betting $100 that she will survive the operation and betting the same amount on heads in the toss of a fair coin. It is argued that he will likely choose the bet on the operation if he loves his wife, because in the event that she dies he could win $100 on the coin toss when it is worth very much less to him. Aumann argues this is an example of a situation where the value is state-dependent. Savage responds by arguing it can be accommodated in a model with SEU when preferences are state-independent by making the full set of consequences from the lotteries available in every state. But that would require making comparisons between incompatible outcomes where, in the example provided by Aumann, one outcome has the man winning $100 and his wife dying in a state where she survives the operation. These exchanges between Aumann and Savage are published in Drèze (1987).

3.2.1 Von Neumann–Morgenstern expected utility

Most empirical applications in finance adopt NMEU functions because they simplify uncertainty analysis considerably. Consumers with state-independent preferences choose patterns of consumption across states based on their risk aversion, relative commodity prices and the state probabilities. And by trading in a frictionless competitive capital market with common probabilities they have the same growth rates in marginal utility over time.
That means consumers will face the same consumption risk, which is why the popular consumption-based pricing models we examine in Chapter 4 are functions of factors that determine aggregate consumption risk. But NMEU has obvious limitations – in particular, state-independent preferences may not be appropriate in such applications as the economics of health care insurance.29 Once we relax state-independence and/or common objective probabilities consumers can face different consumption risk where the asset pricing equations are functions of a much larger number of factors.

The behavioural postulates for the NMEU preferences are summarized as follows:

i   The standard preference relation (≿) applies to rankings of consumption bundles (in the (1 + S)-dimensional commodity space), where ≿ is complete, transitive and continuous.
ii  The independence axiom holds – so that common alternatives within each state are irrelevant when ranking money payoffs to lotteries. For example, the preference ranking over two lotteries L and L′ will not be changed by combining them both with a third lottery L″. Thus, if L ≿ L′ then [(1 − π″)L + π″L″] ≿ [(1 − π″)L′ + π″L″] when the independence axiom holds.
iii The preference relation ≿ is state-independent.
iv  Consumers assign objective probabilities to lotteries and states.

There is evidence from behavioural experiments that the independence axiom is violated in practice. The most widely cited example is referred to as the Allais paradox (Allais 1953), which finds people ranking the lotteries summarized in Table 3.1 in a manner that is inconsistent with the independence axiom. Most people choose A over B and C over D, but when the independence axiom holds D is preferred to C (whenever A is preferred to B).
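That the modal Allais choices (A over B and C over D) contradict expected utility can be checked mechanically. A over B reduces to 0.11·u($1m) > 0.10·u($5m) + 0.01·u($0), while C over D reduces to the exact opposite inequality, so no utility assignment rationalizes both. A brute-force sketch over a grid of candidate utility values (the lottery probabilities are from Table 3.1; the grid itself is an illustrative device, not from the text):

```python
# Search for utility values u0 < u1 < u5 (for payoffs $0, $1m, $5m)
# under which expected utility ranks A over B AND C over D.
import itertools

def eu(probs, utils):
    """Expected utility of a lottery over payoffs ($5m, $1m, $0)."""
    return sum(p * u for p, u in zip(probs, utils))

grid = [n / 20.0 for n in range(21)]
violations = []
for u0, u1, u5 in itertools.product(grid, repeat=3):
    if not (u0 < u1 < u5):
        continue
    utils = (u5, u1, u0)
    a_over_b = eu((0.0, 1.0, 0.0), utils) > eu((0.1, 0.89, 0.01), utils)
    c_over_d = eu((0.1, 0.0, 0.9), utils) > eu((0.0, 0.11, 0.89), utils)
    if a_over_b and c_over_d:
        violations.append((u0, u1, u5))

# violations stays empty: no expected utility function produces
# the modal Allais pattern.
```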
Table 3.1 Lottery choices: the Allais paradox

Lottery   Probability of $5m   Probability of $1m   Probability of $0
A         0                    1.0                  0
B         0.1                  0.89                 0.01
C         0.1                  0                    0.9
D         0                    0.11                 0.89

Empirical tests of the consumption-based pricing models suggest the state-independence assumption may also be violated in practice. Experimental studies find evidence of consumers placing more weight on bad outcomes than they do on good outcomes. Benartzi and Thaler (1995) and Barberis et al. (2001) model this as loss aversion for consumers with state-independent preferences, but it may in fact be evidence that they have state-dependent preferences. Indeed, even anecdotal evidence suggests that a significant proportion of consumption benefits do depend on states of nature, particularly with respect to personal health, but also the prevailing weather conditions.

There is also evidence that consumers assign different subjective probabilities to states of nature due to differences in their information sets. Traders in financial markets gather information about the fundamental determinants of the future payouts to securities. By specialising in particular types of securities they get information at lower cost and make profits – at least until the information is reflected in security prices. When the efficient markets hypothesis holds in its strongest form, security prices reflect all past and current information as well as security prices in past periods. But there is evidence that profits can be made from systematic trading rules.
For example, there is a weekend effect where security prices fall over weekends when markets are closed, a January effect in the US where security prices are systematically higher at the beginning of the month, a small-firm effect where firms with relatively low market values paid higher rates of return on average than the entire stock market index over the period between 1960 and 1985, and a closed-end fund effect where mutual funds that bundle together a fixed number of shares trade at lower valuations than the sum of the market valuations of their shares.

Despite these concerns about the NMEU approach to measuring consumer preferences in the presence of uncertainty and risk, it is widely used in economic analysis. With multiple time periods most analysts make the expected utility function time-separable with a constant subjective discount factor (δ), where, for an infinitely lived agent, we have:

EU_t = E_t Σ_{j=0}^∞ δ^j U(Ĩ_{t+j}),30

with 0 < δ ≤ 1 being a measure of impatience for future consumption expenditure which is determined by the rate of time preference (ρ), with δ = 1/(1 + ρ). As noted earlier, consumers who can trade in every state of nature will, in the absence of trading costs or other market frictions, have the same expected growth rate in their marginal utilities and, as a consequence, face the same consumption risk. For that reason they measure risk in securities by their contribution to aggregate consumption risk. This is demonstrated in Section 3.3 below by using NMEU to derive the consumption-based pricing model (CBPM) from the ADPM in (3.11).

3.2.2 Measuring risk aversion

Risk aversion plays a key role in determining the equilibrium risk premium in security returns, and there is evidence that it changes with wealth; consumers with relatively more wealth are likely to be marginally less risk-averse.
In project evaluation analysts include a risk premium in discount factors on risky net cash flows, and the task is less complex when consumers measure and price risk in the same way. In effect, risk aversion measures the degree of concavity in the utility function over uncertain consumption expenditure. An example is shown in Figure 3.5 where income can vary between I_1 and I_2 with probabilities π_1 and π_2, respectively. To simplify the analysis we assume there is one future time period. With state-independent preferences we can map utility from consumption in each state using the function U(Ĩ), where the loss in utility from facing the variance in income is U(Ī) − EU(Ĩ) for Ī = E(Ĩ). In monetary terms the consumer is prepared to forgo income RP(Ī) = Ī − Î to receive Î with certainty. Clearly, this difference rises with the degree of concavity of the utility function.

Arrow (1971) and Pratt (1964) define a number of widely used measures of risk aversion. The first of them is:

Definition 3.6 The coefficient of absolute risk aversion (ARA) measures the curvature of the utility function as

ARA := −U″(I)/U′(I).

For a strictly concave utility function, such as the one illustrated in Figure 3.5, a negative second derivative makes ARA positive. One way of isolating this measure of risk aversion is to take a second-order Taylor series expansion around Ī for the relationship that defines the risk premium (RP) in Figure 3.5, where EU(Ĩ) = U[Ī − RP(Ī)], as

RP(Ī) = −(1/2)(U″(Ī)/U′(Ī))σ_I² = (1/2)σ_I² ARA.31

[Figure 3.5 Consumption with expected utility and objective probabilities: the concave utility function places U(Ī) above EU(Ĩ), with risk premium RP(Ī) = Ī − Î.]

The utility function exhibits constant absolute risk aversion when ARA is independent of the level of income, it exhibits decreasing absolute risk aversion (DARA) when ARA declines with income, and increasing absolute risk aversion (IARA) when ARA rises with income.
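The approximation RP(Ī) ≈ (1/2)σ_I² ARA can be checked against an exact risk premium. A sketch for log utility, U(I) = ln(I), where ARA(I) = 1/I; the two-state income distribution is an illustrative choice, not taken from the text:

```python
# Compare the exact risk premium with the second-order approximation
# (1/2) * sigma^2 * ARA for log utility. Income is 90 or 110 with
# equal probability (illustrative numbers).
import math

pi = (0.5, 0.5)
incomes = (90.0, 110.0)

mean = sum(p * I for p, I in zip(pi, incomes))               # 100
var = sum(p * (I - mean) ** 2 for p, I in zip(pi, incomes))  # 100

# Exact: solve U(I-hat) = EU(I~), so I-hat = exp(E[ln I]) for log utility.
eu = sum(p * math.log(I) for p, I in zip(pi, incomes))
certainty_equiv = math.exp(eu)
rp_exact = mean - certainty_equiv          # ~ 0.5013

# Approximation: (1/2) * sigma^2 * ARA(mean), with ARA = 1 / mean.
rp_approx = 0.5 * var * (1.0 / mean)       # exactly 0.5
```

The two values agree to within about a quarter of one per cent here, as expected for a small gamble relative to mean income.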
DARA has some intuitive appeal because consumers with relatively high incomes (and wealth) are less averse to risk at the margin. When ARA declines as income rises the utility function becomes less concave because U″ falls more than does U′. Perhaps the most widely used measure of risk aversion is the following:

Definition 3.7 The coefficient of relative risk aversion (RRA) is a normalization of ARA that accounts for the initial value of consumption:

RRA := −(U″(I)/U′(I)) I.   (3.15)

It is obtained as the coefficient on the variance in the proportional, rather than absolute, change in consumption expenditure.32

Anecdotal evidence suggests consumers with higher income (and wealth) have a lower ARA. But this may well be consistent with all consumers having the same RRA. Indeed, there are empirical studies that find support for preferences with a constant coefficient of relative risk aversion (CRRA). Alternatively they could have decreasing relative risk aversion (DRRA) or increasing relative risk aversion (IRRA), as none are ruled out by the conventional restrictions imposed on preferences in consumer theory. It is ultimately an empirical question what values ARA and RRA take for consumers, and whether they rise or fall with wealth. There is more empirical support for DARA than there is for DRRA. Barsky et al. (1997) used survey data to find that RRA rises and then falls with wealth, while Guiso and Paiella (2001) used survey data to find evidence of DARA and IRRA. Experimental studies by Gordon et al. (1972), Binswanger (1981) and Quizon et al. (1984) found support for IRRA because the fraction of wealth consumers invest in risky securities declines with their wealth, contrary to US household data where the fraction of wealth consumers invest in risky securities increases as their wealth increases.
Peress (2004) includes costly information to explain these conflicting observations, where increasing returns to information acquisition are large enough to overturn the tendency for consumer portfolio shares to decrease with wealth. Estimates of the value of the coefficient of RRA range from near 0 to 2. For example, Friend and Blume (1975) obtain an estimate from US household data around 2, and Fullenkamp et al. (2003) use data from a television game show with large amounts of money at stake and find a value in the range from 0.6 to 1.5.

3.2.3 Mean–variance preferences

Most consumption-based pricing models are derived using mean–variance analysis where preferences for future consumption can be completely described by the first two moments of the probability distribution over state-contingent outcomes. This occurs when consumption expenditure is normally distributed, or when consumers have mean–variance preferences.33 Either way, the utility functions must be state-independent for consumers to care only about the statistical distribution of their future consumption. Any preference they have for consuming in one state over another is determined solely by their probabilities, where the one with higher probability is preferred. A mean–variance analysis adopts one of the two state-independent utility functions in Figure 3.4, where expected income with objective probabilities is defined as E(Ĩ) = Σ_s π_s I_s, and with subjective probabilities as E^h(Ĩ) = Σ_s π_s^h I_s, while the variance in income is defined as σ_I² = E[Ĩ − E(Ĩ)]² and (σ_I^h)² = E^h[Ĩ − E^h(Ĩ)]², respectively.

3.2.4 Martingale prices

Traders in financial markets form expectations about the economic returns to securities that can be paid as a combination of capital gains and cash distributions. By exploiting any profits they invoke the no arbitrage condition on security returns.
This activity highlights the important role of information, where traders compute the statistical properties of the net revenues generated by underlying real assets from which security returns are paid. The income paid to shareholders ultimately comes from production activities by the firms who issue shares, and investors gather information about these activities as well as any other conditions that will affect the economic income they generate. Traders who can acquire and process information more efficiently can make profits by finding assets with the same risk paying different expected returns. This arbitrage activity, if unrestricted, will feed private information into current security prices until assets with the same risk pay the same expected returns in a competitive equilibrium. Unless traders have more information than does the market (which is the public information reflected in current security prices) they cannot expect to make profits from trading securities. In particular, no trading rule can outperform a buy and hold strategy when the no arbitrage condition holds. Loosely speaking, this is referred to as the efficient markets hypothesis. Fama (1970) argued the capital market is efficient when the information of traders is included in security prices, and identified three different versions of efficiency based on three different information sets: a weak form of efficiency when information is based on past prices; a semi-strong form of efficiency when the information is past prices plus publicly available information; and a strong form of efficiency when the information also includes insider information. There is considerable interest in finding an economic model of asset pricing that is consistent with the efficient markets hypothesis.34 Samuelson (1965) did this using a martingale model, where random variable xt is a martingale with respect to an information set ωt if it has E(xt+1|ωt) = xt, with E( ) being the expectations operator. 
By a process of iteration the expected future values of the variable are the same as its current value. Samuelson argued security prices will be a discounted martingale if consumers are risk-neutral. Basically, the prices of securities paying only capital gains must rise at the risk-free interest rate when the no arbitrage condition holds for risk-neutral consumers. And that makes discounted future prices equal to current prices.35

The martingale model can be demonstrated by writing the state-contingent discount factors in (3.11) as φ_s = 1/(1 + i_s) for all s. When consumers are risk-neutral these discount factors become φ_s = π_s/(1 + i) for all s, where 1/(1 + i) is the price of a risk-free bond, with Σ_s φ_s = 1/(1 + i). Moreover, when payouts in R are security prices (without dividend payments), with R_ks = p_aks for all k, s, the ADPM in (3.11) can be decomposed as:

E(p̃_ak)/(1 + i) = Σ_s π_s p_aks/(1 + i) = p_ak  ∀k.

Prices deviate from this discounted martingale model when consumers are risk-averse as their discount factors are a combination of risk preferences and probability assessments about the states. Ross (1977a) showed that we can always derive a normalized expectations operator that makes security prices discounted martingales. This is achieved by normalizing the vector of state-contingent discount rates, which are prices of primitive (Arrow) securities, as π_s* = φ_s/Σ_s φ_s for all s, where Σ_s φ_s = 1/(1 + i) is the price of the risk-free bond that pays one unit of the numeraire good in every state. These normalized prices have the same property as probabilities, with Σ_s π_s* = 1, but they are not strictly probabilities unless consumers are risk-neutral. For risk-averse consumers the normalized Arrow prices are combinations of subjective probability assessments about the likelihood of states and their preferences for transferring income between states and the current period.
By using these normalized prices as the expectations operator we can rewrite the ADPM in (3.11) as:

π*R/(1 + i) = p_a,

with π* := (π_1*, ..., π_S*). Clearly, when the payouts in R are security prices, we have E*(R̃_k)/(1 + i) = π*R_k/(1 + i) = p_ak for all k, using the normalized expectations operator E*( ) with normalized Arrow prices as probabilities. Thus, security prices are discounted martingales based on the normalized expectations operator. These normalized Arrow prices are frequently referred to as risk-neutral probabilities because they play the same role as probabilities when security payoffs are discounted using the risk-free return. Indeed, when consumers are risk-neutral the normalized Arrow prices are probabilities, with π_s* = π_s for all s, but when they are risk-averse the normalized prices contain a risk premium, and π_s* ≠ π_s for all s.

Box 3.3 Obtaining martingale prices from traded security prices

As a way to illustrate how the (discounted) martingale pricing model is used to value capital assets, we derive a normalized expectations operator by dividing the primitive (Arrow) prices in Box 3.1 by their sum (Σ_s φ_s = 1/(1 + i) = 0.95), where π* ≈ {0.32, 0.52, 0.16}. These normalized prices are used to value the payouts on capital assets when they are discounted by the risk-free interest rate. For the three traded securities in Box 3.1, we have

p_ADL Share = (0.52 × 15)/(1 + i) + (0.16 × 30)/(1 + i) ≈ 12;
p_Intec Share = (0.32 × 60)/(1 + i) + (0.16 × 20)/(1 + i) ≈ 21;
p_Govt. Bond = (0.32 × 20)/(1 + i) + (0.52 × 20)/(1 + i) + (0.16 × 20)/(1 + i) ≈ 19.

This example makes it clear how normalized Arrow prices are state probabilities for risk-neutral consumers. We can see the role of risk aversion in the Arrow prices by comparing the normalized expectations operator in the martingale model to the state probabilities (π), where:

π ≈ {0.35, 0.55, 0.10}.

Risk aversion places extra weight on payouts in the third state and less weight on payouts in the first two states.
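The normalization in Box 3.3 is straightforward to reproduce: divide the Arrow prices by their sum to obtain the risk-neutral probabilities, then discount the state payouts at the risk-free rate. A sketch with the numbers from Boxes 3.1 and 3.3:

```python
# Risk-neutral (martingale) pricing from Box 3.3.
phi = [0.3, 0.5, 0.15]              # Arrow prices (Box 3.1)
p_B = sum(phi)                      # 0.95, price of the risk-free bond
i = 1.0 / p_B - 1.0                 # risk-free rate
pi_star = [f / p_B for f in phi]    # ~ {0.32, 0.52, 0.16}

payouts = {
    "ADL share":   [0.0, 15.0, 30.0],
    "Intec share": [60.0, 0.0, 20.0],
    "Govt. bond":  [20.0, 20.0, 20.0],
}

# Price each security as its pi*-expected payout discounted at i.
prices = {k: sum(q * r for q, r in zip(pi_star, R)) / (1.0 + i)
          for k, R in payouts.items()}
# -> ADL share 12, Intec share 21, Govt. bond 19
```

Note the algebra is exact here: π*·R/(1 + i) = (φ/p_B)·R·p_B = φ·R, which is why the recovered prices match Box 3.1 without rounding error.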
Notice how the normalized expectations operator shifts the risk premium from the discount rates (φ) in (3.11) to the expectations operator π*.

3.3 Asset pricing in a two-period setting

In this section we derive the consumption-based pricing model (CBPM) in a two-period setting by assigning NMEU functions to consumers in the Arrow–Debreu asset economy. As noted above, there are four popular pricing models in finance that are special cases of the CBPM where consumers have the same consumption risk. Thus, they measure and price the risk in capital assets in the same way.36 As preparation for the analysis in Chapter 4, we examine the properties of the CBPM. In particular, we look at why consumers have the same consumption risk, and why diversifiable risk attracts no risk premium. We extend the CBPM by adopting power utility functions and mean–variance analysis. Both simplify the analysis considerably: power utility makes consumption expenditure in each time period a constant proportion of wealth, and mean–variance analysis restricts the information needed to summarize the statistical properties of consumption expenditure.

3.3.1 Asset prices with expected utility

In a two-period setting most analysts make the NMEU function time-separable, with EU = U(I_0) + δE{U(Ĩ)}, where 0 < δ ≤ 1 and E{U(Ĩ)} = Σ_s π_s U(I_s). By using these preferences we can rewrite the ADPM in (3.11) as the consumption-based pricing model (CBPM),

E(m̃R) = p_a,   (3.17)

where E( ) = Σ_s π_s( ) is the common expectations operator, m̃ = δU′(Ĩ)/U′(I_0) the stochastic discount factor, which is also referred to as the pricing kernel or state price density, and

Box 3.4 Using the CBPM to isolate the discount factors in Arrow prices

We can decompose the primitive (Arrow) prices obtained in Box 3.1 earlier by using the CBPM.
When consumers with common expectations use the state probabilities π := {0.35, 0.55, 0.10} their stochastic discount factors are obtained by dividing the Arrow prices by their respective probabilities, with m_s = φ_s/π_s, where:

m ≈ {0.86, 0.91, 1.50}.

By using these discount factors and the state probabilities we can decompose the three security prices in Box 3.1 as follows:

p_ADL Share = (0.55 × 0.91 × 15) + (0.10 × 1.50 × 30) ≈ 12,
p_Intec Share = (0.35 × 0.86 × 60) + (0.10 × 1.50 × 20) ≈ 21,
p_Govt. Bond = (0.35 × 0.86 × 20) + (0.55 × 0.91 × 20) + (0.10 × 1.50 × 20) ≈ 19.

R̃ = (R̃_1, ..., R̃_K) is the vector of random payouts to the k = 1, ..., K securities. By using NMEU we can separate risk assessments from the marginal utility derived from consumption expenditure in the stochastic discount factor in (3.11), as φ_s = π_s m_s. Since consumers use the same expectations operator E( ), and face a common payoff matrix R and common vector of security prices p_a, they have the same stochastic discount factor in the CBPM.37

Box 3.5 Using the CBPM in (3.18) to compute expected security returns

The decomposition in (3.18) can be confirmed by using the state probabilities π := {0.35, 0.55, 0.10} and stochastic discount factors m ≈ {0.86, 0.91, 1.50}, derived in Box 3.4, to compute the current prices of the three securities in Box 3.1. Since the price of a risk-free bond that pays one dollar in each state is E(m̃) = 0.95, the risk-free interest rate is i = 0.0526316. The expected payouts and rates of return for each security, together with their covariance terms, are summarized below.

Security      E(R̃_k)   Cov(m̃, R̃_k)   E(ĩ_k)       Cov(−m̃, ĩ_k)
ADL Share     11.25     1.3125         −0.0625      −0.109375
Intec Share   23        −0.85          0.0952381    0.040476
Govt. Bond    20        0              0.0526316    0

Using the pricing equation in (3.18), with p_k = E(m̃)E(R̃_k) + Cov(m̃, R̃_k), we have:

p_ADL Share = (0.95 × 11.25) + 1.3125 = 12,
p_Intec Share = (0.95 × 23) − 0.85 = 21,
p_Govt. Bond = (0.95 × 20) + 0 = 19.
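The computations in Boxes 3.4 and 3.5 can be reproduced, and extended to the beta form of the model, with a short script (numpy assumed; payouts, probabilities and Arrow prices from Box 3.1):

```python
# Recover the stochastic discount factor m_s = phi_s / pi_s, then verify
# the pricing relations p = E(m R) and E(i_k) = i + beta_k * lambda_m.
import numpy as np

pi = np.array([0.35, 0.55, 0.10])        # state probabilities
phi = np.array([0.30, 0.50, 0.15])       # Arrow prices (Box 3.1)
m = phi / pi                             # ~ {0.86, 0.91, 1.50}

R = np.array([[0.0, 15.0, 30.0],         # ADL share
              [60.0, 0.0, 20.0],         # Intec share
              [20.0, 20.0, 20.0]])       # government bond

prices = R @ phi                         # = E(m R) -> [12, 21, 19]
Em = float(pi @ m)                       # 0.95, the risk-free bond price
i = 1.0 / Em - 1.0                       # risk-free rate ~ 0.0526316

# State-contingent returns i_ks = R_ks / p_k - 1, and their expectations.
returns = R / prices[:, None] - 1.0
E_ret = returns @ pi

# Beta pricing: beta_k = Cov(-m, i_k) / Var(m), lambda = Var(m) / E(m).
cov_neg_m = (((-m) - pi @ (-m)) * (returns - E_ret[:, None])) @ pi
var_m = float(pi @ (m - Em) ** 2)        # ~ 0.0341883
beta = cov_neg_m / var_m
lam = var_m / Em
E_ret_beta = i + beta * lam              # reproduces E_ret exactly
```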
It is possible to obtain a so-called beta model from (3.17) by writing the price of any risky security k as

E(m̃R̃_k) = E(m̃)E(R̃_k) + Cov(m̃, R̃_k) = p_ak.38   (3.18)

After defining random security returns as R̃_k = (1 + ĩ_k)p_ak, and solving the price of the risk-free bond that pays one dollar in every state as E(m̃) = 1/R_F = 1/(1 + i), we have

ī_k − i = β_{m,k} λ_m,39   (3.19)

where β_{m,k} = Cov(−m̃, ĩ_k)/Var(m̃) is the beta coefficient that measures the quantity of market risk in security k, and λ_m = Var(m̃)/E(m̃) the price of market risk, which is independent of k.

Box 3.6 Using the CBPM in (3.19) to compute expected security returns

To use the beta version of the CBPM in (3.19) we need to calculate the random rates of return for each of the three securities (k) in Box 3.1 using ĩ_k = R̃_k/p_k − 1. They are summarized below with their expected values and covariance with the stochastic discount factor.

              Security returns
Security      State 1     State 2     State 3     E(ĩ_k)       Cov(−m̃, ĩ_k)
ADL Share     −1          0.25        1.5         −0.0625      −0.109375
Intec Share   1.8571429   −1          −0.047619   0.0952381    0.040476
Govt. Bond    0.0526316   0.0526316   0.0526316   0.0526316    0

We obtained these expected returns for each security as the probability-weighted sum of their returns in each state, using π := {0.35, 0.55, 0.10}. They can also be obtained by using the beta model in (3.19), with E(ĩ_k) = i + β_{m,k} λ_m, where β_{m,k} = Cov(−m̃, ĩ_k)/Var(m̃) and λ_m = Var(m̃)/E(m̃). For the stochastic discount factors computed in Box 3.5, with m ≈ {0.86, 0.91, 1.50}, we have E(m̃) = 0.95 and Var(m̃) = 0.0341883, where:

E(ĩ_ADL Share) = 0.0526316 − (3.1991928 × 0.0359877) ≈ −0.0625,
E(ĩ_Intec Share) = 0.0526316 + (1.183919 × 0.0359877) ≈ 0.0952381,
E(ĩ_Govt. Bond) = 0.0526316 + (0 × 0.0359877) = 0.0526316.

In practice the stochastic discount factors in (3.17) are (potentially complex) non-linear functions of a large number of exogenously determined variables in the economy.
Thus, the two versions of the CBPM in (3.18) and (3.19) are difficult to obtain from observable data. The popular pricing models examined later in Chapter 4 rely on additional assumptions to make the stochastic discount factor linear in a small number of state variables reported in aggregate data. While that makes them easier to use, they are less robust empirically.

The CBPM in (3.17) provides a number of very useful insights into the way expected security returns are affected by risk. First, in equilibrium the (risk-free) interest rate (i) is determined by the rate of time preference (ρ), consumer risk aversion and the growth in aggregate consumption expenditure. To see why that is the case, use δ = 1/(1 + ρ) to write E(m̃) = 1/(1 + i) as:

1 + i = (1 + ρ) U′(I_0)/E[U′(Ĩ)].

Risk-neutral consumers (with constant U′(I)) have a rate of time preference equal to the interest rate (with ρ = i). Expected consumption growth and risk aversion both cause the rate of time preference to fall below the interest rate. We have ρ < i with (a) consumption growth and no uncertainty, where U′(I_0) > U′(I_1), and (b) no expected consumption growth and uncertainty (with E(Ĩ) = I_0 and σ²_I > 0), where risk aversion drives down expected marginal utility (with E[U′(Ĩ)] < U′(I_0)).

Second, no premium is paid for diversifiable (idiosyncratic) risk when it can be costlessly eliminated in a frictionless competitive capital market. This is referred to as the mutuality principle, where consumers use financial securities to pool this risk and eliminate it from their consumption. When the return on a risky security j (with σ²_j > 0) has zero covariance with aggregate consumption (Cov(m̃, ĩ_j) = 0), then from (3.19) its expected return must be equal to the risk-free return, with ī_j = i.
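The first insight lends itself to a small numerical sketch. Everything below is hypothetical (a power-form marginal utility U′(I) = I^−γ and made-up parameter values), chosen only to show that certain consumption growth pushes the interest rate above the rate of time preference, while with no growth and no uncertainty the two coincide:

```python
# Illustrative only: 1 + i = (1 + rho) * U'(I0) / E[U'(I1)] with U'(I) = I**-gamma.
rho, gamma, I0 = 0.03, 2.0, 100.0          # assumed time preference, CRRA, current consumption

def implied_rate(states, probs):
    """Interest rate implied by second-period consumption prospects."""
    EU1 = sum(p * I ** -gamma for p, I in zip(probs, states))
    return (1 + rho) * I0 ** -gamma / EU1 - 1

i_flat   = implied_rate([100.0], [1.0])    # no growth, no risk: i equals rho
i_growth = implied_rate([105.0], [1.0])    # certain 5% growth: i rises above rho
```

With certain growth the implied rate is about 13.6 per cent here, well above ρ = 3 per cent, because future marginal utility is lower than current marginal utility.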
Only market risk is priced by the CBPM when diversifiable risk can be costlessly eliminated from consumption.40

Third, in a complete capital market the competitive equilibrium outcome in the Arrow–Debreu asset economy is Pareto optimal, where consumers have the same stochastic discount factor (m̃) in (3.17). Thus, they have the same discounted growth in their marginal utility from consumption, and with diminishing marginal utility, changes in marginal utility are negatively correlated with changes in consumption, so that consumers face the same consumption risk. Thus, security returns that covary positively with aggregate consumption pay a risk premium because they contribute to consumption risk. In general, however, the functional relationship between the risk premium on security returns and their covariance with aggregate consumption is non-linear due to the concavity of the utility function. Thus, we cannot replace the beta coefficient in (3.19) with a consumption-beta coefficient without placing further restrictions on preferences and/or the stochastic properties of aggregate consumption and security returns.

One of the most common ways of obtaining a closed-form solution for the stochastic discount factor in (3.17) is to adopt a power utility function, which normally takes the form

U(I_{t+1}) = I_{t+1}^(1−γ)/(1 − γ)  for γ ≠ 1,
U(I_{t+1}) = ln(I_{t+1})            for γ = 1,    (3.20)

where γ is the CRRA.41 Since it has a constant CRRA there is a one-to-one mapping between changes in marginal utility and changes in aggregate consumption. This is confirmed by using these functions to solve the stochastic discount factor in (3.17) as:42

m̃_{t+1} = δ(Ĩ_{t+1}/I_t)^(−γ)  for γ ≠ 1,
m̃_{t+1} = δ(Ĩ_{t+1}/I_t)^(−1)  for γ = 1.    (3.21)
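A minimal sketch of the discount factor just displayed, with δ, γ and the consumption growth factor assumed for illustration:

```python
# Power-utility stochastic discount factor m = delta * (I1/I0)**-gamma; the
# log-utility case is gamma = 1. Parameter values are assumed, not from the text.
delta = 0.95

def sdf(growth_factor, gamma):
    """Discount factor for consumption growth I1/I0 = growth_factor."""
    return delta * growth_factor ** -gamma

m_log  = sdf(1.05, 1.0)    # log utility: 0.95 / 1.05
m_crra = sdf(1.05, 5.0)    # higher risk aversion discounts growth more heavily
```

With no consumption growth the discount factor is simply δ, whatever the CRRA; the more risk-averse the consumer, the more heavily states with high consumption growth are discounted.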
By using them to measure the wealth of an infinitely lived representative consumer, we have

W_t = E_t Σ_{j=1..∞} δ^j (Ĩ_{t+j}/I_t)^(−γ) Ĩ_{t+j}  for γ ≠ 1,
W_t = E_t Σ_{j=1..∞} δ^j (Ĩ_{t+j}/I_t)^(−1) Ĩ_{t+j} = δI_t/(1 − δ)  for γ = 1,    (3.22)

where g̃_{t+1} = (Ĩ_{t+1} − I_t)/I_t is the growth rate in consumption in period t, so that (Ĩ_{t+j}/I_t)^(−γ) can equivalently be written as the compounded growth term Π_{s=1..j}(1 + g̃_{t+s})^(−γ). With log utility (γ = 1) consumption is a constant proportion of wealth in each period, where the stochastic discount factor over period t to t + 1 is equal to 1/(1 + ĩ_{W,t+1}), with ĩ_{W,t+1} being the return on wealth.43 There is a linear relationship between the expected return on securities and their covariance with aggregate consumption for both versions of the power utility function in (3.20) when security returns are jointly log-normally distributed with aggregate consumption.44

Box 3.7 Consumption with log utility: a numerical example

An infinitely lived consumer with current wealth of $1.5 million and a rate of time preference that makes δ = 0.95 will consume approximately $78,947 when they have the log utility function in (3.20) with γ = 1. This is confirmed by using the solution to wealth in (3.22), where I_t = [(1 − δ)/δ]W_t = 0.052631578 × $1,500,000 ≈ $78,947.

Another important feature of power utility functions is the inverse relationship between the elasticity of intertemporal consumption expenditure (Ω) and the CRRA. Both are measures of the curvature of the utility function over consumption expenditure, where the first relates to differences over time and the second to differences over uncertain outcomes in each period. This inverse relationship is confirmed by using (3.20), with γ ≠ 1, to write the marginal rate of substitution between consumption over time as

U′_1/U′_0 = δ(I_1/I_0)^(−γ).

After taking the log of this expression we obtain the elasticity of intertemporal consumption expenditure:

Ω = −d ln(I_{t+1}/I_t)/d ln(U′_{t+1}/U′_t) = 1/γ.

This tells us how sensitive intertemporal consumption is to changes in its relative cost, where changes in the marginal rate of substitution are driven by changes in the relative price of consumption over time. Thus, when consumers with power utility have a high CRRA they dislike changes in consumption within each period and also across time periods, while the reverse applies when they have a low CRRA. In other words, highly risk-averse consumers regard consumption as highly complementary across uncertain outcomes and also across time periods. They prefer smooth consumption flows over their lives.45

3.3.2 The mutuality principle

To demonstrate the way consumers can costlessly eliminate diversifiable risk from their consumption in a complete capital market, we consider a two-period endowment economy with two states of nature.46 In the good state (G) individuals consume their income endowments (which they can transfer between periods by trading securities), while they incur a common income loss L in the bad state (B). The bad state occurs with exogenously given probability π_B and the good state with probability π_G = 1 − π_B, for all consumers. To simplify the analysis we allow consumers to trade a full set of primitive (Arrow) securities in the first period, one for each state, with respective prices p^P_aB and p^P_aG.
They use them to transfer their consumption expenditure over time and between states to maximize NMEU functions, where the consumer problem can be summarized as: ⎧ ⎪ max ⎨U ( I 0 ) + π B δU ( I B ) + π G δU ( I G ) ⎪ ⎩ P a − pP a ⎫ I 0 = X 0 − paB B aG G ⎪ I B = X 1 − L + aB ⎬ ⎪ I G = X 1 + aG ⎭ Uncertainty and risk To simplify the analysis we remove any aggregate uncertainty by fixing the income endowment ( X 1 ) in the second period and invoking the law of large numbers to make the aggregate loss in income certain at πBHL, where πBH is the proportion of the population (H) who incur loss L.47 The only uncertainty consumers face here is whether or not they fall into that group. Using the first-order conditions for the optimally chosen security trades, they transfer income between the two states until MRS B,G ( I ) ≡ P π B δU ′( I B ) paB = ≡ MRTB,G ( I ), P π G δU ′( I G ) paG where MRTB,G(I) is the marginal cost of transferring income from the good to the bad state. P = π /(1 + i ) and In a frictionless competitive capital market the security prices solve as paB B 48 P paG = π G /(1 + i ). After substituting them into (3.25) we have U′(IB) = U′(IG), where riskaverse consumers eliminate risk from their consumption expenditure, with IB = IG. This equilibrium outcome is located on the 45° line in Figure 3.6. In the absence of a capital market, consumers would locate at their income endowment point E. Notice from (3.25) that the indifference curves must have a slope equal to the relative probabilities of the two states (πB/πG) along the 45° line. At the endowment point risk-averse consumers have a marginal valuation for bad state consumption that exceeds its relative cost (with MRSB,G(I) > πB/πG). The gains from transferring income from the good to the bad state are maximized by trading to ˆI on the 45° line where consumption risk is eliminated. 
Thus, in a frictionless complete capital market there is no risk premium in security returns for diversifiable risk because it can be costlessly eliminated. This is confirmed by using the equilibrium security prices p^P_aB = π_B/(1 + i) and p^P_aG = π_G/(1 + i) to compute consumer wealth at an interior solution when the three budget constraints in (3.24) bind, where:

W = X_0 − [π_B a_B + π_G a_G]/(1 + i) + [π_B(X̄_1 − L + a_B) + π_G(X̄_1 + a_G)]/(1 + i)
  = X_0 + (X̄_1 − π_B L)/(1 + i).

It is the same as wealth when the primitive securities are replaced by a risk-free bond that stops consumers transferring income between the states to smooth consumption. In other words, wealth is independent of the amount of income transferred between the good and bad states when security prices are based on their relative probabilities. Thus, we have:

Definition 3.8 (The mutuality principle) In a frictionless competitive capital market with common information, diversifiable risk is costlessly eliminated from consumption and attracts no risk premium in expected security returns. Only non-diversifiable (market) risk attracts a risk premium in these circumstances.

The mutuality principle can fail to hold when there are transactions costs, asymmetric information and state-dependent preferences. Each will now be examined in turn.

Figure 3.6 The mutuality principle.

In the presence of a constant marginal cost (τ) of trading primitive securities (with τ > 0 when a_s > 0, and τ < 0 when a_s < 0, for s ∈ {B, G}) we have

(π_B + τ)/(π_G − τ) > π_B/π_G.

Thus, risk-averse consumers no longer eliminate all the diversifiable risk from their consumption expenditure. The effects of trading costs are illustrated in Figure 3.7, where they contract the consumption opportunity set around the endowment point E. Consumers choose an equilibrium allocation such as Î which lies off the 45° line.
Asymmetric information can also cause the mutuality principle to fail. When traders have different information and form different beliefs, the primitive security prices can deviate from the discounted state probability assessments made by consumers, thereby resulting in equilibrium allocations off the 45° line. Other problems can arise from asymmetric information IG IB = IG – x1 ^ I Slope = −πB /πG Slope = −πB + τ πG − τ EU ^I 45° – x1−L Figure 3.7 Trading costs. Uncertainty and risk when consumers can affect their probabilities of incurring losses by expending effort, or when they have different loss probabilities. These issues are examined later in Chapter 5. Consumers with state-dependent preferences will not in general eliminate diversifiable risk from their consumption expenditure, even when they can do so costlessly. Indeed, we observe situations where consumers get different utility from the same real consumption bundle in different states of nature. An obvious example is where states of nature determine a consumer’s health which changes the way utility maps from real consumption. If, for example, they get more utility from every consumption bundle in the good state they will not choose to equate their consumption in each state when faced with primitive security P = π /(1 + i ) P = π /(1 + i ) prices paB and paG . This can be formalized by assigning to each G B consumer the state-dependent expected utility function, EU = U 0 ( I 0 ) + δπ BU ( I B , B ) + δπ GU ( I G , G ), where the optimally chosen allocation of consumption across the two states must now satisfy π B δU ′( I B , B) π B = . π G δU ′( I G , G ) π G It is now possible that U′(IB,B) ≠ U′(IG,G) when IB = IG. An example is illustrated in Figure 3.8, where the optimal allocation of consumption occurs at Î , with IB < IG, because the consumer has a higher net marginal valuation for consumption in the good state on the 45° line. 
Even though consumers can costlessly eliminate risk from their consumption, they choose not to do so because they get more utility at the margin from consumption in the good state over the bad state. If transactions costs raise the relative cost of good state consumption the consumer bears even more risk. This example also conveniently demonstrates why it is difficult in practice to separate risk from preferences over consumption when consumers have subjective probabilities and statedependent utility. Whenever primitive security prices deviate from their state probabilities, P / pP with π B / π G ≠ paB aG, the relationship between the slopes of the indifference schedule and IG IB = IG – x1 ^ I EU ^I Slope = −πB /πG 45° – x1−L Figure 3.8 State-dependent preferences. Uncertainty and risk budget constraint are combinations of probability assessments and consumption benefits. By using the state-dependent subjective expected utility function, EU = U 0 ( I 0 ) + δ ∑ π hs U ( I s , s ), to decompose the ADPM in (3.17), we have h R) = pa , E0h ( m where E0h (⋅) = Es π hs (⋅) is the subjective expectations operator that is based on information ~) and security prices available at time 0. Since consumers face the same random payouts (R ( pa), they face the same Arrow security prices. But they do not decompose them in the same way when they use different state probabilities, with ϕ s = π hs msh for all h. It is possible to identify their subjective probabilities when they have state-independent preferences. And we do so by computing the slopes of their indifference schedules on the 45° line with constant Box 3.8 The mutuality principle: a numerical example Janet consumes a single good corn (x) in each of two periods to maximize expected utility, lnx0 + πB0.95 ln xB + πG0.95 ln xG, where x0 is current consumption and xB and xG bad and good state consumption, respectively, in the second period with probabilities πB = 0.4 and πG = 0.6. 
She has 2140 kg of corn in the first period which is allocated to current consumption and two primitive securities, a_B and a_G. Each security pays a kilo of corn in the bad and good states, respectively, and they trade at current prices p_B and p_G (measured in units of corn), where Janet's budget constraint is x_0 + p_B a_B + p_G a_G ≤ 2140. Thus, she consumes x_G = a_G in the good state, and x_B = a_B − 500 in the bad state where there is a loss of 500 kg of corn due to theft. Since 40 per cent of the population always incurs this loss it is diversifiable risk (by the law of large numbers) and there is no aggregate risk in the economy. In a frictionless competitive capital market, with 1/(1 + i) = 0.95, the primitive security prices are equal to p_B = π_B/(1 + i) = 0.38 and p_G = π_G/(1 + i) = 0.57. When Janet makes her utility-maximizing consumption choices, they satisfy

π_B × 0.95 × (x_0/x_B) = λ_B/λ_0 and π_G × 0.95 × (x_0/x_G) = λ_G/λ_0,

where λ_0, λ_B and λ_G are the Lagrange multipliers for her three constraints. Since optimally chosen security demands satisfy p_B = λ_B/λ_0 = π_B/(1 + i) and p_G = λ_G/λ_0 = π_G/(1 + i), Janet consumes x*_0 = x*_B = x*_G = 1,000 kg, with x*_B = a*_B − 500 = 1,000 kg and x*_G = a*_G = 1,000 kg. Thus, the mutuality principle holds here because all the diversifiable risk has been eliminated from her second-period consumption.

consumption across the states. Since U′(I_B) = U′(I_G) with state-independent preferences and constant consumption, the slopes of the indifference schedules along the 45° line are equal to the ratios of the state probabilities. Alternatively, we could do the same thing to identify the state-dependent preferences of consumers when they have objective probabilities. But with subjective probabilities and state-dependent preferences we cannot identify their risk assessments without imposing additional restrictions on their preferences or the probability distributions.
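Box 3.8 can be verified by exploiting the structure of the first-order conditions: with log utility and actuarially fair security prices, consumption is equalized across the two states, so the budget constraint pins down the whole solution. The numbers come from the box; the closed-form substitution step is our own shortcut, not the text's:

```python
# Sketch of Box 3.8: full insurance under log utility and fair Arrow prices.
W, L = 2140.0, 500.0                 # endowment (kg of corn) and bad-state loss
piB, piG, delta = 0.4, 0.6, 0.95     # state probabilities and 1/(1 + i)
pB, pG = piB * delta, piG * delta    # fair prices: 0.38 and 0.57

# FOCs give x0 = xB = xG; substitute into x0 + pB*(x0 + L) + pG*x0 = W.
x0 = (W - pB * L) / (1 + pB + pG)    # common consumption level
aB, aG = x0 + L, x0                  # security holdings backing xB and xG
budget = x0 + pB * aB + pG * aG      # should exhaust the 2140 kg endowment
```

This recovers Janet's 1,000 kg of consumption in each period and state, with security holdings a_B = 1,500 and a_G = 1,000.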
Uncertainty and risk 3.3.3 Asset prices with mean–variance preferences We saw in Section 3.3.1 how consumers use the same stochastic discount factors in the CBPM in (3.17). Without making further assumptions, however, the task of estimating the stochastic discount factors in (3.17) is potentially complex. In each time period the variance in aggregate consumption expenditure depends on the variance in income as well as the variance in relative commodity prices in all future time periods. Thus, even with time-separable expected utility, consumption in each period is a function of wealth, which is the discounted present value of all future consumption flows. And relative commodity prices matter because they determine the real consumption opportunities in each future time period. In a general equilibrium setting, aggregate consumption is likely to be a non-linear function that is potentially cumbersome to use in computational work. But even if we manage to solve it as a function of one or a small number of aggregate variables, we also need to measure the stochastic properties of the randomness they impart to aggregate consumption. Popular pricing models in finance adopt mean–variance analysis, where these two moments completely summarize the impact of risk in aggregate consumption on the utility of consumers. There are two ways to invoke a mean–variance analysis on the CBPM in (3.17). i Consumers with quadratic preferences care only about the mean and variance in their ~ ~ ~2 (real) consumption expenditure.49 An example is the utility function U(I ) = aI − 1⁄2bI which makes the stochastic discount factor ⎛ a − bI ⎞ = δ⎜ , m ⎝ a − bI 0 ⎟⎠ where the pricing equation for any security k in (3.19), using Rk (1+ ik ) pak and ) = 1 / (1 + i ), becomes E (m ik − i = ψ Cov ( I, ik ), with ψ = −(1 + i) [δb/(a − bI0)] being a constant coefficient. 
By creating a derivative security with unit sensitivity to the risk in aggregate consumption expenditure (with Cov( I, iI ) = σ 2I we can write the asset pricing model in (3.19) as ik − i = β Ik λ I ,50 where β Ik = Cov(iI , ik )/ σ 2I is the consumption-beta coefficient for security k, and λ I = ( iI − i )/σ 2I the premium for consumption risk. The derivative security is a mimicking portfolio constructed to replicate the risk in aggregate consumption (with βII = 1) where the pricing equation in (3.32) differs from (3.19) by measuring risk in security returns by their covariance with aggregate consumption rather than the discounted change in marginal utility. In effect, quadratic preferences make changes in aggregate consumption a proxy for changes in marginal utility. And the model also holds unconditionally in a multi-period setting (which means it is independent of the time period) when the risk-free return is constant and security returns are identical and independently and identically distributed to rule out shifts in the investment opportunity set over time. Uncertainty and risk ii Wherever possible we try to minimize the restrictions imposed on consumer preferences. Thus, a preferable approach to adopting quadratic preferences is to assume aggregate consumption is normally distributed where its probability distribution is fully described by the mean and variance, with:51 E ( e I ) = e E ( I)+ 1 2 σ 2I .52 The normal distribution is a symmetric bell-shaped function with almost all the probability mass within three standard deviations of the mean. An example is shown in Figure 3.9 for the return on an asset with a mean of 10 per cent and standard deviation of 12 per cent. It can be a derivative security created by bundling traded securities together in a portfolio. 
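Two normal-distribution facts do the heavy lifting in this subsection: the lognormal-mean identity just displayed, and the coverage probabilities quoted for the Figure 3.9 return example below. Both can be checked numerically; the mean and standard deviation here match that 10 per cent and 12 per cent example:

```python
# Illustrative checks: (i) E(e^I) = exp(E(I) + Var(I)/2) for normal I, via
# trapezoidal integration of e^x against the normal density; (ii) probability
# mass within k standard deviations of the mean (the 68/95/99.7 rule).
import math

mu, sigma = 0.10, 0.12                       # mean and std dev of the asset return

def normal_pdf(x):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

n = 50_000                                   # integration grid over +/- 10 std devs
lo, hi = mu - 10 * sigma, mu + 10 * sigma
h = (hi - lo) / n
vals = [math.exp(lo + k * h) * normal_pdf(lo + k * h) for k in range(n + 1)]
integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))   # numerical E(e^I)
closed_form = math.exp(mu + sigma ** 2 / 2)               # identity from the text

coverage = {k: round(100 * math.erf(k / math.sqrt(2)), 2) for k in (1, 2, 3)}
```

The integral matches the closed form to several decimal places, and `coverage` returns roughly 68, 95 and 99.7 per cent for one, two and three standard deviations.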
There is a 68.26 per cent probability that the asset return will lie within one standard deviation of the mean (−2 and 22 percentage points), a 95.44 per cent probability it will lie within two standard deviations of the mean (−14 and 34 percentage points), and a 99.74 per cent probability it will lie within three standard deviations of the mean (−26 and 46 percentage points). (These probabilities are represented by the areas below the distribution function over the respective deviations from the mean in Figure 3.9.)

Figure 3.9 Normally distributed asset return.

When all future consumption is funded from payouts to a portfolio (P) of securities we can map utility indirectly over the mean and standard deviation in its expected return, as V(ī_P, σ_P). For risk-averse consumers the function increases with the mean and falls with the standard deviation. In other words, higher expected consumption makes consumers better off, while a larger variance makes them worse off. An indifference curve for a risk-averse consumer is illustrated in Figure 3.10 as V_RA(ī_P, σ_P). Since utility declines with additional risk it has a positive slope, which becomes steeper with increasing disutility. The slope of the line tangent to the indifference curve at any point in the consumption space (like point A in Figure 3.10) tells us the consumer's marginal valuation for risk (the slope dī_P/dσ_P holding dV = 0); it is their price of risk.

Figure 3.10 Mean–variance preferences.

To understand how risk aversion impacts on the asset pricing models examined later in Chapter 4, we frequently consider what happens when consumers are risk-neutral. The indifference curve for a risk-neutral consumer is illustrated in Figure 3.10 as the
It is horizontal because utility is unaffected by changes in risk, where no risk premium is required to get consumers to bear consumption risk. In these circumstances the returns on all the risky assets are equal to the risk-free interest rate. In a two-period setting consumption risk originates from aggregate uncertainty in secondperiod endowments and production. As owners of the endowments and shareholders in firms, consumers ultimately bear this non-diversifiable risk. When they have quadratic preferences, or second-period consumption is normally distributed, we can write their NMEU function in a two-period setting as EU = U 0 ( I 0 ) + V ( I , σ I ). The asset pricing models examined in Chapter 4 use mean–variance analysis. It conveniently makes the stochastic discount factor in the CBPM in (3.17) linear in the factors (state variables) that isolate the market risk in aggregate consumption.53 3.4 Term structure of interest rates Capital assets with net cash flows over multiple future time periods are valued by discounting them for the opportunity cost of time and risk. Long-term stochastic discount factors are used for net cash flows in each time period, and they are the product of a full set of shortterm stochastic discount factors, one for each consecutive time period up to the date of the cash flows. For example, the present value (at time t) of random cash flows in some future period T > t is computed using the long-term stochastic discount factor T = δT −t m U ′( IT ) . U ′( I t ) This can be decomposed as Uncertainty and risk T = t m t +1 ⋅t +1 m t + 2 ⋅ … ⋅T −1 m T , m t = δU ′( It )/U ′( It −1 ) is the short term discount factor over each period t−1 to t.54 where t −1 m Using this decomposition we can write the current (at time t) market price of security k with payouts over T periods as ⎛ T t+ j ⎞ ⎛ T ⎞ Pakt = Et ⎜ ∑ ∏ w −1 m w Rk ,t + j ⎟ = Et ⎜ ∑ mt + j Rk ,t + j ⎟ . 
⎝ ⎠ ⎝ j =0 w =t ⎠ j =0 Since long-term discount factors are based on long-term interest rates, and short-term discount factors on expected short-term interest rates, the expected short- and long-term interest rates are related to each other by arbitrage. The term structure of interest can be identified by comparing the long-term stochastic discount factors for government bonds with different dates to maturity. As government bonds are not (in general) subject to default risk their returns are (approximately) risk-free. That means they make the same payouts at every event in each time period, even though the risk-free interest rate can change over time.55 If there is a full set of long-term government bonds with maturity dates in each future time period we can obtain a full set of forward spot rates. And they are equal to the expected spot rates when the (pure) expectations hypothesis holds. Consider a discount bond that pays a unit of real purchasing power in the second year of its life (with RB2 = 1).56 Using the CBPM in (3.17) in a multi-period setting, its current price (at t = 0) is equal to the expected value of the (long-term) stochastic discount factor, which we can decompose, using (3.34), as 1 , 1 m 2 ) 2 ) = E0 ( 0 m 1 ⋅ 1 m 2 ) = E0 ( 0 m 1 ) E0 ( 1 m 2 ) + Cov 0 ( 0 m E0 ( m 1 ) E0 ( 1 m 2 ) is the current value of holding two short-term discount bonds where E0 ( 0 m expected to pay a unit of real purchasing power in the second period. Since the interest rate 1 ) = 1/(1 + i1 ) , where i1 is the interest in the first period is known (at t = 0), we have E0 ( 0 m rate in the first period. We also know the average annual yield to maturity (i2) on the long2 ) = 1/(1 + i2 )2 . However, term bond as it also trades in the first period, where E0 ( 1 m the interest rate on the second-period short-term bond is uncertain (at t = 0) as it trades at the end of the first period (at t = 1). 
A forward spot rate (₁f₂) is embedded in the price of the long-term bond, with:

E₀(m̃₂) = 1/[(1 + i₁)(1 + ₁f₂)].

When two short-term bonds are perfect substitutes for the long-term bond, the forward rate is equal to the expected spot rate. The relationship between these spot rates can be obtained by writing the current value of the short-term stochastic discount factor in the second period as E₀(₁m̃₂) = 1/[1 + E₀(₁ĩ₂)]. Using the decomposition in (3.34), we have

E₀(m̃₂) = E₀(₀m̃₁ · ₁m̃₂) = E₀(₀m̃₁)E₀(₁m̃₂) + Cov₀(₀m̃₁, ₁m̃₂),

where, by arbitrage,

1/[(1 + i₁)(1 + ₁f₂)] = 1/[(1 + i₁)(1 + E₀(₁ĩ₂))] + Cov₀(₀m̃₁, ₁m̃₂).    (3.37)

The covariance term is a risk premium, referred to as the term premium, that captures any differences in aggregate consumption risk from holding long- rather than short-term bonds. When the pure expectations hypothesis holds the short- and long-term bonds are perfect substitutes, with Cov₀(₀m̃₁, ₁m̃₂) = 0 and ₁f₂ = E₀(₁ĩ₂).

Most empirical tests of the expectations hypothesis try to find a constant risk premium that is independent of the bond's term to maturity. There are a number of explanations for the risk premium. Long-term real bonds provide a less risky way of funding future consumption than rolling over a sequence of short-term real bonds, as forward spot rates are known with certainty while expected spot rates are not beyond the first period. On the other hand, long-term bond returns are more volatile than short-term bond returns, particularly for nominal bonds which are affected by uncertainty about the future rate of general price inflation. The presence of a risk premium makes it difficult to solve short-term interest rates using the yield curve.
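The arbitrage between short and long rates can be sketched numerically. The spot and long rates below are assumed for illustration; with a zero term premium (the pure expectations hypothesis) the forward rate recovered from the two-year yield equals the expected second-period spot rate:

```python
# Illustrative sketch: forward rate embedded in a two-year discount bond,
# (1 + i2)**2 = (1 + i1) * (1 + f12), assuming a zero term premium.
i1, i2 = 0.04, 0.05                 # assumed one-year spot rate and two-year yield

f12 = (1 + i2) ** 2 / (1 + i1) - 1  # forward spot rate for the second year

# Price consistency: discount $1 with the long rate or with the two short legs.
p_long = 1 / (1 + i2) ** 2
p_legs = 1 / ((1 + i1) * (1 + f12))
```

Here f12 is about 6.01 per cent, above both spot rates, which is what an upward-sloping yield curve implies for the forward rate.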
In the absence of a full set of long-term bonds they can be obtained from computable general equilibrium models, where most adopt assumptions to make short-term interest rates functions of a small number of factors (state variables). One approach adopts power utility in the CBPM and assumes security returns are lognormally distributed with the stochastic discount factor. Changes in the interest rate are then determined by the set of factors that cause aggregate consumption risk to change over time. Cochrane (2001) provides examples of these models and examines their properties. Problems 1 Suppose there are three states of nature where the vector of prices for the primitive (Arrow) securities is ϕ = {0.25, 0.34, 0.38}. i Compute the price of a risk-free bond that pays $1 in every state. ii Compute the normalized probabilities (π*) in the martingale pricing model. (These were explained in Section 3.2.4.) Explain how they are used to compute the values of capital assets. Why are they referred to as risk-neutral probabilities? Consider a competitive capital market in a two-period setting where two financial securities have the following state-contingent payouts: State Security A Security B The securities currently trade at market prices paA = $16 and paB = $18, respectively. i Compute the current market prices for the primitive (Arrow) securities. ii Calculate the risk-free interest rate using your answers in part (i), then calculate the rate of return in each state for the respective probabilities π1 = 0.57 and π2 = 0.43. iii What is the current price of an asset that pays $10 in the first state and $15 in the second state? Uncertainty and risk Compute the normalized probabilities (π*) used in the martingale pricing model and then use them to price the securities in part (iii). When are these the true probabilities? 
Consider the following payouts to three risky securities in a two-period setting with three states of nature: Security A B C Prices ($) (t = 0) 5 10 8 Payouts ($) State 1 State 2 State 3 i Derive the price of the primitive (Arrow) securities from this data. Explain how they relate to the probabilities when consumers are risk-neutral. ii Compute the risk-free interest rate using the Arrow prices in part (i). Is it equal to the rate of time preference in a frictionless competitive capital market? iii Compute the normalized probabilities (π∗) in the martingale pricing model and show how they are used to price the three securities. Asset pricing models When traders value capital assets they include a risk premium in their discount factors as compensation for any market risk in the net cash flows. This adjustment is made for projects undertaken by the public and private sectors, and for securities they sell to finance them. In a competitive capital market where the no arbitrage condition holds, traders face the same risk premium, but they may not compute it in the same way. Indeed, consumers with different information can measure and price risk differently, where asset pricing models are agent–specific. A key objective in finance research is to derive an asset pricing model where consumers measure and price risk in the same way. Ideally it should also be straightforward to use by isolating risk with a small number of state variables that are reported as aggregate data in national accounts. In this chapter we examine four equilibrium asset pricing models that do this – the capital asset pricing model (CAPM) developed by Sharpe (1964) and Lintner (1965), the intertemporal capital asset pricing model (ICAPM) by Merton (1973a), the arbitrage pricing theory (APT) by Ross (1976) and the consumption-beta capital asset pricing model (CCAPM) by Breeden and Litzenberger (1978) and Breeden (1979). 
Following Cochrane (2001) we derive these models as special cases of the consumptionbased pricing model (CBPM) obtained earlier in (3.17). It can be summarized as ~ ~ Et(m Rk) = pak, where Et(.) is the common expectations operator conditioned on information the stochastic discount factor, with m τ = δU ′( Iτ )/( I t ) for each period available at time t , m τ, Rk the stochastic payouts to security k, and pak its price at time t. Since consumers ultimately derive utility from bundles of goods, they value securities by their contribution to final consumption, and with common Et (⋅), Rk and pak they have the same stochastic ~ discount factor (m ) when they can trade in a frictionless competitive capital market. As the discount factors are determined by consumption in each period, consumers therefore have the same consumption risk in the CBPM. Thus, we can solve them as functions of variables that determine aggregate consumption risk. Unfortunately, however, they are in general quite complex non-linear functions that are difficult to solve and estimate empirically. The four pricing models overcome this problem by placing restrictions on preferences, wealth and/or the stochastic properties of security returns and aggregate consumption. They all have a linear stochastic discount factor in the state variables (factors) used to isolate aggregate consumption risk. The return on the risky (market) portfolio is the only factor in the CAPM because consumer wealth is confined to portfolios of securities. And expected security returns are linearly related to their covariance with this factor because they are jointly normally distributed. The ICAPM extends the CAPM to allow consumption risk to change over time due to shifts in the investment opportunity set and changes in relative commodity prices. 
Additional state variables are used to account for these changes in aggregate consumption risk, thereby increasing the covariance terms in the linear pricing equation. In the CCAPM consumers have the same constant coefficient of relative risk aversion (CRRA) when security returns are lognormally distributed with aggregate consumption. This provides a one-to-one mapping between changes in wealth and consumption in each time period, where expected security returns are linear in their covariance with aggregate consumption. The APT adopts a different approach by using a linear factor analysis to isolate risk in security returns. The factors themselves are not necessarily the source of consumption risk; rather, they are macro-variables used to identify any common component of changes in security returns – that is, to identify their systematic risk. There is no requirement for the returns to be jointly normally distributed in the APT as the linearity is imposed through the factor analysis. All of these models have strengths and weaknesses. For example, the single factors in the CAPM and the CCAPM are identified by the models themselves, while the additional factors in the ICAPM and the factors in the APT are not specified by the models. Unfortunately consumers have no risky income from labour or other capital assets in the CAPM and the ICAPM, as income is confined to payouts on portfolios of securities. Wages and salaries are a significant source of income for most consumers, and they can also be stochastic, particularly in sectors of the economy that experience regular fluctuations in activity. While labour and other income are included in the CCAPM, consumers must have the same constant coefficient of relative risk aversion for the variance in aggregate consumption to be the single risk factor.
There are also problems measuring aggregate consumption flows, as figures reported in the national accounts omit leisure and other non-marketed goods, and they include some items that should really be included in capital expenditure. In practice, the CAPM is widely used because of its simplicity and accessibility to data. Most analysts use a broadly based index of stocks trading on the national stock exchange as their market portfolio. These are value-weighted indexes, like the Standard & Poor's 500 in the United States and the All Ordinaries Index in Australia. Initially the CAPM was derived as the solution to the portfolio choice problem of consumers. Most textbook presentations follow this approach by deriving the efficient mean–variance frontier for risky securities in a frictionless competitive capital market to demonstrate the diversification effect. When security returns are less than perfectly correlated, some of their variability can be eliminated by holding them in portfolios. Any risk that cannot be diversified in this way is market (non-diversifiable) risk that someone in the economy must bear. Thus, it is the only risk that attracts a premium in security returns. The CAPM is examined in Section 4.1 below by following the approach used in Copeland and Weston (1988), where the efficient mean–variance frontier for risky securities is derived in a number of steps.1 The analysis begins with two risky securities to demonstrate the diversification effect identified by Markowitz (1959) and Tobin (1958). Consumers are then allowed to hold portfolios that combine a risky security with a risk-free security along a linear budget constraint called the capital market line. The APT is also examined separately in Section 4.2 to demonstrate the role of arbitrage in removing diversifiable risk from consumption expenditure, and the role of mimicking portfolios in pricing market (non-diversifiable) risk. These are common features of the consumption-based models examined in this chapter.
Another reason for analysing the APT separately is to demonstrate a linear factor analysis which isolates market risk empirically by identifying the common component in security returns. We then follow Cochrane (2001) and derive the four consumption-based pricing models – the CAPM, the ICAPM, the APT and the CCAPM – as special cases of the CBPM in Section 4.3. These derivations are slightly more formal because they focus on the direct link between consumption and security returns, with portfolio choices and arbitrage pushed into the background of the analysis. For that reason we start the analysis by deriving the CAPM as the solution to the portfolio choices of consumers in Section 4.1, and the APT model as the solution to arbitrage portfolios in Section 4.2. As noted above, the most attractive feature of the CBPM is that every consumer measures and prices risk in the same way, where in equilibrium the return on each asset $k$ is equal to the risk-free interest rate $i$ plus a risk premium $\Theta_k$ for the market risk in the asset, with $i_k = i + \Theta_k$. Conveniently, investors compute this risk premium by measuring the same quantity of market risk ($q_{Rk}$), which they value at the same price ($p_R$), with $\Theta_k = p_R q_{Rk}$. The task is further simplified by the fact that market risk is the non-diversifiable variance in an asset's return. In more general models, however, the equilibrium risk premium will not be measured and priced identically by consumers, where more information than just the variance in the asset return may be required to isolate market risk. For example, with trading costs consumers can have different consumption risk, where the pricing models become agent-specific. To see how the pricing models are used in practice, consider an asset $k$ with random net cash flows of $\tilde{R}_k$ in the second period.
Using one of the four consumption-based pricing models, we can compute its current price as

$$p_{ak} = \frac{\bar{R}_k}{1 + i + \Theta_k},$$

where $\bar{R}_k$ is the expected payout to the security. The discount rate $i + \Theta_k$ compensates asset holders for the opportunity cost of time ($i$) and the market (non-diversifiable) risk in the net cash flows ($\Theta_k$) on each dollar of capital invested in the asset. Even though the risk premium is isolated using different state variables in the four models, it is computed in the same way because every consumer has the same consumption risk.

4.1 Capital asset pricing model

As noted earlier, the CAPM is a popular pricing model because it is relatively straightforward to use. But it relies on a number of important assumptions that may not hold in practice. For that reason it is important to know the role they play so that users can assess the integrity of CAPM estimates. Financial analysts frequently use the model to approximate the risk premium on capital assets in a systematic way, rather than making a rough guess. Then, by choosing a range of values around this estimate, they undertake a sensitivity analysis to see what difference other assumptions make in the evaluation process. In this section we derive the CAPM by analysing the portfolio choices of consumers. The analysis commences with a summary of consumer preferences and the consumption space, before deriving the investment opportunity set. Then the pricing equation is obtained by bringing these two components together in the optimization problem for consumers. Finally, we relax the assumptions in the CAPM to see what role they play.

4.1.1 Consumption space and preferences

All the consumption risk in the CAPM originates from the risk in the returns to portfolios of securities held by consumers.
We capture this by writing the consumer problem in the two-period Arrow–Debreu asset economy as

$$\max \left\{ v(\tilde{I}) \;\middle|\; X_0 \le \bar{X}_0 - V_0 + \eta_0 \equiv I_0,\;\; X_s \le R_s \equiv I_s \;\; \forall s \right\},$$

where $R_s = \sum_k a_k R_{ks}$ is the payout to the portfolio of securities in each state $s$.3 Consumers have no labour income in the second period, and no income from capital assets such as houses and land. Thus, in the first period they allocate their wealth to current consumption expenditure ($X_0$) and save the rest by purchasing a portfolio of securities with payouts to fund future consumption expenditure ($X_s$). This allows us to write the indirect utility function over future consumption expenditure as $v(R_1, \ldots, R_S)$.4 In the model, consumers are assigned the time-separable von Neumann–Morgenstern expected utility function in (3.13), and security returns are jointly normally distributed. This allows us to summarize their preferences for future consumption using the means and variances in the returns on their portfolios, with $V(\bar{i}_P, \sigma_P)$, where $\bar{i}_P$ is the expected return on portfolio $P$ and $\sigma_P$ its standard deviation.5 The indifference schedules for this utility function are illustrated in Figure 3.10.

Box 4.1 Average annual returns on securities with different risk

By comparing the differences in the expected returns to stocks and bonds we can see how large the risk premium on equity is and how much it varies over time. The following data summarize the average premium paid to equity over long-term (10-year) US government bonds and short-term (six-month) US Treasury bills for the period 1951–2001. There are eight separate countries plus the Europe, Australasia and the Far East (EAFE) Index and the Morgan Stanley Capital International (MSCI) World Index, where the equity returns are measured for the broadest index available in each country. Based on these comparisons shares are riskier than bonds, and long-term bonds are riskier than short-term bonds.
Country            Equity-bond premium (%)    Equity-bill premium (%)
Australia                  4.57                       5.75
Canada                     2.29                       3.23
France                     3.85                       5.21
Germany                    3.11                       5.30
Italy                      1.38                       2.42
Japan                      4.57                       6.52
United Kingdom             4.79                       5.79
United States              5.25                       6.28
Europe                     5.24                       6.17
EAFE                       4.78                       5.71
MSCI                       4.52                       5.45

But these differences are somewhat misleading as they are based on nominal (geometric) returns and therefore do not account for the different effects of inflation on stocks and bonds. Real risk premiums are summarized below for a subset of these countries over the period 1925–2001. Notice how bonds outperform equity in Canada and Japan in the period 1979–2001. In some years equity and bonds paid negative real returns.

Country             …         …       1979–2001
Australia          3.74      7.00       0.98
Canada              —        7.00      –1.74
France             8.38      5.72       2.94
Germany            8.58      5.01       3.13
Italy              9.42      1.91       1.45
Japan              7.12     11.02      –1.80
United Kingdom     0.94      4.89       5.01
United States      2.94      7.62       3.99

Data source: Taylor (2007).

4.1.2 Financial investment opportunity set

Now we examine the investment opportunity set for investors with mean–variance preferences. This identifies the largest expected return that can be achieved at each level of risk by bundling together traded securities. As noted above, we follow Copeland and Weston by developing this budget constraint in the CAPM in stages to provide insight into the role of diversification, and to clarify the reason why all investors ultimately measure and price risk identically. The budget constraint is derived separately for:

i   two risky securities;
ii  one risky security and one risk-free security;
iii many risky securities;
iv  many risky securities and one risk-free security.

The last of these steps provides the budget constraint in the CAPM, which is referred to as the capital market line.

Two risky securities

The random payouts on two risky securities (A and B) are summarized in Table 4.1, together with their corresponding probabilities.
The mean–variance consumption opportunities from holding one or other of the securities are illustrated in Figure 4.1. Since the returns on these assets do not move together, it will be possible to diversify risk by bundling them together in portfolios, where the diversification effect determines the shape of the consumption opportunity set which must pass through points A and B in Figure 4.1. We determine the shape of the mean–variance frontier by marginally increasing the portion of asset A held in the portfolio and computing the change in its expected return ($\bar{i}_P$) over the resulting change in its standard deviation ($\sigma_P$).

Table 4.1 Random returns on securities A and B

State probabilities          0.30    0.20    0.40    0.10
Returns on A                −0.15    0.50    0.10    0.50
Returns on B                 0.15    0.25   −0.15    0.10

                                A       B
Expected return (%)           14.5     4.5
Variance (%)                   6.5     2.7
Standard deviation (%)        25.4    16.5

If we define $a$ as the portion of asset A held in the portfolio ($P$) and $1 - a$ as the remaining portion held in asset B, the expected return on the portfolio is

$$\bar{i}_P = a\,\bar{i}_A + (1 - a)\,\bar{i}_B.$$

It has a variance of

$$\sigma_P^2 = a^2\sigma_A^2 + (1 - a)^2\sigma_B^2 + 2a(1 - a)\sigma_{AB}, \tag{4.3}$$

with $\sigma_{AB} = \mathrm{Cov}(\tilde{i}_A, \tilde{i}_B) = E[(\tilde{i}_A - \bar{i}_A)(\tilde{i}_B - \bar{i}_B)]$ being the covariance of the asset returns.

[Figure 4.1 Investment opportunities with two risky securities.]

The diversification effect from bundling the securities together is determined by their coefficient of correlation, which is

$$\rho_{AB} = \mathrm{Corr}(\tilde{i}_A, \tilde{i}_B) = \frac{\sigma_{AB}}{\sigma_A \sigma_B}. \tag{4.4}$$

If the asset returns are perfectly positively correlated, with $\rho_{AB} = +1$, there is no diversification effect, while at the other extreme, if they are perfectly negatively correlated, with $\rho_{AB} = -1$, complete diversification is possible. Thus, there is a diversification effect whenever this coefficient is less than +1, and it increases as the coefficient approaches −1. We now derive the mean–variance frontier at each of these bounds to establish its shape in Figure 4.1 for more realistic interim values of the coefficient of correlation.
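The summary statistics in Table 4.1 can be recomputed directly from the state returns and probabilities; the following sketch checks the means, standard deviations, covariance and correlation used through the rest of this section (returns expressed as decimals).

```python
# State probabilities and returns for securities A and B from Table 4.1.
probs = [0.30, 0.20, 0.40, 0.10]
rA = [-0.15, 0.50, 0.10, 0.50]
rB = [0.15, 0.25, -0.15, 0.10]

def mean(r):
    # Probability-weighted expected return.
    return sum(p * x for p, x in zip(probs, r))

def cov(r, s):
    # Probability-weighted covariance; cov(r, r) is the variance.
    mr, ms = mean(r), mean(s)
    return sum(p * (x - mr) * (y - ms) for p, x, y in zip(probs, r, s))

iA, iB = mean(rA), mean(rB)                      # 0.145 and 0.045
sA, sB = cov(rA, rA) ** 0.5, cov(rB, rB) ** 0.5  # ~0.254 and 0.165
sAB = cov(rA, rB)                                # 0.010725
rho = sAB / (sA * sB)                            # ~0.2555, equation (4.4)
```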
We start with the case of no diversification ($\rho_{AB} = +1$). When asset returns move perfectly together they do not offset each other; an example is given in Figure 4.2. The returns are plotted over the random outcomes in the left-hand panel, and against each other in the right-hand panel. The positive linear relationship in the right-hand panel indicates they are perfectly positively correlated. Notice how in the left-hand panel the returns always move in the same direction even though they do not have the same deviation from their normalized common mean return ($\bar{i}$). This makes the relationship between them in the return space linear with a positive slope that is less than unity, as the returns on asset B are always larger.

[Figure 4.2 Perfectly positively correlated returns.]

In these circumstances we can use the definition of the coefficient of correlation in (4.4) to write the variance on the portfolio in (4.3) as

$$\sigma_P^2 = \left(a\sigma_A + (1 - a)\sigma_B\right)^2,$$

where the standard deviation in the return on the portfolio is the weighted sum of the standard deviations of the two asset returns, with $\sigma_P = a\sigma_A + (1 - a)\sigma_B$. Thus, the slope of the mean–variance frontier is constant, with

$$\frac{d\bar{i}_P/da}{d\sigma_P/da} = \frac{\bar{i}_A - \bar{i}_B}{\sigma_A - \sigma_B} > 0.$$

This is a line that passes through points A and B, as shown in Figure 4.3. Between these points the consumer is holding positive combinations of both assets, while above point A security B is sold to fund additional purchases of security A, and below point B security A is sold to fund the additional purchases of security B. Eventually, by going short in asset A, the standard deviation on the portfolio can be driven to zero. Further borrowing causes the standard deviation to rise along the dashed line, but this part of the frontier is dominated by points on the line vertically above, where for each level of risk the expected return on the portfolio is higher.
Thus, the dashed line is not part of the efficient mean–variance frontier, which maximises the expected return at each level of risk.

[Figure 4.3 Efficient mean–variance frontier with ρAB = +1.]

We now turn to the case of complete diversification with $\rho_{AB} = -1$. Since the asset returns move perfectly against each other, it is possible to construct a bundle with positive holdings of the two assets that eliminates the risk on the portfolio. In Figure 4.4 the random returns are plotted over all possible outcomes in the left-hand panel and against each other in the right-hand panel. Once again, the returns on asset B deviate more from the normalized common mean return ($\bar{i}$) than the returns on asset A, except that now they move in opposite directions. There is a linear relationship between them in the return space and it has a negative slope with an absolute value less than unity due to the larger deviations in the returns on asset B.

[Figure 4.4 Perfectly negatively correlated returns.]

With $\rho_{AB} = -1$, we can write the variance on the portfolio in (4.3) as

$$\sigma_P^2 = \left(a\sigma_A - (1 - a)\sigma_B\right)^2,$$

where the return on the portfolio has a standard deviation which is the weighted difference in the standard deviations of the returns on the two assets. Since the returns move perfectly against each other it is possible to eliminate risk in the portfolio by choosing $\hat{a} \approx 0.61$. This is the minimum variance portfolio (MVP), which has an expected return of 10.6 per cent, where the slope of the efficient mean–variance frontier is constant and changes sign either side of this bundle:

$$\frac{d\bar{i}_P/da}{d\sigma_P/da} = \begin{cases} \dfrac{\bar{i}_A - \bar{i}_B}{\sigma_A + \sigma_B} > 0 & \text{for } a > \hat{a}, \\[2ex] -\dfrac{\bar{i}_A - \bar{i}_B}{\sigma_A + \sigma_B} < 0 & \text{for } a < \hat{a}. \end{cases}$$

The efficient mean–variance frontier is illustrated in Figure 4.5 by the line with intercept 10.6 that passes through point A; it isolates the largest expected return on the portfolio at each level of risk.
Partial diversification with $-1 < \rho_{AB} < +1$ is more realistic, as there is normally some market risk in the economy that cannot be eliminated by the diversification effect. Examples of negatively and positively correlated returns are illustrated in the left- and right-hand panels, respectively, of Figure 4.6. The efficient frontier is non-linear in these circumstances because its slope is a function of the asset share, and it lies within the bounds established by the frontiers for the two extremes considered above. An example is shown in Figure 4.7 as the solid line from the MVP through point A.

[Figure 4.5 Efficient mean–variance frontier with ρAB = −1.]
[Figure 4.6 Partially correlated returns.]

The returns summarized in Table 4.1 have a covariance of $\sigma_{AB} = 0.010725$ and a coefficient of correlation of $\rho_{AB} \approx 0.2555$. The minimum variance portfolio ($\hat{a}$) is obtained using the portfolio variance for the securities in (4.3), as $\hat{a} \approx 0.234$.6 In other words, the variance in the portfolio is minimized by holding approximately 23.4 per cent of each dollar in security A and the remaining 76.6 per cent in security B. Now we are in a position to consider how consumers value risky assets A and B, when:

a  they have homogenous expectations (which gives them the same mean–variance frontier); and
b  there are no short-selling (borrowing) constraints (so they can trade along the efficient frontier beyond point A by selling asset B).

One risky security and one risk-free security

Consider a portfolio that combines risky security A with a risk-free security F paying a certain return of $i = 3.0$ per cent. With $\sigma_F = \sigma_{FA} = 0$ the portfolio risk is $\sigma_P = a\sigma_A$, so the slope of the mean–variance frontier is constant, with

$$\frac{d\bar{i}_P/da}{d\sigma_P/da} = \frac{\bar{i}_A - i}{\sigma_A} > 0.$$

This is illustrated by the line with intercept 3.0 passing through point A in Figure 4.9. As investors move away from security A into risk-free security F, the risk in their portfolio approaches zero. This is not a diversification effect, but rather a reduction in the share of the risky security in the portfolio.
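The minimum variance weight $\hat{a}$ reported above follows from the first-order condition of the portfolio variance in (4.3): setting $d\sigma_P^2/da = 0$ gives $\hat{a} = (\sigma_B^2 - \sigma_{AB})/(\sigma_A^2 + \sigma_B^2 - 2\sigma_{AB})$. A quick check with the Table 4.1 moments (as decimals):

```python
# Moments of the Table 4.1 securities (decimals): var_A ~ 0.254^2, etc.
var_A, var_B, cov_AB = 0.064725, 0.027225, 0.010725

# First-order condition of (4.3): d(sigma_P^2)/da = 0.
a_hat = (var_B - cov_AB) / (var_A + var_B - 2 * cov_AB)
print(round(a_hat, 3))  # 0.234
```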
Someone in the economy must bear the market risk in asset A, where the equilibrium security returns must equate the aggregate demands and supplies for both securities. The risk premium of 11.5 per cent for asset A is sufficient compensation to attract enough consumers to bear its market risk of 25.4 per cent. Now we consider how consumers measure and price the risk in security A, when they:

a  have homogenous expectations (and evaluate the means and variances on the two assets identically);
b  can trade a risk-free security;7 and
c  face no short-selling (borrowing) constraints (so they can trade along the efficient frontier beyond point A by selling the risk-free asset).

[Figure 4.9 Portfolios with a risk-free security (F).]

Since consumers face the same linear efficient frontier they will price and measure the risk in security A in the same way. In fact, the only risk they face is determined by the variance in the return on asset A, which they combine with the risk-free security according to their risk preferences. Examples of portfolios for two investors 1 and 2 with different risk preferences are shown in Figure 4.10, where individual 1 holds relatively more of the risk-free security. Some investors may trade beyond point A on the efficient frontier by borrowing at the risk-free rate. The consumption risk for each consumer is determined by the proportion of asset A they hold in their portfolio, with $\sigma_{P1} = a_1\sigma_A$ and $\sigma_{P2} = a_2\sigma_A$.8 As they face the same market risk ($\sigma_A = 25.4$) and have indifference curves with the same slope along the linear efficient frontier, they will measure and price risk in the same way. Thus, there is a common asset pricing model for all consumers in the economy, where the price of risk is determined by the slope of the frontier, with $(\bar{i}_A - i)/\sigma_A \approx 0.45$.
This simple example provides considerable insight into the CAPM, which is derived with many risky securities. Before taking that final step we examine the efficient mean–variance frontier with many risky securities and no risk-free security to analyse the diversification effect in a more realistic setting.

[Figure 4.10 Efficient mean–variance frontier with risky security A and risk-free security F.]

Many risky securities and no risk-free security

In practice many risky securities are sold by firms that bring different production risk to the capital market. An important role of the capital market is to allow investors to trade this risk and, where possible, to eliminate that part of it that is diversifiable by bundling securities in portfolios. In the absence of a risk-free security, the expected return on a risky portfolio drawn from K traded securities is

$$\bar{i}_P = \sum_{k=1}^{K} a_k \bar{i}_k,$$

where $a_k$ is the proportion of each dollar of saving allocated to security $k = 1, \ldots, K$, with $\sum_k a_k = 1$. The variance in this portfolio return is

$$\sigma_P^2 = \sum_{k=1}^{K}\sum_{j=1}^{K} a_k a_j \sigma_{kj}.$$

Clearly, the number of covariance terms has expanded from the example considered earlier with two risky securities A and B. This is best illustrated by writing the variance on the portfolio return, using the variance–covariance matrix, as

$$\sigma_P^2 = \begin{bmatrix} a_1 & \cdots & a_K \end{bmatrix}\begin{bmatrix} \sigma_1^2 & \cdots & \sigma_{1K} \\ \vdots & \ddots & \vdots \\ \sigma_{K1} & \cdots & \sigma_K^2 \end{bmatrix}\begin{bmatrix} a_1 \\ \vdots \\ a_K \end{bmatrix}.$$

There are as many variance terms as assets (K) along the diagonal of the variance–covariance matrix, but $K^2 - K$ covariance terms off the diagonal. The covariance terms determine the size of the diversification effect, and empirical estimates using stock market data suggest that most of the diversifiable risk can be eliminated from portfolios by bundling 15–20 securities together.
This is illustrated in Figure 4.11, where the variance on the returns to optimally chosen portfolios approaches the non-diversifiable (market) risk as the number of securities in the portfolio rises. Ultimately market risk is the risk in aggregate consumption, and securities pay investors a risk premium as compensation for bearing it. In an equilibrium this premium equates the aggregate demand for and supply of market risk, which emanates from the production activities of firms.

[Figure 4.11 Portfolio risk and number of securities.]

In the absence of a risk-free security the efficient mean–variance frontier for K risky securities with partial diversification is illustrated by the solid curve starting at the MVP in Figure 4.12. Expected returns and standard deviations for all traded securities must lie on or inside the mean–variance frontier.

[Figure 4.12 Efficient mean–variance frontier with many risky securities.]

In the CAPM setting investors have homogenous expectations and face the same efficient mean–variance frontier. However, they will measure and price the risk in traded securities differently when they hold different portfolios. Two representative investors are illustrated in Figure 4.13, where individual 2 has a more risky portfolio, with $\sigma_{P2} > \sigma_{P1}$. As was the case previously with two risky securities A and B, consumers compute the risk premium for any risky security k by its contribution to the risk in their portfolio. They then value this risk using the slopes of their indifference curves at consumption points 1 and 2 in Figure 4.13.

[Figure 4.13 Portfolios with many risky securities.]

Since they have different market portfolios and different marginal valuations for risk, the asset pricing model is agent-specific. Even though consumers see the same risk premium on each security they do not decompose it in the same way.
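The quadratic form $\sigma_P^2 = a'Va$ above is straightforward to compute directly; the sketch below does so in pure Python, and the two-security case reproduces the variance formula in (4.3).

```python
def portfolio_variance(a, V):
    """Return a' V a for weights a and variance-covariance matrix V."""
    K = len(a)
    return sum(a[k] * a[j] * V[k][j] for k in range(K) for j in range(K))

# Two-security check against (4.3), using the Table 4.1 moments (decimals):
V = [[0.064725, 0.010725],
     [0.010725, 0.027225]]
a = [0.5, 0.5]
print(round(portfolio_variance(a, V), 6))  # 0.02835
```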
Many risky securities and one risk-free security

These are the trading opportunities in the CAPM, where expected returns on security portfolios are

$$\bar{i}_P = a_M \bar{i}_M + (1 - a_M)i,$$

with $a_M$ being the proportion of saving invested in a bundle of risky securities (M) and $1 - a_M$ the remaining proportion invested in risk-free security F. The market portfolio is a derivative security constructed from positive combinations of the K risky traded securities. Since the return on the risk-free security is certain, we have $\sigma_F = \sigma_{FM} = 0$, where the variance on the returns to investor portfolios becomes

$$\sigma_P^2 = a_M^2\sigma_M^2 + (1 - a_M)^2\sigma_F^2 + 2a_M(1 - a_M)\sigma_{MF} = a_M^2\sigma_M^2.$$

By combining risky bundle M with the risk-free security, the slope of the mean–variance consumption opportunity frontier is constant, with

$$\frac{d\bar{i}_P/da_M}{d\sigma_P/da_M} = \frac{\bar{i}_M - i}{\sigma_M} > 0.$$

This is referred to as the capital market line (CML), and is illustrated in Figure 4.14. In the CAPM every investor faces the same CML where:

a  they have homogenous expectations (and therefore see the same risky efficient mean–variance frontier);
b  there are no short-selling (borrowing) constraints (so they can trade along the CML beyond point M by selling the risk-free security);
c  they can trade a risk-free security; and
d  there are no taxes or transactions costs.

[Figure 4.14 Capital market line.]

Since investors face the same linear efficient mean–variance frontier they all choose the same risky bundle, called the market (M) portfolio, and have the same marginal valuation for risk. Thus, they measure and price risk identically. In particular, they measure risk by the standard deviation in the return on the market portfolio ($\sigma_M$), and price it using the slope of the CML, which is

$$\frac{d\bar{i}_P}{d\sigma_P} = \frac{\bar{i}_M - i}{\sigma_M}.$$

It is the premium that equates the aggregate demand for and supply of every traded security in the capital market.
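The CML can be sketched with illustrative numbers (hypothetical, not from the text): every portfolio mixing the market bundle M with the risk-free security F lies on the line $\bar{i}_P = i + [(\bar{i}_M - i)/\sigma_M]\,\sigma_P$.

```python
# Illustrative (hypothetical) parameters: market mean return 12%, market
# risk 20%, risk-free rate 4%.
i_M, sigma_M, i_f = 12.0, 20.0, 4.0

price_of_risk = (i_M - i_f) / sigma_M  # slope of the CML: 0.4

def cml_portfolio(a_M):
    """Mean return and risk of a portfolio with share a_M in bundle M."""
    return i_f + a_M * (i_M - i_f), a_M * sigma_M

mean_P, risk_P = cml_portfolio(0.5)    # (8.0, 10.0)
# Every such portfolio satisfies mean_P = i_f + price_of_risk * risk_P,
# including leveraged positions with a_M > 1 (borrowing at the risk-free rate).
```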
Suppose every investor in the economy becomes marginally less risk-averse, moving their consumption bundles up along the CML. This creates an excess demand for risky bundle M and an excess supply of risk-free security F. A plausible outcome would see a higher interest rate and a flatter CML as the risk premium falls. In fact, the efficient mean–variance frontier for risky securities would fall when investors are willing to bear the same market risk at a lower risk premium. As a consequence, the market portfolio is likely to change as firms adjust their investment choices in response to the lower cost of capital. In the new equilibrium these adjustments would once again equate the demands for and supplies of all the traded securities in the economy. That is why the CAPM is frequently referred to as an equilibrium asset pricing model. Two-fund separation holds in the CAPM because every investor bundles risky securities into the same derivative asset (M) which they combine with the risk-free security (F). They choose different amounts of market risk based on their risk preferences, where relatively less risk-averse investors hold more of risky bundle M in their portfolio. Indeed, some investors may even choose to borrow at the risk-free rate to increase their holding of the derived risky bundle and trade beyond point M along the CML in Figure 4.14. But they all face the same market risk ($\sigma_M$). In other words, they have the same consumption risk. The final step is to price each risky traded security held in the market portfolio. In the CAPM no risky security is held outside it, and all risk emanates from the underlying production risk in the economy, which attracts a risk premium when consumers are risk-averse.

4.1.3 Security market line – the CAPM equation

The asset pricing equation in the CAPM was derived independently by Sharpe (1964) and Lintner (1965). We provide an informal derivation here to draw out the economic intuition.
A formal derivation is provided later in Section 4.2.1. Consider a portfolio which combines one of the risky securities (k) with the market portfolio, where the expected return on this portfolio is

$$\bar{i}_P = a_k \bar{i}_k + (1 - a_k)\bar{i}_M.$$

It has a variance of

$$\sigma_P^2 = a_k^2\sigma_k^2 + (1 - a_k)^2\sigma_M^2 + 2a_k(1 - a_k)\sigma_{kM}.$$

Think of $a_k$ as the excess demand for security k, and then raise it marginally to evaluate its impact on the slope of the efficient mean–variance frontier for risky securities with $a_k = 0$. This experiment tells us how much risk security k contributes to risky bundle M; it is the slope of the efficient mean–variance frontier for the risky securities at point M in Figure 4.14, where

$$\frac{d\bar{i}_P/da_k}{d\sigma_P/da_k} = \frac{\bar{i}_k - \bar{i}_M}{(\sigma_{kM} - \sigma_M^2)/\sigma_M}. \tag{4.19}$$

When every asset k is optimally held inside the market portfolio its contribution to market risk is equal to the premium for market risk, which is the slope of the CML:

$$\underbrace{\frac{\bar{i}_k - \bar{i}_M}{(\sigma_{kM} - \sigma_M^2)/\sigma_M}}_{\text{slope of risky efficient frontier}} = \underbrace{\frac{\bar{i}_M - i}{\sigma_M}}_{\text{slope of CML}}.$$

By rearranging these terms, we obtain the CAPM pricing equation,

$$\bar{i}_k = i + (\bar{i}_M - i)\beta_k, \tag{4.21}$$

where $\beta_k = \sigma_{kM}/\sigma_M^2$ is the beta coefficient that measures the amount of market risk in security k. It is referred to as the security market line (SML) and is based on two sets of assumptions. The first set relates to preferences:

•  Consumers are risk-averse, have homogeneous expectations and maximize NMEU functions in a two-period setting.
•  Future consumption is funded solely from returns to portfolios of securities.
•  Security returns are jointly normally distributed.

The second set are concerned with the budget constraint (CML):

•  Consumers have homogeneous expectations.
•  There are no borrowing constraints.
•  A risk-free security exists.
•  The capital market is competitive and frictionless (to rule out taxes and transactions costs).

It is easy to see why the CAPM is a popular model.
The risk premium is based on the return to the common market portfolio, which is normally estimated from time series data for a broadly based (value-weighted) price index of publicly traded stocks. In other words, consumption risk is identified by a single factor in the model. If any security k is a perfect substitute for the market portfolio, then $\beta_k = 1$ and the pricing equation in (4.21) collapses to $\bar{i}_k = \bar{i}_M$. Notice how the market risk in the return to security k is determined by its covariance with the return on the market portfolio. Using the coefficient of correlation defined in (4.4), we can write the beta coefficient as $\beta_k = \rho_{kM}\sigma_k/\sigma_M$. When the return on security k is perfectly correlated with the return on the market portfolio, with $\rho_{kM} = 1$, then we must have $\sigma_k = \sigma_M$. But assets with a higher standard deviation ($\sigma_k > \sigma_M$) can also have $\beta_k = 1$ if the extra risk is diversifiable. The SML in (4.21) is illustrated in Figure 4.15. By arbitrage, all expected security returns must lie on the SML. In other words, the no arbitrage condition holds in the CAPM, where the only differences in expected returns must be due solely to differences in market risk.

[Figure 4.15 Security market line.]

To see this, consider the risky security G with beta coefficient $\beta_G$. If its expected return lies above the SML at point E it pays economic profit. As investors increase demand for this security its price rises until the expected return is driven down onto the SML. Conversely, if investors expect the return to be at point F where no risk premium is being paid, the fall in demand for the security drives down its price until its expected return rises onto the SML.

Box 4.2 The CAPM pricing equation (SML): a numerical example

The following financial data is taken from an imaginary economy where the CAPM holds.
We use the share price index as the market portfolio held by all consumers, where its return in each year is computed by summing capital gains to the dividend yield. This index is a broadly based value-weighted index with weights equal to the market value of each firm's equity as a proportion of the total value of equity traded.

Year      Share price index   Price change (%)   Dividend yield (%)   Market return (%)   Treasury bill rate (%)   Deadlock return (%)
2006      5821.00             18.2               2.5                  20.70               6.2                      7.55
2005      4924.70             14.3               3.0                  17.30               6.0                      6.29
2004      4308.57             12.5               4.5                  17.00               5.8                      9.12
2003      3829.84             −6.4               1.5                  −4.90               5.4                      1.76
2002      4091.71             −8.8               0.0                  −8.80               5.6                      2.39
2001      4486.52             10.3               2.0                  12.30               5.6                      9.59
2000      4067.56             15.6               2.8                  18.40               5.4                      11.57
1999      3518.65             —                  —                    —                   —                        —
Mean      —                   7.96               2.33                 10.29               5.71                     6.90
Variance  —                   102.37             1.66                 123.93              0.08                     11.65

Using this data we find that the CAPM pricing equation is:

i_k = 5.71 + 4.58 β_k,

with i ≈ 5.71, i_M − i ≈ 4.58 and β_k = Cov(i_M, i_k)/Var(i_M). Since the return on a Deadlock share has a covariance with the return on the market portfolio of σ_DM ≈ 32.12, we can decompose its expected return, using the pricing equation, as

i_D = 5.71 + 1.19 ≈ 6.90 per cent,

where β_D = σ_DM / σ_M² = 32.12/123.93 ≈ 0.26 is the amount of market risk it contributes to consumption expenditure. Thus, Deadlock shares pay a risk premium of 4.58 β_D ≈ 1.19 per cent.

Box 4.3 Numerical estimates of beta coefficients by sector

Beta books can be purchased in most countries. They provide estimates of the beta coefficients for all publicly listed companies where the return on the market portfolio is normally computed using time-series data for a broadly based value-weighted share price index such as the Standard & Poor's 500 in the United States or the All Ordinaries Index in Australia. The following table summarizes average beta coefficients for publicly listed companies trading on the Australian Securities Exchange. They are reported as average coefficients for 20 sectors in the economy.
The food, beverages and tobacco sector has the lowest beta coefficient at 0.57, while the highest is 1.37 in the insurance sector.

Banks                                 0.78
Capital goods                         0.99
Commercial services and supplies      1.21
Consumer durables and apparel         1.35
Consumer services                     0.91
Diversified financials                0.78
Energy                                1.15
Food and staples retailing            0.61
Food beverage and tobacco             0.57
Health care and equipment services    0.96
Insurance                             1.37
Materials                             1.16
Media                                 0.98
Real estate                           0.91
Retailing                             0.93
Software and services                 1.28
Technology hardware and equipment     0.85
Telecommunication services            0.35
Transportation                        0.91
Utilities                             0.35
Market                                1.00

Source: Based on financial data taken from Aspect Financial Analysis on 17 May 2007. This database is produced by Aspect Huntley Pty Ltd.

4.1.4 Relaxing the assumptions in the CAPM

Two important features of the CAPM make it popular:

i  Expected security returns are linear in a single risk factor.
ii All investors measure and price risk identically.

The beta books that are published in most countries are evidence of its popularity. They provide estimates of the beta coefficients for securities listed on the stock exchange. But it is important that financial analysts understand the assumptions in the CAPM and what role they play. We do this here by relaxing some of the main assumptions one at a time, while the results from empirical tests of the model are summarized later in Section 4.5. With risk-neutral investors the indifference schedules are horizontal lines in the mean–variance space as no additional compensation is required for increases in risk. The equilibrium outcome is illustrated in Figure 4.16 where the CML is also horizontal at the risk-free return. All securities pay the risk-free return (i), where the SML in (4.21) collapses to i_k = i for all k. There is considerable evidence to suggest that risk aversion is a robust assumption in the CAPM.

Figure 4.16 Risk-neutral investors.
When security returns are not jointly normally distributed we cannot, in general, describe the distributions of returns on portfolios solely by their means and variances. Indeed, it may take additional moments of the distribution to fully describe the returns on portfolios, and risk may not be linearly related to them. One way to rescue the CAPM is to adopt quadratic preferences, where investors only care about the means and variances in their portfolio returns. But placing restrictions on preferences is much less appealing. A number of empirical studies have tested security returns on portfolios to see whether they are jointly normally distributed. Fama (1965) did so for securities traded on the New York Stock Exchange and found they were symmetric with fat tails. In other words, they are approximately bell-shaped with infinite variances. If investors have heterogeneous expectations they will not observe the same mean–variance frontier for risky securities. This is illustrated in Figure 4.17, where two consumers have different capital market lines and choose different market portfolios. Thus, they measure and price risk differently, with:

i_k^h = i + (i_M^h − i) β_k^h, for h ∈ {1, 2}.

Figure 4.17 Heterogeneous expectations.

Differences in expectations normally result from costly information, which can compromise the competition assumption and the mutuality principle that both apply in the CAPM. With borrowing constraints that restrict sales of the risk-free security the CML becomes non-linear at the point where the constraint binds. The efficient mean–variance frontier is illustrated in Figure 4.18 where no borrowing is allowed at the risk-free rate. It is the CML up to the market portfolio, then it becomes the efficient mean–variance frontier for risky securities.
Once investors locate beyond point M on the efficient frontier they hold different risky bundles and therefore measure and price risk differently. All other investors measure and price risk identically as they hold risky bundle M and have the same marginal valuation for risk along the linear segment of the frontier between points i and M in the diagram. Borrowing constraints may also limit arbitrage activities that drive profits from security returns. In a competitive capital market a perfect substitute can be created for every security by bundling together existing traded securities, where arbitrage equates the expected return on the security with the return on its derivative. When borrowing constraints restrict the ability of traders to create these derivative securities the competition assumption may fail to hold, and there may be profits in security returns, which is not consistent with the CAPM pricing equation in (4.21). In the absence of a risk-free security, investors hold different risky bundles on the non-linear efficient mean–variance frontier. Thus, they measure and price risk differently. Black (1972) argues the CAPM can be rescued in these circumstances when investors create derivative securities with no market risk in them as replacements for the risk-free security. They are referred to as zero-beta securities because they have β_Z = 0. These derivatives are normally created by shorting some securities and going long in others, where the CAPM pricing equation becomes

i_k = i_Z + (i_M − i_Z) β_k.

Unfortunately the Z security is not unique. Indeed, there are different ones for each market portfolio on the risky efficient mean–variance frontier. An example with two market portfolios is given in Figure 4.19.

Figure 4.18 No borrowing.

Figure 4.19 Zero beta securities.
Elton and Gruber (1995) manage to derive an aggregated CAPM pricing equation under these circumstances where the market portfolio and the Z security are the weighted sum of the individual investor market portfolios and Z securities, which are both drawn from the same set of risky traded securities. Clearly, it is a much more difficult pricing equation to estimate and use in applied work. When there are taxes on security returns, investors choose the same risky market portfolio when they face the same after-tax CML. They can have different tax rates on different types of securities, but they must be the same for all investors. An example is given in Figure 4.20, where the tax rate on interest is higher than the tax rate on returns to all the risky securities held in the market portfolio. Since investors face the same before-tax (BT) and after-tax (AT) capital market lines they choose the same market portfolio. However, when they face different marginal tax rates on the same security returns they have different after-tax capital market lines and choose different market portfolios. Elton and Gruber (1995) and Brennan (1970) derive an aggregated CAPM pricing equation where the market portfolio is determined by the weighted after-tax returns on the risky portfolios chosen by investors. Clearly, it is much harder to compute than the simple CAPM pricing equation without taxes in (4.21). The effects of transactions costs on the CAPM are similar to income taxes when they distort security returns, but they differ by using resources rather than transferring them as tax revenue. They also make it costly to eliminate diversifiable risk, where any marginal costs incurred must be included in the asset pricing equation.

Figure 4.20 Income taxes.
Asymmetric information is much more likely when information is costly to acquire, where investors with different costs will likely have different information and form different expectations about security returns. Most popular asset pricing models assume there are no transactions costs, based largely on the view that institutional investors, who are specialist traders in the capital market, have low marginal transactions costs. In these circumstances they create risky mutual funds for individual consumers facing higher costs due to the relatively small value of their security trades. A number of interesting puzzles arise in finance when equilibrium outcomes are examined in models without trading costs. For example, firms pay no dividends to fully taxable shareholders in economies with a classical corporate tax system. This so-called dividend puzzle is examined later in Chapter 7, where one of the explanations relies on differential trading costs on paying dividends and capital gains. The CAPM holds when there are more than two time periods if the interest rate and relative commodity prices are constant and security returns are independently and identically distributed over time. This leaves consumers facing the same efficient mean–variance frontier, and the same real income, in every time period. Once consumption risk changes over time the CAPM fails to hold. Merton (1973a) extends the CAPM to the intertemporal setting by adding additional factors in the pricing equation to explain changes in market risk. This is the intertemporal CAPM which we examine later in Section 4.3.2.

4.2 Arbitrage pricing theory

One of the less attractive features of the CAPM is that it predicts every consumer will hold the same risky portfolio. It also relies on security returns being jointly normally distributed and consumers holding all their net wealth in financial securities.
In response to these concerns Ross (1976) derives a pricing equation by isolating the common component of changes in security returns using a linear factor analysis. As the name suggests, the APT relies crucially on arbitrage to eliminate any profits from security returns and to provide investors with the ability to eliminate idiosyncratic risk from their portfolios. An important starting point is the assumption that security returns can be fully described by a linear factor model, where the random return on any traded security k is related to g = 1, ..., G factors and noise, with:

ĩ_k = i_k + β_k1 f̃_1 + … + β_kG f̃_G + ε̃_k for all k ∈ K,⁹   (4.22)

where β_kg is the sensitivity of the return on security k to the risk isolated by factor g, f̃_g the deviation in the value of factor g from its mean value (with f̃_g = F̃_g − F̄_g and E(f̃_g) = 0 for all g), and ε̃_k an error term (with E(ε̃_k) = 0 for all k). When (4.22) is used as a regression equation the factor deviations are uncorrelated with each other (Cov(f̃_g, f̃_j) = 0 for all g ≠ j), and the model describes the returns to securities, and not just any arbitrary set of returns, when the error terms are uncorrelated across securities (with E(ε̃_k ε̃_j) = 0 for all k ≠ j).¹⁰ To simplify the analysis, we report the factor deviations as rates of return on their mimicking factor portfolios (with f̃_g = ĩ_g − i_g for all g). This makes the sensitivity coefficients in (4.22) standard beta coefficients, with β_kg = Cov(ĩ_k, ĩ_g)/Var(ĩ_g). Each mimicking portfolio is a derivative security with unit sensitivity for one factor and zero sensitivity for all others. Thus, their risk premiums are market premiums for the risk isolated by each factor. It is important to note that (4.22) is not a functional relationship as the factors are not necessarily the source of aggregate uncertainty in security returns.
Rather, they are correlated with it, and are typically macroeconomic variables such as industrial production and inflation, where deviations in security returns from their expected values are due to deviations in the values of these common factors from their means plus noise. Since factor risk is non-diversifiable it attracts a premium, while the noise, which can be diversified away inside portfolios with a large number of securities, attracts no premium. A pricing equation for the APT is derived by first estimating the beta coefficients in (4.22) using a statistical analysis and then pricing the factor risk using the law of one price in a frictionless competitive capital market where the no arbitrage condition holds. The risk premium for each factor g is obtained by constructing a mimicking portfolio and deducting the risk-free interest rate from its expected return, with i_g − i for all g. A formal derivation of the pricing equation is provided below in Section 4.3.3 where it is obtained as a special case of the CBPM in (3.17). An intuitive derivation is provided here by demonstrating the properties and assumptions in the model, in particular the role of arbitrage. We begin by creating a risk-free arbitrage portfolio (A) with no initial wealth, where:

Σ_{k=1}^K a_k^A = 0.

Using the linear factor model in (4.22) the random return on this portfolio is

ĩ_A = Σ_{k=1}^K a_k^A ĩ_k = Σ_{k=1}^K a_k^A i_k + Σ_{g=1}^G Σ_{k=1}^K a_k^A β_kg f̃_g + Σ_{k=1}^K a_k^A ε̃_k.

For it to be risk-free the security weights must be chosen to eliminate the factor risk in the second term, with Σ_k a_k^A β_kg = 0 for each factor g, and there must be enough securities (K) in the portfolio to eliminate idiosyncratic risk in the third term, with Σ_k a_k^A ε̃_k = 0. As the number of securities in the arbitrage portfolio increases, the weight for each security becomes smaller, thereby eliminating the diversifiable risk. In these circumstances the return on the arbitrage portfolio is non-stochastic, with

i_A = Σ_{k=1}^K a_k^A i_k = 0.
Thus, when the no arbitrage condition holds all profits are eliminated from security returns, where the return on the arbitrage portfolio, which is constructed with no initial wealth, must be zero.¹¹ By using the properties of linear algebra, the three orthogonality conditions, Σ_k a_k^A = 0, Σ_k a_k^A β_kg = 0 and Σ_k a_k^A ε̃_k = 0, impose a linear relationship on the coefficients for the portfolio weights, with

i_k = λ_0 + Σ_g λ_g β_gk,   (4.26)

where λ_0 and λ_g are non-zero constants.¹² And the constants are themselves rates of return, which is confirmed by using (4.26) for the risk-free security (F), with β_Fg = 0, where λ_0 = i, and for the mimicking factor portfolios, with β_kg = 1 for all k = g and β_kg = 0 for all k ≠ g, where λ_g = i_g − i. After substitution, we have the APT pricing equation,

i_k − i = Σ_g λ_g β_gk for all k,   (4.27)

where λ_g = i_g − i is the risk premium for factor g risk and β_gk = Cov(ĩ_g, ĩ_k)/Var(ĩ_g) the beta coefficient that measures its contribution to the market risk in security k. It is based on the following assumptions:

• Consumers are risk-averse with homogeneous expectations.
• Security returns are described by a linear factor model.
• The law of one price holds.
• There is a risk-free security.

Notice that this pricing equation has a similar structure to the CAPM equation in (4.21). Investors with homogeneous expectations measure and price risk identically and therefore use the same factors (state variables) to identify market risk. The difference between the models is that the APT does not require jointly normally distributed asset returns, and risk is isolated using more than one factor. Unfortunately, however, the factors are not identified in the model. Instead, they are identified empirically by using data to find the best fit for the linear factor model in (4.22). Some analysts find the APT model more appealing as the risk factors are normally macroeconomic variables that investors monitor to evaluate economic activity.
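The APT pricing equation in (4.27) can be illustrated numerically. The sketch below reproduces the two-factor calculation of Box 4.5 below, taking expectations over four states with the given probabilities; variable names are illustrative, and the data are those reported in the box.

```python
# Sketch: the APT pricing equation (4.27) with two factor-mimicking
# portfolios, using the state-contingent returns of Box 4.5.

probs = [0.25, 0.30, 0.20, 0.25]
f_Y = [-5.00, 25.00, -15.00, 35.00]    # industrial-production mimicking portfolio (%)
f_P = [34.50, 3.90, -5.00, 18.16]      # unanticipated-inflation mimicking portfolio (%)
alpha = [35.0, 18.0, -24.0, 28.0]      # Alpha share returns (%)
i_free = 6.25                          # risk-free rate (%)

def e(xs):
    # expectation over states
    return sum(p * x for p, x in zip(probs, xs))

def cov(xs, ys):
    mx, my = e(xs), e(ys)
    return e([(x - mx) * (y - my) for x, y in zip(xs, ys)])

# Beta of security k on factor g: Cov(i_k, i_g) / Var(i_g)
b_AY = cov(alpha, f_Y) / cov(f_Y, f_Y)     # ~0.53
b_AP = cov(alpha, f_P) / cov(f_P, f_P)     # ~1.21

# APT: required return = risk-free rate + sum of factor premiums * betas
i_A = i_free + (e(f_Y) - i_free) * b_AY + (e(f_P) - i_free) * b_AP
print(round(i_A, 2))   # ~17.86, above Alpha's mean return of 16.35
```

The result agrees with the box to within rounding (the box rounds the betas to two decimal places before pricing, giving 17.87).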
Chen et al. (1986) use US data to find four suitable candidates in the index of industrial production, changes in default risk premiums, differences in the yields on short- and long-term government bonds, and unanticipated inflation. Without a common set of factors the APT equation becomes an agent-specific pricing model.

Box 4.4 The CAPM as a special case of the APT

When the return on the market portfolio is the single factor that isolates market risk in the APT model, we have from (4.27) that, for any security k,

i_k = i + (i_M − i) β_kM,

with β_kM = Cov(ĩ_k, ĩ_M)/Var(ĩ_M). This is the CAPM pricing equation in (4.21) where the linearity comes from the linear factor model and not from assuming security returns are jointly normally distributed. The single factor is much more likely when consumers fund all their future consumption from returns to portfolios of risky securities. If, in a two-period setting, asset returns are jointly normally distributed and consumer income is restricted to the returns on portfolios of securities, the APT equation in (4.27) collapses to the CAPM model in (4.21), where the variance in the return to the market portfolio is the sole factor in the model.

4.2.1 No arbitrage condition

Arbitrage plays an important role in the derivation of the APT model. But it is no less important in the CAPM, or, for that matter, any of the other equilibrium asset pricing models we examine later. They all rely on arbitrage to eliminate profits from expected security

Box 4.5 The APT pricing equation: a numerical example

Suppose we undertake an empirical analysis and find security returns are isolated using two factors – an index of industrial production (Y) and unanticipated inflation (P). The state-contingent returns on factor mimicking portfolios and two securities, Alpha (A) and Bastion (B), are summarized below.
Returns (%)

State     Probability   Factor portfolio Y   Factor portfolio P   Alpha (A)   Bastion (B)
1         0.25          −5.00                34.50                35.0        5.0
2         0.30          25.00                3.90                 18.0        25.0
3         0.20          −15.00               −5.00                −24.0       −10.0
4         0.25          35.00                18.16                28.0        40.0
Mean                    12.00                13.34                16.35       16.75
Variance                401.00               211.75               447.33      333.19
σ_kY                                                              212.05      360.25
σ_kP                                                              256.03      40.61
β_kY                                                              0.53        0.90
β_kP                                                              1.21        0.19

with σ_YP = 0 and a risk-free rate of i = 6.25 per cent.

Since each factor portfolio isolates the risk for a single factor, with β_YY = β_PP = 1 and β_YP = β_PY = 0, their returns have zero covariance with each other, where Cov(f̃_Y, f̃_P) = E(f̃_Y f̃_P) = Cov(ĩ_Y, ĩ_P) = σ_YP = 0. When these factors isolate all the market risk there are no residuals in the expected returns to securities, where the APT pricing equation becomes:

i_k = 6.25 + (12.00 − 6.25) β_kY + (13.34 − 6.25) β_kP.

The premium for market risk isolated by the index of industrial production is 5.75 per cent, while it is 7.09 per cent for unanticipated inflation. After substituting the beta coefficients for the two shares, Alpha and Bastion, we find they have expected returns of:

i_A = 6.25 + (12.00 − 6.25) 0.53 + (13.34 − 6.25) 1.21 ≈ 17.87,
i_B = 6.25 + (12.00 − 6.25) 0.90 + (13.34 − 6.25) 0.19 ≈ 12.77.

Based on these calculations, traders could make arbitrage profits by selling security A and buying security B when the APT equation above correctly predicts their expected returns, as we have i_A = 17.87 > 16.35 and i_B = 12.77 < 16.75.

returns, where any differences are explained by risk. This can be illustrated for the arbitrage trader (A) who maximizes profit π_A = R̄_k a_k^A + R̄_D a_D^A by constructing a risk-free portfolio with no cost to initial wealth, where p_ak a_k^A + p_aD a_D^A = 0. It combines a risky security (k) with its perfect substitute (D) created by bundling together other traded securities. The optimization problem is illustrated in Figure 4.21 when security D initially has a higher expected return. The budget constraint (W_A) is the solid line with slope −p_aD/p_ak, where every dollar allocated to one security must be financed by selling the other one.
The iso-profit lines (illustrated as dashed lines) isolate combinations of securities k and D that hold profit constant at π_A′. Initially they have a steeper slope (in absolute value terms) than the slope of the budget constraint, where profits are obtained by going long in security D and short in security k. When the consumer holds portfolio A′ by selling a_k^A′ dollars of security k to fund the purchase of a_D^A′ dollars of security D, the profit π_A′ is illustrated as distance 0B in Figure 4.21. In the absence of transactions costs the trader would maximize profit by being infinitely long in security D and infinitely short in security k. This process eliminates the excess return on security D by equating their expected returns, with R̄_k = R̄_D and p_ak = p_aD. It is the no arbitrage condition, where the iso-profit lines (π̂_A) have the same slope as the budget constraint.

Figure 4.21 Arbitrage profits.

4.3 Consumption-based pricing models

As noted in the introduction to this chapter, the CAPM and the APT are special cases of the consumption-based pricing model (CBPM) in (3.17). In a multi-period setting, the consumption-based pricing model is

E(m̃ R̃) = p_a,   (4.28)

where E(·) = Σ_s π_s(·) is the expectations operator conditioned on information at the beginning of the first period (t), m̃ = δU′(Ĩ_τ)/U′(I_t) the stochastic discount factor over period t to τ, R̃ the payouts to the K securities at τ, and p_a the vector of current security prices at t. It is based on the following important assumptions:

i   Consumers have time-separable NMEU functions with a constant rate of time preference.
ii  They have common expectations and conditional perfect foresight.
iii The capital market is competitive, frictionless and complete.
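The pricing relation E(m̃ R̃) = p_a in (4.28) ties every expected return to the asset's covariance with the discount factor. The toy sketch below, with purely illustrative numbers, takes a discount factor that is linear in the market return (the special case examined in the next section) and verifies that any asset priced by the CBPM then satisfies the CAPM beta relation exactly.

```python
# Toy sketch of the CBPM (4.28): with a stochastic discount factor linear in
# the market return, assets priced by p = E(m * payoff) lie exactly on the SML.
# All numbers are illustrative assumptions, not data from the text.

probs = [0.3, 0.4, 0.3]
i_M   = [-0.05, 0.08, 0.20]        # market return in each state
i_f   = 0.04                       # risk-free rate

def e(xs):
    return sum(p * x for p, x in zip(probs, xs))

def cov(xs, ys):
    mx, my = e(xs), e(ys)
    return e([(x - mx) * (y - my) for x, y in zip(xs, ys)])

# m = c + b*(i_M - E(i_M)): c = E(m) = 1/(1+i), and b is pinned down by
# requiring the market itself to be priced, E[m*(1+i_M)] = 1.
c = 1.0 / (1.0 + i_f)
b = (1.0 - c * (1.0 + e(i_M))) / cov(i_M, i_M)     # b < 0 when E(i_M) > i

m = [c + b * (x - e(i_M)) for x in i_M]

# Price an arbitrary payoff with the CBPM and recover its state returns.
payoff = [0.6, 1.0, 1.5]
price = e([mm * x for mm, x in zip(m, payoff)])
i_k = [x / price - 1.0 for x in payoff]

# Check the SML (4.21): expected excess return = beta * market premium.
beta_k = cov(i_k, i_M) / cov(i_M, i_M)
lhs = e(i_k) - i_f
rhs = beta_k * (e(i_M) - i_f)
print(abs(lhs - rhs) < 1e-9)       # the beta relation holds exactly
```

The equality is exact (up to floating point) because a linear discount factor makes −(1 + i) b Cov(ĩ_M, ĩ_k) the only source of risk premium, and the same coefficient b prices the market portfolio itself.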
Since consumers can trade in a frictionless competitive capital market they have the same stochastic discount factor and face the same consumption risk.¹³ Thus, the risk premiums in expected security returns are determined by their covariance with aggregate consumption. That means we can solve the discount factors as functions of the variables that determine aggregate consumption risk. We follow Cochrane (2001) in this section by deriving the CAPM and the APT, together with the ICAPM and the consumption-beta capital asset pricing model (CCAPM), as special cases of the CBPM in (4.28). This is an effective way of comparing their strengths and weaknesses. When there are multiple time periods, (4.28) is derived using a time-separable NMEU function with a constant rate of time preference (r), where, for an infinitely lived consumer, we have

EU_t = E_t Σ_{j=0}^∞ δ^j U(Ĩ_{t+j}).¹⁴

The CAPM is obtained from (4.28) when the stochastic discount factor is a linear function of the return on the market portfolio, m̃ = c + b(ĩ_M − i_M), with c > 0 and b < 0. Since the price of market risk is i_M − i = −(1 + i) b Var(ĩ_M), we can use E(m̃) = c = 1/(1 + i) ≠ 0 to write the pricing equation as

1 + i_k = (1 + i){1 − E[(c + b ĩ_M − b i_M)(1 + ĩ_k)] + E(c + b ĩ_M − b i_M) E(1 + ĩ_k)}.

This collapses to the CBPM in (4.28), where E[m̃(1 + ĩ_k)] = 1, with the linear discount factor m̃ = c + b(ĩ_M − i_M). While there are two time periods in the CAPM derived by Sharpe and Lintner, it also holds for multiple time periods when the interest rate and relative commodity prices are constant and security returns are i.i.d. over time. In effect, consumers are in a steady-state equilibrium facing the same aggregate risk in each period, where the market risk in security returns in every period is described by a single factor, which is how they covary with the return on the market portfolio. Based on this derivation, we find two important features of the CAPM model.
First, all the aggregate consumption risk comes through security returns, thereby limiting the ability of the CAPM to explain how consumers measure risk when they also have income from labour and other capital assets. Second, changes in aggregate consumption risk are not accommodated by the CAPM. The ICAPM addresses this problem by extending the CAPM to multiple time periods and including additional factors to describe changes in the investment opportunity set and relative commodity prices.

4.3.2 Intertemporal capital asset pricing model

As noted above, the popularity of the CAPM stems from its simplicity – in particular, the way it isolates consumption risk with a single state variable using mean–variance analysis. And this state variable is specified by the model as the variance in the return on the market portfolio which every consumer combines with a risk-free bond. In applied work the market portfolio is normally derived as a value-weighted index of the securities trading on the stock exchange. While the CAPM is frequently used by analysts in a multi-period setting, there is evidence to suggest aggregate consumption risk changes over time. Merton (1973a) extends the CAPM to accommodate changes in the investment opportunity set using a continuous-time analysis. The model maintains most of the assumptions in the CAPM – in particular, that consumers with homogeneous expectations maximize state-independent time-separable expected utility functions, they hold all their wealth in portfolios of securities, and can trade a risk-free bond (but with a return that can vary over time).²⁰ We derive the ICAPM as a special case of the CBPM in (4.28) using a discrete-time analysis of the consumer problem summarized by the value function in (4.31), where a vector of additional state variables (z) is included to describe changes in aggregate consumption risk.
To obtain the model in Merton we make the following additional assumptions:

i  At the beginning of each period t consumer wealth is confined to a portfolio of financial securities a_W with market value p_a a_W = W_t − I_t and a stochastic net return over the next period of ĩ_{W,t+1}.²¹
ii Security returns are jointly normally distributed.

Relaxing assumptions (iii) and (iv) in the CAPM allows aggregate consumption risk to change over time. Following Merton, we restrict the analysis to a single consumption good and assume changes in aggregate risk can be described by a single state variable. When security returns and the state variable are multivariate normal we use Stein's lemma (see note 19) to decompose the pricing equation in (4.28), with the stochastic discount factor in (4.32), as:

i_k − i = A Cov(ĩ_M, ĩ_k) + B Cov(z̃_1, ĩ_k),²²   (4.34)

where

A = −(1 + i) δ E{V_WW(·) p_a a_W / V_W(·)} and B = −(1 + i) δ E{V_Wz(·) / V_W(·)}.

Merton assumes the interest rate is the sole factor needed to describe changes in the investment opportunity set, where a security n is identified with returns that are perfectly negatively correlated with changes in the interest rate, so that ρ_ni = −1. This security is used as a proxy for the single factor (z_n) that describes future changes in market risk, where the expected excess return on any security k ≠ n in (4.34) becomes

i_k − i = A σ_Mk + B σ_nk.   (4.35)

After multiplying the excess return on each security in the market portfolio by their portfolio shares and summing them, we obtain the risk premium in the market portfolio:

i_M − i = A σ_M² + B σ_nM.   (4.36)
Setting k = n in (4.35), and using the risk premium in (4.36) to solve for the variables A and B in (4.34), we have the ICAPM pricing equation,

i_k − i = [σ_k (ρ_kM − ρ_nM ρ_nk) / (σ_M (1 − ρ_nM²))] (i_M − i) + [σ_k (ρ_nk − ρ_nM ρ_kM) / (σ_n (1 − ρ_nM²))] (i_n − i),   (4.37)

where M is the wealth portfolio and n the derivative security that is perfectly negatively correlated with changes in the interest rate. It is based on assumptions (i) and (ii) in the CAPM, but relaxes (iii) and (iv) by allowing changes in the investment opportunity set and in relative commodity prices over time. The additional covariance terms make it slightly more complex than the CAPM equation in (4.21), where the first term is compensation for non-diversifiable risk in the market portfolio, and the second term compensation for non-diversifiable risk due to changes in the interest rate over time. There are additional terms in (4.37) when changes in the investment opportunity set are described by more than one state variable. Long (1974) shows how this is much more likely with multiple consumption goods, where additional factors describe changes in their relative prices.²³ There are a number of special cases where the pricing equation in (4.37) can be simplified. When the returns on the market portfolio and security n are uncorrelated (with ρ_nM = 0) the pricing equation in (4.37) collapses to a multi-beta model, where

i_k − i = (σ_kM / σ_M²)(i_M − i) + (σ_kn / σ_n²)(i_n − i).

Merton identifies two situations where the ICAPM becomes the CAPM – the first is where the interest rate is non-stochastic, with σ_n = 0, and the second is where all traded security returns are uncorrelated with changes in the interest rate, with ρ_ki = 0 for all k.²⁴

4.3.3 Arbitrage pricing theory

In both the CAPM and the ICAPM consumer preferences can be summarized by the mean and variance of the return on their wealth, which is confined to a portfolio of securities, as security returns are jointly normally distributed.
The arbitrage pricing theory (APT) is more general because it makes no assumption about the distributions of the returns on securities and allows consumers to receive other types of income, including income from labour. Instead it assumes security returns can be fully described by the linear factor model in (4.22). Using the CBPM in (4.28) for each security k, with R̃_k = (1 + i_k + β_k f̃ + ε̃_k) p_ak, we have

Box 4.7 The ICAPM pricing equation: a numerical example

The following data will be used to compute the expected return on Homestead (H) shares when the ICAPM in (4.37) holds. The market return is for a broadly based share price index that contains all risky traded securities, while security n is perfectly negatively correlated with the Treasury bill rate.

Year                 Market return (%)   Security n return (%)   Treasury bill rate (%)
2006                 20.70               4.2                     6.2
2005                 17.30               4.6                     6.0
2004                 17.00               5.2                     5.8
2003                 −4.90               6.0                     5.4
2002                 −8.80               5.6                     5.6
2001                 12.30               5.7                     5.6
2000                 18.40               6.1                     5.4
Mean                 10.29               5.34                    5.71
Variance             123.93              0.44                    0.08
Standard deviation   11.13               0.66                    0.28

with correlation coefficients ρ_HM = 0.97, ρ_ni = −1.00, ρ_nM = −0.51 and ρ_nH = −0.51.

Based on this data the expected return on any risky security k solves:

i_k = 5.71 + [σ_k (ρ_kM + 0.51 ρ_nk) / (11.13[1 − (−0.51)²])] (10.29 − 5.71) + [σ_k (ρ_nk + 0.51 ρ_kM) / (0.66[1 − (−0.51)²])] (5.34 − 5.71).

When the return on Homestead shares has a standard deviation of σ_H = 6.94 per cent and correlation coefficients of ρ_HM = 0.97 and ρ_nH = −0.51, we have

i_H = 5.71 + 0.60(10.29 − 5.71) − 0.28(5.34 − 5.71) ≈ 8.56 per cent.

Thus, the share contains consumption risk of 0.60 due to its positive covariance with the return on the market portfolio, and consumption risk of −0.28 due to changes in the risk-free interest rate.
The negative sensitivity coefficient for the interest rate risk indicates the return on Homestead shares is positively correlated with the return on security n, which is negatively correlated with the interest rate. It means Homestead shares contain interest rate risk, and since there are risk benefits from holding security n its expected return is less than the risk-free rate. Once the sensitivity coefficients for the two factors are priced, the premium for the risk in the market portfolio is 0.60 (10.29 −5.71) ≈ 2.75 percentage points, while for the interest rate risk it is – 0.28 (5.34 – 5.71) ≈ 0.10 percentage points. Together they constitute a total risk premium of approximately 2.85 percentage points. The contribution by each risk factor can be determined by computing their beta coefficients, where β nH = σ nH / σ n2 = −2.35 / 0.44 ≈ −5.3 and, β MH = σ MH / σ 2M = 75.09 /123.93 ≈ 0.61. Thus, if we set ρnM = 0 we find iH = 10.47 per cent, where the correlation coefficient of ρnM = − 0.51 reduces the expected share return by 1.91 percentage points. (1 + ik + β k f + ε k )] = 1,25 E[ m where βk is a (1×G) vector of beta coefficients, with β kg = Cov (ik , ig )/ Var (ig ) for all g , and ~ f a (G × 1) column vector of deviations in factor returns from their means, with Asset pricing models f = i − i for all g. The factor returns are returns on mimicking portfolios with unit sensig g g tivity to one factor and zero sensitivity to all others. These factor securities are derivatives created by bundling securities together from the K traded securities in the capital market. ) = 1/(1 + i ) , we can Using the decomposition for covariance terms, and noting that E ( m rewrite this pricing equation as ) − E ( m ε k ) . 1 + ik = (1 + i ) 1 − β k E ( mf When the residuals are eliminated from the mimicking factor portfolios through the diverε k ) = 0 for all k, where the sification effect (with βg = 1) they have a zero price, with E ( m risk premium for each factor g is g ). 
$$\bar i_g - i = -(1 + i)E(\tilde m\tilde f_g).$$

After substituting this into the previous equation we obtain the APT pricing equation in (4.27).26 The model can be used in a multi-period setting by including additional factors to isolate changes in aggregate consumption risk. Unfortunately, however, actual security returns do not display an exact factor structure, as there are residuals in estimates of their expected values. Since the APT model uses statistical analysis to identify the set of factors that isolate common movements in security returns, it relies on all the idiosyncratic risk being eliminated inside large factor portfolios. But most estimates of the beta coefficients in the linear factor model have R² values less than unity. Cochrane (2001) shows that these residuals have non-unique positive prices which undermine the APT model. The larger the number of traded securities that can be bundled into factor portfolios, the closer the R² values get to unity; and the smaller the error terms become, the better the APT model is at pricing risky securities.

4.3.4 Consumption-beta capital asset pricing model

One of the main deficiencies of the ICAPM and the APT is that they do not specify all the factors that isolate aggregate consumption risk. None are specified in the APT because the factors are macro variables chosen to provide the best fit in a linear factor analysis (with the highest R²). In the ICAPM the variance in the market portfolio isolates consumption risk, but none of the factors used to explain changes in market risk over time are specified by the model. Thus, both models are more difficult to use than the CAPM. The CAPM can be used in a multi-period setting if real aggregate consumption expenditure is constant over time. Breeden and Litzenberger (1978) and Breeden (1979) derive the CCAPM in a single-good, multi-period setting.
Breeden extends the analysis to accommodate multiple goods, and does so in a continuous-time setting where aggregate uncertainty follows a Markov process of the Ito type.27 However, data on aggregate consumption is not reported at a point in time, but rather for quarterly periods. Breeden and Litzenberger (1978) derive the CCAPM for discrete time periods by making the following assumptions:

i There is a single consumption good.
ii The interest rate is constant and security returns are independently and identically distributed over time.
iii Consumers have preferences with the same constant coefficient of relative risk aversion.
iv Aggregate consumption and security returns are jointly lognormally distributed.

Assumptions (i) and (ii) make consumption risk the same in each future time period, while assumptions (iii) and (iv) make the stochastic discount factor linear in aggregate consumption risk. It should be noted that, unlike the CAPM and the ICAPM, wealth is not confined to returns on portfolios of securities in the CCAPM.28 There is a one-to-one mapping between wealth and aggregate consumption in each time period when consumers have a constant coefficient of relative risk aversion (γ), while the stochastic discount factor is linearly related to aggregate consumption risk when security returns and consumption growth are jointly lognormally distributed.29 This is confirmed by using the CBPM in (4.28) with the power utility function in (3.20) to isolate the return on security k as

$$E_t\left[(1 + \tilde g_{t+1})^{-\gamma}(1 + \tilde i_{k,t+1})\right] = 1 + \rho, \qquad (4.38)$$

where ρ is the rate of time preference, γ the CRRA and g̃_{t+1} = (Ĩ_{t+1} − I_t)/I_t the growth rate in consumption expenditure. Notice how the stochastic discount factor is now a function of the growth rate in consumption in the same period, with m̃_{t+1} = δ(1 + g̃_{t+1})^{−γ}. When security returns and consumption growth are lognormally distributed we can decompose (4.38), with time subscripts omitted, as

$$E[\ln(1 + \tilde i_k)] - \ln(1 + i) = \gamma\,\mathrm{Cov}[\ln(1 + \tilde g), \ln(1 + \tilde i_k)] - \tfrac{1}{2}\mathrm{Var}[\ln(1 + \tilde i_k)]. \qquad (4.39)$$

For small enough values of ĩ_k, i and g̃, this can be approximated as

$$\bar i_k - i = \gamma\,\mathrm{Cov}(\tilde g, \tilde i_k). \qquad (4.40)$$

The premium for aggregate consumption risk is obtained by creating its mimicking portfolio (I) with stochastic return ĩ_I, where from (4.40) we have γ = (ī_I − i)/Var(ĩ_I). After substitution, this leads to the CCAPM pricing equation

$$\bar i_k - i = (\bar i_I - i)\beta_{Ik}, \qquad (4.41)$$

where β_Ik = Cov(ĩ_I, ĩ_k)/Var(ĩ_I) is the beta coefficient that measures the aggregate consumption risk in any risky security k. Like the CAPM, this is a linear pricing model with a single beta coefficient. But it too relies on a number of simplifying assumptions that may restrict the ability of the model to explain the observed risk premiums in security returns. First, consumers have a common and constant CRRA, and aggregate consumption and security returns are lognormally distributed. CRRA preferences provide a one-to-one mapping between changes in aggregate consumption and wealth, while lognormality generates a linear relationship between security returns and the beta coefficients used to isolate aggregate consumption risk. Ruling out shifts in the investment opportunity set and adopting a single commodity makes aggregate consumption risk constant in real terms over time. That makes current aggregate consumption risk the sole factor in the model.32 Also, with constant consumption risk the pricing equation in (4.41) holds unconditionally. Allowing consumption risk to change over time would add additional beta coefficients to the pricing equation. The CCAPM is more general than both the CAPM and ICAPM because it also allows income from labour and other capital assets.
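The chain from the Euler equation (4.38) to the approximation in (4.40) can be illustrated numerically. The sketch below uses hypothetical numbers (three equally likely consumption-growth states, γ = 2, ρ = 0.02; none of these values come from the text) to price a risk-free bond from E(m̃)(1 + i) = 1 and to compute an approximate risk premium γ Cov(g̃, ĩ_k).

```python
# Illustrative sketch of the CCAPM relations (4.38)-(4.40) with power
# utility; all numbers below are hypothetical, chosen for the example.
rho, gamma = 0.02, 2.0                # rate of time preference and CRRA
g = [0.05, 0.01, -0.02]               # equally likely consumption-growth states

# stochastic discount factor m = delta * (1+g)^(-gamma), with delta = 1/(1+rho)
m = [(1 + gs) ** (-gamma) / (1 + rho) for gs in g]
Em = sum(m) / len(m)
i_f = 1 / Em - 1                      # risk-free rate from E(m)(1 + i) = 1

# a hypothetical risky return that moves with consumption growth
ik = [0.10, 0.03, -0.04]
Eg, Eik = sum(g) / 3, sum(ik) / 3
cov = sum((gs - Eg) * (iks - Eik) for gs, iks in zip(g, ik)) / 3
premium = gamma * cov                 # approximate premium from (4.40)
print(round(i_f, 4), round(premium, 4))   # prints 0.0449 0.0033
```

The risky security pays well when consumption growth is high, so it commands a positive premium over the risk-free rate, exactly the sign pattern equation (4.40) implies.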
Box 4.8 The CCAPM pricing equation: a numerical example

The aggregate consumption data summarized below is for an economy where the CCAPM in (4.41) holds. Over the period 1990–2006 the 2 per cent annual interest rate and relative commodity prices are both constant over time. Since the rate of return on the mimicking portfolio I is perfectly correlated with the growth rate in aggregate consumption expenditure, with Corr(g̃, ĩ_I) = 1, the risk premium for consumption risk is ī_I − i = 0.06.

Year                  Level ($bn)   Growth rate (g)   Return portfolio I   Return security A
2006                  2.04          0.07              0.16                 0.11
2005                  2.04          0.04              0.09                 0.08
2004                  1.91          0.06              0.14                 0.07
2003                  1.83          0.05              0.11                 0.06
2002                  1.73          −0.01             −0.02                0.00
2001                  1.65          0.04              0.09                 −0.03
2000                  1.67          0.05              0.11                 −0.04
1999                  1.60          −0.03             −0.07                −0.12
1998                  1.52          0.04              0.09                 0.16
1997                  1.57          0.09              0.21                 0.09
1996                  1.51          −0.05             −0.11                0.03
1995                  1.39          0.04              0.09                 0.12
1994                  1.46          0.07              0.16                 0.03
1993                  1.40          0.06              0.14                 0.04
1992                  1.31          −0.01             −0.02                −0.03
1991                  1.24          0.05              0.11                 0.19
1990                  1.25          —                 —                    —
Mean                  1.59585       0.03500           0.08000              0.04719
Variance              0.05992       0.00144           0.0075102            0.00603
Standard deviation    0.24479       0.03791           0.08666              0.07766

After computing the covariance between the returns on the mimicking portfolio I and security A, with σ_IA = 0.0034036, we can use (4.41) to confirm the expected return on security A is ī_A = i + (ī_I − i)β_IA = 0.02 + (0.06 × 0.45319) ≈ 0.04719, with β_IA = σ_IA/σ_I² = 0.0034036/0.0075102 = 0.45319.

With multiple consumption goods, changes in their relative prices can affect the composition of investor consumption bundles, which can change utility even without changing future consumption expenditure. In the ICAPM the relative price changes are identified by additional beta coefficients in the pricing equation.
Recall that the single-good version of the ICAPM already has two beta coefficients – one for the risk in the market (wealth) portfolio and another for changes in it over time. Breeden extends the single-beta CCAPM to multiple consumption goods in a continuous-time setting by measuring expected security returns and consumption expenditure in real terms, where the pricing equation becomes

$$\bar i_k^* = i^* + (\bar i_j^* - i^*)\frac{\beta_{Ik}^*}{\beta_{Ij}^*},$$

with i*, ī_k* and ī_j* being the real returns on a risk-free bond and securities k and j, respectively, and β*_Ik and β*_Ij the real consumption betas for securities k and j. The price index used to discount security returns is constructed with marginal weights, as they provide the correct valuation for goods purchased with an additional dollar of income, while the price index for computing real aggregate consumption is constructed with average weights, as they are used in the calculation of average real consumption, which is inversely related to the marginal utilities of consumption goods.

4.4 A comparison of the consumption-based pricing models

Four equilibrium asset pricing models were derived in the previous section as special cases of the consumption-based pricing model in (4.28). All of them:

i use a set of factors to isolate the risk in aggregate consumption; and
ii have pricing equations that are linear in these risk factors.

Their important assumptions are summarized in Figure 4.22. As special cases of the CBPM they are all based on consumers having time-separable NMEU functions with homogeneous expectations.
With state independence, consumers care about the statistical distribution of their consumption expenditure in each future time period, while time separability makes the stochastic discount factor between any two periods independent of consumption expenditure in other time periods; it means the growth in marginal utility will depend only on consumption expenditure in that time period.33

CBPM assumptions:
- Consumers have time-separable state-independent NMEU functions
- They have homogeneous expectations and conditional perfect foresight
- The capital market is competitive, frictionless and complete

Security returns jointly normally distributed:
- CAPM: two periods; all future income from security returns; m̃ = c + b_M ĩ_M
- ICAPM: multiple periods; all future income from security returns; m̃ = c + b_M ĩ_M + b_n ĩ_n
- CCAPM: multiple periods; identical CRRA; future income from any source; m̃ = c + b_I ĩ_I

Security returns linearly related to factors:
- APT: multiple periods; future income from any source; m̃ = c + Σ_g b_g ĩ_g

Figure 4.22 Main assumptions in the consumption-based asset pricing models.

Under these circumstances consumers with homogeneous expectations have the same stochastic discount factor and, as a consequence, the same changes in marginal utility. Given the inverse relationship between marginal utility and consumption, they must also have the same consumption risk. Thus, the stochastic discount factor in the CBPM can be solved as a function of aggregate consumption risk. Any differences in the models arise from the additional assumptions they make to isolate the aggregate consumption risk. All of them use mean–variance analysis, where it results from consumption risk being normally distributed in the CAPM, ICAPM and CCAPM, while it results from a linear factor analysis in the APT.34 For the CAPM and the ICAPM future consumption is funded solely from payoffs to securities.
And since consumers can trade a risk-free security they combine it with the same bundle of risky securities (M) whose variance determines changes in their future consumption expenditure. In the two-period CAPM there is a single beta coefficient that measures how much security returns covary with the return on M, while there are additional beta coefficients in the multi-period ICAPM that summarize changes in the investment opportunity set (and relative commodity prices). The multi-period CCAPM also has a single beta coefficient because consumers have the same constant coefficient of relative risk aversion that makes consumption expenditure a constant fraction of wealth in each time period. The APT model is also a multi-period model which, like the ICAPM, uses a number of factors to isolate changes in aggregate consumption risk. On the plus side, the single factors in the CAPM and CCAPM are specified by each model, while the multiple factors in the ICAPM and the APT are variables that investors frequently monitor when assessing the returns to securities. Also, the ICAPM, CCAPM and APT can be used in a multi-period setting. On the minus side, the CAPM cannot be used in a multi-period setting unless security returns are i.i.d. and the interest rate and relative commodity prices are constant over time. But consumers need to have the same constant coefficient of relative risk aversion in the CCAPM, while the additional factors used to isolate changes in consumption risk are not specified in the ICAPM or the APT. A criticism that is common to all models is that they are based on the CBPM in (4.28), where consumers have time-separable NMEU functions. Arguably, the most restrictive assumption is that of homogeneous expectations. If we allow consumers to have different subjective expectations they will, in general, measure and price risk differently, where pricing models would need to be based on individual, rather than aggregate, data.
Another extension would allow state-dependent preferences, but again consumers would likely measure and price risk differently, even with homogeneous expectations, as they would no longer have the same changes in consumption expenditure over time. Thus, we cannot solve the discount factors in (4.28) as functions of aggregate consumption expenditure. Indeed, the problem is further compounded when consumers have both state-dependent preferences and subjective expectations.

4.5 Empirical tests of the consumption-based pricing models

Given the nature of the assumptions made in the consumption-based pricing model in (4.28), it should not be surprising that the asset pricing models derived from it perform poorly when confronted with data. There are good practical reasons for wanting to derive pricing models where consumers measure and price risk identically using a small number of state variables that can be accessed in reported data. Models based on individual consumption data are impractical because the data is costly to obtain. One problem for empirical tests of these pricing models is the absence of appropriate data – in particular, the expected values of the risk factors and the means and variances of security returns. Most studies assume that ex-post time series data provides a true reflection of the statistical attributes of their distributions when observed by consumers ex ante. Another problem arises when the reported data does not provide all the information needed. For example, consumers in the CCAPM measure market risk in security returns by their covariance with aggregate real consumption expenditure. It must include consumption flows to capital, as well as non-marketed consumption such as leisure and other home-produced goods.
Most countries measure their national accounts on a quarterly basis, where aggregate consumption expenditure excludes expenditure on major capital items and includes the rental value of housing consumed by owner-occupiers. However, some capital expenditure is included, while a considerable proportion of non-marketed consumption is omitted. These discrepancies may not be a significant problem if they are closely correlated with measured aggregate real consumption, particularly when on average they are relatively small.35

Early empirical studies tested the pricing models, in particular the CAPM, to see whether they could successfully explain the risk premiums in security returns without considering whether the resulting consumption risk was consistent with measures of risk aversion obtained from observed consumer behaviour. That link was made later by Mehra and Prescott (1985), who tested the CCAPM using a computable general equilibrium model where they identified equity premium and low risk-free real interest rate puzzles. We summarize these empirical findings in the following two subsections.

4.5.1 Empirical tests and the Roll critique

Using time series data, Black et al. (1972) divide all the securities traded on the New York Stock Exchange (NYSE) over the period 1931–1965 into 10 portfolios and estimate the coefficients in the CAPM pricing equation

$$\bar i_k - i = \lambda_0 + \lambda_1\beta_k + \tilde\varepsilon_k,$$

where λ_0 = 0 and λ_1 = ī_M − i when the CAPM holds. Their main findings are as follows:

i λ_0 > 0 and λ_1 < ī_M − i, which implies securities with low (high) beta coefficients pay higher (lower) returns than the CAPM would predict.
ii β dominates other terms as a measure of risk.
iii The simple linear model fits best.

Blume and Friend (1973) draw similar conclusions using cross-sectional returns. They construct 12 portfolios with approximately 80 different stocks listed on the NYSE over three separate periods between 1955 and 1968. Fama and MacBeth (1973) extend the analysis in Black et al.
and find omitted variables in the CAPM. Their findings support the multi-factor ICAPM that accounts for changes in aggregate consumption risk. Roll (1977a) was critical of these (and other) empirical tests of the CAPM, arguing that the only true test is whether the market portfolio is ex ante mean–variance efficient, where the linearity of the model follows by implication. There are an infinite number of mean–variance efficient market portfolios where, by construction, the expected returns on the individual securities in each portfolio must be linearly related to their beta coefficients. Fama and French (1992, 1993) include firm size and book-to-market equity ratios as additional factors to explain a cross-section of average returns to securities traded on the NYSE not explained by the CAPM or the CCAPM. Lettau and Ludvigson (2001) derive conditional versions of these models by allowing the stochastic discount factor to change over time. But instead of including additional factors to describe changes in consumption risk, they scale the parameters in the discount factor with a proxy for the log consumption–wealth ratio, and find the conditional models perform about as well as the three-factor pricing model used by Fama and French.36 Their findings are supported by Campbell and Cochrane (2000), who test the conditional versions of the CAPM and CCAPM. Using US data, Hansen and Singleton (1982, 1983) find the unconditional CCAPM performs poorly in explaining the time variation in interest rates and the cross-sectional pattern of average returns on stocks and bonds, while Wheatley (1988) also rejects the model using international data. In fact, Mankiw and Shapiro (1986), Breeden et al. (1989), Campbell (1996) and Cochrane (1996) find it performs no better than, and in most cases worse than, the unconditional CAPM in explaining cross-sectional differences in average returns.
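Cross-sectional tests of this kind regress average portfolio returns on estimated betas. A minimal sketch with synthetic data (the numbers below are hypothetical, not taken from any of the studies cited) reproduces the pattern Black et al. report: a positive intercept λ_0 and a slope λ_1 below the market premium.

```python
# Sketch of a cross-sectional CAPM test: regress average portfolio excess
# returns on beta. All numbers are synthetic, for illustration only.
betas = [0.5, 0.8, 1.0, 1.2, 1.5]            # hypothetical portfolio betas
excess = [3.0 + 3.0 * b for b in betas]      # average excess returns (%), built
                                             # so that lambda0 = 3, lambda1 = 3
n = len(betas)
mb, mr = sum(betas) / n, sum(excess) / n
num = sum((b - mb) * (r - mr) for b, r in zip(betas, excess))
den = sum((b - mb) ** 2 for b in betas)
lam1 = num / den                             # OLS slope
lam0 = mr - lam1 * mb                        # OLS intercept

market_premium = 5.0                         # hypothetical iM_bar - i (%)
# the Black et al. pattern: lambda0 > 0 and lambda1 < market premium
print(lam0 > 0 and lam1 < market_premium)    # prints True
```

With real data the returns would not lie exactly on a line, but the same one-regressor OLS recovers the flat security market line the early studies document.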
Campbell and Cochrane (2000) argue the market return in the CAPM captures time variations in risk premiums much better than consumption growth in the CCAPM because the market return is affected by dividend–price ratios while consumption growth is not. This view is supported by Campbell (1993), based on empirical tests of a discrete-time version of the ICAPM.

4.5.2 Asset pricing puzzles

As noted by Cochrane (2001), early tests of the CAPM and ICAPM focused on their ability to explain the risk premiums in expected security returns without considering how much risk was being transferred into real consumption expenditure. When testing the CCAPM, Mehra and Prescott looked at whether the implied values of the (constant) coefficient of relative risk aversion and the (constant) rate of time preference were consistent with the risk in aggregate real consumption. Using US data, they discovered the equity premium and low risk-free real interest rate puzzles. The premium puzzle finds the need to adopt a coefficient of relative risk aversion in the CCAPM that is approximately five times larger than its estimated value in experimental work, while the low risk-free rate puzzle finds the observed real interest rate much lower than the CCAPM would predict when the coefficient of relative risk aversion is set at its estimated value. Once it is set at the higher values required to explain the observed equity premium, the predicted real interest rate is even higher. We demonstrate these puzzles using the Hansen and Jagannathan (1991) bound on the price of risk in security returns. It is an adaptation of the Sharpe ratio (Sharpe 1966), which measures the equilibrium price of risk, for any security k, as (ī_k − i)/σ_k. Using the CBPM equation in (4.28) when consumers have a power utility function and security returns are jointly lognormally distributed with aggregate consumption, we have the following definition.
Definition 4.1 The Hansen–Jagannathan bound on the equilibrium price of risk in the CCAPM is

$$\frac{\bar i_k - i}{\sigma_k} \le (1 + i)\sigma_m \approx (1 + i)\gamma\sigma_g, \qquad (4.42)$$

where σ_m is the standard deviation of the pricing kernel, γ the constant coefficient of relative risk aversion and σ_g the standard deviation of the growth rate in aggregate real consumption expenditure, with Ĩ/I_0 = 1 + g̃ and g̃ = (Ĩ − I_0)/I_0.

Table 4.3 The asset pricing puzzles in US data

                             σ_g      ī_M     σ_M      i
Mehra and Prescott (1985)    3.57%    6.98%   16.54%   0.80%
Cochrane (2001)              1%       9%      16%      1%

This bound on the Sharpe ratio is obtained by setting the coefficient of correlation between the stochastic discount factor and the real return on security k at its upper bound of unity, with Corr(−m̃, ĩ_k) = 1.37 Since (4.42) conveniently relates the risk premium to the CRRA (γ), it can be used to demonstrate the equity premium puzzle identified by Mehra and Prescott, where they add dividends to the US Standard & Poor's 500 Index and divide it by the consumer price index to obtain a measure of its real return over the period 1889–1978. The real risk-free return is computed using short-term Treasury bills over the same period. Similar data is collected by Cochrane (2001) for the value-weighted index of stocks trading on the NYSE over the post-war period in the US. The relevant statistics for the two data sets are summarized in Table 4.3, where M denotes the index of stocks used in each study. We compute the Sharpe ratio and the coefficient of relative risk aversion using these data. The results are summarized in Table 4.4 for Corr(−m̃, ĩ_M) = 1, which is the upper bound used in (4.42), and Corr(−m̃, ĩ_M) = 0.2, which is used by Cochrane. The implicit values for γ are much larger than those obtained from empirical estimates, which fall within the range 0 to 2. Friend and Blume (1975) obtain an estimate of 2 using household data in the US, while Fullenkamp et al. (2003) obtain values ranging from 0.6 to 1.5 using data from a television game show.
Clearly, the value of γ in the CCAPM pricing equation is 5 times larger than 2 using the Mehra–Prescott data, and 25 times larger using Cochrane's data when Corr(−m̃, ĩ_M) = 1. They are significantly higher for Corr(−m̃, ĩ_M) = 0.2.

Table 4.4 Equity premium puzzle

                                   Mehra and Prescott (1985)   Cochrane (2001)
Sharpe ratio                       ≈0.37                       ≈0.50
RRA (γ): Corr(−m̃, ĩ_M) = 1        ≈10                         ≈50
RRA (γ): Corr(−m̃, ĩ_M) = 0.2      ≈52                         ≈250

To demonstrate the low interest rate puzzle identified by Mehra and Prescott, we use (4.28) with the power utility function in (3.20) to compute the expected price of the risk-free bond as

$$E(\tilde m) = \delta E(1 + \tilde g)^{-\gamma} = \frac{1}{1 + i},$$

where g̃ = (Ĩ − I_0)/I_0 is the growth rate in aggregate real consumption expenditure. An approximate relationship between the interest rate, the rate of time preference and the growth rate in consumption expenditure is obtained by expressing this bond price in logarithmic form as

$$\ln(1 + i) \approx \gamma E(\tilde g) - \ln\delta. \qquad (4.43)$$

There is good intuition for this relationship. Consumers need a higher return on consumption transferred to the future as saving when they are more risk-averse, and when they expect a higher growth rate in consumption expenditure. Similarly, a higher rate of time preference (which lowers δ) reduces saving and drives up the interest rate (with ln δ < 0 for 0 < δ < 1). Table 4.5 applies the data in Table 4.3 to the relationship in (4.43) for different values of γ and δ. In both data sets in Table 4.3 the average real interest rate was approximately 1 per cent, which is much lower than the rate predicted by the CCAPM with power utility. Indeed, the predicted rate is almost three times higher using Mehra and Prescott's data and twice as high using Cochrane's data, with γ = 1. And this difference is even larger for higher values of γ and δ.
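Both puzzles can be reproduced from Table 4.3 with a few lines of arithmetic. The sketch below uses the Mehra–Prescott column of Table 4.3 with equations (4.42) and (4.43); the (1 + i) factor in (4.42) is close to one at these rates and is omitted, so the figures are approximations.

```python
# Sketch of the equity premium and risk-free rate calculations behind
# Tables 4.4 and 4.5, using the Mehra-Prescott figures from Table 4.3.
from math import exp, log

iM, i, sM, sg = 0.0698, 0.008, 0.1654, 0.0357   # mean return, T-bill, std devs

sharpe = (iM - i) / sM                  # price of risk, about 0.37
gamma_implied = sharpe / sg             # CRRA implied by (4.42) with corr = 1
gamma_corr02 = sharpe / (0.2 * sg)      # five times larger when corr = 0.2

# equation (4.43): predicted risk-free rate for gamma = 1, delta = 0.99
Eg, delta = 0.0183, 0.99
i_pred = exp(1.0 * Eg - log(delta)) - 1
print(round(sharpe, 2), round(gamma_implied, 1), round(i_pred, 3))
# prints 0.37 10.5 0.029
```

The implied CRRA of roughly 10 is the "5 times larger than 2" figure in the text, and the predicted 2.9 per cent interest rate against an observed rate near 1 per cent matches the first entry of Table 4.5.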
Table 4.5 Low risk-free interest rate puzzle

RRA (γ)         Mehra and Prescott data, E(g) = 1.83%   Cochrane data, E(g) = 1%
For δ = 0.99:
1               2.88%                                   2.03%
2               4.78%                                   3.05%
10              21.29%                                  11.63%
50              152.20%                                 66.54%
For δ = 0.94:
1               8.35%                                   7.45%
2               10.35%                                  8.53%
10              27.75%                                  17.57%
50              165.61%                                 75.40%

4.5.3 Explanations for the asset pricing puzzles

There are essentially two ways to explain these puzzles – one modifies consumer preferences in the CCAPM, while the other finds more risk in individual consumption than there is in aggregate consumption.39 This subsection summarizes the intuition for these extensions, along with their ability to explain the two puzzles identified by Mehra and Prescott.40

Preference modifications

As noted in previous sections, a number of important restrictions are placed on consumer preferences in the CCAPM. Most extensions relax time separability and state independence in the following ways.

Habit theory. This makes utility for a representative agent depend on one or a combination of past own consumption (internal habit), past consumption of others, and the current consumption of others (external habit). These models add a habit variable to the utility function at each point in time, where its equilibrium effects are determined by the way habits are formed and how they change over time. For example, Abel (1990) models habit as a multiplicative function of past consumption by others, together with past own consumption to capture internal habit, Constantinides (1990) makes habit an exponential function of past own consumption, and Campbell and Cochrane (1999) make it an additive function of past consumption by all other consumers.41 With internal habit consumers become attached to a particular level of consumption which they prefer to maintain, while external habit is based on past consumption of others, reflecting a concern for relative consumption levels.
The benefits most people get from consumption at any point in time depend on their past consumption as well as the amount consumed by neighbours or their social peers. Internal habit extends standard preferences by relaxing time separability, while external habit introduces consumption externalities. Both approaches provide an explanation for the risk-free rate puzzle by raising aggregate saving. For internal habit consumers save more when consumption is habit-forming, while for external habit they save more due to their sensitivity to aggregate consumption risk.42 Both approaches explain a large equity premium if, in the case of internal habit, consumers are highly sensitive to their own consumption risk, or, in the case of external habit, they are highly sensitive to aggregate consumption risk. But high sensitivity to own consumption risk requires a high degree of risk aversion, whereas aggregate consumption is fairly smooth. Thus, in both cases a high degree of risk aversion is required to explain why consumers are indifferent between bonds and equity. In other words, habit formation cannot successfully explain the equity premium puzzle identified by Mehra and Prescott.43

Separating risk aversion from intertemporal substitution. There is an inverse relationship between the CRRA and the elasticity of intertemporal substitution when consumers have standard preferences. Indeed, for the power utility function U(I_t) = I_t^{1−γ}/(1 − γ) the elasticity of intertemporal substitution is 1/γ.44 Thus, highly risk-averse consumers view consumption in different time periods as more complementary. And when they are reluctant to substitute consumption intertemporally in a growing economy the equilibrium interest rate has to be higher. Epstein and Zin (1989) suggest using the generalized expected utility preferences of Kreps and Porteus (1978) and Selden (1978) that separate the coefficient of relative risk aversion from the elasticity of intertemporal substitution.
They write the utility function as

$$U(I_t, E_t(U_{t+1})) = \left[(1 - \delta)I_t^{(1-\gamma)/\theta} + \delta\big(E_t(U_{t+1}^{1-\gamma})\big)^{1/\theta}\right]^{\theta/(1-\gamma)},$$

where θ = (1 − γ)/(1 − 1/Ω), with Ω being the elasticity of intertemporal substitution. This collapses to the time-separable power utility function when γ = 1/Ω. While relaxing this inverse relationship provides a solution to the risk-free rate puzzle, it does not solve the equity premium puzzle. Ultimately, a relatively high value for the CRRA is required to explain the large equity premium in the presence of low consumption risk.

Behavioural experiments and loss aversion. Benartzi and Thaler (1995) and Barberis et al. (2001) use evidence from behavioural studies to justify the inclusion of a state variable in the utility function to capture additional welfare effects from financial gains and losses on security portfolios. They argue that utility falls more when there are losses than it rises when there are gains, where these welfare effects are not captured as direct benefits derived from consumption flows. In effect, consumers suffer from loss aversion, which drives up the risk premium on equity. Thus, the equity premium in the data can be explained with a lower CRRA. While this approach can successfully explain the equity premium and risk-free rate puzzles, it does so in a somewhat ad hoc fashion. In fact, it may provide evidence that consumer preferences are state-dependent.

Heterogeneous consumption risk

Consumers have the same individual consumption risk in the consumption-based pricing models because they can costlessly eliminate diversifiable risk. A number of studies extend these models by allowing individuals to face different consumption risk. This personalizes the asset pricing equation and impacts on the equilibrium risk premium. There are a number of reasons for incomplete insurance, including (i) incomplete markets and borrowing constraints and (ii) transactions costs.
Incomplete markets and borrowing constraints can stop consumers from eliminating diversifiable risk from their consumption expenditure. In particular, they may not be able to insure against variations in labour income when human capital cannot be used as collateral, and private insurance may be restricted by moral hazard and adverse selection problems in the presence of asymmetric information. Weil (1992) is able to explain the equity premium and risk-free rate puzzles in a two-period setting with incomplete financial markets. Consumers who cannot fully insure against idiosyncratic risk will save more to offset increases in future consumption risk. The extra saving drives down the risk-free rate, while the extra individual consumption risk drives up the equilibrium premium on returns to equity over debt. However, the ability of incomplete markets to explain these pricing puzzles is mitigated in a multi-period setting by dynamic self-insurance, where consumers can offset low (transitory) consumption shocks by borrowing. This provides them with a substitute for insurance unless there are borrowing constraints. Heaton and Lucas (1996) make numerical simulations in a computable general equilibrium model where they find that incomplete markets and borrowing constraints have a small effect on the risk-free interest rate in an infinite horizon setting.45 Constantinides and Duffie (1996) extend the analysis of incomplete financial markets by making idiosyncratic labour income shocks permanent. For example, when labour income falls and stays low for ever, dynamic self-insurance cannot overcome the inevitable fall in consumption, even in the absence of borrowing constraints. When these income shocks are sufficiently large and persistent, they can raise individual consumption risk above aggregate consumption risk sufficiently to explain the low interest rate and high equity premium in the data.
However, Heaton and Lucas estimate idiosyncratic shocks to labour income using US data and find they have an autocorrelation of approximately 0.5, which reduces the risk-free rate by only a small amount.

Transactions costs explain the equity premium when equity is much more costly to trade than debt. Based on turnover rates for equity traded on the NYSE, Fisher (1994) finds that the bid–ask spread on equity needs to be as high as 9.4 to 13.6 percentage points. There are a range of different costs that traders face, including broking fees, taxes and a range of processing costs, that create the spreads between buyer and seller prices. But these costs do not appear to be large enough to explain the equity premium puzzle. In an infinite horizon setting, Aiyagari and Gertler (1991) include differential transactions costs on debt and equity used by consumers to smooth idiosyncratic shocks to labour income in the absence of formal insurance. When equity is relatively costly to trade, consumers use debt to offset these income shocks. And this is consistent with the high turnover rates for debt and the low turnover rates for equity in financial markets. Aiyagari and Gertler find that debt in the US turns over on average between three and seven times each year, depending on the type of debt instrument, while equity turnover is negligible. Since self-insurance relies on trading both debt and equity, relatively large transactions costs on equity leave consumers with higher individual consumption risk. This is similar to the explanation for the pricing puzzles in Constantinides and Duffie, where dynamic self-insurance cannot eliminate persistent shocks to labour income. The higher risk in individual consumption explains the equity premium, while the demand for debt to smooth the variance in future consumption reduces the interest rate.
Swan (2006) finds one-way transactions costs as small as 0.5 percentage point on equity can explain both pricing puzzles due to the (invisible) costs of forgone equity trades at 5.7 per cent of value. These marginal costs arise from inefficient risk sharing in the presence of differential trading costs, and are approximately 15 times higher than the observed trading costs. Debt turnover rises significantly, due to its lower transactions costs, to match the lower net marginal gain from spreading risk with equity, but without the same risk-sharing benefits. Thus, individual consumption risk is higher than aggregate consumption risk, and there is a lower interest rate.

As noted earlier, explanations for the equity premium and low interest rate puzzles can be divided between those that seek to extend the CCAPM by modifying consumer preferences, and those that allow different individual consumption risk. Grant and Quiggin (2004) argue there are potentially large differences in the welfare and policy implications of these explanations. Whenever the observed risk premium and the low risk-free rate are equilibrium outcomes in an efficient capital market, there are no potential welfare-improving policies. Habit formation, transactions costs and preferences that separate risk aversion from intertemporal substitution are explanations that fall into this category. In contrast, explanations based on market failure or investor irrationality may provide opportunities for welfare-improving policies if governments can overcome market failure or improve on irrational private outcomes. Grant and Quiggin argue that, whenever governments can eliminate idiosyncratic risk in labour income at lower cost than private traders using the tax system, the discount rate on public sector projects is marginally higher than the risk-free interest rate.46 As a consequence, welfare can be raised through tax-funded public investment.
Similarly, if private financial markets for trading aggregate risk are incomplete, then, consistent with proposals by Arrow and Lind (1970), the cost of capital for investment in the public sector will be lower than it is for the private sector undertaking the same projects when the government can spread aggregate risk more efficiently. Indeed, Grant and Quiggin link the implications of the equity premium puzzle to the arguments made by Arrow and Lind to identify potentially large welfare gains from macroeconomic stabilization policies that reduce fluctuations in aggregate income. There are, however, good reasons to be cautious about this claim. First, the scope for governments to diversify risk more efficiently than private markets seems rather optimistic.47 It is difficult to find circumstances where agents in the public sector are better informed and better placed to overcome asymmetric information, or trade at lower transactions costs, than private agents. But even in circumstances where they can, governments have difficulty implementing stabilization policies to counteract the effects of the business cycle on economic activity. Indeed, they have trouble identifying turning points in the business cycle, as well as problems implementing the appropriate tax-spending changes in a timely manner. Additionally, there are principal–agent problems in the public sector that make it a notoriously inefficient operator, where any potential welfare gains from a lower cost of capital can be offset by a lower marginal productivity of investment in the public sector. For example, managers of public enterprises face soft budget constraints and frequently succumb to excessive union-backed wage demands, particularly when politicians are sensitive to disruptions that impact adversely on their voter support. 
A logical implication of explanations for the large equity premium that give the public sector a lower cost of capital is that all aggregate investment should be financed through the public sector. Indeed, this would also be the case when loss aversion explains the large equity premium. By using taxes to finance investment we avoid the financial losses on privately issued securities. But that is unlikely to lower the cost of capital when taxpayers suffer loss aversion from the risk transferred into their taxes.

4.6 Present value calculations with risky discount factors

There are a number of important issues to address when valuing capital assets with risky net cash flows. Frequently, they have revenues and costs with different risks that can change over time. Moreover, aggregate consumption risk itself can change over time. This section looks at how the consumption-based pricing models examined earlier in Section 4.3 are used to value assets in these circumstances. Since consumers have common information they measure and price risk identically, and by employing mean–variance analysis there is a linear relationship between expected security returns and market risk premiums. While these properties simplify the task of computing risk-adjusted discount factors, it is not straightforward to use them in present value calculations over multiple time periods, particularly when consumption risk can change over time.

4.6.1 Different consumption risk in the revenues and costs

It is not uncommon for assets, and projects more generally, to have revenue streams with more or less risk than the costs of generating them. Indeed, these differences are identified by managers of statutory monopolies when regulatory agencies impose ceilings on their prices to restrict monopoly profits. When revenues are more risky than costs, managers seek less restrictive price caps so they can pay a risk premium to their capital providers.
Consider share k which pays a random dividend DIV_k in 12 months' time, when the share expires. It is funded from the net cash flows (NCF) generated by the firm that issued the share, which are the difference between its risky revenues (REV) and costs (CST) per share. When the CAPM holds we can compute the current value of the share by discounting its expected net cash flows, with NCF_k = REV_k − CST_k, as

p_ak = E(NCF_k) / [1 + i + (E(i_M) − i) β_k],

where β_k = β_{NCF_k}/p_ak is the project risk per dollar of capital invested in the share, with β_{NCF_k} = Cov(NCF_k, i_M)/Var(i_M). After solving for the current share price, we have

p_ak = CE_k / (1 + i),

with CE_k = E(NCF_k) − (E(i_M) − i) β_{NCF_k} being the certainty-equivalent net cash flows. After deducting a premium for project risk, the remaining net cash flows provide shareholders with consumption benefits equal to the value of their initial capital plus compensation for the opportunity cost of time at the risk-free interest rate i. We can also value the share by discounting its revenues and costs separately as

p_ak = E(REV_k) / [1 + i + (E(i_M) − i) β_{REV_k}/PV_{REV_k}] − E(CST_k) / [1 + i + (E(i_M) − i) β_{CST_k}/PV_{CST_k}],

where β_{REV_k}/PV_{REV_k} and β_{CST_k}/PV_{CST_k} are, respectively, the market risk per dollar of revenue and cost in present value terms. After rearranging this expression, we have

p_ak = [E(REV_k) − E(CST_k) − (E(i_M) − i)(β_{REV_k} − β_{CST_k})] / (1 + i).   (4.45)

Box 4.9 Valuing an asset with different risk in its revenues and costs

Consider share B that makes one dividend payout in 12 months' time when the CAPM holds. It is paid from the random net cash flows (NCF_0) of a firm with S = 500 shares issued at the beginning of the period (t = 0). The firm's expected revenues (E(REV_0)) and costs (E(CST_0)) for the period are summarized below, together with their covariance with the return on the market portfolio.
           Mean     Covariance with i_M
NCF_0      800      0.5
REV_0      1540     0.87808
CST_0      740      0.37808
i_M        0.15     0.0016 (= Var(i_M))
i          0.03     0

Using the net cash flows to compute the current share price, we have

p_aB = [E(NCF_0)/S − (E(i_M) − i) β_{NCF}/S] / (1 + i)
     = [800/500 − (0.12)(0.5/0.0016)/500] / 1.03 ≈ 1.48.

This can be decomposed by computing the present values of the revenues and costs as

PV(REV) = [E(REV_0)/S − (E(i_M) − i) β_{REV}/S] / (1 + i)
        = [1540/500 − (0.12)(0.87808/0.0016)/500] / 1.03 ≈ 2.86

and

PV(CST) = [E(CST_0)/S − (E(i_M) − i) β_{CST}/S] / (1 + i)
        = [740/500 − (0.12)(0.37808/0.0016)/500] / 1.03 ≈ 1.38.

By deducting the current value of the costs from the revenues, we have

p_aB = [{E(REV_0) − E(CST_0)}/S − (E(i_M) − i)(β_{REV} − β_{CST})/S] / (1 + i)
     = [(1540 − 740)/500 − (0.12)(548.8 − 236.3)/500] / 1.03 ≈ 1.48.

The risk in the revenues and costs is related to the project risk through the covariance between the net cash flows and the return on the market portfolio, with

σ_{M NCF_k} = E{[i_M − E(i_M)]([REV_k − E(REV_k)] − [CST_k − E(CST_k)])} = σ_{M REV_k} − σ_{M CST_k},

leading to β_{NCF_k} = β_{REV_k} − β_{CST_k}. After substitution we obtain the asset value in (4.45), where p_ak = PV(REV_k) − PV(CST_k). While it sounds counter-intuitive, more market risk in the costs, all other things constant, will make the share more valuable. In effect, the costs transfer consumption flows to other agents in the economy. If market risk in the costs (with β_{CST_k} > 0) exceeds market risk in the revenues (with β_{REV_k} > 0) the project risk becomes negative (with β_{NCF_k} = β_{REV_k} − β_{CST_k} < 0), where the expected discount rate for the net cash flows E(i_k) is less than the risk-free rate i and less than the discount rates for both the revenues E(i_{REV_k}) and costs E(i_{CST_k}).
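The Box 4.9 arithmetic can be checked with a few lines of code. This is an illustrative sketch (the variable and function names are my own); it reproduces the per-share values from the net cash flows and from the revenues and costs separately, and confirms that the two routes agree:

```python
# Data from Box 4.9
S = 500            # shares on issue
i = 0.03           # risk-free rate
E_iM = 0.15        # expected return on the market portfolio
var_iM = 0.0016    # variance of the market return
prem = E_iM - i    # market risk premium, 0.12

def value_per_share(mean_flow, cov_with_iM):
    """Certainty-equivalent flow per share, discounted one period at i."""
    beta = cov_with_iM / var_iM
    return (mean_flow / S - prem * beta / S) / (1 + i)

p_ncf = value_per_share(800, 0.5)        # from net cash flows     -> ~1.48
p_rev = value_per_share(1540, 0.87808)   # present value of revenues -> ~2.86
p_cst = value_per_share(740, 0.37808)    # present value of costs    -> ~1.38
# valuing NCF directly or as REV - CST gives the same share price
```

Because β_{NCF} = β_{REV} − β_{CST}, p_ncf equals p_rev − p_cst exactly, which is the substitution that yields (4.45).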
4.6.2 Net cash flows over multiple time periods

Most capital assets have risky net cash flows over a number of future time periods, and there are two main reasons why their market risk can change over time – one is due to investor reassessments of the project risk, while the other is from changes in aggregate consumption risk. Both make present value calculations more complex, and we demonstrate this by computing the present value of a security (k) with net cash flows (NCF_t) over T periods. With constant aggregate consumption risk we use the CAPM to compute its current price

p_ak = Σ_{t=1}^{T} E(NCF_kt) / Π_{j=1}^{t} [1 + i_j + (E(i_Mj) − i_j) β_{kt,j}],   (4.47)

where β_{kt,j} = Cov[V_{kt,j}/E(V_{kt,j−1}), i_Mj]/Var(i_Mj) is the market risk in the discount rate at time j on net cash flows realized at t with a market value of V_{kt,j}. In the period when the net cash flows are realized (j = t) the beta coefficient is the normalized project risk, with β_{kt,t} = Cov[NCF_kt/E(V_{kt,t−1}), i_Mt]/Var(i_Mt). In all prior periods the beta coefficients are compensation for reassessments of the risk in the net cash flows. But while the discount factors in (4.47) can have different expected values in each time period they must be non-stochastic when computed using the CAPM. That is, the potentially different values of the risk-free rate, the return on the market portfolio and the beta coefficient in each period are known with certainty at time 0. Fama (1977) argues that intermediate uncertainty is admissible in the CAPM if it contributes no uncertainty to the beta coefficients in the discount factors. When uncertainty is partially resolved with the passing of time, investors may expect to get new information that will lead them to revise their assessment of the risk in the net cash flows in periods prior to the date they are realized.
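The multi-period valuation formula (4.47) above can be expressed as a short routine. This is a sketch under the assumption, as the text requires, that all per-period risk-free rates, expected market returns and beta coefficients are known at time 0; the function and variable names are my own:

```python
def capm_price(expected_ncf, rf, E_iM, betas):
    """Discount each expected net cash flow by the product of one-period
    risk-adjusted factors, as in the multi-period CAPM valuation (4.47).

    expected_ncf : E(NCF_t) for t = 1..T
    rf, E_iM     : per-period risk-free rates and expected market returns
    betas        : betas[t-1][j-1] is the beta on NCF_t in period j
    """
    price = 0.0
    for t, ncf in enumerate(expected_ncf, start=1):
        factor = 1.0
        for j in range(1, t + 1):
            factor *= 1 + rf[j-1] + (E_iM[j-1] - rf[j-1]) * betas[t-1][j-1]
        price += ncf / factor
    return price

# a single cash flow of 100 one period ahead, with project beta 0.5
p1 = capm_price([100.0], [0.03], [0.15], [[0.5]])
```

With a zero beta in all periods before realization the routine reproduces the "no intermediate uncertainty" case discussed below, where only the final period carries a risk premium.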
Fama introduces multiplicative uncertainty by allowing expectations of net cash flows realized at t to evolve in each prior period τ (omitting subscript k) as

E_τ(NCF_t) = E_{τ−1}(NCF_t)(1 + ε_τ),   (4.48)

where ε_τ is a random variable with zero mean and a constant covariance with the return on the market portfolio. This process imparts uncertainty to Cov(V_{t,τ}, i_Mτ) and E_{τ−1}(V_{t,τ}), but without imparting uncertainty to the discount rates in (4.47). Fama demonstrates this by using (4.48) to write the discounted value at τ − 1 of the net cash flows realized at date t as

V_{τ−1} = E_{τ−1}(V_τ) {[1 − λ_τ Cov(V_τ, i_Mτ)/E_{τ−1}(V_τ)] / (1 + i_τ)} = E_{τ−1}(V_τ) [1/(1 + E(i_τ))],

with λ_t = (E(i_Mt) − i_t)/Var(i_Mt).48 Notice how the normalized covariance Cov(V_τ, i_Mτ)/E_{τ−1}(V_τ) here is not the same as the normalized covariance used to determine the beta coefficients in the discount rates of (4.47), where it is Cov(V_τ, i_Mτ)/E(V_{τ−1}). Starting in the period prior to realization (t − 1) and using (4.48) to iterate back in time to the current period (0), with V_t = NCF_t, we have

V_0 = E_0(NCF_t) Π_{j=1}^{t} {[1 − λ_j Cov(V_j, i_Mj)/E_{j−1}(V_j)] / (1 + i_j)} = E_0(NCF_t) Π_{j=1}^{t} [1/(1 + E(i_j))].   (4.50)

It is clear from this expression that the ratio Cov(V_τ, i_Mτ)/E_{τ−1}(V_τ) must be non-stochastic for the expected discount rate E(i_τ) to be non-stochastic, which is the case for the multiplicative uncertainty in (4.48), as the two variables are perfectly correlated, with

Cov(V_τ, i_Mτ)/E_{τ−1}(V_τ) = Cov[E_τ(NCF_t), i_Mτ]/E_{τ−1}(NCF_t) = Cov(ε_τ, i_Mτ).49

Clearly, the discount factors for each net cash flow in (4.47) can change over time due to changes in the risk-free rate, the return on the market portfolio and the beta coefficients. But, as noted above, all of these variables are known with certainty at time 0.
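The last identity can be verified on a small discrete example. In the sketch below (the state space and numbers are my own, with ε constructed to have zero mean as the text requires), the normalized covariance of the updated expectation with the market return equals Cov(ε_τ, i_Mτ) exactly, so it carries no uncertainty of its own:

```python
# states: (probability, eps, i_M), with E(eps) = 0
states = [(0.25, -0.10, 0.02), (0.50, 0.02, 0.08), (0.25, 0.06, 0.20)]
E_prev = 100.0   # E_{tau-1}(NCF_t), the expectation before the update

def mean(f):
    return sum(p * f(e, m) for p, e, m in states)

def cov(f, g):
    mf, mg = mean(f), mean(g)
    return sum(p * (f(e, m) - mf) * (g(e, m) - mg) for p, e, m in states)

# updated expectation in each state: E_tau(NCF_t) = E_prev * (1 + eps)
lhs = cov(lambda e, m: E_prev * (1 + e), lambda e, m: m) / E_prev
rhs = cov(lambda e, m: e, lambda e, m: m)
# lhs == rhs: the normalized covariance is non-stochastic under (4.48)
```

The equality follows from the bilinearity of the covariance: Cov[E_{τ−1}(NCF_t)(1 + ε_τ), i_Mτ] = E_{τ−1}(NCF_t) Cov(ε_τ, i_Mτ), so dividing by E_{τ−1}(NCF_t) leaves a constant.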
While the risk-free and market returns in each time period are the same for all net cash flows, the net cash flows can have different beta coefficients in each time period due to differences in their contribution to market risk and in their intermediate uncertainty. We could also decompose the net cash flows in these valuation formulas by separating the revenues and costs in each period using the analysis illustrated in the previous section, by noting β_{NCF_τ} = β_{REV_τ} − β_{CST_τ} in each time period τ. Fama (1977) identifies circumstances where the present value calculations using (4.50) are less complex, and we demonstrate them here by computing the current value of security D with a single net cash flow in period t of NCF_t.

i No intermediate uncertainty. All the beta coefficients in periods prior to realization are zero when investors do not expect to revise their assessments of the project risk, where the current value of security D in (4.50) becomes

p_aD = E_0(NCF_t) Π_{j=1}^{t−1} [1/(1 + i_j)] {[1 − λ_t Cov(NCF_t, i_Mt)/E_{t−1}(NCF_t)] / (1 + i_t)}.

It is the expected value of the net cash flows at t − 1 discounted at the risk-free interest rate to the current period. A risk premium is only paid in period t when the net cash flows are realized because that is when they impact on the consumption risk of investors. We can also write this expression as

p_aD = E_0(NCF_t) Π_{j=1}^{t−1} [1/(1 + i_j)] [1/(1 + E(i_t))],

where E(i_t) is the only risk-adjusted discount rate.

ii Constant discount factors. When the risk-free interest rate, the return on the market portfolio and the beta coefficients in the discount rates are constant over time the current value of security D in (4.50) can be written as

p_aD = E_0(NCF_t) [(1 − λΩ)/(1 + i)]^t = E_0(NCF_t)/[1 + E(i_D)]^t,

where Ω = Cov(V_j, i_M)/E_{j−1}(V_j) is the same in each period j.
Now investors expect the net cash flows to become more uncertain as time passes, where the extra expected project risk in each period generates the same risk premium. Since Cov(V_j, i_M)/E_{j−1}(V_j) is constant, Cov(V_j, i_M) must rise over time to offset the increase in E_{j−1}(V_j). This increase in project risk is confirmed by writing the current value of the security in these circumstances as

p_aD = CE(NCF_t)/(1 + i)^t = E_0(NCF_t)/[1 + E(i_D)]^t,

where CE(NCF_t) is the certainty-equivalent value of the net cash flows at realization date t. Based on this relationship we can see that the risk premium grows by [1 + E(i_D)]/(1 + i) − 1 in each period. Thus, using the CAPM (or one of the other consumption-based pricing models) to value capital assets with a constant expected discount rate includes intermediate uncertainty in periods prior to the date net cash flows are realized. In other words, uncertainty is expected to increase over time, where the longer the time to realization the greater the uncertainty. It is important to emphasize the point made earlier that a single risk premium is paid to investors in period t for bearing the project risk. They are not paid a risk premium in prior periods to compensate them for bearing this project risk, but rather they are paid compensation for revisions made to their expectations of the project risk.

Finally, when aggregate consumption risk changes over time we can use one of the three multi-period consumption-based asset pricing models with G additional risk factors to isolate changes in the investment opportunity set, where the valuation formula in (4.47) becomes

p_ak = Σ_{t=1}^{T} E_0(NCF_t) / Π_{j=1}^{t} {1 + i_j + Σ_{g_j∈G} [E(i_{g_j}) − i_j] β_{g_j,j}}.

If we allow admissible intermediate uncertainty using (4.48) we can write the current value of the security as

p_ak = Σ_{t=1}^{T} E_0(NCF_t) Π_{j=1}^{t} {[1 − Σ_{g_j} λ_{g_j} Cov(V_j, i_{g_j})/E_{j−1}(V_j)] / (1 + i_j)},

where λ_{g_j} = [E(i_{g_j}) − i_j]/Var(i_{g_j}) is the normalized premium for the risk isolated by factor g in time period j, E(i_{g_j}) being the expected return on its mimicking portfolio. A numerical example is provided in Box 4.10 using the two-factor version of the ICAPM in equation (4.37) above. In the ICAPM, APT and CCAPM, the additional factors isolate changes in aggregate consumption risk over time. Even with intermediate uncertainty the risk-free interest rate, the returns on the mimicking factor portfolios and the beta coefficients for the factor risk in the discount rates in each time period are known with certainty.

There is empirical evidence that suggests security returns depend on trading rules over longer time periods where investors condition expected returns on information about variables such as the dividend–price ratio and firm size. Investors predict different expected returns on capital assets based on (possibly private) information they have about these variables which they use as signals. As noted earlier in Section 4.5.1, Fama and MacBeth (1973) find that the CAPM performs better empirically when additional factors such as firm size and book-to-market values are added to the model. Campbell and Cochrane (2000) and Lettau and Ludvigson (2001) get similar results by deriving conditional versions of the CAPM and CCAPM. Instead of including additional factors, however, they scale the parameters in the linear discount factors using the log consumption-wealth ratio.
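The constant-discount-rate case discussed above can be checked numerically. In this sketch (the rates and cash flow are my own illustrative numbers) discounting the expected cash flow at a constant risk-adjusted rate is equivalent to discounting a shrinking certainty equivalent at the risk-free rate, with the implied risk deduction growing with the time to realization:

```python
i, E_iD = 0.03, 0.12   # illustrative risk-free and risk-adjusted rates
E_ncf = 100.0          # expected net cash flow realized at date t

risk_deductions = []
for t in (1, 2, 5):
    pv_risky = E_ncf / (1 + E_iD) ** t                 # constant discount rate
    ce = E_ncf * ((1 + i) / (1 + E_iD)) ** t           # implied certainty equivalent
    assert abs(pv_risky - ce / (1 + i) ** t) < 1e-9    # both routes agree
    risk_deductions.append(1 - ce / E_ncf)             # share of E(NCF) deducted

# the deduction for risk grows with the time to realization
```

This illustrates the text's point that a constant expected discount rate embeds intermediate uncertainty: the further away realization is, the larger the share of the expected cash flow deducted for risk.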
In practical situations financial analysts frequently use the consumption-based pricing models as though they are unconditional models by assuming the current information investors use to value assets is fully reflected in their risky discount factors. Others assume aggregate consumption risk is constant over time and use the CAPM to compute an expected one-period simple return which they use as the discount rate in every time period. As noted above, this assumes there is intermediate uncertainty in time periods prior to the realization of the cash flows. Those who allow the consumption risk to change do so by adding additional factors to the CAPM, using the ICAPM, or by scaling the parameters in the linear discount factors in the CAPM and CCAPM.

Box 4.10 Using the ICAPM to compute the present value of a share

We use the ICAPM in a multi-period setting to compute the value of a share D which is expected to pay a dividend (DIV_1) of $1.44 in 12 months' time, and a final dividend (DIV_2) of $2.30 in 24 months' time when there is no intermediate uncertainty. Both dividends have the same aggregate consumption risk, and to simplify the analysis we compute the return on a mimicking portfolio (n) which is perfectly correlated with the stochastic interest rate (ρ_ni = 1) and uncorrelated with the return on the market (M) portfolio (ρ_nM = 0). This makes the coefficients in the ICAPM pricing equation in (4.37) standard beta coefficients. We assume the expected return on the market portfolio and the expected risk-free rate are constant over time, where the covariance between the dividends and the returns on the market and mimicking portfolios is summarized below, together with the variance in their returns.
        Mean     Variance    Covariance with DIV
i_M     0.18     0.25        0.30
i_n     0.04     0.08        0.06
i       0.03     —           —

Using the ICAPM, we compute the present value of the dividends in each period and sum them to obtain the current price of share D:

PV(DIV_1) = [E_0(DIV_1) − (E(i_M) − i) β_{M DIV} − (E(i_n) − i) β_{n DIV}] / (1 + i)
          = [1.44 − (0.15)(0.3/0.25) − (0.01)(0.06/0.08)] / 1.03 ≈ 1.22,

PV(DIV_2) = [E_0(DIV_2) − (E(i_M) − i) β_{M DIV} − (E(i_n) − i) β_{n DIV}] / (1 + i)^2
          = [2.30 − (0.15)(0.3/0.25) − (0.01)(0.06/0.08)] / 1.0609 ≈ 1.99.

Thus, the current share price is p_aD = PV(DIV_1) + PV(DIV_2) ≈ $3.21. We can now compute the risk-adjusted discount factors for the dividends in each period. For the dividends paid in the second period, we have

PV(DIV_2) = E_0(DIV_2) / [(1 + i){1 + i + (E(i_M) − i) β_MB + (E(i_n) − i) β_nB}] ≈ 1.99,

with β_MB = β_{M DIV}/PV_1(DIV_2) ≈ 0.59, β_nB = β_{n DIV}/PV_1(DIV_2) ≈ 0.37 and E(i_B) ≈ 0.12. If there is intermediate uncertainty that makes the risk-adjusted discount rate constant over time, the present value of the dividends in the second period will fall by approximately 16 cents to $1.83. This additional consumption risk reduces the current share price to $3.05.

Problems

1 The returns on shares A and B in four possible states at the end of next year are summarized below, with the state probabilities shown.

State (probability)    Share A (%)    Share B (%)
1 (0.3)                −25            50
2 (0.4)                5              40
3 (0.2)                30             25
4 (0.1)                −40            30

i Calculate the expected return, variance and standard deviation for each share.
ii Compute the coefficient of correlation for the returns to these shares.
iii Calculate the expected return, variance and standard deviation on a portfolio with 60 per cent invested in share A and 40 per cent in share B. Compute the diversification effect for this portfolio.
iv Derive the standard deviation for the return to the minimum variance portfolio and compute the diversification effect.
v Explain what factors determine the risk premium paid on any security.
2 In a capital market where the CAPM holds the expected return on a portfolio (G) that combines the risk-free asset (F) and the market portfolio (M) is 25 per cent. (This is based on a risk-free rate of 5 per cent, an expected return on the market portfolio of 20 per cent, and a standard deviation in the return on portfolio G of 4 per cent. This information is summarized in the diagram below.)

[Figure: the capital market line in mean–standard deviation space, with R_F (5), R_M (20) and R_G (25) on the vertical axis (R_P, %), and σ_M and σ_G (4) on the horizontal axis (σ_P, %).]

i What is the expected rate of return on a risky security that has a correlation coefficient with the market portfolio of 0.5 and a standard deviation of 2 per cent?
ii What is the correlation coefficient between the returns on portfolio G and the market portfolio?

3 Assume a mean–variance opportunity set is constructed from two risky shares, A and B, with the variance–covariance matrix for their returns of

⎡0.0064   0     ⎤
⎣0        0.0016⎦.

Share A has an expected return of 25 per cent and share B an expected return of 15 per cent. Suppose investor I chooses a 'market portfolio' which consists of 80 per cent in share A and 20 per cent in share B, whereas investor J chooses a different 'market portfolio' with 50 per cent in each share. Calculate the beta coefficient (β_A) of share A for each investor. Explain why they differ.

4 Security prices are determined, in part, by the non-diversifiable risk in their expected net cash flows. Suppose investors can construct a portfolio by combining two risky securities A and B with expected returns and standard deviations summarized below.

                      A       B
Expected return       0.08    0.12
Standard deviation    0.4     0.6

i Consider whether it is possible to diversify risk by bundling these assets together when the covariance on their returns is 0.08, and identify the factors that determine the size of the diversification effect. Explain how investors would choose their risky bundle when there is a risk-free security and the returns on assets A and B are jointly normally distributed.
ii How would they compute the risk premium for each asset A and B? (Assume investors have homogeneous expectations, and assets A and B are the only risky securities that trade. You are not required to compute their risky portfolios or the risk premiums on the assets.)

5 Two shares A and B trade in a capital market where the CAPM holds and they have current prices of $50 and $25, respectively. They are not expected to pay dividends over the next 12 months and at that time their prices in each of the three possible states of the world are summarized below.

State (probability)    Share A    Share B
1 (0.1)                $40        $28
2 (0.7)                $55        $30
3 (0.2)                $60        $20

Other information about the market includes σ_M = 0.10, ρ_AM = 0.8 and ρ_BM = 0.2.

i Calculate the beta coefficient for each share when the standard deviation in the return to the market portfolio is σ_M = 0.10, and the coefficients of correlation between the returns on each share and the market portfolio are ρ_AM = 0.8 and ρ_BM = 0.2, respectively.
ii Derive the expected return and standard deviation of a portfolio consisting of 40 per cent invested in share A and 60 per cent invested in share B. What is the beta coefficient of this portfolio?

6 Traders in the capital market where the CAPM holds expect the return on the market portfolio to be E(i_M) = 0.16 with a standard deviation of σ_M = 0.20 when the risk-free interest rate is i = 0.08. They also compute a covariance between the returns on risky security k and the market portfolio of σ_kM = 0.01.

i If you obtain new information that indicates the expected return on security k is 6 per cent (with σ_kM = 0.01), should you purchase it?
ii If security k actually pays 15 per cent over the year, has the CAPM failed?

7 Show how the intertemporal CAPM pricing equation in (4.37) becomes the CAPM pricing equation in (4.21) when the interest rate is non-stochastic, with σ_n = 0. Repeat the exercise when traded security returns are uncorrelated with changes in the interest rate, with ρ_ki = 0 for all k.
Provide economic intuition for these outcomes.

8 Use the consumption-based pricing model in (4.28) to solve the wealth of a consumer with the power utility function U(I_t) = I_t^{1−γ}/(1 − γ). Solve the coefficient of relative risk aversion for this function and then show that it is inversely related to the rate of time preference.

9 This question asks you to examine the consumption-based asset pricing model.

i Representative agent pricing models in the financial economics literature are special cases of the CBPM. Explain how consumers measure risk in the CBPM and why it is a representative agent model. In particular, summarize the assumptions that make it a representative agent model. How would allowing state-dependent preferences change the CBPM?
ii The CAPM, ICAPM, APT and CCAPM are special cases of the CBPM where in each model the stochastic discount factor (pricing kernel) has a linear relationship with the factors that isolate aggregate consumption risk using mean–variance analysis. Explain what the stochastic discount factor measures and how it is affected by risk aversion, then examine the way the factors used to isolate aggregate consumption risk are determined in each of the four models.
iii Derive the coefficient of relative risk aversion and the stochastic discount factor for the utility function U(I_t) = I_t^{1−γ}/(1 − γ), where I_t is consumption expenditure at time t. Why do consumers need to have preferences with a constant and identical coefficient of relative risk aversion in the single-good, single-beta coefficient version of the CCAPM? How does it differ from the CAPM and the single-beta coefficient version of the ICAPM?

10 This question looks at the mutuality principle and its implications for consumption risk faced by individual consumers.

i In all the representative agent pricing models the mutuality principle holds.
Explain this principle using the insurance problem for a large number of identical consumers who maximize expected utility over given income (M) facing loss (L) with probability π. Identify the important assumptions for it to hold, then show why idiosyncratic risk is costless to trade when it does.
ii Constantinides and Duffie (1996) explain the equity premium and low risk-free rate puzzles identified by Mehra and Prescott (1985) in the consumption-based asset pricing model by relaxing the requirement for the mutuality principle to hold. Outline the two puzzles and then provide an intuitive explanation for the solution offered by Constantinides and Duffie. (As you are unlikely to be familiar with their formal analysis you need only conjecture an intuitive explanation.)
iii Summarize two of the extensions made in the finance literature to the consumption-based pricing model that attempt to explain the pricing puzzles identified by Mehra and Prescott (1985) without moving outside the representative agent model framework. Provide intuitive explanations for the extensions and comment on their ability to explain the puzzles.

Private insurance with asymmetric information

There are a number of different sources for the risk in consumption expenditure. Consumers hold securities with risky returns and have variable income from labour and other capital assets. Diversifiable risk in security returns is eliminated by bundling them together in portfolios, whereas most of the diversifiable risk in their labour and other income is eliminated by purchasing insurance. We examined the diversification effect inside portfolios of securities earlier in Chapter 3. In this chapter we look at the role of insurance where consumers pool individual risk that can be eliminated across the population by the law of large numbers. Individual risk is where a portion of the population incurs income losses that do not affect aggregate consumption.
The only uncertainty is over the identity of the consumers in the group incurring losses. In a frictionless competitive market where insurance trades at prices equal to the probability of incurring losses, consumers with state-independent preferences fully insure to eliminate individual risk from their consumption expenditure. In effect, consumers pay premiums into a pool of funds that cover the insurance claims made by the proportion of the population incurring losses. When insurance trades at these actuarially fair prices there is no expected cost to consumers from removing individual risk from their consumption expenditure so they fully insure. When individual risk can be costlessly eliminated in this way it attracts no premium, where the only premium in expected security returns is determined by aggregate non-diversifiable risk. This is referred to as the mutuality principle that holds in all the consumption-based asset pricing models examined earlier in Chapter 4. We look at insurance with common (symmetric) information in Section 5.1 and then extend the analysis by introducing asymmetric information in Section 5.2. Consumers will fully insure against individual risk in a frictionless competitive equilibrium when traders have common information and state-independent preferences. We use this as a benchmark to identify the effects of trading costs and asymmetric information. Consumers choose not to fully insure when trading costs raise the price of insurance above the probability of incurring losses. When they are minimum necessary costs of trade the competitive equilibrium outcome is Pareto efficient, where expected security returns rise to compensate consumers for the cost of eliminating individual risk from their consumption expenditure. A number of government policies, including price stabilization schemes and publicly funded insurance, are justified as ways to overcome the effects of asymmetric information on private insurance. 
Moral hazard and adverse selection are the most widely cited problems. With moral hazard consumers have the ability to reduce their individual risk by undertaking costly self-protection. Whenever marginal effort, which cannot be observed by insurers, is not reflected in the price consumers pay for insurance, they less than fully insure. Adverse selection occurs when there are consumers with different probabilities of incurring losses that insurers cannot costlessly identify and separate. Low-risk types suffer from high-risk types buying low-risk policies. This imposes externalities on low-risk consumers. At one extreme high-risk types may prove too big a problem for the existence of a private insurance market. These are the most common reasons cited for incomplete insurance markets. Newbery and Stiglitz (1981) argue that moral hazard and adverse selection problems are especially severe in developing countries and they recommend the use of price stabilization policies to reduce the risk in consumer incomes. Dixit (1987, 1989) argues, however, that these stabilization policies should be evaluated in the presence of the moral hazard and adverse selection problems. Unless governments have better information than private traders, or can trade risk more efficiently, the stabilization policies are unlikely to be socially beneficial. Before we commence the formal analysis it is helpful to illustrate the difference between aggregate uncertainty and individual risk.1 Aggregate uncertainty is economy-wide non-diversifiable risk which agents trade according to their differing risk preferences, while individual risk is diversifiable across the economy by the law of large numbers. The difference between them can be illustrated in consumer budget constraints. Consider a situation where every individual has the same endowment of money income, M(s), in each state of nature s.
Since it can vary across states of nature they face aggregate uncertainty. Now suppose they can also suffer a loss L with probability π in each state s, where the income for each consumer becomes M_B(s) = M(s) − L for the bad (B) outcome, and M_G(s) = M(s) for the good (G) outcome without the loss with probability 1 − π. When a large number of consumers (H) have the same probability of loss π, aggregate income in each state will be equal to their expected income multiplied by the number of consumers,

(πM_B(s) + (1 − π)M_G(s))H = (M(s) − πL)H.

Within each state aggregate income is non-stochastic as a fixed proportion π of the population always has low income, while the remaining proportion 1 − π of the population always has high income. Thus, there is scope in this setting for mutual insurance among consumers to eliminate their individual risk. The combined effects of aggregate uncertainty and individual risk on consumer income are illustrated in a two-period setting with three states of nature in Figure 5.1, where individual risk doubles the number of random outcomes for consumers. In each state of nature bad outcomes occur with probability p_s × π, and good outcomes in each state with probability p_s × (1 − π).2 The analysis could be generalized by allowing loss L and its probability to both be state-dependent, but that would complicate things without providing much additional insight into the following results.

[Figure 5.1 Aggregate uncertainty and individual risk: at t = 1 each state s has a good outcome M(s) and a bad outcome M(s) − L.]

We focus on individual risk in this chapter as it is where asymmetric information problems arise. Since aggregate uncertainty is common to consumers it is possible for them to negotiate Pareto optimal intertemporal resource transfers whenever they agree on the true state and can trade in competitive markets.
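The claim that per-capita income is (almost exactly) non-stochastic under pure individual risk can be checked with a short simulation (a sketch only; the parameter values M = 500, L = 200, π = 0.4 and the function name are our own, not from the text):

```python
import random

def pooled_income_per_capita(M=500.0, L=200.0, pi=0.4, H=100_000, seed=1):
    """Each of H consumers independently loses L with probability pi.
    By the law of large numbers, per-capita income is close to M - pi*L."""
    rng = random.Random(seed)
    total = sum(M - L if rng.random() < pi else M for _ in range(H))
    return total / H

per_capita = pooled_income_per_capita()  # close to 500 - 0.4*200 = 420
```

With H = 100,000 the standard deviation of per-capita income is L·sqrt(π(1 − π)/H) ≈ 0.31, so the pooled outcome is essentially the certain income M − πL, even though each individual's income remains risky.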
We examined the effects of aggregate uncertainty in detail in Chapters 3 and 4, so it is removed from the following analysis.

5.1 Insurance with common information

It is useful to establish the full insurance equilibrium outcome as a benchmark for understanding how trading costs and asymmetric information affect private insurance. This benchmark occurs in a frictionless competitive economy where consumers have common information and maximize von Neumann–Morgenstern expected utility functions. Since these preferences are state-independent, consumers have the same marginal utility of income in states with the same consumption expenditure. Thus, they fully insure against individual risk when the marginal cost of insurance is equal to the probability of bad state outcomes. This benchmark is derived in Section 5.1.1 before trading costs are included in Section 5.1.2.

5.1.1 No administrative costs

Consider an economy with h = 1, ... , H identical consumers who choose a single good to maximize an NMEU function when income is subject only to individual risk. In the absence of insurance (0) the problem for each consumer is3

max EU_0 = πU(X_B) + (1 − π)U(X_G) subject to X_B ≤ M − L ≡ I_B and X_G ≤ M ≡ I_G,   (5.1)

with M being a fixed endowment of money income, X_B consumption expenditure in the bad state (B) when a dollar loss of L is incurred with probability π, and X_G consumption expenditure in the good state (G) without the loss.4 There is no discount factor in the expected utility function as we assume uncertainty is resolved the instant consumption choices have been made. In effect, no time elapses between the consumption choice and the resolution of uncertainty. By the law of large numbers there is certain aggregate income of (M − πL)H. Consumers face individual risk where a fixed proportion of the population incurs loss L. The only uncertainty is whether or not they are in that group.
Clearly, individuals consume their income endowments in the absence of insurance. After substituting the budget constraints in (5.1) into the expected utility function, we have EU_0 = πU(M − L) + (1 − π)U(M). At the endowment point the slope of the indifference curve measures the marginal valuation of bad to good state consumption expenditure, with

MRS_B,G = dX_G/dX_B (holding dEU_0 = 0) = −πU′_B/[(1 − π)U′_G],

where U′_B = ∂U/∂X_B and U′_G = ∂U/∂X_G are, respectively, the marginal utility in the bad and good states. Consumption without insurance is illustrated at point E in Figure 5.2. Along the 45° line where consumption expenditure is constant every indifference schedule has the same slope,

MRS_B,G = dX_G/dX_B = −π/(1 − π),

with U′_B = U′_G at point A in the diagram. Indeed, the indifference schedules have this slope for all consumption bundles in the commodity space for risk-neutral consumers, while it only holds for bundles on the 45° line for risk-averse consumers. In effect, risk-averse consumers are marginally neutral to risk on the 45° line where they have no consumption risk.

[Figure 5.2 Consumption without insurance: endowment point E, with slope −π/(1 − π) along the 45° line.]

Consumers use insurance to transfer income from the good to the bad state, where Q is the dollar value of insurance they purchase for premium P. By pooling the premiums they create a mutual fund to cover claims made by those who incur the income loss, where the consumer problem with insurance can be summarized as

max EU_Q = πU(X_B) + (1 − π)U(X_G) subject to X_B ≤ M − L + Q − P ≡ I_B and X_G ≤ M − P ≡ I_G.

Optimally chosen insurance (at an interior solution with Q > 0) satisfies

πU′_B(1 − ∂P/∂Q) − (1 − π)U′_G ∂P/∂Q = 0,   (5.3)

where ∂P/∂Q is the marginal cost of additional cover.
In a frictionless competitive market this price of insurance is obtained from the solution to the problem

max η = (P − πQ)H

for insurers, where η is the profit from selling insurance to H consumers in the population, with total revenue of PH and total cost of πQH, where πH is the number of people who incur the loss.5 Since the optimal supply of insurance solves

dη/dQ = (dP/dQ − π)H = 0,

a dollar of insurance trades at price dP/dQ = π, where each consumer pays premium P = πQ. After substituting this price into the optimality condition in (5.3), we have

πU′_B(1 − π) − (1 − π)U′_G π = 0.

For this to hold we must have the same consumption in each state, with U′_B = U′_G, where full cover is chosen with Q = L. This outcome is illustrated in Figure 5.3 where consumers trade from their endowment point at E to point Q on the 45° line where they have the same consumption expenditure of M − P in each state. Whenever income can be transferred from the good to the bad state at a price equal to the probability of loss, consumers with state-independent preferences fully insure. In effect, they can transfer consumption expenditure to the bad state at the same rate nature deals the income loss to them, so they fully insure. While they are worse off in the good state, the gain in the bad state is largely due to risk aversion that makes their indifference schedules concave to the origin in the consumption space. Their consumer surplus is the change in utility (EU_Q − EU_0) in the move from endowment point E to point Q on the 45° line in Figure 5.3. This is the most consumers would pay to have access to a competitive insurance market. As noted earlier, we use the full insurance outcome as a benchmark for identifying the effects of trading costs and asymmetric information on private insurance in the rest of this section and the next.

[Figure 5.3 Full insurance: consumers trade from endowment E to point Q on the 45° line at slope −π/(1 − π).]

Box 5.1 Full insurance: a numerical example

Leonard has a fixed income endowment of $500 which he allocates to consumption expenditure to maximize expected utility EU_0 = 0.4 ln X_B + 0.6 ln X_G, where X_B is consumption in the bad state when he loses $200 from theft with probability π = 0.4, and X_G consumption in the good state without the loss. In the absence of insurance (or other financial securities), Leonard consumes his endowment and gets expected utility of EU_0 ≈ 6.0102, where his marginal valuation for income in the bad state is

MRS_B,G = −dX_G/dX_B = πX_G/[(1 − π)X_B] = (0.4 × 500)/(0.6 × 300) = 200/180 ≈ $1.11.

If the marginal cost of transferring a dollar of income from the good to the bad state is less than this amount he will insure against the risk of theft. Since he has a diminishing marginal valuation for income in the bad state he is risk-averse, with d(dX_G/dX_B)/dX_B > 0. (There is no aggregate uncertainty in this example and individual risk is diversifiable across the population by the law of large numbers.) When Leonard can purchase insurance (Q) in a frictionless competitive market (with common information) at a marginal cost of c = $0.40 his budget constraints for bad and good state consumption expenditure are, respectively, X_B = 300 − 0.4Q + Q and X_G = 500 − 0.4Q. His optimal insurance choice solves

dEU_Q/dQ = 0.4(1 − c)/X_B − 0.6c/X_G = 0,

with X_B = X_G = $420 and Q* = $200. Thus, Leonard fully insures and gets expected utility of EU_Q* ≈ 6.0403, which is approximately 0.50 per cent higher than expected utility without insurance.

5.1.2 Trading costs

In practice insurers employ labour, invest capital and incur other operating expenses when they trade insurance. While some costs arise from gathering information, others arise from writing policies and processing claims.
These administrative costs may be fixed for each policy sold or may change with the amount of cover purchased. With a constant cost of τ_C to process each dollar of cover claimed, the problem for competitive insurers becomes

max η = (P − π(1 + τ_C)Q)H,

where τ_C πQH is the total cost of processing the insurance claims. The optimal supply of insurance solves

dη/dQ = (dP/dQ − π(1 + τ_C))H = 0,

where the price of each dollar of insurance is dP/dQ = π(1 + τ_C). Thus, consumers pay a premium of P = π(1 + τ_C)Q, which includes trading costs of τ_C πQ to cover the administrative costs of processing claims. We obtain the optimal insurance cover by substituting dP/dQ = π(1 + τ_C) into (5.3), where, for an interior solution (with Q > 0), we have

πU′_B(1 − π) − (1 − π)U′_G π − τ_C(π²U′_B + (1 − π)πU′_G) = 0.

The positive marginal trading costs in the third term must be offset by higher marginal utility in the bad state, with U′_B > U′_G, where this requires lower consumption expenditure M − L + Q − P < M − P and L > Q. Consumers choose not to insure at all when

πU′_B(1 − π) − (1 − π)U′_G π < τ_C(π²U′_B + (1 − π)πU′_G).

The equilibrium outcome for partial insurance is illustrated at Q′ in Figure 5.4. Once administrative costs push the market price of insurance above the probability of loss the indifference curve must be tangent to the budget constraint at bundles located above the 45° line. If insurers incur a constant administrative cost of τ_Q for writing each dollar of insurance, its market price is higher at ∂P/∂Q = π/(1 − τ_Q), where consumers pay a premium of P = πQ/(1 − τ_Q). These costs raise the slope of the budget constraint to −[π/(1 − τ_Q)]/[1 − π/(1 − τ_Q)], and consumers only partially insure (if they insure at all).

[Figure 5.4 Partial insurance with processing costs: the budget line steepens from slope −π/(1 − π) to −π(1 + τ_C)/[1 − π(1 + τ_C)], moving the optimum from Q to Q′ above the 45° line.]
Private insurance is unaffected by fixed trading costs when insurers can fund them using access fees (AF) that do not exceed consumer surplus. An access fee shifts the endowment point for consumers from E to E′ along the dashed line that is parallel to the 45° line in Figure 5.5. It is the most consumers would pay to trade in the insurance market at a price equal to the probability of loss. At this price they fully insure at point Q_AF. If the access fee rises above this amount consumers do not insure at all because it makes them worse off. When firms cannot use access fees or price-discriminate along consumer demand schedules they pass the fixed costs into a higher price of insurance and consumers partially insure (if they insure at all). Access fees can be problematic due to leakage in demand, but that is unlikely in the insurance market as policies are verifiable legal contracts between individual consumers and insurers.

[Figure 5.5 Insurance with fixed administrative costs: the access fee shifts the endowment from E to E′, and consumers fully insure at Q_AF at slope −π/(1 − π).]

Box 5.2 Administrative costs and insurance: a numerical example

Suppose insurers incur a constant marginal cost of τ_P = 0.2 on each dollar of insurance they sell to Leonard in Box 5.1, where the price he must now pay rises from $0.40 to c = π/(1 − τ_P) = $0.50. His budget constraints for bad and good state consumption expenditure are, respectively, X_B = 300 − 0.5Q + Q and X_G = 500 − 0.5Q, where his optimal insurance choice solves

dEU_Q/dQ = (0.4/X_B)[1 − π/(1 − τ_P)] − (0.6/X_G)[π/(1 − τ_P)] = 0,

with X_B = (2/3)X_G. Using the budget constraints we find Leonard purchases insurance of Q* = $40, and consumes X_G* = $480 and X_B* = $320. He has expected utility of 6.0116, which is approximately 0.475 per cent lower than his expected utility without trading costs.
Since the trading costs raise the relative cost of each dollar of bad state consumption expenditure from π/(1 − π) = 2⁄3 to π/(1 - π - τP) = 1, Leonard no longer fully insures. Recall from Box 5.1 that his marginal valuation for bad state consumption is 2⁄3 of a dollar of good state consumption expenditure when he has the same consumption in each state, which is less than the marginal cost of insurance. It is tempting to automatically conclude trading costs are a source of inefficiency when they change the relative cost of good and bad state consumption expenditure and restrict private insurance. But while they are minimum necessary costs of trade they do not distort private activity. If regulatory or other barriers restrict entry into the market the price of insurance can rise above marginal cost and cause allocative inefficiency. Throughout the following analysis we assume trading costs are zero, or if they are positive they are fixed and less than consumer surplus. Hence, any equilibrium outcome with less than full insurance will result from market failure. 5.2 Insurance with asymmetric information Insurers need to know the probability of income losses for consumers. In many situations consumers can change these probabilities and/or the size of any losses by expending effort. Moreover, they can have different loss probabilities. For example, drivers have different skills and other attributes that give them different accident probabilities, and the probability of having a car accident can be reduced by driving more carefully and in good weather conditions. When information is costly to obtain insurers can have incomplete (asymmetric) information about these probabilities, and this can lead to equilibrium outcomes where some consumers have less than full insurance. 
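For the log-utility examples in Boxes 5.1 and 5.2 the interior first-order condition π(1 − c)/X_B = (1 − π)c/X_G is linear in Q and can be solved in closed form (a sketch; the function name is ours, and c denotes the per-dollar price of cover):

```python
def optimal_cover(M, L, pi, c):
    """Interior solution of max pi*ln(M - L + (1-c)*Q) + (1-pi)*ln(M - c*Q).
    FOC: pi*(1-c)/X_B = (1-pi)*c/X_G, which is linear in Q."""
    q = (pi * (1 - c) * M - (1 - pi) * c * (M - L)) / (c * (1 - c))
    return max(0.0, q)  # corner solution Q = 0 when the price is too high

fair = optimal_cover(500, 200, 0.4, c=0.40)    # actuarially fair price
loaded = optimal_cover(500, 200, 0.4, c=0.50)  # price with a loading factor
```

At the actuarially fair price the solution is full cover (Q = L = 200); at c = 0.5 cover falls to Q = 40, consistent with partial insurance once trading costs raise the price above the probability of loss.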
Two cases are considered in this section, moral hazard and adverse selection.6

5.2.1 Moral hazard

In most practical situations consumers can take actions to reduce expected losses in income from individual risk. They take precautions to reduce the probability of loss through self-protection, or the size of the loss incurred through self-insurance. We look at how they impact on the demand for market insurance using the analysis in Ehrlich and Becker (1972). When competitive insurers can costlessly observe marginal reductions in the probability of income losses they adjust their insurance premiums accordingly, and consumers fully insure. However, with costly monitoring and asymmetric information, the price of insurance will not reflect the marginal effort expended by individual consumers, who have a diminished incentive to self-protect and choose to partially insure. We make the probability of loss a function of effort, with π(e), where it is assumed dπ/de = π_e < 0 and d²π/de² = π_ee > 0. This relationship links the levels of effort and insurance together, where the more consumers insure, the less effort they expend on self-protection (with de/dQ = e_Q < 0), so that dπ/dQ = π_Q = π_e e_Q > 0. The consumer problem with self-protection and insurance becomes

max_{Q,e} EU_Q = π(e)U(X_B) + [1 − π(e)]U(X_G) − e subject to X_B ≤ M − L + Q − P and X_G ≤ M − P,   (5.7)

where the cost of effort e is measured as a dollar cost to expected utility. To demonstrate moral hazard and identify its consequences for the amount of insurance traded, we consider two extremes – no monitoring and complete monitoring. With no monitoring the insurance premium is determined by P = π(Q)Q, and with complete monitoring it is determined by P = π(e)Q. No monitoring (P = π(Q)Q) is the extreme form of asymmetric information where monitoring is prohibitively costly, and the price of insurance is not directly affected by individual changes in effort.
Instead, effort has an indirect effect on the premium when insurers observe a reduction in the probability of income losses at the aggregate level. They see the amount of insurance purchased and can anticipate the level of self-protection, but without observing its marginal effects. Thus, the price of insurance is determined by the amount purchased, with ∂P/∂Q = π(Q). Using (5.7), the optimal level of effort solves

∂EU/∂e = π_e[U(X_B) − U(X_G)] − 1 = 0,   (5.8)

where the first term is the marginal benefit from self-protection and the second term its marginal cost. Since π_e < 0, utility in the good state must exceed utility in the bad state to make the first term positive and equal to unity, with U(X_B) < U(X_G). Thus, consumers less than fully insure. In the absence of monitoring there is no reduction in the price of insurance from marginal increases in effort in (5.8). Any benefits flow indirectly through the insurance decision where insurers identify the probability of loss from the amount of insurance purchased. Using (5.7), the optimal insurance choice solves

∂EU/∂Q = π(1 − π)(U′_B − U′_G) − π_Q Q[πU′_B + (1 − π)U′_G] = 0,

where the first term is the net marginal consumption benefit from insurance, and the second term the change in the insurance premium, with π_Q > 0. Notice that the first term is the same as the condition for optimally chosen insurance in the absence of self-protection in (5.3), while the second term is the higher price of insurance due to the fall in self-protection; it is an externality that spills over from the effort choice. Thus, consumers partially insure, with U′_B > U′_G and X_B < X_G. Complete monitoring (P = π(e)Q) is the opposite extreme where monitoring is assumed to be costless, so that insurers observe marginal effort and adjust the price of insurance accordingly, with ∂P/∂Q = π(e).
At an interior solution to the consumer problem in (5.7) the optimal effort choice solves

∂EU/∂e = π_e[U(X_B) − U(X_G)] − 1 − π_e Q[πU′_B + (1 − π)U′_G] = 0,

where the last term isolates the reduction in the price of insurance from marginal effort that leads to more self-protection than the solution in (5.8) without monitoring. In these circumstances the optimal insurance choice solves

∂EU/∂Q = π(1 − π)(U′_B − U′_G) = 0,

which is the same as the optimal condition in (5.3) with common information where consumers fully insure, with U′_B = U′_G.7 In summary, consumers only partially insure when they are not compensated for their marginal effort with costly monitoring. The lack of monitoring imposes an externality on consumers that affects their effort and insurance choices.

5.2.2 Adverse selection

Another externality arises from asymmetric information when insurers cannot distinguish between consumers with different individual risk. Low-risk types suffer from the presence of high-risk types who purchase insurance at low-risk prices. We demonstrate this externality using the analysis in Rothschild and Stiglitz (1976) where consumers are divided into those with either a high (H) or low (L) probability of loss – a proportion λ have the same high probability π_H and the remaining proportion 1 − λ the same low probability π_L. In every other respect they are identical because they have the same preferences, income and dollar loss. We rule out moral hazard by assuming they cannot change their risk type through self-protection. With different risk types the consumer problem becomes

max_{Q^h} EU_Q^h = π^h U(X_B^h) + (1 − π^h)U(X_G^h) subject to X_B^h ≤ M − L + Q^h − σ^h Q^h and X_G^h ≤ M − σ^h Q^h, for h ∈ {H, L},

where σ^h is the price of insurance for each risk type h ∈ {H, L}. The optimal insurance decision for an interior solution solves

π^h U′_B^h(1 − σ^h) − (1 − π^h)U′_G^h σ^h = 0 for h ∈ {H, L}.   (5.13)
Box 5.3 Self-protection with costless monitoring: a numerical example

The impact of self-protection on private insurance will be demonstrated here by allowing Leonard (in Box 5.1) to reduce his probability of losing income through theft, with π = 1 − √e for 0 ≤ e ≤ 1, where e is the cost to expected utility from expending effort. Notice how effort has a positive and diminishing marginal product, with π_e < 0 and π_ee > 0. With costless monitoring Leonard will

max π(e) ln X_B + [1 − π(e)] ln X_G − e subject to X_B ≤ 300 − π(e)Q + Q and X_G ≤ 500 − π(e)Q.

In the absence of insurance (with Q = 0) he consumes his income endowment in each state, with X_B = 300 and X_G = 500, where optimal self-protection (at an interior solution) must satisfy the condition

−[1/(2√e_0)](ln X_B − ln X_G) − 1 = 0.

This leads to e_0* ≈ 0.0652 and π_0* ≈ 0.75, with expected utility of EU_0 ≈ 5.7691. When Leonard can purchase insurance in a frictionless competitive market (with complete information), the optimal insurance choice satisfies

(π/X_B)(1 − π) − (1 − π)(π/X_G) = 0,

and the optimal effort level

−[1/(2√e_Q)](ln X_B − ln X_G) + [Q/(2√e_Q)][π/X_B + (1 − π)/X_G] − 1 = 0.

Based on the insurance condition Leonard fully insures, with X_B* = X_G* and Q* = 200, where this allows us to write the condition for optimal effort as 3√(e_Q*) + 2e_Q* − 1 = 0, with e_Q* = 0.0788354 and π* ≈ 0.72. The ability to transfer income from the good to the bad state at a marginal cost equal to the probability of loss raises his expected utility by almost 0.5 per cent from EU_0 ≈ 5.7691 to EU_Q ≈ 5.7965.

By rearranging this expression we find that indifference curves over good and bad state consumption have slope equal to the relative cost of insurance, with

dX_G^h/dX_B^h (holding dEU^h = 0) = −π^h U′_B^h/[(1 − π^h)U′_G^h] = −σ^h/(1 − σ^h) for h ∈ {H, L}.

Equilibrium in the insurance market can take a number of forms.
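The effort condition in Box 5.3 can be verified directly: with π(e) = 1 − √e (as the box's numbers imply) and full cover Q = 200, common consumption is X = 300 + 200√e, and the first-order condition reduces to a quadratic in √e (a sketch reproducing the box's figures):

```python
import math

# Root of 3*sqrt(e) + 2*e - 1 = 0, i.e. 2*s**2 + 3*s - 1 = 0 with s = sqrt(e)
s = (-3 + math.sqrt(17)) / 4
e_star = s * s            # optimal effort with costless monitoring
pi_star = 1 - s           # equilibrium probability of loss
X = 300 + 200 * s         # state-independent consumption with full cover
eu = math.log(X) - e_star # expected utility with insurance
```

This gives e* ≈ 0.0788 and π* ≈ 0.72, with expected utility ≈ 5.7965, matching Box 5.3.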
Insurers may personalize the contracts for high- and low-risk types in a separating equilibrium, or they may sell a

Box 5.4 Self-insurance without market insurance

At the beginning of this section we noted the possibility of consumers being able to self-insure against income losses. Suppose Leonard can reduce the size of his loss from theft by expending effort to secure it in a safe place, with L = 200 − 200√e, where this effort reduces expected utility by e/4. To simplify the analysis we assume he cannot self-protect and faces a given probability of loss of π = 0.4, where his optimization problem becomes

max 0.4 ln X_B + 0.6 ln X_G − e/4 subject to X_B ≤ 500 − L, X_G ≤ 500 and L = 200 − 200√e for 0 ≤ e ≤ 1.

In the absence of market insurance, the optimal effort level solves

(0.4/X_B)(100/√e) − 1/4 = 0.

By using the constraint on consumption expenditure in the bad state when it binds, with X_B = 300 + 200√e, we have e_0* = 0.234103, which results in loss L_0* ≈ $103.23 and consumption expenditure of X_B* ≈ $396.77 and X_G* ≈ $500. In these circumstances Leonard's expected utility is EU_0 ≈ 6.0636, which is approximately 0.9 per cent higher than his expected utility without self-insurance in Box 5.1. The additional consumption opportunities with self-insurance are illustrated below. In the absence of insurance the consumption opportunities are constrained by the frontier BEC, and with self-insurance they are constrained by frontier BE′C′. As Leonard expends effort to reduce the income loss it has two competing effects on his expected utility. It rises with the extra bad state consumption and falls with the extra effort, where the extra consumption moves him to a new indifference schedule with higher utility while the extra effort reduces the utility on each indifference schedule (a relabelling effect).
Thus, his expected utility is maximized at an outcome like point A along segment EE′ of the consumption frontier, where at the margin the move to a new indifference schedule is offset by the relabelling effect.

[Box 5.4 diagram: consumption frontiers BEC (without self-insurance) and BE′C′ (with self-insurance), with the optimum at point A.]

Box 5.5 Self-insurance with competitive market insurance

When Leonard can self-insure and purchase market insurance he has additional consumption opportunities if market insurance is less costly at the margin than self-insurance. In this setting his optimization problem becomes

max 0.4 ln X_B + 0.6 ln X_G − e/4 subject to X_B ≤ 500 − L − 0.4Q + Q, X_G ≤ 500 − 0.4Q and L = 200 − 200√e for 0 ≤ e ≤ 1.

At an interior optimum the demand for market insurance satisfies

(π/X_B)(1 − π) − (1 − π)(π/X_G) = 0,

where Leonard fully insures, with Q* = L* and X_B* = X_G*. His optimal effort level satisfies

(0.4/X_B)(100/√e_Q*) − 1/4 = 0.

By using the budget constraint for bad state consumption, with X_B = 420 + 80√e, we can write the optimality condition for effort as 21√(e_Q*) + 4e_Q* − 8 = 0, where e_Q* = 0.127246. Leonard expends less effort when he can purchase market insurance than he did previously in its absence in Box 5.4. Even though the income loss rises to L_Q* ≈ $128.66, market insurance increases his consumption expenditure in each state to X_B* = X_G* ≈ $448.54, where expected utility rises by approximately 0.2 per cent to EU_Q ≈ 6.0742. The new equilibrium outcome is illustrated in the diagram below at point F, which is on an indifference schedule with higher expected utility than consumption at point A in the absence of market insurance. The larger income loss moves him to the left of point A, while market insurance allows him to trade along the solid line with slope −π/(1 − π) onto the 45° line.

[Box 5.5 diagram: Leonard trades from the self-insurance frontier to point F on the 45° line at slope −π/(1 − π).]

single contract to all risk types in a pooling equilibrium.
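The effort condition in Box 5.5 reduces the same way: with full market cover at price 0.4, consumption is X = 420 + 80√e in both states, and the condition 21√e + 4e − 8 = 0 is a quadratic in √e (a sketch reproducing the box's figures):

```python
import math

# Root of 21*sqrt(e) + 4*e - 8 = 0, i.e. 4*s**2 + 21*s - 8 = 0 with s = sqrt(e)
s = (-21 + math.sqrt(569)) / 8
e_star = s * s                 # optimal effort alongside market insurance
loss = 200 - 200 * s           # residual loss, fully covered in the market
X = 420 + 80 * s               # state-independent consumption
eu = math.log(X) - e_star / 4  # expected utility
```

This gives e* ≈ 0.1272, a loss of ≈ $128.66 and consumption of ≈ $448.54 in each state, with expected utility ≈ 6.0742, matching Box 5.5.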
Both are examined to see whether they are robust to competition. With complete information, insurers can separate the risk types, so they offer insurance at actuarially fair odds, with σ_H = π_H and σ_L = π_L. Thus, from (5.13) both risk types fully insure, with U′_B^h = U′_G^h for h ∈ {H, L}. This separating equilibrium is illustrated in Figure 5.6 where high-risk types locate at point H on budget constraint P_H, and low-risk types locate at point L on budget constraint P_L. The slopes of their budget constraints are equal to the ratio of the bad to good state probabilities. Once again, the price of insurance is determined by competitive insurers who

max η = Σ_h [(1 − π^h)σ^h − π^h(1 − σ^h)]Q^h H^h for h ∈ {H, L},

with H^H = λH and H^L = (1 − λ)H. From the first-order condition on this problem, we have π^h/(1 − π^h) = σ^h/(1 − σ^h) for h ∈ {H, L}, where insurers break even when the risk types are correctly screened. When insurers cannot separate the risk types, the high-risk consumers try to locate at L by declaring themselves to be low-risk types.9 Thus, insurers make losses as they raise insufficient revenue to cover the cost of their insurance claims. Therefore, with asymmetric information, the contracts L and H cannot be equilibrium contracts. In a pooling equilibrium, insurers sell a single contract to both risk types. This is illustrated in Figure 5.7 as contract PP on price line P̄ that lies between the separating price lines P_L and P_H. This price line has slope −σ̄/(1 − σ̄), where σ̄ is the average price of insurance, with σ̄ = λπ_H + (1 − λ)π_L and λ = N_H/(N_H + N_L). Along P̄ insurers make losses on high-risk policies, but they are cross-subsidised by profits on low-risk policies. But the pooling equilibrium is not stable as new entrants to the insurance market can offer low-risk type contracts in the cross-lined region in Figure 5.7 that make them better off without attracting the high-risk types who remain at PP.
However, the pooling contract PP is no longer

Figure 5.6 Insurance with complete information.

[The figure plots X_G against X_B with the 45° line X_G = X_B: low-risk types fully insure at L on price line P_L, with slope −σ_L/(1 − σ_L) = −π_L/(1 − π_L), and high-risk types at H on price line P_H, with slope −σ_H/(1 − σ_H) = −π_H/(1 − π_H), both lines passing through the endowment E at (M − L, M).]

Box 5.6 A separating equilibrium

Consider an insurance market where 20 per cent of consumers are high-risk types (H) with probability π_H = 0.6 of incurring a $200 loss, while the remainder are low-risk types (L) with probability π_L = 0.4 of incurring the same size loss. They all maximize the expected utility function

  π_h √X_B^h + (1 − π_h)√X_G^h, for h ∈ {H, L},

where X_B^h and X_G^h are consumption expenditure in the bad and good states, respectively. When they have the same fixed money income of $500 we can summarize their optimization problem as

  max π_h √X_B^h + (1 − π_h)√X_G^h
  subject to X_B^h ≤ 500 − 200 − σ_h Q_h + Q_h,
             X_G^h ≤ 500 − σ_h Q_h, for all h ∈ {H, L},

with σ_h being the marginal cost of insurance Q_h. There is no aggregate uncertainty here because 60 per cent of high-risk types and 40 per cent of low-risk types suffer the $200 loss in income with certainty. The only uncertainty is over the identity of the consumers who incur the losses in each group. In the absence of insurance (with Q_h = 0 for all h) everyone consumes their endowment, with X_B^h = $500 − $200 = $300 and X_G^h = $500 for h ∈ {H, L}, where high-risk types get expected utility of EU_0^H = 19.34 and low-risk types EU_0^L = 20.35. When they can purchase insurance against the income loss their optimal choice satisfies

  [π_h/(2√X_B^h)](1 − σ_h) − [(1 − π_h)/(2√X_G^h)]σ_h = 0, for all h ∈ {H, L}.

In a frictionless competitive market with common information each risk type is offered an insurance contract that allows them to purchase insurance at a marginal cost equal to their probability of loss, with σ_h = π_h for all h.
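These outcomes are easy to verify numerically. The sketch below (the function names are mine) computes expected utility at the endowment and under full insurance at actuarially fair prices σ_h = π_h:

```python
import math

def expected_utility(pi, x_bad, x_good):
    # square-root expected utility from Box 5.6
    return pi * math.sqrt(x_bad) + (1 - pi) * math.sqrt(x_good)

def full_insurance(pi, income=500, loss=200):
    # at fair odds each type fully insures, paying premium pi*loss and
    # consuming the same amount in both states
    x = income - pi * loss
    return x, expected_utility(pi, x, x)
```

Calling `full_insurance(0.6)` and `full_insurance(0.4)` reproduces the $380 and $420 consumption bundles and the expected utilities 19.49 and 20.49 reported below.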
Thus, in this separating equilibrium (SE) every consumer fully insures, where high-risk types pay a premium of 0.6 × $200 = $120, and low-risk types a premium of 0.4 × $200 = $80. The high-risk types consume X_B^H = X_G^H = $380 and raise their expected utility by 0.78 per cent to EU_SE^H = 19.49, while the low-risk types consume X_B^L = X_G^L = $420 and raise their expected utility by 0.69 per cent to EU_SE^L = 20.49.

Figure 5.7 Pooling equilibrium.

[The figure shows both risk types locating at contract PP on the pooling price line P̄, which lies between the separating price lines P_L and P_H.]

Box 5.7 A pooling equilibrium

We reconsider the numerical example provided earlier in Box 5.6 by introducing asymmetric information. Consumers know whether they are high (H) or low (L) risk types, but insurers do not. If all high-risk types purchase low-risk policies in a separating equilibrium (without restrictions on the level of cover), insurers incur losses of (σ_L − π_H)$200 = −0.2 × $200 = −$40 on every such policy. If insurers have no way of separating the risk types and offer a (break-even) pooling contract (P), the marginal cost of insurance becomes

  σ̄ = λπ_H + (1 − λ)π_L = 0.44,

where λ = 0.2 is the proportion of high-risk consumers in the population. The insurance cover in a pooling equilibrium is determined by the insurance chosen by low-risk types, which satisfies

  [π_L/(2√X_B^L)](1 − σ̄) − [(1 − π_L)/(2√X_G^L)]σ̄ = 0.

By using their budget constraints, X_B^L = 300 + 0.44Q_P and X_G^L = 500 − 0.44Q_P, we can rewrite this condition as

  π_L(1 − σ̄)√(500 − 0.44Q_P) − (1 − π_L)σ̄√(300 + 0.44Q_P) = 0,

where the optimal insurance choice is Q_P* ≈ $79.24, which is less than full cover. Clearly, the high-risk types would prefer to insure fully at the pooling price, as it is lower than their probability of loss, σ̄ < π_H. But any attempt to purchase more cover would allow insurers to identify them.
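The pooling cover Q_P* has no closed form, but the first-order condition above is monotone in Q and can be solved by bisection. A sketch, using the budget terms 300 + 0.44Q and 500 − 0.44Q as written in the box:

```python
import math

def pooling_cover(pi_l=0.4, sigma=0.44):
    # Bisect the low-risk first-order condition at the pooling price sigma:
    # pi_l*(1-sigma)*sqrt(500 - sigma*Q) - (1-pi_l)*sigma*sqrt(300 + sigma*Q) = 0
    def foc(q):
        return (pi_l * (1 - sigma) * math.sqrt(500 - sigma * q)
                - (1 - pi_l) * sigma * math.sqrt(300 + sigma * q))
    lo, hi = 0.0, 200.0          # foc(0) > 0 and foc is decreasing in q
    for _ in range(60):
        mid = (lo + hi) / 2
        if foc(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Substituting Q_P* back into the budget constraints gives the state-contingent consumption bundle and the expected utilities reported for each risk type.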
Thus, when both risk types purchase pooling contracts they choose the same level of cover and have consumption expenditure of X_B ≈ $334.87 and X_G ≈ $465.13, where high-risk types have expected utility of EU_P^H = 19.61, which is 0.62 per cent higher than expected utility in the separating equilibrium, while expected utility for low-risk types falls by 1.12 per cent to EU_P^L = 20.26. This loss in utility for low-risk types is a measure of the negative externality imposed on them by the high-risk types not being truthful.

profitable because low-risk types cross-subsidize the high-risk types. Therefore, a pooling equilibrium will not exist in these circumstances. A constrained separating equilibrium can exist when insurers cannot observe the risk types: they screen them by restricting cover on low-risk policies. An example is illustrated in Figure 5.8 by low-risk policy L′ on price line P_L, where the high-risk types are indifferent between L′ and policy H on price line P_H. Clearly, low-risk types prefer full insurance at L, but the unconstrained separating equilibrium is unstable. Insurers break even when they sell contracts L′ and H because consumers separate according to their risk type. It now remains to show that these policies are robust to competition from other types of contracts. Indeed, there are circumstances where pooling contracts can make both risk types better off than they are with contracts L′ and H. It depends on the location of the pooling price line relative to the indifference curves of low-risk types. Two pooling price lines, P̄_1 and P̄_2, lie between the separating price lines P_L and P_H in Figure 5.9. The slope of the break-even pooling price line is determined by the proportion of consumers in each risk type, where a larger proportion of high-risk types makes it steeper. Consider price line P̄_1, which has a lower proportion of high-risk types than P̄_2.
Since it cuts the indifference curve of low-risk types, a pooling contract can make both risk types better

Figure 5.8 Separating equilibrium.

[The figure shows high-risk types fully insuring at H on price line P_H, while low-risk types are restricted to contract L′ on price line P_L, below full insurance at L.]

off in the cross-lined region. While these contracts undermine the constrained separating equilibrium, as we saw earlier, the pooling equilibrium cannot exist either. Thus, the insurance market closes down and no contracts are traded in these (extreme) circumstances. If the pooling price line lies below the indifference curves of the low-risk types through point L′ in Figure 5.9, a pooling contract cannot undermine the constrained separating equilibrium. For example, no contract along price line P̄_2 can attract low-risk types, and this allows the constrained separating equilibrium at L′ and H to exist. In summary, high-risk types impose externalities on low-risk types when insurers cannot separate them due to asymmetric information. The unconstrained separating equilibrium and the pooling equilibrium are unstable as high-risk types attempt to trade low-risk policies.

Figure 5.9 Non-existence of separating equilibrium.

[The figure shows pooling price lines P̄_1 and P̄_2 between the separating price lines P_L and P_H; P̄_1 cuts the low-risk indifference curve through L′, while P̄_2 lies below it.]

Box 5.8 A constrained separating equilibrium

In Boxes 5.6 and 5.7 we solved the insurance outcomes in separating and pooling equilibria, respectively. Since these equilibrium outcomes are unstable, insurers offer high- and low-risk policies with a constraint on the cover offered to low-risk types. We find the constraint on them by isolating the consumption bundle where the indifference curve for high-risk types (tangent to price line P_H) cuts the low-risk price line P_L at point L′ in the diagram below. All the insurance contracts along P_L between the endowment point E and L′ make low-risk types better off, while high-risk types prefer full cover at point H along price line P_H.
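The bundle L′ can be located numerically by bisecting for the bad state consumption at which the high-risk indifference curve through H (with expected utility √380) crosses the low-risk price line X_G = 700 − (2/3)X_B. A sketch using the Box 5.6 parameters (small differences from the rounded figures in the box reflect rounding of the target utility):

```python
import math

def find_l_prime():
    target = math.sqrt(380)        # high-risk expected utility at H
    def gap(xb):
        xg = 700 - 2 * xb / 3      # low-risk price line through the endowment
        return 0.6 * math.sqrt(xb) + 0.4 * math.sqrt(xg) - target
    lo, hi = 300.0, 420.0          # gap(300) < 0 < gap(420), gap increasing
    for _ in range(60):
        mid = (lo + hi) / 2
        if gap(mid) < 0:
            lo = mid
        else:
            hi = mid
    xb = (lo + hi) / 2
    xg = 700 - 2 * xb / 3
    q = (500 - xg) / 0.4           # restricted cover on low-risk policies
    return xb, xg, q
```

The solution gives bad state consumption of roughly $314, good state consumption of roughly $490.6 and restricted cover of roughly $23.5 on low-risk policies.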
[Figure: the high-risk indifference curve with expected utility EU_SE^H through H cuts the low-risk price line at L′, where bad state consumption is approximately 314; the 45° line and endowment E are shown.]

We isolate L′ by solving the consumption bundle where the high-risk type indifference curve through bundle H,

  0.6√X_B + 0.4√X_G = 19.49,

cuts the price line for low-risk policies,

  X_G = 700 − (2/3)X_B,

where X_G^L′ ≈ $490.65 and X_B^L′ ≈ $313.86. Thus, the insurance cover on low-risk policies in the constrained separating equilibrium solves

  X_G^L′ = 500 − 0.4Q_L′ ≈ $490.65, with Q_L′ ≤ 23.375,

where low-risk consumers get expected utility of EU_CE^L ≈ 20.38. This equilibrium outcome is stable because the low-risk indifference schedule through L′ lies above the pooling price line. Thus, their expected utility is higher in the constrained separating equilibrium than it is in the pooling equilibrium, with EU_CE^L ≈ 20.38 > EU_P^L ≈ 20.26. A constrained separating equilibrium exists when a pooling contract cannot attract low-risk types. But low-risk types are worse off because they cannot fully insure, and this welfare loss is due to actions by high-risk types.

5.3 Concluding remarks

We have examined the role of competitive insurance in economies with individual risk. Consumers with state-independent expected utility functions fully insure when there is common information and no marginal trading costs. They deviate from this equilibrium outcome when there are marginal trading costs or asymmetric information with costly monitoring. When insurers cannot observe effort by consumers to reduce their probability of loss they do not adjust the insurance premium for marginal effort, so consumers choose less than full insurance. And when they cannot separate different risk types they restrict insurance cover on low-risk policies to deter higher-risk types from taking them.
Problems

1 Jeremy purchases insurance (Q, measured in dollars), at a price equal to the probability (π) of incurring income loss (L), in order to

  max EU = π I_B^(1−γ)/(1 − γ) + (1 − π) I_G^(1−γ)/(1 − γ)
  subject to I_B ≤ X̄ − L + Q − πQ,
             I_G ≤ X̄ − πQ,

where X̄ is an endowment of income and γ > 0.

(Start from the endowment point with e = 0 and then increase effort.) Derive the marginal cost of increasing bad state consumption (measured in units of good state consumption) and isolate the circumstances where optimally chosen self-insurance eliminates the variability in consumption across the two states.

ii Now suppose consumers can self-insure and purchase insurance (Q) in a competitive market at price p per dollar. (Assume there are no transactions costs.) This makes their consumption in each state

  x_B = M − L(e) − e + Q − pQ and x_G = M − e − pQ.

Derive the first-order conditions for optimally chosen insurance when self- and market insurance are both positive. What is the marginal cost of increasing bad state consumption (measured in units of good state consumption) with market insurance, and how does it compare with the marginal cost of bad state consumption through self-insurance when they are optimally chosen? Use this cost comparison to explain how self-insurance changes when transactions costs raise the dollar cost of market insurance p. (Assume changes in effort do not affect p.)

Derivative securities

Financial securities are used to fund investment in future consumption flows. As noted in Chapter 3, these flows are subject to aggregate uncertainty and individual risk – aggregate uncertainty must ultimately be borne by consumers, while individual risk can be eliminated through the diversification effect from holding securities in portfolios and by trading private insurance. In a complete capital market where the no arbitrage condition holds, consumers costlessly eliminate individual risk.
In previous chapters no distinction was made between primary financial securities, such as shares and bonds, and the derivative securities written on them, such as options and futures contracts. Derivatives have values that derive from underlying assets, both financial and physical, because they represent claims to them at predetermined prices and times. Derivatives are normally thought of as financial securities whose values derive from one or a bundle of other financial securities, but the term is used more widely here to include options and futures contracts for commodities. There has been a large growth in derivative trades in recent years. Micu and Upper (2006) report a combined turnover in fixed income, equity index and currency contracts (including both options and futures) on international derivatives exchanges of $US344 trillion in the fourth quarter of 2005. Most financial contracts were for interest rates, government bonds, foreign exchange and stock indexes, while the main commodity contracts were for metals (particularly gold), agricultural goods and energy (particularly oil). Derivative securities play a key role in facilitating trades in aggregate risk and allowing investors to diversify individual risk by completing the capital market. They also provide valuable information about the expectations investors have for future values of the underlying assets. An option contract gives the bearer the right to buy or sell an underlying asset at a predetermined price on or before a specified date – a call option is the right to buy the asset and a put option the right to sell it. Bearers are not obliged to exercise these rights, and do so only if it increases their wealth. In contrast, a forward contract is an obligation to buy an underlying asset at a specified price and time.
They are frequently implicit contracts, where, for example, consumers commit to purchase a house or car at a future time at an agreed price, and most employers commit to pay wages and salaries for labour services rendered to them. A futures contract is a standardized forward contract that trades at official stock exchanges, such as the New York Stock Exchange and the Australian Securities Exchange. They can be traded repeatedly up to the settlement date, where the gains and losses made on them are settled daily through a clearing house. To ensure they are liquid markets, traders are required to maintain deposits with them to cover expected daily gains and losses, and price limits are employed to restrict the size of daily changes in futures prices. Standardized options contracts also trade on formal exchanges.

A key objective in this chapter is to price these derivative securities. One approach would be to adopt an economic model that allows us to solve the stochastic discount factor in the consumption-based pricing model in (4.28) and use it to value the payouts to derivatives. This is the approach adopted in the asset pricing models examined earlier in Chapter 4 where restrictions were imposed on consumer preferences and the distributions of returns to securities to make the stochastic discount factor linear in a set of state variables reported in aggregate data. But the preferred approach obtains pricing models for derivatives as functions of the current values of the underlying asset prices, together with conditions specified in the contracts. Since the assets already trade we can use their current prices as inputs to the pricing model without trying to compute them. In effect, this approach works from the premise that markets price assets efficiently and all we need to do is work out how the derivatives relate to the underlying assets themselves.
In Section 6.1 we summarize the peculiar features of options contracts and then present the Black and Scholes (1973) option pricing model. This values share options using five variables – the current share price, its variance, the expiry date, the exercise price and the risk-free interest rate. It is a popular and widely used model because this information is readily available, but it does rely on a number of important assumptions. For example, the variance on the underlying share is constant and the option is a European call option which cannot be exercised prior to expiration, as is permissible for American options. There is evidence that the variances in share prices change over time, and it may be optimal to exercise an American option early when shares pay dividends.

After summarizing the defining features of futures contracts in Section 6.2 we look at how they are priced. Once again, their values are determined by the current price of the underlying asset, the settlement date, margin requirements, price limits and storage costs when the goods are storable. Commodity futures trade for agricultural commodities, metals and oil, while financial futures trade for interest rates, stock indexes, shares, bonds and foreign currencies. It is rare for the underlying assets to be delivered at settlement, and cash settlements are much more common. Traders who buy a futures contract commit to pay the contract price for the underlying asset at settlement. When the future spot price is less than the contract price the buyer pays the difference to the seller by way of cash settlement, while the reverse applies when the future spot price is higher. Physical commodities may actually be delivered at settlement when they are used as inputs to production.

6.1 Option contracts

In this section we focus on formal options contracts, but it is also important to recognize the role of informal contracts in the allocation of resources.
For example, firms may buy land that gives them the option of expanding activities in locations where the availability of land in the future is uncertain. If there are fixed sunk costs from undertaking activities with uncertain future payouts, there may be gains from delaying them until the uncertainty is partially resolved. There are welfare gains when the benefits from waiting exceed the costs of creating the option.1 In other circumstances options raise welfare by allowing a more efficient allocation of risk between consumers. For example, share options allow consumers to truncate the payouts on shares, as the contracts give them the right, but not the obligation, to trade them at (or before) a specified time and price. Specialized options contracts trade over the counter while standardized contracts trade on formal exchanges.

6.1.1 Option payouts

There are two types of options – a call that gives the buyer the right to purchase an underlying asset, and a put that gives the buyer the right to sell it. These contracts specify the following conditions:

• The underlying asset that can be traded at the discretion of the buyer. Most of the standardized financial contracts are for a fixed quantity of financial securities, such as individual shares, bonds, stock indexes and foreign exchange, while standardized commodity contracts specify quantities of goods with defined qualities. Usually quality is determined by setting bounds on their physical attributes.
• The expiration date when the contract lapses.
• The exercise (strike) price for the underlying asset when the contract lapses.

European options can only be exercised when they expire, while American options can be exercised any time up to or at expiration. Later we identify realistic circumstances where American options will not be exercised early because the expected payouts are higher from waiting.
For that reason we examine European options in the following analysis, and do so for an individual ordinary share (S) in a publicly listed company. Holders of call options on shares receive no dividends or voting rights until they exercise the option. Thus, the payouts (at expiration, time T) from holding a European call option are

  C̃_T = max(S̃_T − Ŝ_T, 0),   (6.1)

where S̃_T is the random share price at time T and Ŝ_T the exercise price. Since the option is only exercised when S̃_T > Ŝ_T, the buyer pays no more than Ŝ_T for the share. A European put option gives the buyer the right to sell the share at time T, where the payouts are

  P̃_T = max(Ŝ_T − S̃_T, 0).   (6.2)

Since it is exercised when S̃_T < Ŝ_T, the buyer receives no less than Ŝ_T from selling the share. The payouts to both contracts at time T are summarized in Figure 6.1 by the solid lines, while the dashed lines summarize the corresponding liabilities incurred by sellers. Trading costs would shift down the solid lines and shift up the dashed lines. These payouts are not profits because buyers pay a price for option contracts. There are expected profits when the discounted value of the option payouts exceeds the option price. And this occurs when traders have different information about the share price at the expiration date. In a frictionless competitive market with common information, arbitrage eliminates profit by equating option prices to the discounted value of their payouts. When European share options are written the exercise price is usually set near the market price of the share (when it pays no dividends). If, at any time prior to expiration, the share price exceeds the exercise price, the call option is in the money; it is out of the money when the share price is lower, and at the money when it is the same. This also applies to put options when the relationships between the market and exercise prices are reversed.
Options have a positive market value even when, prior to expiration, they are out of the money, if the variance in the share price creates the possibility of their being in the money at expiration.

Figure 6.1 Payouts on options contracts at expiration date (T).

[The figure shows buyers' payouts (solid lines) and sellers' liabilities (dashed lines) against S̃_T: for a call, max(S̃_T − Ŝ_T, 0); for a put, max(Ŝ_T − S̃_T, 0).]

Traders use options contracts to exploit any profits from having different information and to spread risk. They combine options with other assets to create perfect substitutes for all existing traded securities. By bundling securities with their perfect substitutes they can create risk-free arbitrage portfolios to exploit any profits in security returns. Options can also be used to complete the capital market so that consumers can trade in every state of nature. Before demonstrating these roles we summarize the payouts (at date T) to the underlying share (S̃_T), and to a risk-free zero coupon bond (B_T) with a payout equal to the exercise price (Ŝ_T). These are illustrated in Figure 6.2, where payouts to buyers of both securities are solid lines and payouts by their sellers are dashed lines.

Figure 6.2 Payouts at time T on shares and risk-free bonds.

Based on the law of one price, the payouts to a call option on this share can be replicated by purchasing the share and a put option on it, and selling a zero coupon bond with a payout equal to the exercise price on the option, where

  C̃_T = S̃_T + P̃_T − B_T.2   (6.3)

The combined payouts to these three securities are illustrated in Figure 6.3. There are potential arbitrage profits when the current price of the option is not equal to the discounted value of the payouts to the three securities that replicate it. In a competitive capital market there are perfect substitutes for all new securities, where options play an important role in making this possible.
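The payout definitions and the replication identity can be expressed directly. The sketch below checks, state by state, that a share plus a put, less a bond paying the strike, reproduces the call payout:

```python
def call_payout(s, strike):
    # payout to a European call at expiration: max(S - strike, 0)
    return max(s - strike, 0.0)

def put_payout(s, strike):
    # payout to a European put at expiration: max(strike - S, 0)
    return max(strike - s, 0.0)

def replicated_call(s, strike):
    # long one share, long one put, short a bond with face value = strike
    return s + put_payout(s, strike) - strike
```

For any terminal share price the two payouts coincide, which is the law-of-one-price argument behind the replication.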
To see how options can be used to shift risk between traders with different information, we consider three strategies for combining put and call options:

• spread, where a put and call are combined with the exercise price on the put set below the exercise price on the call at a common expiration date;
• straddle, which combines a put and call on a share with the same exercise price and expiration date;
• strip and strap, which combine two puts with a call and two calls with a put, respectively.

Figure 6.3 Replicating payouts on a call option.

Figure 6.4 Payouts to a straddle.

[Figure 6.4 also shows two probability distributions for the share price: the market's (solid) and a trader's higher-variance distribution (dashed), with the same mean.]

The payouts for a straddle are illustrated in Figure 6.4, together with two probability distributions for the share price with the same mean value. Clearly, the payouts to the straddle rise with the variance in the share price. When traders have common information and see the same probability distribution for the share price the straddle pays a normal return. But traders can make profits from holding the straddle when they have different information from the market that indicates there is greater volatility in the share price (around an unchanged mean). The dashed curve is a distribution where the variance in the share price is larger than the variance observed by market traders in the solid curve. Thus, traders with different information expect the two option contracts to have higher expected payouts and therefore value them above their current market prices. Clearly, if the volatility turns out to be smaller than the market believes, then traders make losses when they are long in the straddle. Those with better information make profits by taking the appropriate positions in the market, and this provides them with an incentive to become better informed. A spread allows traders to access payouts located in the tails of the probability distribution for the share price.
Thus, it becomes more attractive than the straddle when the largest difference between the trader and market expectations occurs in the extremities of the distribution. Traders can make profits by constructing a butterfly when they expect less volatility in the share price (with an unchanged mean) than the rest of the market. This is a strategy that goes long in a call in the money (with strike price Ŝ_T − a), long in a call out of the money (with strike price Ŝ_T + a) and short in two calls at the money (with strike price Ŝ_T). The payouts at expiration (T) are illustrated in Figure 6.5, together with the different market and trader probability distributions.

Figure 6.5 Payouts to a butterfly.

[The figure shows the butterfly payout peaking at Ŝ_T between strikes Ŝ_T − a and Ŝ_T + a, with the trader's distribution having less variance than the market's.]

6.1.2 Option values

Up to this point we have summarized the payouts for short and long positions on call and put options at their expiry dates. The next important step is to compute the market prices in time periods prior to expiration, that is, to compute the discounted present value of their payouts at any time t < T. To simplify the analysis we focus on share options and then consider how the pricing model changes for options on other assets. In a competitive capital market where the law of one price holds, Stoll (1969) uses (6.3) to obtain the put–call parity relationship for European share options, given by

  C̃_T − P̃_T = S̃_T − B_T,

where the option contracts have the same exercise price, which is also the payout to the risk-free bond, with B_T = Ŝ_T. This is confirmed by using the option payouts in (6.1) and (6.2), where

  max(S̃_T − Ŝ_T, 0) − max(Ŝ_T − S̃_T, 0) = S̃_T − Ŝ_T.

Since this parity relationship holds at expiration in every state of nature, it also holds for the current values of the assets, with

  C_0 − P_0 = S_0 − B_0,   (6.4)

where B_0 = Ŝ_T/(1 + i)^T is the value of risk-free debt that pays Ŝ_T with certainty at the expiry date.
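Put–call parity gives the put price directly from the call price. A minimal sketch (the function name is mine):

```python
def put_from_parity(call, share, strike, i, periods):
    # rearrange C0 - P0 = S0 - strike/(1+i)^T for the put value P0
    bond = strike / (1 + i) ** periods
    return call - share + bond
```

With the rounded Box 6.1 figures (C_0 = $2.95, S_0 = $19.10, strike $19.25, one quarterly period at i = 1/0.99 − 1) this returns roughly the $2.90 put value computed there.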
This means there is no need to separately compute the prices of call and put options. Once we know the value of a call option we can use it, together with the share price and the risk-free interest rate, to compute the value of the put option. For that reason we will focus on pricing call options on shares in the following analysis. The same model can be used to price European and American call options when shares pay no dividends, as American options are not exercised early in these circumstances. This is demonstrated by comparing the payouts on two portfolios of securities – one long in a share that pays no dividends, and the other long in a call on the share and a zero coupon bond with a payout equal to the exercise price on the call option (Ŝ_T). At expiration the payout on the share (S̃_T) never exceeds the combined payout on the call and bond (max(S̃_T, Ŝ_T)), which means the market value of the share cannot exceed the market value of the call and bond at any time t < T:

  S_t ≤ C_t + Ŝ_T/(1 + i)^(T−t).

On rearranging this expression, we can see why at any time t < T the market value of the call must be strictly greater than the payout from exercising the option early, with

  C_t ≥ S_t − Ŝ_T/(1 + i)^(T−t) > S_t − Ŝ_T,

where traders maximize profit by holding options until they expire, or by selling them rather than exercising early. But the following pricing models will not in general apply to American options when shares pay dividends, unless all traders expect the same dividend payments and compute their impact on the future share prices in the same way. As noted earlier, we could use the consumption-based pricing model in (4.28) to value the call option in any time period t < T, as

  C_t(S, T) = E(m̃_T C̃_T) = E[m̃_T · max(S̃_T − Ŝ_T, 0)].

But additional restrictions need to be imposed on consumer preferences and/or the distributions of security prices before this model can be estimated using financial data. In general
In general T ) is a non-linear function of a potentially circumstances the stochastic discount factor ( m larger number of variables that are difficult to solve. Before taking a different approach, however, we can use this pricing model to place upper and lower bounds on the option value using the current share price, the risk-free interest rate, the exercise price and the exercise date. This is illustrated in Figure 6.6, where the option value is measured against the share price at time t < T along curve AB. Since the payout on the option approaches the current share price as T goes to infinity, ASt sets the upper bound on the option value. In fact, it will be slightly lower when shareholders have valuable voting rights that do not accrue to option holders. The lower bound is determined by the current value of the payouts to the option, which is the difference between the current share price and the discounted value of the exercise price, where: CT ≥ ST − SˆT and Ct ≥ St − SˆT . (1 + i )T On that basis, the option value must lie inside the shaded region in Figure 6.6, where curve AB is an example of the valuation schedule. When the current share price is zero the call option has no value as traders are expecting no future net cash flows. Even when the current share price is equal to the discounted value of the exercise price it still has a positive value because the variance in the share price means there is a positive probability ~ ST ~ BC t S^T /(1 + i )T Figure 6.6 Bounds on call option values ~ St Derivative securities 191 it will be in the money at time T. The vertical distance between the lower bound and schedule AB is a measure of the time value of the option due to the variance in the share price. 
As the current share price rises above the discounted value of the exercise price, the option value approaches the lower bound in the diagram, because the share price is less

Box 6.1 Valuing options with Arrow prices: a numerical example

Consider the European call and put options written on Purple Haze Ltd shares. They expire in 3 months' time and have an exercise price of $19.25. The current share price is $19.10, while its future value at expiration in each of six possible states of nature is summarized below, together with the full set of Arrow prices. (The current share price is the Arrow-price-weighted sum of the state payouts, $19.10, and the Arrow prices sum to $0.99.)

  State:        1      2      3      4      5     6
  Share price:  25.80  12.50  16.20  26.40  8.90  22.00
  Arrow price:  0.18   0.22   0.16   0.19   0.09  0.15

Since the current value of a three-month risk-free bond that pays one dollar in each state is the sum of the Arrow prices ($0.99), the interest rate for the period is approximately 1 per cent, with i = 1/$0.99 − 1 ≈ 0.01. The payouts to the call and put options in each state are summarized below in the absence of transactions costs, with C_s = max(S_s − $19.25, 0) and P_s = max($19.25 − S_s, 0), respectively.

  State:  1     2     3     4     5      6
  Call:   6.55  0.00  0.00  7.15  0.00   2.75
  Put:    0.00  6.75  3.05  0.00  10.35  0.00

Using the Arrow prices, the current value of the call option is

  C_0 = (0.18 × $6.55) + (0.19 × $7.15) + (0.15 × $2.75) = $2.95,

while the current value of the put option is

  P_0 = (0.22 × $6.75) + (0.16 × $3.05) + (0.09 × $10.35) ≈ $2.90.

We can use these prices to confirm the put–call parity relationship in (6.4) by computing the current value of a risk-free bond that pays $19.25 in 3 months' time, as B_0 = $19.25 × $0.99 = $19.0575, where

  C_0 − P_0 = S_0 − B_0: $2.95 − $2.90 ≈ $19.10 − $19.06 ≈ $0.05.

Arrow prices are not used in practice as they are not observable. In general, they are difficult to compute using reported data as they are potentially complicated functions of the exogenous variables that determine a competitive equilibrium outcome.
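The arithmetic in Box 6.1 amounts to summing Arrow-price-weighted payouts over the six states, which can be sketched as:

```python
arrow = [0.18, 0.22, 0.16, 0.19, 0.09, 0.15]       # Arrow prices (Box 6.1)
share = [25.80, 12.50, 16.20, 26.40, 8.90, 22.00]  # state share prices
strike = 19.25

c0 = sum(q * max(s - strike, 0.0) for q, s in zip(arrow, share))  # call value
p0 = sum(q * max(strike - s, 0.0) for q, s in zip(arrow, share))  # put value
s0 = sum(q * s for q, s in zip(arrow, share))                     # share value
b0 = strike * sum(arrow)            # bond paying the strike in every state
```

Put–call parity holds exactly here: c0 − p0 and s0 − b0 are both about $0.05, since the call and put payouts net to S_s − strike in every state.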
That is why the popular pricing models in finance compute Arrow prices by placing restrictions on consumer preferences and/or security returns. likely to rise much further in the future. In other words, higher share prices are located further to the right-hand side of the probability distribution. As the exercise price rises the option value approaches the share price because there is a greater chance of it being in the money at expiration, but a higher exercise price lowers the option value by reducing the payouts at expiration. As noted earlier, we could derive an option pricing model that solves the share price by adopting the approach used to derive the CAPM, ICAPM, APT and CCAPM in Chapter 4. But that makes the model more difficult to use in practice because it solves the underlying value of the asset subject to the restrictions imposed by the options contracts. Since share prices are functions of variables that are difficult to measure, even with minimal restrictions placed on preferences and security returns, the pricing models perform poorly in empirical tests. Indeed, we saw earlier in Chapter 4 how poorly they perform in a number of empirical studies. Black and Scholes (1973) adopt a different approach by using the current share price to determine the market value of the options written on them. As they do not attempt to solve the price of the underlying asset their pricing model is much easier to use because it is a function of a small number of variables that are readily obtained from reported financial data. 6.1.3 Black–Scholes option pricing model Black and Scholes price a European call option on a share by constructing a replicating portfolio that combines the share with a risk-free bond, where the portfolio is continually adjusted over time to make its payoffs the same as the option. They invoke the law of one price and set the option value equal to the value of its perfect substitute, the replicating portfolio. 
Unlike the consumption-based pricing models examined in Chapter 4, no restrictions are placed on consumer preferences in their model. They make the following important simplifying assumptions:

1 The share price follows a random walk in continuous time, which is consistent with it being lognormally distributed in discrete time, and it has a constant variance.
2 It is a European option on a share that pays no dividends.
3 The risk-free interest rate is constant.
4 There are no frictions such as taxes and transactions costs in the capital market, where the no arbitrage condition holds (as a basis for invoking the law of one price).

Rather than provide a complete derivation of the model we focus on providing intuitive explanations for the steps taken. Based on the put–call parity relationship in (6.4), we construct a continuous risk-free hedge portfolio (H) which combines the share with a call option on it, and which at each time t (omitting time subscripts to avoid notational overload) has a market value of

H = a_S S + a_C C,    (6.5)

where a_S and a_C are the number of shares and call options held in the portfolio, respectively. Over time the share price and option value change, with

dH = a_S dS + a_C dC.    (6.6)

To see how the share and call are combined in the risk-free hedge portfolio at each point in time, consider the situation illustrated in Figure 6.7 where the value of the option is 7.5 cents when the current share price is 40 cents. (The positively sloped line is tangent to a point G located on the call option valuation schedule AB illustrated in Figure 6.6.) If, over the next very small interval of time, the share price can be either 35 cents or 45 cents, the slope of the valuation schedule is ∂C/∂S = 0.5. The hedge portfolio is kept risk-free by selling 1/(∂C/∂S) = 2 call options with every share held, which is one over the slope of the valuation schedule at point G.
When the share price is 45 cents the investor loses 2.5 cents on each option sold due to the increase in its market valuation. But this 5 cent loss is offset by the 5 cent gain in the share price. In contrast, when the share price is 35 cents the investor gains 5 cents on the two options, offsetting the 5 cent loss on the share. Since ∂S/∂C options are short-sold with every share purchased in the hedge portfolio, set a_S = 1 and a_C = −1/(∂C/∂S) in (6.6), where:

dH = dS − (∂S/∂C) dC.    (6.7)

The next step is to specify how the share and option values change over time. Since the option value derives from the underlying share price, we need only explain how the share price changes. Black and Scholes assume the rate of return on the share follows a geometric Brownian motion in continuous time, which over a small time interval (dt) is described by:

dS/S = µ_S dt + σ_S dz,    (6.8)

with µ_S being the instantaneous expected rate of return on the share (which measures the drift in the random walk over the time interval dt), σ_S the instantaneous standard deviation in the rate of return on the share, and dz a Wiener process.

Figure 6.7 Constructing a risk-free hedge portfolio.

A Wiener process, or Brownian motion, is the continuous-time limit of a random walk with independent increments having mean zero and variance proportional to the time interval. Thus, we can interpret µ_S dt as the expected rate of return from holding the share over the next small interval in time (dt), and σ_S dz as the unexpected change in the return (with E(dS/S) = µ_S dt), where the unexpected change is the product of the instantaneous standard deviation (σ_S), which is positive, and the stochastic deviation (or white noise, dz), which can be positive or negative. Thus, the proportionate change in the share price can be positive or negative over each small time interval.
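The two-state hedge just described can be verified numerically. Values are in cents, as in Figure 6.7; the call values of 10 and 5 in the up and down states are implied by the 2.5 cent moves described in the text:

```python
# Share at 40c moves to 45c or 35c; the call, worth 7.5c now, moves to
# 10c or 5c, so the slope of the valuation schedule is dC/dS = 0.5.
S_now, S_up, S_down = 40.0, 45.0, 35.0
C_now, C_up, C_down = 7.5, 10.0, 5.0

dC_dS = (C_up - C_down) / (S_up - S_down)  # 0.5
n_calls = 1.0 / dC_dS                      # short 2 calls per share held

# Change in the hedge portfolio (long 1 share, short n_calls calls)
change_up = (S_up - S_now) - n_calls * (C_up - C_now)
change_down = (S_down - S_now) - n_calls * (C_down - C_now)
# change_up == change_down == 0: the hedge is risk-free over the interval
```

The gains and losses cancel in both states, which is exactly why the continuously adjusted portfolio must earn the risk-free rate.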
Since z is purely random it is non-differentiable so standard calculus cannot be used to integrate the stochastic differential equation in (6.8). Instead, Ito's lemma is used to obtain a differential equation for changes in the value of the call option, of

dC = (∂C/∂S) dS + (∂C/∂t) dt + ½ (∂²C/∂S²) σ_S² S² dt.    (6.9)

After substituting this into (6.7), and noting that the rate of return on the hedge portfolio is by construction risk-free, with dH/H = i dt, we obtain a differential equation for changes in the current value of the call option:

∂C/∂t = iC − iS (∂C/∂S) − ½ (∂²C/∂S²) σ_S² S².    (6.10)

The most notable feature of this equation is that it is non-stochastic due to the absence of dS, which results from pricing the call option inside the continuously adjusted hedge portfolio to make it risk-free. Black and Scholes solve this differential equation subject to the boundary condition on the payouts at expiry date T in (6.1) (and also requiring C = 0 when S = 0), which leads to the Black–Scholes option pricing model,

C_t(S_t, T) = S_t N(d1) − Ŝ_T e^(−iτ) N(d2),    (6.11)

where

d1 = [ln(S_t/Ŝ_T) + (i + σ_S²/2)τ] / (σ_S √τ)  and  d2 = d1 − σ_S √τ,

for τ = T − t. It is based on the following assumptions:

- Changes in the share price are described by a Wiener process with a constant variance.
- It is a call option that pays no dividends.
- The interest rate is constant.
- There are no frictions in the capital market and the no arbitrage condition holds.

As noted above, the striking feature of this model is that it is a function of five variables we can obtain from reported financial data. Four of them are directly observable in reported financial data (the current share price, exercise price, expiration date, and interest rate), while the fifth (the variance in the share price) can be estimated from historical data.
Box 6.2 The Black–Scholes option pricing model: a numerical example

We use the Black–Scholes model here to compute the current value of a European call option written on an AutoGrand share. The current share price is $12.50, while the exercise price on the option is $9.50 when it expires in 3 months' time. Using past data, we find the share price has a standard deviation of 25 per cent, and the risk-free interest rate is 5.5 per cent. If no dividends are expected on the share over the next 3 months we can compute the current value of the option by substituting S0 = $12.50, Ŝ_T = $9.50, σ_S = 0.25, i = 0.055 and τ = 0.25 into the Black–Scholes model in (6.11),

C0 = 12.50 N(d1) − 9.50 e^(−0.055 × 0.25) N(d2),

with

d1 = [ln(12.5/9.5) + (0.055 + 0.25²/2)(0.25)] / (0.25 √0.25) ≈ 2.367994766,

and

d2 = d1 − 0.25 √0.25 ≈ 2.242994766.

Using the cumulative standard normal distribution function we find the inverse value of the hedge ratio is N(d1) ≈ 0.991057605, and the probability the option will be in the money at expiration is N(d2) ≈ 0.987551424. After substituting these into the valuation equation above, and using e^(−0.055 × 0.25) = 0.986344099, the option value is C0 ≈ $3.13. These workings are summarized below as the base case, together with the recalculation of the option value for a different expiration date, standard deviation, interest rate, exercise price and current share price, respectively.

             S0     Ŝ_T    τ      i      σ     d1      d2      N(d1)   N(d2)   e^(−iτ)   C0     Change (%)
Base case    12.5   9.5    0.25   0.055  0.25  2.3680  2.2430  0.9911  0.9876  0.9863    3.13      –
τ = 0.5      12.5   9.5    0.5    0.055  0.25  1.7964  1.6196  0.9638  0.9473  0.9729    3.29    +5.1
σ = 0.35     12.5   9.5    0.25   0.055  0.35  1.7343  1.5593  0.9586  0.9405  0.9863    3.17    +1.3
i = 0.08     12.5   9.5    0.25   0.08   0.25  2.4180  2.2930  0.9922  0.9891  0.9802    3.19    +1.9
Ŝ_T = 12.50  12.5   12.5   0.25   0.055  0.25  0.1725  0.0475  0.5685  0.5189  0.9863    0.71   −77.3
S0 = 9.50    9.5    9.5    0.25   0.055  0.25  0.1725  0.0475  0.5685  0.5189  0.9863    0.54   −82.7

The option value rises for increases in the time to maturity, the standard deviation in the share price and the interest rate, while it falls for a higher exercise price or a lower current share price.

Also, the model does not depend on investor risk preferences because they determine the current value of the share price, which is not solved in the model. Despite its apparent complexity, there is good intuition for the functional relationships in (6.11). N(.) is the standard cumulative normal distribution function over the random variable z with mean zero and unit standard deviation. It is the stochastic process that generates the variance in the share price, where N(d1) is the inverse of the hedge ratio (with N(d1) = ∂C/∂S) and N(d2) the probability the option will be in the money at expiration. On that basis we can interpret the value of the call in (6.11) as the market value of the shareholding required to replicate the call, S_t N(d1), less the present value of the implicit amount borrowed, Ŝ_T e^(−iτ) N(d2). We can confirm the earlier conjecture about the way the five variables in (6.11) affect the value of the option. It increases with a higher share price, interest rate, variance in the share price and expiration date. A larger variance and later expiration date both raise the value of the option by increasing the likelihood of it being in the money at expiration. Indeed, the time value of the option derives from the variance in the share price which increases over time. To see why this happens, consider the term σ_S √τ.
It measures the standard deviation in the rate of return on the share price over the life of the option (τ ≡ T − t), and is a property of the Wiener process that generates the uncertainty in the share price. In each infinitesimally small time interval over the life of the option the share price can rise or fall proportionally by σ_S around the expected increase of µ_S. Since it can rise (or fall) by σ_S around this expected trend in every period, the variance over the life of the option contract (τ) becomes σ_S² τ, so that the standard deviation is σ_S √τ. Thus, the later the expiration date the larger is the variance in the return on the underlying share. A higher interest rate also increases the option value because it reduces the current value of paying the exercise price, while a higher exercise price reduces the option value.

Merton (1973b) modifies the Black–Scholes option pricing model by including continuous dividend payments on the share and finds it is not optimal for traders to exercise early. Roll (1977b) achieves the same result by assuming dividend payments are known beforehand. Beyond these restrictions, however, the impact of dividends on the valuation of American call options is unclear. Similar problems arise for the valuation of American put options when the share price falls below the exercise price. As it gets closer to zero traders can eventually do better by exercising early and investing the proceeds in bonds for the remaining time to expiration. Merton was also able to extend the Black–Scholes model by allowing a stochastic interest rate, but not a stochastic variance in the share price or a random maturity date. Cox et al. (1979) derive the Black–Scholes model using binomial distributions, while Cox and Ross (1976) extend it by allowing the variance in the share price to change in a constant elasticity of variance model.
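As a check on these comparative statics, the formula in (6.11) can be coded directly. A minimal sketch using only the Python standard library (the function names are ours), reproducing the Box 6.2 base case:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution, N(x), via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S: float, X: float, tau: float, i: float, sigma: float) -> float:
    """European call value from (6.11): C = S N(d1) - X e^(-i tau) N(d2)."""
    d1 = (log(S / X) + (i + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return S * norm_cdf(d1) - X * exp(-i * tau) * norm_cdf(d2)

# Box 6.2 base case: S0 = 12.50, X = 9.50, tau = 0.25, i = 5.5%, sigma = 25%
C0 = bs_call(12.50, 9.50, 0.25, 0.055, 0.25)  # about 3.13
```

Raising tau, sigma or i raises the value, while raising the exercise price or lowering the share price reduces it, matching the table in Box 6.2.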
6.1.4 Empirical evidence on the Black–Scholes model

Black and Scholes test their pricing model using data for over-the-counter options on securities traded in the United States between 1966 and 1969. They use their model to estimate the expected prices of these contracts and compare them to the actual prices. Any differences between them did not provide significant expected profit when combined inside the hedge portfolio. In particular, the profit could be eliminated by transactions costs as low as 1 per cent. Galai (1977) and Bhattacharya (1983) also find evidence supporting the ability of the Black–Scholes model to predict option prices using data from the Chicago Board Options Exchange. However, MacBeth and Merville (1979) and Beckers (1980) obtained better estimates of in-the-money options using the constant elasticity of variance model than the constant variance Black–Scholes model. Rubinstein (1985) concluded the extensions to the Black–Scholes model could not explain all the bias in its estimates all the time.

6.2 Forward contracts

It is common for consumers to make commitments to buy and sell commodities in future time periods. For example, they agree to trade major items like houses and cars in this way. Firms also contract to buy inputs ahead of time to ensure uninterrupted future production flows and to reduce the variance in their costs, while others contract to sell their outputs ahead of time to reduce the variance in their sales revenue. The seller of a forward contract agrees to deliver an underlying asset at a specified date (or, for commodity contracts, within a specified period of time) at a specified price. Many of these exchanges are implicit forward contracts while others are official, and the official contracts take one of two forms.
The first are specialized forward contracts that trade between specified sellers and buyers in over-the-counter trades, while the second are standardized futures contracts that trade on official futures exchanges run by most stock exchanges. Over-the-counter contracts match individual buyers and sellers and are rarely traded between the time they are written and the date they are settled. They are used by traders of assets where differences in quality are important to buyers. In contrast, buyers and sellers of futures contracts trade them frequently over this period but through an official futures exchange. There is no matching of buyers and sellers in futures markets because there is less variability in the quality of the underlying assets or their quality can be summarized in sufficient detail in standardized contracts. The underlying assets can be physical commodities, such as wheat, wool and metals, or financial securities, including individual shares, bonds, share indexes and foreign currencies. Clearly, financial securities are likely to be less variable in quality than commodities, and as a consequence are more easily accommodated in standardized futures contracts. Quality differences in some commodities can be summarized fairly accurately in standardized contracts as well. For example, wool can be described by a comprehensive set of characteristics such as colour, fibre length, fibre width, vegetable matter content, weight and yield, which are measured and reported at all the wool auctions in Australia. For that reason, wool futures are actively traded in Australia. Futures contracts trade in highly liquid markets where gains and losses from movements in futures prices are settled daily through clearing houses operated by futures exchanges. In other words, the gains and losses are marked to the market each trading day. The main reason for doing this is to stop traders defaulting on their contracts, or at least, to limit the losses when default occurs.
To that end, futures traders are required to make initial deposits and maintain them at margins that are expected to cover any daily losses. Additionally, bounds are set to limit the daily price changes on futures contracts, where the price limits are normally raised when they bind for a number of consecutive days. Without margins and price limits, futures traders can be exposed to very large losses because they pay no price when the contract is initially written. This is a problem when speculators trade futures contracts solely for the purpose of making expected profits by exploiting different information from the market. They make profits by combining the underlying asset with its futures contract when they expect a future spot price for the commodity at settlement different from the rest of the market. For example, if the market expects a lower future spot price the speculator can make profits by selling the futures contract and buying the underlying asset.

Forward contracts play a number of important roles that facilitate the efficient allocation of resources over time. Their prices are important signals about the expectations market traders have for future prices of the underlying assets, where speculators can profit from accessing new information. Another role is that of hedging, where traders use futures to transfer aggregate risk and eliminate individual risk. Farmers sell wool futures to reduce the variance in their income, where they trade aggregate risk to agents who specialize in risk bearing, and diversify individual risk across wool growers with uncorrelated production risk. Futures contracts are used to perform these functions when they do so at lower cost than other alternatives such as purchasing explicit insurance contracts.

6.2.1 Pricing futures contracts

We now turn to the pricing of futures contracts, noting that prices of non-standardized forward contracts can be obtained as special cases.
As stated earlier, no price is paid for a futures contract at the time it is written, where the contract price is the expected spot price of the underlying asset at the date of settlement. In prior time periods the asset price normally changes, thereby causing the futures price to change. For the most part, the underlying asset is not traded at settlement, but rather the difference between the contract and spot price is paid in cash. If the contract price is higher, the buyer pays the difference to the futures exchange which transfers it to the seller, while the reverse applies when it is lower. This ensures the seller always ends up being paid the contract price for the underlying asset at settlement. However, rather than wait until then, traders settle their gains and losses at the end of each trading day based on the closing price of their contract. This is where the gains and losses are marked to market against deposits lodged at the futures exchange by traders. In time periods closer to the settlement date the contract price approaches the underlying asset price. Since there are fundamental differences between the payouts to commodity and financial futures contracts, we consider them separately.

A commodity futures contract that delivers a storable commodity (N) at settlement date T has a current price (0F_NT) equal to the current spot price of the commodity plus the opportunity cost of time and storage costs:

0F_NT = p_N0 (1 + i_T)^T + 0Q_NT (1 + i_T)^T,    (6.12)

where i_T is the average annual yield on a risk-free bond that matures at time T, and 0Q_NT the present value of the marginal cost of storing commodity N over the period. As a way to understand this relationship, consider a situation where storage is costless, so that 0F_NT = p_N0 (1 + i_T)^T. A trader who sells a futures contract commits to sell the commodity at time T for price 0F_NT.
By arbitrage in a frictionless capital market this price must be equal to the cost of borrowing funds at the risk-free rate to buy the good now at price p_N0 and to hold it until time T when the contract expires. At this time the trader receives certain revenue of 0F_NT and retires the debt by paying p_N0 (1 + i_T)^T. Marginal storage costs raise the futures price because they increase the cost of transferring the commodity into the future. Hicks (1939) and Keynes (1923) refer to this arbitrage activity as hedging, while Kaldor (1939) also includes a marginal convenience yield in the commodity futures contract price when stocks provide positive benefits (by way of lower costs) for users. For example, a steel producer can minimize disruptions to its production run by holding (or having access to) stocks of coal, iron and other raw material inputs, where the benefits lower the futures price, so that

0F_NT = (p_N0 + 0Q_NT − 0Y_NT)(1 + i_T)^T,    (6.13)

where 0Y_NT is the present value of the marginal convenience yield from storing a unit of the commodity until time T. In practice, it is useful for traders to know whether futures prices are accurate predictors of expected spot prices because it provides them with valuable information for making intertemporal consumption choices. In a certainty setting (or with risk-neutral consumers) the futures price in (6.13) is also the expected spot price for the commodity at time T, with 0F_NT = E0(p_NT). However, when the return from holding the commodity is uncertain the expected spot price becomes

E0(p_NT) = p_N0 [1 + E(i_NT)]^T,    (6.14)

where E(i_NT) is the expected annual yield from holding commodity N until time T. Any storage costs and convenience yield are included in this expected return. When the commodity contributes to consumption risk, and consumers are risk-averse, a risk premium will drive the futures price below the expected spot price.
Indeed, some commodity producers use futures contracts to reduce their consumption risk by transferring it to speculators who are specialists at risk bearing, and they pay a risk premium to them as compensation by discounting futures prices. This can be demonstrated by using one of the consumption-based asset pricing models examined earlier in Chapter 4 to isolate the risk premium embedded in the expected holding return to commodity N in (6.14). If, for example, the CCAPM in (4.41) holds, the expected spot price in (6.14) can be decomposed as

E0(p_NT) = (p_N0 + 0Q_NT − 0Y_NT)(1 + i)^(T−1) [1 + i + (ī_I − i)β_IN],    (6.15)

where β_IN is commodity N's contribution to aggregate consumption risk, and ī_I − i is the market premium paid for this risk. Notice the risk premium (ī_I − i)β_IN is only paid in the last period. Thus, there is no intermediate uncertainty where consumers revise their expectations about the commodity risk in prior periods. It is clear from (6.13) and (6.15) how the risk premium drives the futures price below the expected spot price. Keynes refers to this as normal backwardation, and it can be illustrated by considering the futures contract that matures in 1 year, with T = 1, where

0F_NT = E0(p_NT) − p_N0 (ī_I − i)β_IN.    (6.16)

Dusak (1973) finds no evidence of any discount in the futures prices for wheat, corn and soybeans in US data. In other words, all the commodity price variability was diversifiable risk and attracted no risk premium. Futures prices for financial securities do not include the last two terms in (6.13) because they trade in highly liquid markets with (almost) no storage or other trading costs and no convenience yield. Thus, the current price of a futures contract for share S that pays no dividends and is settled at date T is:

0F_ST = S0 (1 + i_T)^T,    (6.17)

where S0 is the current share price and i_T the average annual yield to maturity (T) on a long-term bond.
There are two ways of acquiring share S at time T – one is to purchase it now by paying price S0 and then holding it until time T, while the other is to purchase a futures contract which allows the holder to pay the price 0F_ST for the share at time T. Arbitrage in a frictionless competitive capital market will equate the present value of these options, with S0 = 0F_ST / (1 + i_T)^T. Once again, the futures price will be less than the expected spot price when the economic return on the share contains market risk. To see this, we compute the expected share price at settlement date T in the absence of intermediate uncertainty as

E0(S_T) = S0 (1 + i_T)^(T−1) [1 + ī_ST],    (6.18)

where ī_ST is the expected return to share S in period T. When the CCAPM in (4.41) holds we can decompose (6.18) as

E0(S_T) = S0 (1 + i_T)^(T−1) [1 + i + (ī_I − i)β_IS],    (6.19)

where (ī_I − i)β_IS is the premium for the share's contribution to aggregate consumption risk. Clearly, this risk premium drives the futures price in (6.17) below the expected spot price in (6.19), with 0F_ST < E0(S_T). When dividends are paid in periods prior to settlement the futures price in (6.17) falls as they are not received by the holder of the futures contract, so that

0F_ST = [S0 − PV0(DIV_ST)](1 + i)^T,    (6.20)

with PV0(DIV_ST) being the discounted present value of the dividends paid to share S over periods 0 to T. Since share S has a lower market value when it pays dividends prior to settlement the futures price also falls. Futures prices for discount bonds (D) are described by (6.17) as all their payouts occur at maturity:

0F_DT = D0 (1 + i_T)^T.    (6.21)

This can differ from the expected spot price when the interest rate changes over time. Long (1974) uses the ICAPM to compute the forward price of a discount bond that matures at time T when the interest rate and relative commodity prices are stochastic, and finds that the expectations hypothesis can fail in the presence of the no arbitrage condition.
As a way to demonstrate this point, we allow changes in aggregate consumption risk through changes in the interest rate (β_iD) and relative commodity prices (β_πD), where the expected spot price for the discount bond becomes

E0(D_T) = D0 [1 + i_1 + (ī_M − i_1)β_MD + (ī_i − i_1)β_iD + (ī_π − i_1)β_πD]^T.    (6.22)

When the bond contributes to aggregate consumption risk, with β_iD > 0 and β_πD > 0, the futures price in (6.21) lies below the expected spot price. However, it is possible that 0F_DT > E0(D_T) when the bond provides a hedge against aggregate consumption risk, with β_iD < 0 and β_πD < 0. All these pricing relationships are obtained in a competitive capital market where the no arbitrage condition holds.

Box 6.3 Prices of commodity futures: a numerical example

Long-grain brown rice (R) is a storable commodity that is harvested twice a year on Equatorial Island – once in March and then again in September. It is stored by traders at each harvest and then released to the market before the next harvest. Traders incur wastage and other storage costs with a present value of 0Q_RT = $0.10 per kilo of rice, while consumption demand for rice and the annual risk-free interest rate (of 6 per cent) are constant over time. When the current price of long-grain brown rice is p_R0 = $1.20 per kilo, then, in the absence of a convenience yield, the price of a futures contract that promises to deliver 1 kg in 3 months' time (with T = 0.25) is

0F_RT = (p_R0 + 0Q_RT)(1 + i)^T = ($1.20 + $0.10) × 1.015 ≈ $1.32.

If storage provides retail outlets with a convenience yield of 0Y_RT = $0.05 per kilo of rice in present value terms, the futures price falls to

0F_RT = (p_R0 + 0Q_RT − 0Y_RT)(1 + i)^T = ($1.20 + $0.10 − $0.05) × 1.015 ≈ $1.27.

In the absence of uncertainty, arbitrage equates the expected spot price to the futures price, with 0F_RT = E0(p_RT) ≈ $1.27. If 0F_RT > E0(p_RT) traders can make profits by going short in (selling) rice futures and long in (buying) rice, while the reverse applies when 0F_RT < E0(p_RT).
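Box 6.3's rice prices follow from the cost-of-carry relation (6.13), and the same logic gives the share futures prices in (6.17) and (6.20). A minimal sketch (the function names are ours, and the share figures are purely illustrative):

```python
def commodity_futures_price(spot, storage_pv, convenience_pv, i, tau):
    """Cost-of-carry futures price from (6.13):
    0F_NT = (p_N0 + 0Q_NT - 0Y_NT) * (1 + i)**tau."""
    return (spot + storage_pv - convenience_pv) * (1.0 + i) ** tau

def share_futures_price(S0, i, tau, div_pv=0.0):
    """Share futures price from (6.17) and (6.20):
    0F_ST = [S0 - PV0(dividends)] * (1 + i)**tau."""
    return (S0 - div_pv) * (1.0 + i) ** tau

# Box 6.3: rice at $1.20/kg, storage costs of $0.10, 6% p.a., 3 months
F_plain = commodity_futures_price(1.20, 0.10, 0.00, 0.06, 0.25)  # about 1.32
F_yield = commodity_futures_price(1.20, 0.10, 0.05, 0.06, 0.25)  # about 1.27

# Illustrative share figures: a $12.50 share, 6% yield, settlement in one year;
# dividends paid before settlement lower the futures price, as in (6.20)
F_share = share_futures_price(12.50, 0.06, 1.0, div_pv=0.50)     # 12.72
```

A positive convenience yield lowers the commodity futures price, just as pre-settlement dividends lower the share futures price.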
When the futures price is higher than the expected spot price of the underlying asset at settlement there are expected profits from going short in the futures contract and long in the asset, while the reverse applies when the contract price is lower. By taking equal and opposite positions in the share and the futures contract, the portfolio is risk-free.

6.2.2 Empirical evidence on the relationship between futures and expected spot prices

Houthakker (1968), Cootner (1960) and Bodie and Rosansky (1980) find evidence to support normal backwardation, while Telser (1981), Gray (1961), Rockwell (1967) and Dusak (1973) find that futures prices are unbiased predictors of spot prices without any risk premium. Despite their different findings on normal backwardation, Bodie and Rosansky (1980) and Dusak (1973) find the CAPM does poorly at explaining commodity returns because commodity prices are negatively correlated with inflation while stock returns are positively correlated with it. Fama and French (1987) find marginal evidence of a risk premium in futures prices when commodity contracts are bundled into portfolios, as well as a convenience yield in the prices of some commodity futures, which both appear to vary over time. In that case the risk premium should be measured using the ICAPM or APT, both of which allow aggregate consumption risk to change over time. Fama and French also find evidence that futures prices are good predictors of expected spot prices when commodities are stored at relatively low cost. Any demand and supply shocks are transferred into prices across time periods in these circumstances. Roll (1984) examined frozen orange juice futures where most of the variation in price is explained by changes in weather, and found futures prices predicted the weather better than did the US National Weather Service. In summary, there is mixed evidence on normal backwardation in futures prices, and when there is a risk premium it appears to vary over time.
Also, futures prices are good predictors of expected spot prices when storage costs are low.

Problems

1 Options contracts are actively traded derivative securities. Examine the factors that determine the value of a European put option written on an individual share at time (t) prior to its expiration date (T). Compare its value at t < T to its value at T for each possible share price. Consider how the value of the option at t < T is affected by increases in the variance in the share price, the expiration date, the interest rate and the exercise price. Identify reasons why investors would purchase put options on shares.

2 European call options trade on shares in Linklock Roofing Pty Ltd. These shares have a current price of $1.05 and pay no dividends over the life of the option.
   i Calculate the current value of a call option on a Linklock share when there is a standard deviation of 30 per cent in the share price on the expiration date in 6 months' time. The exercise price at that time is $1.00 and the annual risk-free interest rate is 5 per cent.
   ii Identify the number of call options that must be combined with each Linklock share in a risk-free hedge portfolio.
   iii Recalculate the option value in part (i) above when:
      a the maturity date is increased to 1 year;
      b the standard deviation in the share price at maturity rises to 45 per cent;
      c the interest rate rises to 8 per cent;
      d the current share price falls to $1.00;
      e the current share price rises to $1.10.
   Explain the reasons for the changes in the option value in each of these cases.

3 Compute the current value of a European put option on a Fleetline share that pays no dividends over the life of the option contract when the vector of Arrow prices for the five possible states of nature is ϕ = {0.18, 0.08, 0.35, 0.10, 0.25}.
The option has an exercise price of $2.50 at the expiration date, when the state-contingent share prices are:

State   Share price at the expiration date ($)
1       2.8
2       2.5
3       3.4
4       1.2
5       2.3

Corporate finance

A significant proportion of capital investment is financed through security sales. While consumers borrow funds to purchase homes, cars and other capital assets, most private investment is undertaken by corporate firms who sell a range of securities that are classified in general terms as debt and equity instruments. Many of these securities are purchased by large institutional investors, such as insurance companies and mutual funds, who convert them into derivative securities. As specialist finance institutions they facilitate resource flows at lower cost, which has the potential to simultaneously raise the expected returns received by consumers at each level of risk and to reduce the cost of capital for firms financing risky investments. Consumers bundle securities together into portfolios to determine their future consumption risk, while institutional investors create derivative securities to satisfy consumer risk preferences and to earn profits from private information about the net cash flows of firms which are ultimately paid as security returns. By exploiting profitable opportunities they provide firm managers with a greater incentive to operate in the interest of their shareholders and bondholders, but these ideals may be compromised when there are trading costs and asymmetric information.

Before analysing the financial policy choices of firms we summarize the different ways they can raise funds for investment in Section 7.1. Many of the primary assets they sell are used by financial institutions to create a vast array of derivative securities that perform a number of important wealth-creating roles, including the provision of risk-spreading services and transfers of information through arbitrage activity.
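The state-price method needed for Problem 3 above can be sketched numerically: the put's current value is the sum over states of the Arrow price times the exercise payoff max(K − Ss, 0). A minimal Python illustration (the variable names are ours, not the text's):

```python
# State-price (Arrow-Debreu) valuation of a European put,
# using the data from Problem 3 above.
phi = [0.18, 0.08, 0.35, 0.10, 0.25]   # Arrow prices for the five states
S = [2.8, 2.5, 3.4, 1.2, 2.3]          # state-contingent share prices ($)
K = 2.50                               # exercise price ($)

# The put pays max(K - S_s, 0) in each state; its current value is the
# Arrow-price-weighted sum of those payoffs.
put_value = sum(p * max(K - s, 0.0) for p, s in zip(phi, S))
print(round(put_value, 2))  # only states 4 and 5 are in the money
```

Only the two states with share prices below $2.50 contribute, so the put is worth 0.10 × 1.3 + 0.25 × 0.2 = $0.18.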
The range of financial decisions made by firms can be separated into the capital structure choice, which determines the debt–equity mix for a given level of investment, and dividend policy, which determines how income is distributed to investors as dividends, interest or capital gains. We examine capital structure choices in Section 7.2 and dividend policy in Section 7.3. In both sections the analysis starts in a classical finance model where investors with common information trade in frictionless competitive markets. In this setting the Modigliani–Miller (MM) financial policy irrelevance theorems hold, so that real equilibrium outcomes in the economy are independent of the types of financial securities used to fund investment and of the way securities distribute their income. It provides a simple framework that can be extended to a more realistic setting in stages to identify the separate factors that determine the optimal financial policy choices made by consumers and firms. These factors are difficult to isolate in a general model where taxes, trading costs and asymmetric information are included from the outset. By introducing them one at a time to the classical finance model we obtain a much clearer understanding of the likely real effects of different financial policy choices.

7.1 How firms finance investment

Most private investment is undertaken by corporate firms who acquire separate legal identity under corporate law. They are created to exploit, among other things, any economies of scale from large production runs. As institutions they have no initial wealth of their own, so they sell financial securities to finance their investment.
Firms have three main sources of funds: they can sell new shares, including ordinary (or common) shares, preference shares, publicly listed shares and proprietary shares; retain earnings on existing shares; and sell debt, including short- and long-term, secured and unsecured, debt with fixed and variable interest rates, accounts payable and bank overdrafts. In a certainty setting without taxes and transactions costs these sources of finance are perfect substitutes and will therefore pay the same rate of return in a competitive capital market. However, they are not in general perfect substitutes in the presence of risk, taxes and transactions costs. Most new share issues are publicly listed common shares with limited liability that trade on stock exchanges. When companies list their shares on a stock exchange they must fulfil a number of important legal obligations. In particular, they must publish information at prescribed times each year and issue a prospectus with new share issues that provides important information to investors about the management of the finance and production activities of firms. Limited liability shares restrict the legal claims that can be made against the wealth of shareholders to the value of their invested capital. This is important because it limits the risk firm managers can impose on shareholders when they have less information. But limited liability shares force bondholders to bear risk when losses exceed the capital of shareholders, which is why there are default provisions in corporate law that allow bondholders to file to have firms declared bankrupt when they cannot make their interest payments. Once bankruptcy claims are granted administrators are appointed to restrict the actions of managers. These important institutional features distinguish debt from equity, particularly in the presence of uncertainty and asymmetric information.
Even though bondholders have prior claims to the net cash flows of firms, they face default risk when losses exceed the invested capital of shareholders, while shareholders face risk, which is bounded by limited liability, because they have a residual claim on the net cash flows. But most shareholders have voting rights that allow them to influence the investment choices made by firm managers. Indeed, majority shareholders can take firms over by changing managers, merging them with other firms, or liquidating them. Another important difference between debt and equity arises from the different taxes on their returns. For example, share income is taxed twice, while interest payments on debt are subject only to personal tax under a classical corporate tax system. Moreover, there are higher personal taxes on cash income paid as dividends and interest than there are on capital gains in most countries. We examine the effects of these important institutional features on the financial policy choices of firms in the following sections.

7.2 Capital structure choice

As owners, shareholders have the ability to affect the way firms operate, but without providing all the capital, as debt allows them to leverage their control over firms. The factors that impact on this leverage policy can be isolated by first establishing conditions under which the Modigliani–Miller leverage irrelevance theorem holds. This identifies important equilibrium forces at work in a frictionless competitive capital market where consumers have common information. In particular, it emphasizes the role of arbitrage that equates the expected returns to securities in the same risk class. In this setting leverage policy is irrelevant to the market value of firms. By extending the analysis to accommodate taxes and asymmetric information, it is possible to identify circumstances where changes in leverage have real effects.
When Modigliani and Miller (1958) proved their irrelevance theorems they did not explicitly identify the need for common information. Indeed, it was implicit in much of the analysis of financial policy at that time. More recently, however, greater emphasis has been placed on the role of asymmetric information. If investors have less information than managers about the net cash flows of firms, their financial structure choices can have real effects by signalling new information, or by changing the incentives facing managers and the decisions they make. In these circumstances leverage policy can change the cost of capital and affect a firm’s market valuation. Modigliani and Miller (1963) extended their earlier analysis by including a classical corporate tax. Since it falls on income paid to shareholders by making interest payments tax-deductible expenses, it drives equity out of the corporate capital market in a classical finance model without leverage-related costs. In a competitive capital market all corporate income is paid to consumers through the lowest tax channel as interest. Thus, no tax revenue is raised by the corporate tax as firms issue only debt in these circumstances. Clearly, other factors must offset this tax advantage of debt to explain the significant amount of equity that trades in most capital markets. Prior to the irrelevance theorems of Modigliani and Miller (1958, 1961) the finance literature examined the role of leverage-related costs, and, in particular, that of default costs, in determining the optimal debt–equity choices of firms. As firms increase leverage there is a greater probability of defaulting on interest payments when shares have limited liability. And this occurs because there is variability in the firms’ net cash flows, which must eventually spill over onto debt at high levels of leverage.
When bondholders know how risky the debt becomes, its price sells at a discount to compensate them for the default risk, where leverage is irrelevant to the cost of capital and the market valuation of the firm. But when bondholders have less information than firm managers about this risk, bond prices may not discount sufficiently to properly compensate them. Most countries write bankruptcy provisions in their corporate laws as a way to protect bondholders in these circumstances. The associated default costs are third party claims on firm net cash flows that reduce the value of the firm to its capital providers. Once marginal expected default costs offset the interest tax deduction on debt, corporate firms also sell equity, where an optimal capital structure trades off marginal leverage-related costs against the interest tax deductions. Clearly, bankruptcy costs rely on asymmetric information, but that was not recognized explicitly until the more recent literature identified other forms of leverage-related costs. Most studies examine the role of agency costs when firm financial policy alters the incentives facing capital providers and firm managers in an asymmetric information setting. For example, there can be principal–agent problems when it is costly for bondholders and shareholders to monitor the actions of firm managers, where higher leverage increases interest payments and reduces the free cash flows that can be used by managers for private gain. Harris and Raviv (1991) provide a comprehensive summary of the agency costs that change with leverage. Lost corporate tax shields are another source of leverage-related costs, but they can arise in a common information setting. In most countries profits and losses are not treated symmetrically by the classical corporate tax, which taxes profits without making tax refunds on losses. Tax losses occur when tax-deductible expenses, including interest and depreciation, exceed the net cash flows.
When firms cannot sell their tax losses to other firms or carry them forward at interest, they lose the real value of their tax deductions. And since tax losses occur when firms default, the expected value of these lost corporate tax shields rises with leverage. Earlier empirical work by Warner (1977) and Altman (1984) showed that the default costs were considerably less than the interest tax deductions on debt. This led people to seek other explanations for the use of equity in the presence of a classical corporate tax. Miller (1977) likened them to the rabbit in a horse and rabbit stew, and responded by including personal taxes on security returns. Prior studies focused on factors affecting firms and ignored those affecting investors – in particular, the role of personal taxes. This is probably because they (perhaps implicitly) adopted a partial equilibrium analysis to examine the financial decisions made by firms. Miller recognized the importance of including demand-side factors and exploited two important features of personal tax codes to explain the presence of equity: first, marginal cash tax rates are progressive, where different consumers have different tax rates on security returns; and second, taxes on capital gains are lower than taxes on cash income. Thus, it is possible for high-tax consumers (with cash tax rates above the corporate rate) to prefer equity that pays capital gains, even though they are taxed twice, once at the corporate rate and then again at the personal tax rate. Since low-tax investors must have a tax preference for debt, both securities trade in the Miller equilibrium, where investors form strict tax clienteles. When both securities trade, leverage irrelevance holds for individual firms in this setting. The analysis is general enough to accommodate uncertainty because there is common information and no trading costs in a competitive capital market.
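Miller's clientele argument can be illustrated with a stylized after-tax comparison. A dollar of pre-tax corporate income paid out as interest yields (1 − tp) to an investor with personal cash tax rate tp, while a dollar paid out through equity as capital gains yields (1 − tc)(1 − tg), where tc is the corporate rate and tg the capital gains rate. The tax rates in the sketch below are purely illustrative assumptions, not figures from the text:

```python
def after_tax_per_dollar(tp, tc=0.30, tg=0.0):
    """After-tax income per pre-tax corporate dollar for each channel.

    Interest escapes corporate tax but bears the personal cash rate tp;
    equity income is taxed at the corporate rate tc and again at the
    (lower) capital gains rate tg.  All rates here are illustrative.
    """
    debt = 1.0 - tp
    equity = (1.0 - tc) * (1.0 - tg)
    return debt, equity

# A high-tax investor (tp = 45%) prefers equity paying capital gains ...
debt_hi, equity_hi = after_tax_per_dollar(tp=0.45)
assert equity_hi > debt_hi      # 0.70 > 0.55

# ... while a low-tax investor (tp = 15%) prefers debt,
debt_lo, equity_lo = after_tax_per_dollar(tp=0.15)
assert debt_lo > equity_lo      # 0.85 > 0.70

# so both securities trade, held by strict tax clienteles.
```

At the margin of the Miller equilibrium the two channels pay the same after-tax income, which is what makes leverage irrelevant for any individual firm.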
Subsequent empirical studies by Graham (2000) and Molina (2005) find evidence of larger expected default costs when indirect bankruptcy costs are also taken into account. They use information provided by debt rating agencies to obtain estimates of the default probabilities, which they apply to their estimates of the costs of default. In the following subsections we examine the important role of taxes and risk in firm capital structure choices, starting with the results obtained by Modigliani and Miller.

7.2.1 Certainty with no taxes

We begin by proving the Modigliani–Miller leverage policy irrelevance theorem in a certainty setting without taxes. While the outcome is fairly obvious in this setting, the analysis provides an ideal opportunity to establish a simple methodology for analysing more complicated cases in following sections. As a way to identify the factors impacting on equilibrium outcomes we obtain separate relationships between security returns that would make consumers and firms indifferent to debt and equity. (These are the demand and supply conditions, respectively, discussed below.) Much of the early analysis in corporate finance focused on factors affecting firms without explicitly recognizing the important role of factors affecting consumers. And this is especially important when taxes are included in the analysis. The approach we use is formalized by the demand and supply conditions, as well as the equilibrium condition, which identifies the relationship between the market returns to debt and equity in a competitive capital market equilibrium.
The two-period certainty model of an asset economy in Section 2.2.5 is extended here by allowing consumers and firms to trade two risk-free securities, debt (B) and equity (E), where the current market value of the portfolio held by each consumer (h) is paB aBh + paE aEh,

Box 7.1 Debt–equity ratios by sector
As a way to illustrate the financial structure choices of firms we report the debt–equity (B/E) ratios for publicly listed companies on the Australian Securities Exchange in 15 sectors of the economy. There is no debt issued in the energy sector, while transportation has the highest ratio at 60.7 per cent.

Sector                               B/E (%)
Capital goods                        34.1
Commercial services and supplies     28.4
Consumer durables and apparel        43.6
Consumer services                    32.5
Energy                               0
Food and staples retailing           60
Food, beverages and tobacco          49.4
Health care & equipment services     6.9
Materials                            0
Media                                22.7
Retailing                            35.6
Software and services                1.4
Technology hardware and equipment    0
Telecommunications services          5.8
Transportation                       60.7
Market                               37.2

Source: Based on financial data reported by Aspect Financial Analysis on 17 May 2007. This database is produced by Aspect Huntley Pty Ltd.

with payouts of aBh paB (1 + iB) + aEh paE (1 + iE) in the second period. Thus, their optimal security trades satisfy

ϕ1h (1 + ik) ≤ 1, for k = B, E,    (7.1)

where ϕ1h is the primitive (Arrow) price of a security that pays one dollar in the second period; it is the discount factor used by the consumer to compute the current value of income in the second period.

Proposition (Demand condition). Consumers are indifferent to debt and equity in a certainty setting without taxes, when the securities pay the same return, with:

iB = iE.    (7.2)

Proof. In a competitive equilibrium without taxes, transactions costs or borrowing constraints, consumers trade both securities until, using (7.1), we have ϕ1h (1 + iB) = ϕ1h (1 + iE).
Whenever iB ≠ iE they will hold the security paying the highest return, preferring debt if iB > iE and equity if iB < iE. This arbitrage activity, which Modigliani and Miller refer to as homemade leverage, leads to (7.2).

Box 7.2 A geometric analysis of the demand condition
Useful insights can be obtained from a geometric analysis of the demand condition for an individual consumer whose optimal debt–equity choice is illustrated in the diagram below, where the budget line (Mh) maps the largest combinations of debt and equity that can be traded from income transferred between the two periods. A saver chooses current consumption and then purchases a portfolio of securities from remaining current income, while a borrower sells securities to transfer future income to the current period. The slope of the budget line is determined by the relative cost of debt (−paB/paE), and is constant for a price-taker. The indifference schedules (vh), which are illustrated as dashed lines, isolate the bundles of debt and equity that provide the consumer with the same utility, and are defined for optimally chosen consumption expenditure in each period. Thus, we are looking at the security trades with all other things held constant. Since consumers derive utility from consuming payouts to securities, the slopes of the indifference schedules are determined by the relative payout to debt (−(1 + iB)paB/(1 + iE)paE), and are linear because the two securities are equally risky and the consumer is a price-taker. When the demand condition (DC) holds, the indifference schedules have the same slope as the budget line Mh. Since iB = iE the consumer is willing to hold any of the bundles along indifference schedule vDCh.

[Figure: the consumer's security holdings (aBh, aEh), with budget line Mh of slope −paB/paE and indifference schedule vDCh of slope −paB(1 + iB)/paE(1 + iE).]

Whenever the indifference schedules and budget line have different slopes the consumer has unbounded demands for the security paying the highest return.
For example, when the indifference schedule is flatter than the budget line (with iB < iE) the consumer has an infinite demand for equity funded by selling debt. The reverse applies when the indifference schedule is steeper. Thus, the consumer is willing to buy or sell both securities when the indifference schedules (vDCh) have the same slope as the budget constraint, as confirmation of the demand condition (DC) in (7.2).

In a certainty setting where the Fisher separation theorem holds, firms maximize profit by choosing a portfolio of securities to minimize their cost of capital and a level of investment (Z0) to maximize their current market value, with

V0 = Y1(Z0) / [1 + (1 − b)iE + biB],    (7.3)

where Y1(Z0) is the market value of the net cash flows, b the portion of capital (V0) financed with debt, and 1 − b the remaining portion financed with equity. When debt and equity are optimally traded by each firm (j), they satisfy

ϕ1j (1 + ik) ≤ 1, for k = B, E,    (7.4)

where ϕ1j is the price of a primitive (Arrow) security that pays one dollar in the second period; it is the discount factor used by firms to value their future net cash flows.

Proposition (Supply condition). Firms are indifferent to debt and equity in a certainty setting without taxes, when each security has the same marginal cost, with:

iB = iE.    (7.5)

Proof. In a competitive equilibrium without taxes, transactions costs or borrowing constraints, firms trade both securities until, using (7.4), we have ϕ1j (1 + iB) = ϕ1j (1 + iE). Whenever iB > iE firms can reduce the cost of capital and increase their value by selling only equity. Indeed, they can make arbitrage profits by selling more equity than they need to finance their production investment and using the excess to purchase debt, while the reverse applies when iB < iE. This arbitrage activity in a frictionless competitive capital market leads to (7.5).

Proposition (Equilibrium condition).
In a frictionless competitive equilibrium consumers and firms are indifferent to debt and equity, with

iB = iE,    (7.6)

and they have the same discount factors, with ϕ1h = ϕ1j = ϕ1 for all h, j.

Proof. In a competitive equilibrium without taxes, transactions costs or borrowing constraints the two securities must pay the same rates of return to eliminate arbitrage profits and bound the equilibrium demands and supplies. Consumers purchase only debt and firms supply only equity whenever iB > iE, while the reverse applies when iB < iE. Once the equilibrium condition in (7.6) holds firms cannot make profits, and consumers cannot increase their utility, by changing their debt–equity choice. Since (7.2) and (7.5) both hold, we have from (7.1) and (7.4) that ϕ1h (1 + iB) = ϕ1j (1 + iE) for all h, j. Thus, consumers and firms use the same discount factors to value capital assets.

The Modigliani–Miller (MM) leverage policy irrelevance theorem is a direct implication of (7.6). Since debt and equity are perfect substitutes for consumers and firms, the aggregate debt–equity mix is irrelevant. At the firm level, changes in leverage have no impact on their market valuation, where from (7.3) we have

dV0/db |dZ0 = 0 = (iE − iB)V0 / [1 + biB + (1 − b)iE] = 0.

Box 7.3 A geometric analysis of the supply condition
The supply condition for each firm j is illustrated in the diagram below, where the asset production frontier RjRj isolates the bundles of debt and equity the firm can supply, while the iso-profit lines (ηj) are the bundles of debt and equity that provide the same profit. Asset supplies are ultimately constrained by the discounted value of the firm’s net cash flows (Y1j), where the most debt it can issue is âBj = Y1j / [paB (1 + iB)], and the most equity âEj = Y1j / [paE (1 + iE)]. The slope of the asset production frontier is the marginal cost of raising leverage, with −paB (1 + iB) / paE (1 + iE), and it is constant for price-taking firms.
The iso-profit schedules are also linear for the same reason, and their slope measures the net marginal revenue from raising leverage (−paB/paE) for a given level of investment. If they are steeper than the asset production frontier (with iB < iE) firms supply only debt, while the reverse applies when the iso-profit lines are flatter (with iB > iE). Indeed, firms have unbounded demands for the security paying the highest return because they can fund these purchases by selling the security with the lowest return and make profits from the arbitrage. These profits are eliminated when the securities pay the same rate of return (with iB = iE) because the iso-profit lines (ηSCj) satisfy the supply condition (SC) and have the same slope as the asset production frontier.

[Figure: the firm's security supplies (aBj, aEj), bounded by âBj and âEj on the asset production frontier RjRj of slope −paB(1 + iB)/paE(1 + iE), with iso-profit line ηSCj of slope −paB/paE.]

At the aggregate level the debt–equity mix is irrelevant to consumers because the securities are perfect substitutes. Indeed, there are no risk benefits from bundling them together as both produce the same future consumption flows. As noted earlier, MM leverage irrelevance is straightforward in a certainty setting without taxes. There is really no need to have more than one security in this setting because there is no risk to diversify or taxes and other leverage-related costs to minimize. But it is useful to demonstrate the leverage irrelevance theorem in these circumstances because it emphasizes the way arbitrage activity drives the equilibrium relationship between security returns with the same risk. Arbitrage is crucial in all the MM financial policy irrelevance theorems. Indeed, it is important in all the equilibrium asset pricing models we examined earlier in Chapter 4. In following subsections the leverage irrelevance theorem can hold for individual firms but not in aggregate when risk and taxes are introduced.
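The irrelevance result can also be checked numerically from the valuation in (7.3): once iB = iE, the value V0 = Y1(Z0)/[1 + biB + (1 − b)iE] is the same for every debt share b. A small sketch (the numbers are illustrative assumptions, not from the text):

```python
# Firm value under certainty, eq. (7.3): V0 = Y1 / (1 + b*iB + (1 - b)*iE).
# With the equilibrium condition iB = iE = i, V0 is independent of b.
Y1 = 110.0          # market value of next period's net cash flows (assumed)
i = 0.10            # common return on debt and equity in equilibrium

def firm_value(b, iB, iE, Y1=Y1):
    """Current market value for a debt share b of invested capital."""
    return Y1 / (1.0 + b * iB + (1.0 - b) * iE)

values = [firm_value(b, iB=i, iE=i) for b in (0.0, 0.25, 0.5, 0.75, 1.0)]
# Every capital structure gives the same value Y1/(1 + i) = 100.
assert all(abs(v - 100.0) < 1e-9 for v in values)
```

The derivative dV0/db is proportional to iE − iB, so it vanishes exactly when the equilibrium condition (7.6) holds.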
Box 7.4 Modigliani–Miller leverage irrelevance: a geometric analysis
It is possible to demonstrate MM leverage irrelevance by using the diagrams in Boxes 7.2 and 7.3 above. Since all firms face the same security prices and returns, we obtain the aggregate production frontier by summing the discounted value of their net cash flows. It is the line labelled RR in the diagram below, where security trades are aggregated over firms, with Σj akj = ak for k = B, E. All the bundles of securities along this frontier exhaust the net cash flows of firms. As consumers also face the same security prices and returns they have indifference schedules with the same slope, and we obtain aggregate indifference schedules (v) by summing the utilities they derive from the aggregate debt–equity bundles supplied by firms. Since firms pay out all their net cash flows the aggregate production frontier is also the aggregate budget constraint for consumers.

[Figure: aggregate securities (aB, aE), with the aggregate production frontier RR and the aggregate indifference schedule vDC lying along it, both with slope −paB(1 + iB)/paE(1 + iE).]

In a competitive equilibrium when the no-arbitrage condition in (7.6) holds the aggregate indifference schedule (vDC) lies along the aggregate production frontier. As a consequence, consumers get the same utility from every bundle of debt and equity along RR, which means the aggregate debt–equity ratio is irrelevant to them.

7.2.2 Uncertainty with common information and no taxes

Modigliani and Miller proved their irrelevance theorems in an uncertainty setting where traders have common information. Even though they placed little emphasis on the role of common information in their analysis, its importance has since been recognized. We initially demonstrate leverage irrelevance in an economy without taxes by using the CAPM pricing equation, and later generalize it in the Arrow–Debreu model outlined in Section 3.1.3.
Since investors have homogeneous expectations and trade in a frictionless competitive capital market in the CAPM, leverage simply redistributes given project risk between shareholders and bondholders without affecting the value of the firm. Investors know what the firm’s project risk is, and how it is distributed by leverage policy between debt and equity, where changes in their expected security returns must reflect changes in risk bearing without altering the total risk premium firms pay. This also applies in the more general Arrow–Debreu state-preference model, as the key requirements for leverage policy irrelevance are competition, no trading costs and common information. In this setting all profits are eliminated from expected security returns by arbitrage, where they can only differ by the amount of project risk in them. Thus, changes in capital structure have no real effects on consumers because they do not alter their consumption opportunities. The analysis in the Arrow–Debreu economy is more general than the CAPM because no restrictions are placed on the distributions of security returns, or on the preferences and wealth of consumers. Instead, it identifies circumstances where the risk-spreading opportunities available to consumers are unaffected by the leverage policy choices of firms.

Leverage irrelevance using the CAPM

The security market line in the CAPM is an equilibrium asset pricing equation that combines the demand and supply conditions. Thus, it can be used to compute the market value of firms in (7.3) when they have random net cash flows, with

V = Y(Z)/c,

where c = 1 + biB + (1 − b)iE is the expected user cost of capital. Now the returns to debt and equity can be different due to the non-diversifiable (project) risk in the net cash flows.
To provide a benchmark for determining how asset values are affected by changes in leverage, consider the unlevered firm (U), which has an expected user cost of capital of cU = 1 + iEU, where the expected return on its equity, using the CAPM, is

iEU = i + (iM − i) βEU,    (7.8)

with βEU = Cov(ĩEU, ĩM)/Var(ĩM) = σEU,M / σM² being the market risk in each dollar of equity capital. Since shareholders bear all the project risk (βY) in the firm, we can decompose the beta coefficient for equity as

βEU = βY / VU,    (7.9)

where βY = Cov(Ỹ, ĩM)/Var(ĩM) and VU is the current market value of the firm. Thus, the beta coefficient for unlevered equity is the market risk in the net cash flows (which is referred to here as project risk) per dollar of capital invested in the firm. Substituting the beta coefficient in (7.9) into (7.8), and applying the expectations operator to the market value of the unlevered firm, we have

VU = [Y − (iM − i) βY] / (1 + i).    (7.10)

This is the certainty-equivalent value of the firm, where the risk-adjusted expected net cash flows Y − (iM − i)βY are discounted by the risk-free user cost of capital. The risk premium (iM − i)βY is compensation the firm must pay to risk-averse shareholders for bearing its project risk. Now suppose the firm finances investment by selling risk-free debt and equity, where the expected user cost of capital for the levered firm (L) becomes

cL = 1 + bi + (1 − b) iEL.

Box 7.5 The market value of an all-equity firm: a numerical example
Duraware Pty Ltd is a publicly listed company that produces sports clothing. It has no debt and the current market value of its shares is $1.64 million. In 12 months’ time Duraware is expected to have net cash flows of $1.86 million, so its expected user cost of capital solves VU = Y/cU, as cU = 1 + iE ≈ 1.13.
If the net cash flows have a covariance with the return on the market portfolio of 12 per cent, when the variance in the return on the market portfolio is 9 per cent, the firm’s project risk is βY = Cov(Ỹ, ĩM)/Var(ĩM) = 0.12/0.09 ≈ 1.33. Using the CAPM with a risk-free interest rate of 5 per cent, the risk premium in the expected return on equity (of 8 per cent) can be decomposed as

iEU = 0.05 + (0.15 − 0.05) βEU ≈ 0.13,

where the beta coefficient is βEU = βY/VU ≈ 0.81. This allows us to write the current value of the firm as

VU = E = Y / [1 + i + (iM − i) βY/VU].

Rearranging terms, we have

VU = E = [Y − (iM − i) βY] / (1 + i) = ($1.86m − $0.133m) / 1.05 ≈ $1.64m,

with (iM − i)βY = 0.10 × $1.33m = $0.133m being the total risk premium paid to shareholders. Thus, the firm has risk-adjusted net cash flows of $1.86m − $0.133m = $1.73m.

When bondholders bear no project risk they are paid a risk-free return that cannot be affected by price-taking firms. However, the expected return on equity will change with leverage now because shareholders are bearing all the project risk. By using the CAPM we can write the expected return on the firm's levered equity as

iEL = i + (iM − i) βEL,

where the beta coefficient for each dollar of equity is

βEL = βY / [(1 − b) VL],    (7.11)

with VL being the current market value of the levered firm, and (1 − b)VL = EL the current market value of its levered equity. When (7.11) is substituted into the expected user cost of capital we find the value of the levered firm is the same as the value of the unlevered firm in (7.10), with VL = VU. Thus, the market value of the firm is independent of leverage, even though debt pays a lower expected return, with iEL > i.
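The mechanics in (7.8)–(7.11) can be replicated numerically with the Duraware figures from Box 7.5. The sketch below is ours, not the text's: it computes the unlevered value from (7.10) and then checks that the expected user cost of capital is unchanged when risk-free debt replaces equity.

```python
# Duraware example: Y = expected net cash flows ($m), project risk betaY,
# risk-free rate i and expected market return iM, as in Box 7.5.
Y, betaY, i, iM = 1.86, 1.33, 0.05, 0.15

# Eq. (7.10): certainty-equivalent value of the unlevered firm (~$1.645m).
V = (Y - (iM - i) * betaY) / (1 + i)

def user_cost(b):
    """Expected user cost 1 + b*i + (1-b)*iEL with risk-free debt, where
    the levered equity beta from (7.11) is betaY / ((1-b) * V)."""
    beta_EL = betaY / ((1 - b) * V)
    i_EL = i + (iM - i) * beta_EL        # CAPM return on levered equity
    return 1 + b * i + (1 - b) * i_EL

# Leverage redistributes project risk but leaves the user cost unchanged,
# so every b gives the same cost as the unlevered firm (b = 0).
assert all(abs(user_cost(b) - user_cost(0.0)) < 1e-12 for b in (0.25, 0.5, 0.75))
```

Expanding user_cost(b) shows why: the b·i and (1 − b)·i terms recombine to i, and the risk premium collapses to (iM − i)βY/V regardless of b, which is the algebra behind (7.12).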
To see why, consider how the expected user cost of capital changes when leverage is raised marginally, where

dcL/db = i − iEL + (1 − b) diEL/db = 0.    (7.12)

The lower return on debt reduces the cost of capital by i − iEL, and it is offset by the increase in the expected return on each dollar of remaining equity due to the increase in its beta coefficient in (7.11) as (1 − b)VL falls. As confirmation of this result, Modigliani and Miller derive a linear relationship between the return on levered and unlevered equity by noting that

VU = Y / (1 + iEU) = VL = Y / [1 + bi + (1 − b) iEL].    (7.13)

Rearranging terms, we have

iEL = iEU + (iEU − i) b/(1 − b),    (7.14)

where the change in the return on levered equity becomes

diEL/db = (iEU − i)/(1 − b)² > 0.

Substituting (7.14) into (7.12), and using (7.13), we find that dcL/db = 0. This derivation of MM leverage irrelevance makes the implicit assumption that there is no restriction on the amount of project risk that shareholders can be asked to bear when debt is risk-free. Thus, at high levels of leverage firms may need to collect additional funds from shareholders to pay a risk-free return to bondholders in bad states with low net cash flows. In practice, however, most shares have limited liability, which restricts shareholder losses to the value of

Box 7.6 Leverage policy with risk-free debt: a numerical example
Suppose the unlevered company Duraware in Box 7.5 issues risk-free debt and retires equity without changing total investment. When the debt constitutes 75 per cent of the firm’s current market value (VL = $1.64m), more risk is transferred to each dollar of equity, with

βEL = βY / [(1 − b)VL] = 1.33 / (0.25 × 1.64) ≈ 3.24,

where the risk premium in the expected return to equity must rise by 400 per cent to

(iM − i) βEL = 0.10 × 3.24 ≈ 0.32.
But the higher expected return to equity of 37 per cent does not raise the expected user cost of capital due to the lower cost of the risk-free debt, with

cL = 1 + bi + (1 − b)iEL ≈ 1 + 0.04 + 0.09 ≈ 1.13.

Thus, the market value of the firm is unchanged at $1.64m.

their invested capital. Whenever the losses are greater than this some of the project risk is transferred to bondholders. In practice, a number of institutional arrangements have been adopted to protect bondholders from bearing more risk than they know about, including bankruptcy provisions, reporting requirements, and inviting large bondholders onto company boards. But in a common information setting where shareholders and bondholders know how much project risk there is and who bears it, changes in leverage simply redistribute it between them without altering the aggregate risk premium firms must pay to the capital market. This is confirmed by noting that the expected user cost of capital with risky debt and equity becomes

c = 1 + biB + (1 − b)iE,

where the respective beta coefficients are

βB = μβY/(bVL) and βE = (1 − μ)βY/((1 − b)VL),   (7.15)

with μ being the share of project risk borne by bondholders, and 1 − μ the share of project risk borne by shareholders. Default occurs when firms cannot meet their interest commitments, where the expected return on debt must rise to compensate bondholders for bearing project risk. But this shifts project risk from shareholders without changing the total risk premium paid by firms, with

dcL/db = iB − iE + b(diB/db) + (1 − b)(diE/db) = 0.

By using the CAPM to solve the expected returns to debt and equity with the beta coefficients in (7.15) we obtain the value of the firm in (7.10), which is independent of b. Thus, leverage irrelevance holds with risky debt and equity in a common information setting.
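The invariance of the expected user cost of capital to leverage can be checked numerically using the beta formulas (7.11) and (7.15). The sketch below reuses the Duraware figures from Boxes 7.6 and 7.7; the helper function is illustrative:

```python
# User cost of capital for Duraware under three leverage policies.
# b = debt share of firm value, mu = share of project risk borne by debt.
# V_L is recomputed exactly from the Box 7.5 figures (approx. 1.644).

i, prem = 0.05, 0.10
beta_Y = 4 / 3
V_L = (1.86 - prem * beta_Y) / (1 + i)

def user_cost(b, mu):
    beta_B = mu * beta_Y / (b * V_L) if b > 0 else 0.0
    beta_E = (1 - mu) * beta_Y / ((1 - b) * V_L)
    i_B = i + prem * beta_B           # CAPM return on debt
    i_E = i + prem * beta_E           # CAPM return on equity
    return 1 + b * i_B + (1 - b) * i_E

c0 = user_cost(b=0.0, mu=0.0)      # unlevered
c1 = user_cost(b=0.75, mu=0.0)     # risk-free debt (Box 7.6)
c2 = user_cost(b=0.75, mu=0.25)    # risky debt (Box 7.7)
print(round(c0, 3), round(c1, 3), round(c2, 3))  # 1.131 1.131 1.131
```

All three policies pay the same aggregate risk premium, so the user cost never moves.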
Now a marginal increase in leverage can raise or lower the expected return on equity because there are two competing effects on its beta coefficient when debt is risky – the value of equity capital (1 − b)VL and the amount of project risk borne by shareholders (1 − µ)βY in (7.15) both fall. If debt is less risky at the margin the equity beta coefficient rises without changing the expected user cost of capital as the higher expected return on equity offsets the cost saving from issuing less costly debt (with iE > iB ) . But when the extra debt is more risky at the margin, higher leverage reduces the expected return on equity by lowering its beta coefficient, and this offsets the cost premium on the extra debt issued ( iE < iB ). While firms normally issue debt that is less risky than their equity, that is not always the case. Indeed, during the 1980s a number of firms funded large takeover bids using junk bonds which were riskier than their equity. One advantage of using the CAPM to demonstrate leverage irrelevance is that it allows us to compute expected returns to debt and equity, and to demonstrate why the user cost of capital is unaffected by changes in capital structure. In this setting all traders measure and price market risk in the same way and they know who bears the project risk. All that leverage policy does is redistribute unchanged project risk between shareholders and bondholders. But the requirements for the CAPM to hold are more restrictive than the requirements for leverage irrelevance. It only requires common information in a frictionless competitive capital market. When Modigliani and Miller proved this theorem they emphasized the role of homemade leverage as a way for consumers to undo changes in capital structure by firms. 
This rebundling activity by consumers can be demonstrated much more clearly using the

Box 7.7 Leverage policy with risky debt: a numerical example
If we let the debt issued by Duraware in Box 7.6 bear 25 per cent of the project risk its beta coefficient becomes

βB = μβY/(bVL) = (0.25 × 1.33)/(0.75 × 1.64) ≈ 0.27.

This introduces a risk premium of (iM − i)βB = 0.10 × 0.27 ≈ 0.03 to the expected return on debt, which rises to 8 per cent. Since shareholders bear less project risk the beta coefficient on equity falls from 3.24 to

βEL = (1 − μ)βY/((1 − b)VL) = (0.75 × 1.33)/(0.25 × 1.64) ≈ 2.43,

where the lower risk premium on equity of (iM − i)βEL = 0.10 × 2.43 ≈ 0.24 reduces its expected return by 8 percentage points to 29 per cent. Since Duraware still pays the same total risk premium per dollar of firm value, with bβB + (1 − b)βEL ≈ 0.81 = βY/VL, it has the same expected user cost of capital, with

cL = 1 + biB + (1 − b)iEL ≈ 1 + 0.06 + 0.07 ≈ 1.13.

As a consequence, Duraware’s market value is unchanged at $1.64m.

state-preference model of Arrow and Debreu in Section 3.4, which is more general than the CAPM.

Leverage irrelevance in the Arrow–Debreu economy
In a frictionless competitive capital market each traded security can be priced using the Arrow–Debreu model in (3.11) as

∑s φs pak(1 + iks) = pak, k ∈ K.   (7.16)

Since all consumers and firms face the same payout to each security k in each state s, with pak(1 + iks) = Rks, they use the same discount factors φs = φsh = φsj for all h, j to compute the security prices. We follow conventional analysis and divide securities into one of two types – debt (B) and equity (E) – which traders can use to create a full set (K) of primitive (Arrow) securities. As noted above, there is no obvious way to distinguish between debt and equity in this common information setting without taxes. The standard approach is to give debt a prior claim on the net cash flows and equity the residual claim.
But once investors know how risky the net cash flows are and how the risk is divided between debt and equity, a prior claim provides no real advantage to bondholders as they are compensated with the appropriate risk premium. Indeed, with limited liability, debt can be more risky than equity at high enough levels of leverage. In a complete capital market where traders can create a full set of primitive securities arbitrage equates the rates of return on payouts in each state, with iBs = iEs for all s. When firms increase leverage, with investment held constant, they are transferring a given set of risky net cash flows to investors with debt instead of equity, where by the law of one price we have from (7.16) that

∑s φs(1 + iBs) = ∑s φs(1 + iEs).

It is important to emphasize that this relationship is derived for substitutions between equally risky debt and equity instruments. Whenever firms increase leverage, holding investment constant, the state-contingent payouts they make on the extra debt must come from payouts formerly made to equity. In other words, a change in leverage represents a constant risk rebundling of debt and equity securities. An example of this in a three-state world is

aF = (1, 1, 1)′ = aE1 + aE2 = (0, 1, 1)′ + (1, 0, 0)′,

where the extra unit of risk-free debt (F) replaces two risky shares (E1 and E2). Clearly, the equilibrium condition iBs = iEs for all s does not require every security to pay out in the same state. Indeed, individual debt and equity securities can pay in different states, but when they do make payouts in the same state they must pay the same rate of return. There is no optimal capital structure for individual firms or the aggregate economy in the Arrow–Debreu economy with common information.
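The three-state rebundling above can be illustrated with the law of one price. The state discount factors below are hypothetical; the point is only that the bond's price must equal the sum of the prices of the two shares that replicate it:

```python
# Law of one price for the rebundling in the text: a risk-free bond paying
# (1, 1, 1) costs the same as the two risky shares (0, 1, 1) and (1, 0, 0)
# that together replicate it. State prices phi_s are hypothetical.

phi = [0.30, 0.35, 0.30]   # hypothetical Arrow-Debreu state discount factors

def price(payout):
    return sum(p * x for p, x in zip(phi, payout))

a_F  = (1, 1, 1)   # one unit of risk-free debt
a_E1 = (0, 1, 1)   # risky share paying in states 2 and 3
a_E2 = (1, 0, 0)   # risky share paying in state 1

print(round(price(a_F), 10), round(price(a_E1) + price(a_E2), 10))  # 0.95 0.95
```

If the two sides differed, traders could buy the cheap bundle and sell the dear one, which is why constant-risk debt–equity substitutions leave firm value unchanged.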
This is confirmed by writing the current market value of the firm as

V0 = Ys/(1 + (1 − b)iEs + biBs).

When the no arbitrage condition holds in a complete capital market with common information, we have iBs = iEs for all s where the value of the firm is independent of b. Thus, MM leverage irrelevance holds in the Arrow–Debreu economy. There are a number of important ways to extend the models we have used to demonstrate leverage irrelevance. Taxes and leverage-related costs are introduced next.

7.2.3 Corporate and personal taxes, leverage-related costs and the Miller equilibrium
One important difference between debt and equity results from the way they are taxed. Under a classical corporate tax system equity income of corporate firms is taxed twice, once

Box 7.8 Leverage irrelevance in the Arrow–Debreu economy: a geometric analysis
Equilibrium outcomes in the Arrow–Debreu economy with complete capital markets are equivalent to a certainty analysis. All agents have certain real income and can choose their consumption bundles in each state of the world, and the only uncertainty is over the state that actually eventuates. For that reason, we can use the same certainty analysis as in Box 7.4 to illustrate the capital market equilibrium under uncertainty. In the diagram below the aggregate production possibility frontier is linear because all the debt–equity bundles along RR make payouts from the same aggregate state-contingent net cash flows of firms. When the capital market is complete consumers can trade in every state, and constant risk debt–equity substitutions along RR are irrelevant to them when the demand condition holds, with iBs = iEs ∀s. Since these debt–equity bundles are perfect substitutes, both in terms of risk and state-contingent returns, consumers have linear indifference schedules (vDC) with the same slope as the aggregate production possibility frontier.
For that reason the aggregate debt–equity ratio is irrelevant to them, and the market valuations of individual firms are unaffected by their debt–equity choices. Thus, MM leverage irrelevance holds for the aggregate capital market and for individual firms. Suppose, for example, that one or more firms raise their leverage and move the aggregate debt–equity mix from point A to point D along RR in the diagram. Then consumers simply adjust their portfolios to preserve their preferred real consumption in each state of nature without any change in their utility.

[Diagram: debt–equity space with the linear frontier RR and indifference schedules vDC (slope set by iBs = iEs ∀s) through bundles such as A.]

inside the firm at the corporate tax rate and then again at the personal rates of shareholders, while interest income is taxed once at the personal level. Modigliani and Miller (1963) extended their financial policy irrelevance theorems by including a classical corporate tax. As noted in the introduction to this chapter, personal taxes were not included because their analysis focused on factors directly impacting on firms. We isolate the role of investor tax preferences on security trades by using a certainty analysis, and then include uncertainty to account for the role of risk preferences. The classical corporate tax discriminates against equity as capital gains are not taxed at the personal level until they are realized by investors. It is not, in general, feasible to tax capital gains as they accrue to investors because they are difficult to calculate. Often there are no markets where changes in the values of their assets can be objectively determined, so they are taxed at realization rather than as they accrue. This gives shareholders an incentive to delay realizing their capital gains to reduce the present value of their tax liabilities. In response, most governments levy a corporate tax on equity income when it accrues (on an annual or semi-annual basis) inside corporate firms.
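The deferral incentive can be made concrete with a present-value sketch. The gain, tax rate and discount rate below are hypothetical:

```python
# Why realization-based taxation favours deferral: the present value of the
# tax on an accrued gain falls the longer realization is delayed.
# All figures are hypothetical.

def pv_tax_on_gain(gain, t, i, years_deferred):
    """Present value of tax t*gain paid after `years_deferred` periods."""
    return t * gain / (1 + i) ** years_deferred

accrual  = pv_tax_on_gain(100.0, t=0.30, i=0.05, years_deferred=0)   # taxed now
deferred = pv_tax_on_gain(100.0, t=0.30, i=0.05, years_deferred=10)  # taxed later
print(round(accrual, 2), round(deferred, 2))  # 30.0 18.42
```

Deferring realization by ten years cuts the present value of the tax liability by roughly two-fifths, which is the gap the accrual-based corporate tax is designed to close.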
Income on unincorporated firms, such as partnerships and sole owners, is only subject to personal tax on the grounds that it is mostly realized in lieu of wages and salaries.

Just a classical corporate tax
In a certainty setting the classical corporate tax (tC) leads to an all-debt equilibrium. We can demonstrate this using the conditions for optimally traded debt and equity by corporate firms, with

φ(1 + iB)(1 − tC) ≤ 1 for debt, φ(1 − tC + iE) ≤ 1 for equity.

Since interest and the repayment of capital (V0) are deductible expenses the tax falls solely on equity income.10 By using these conditions we find that firms are indifferent to debt and equity when the supply condition is

iB(1 − tC) = iE.   (7.20)

Due to the absence of personal taxes the demand condition in (7.3) will also apply in this setting, with iB = iE. Since these conditions cannot hold simultaneously there is no equilibrium condition where debt and equity will both trade. If the supply condition holds consumers will only purchase debt as iB > iE, while firms will only supply debt when the demand condition holds as iB(1 − tC) < iE. Thus, there is an all-debt equilibrium where all corporate income is transferred to consumers as interest payments which are not subject to corporate tax. Indeed, whenever firms pay income as dividends or capital gains on shares, consumers have lower future consumption due to the transfer of resources to the government as tax revenue.11 Clearly, MM leverage irrelevance fails in these circumstances. This is confirmed by using the payout constraint for profit-maximizing corporate firms in the presence of the corporate tax to write their current market value as

V0 = Y1(1 − tC)/(1 − tC + biB(1 − tC) + (1 − b)iE).   (7.21)

With b = 1 the value of the firm becomes V0 = Y1/(1 + iB), which is independent of the corporate tax. Notice how interest and the repayment of capital attract implicit tax refunds in (7.21).
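Equation (7.21) can be evaluated directly to show how value rises with leverage under the classical corporate tax. The returns below are hypothetical and are set with iB = iE, as the demand condition requires without personal taxes:

```python
# Firm value under the classical corporate tax, eq. (7.21). With interest and
# capital repayments deductible only equity income bears the tax, so value
# rises with leverage. Returns and tax rate are hypothetical.

def firm_value(Y1, b, iB, iE, tC):
    return Y1 * (1 - tC) / ((1 - tC) + b * iB * (1 - tC) + (1 - b) * iE)

Y1, iB, iE, tC = 1.0, 0.05, 0.05, 0.30
values = [firm_value(Y1, b, iB, iE, tC) for b in (0.0, 0.5, 1.0)]
print([round(v, 4) for v in values])           # [0.9333, 0.9428, 0.9524]
# With b = 1 the corporate tax drops out entirely: V0 = Y1/(1 + iB).
print(round(values[-1], 4) == round(Y1 / (1 + iB), 4))  # True
```

Value is strictly increasing in b, which is the all-debt equilibrium in miniature.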
Since they shield the net cash flows from tax they are frequently referred to as corporate tax shields, which in total are equal to bV0(1 + iB) + (1 − b)V0. The all-debt equilibrium also arises in an uncertainty setting with common information when consumers can satisfy their risk preferences by just holding debt. To do so they need access to a full set of debt securities so they can trade in every state of nature. In the Arrow–Debreu economy with a complete capital market for firms and consumers, the demand condition is iBs = iEs for all s, while the supply condition is iBs(1 − tC) = iEs for all s. Once again, they cannot hold simultaneously and the equilibrium outcome is all debt. Risk preferences play a role when consumers need to bundle debt and equity together to trade

Box 7.9 The capital market with a classical corporate tax: a geometric analysis
The impact of the corporate tax on the debt–equity choice is illustrated in the capital market diagram below. By taxing the income paid to shareholders it reduces the net cash flows that firms can distribute to them, where the equity intercept of the aggregate asset production frontier contracts from âE to âEC as it rotates downwards around âB to RCRC. This makes its slope flatter than the indifference schedules, where, in a competitive equilibrium, consumer utility is maximized by the all-debt outcome at âB.

[Diagram: debt–equity space with the pre-tax frontier RR (slope set by iB = iE), the after-tax frontier RCRC (slope set by iB(1 − tC) = iE), and the all-debt outcome at âB.]

Since tax revenue is returned to consumers as lump-sum transfers the new debt–equity bundle on the asset frontier RCRC must also lie on the pre-tax frontier RR when there is no change in intertemporal consumption. Consumers have the same initial resources but are facing distorted security prices. If current consumption rises the new debt–equity bundle will lie inside RR, while the reverse applies when current consumption falls. As no tax revenue is raised in the all-debt equilibrium consumers have the same real income.
And with unchanged intertemporal consumption the new asset frontier RCRC cuts the pre-tax frontier RR at âB.

across states of nature, and that happens when trading costs make it too costly for firms to create a full set of debt securities. In a frictionless setting, however, competition provides firms with the necessary incentive to create these securities. At this point it is important to stress that the analysis in this section is not meant to be a realistic description of the capital market. Rather, it provides a very clear demonstration of the way that the corporate tax discriminates against equity in favour of debt. In practice, there are a number of other factors that impact on the debt–equity choices of firms and consumers. At the time Modigliani and Miller presented their irrelevance theorems the conventional analysis obtained optimal debt–equity choices by including leverage-related costs with the corporate tax. We now examine these costs before summarizing the empirical evidence on their role.

Leverage-related costs
There are a number of reasons why firms incur leverage-related costs that impose third party claims on their net cash flows. These are claims by agents other than bondholders, shareholders and the government. Equity is supplied when marginal leverage-related costs offset the interest tax shield before reaching an all-debt equilibrium. Most early studies focus on bankruptcy costs, but there are also lost corporate tax shields and agency costs. Bankruptcy and agency costs both require asymmetric information, while lost corporate tax shields do not. Each of them is now considered in turn, beginning with bankruptcy costs. When firms issue limited liability shares, their debt eventually becomes risky at high levels of leverage. Default occurs whenever their net cash flows fall below the risk-free interest payments on their debt. As leverage rises, the probability of default eventually becomes positive and increases.
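The link between leverage and the probability of default can be sketched with a discrete set of states. The states, probabilities and cash flows below are hypothetical:

```python
# Default probability as a function of leverage: the firm defaults in state s
# when net cash flow Y_s falls short of the risk-free payout (1 + i)*b*V0.
# States, probabilities and cash flows are hypothetical.

i, V0 = 0.05, 1.0
states = [(0.2, 0.40), (0.5, 1.00), (0.3, 1.60)]   # (prob, net cash flow Y_s)

def default_prob(b):
    owed = (1 + i) * b * V0
    return sum(p for p, Y in states if Y < owed)

for b in (0.2, 0.4, 0.6, 0.8, 1.0):
    print(b, default_prob(b))
```

The probability is zero at low leverage and then steps up as the promised payout overtakes more of the cash-flow states, which is why expected bankruptcy costs rise with b.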
However, in a common information setting there are no default costs because bondholders know ex ante how much project risk they bear and bond prices sell at a discount to compensate them. They cannot, in these circumstances, make legal claims against firms when default occurs as they knew about the risk at the time they purchased the debt. But with asymmetric information bondholders may not be aware of the default risk when they purchase debt. Once it occurs they can then make claims against firms by applying to have them declared bankrupt. Provisional administrators are appointed to determine whether the firms should be reorganized or liquidated. Any associated costs are third party claims on their net cash flows that reduce the funds available to bondholders and shareholders. Since the probability of default increases with leverage, expected bankruptcy costs are positively related to leverage. Firms in non-defaulting states have sufficient net cash flows to meet their interest payments on debt, with Ys ≥ (1 + iBs)bV0, while in defaulting states they have Ys < (1 + iBs)bV0. If default costs are incurred in every defaulting state firms have even less to distribute to bondholders, with Ys − hsV0 < (1 + iBs)bV0, where hs is the default cost per dollar of capital invested by the firm in defaulting state s.12 The relationship between leverage and default is illustrated in Figure 7.1, where the net cash flows for a representative firm are mapped over states of nature. When leverage is set at or below b̂ there is no default because the firm can pay a risk-free return on its debt. At b̂ the net cash flows just cover the payouts to bondholders and there are no funds available for shareholders.

[Figure 7.1 Default without leverage-related costs: net cash flows Ỹs mapped over states s against the risk-free debt payouts (1 + i)b̂V0 and (1 + i)bV0 for b > b̂; the shortfall in defaulting states is borne as shareholder losses.]

Once leverage rises above b̂,
however, there are defaulting states where the net cash flows are not large enough to pay a risk-free return on debt. In a common information setting bondholders know about the defaulting states and how much they will lose in them so that bond prices sell at a discount. Thus, there are no default costs and MM leverage irrelevance holds. However, with asymmetric information bondholders do not have complete information about the risk firm managers impose on them, where bankruptcy provisions act as a costly deterrent. This makes more sense in a multi-period setting where firm managers care about their reputations and want to avoid presiding over bankrupt firms. Constant default costs are illustrated by the shaded area in Figure 7.2, where it is assumed the firm is declared insolvent in every defaulting state. These bankruptcy costs reduce the funds available to bondholders in defaulting states. They have an expected value of

h̄(b) = ∑s πs hs,

where hs is the default cost in each state s per dollar of capital invested in the firm. Since the probability of default rises with leverage, we have dh̄/db > 0. With costly default and the corporate tax we can write the expected user cost of capital as

c̄ = (1 − tC) + (1 − b)īE + bi(1 − tC) + h̄(b)(1 − tC),

where the bankruptcy costs are tax-deductible expenses along with capital and interest payments. Now an optimal interior debt–equity mix satisfies

dc̄/db = i(1 − tC) − īE + (1 − b)(dīE/db) + (dh̄/db)(1 − tC) = 0.   (7.24)

When the expected return on equity rises to compensate shareholders for bearing the same project risk on less equity capital, with i − īE + (1 − b)(dīE/db) = 0, firms equate the marginal default cost to the interest tax shield it generates, with

itC = (dh̄/db)(1 − tC).

[Figure 7.2 Default with leverage-related costs: net cash flows Ỹs mapped over states s, with default costs (shaded) further reducing the funds available to bondholders in the defaulting states where Ỹs < (1 + i)bV0.]

A number of studies argue bankruptcy costs are insignificant. For example, Haugen and Senbet (1978) argue they are limited to the lesser of the costs of going bankrupt and the costs of avoiding it. When bankruptcy occurs ownership and control of the firm are transferred to bondholders, and firms can avoid this outcome by selling new shares and using the funds to repurchase fixed claims on their assets. This makes bankruptcy costs the lesser of the costs of transferring ownership and control to bondholders or new shareholders. Two important issues are ignored by this analysis: the first is how default costs impact on consumption risk, while the second is the role of agency costs when there is asymmetric information. Notice how default costs increase the downside risk in the net cash flows in Figure 7.2. If this changes the non-diversifiable risk in the payouts to investors additional terms will appear in (7.24) to accommodate the resulting changes in the expected returns to debt and equity. Agency costs are examined later in this subsection.14 Lost corporate tax shields are examined by DeAngelo and Masulis (1980a) in an Arrow–Debreu economy with a full set of primitive securities and common information. In most countries the corporate tax treats income and losses differently, where tax is collected on income but not refunded (in full) on losses. Table 7.1 summarizes the state-contingent returns paid to debt and equity after corporate tax, where the states of nature are assigned numbers that rise with the net cash flows. Default occurs in states s ∈ [0, ŝ), where at ŝ the net cash flows just cover the payouts to debt, with Ys = (1 + i)bV0. The tax losses are equal to the amount by which the tax-deductible expenses exceed the net cash flows, with [ibV0 + V0] − Ys ≥ 0. No default occurs in states s ∈ [ŝ, s̄), but there are tax losses because a fraction αs of shareholder capital is not returned to them, with 0 ≤ αs < 1.
At s̄ there are no tax losses and no income is paid to shareholders, with Ys − (1 + i)bV0 = (1 − b)V0. In the final group of states s ∈ (s̄, S] shareholders are paid income of Ys − (1 + i)bV0 − (1 − b)V0 > 0.15 The lost corporate tax shields in states s ∈ [0, s̄) reduce the value of the firm, and since they rise with leverage, MM leverage irrelevance fails. Indeed, there is an optimal capital structure for the firm in these circumstances because higher leverage increases the number of defaulting states. There are three ways firms can get the full value of their tax deductions: through tax refunds from the tax office; by selling them to firms with tax profits; and by carrying them forward with interest. Governments rarely pay tax refunds or allow firms to sell their tax losses. Most, however, do allow firms to carry their tax losses forward, but without interest.16 Thus, in periods when firms have tax losses, the present value of their tax deductions is eroded. And since interest payments are tax-deductible expenses lost corporate tax shields are related to leverage, where an increase in leverage raises the probability of default and reduces the present value of the tax shields.

Table 7.1 Payouts in the absence of tax refunds on losses (by states s ∈ [0, ŝ), s ∈ [ŝ, s̄) and s ∈ [s̄, S], with net cash flows Ys)

iG for all three securities to trade. We can use the supply condition derived earlier in (7.20) because all equity income is treated in the same way by the corporate tax, with

iB(1 − tC) = iD = iG,   (7.27)

where the relationship between the security returns must satisfy iB > iD = iG. Clearly, this is incompatible with the demand condition in (7.26). Thus, when the demand and supply conditions are combined, we have an equilibrium relationship between the after-tax returns on interest, dividends and capital gains of

(1 − tBh) > (1 − tC)(1 − tBh) < (1 − tC)(1 − tEh) ∀h.
This confirms the proposition made earlier based on the tax rates summarized in Table 7.2 that no consumer has a tax preference for dividends in the Miller equilibrium. Instead, they divide into strict tax clienteles, with:

(1 − tBh) > (1 − tC)(1 − tEh), for debt specialists,
(1 − tBh) < (1 − tC)(1 − tEh), for equity specialists,
(1 − tBh) = (1 − tC)(1 − tEh), for marginal investors.

Equity specialists prefer shares that pay capital gains. They must be high-tax investors (tBh > tC) with marginal cash tax rates that are higher than the combined corporate and personal taxes on capital gains. While all low-tax investors (tBh < tC) are debt specialists, not all high-tax investors are equity specialists. In practice there may not be any marginal investors, but none are needed for both securities to trade.

Box 7.12 Tax preferences of high-tax investors in Australia
To demonstrate how plausible these tax relationships are in practice, consider Australian taxpayers in the top tax bracket with a marginal cash tax rate of 46.5 per cent when the corporate tax rate is 30 per cent. They will have a tax preference for equity that pays capital gains whenever their marginal tax rates on capital gains are less than 23.6 per cent, where

(1 − tBh) ≈ (1 − tC)(1 − tEh), i.e. 0.535 ≈ 0.70 × 0.764, for tEh ≈ 23.6 per cent.

Whenever there are consumers with a tax preference for debt and others with a tax preference for equity they can increase their wealth through tax arbitrage by trading the two securities with each other. If debt specialists sell shares to equity specialists and use the proceeds to buy their debt, both groups generate net tax refunds which transfer revenue from the government budget.23 Miller simplifies the analysis by endowing tax rates on consumers, but they will have unbounded demands for their tax-preferred securities.
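The clientele conditions above translate directly into a classification rule. The sketch below applies it to the Box 7.12 figures; the function name and the sample capital gains rates are illustrative:

```python
# Classify an investor into a Miller tax clientele by comparing the after-tax
# dollar from interest, (1 - tB), with the after-tax dollar from capital
# gains, (1 - tC)*(1 - tE).

def clientele(tB, tE, tC, tol=1e-9):
    debt, equity = 1 - tB, (1 - tC) * (1 - tE)
    if abs(debt - equity) < tol:
        return "marginal investor"
    return "debt specialist" if debt > equity else "equity specialist"

tC = 0.30
# Top-bracket Australian investor from Box 7.12: cash tax rate 46.5 per cent.
print(clientele(tB=0.465, tE=0.20, tC=tC))   # equity specialist
print(clientele(tB=0.465, tE=0.30, tC=tC))   # debt specialist
# Indifference threshold on the capital gains rate: 1 - 0.465 = 0.70*(1 - tE*)
print(round(1 - (1 - 0.465) / (1 - tC), 3))  # 0.236
```

The threshold reproduces the 23.6 per cent figure in Box 7.12.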
Three studies examine different ways to bound security demands: Dammon and Green (1987) make personal tax rates increasing functions of income so that tax arbitrage eliminates investor tax preferences; Jones and Milne (1992) include a government budget constraint to bound the revenue consumers can extract through tax arbitrage;24 and Miller (1988) imposes borrowing constraints on consumers.25 While Miller’s approach does simplify the analysis, it conceals potentially important endogenous relationships identified by Dammon and Green and by Jones and Milne that can have important welfare implications for the final equilibrium outcome.26 With short-selling constraints, debt and equity specialists have bounded demands for securities in the Miller equilibrium and both securities trade. Due to the absence of any constraints on security trades by firms the market returns to debt and equity will satisfy the supply condition in (7.27) in a competitive capital market. When it does, MM leverage policy irrelevance holds for individual firms. This is confirmed by using the supply condition to write the current market value of the firm in (7.21) as

V0 = Y1(1 − tC)/((1 − tC) + iG) = Y1/(1 + iB),

which is independent of b. But the aggregate debt–equity ratio does matter in the Miller equilibrium because there must be enough debt and equity to satisfy the security demands of debt and equity specialists. Whenever it lies within these bounds the aggregate debt–equity ratio is irrelevant to consumers if there are marginal investors who are willing to hold either security.
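Leverage irrelevance under the supply condition can be confirmed numerically: impose iB(1 − tC) = iG in the valuation above and the debt ratio drops out. The return and tax figures below are hypothetical:

```python
# Leverage irrelevance in the Miller equilibrium: when iB*(1 - tC) = iG the
# valuation collapses to Y1/(1 + iB) for every debt ratio b.
# Return and tax figures are hypothetical.

Y1, iB, tC = 1.0, 0.08, 0.30
iG = iB * (1 - tC)                  # supply condition pins down iG = 0.056

def V0(b):
    return Y1 * (1 - tC) / ((1 - tC) + b * iB * (1 - tC) + (1 - b) * iG)

print([round(V0(b), 6) for b in (0.0, 0.25, 0.5, 0.75, 1.0)])
print(round(Y1 / (1 + iB), 6))      # the same number: 0.925926
```

Every entry in the list equals Y1/(1 + iB), so the firm's capital structure is a matter of indifference.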
If debt and equity specialists cannot satisfy their tax preferences they

Box 7.13 The Miller equilibrium: a geometric analysis
It is possible to see the peculiar attributes of the Miller equilibrium in the debt–equity space diagram below, where the aggregate asset production frontier RCRC maps the debt–equity bundles over the aggregate net cash flows of firms trading in the corporate sector of the economy. Its slope is determined by the supply condition in (7.27) without dividends. Since consumers face different personal tax rates their indifference schedules have different slopes. To simplify the analysis we assume the consumers in each tax clientele have the same tax preferences. Point A isolates the minimum debt (aB) needed to satisfy debt specialists, while point B isolates the minimum equity (aE) needed for equity specialists. Any additional debt and equity supplied between these points is held by marginal investors. Thus, points A and B and the distance between them are determined by the net wealth of the investors in each tax clientele. The slopes of the indifference schedules for each clientele reflect their different tax preferences, where they have larger (negative) slopes than frontier RCRC for debt specialists (vDC^B), a lesser (negative) slope for equity specialists (vDC^E), and the same slope for marginal investors (vDC^M). As long as the aggregate debt–equity bundle lies between points A and B along frontier RCRC it is irrelevant to consumers. Since debt and equity specialists are holding their tax-preferred securities any differences between the bundles in this region are absorbed by marginal investors. Once the aggregate debt–equity ratio moves outside these bounds aggregate welfare falls. For example, bundles that lie above point A along RCRC do not provide enough debt for debt specialists so they hold equity and are worse off due to the extra tax burden on them.
[Diagram: debt–equity space with frontier RCRC, the debt specialists’ bundle at A, the equity specialists’ bundle at B, and indifference schedules vDC^B, vDC^E and vDC^M.]

will have lower welfare due to the extra tax burden imposed on them from holding the higher-taxed security. A number of commentators on the Miller equilibrium draw on the role played by homemade leverage in the original proofs of MM leverage irrelevance to argue there must be marginal investors and certainty for MM leverage irrelevance to hold in the presence of taxes. They claim marginal investors are needed in the model to absorb changes in firm capital structure, while certainty removes risk preferences from security demands so that consumers divide into strict tax clienteles.27 But leverage irrelevance holds in the Miller equilibrium without marginal investors and with uncertainty. Suppose there are no marginal investors in a certainty setting, so that all consumers are debt or equity specialists. If one firm raises its leverage (with investment held constant) there is an excess supply of debt and an excess demand for equity that puts upward pressure on the market price of equity and downward pressure on the market price of debt. Other firms respond to these price changes by substituting equity for debt until the aggregate debt–equity ratio is restored to its original level. Thus, the market value of individual firms is unaffected by changes in leverage as changes in security prices induce other firms to take offsetting positions so that consumers continue to hold their tax-preferred securities. In a frictionless competitive capital market where profit-maximizing firms respond to security price changes consumers get their tax-preferred securities.
Stiglitz (1974) recognized that rebundling by financial intermediaries (as agents of corporate firms) would make leverage policy irrelevant in a frictionless competitive capital market without taxes.28 Even in the absence of personal taxes and short-selling constraints, homemade leverage is likely in practice to be more costly than rebundling on the supply side of the market by specialist traders with lower transactions costs. While there are no transactions costs in the Miller equilibrium, there are borrowing constraints on consumers to bound their security demands and rule out tax arbitrage. Thus, all the arbitrage activity must be undertaken by profit-maximizing firms. Now suppose we introduce uncertainty to the earlier analysis of the Miller equilibrium. It is tempting to conclude consumers will not separate into strict tax clienteles in these circumstances as their security demands will be determined by a combination of risk and tax preferences. Auerbach and King argue consumers will forgo some of the benefits from holding tax-preferred securities and bundle debt and equity together to satisfy their risk preferences. In response, firms will form leverage clienteles to create different risky mutual funds for consumers with the same tax preferences. They argue these mutual funds are unlikely to satisfy the risk preferences of every consumer. Kim (1982) and Sarig and Scott (1985) argue there are no leverage clienteles in the Miller equilibrium because consumers can satisfy their risk preferences by holding just tax-preferred securities. There are two ways firms (or financial intermediaries) achieve this outcome: by providing a complete set of debt and equity securities so that consumers can create a full set of primitive equity and primitive debt securities; or by creating securities to satisfy the risk and tax preferences of consumers (in effect, they create personalized mutual funds constructed solely from tax-preferred securities). 
While the outcome in Auerbach and King is more realistic, they are implicitly including transactions costs and asymmetric information to stop firms from creating personalized risky mutual funds for consumers. Clearly, they are trying to explain what actually happens in the capital market, but, in doing so, are moving outside the confines of the frictionless classical finance model of the Miller equilibrium. In a frictionless competitive economy with common information, firms know the risk and tax preferences of every consumer and are driven by the profit motive to satisfy them. Due to the absence of trading costs

Box 7.14 The Miller equilibrium without marginal investors

A geometric analysis helps to clarify why marginal investors are not required in the Miller equilibrium. In their absence there is an optimal aggregate debt–equity bundle for the corporate sector of the economy at â on the asset production frontier RCRC in the debt–equity space diagram below. Since consumers face borrowing constraints to restrict tax arbitrage, they are unable to access any arbitrage profits when the after-tax security returns are not equal. Instead, that role is undertaken by profit-maximizing firms which equate the cost of debt and equity along the frontier RCRC, with iB(1 − tC) = iG. Whenever changes in leverage by one or more firms move the aggregate debt–equity bundle away from â, other firms respond to the (incipient) changes in security prices and bring the bundle back to â, where consumer tax preferences are satisfied. Profit-maximizing firms undertake this repackaging due to the absence of restrictions on their security trades, which is reflected in the linearity of the asset production frontier. Thus, the homemade leverage identified by Modigliani and Miller in their original proof of the leverage irrelevance theorem without taxes will not be possible in the Miller equilibrium with taxes when there are no marginal investors.
[Figure: Box 7.14 debt–equity space diagram with bundle â on the frontier RCRC.]

consumers are not required to trade off risk and tax preferences in these circumstances. In practice, consumers do purchase bundles of debt and equity, often as mutual funds, to satisfy their conflicting risk and tax preferences as firms cannot costlessly create their personalized risky tax-preferred securities.29 Moreover, it is too costly for them to create a full set of primitive debt and equity securities to make the capital market double complete. This is confirmed by Kim et al. (1979), who find empirical evidence of shareholder leverage clienteles, where firms choose capital structures to satisfy investors with different risk and tax preferences. Even though leverage clienteles are absent in the Miller equilibrium, it does, however, establish the important arbitrage role played by firms (or their agents, financial intermediaries) in competitive capital markets. In practice, trading costs are likely to restrict homemade leverage, where consumers face higher trading costs than specialist financial intermediaries. As transactions costs fall and traders acquire better information about investor risk and tax preferences, the actual capital market outcome will converge to the Miller equilibrium. MM leverage irrelevance for individual firms in the Miller equilibrium with uncertainty can be confirmed by computing the market value of firms in the Arrow–Debreu economy as

V0 = Ys(1 − tC)/[(1 − tC) + b iBs(1 − tC) + (1 − b)iGs] for all s,

where equity pays capital gains. When both securities trade in a complete capital market the supply condition in (7.27) holds, with iBs(1 − tC) = iGs for all s, where the value of the firm becomes V0 = Ys/(1 + iBs) for all s, which is independent of the debt–value ratio. And this also applies without marginal investors. A growing number of countries have reformed their tax systems to remove the double tax on dividends.
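The leverage-irrelevance calculation above can be checked numerically; the sketch below uses hypothetical rates and imposes the supply condition iBs(1 − tC) = iGs, under which the valuation collapses to Ys/(1 + iBs) for every debt–value ratio b:

```python
# Sketch of MM leverage irrelevance in the Miller equilibrium
# (all numbers hypothetical). With the supply condition
# i_B*(1 - t_C) = i_G, the firm value
#   V0 = Y*(1 - t_C) / [(1 - t_C) + b*i_B*(1 - t_C) + (1 - b)*i_G]
# collapses to Y / (1 + i_B) for every debt-value ratio b.

def firm_value(y, t_c, i_b, i_g, b):
    return y * (1 - t_c) / ((1 - t_c) + b * i_b * (1 - t_c) + (1 - b) * i_g)

y, t_c, i_b = 100.0, 0.30, 0.10
i_g = i_b * (1 - t_c)  # supply condition: i_B(1 - t_C) = i_G

values = [firm_value(y, t_c, i_b, i_g, b) for b in (0.0, 0.4, 1.0)]
print(values)           # equal (up to rounding) for every leverage ratio
print(y / (1 + i_b))    # the common value Y/(1 + i_B)
```

If the supply condition fails, the three values differ and leverage matters, which is exactly why firm-side arbitrage restores it.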
For example, governments in Australia, the United Kingdom, New Zealand and the United States have adopted tax imputation systems that give shareholders credit for corporate tax paid on dividends. This makes all investors indifferent between interest and dividends. We examine the impact of dividend imputation later in Section 7.3.3.

The Miller equilibrium in open economies

With perfect capital mobility the market returns to domestic debt and equity are determined by the returns on perfect substitutes in world markets, where the supply condition becomes iB(1 − tCF) = iG, for the foreign corporate tax rate tCF.30 In a certainty setting countries form supply clienteles, where those with higher corporate tax rates supply only debt, with iB(1 − tC) < iB(1 − tCF) = iG, and those with lower rates supply only equity, with iB(1 − tC) > iB(1 − tCF) = iG. The country with the corporate tax rate that satisfies the supply condition is determined by the aggregate demand for debt and equity in the international capital market, which depends on the personal tax rates in each country. If there are tax agreements between countries that give domestic residents credits for any foreign personal tax payments, consumer income is subject only to domestic personal tax rates, where the demands for debt and equity in each country will be determined by the tax relationships: consumers demand equity when (1 − t_B^h) < (1 − tCF)(1 − t_E^h), and debt otherwise. The larger the aggregate demand for equity, the greater the number of countries supplying it, where the country with the highest corporate tax rate determines the supply condition for the returns to debt and equity.

7.2.4 The user cost of capital

In (7.22) we gave a general expression for the expected user cost of capital, which is the weighted average cost of capital (WACC) used to compute the market value of a firm in a two-period setting with risk and taxes.
It is the average cost of raising and using each dollar of capital invested by bondholders and shareholders, where the total economic cost of capital is obtained by multiplying the market value of the firm by (7.22). When the no arbitrage condition holds it is equal to the firm's after-tax net cash flows, with cV0 = Y1. In some circumstances the WACC in (7.22) is also the marginal cost of capital (MCC) used by firms to determine their level of investment, with

dY1/dZ0|db=0 = c.31

When the MCC is constant in the absence of fixed costs, we have MCC = WACC. This occurs in the following circumstances:

i In a certainty setting the last term in (7.22) disappears because there are no default costs, and the user cost of capital simplifies to

c = (1 − tC) + biB(1 − tC) + (1 − b)iE.

Since firms cannot affect the returns they pay to debt and equity in a competitive capital market, the user cost of capital is unaffected by their investment choices. When both securities trade, we have iB(1 − tC) = iE, where MM leverage irrelevance holds, and profit-maximizing firms equate the value of the marginal product of investment to the MCC, which is also the WACC.

ii It is unlikely for (7.22) to be the MCC when there is uncertainty, even with common information, as leverage-related costs and project risk on each dollar of capital are affected by additional investment. If investment has scaling effects on the net cash flows in each state of nature, nothing happens to the WACC in (7.22) because the probability of default and the project risk per dollar of capital are unchanged.

iii In a two-period setting there is depreciation in the WACC in (7.22) because firms liquidate in the second period and repay capital to investors from their net cash flows. This makes depreciation unity in the user cost of capital.
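The certainty-case user cost in item i can be sketched numerically (hypothetical rates); imposing iB(1 − tC) = iE makes c independent of the debt–value ratio b, so MCC = WACC:

```python
# Certainty-case user cost of capital (hypothetical numbers):
#   c = (1 - t_C) + b*i_B*(1 - t_C) + (1 - b)*i_E
# When both securities trade, i_B*(1 - t_C) = i_E, so c is the
# same for every debt-value ratio b and MCC = WACC.

def user_cost(t_c, i_b, i_e, b):
    return (1 - t_c) + b * i_b * (1 - t_c) + (1 - b) * i_e

t_c, i_b = 0.30, 0.10
i_e = i_b * (1 - t_c)   # equilibrium condition: i_B(1 - t_C) = i_E

costs = {b: user_cost(t_c, i_b, i_e, b) for b in (0.0, 0.5, 1.0)}
print(costs)            # equal up to rounding: (1 - t_C) + i_E
```

With these illustrative numbers the common user cost is (1 − 0.30) + 0.07 = 0.77 per dollar of capital, whatever the leverage mix.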
In a multi-period setting, however, the first term in (7.22) is replaced by

−Φt(1 − tC) = −(Vt − Vt−1)(1 − tC)/Vt−1,

which is the rate of change in the market value of the firm over the period from t − 1 to t; it is the expected rate of economic depreciation when Vt < Vt−1. For most depreciating assets there is less than complete depreciation, with 0 ≤ −Φt < 1. Frequently, however, firm values rise in some time periods. For example, there are firms which invest a significant portion of their capital in assets such as land, buildings and goodwill, and these can increase in value. In periods when capital gains on these assets are large enough to offset reductions in the market values of their depreciating assets, their market values will rise. The rate of appreciation or depreciation in the value of a firm will not change with investment when it has a scaling effect on the value of its outputs and inputs. When this happens the WACC in (7.22) is also the MCC that determines the optimal level of investment. These cases tell us something about the circumstances where the WACC is not equal to the MCC:

i When the amount of project risk per dollar of capital changes with investment, firms must pay higher expected returns to shareholders and/or bondholders to compensate them for bearing this extra risk. In the absence of lost corporate tax shields, which eliminates the last term in (7.22), the condition for optimally chosen investment becomes

dY1/dZ0|db=0 = c + b(1 − tC)∂iB/∂Z0 + (1 − b)∂iE/∂Z0 > c.

While extra project risk raises the user cost of capital and reduces the market value of the firm, MM leverage irrelevance continues to hold when traders have common information.
But once investment changes the project risk on each dollar of capital invested, the MCC deviates from the WACC in (7.22).32

ii Whenever profits and losses are not treated symmetrically by the corporate tax, there are expected default costs from lost corporate tax shields, even in a common information setting. If the extra project risk from additional investment changes the probability of default, it also changes the expected default costs in the last term of (7.22), where the condition for optimally chosen investment becomes

dY1/dZ0|db=0 = c + b(1 − tC)∂iB/∂Z0 + (1 − b)∂iE/∂Z0 + (∂h/∂Z0)(1 − tC).33

The last term is the change in expected default costs when investment affects the probability of default.

iii With more than two time periods the first term in (7.22) is replaced by −Φt(1 − tC), which measures the rate of change in the market value of the firm over the period from t − 1 to t, where Φt < 0 when it declines, and Φt > 0 when it rises. It would seem reasonable to expect this term to be a function of the level of investment, as firms are likely to change their input mix when they expand investment and production. In other words, it seems unlikely, even in the long run when all inputs can be varied, that firms will simply scale their operations when they change investment, where the condition for optimally chosen investment, in the absence of changes in project risk and expected default costs, becomes

dY1/dZ0|db=0 = c − (∂Φ1/∂Z0)(1 − tC).

The second term is the change in the (average) rate of economic depreciation that causes the MCC to deviate from the WACC in (7.22). Ross (2005) derives an expression for the cost of capital in a multi-period setting with common information and finds that it differs from the standard WACC formula because shareholders and bondholders in bankrupt firms incur losses from recapitalization and reorganization costs.
Recapitalization losses occur because investors are forced to exchange their initial debt and equity for securities with lower values. The analysis therefore assumes the capital market is incomplete due to transactions costs. In a frictionless complete capital market where investors can trade in every future state, any recapitalization is costless and is already included in current security prices.

iv The WACC can deviate from the MCC when traders have asymmetric information. Additional investment that provides new information or affects the actions taken by firm managers can change the expected user cost of capital. For example, when traders get better information about a firm's project risk it changes the risk premium paid to debt and equity and the cost of capital.

Most governments estimate the user cost of capital for firms in different sectors of the economy to determine how their policies or other factors impact on private investment and employment. Private traders also estimate the user cost of capital to guide their investment decisions. It is reasonably clear from (7.22) and the subsequent discussion that depreciation allowances, the corporate tax, project risk and default costs all play an important role in determining the user cost of capital. Other factors can also play a role by impacting on the equilibrium returns paid to debt and equity, and on the prices that determine the net cash flows to investment. We conclude this section by considering three of them. First, investor demands are affected by personal taxes on the returns to debt and equity. In the previous subsection we saw how the double tax on dividends makes interest and capital gains more attractive for all investors. Corporate and personal taxes affect the relative returns to debt and equity as well as their equilibrium levels. From the supply condition in (7.27), with iBs(1 − tC) = iDs = iGs for all s, we have iBs > iDs = iGs due to the tax deductibility of interest.
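The return relationships implied by this supply condition can be sketched with hypothetical numbers:

```python
# Hypothetical sketch of the supply condition i_B*(1 - t_C) = i_D = i_G.
# Because interest is tax deductible, firms can pay a higher pre-tax
# return on debt than on equity, so i_B > i_D = i_G in equilibrium.

t_c = 0.30
i_g = 0.07                 # assumed equilibrium equity return
i_d = i_g                  # both forms of equity pay the same return
i_b = i_g / (1 - t_c)      # pre-tax interest rate consistent with supply

print(round(i_b, 4))       # i_B exceeds i_D = i_G = 0.07
assert i_b > i_d == i_g
assert abs(i_b * (1 - t_c) - i_g) < 1e-12
```

The gap i_B − i_G is the tax wedge referred to in the following paragraph: the pre-tax cost of debt must rise until its after-tax cost matches the return on equity.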
The combined corporate and personal taxes drive wedges between the pre- and post-tax returns to debt and equity, thereby raising the cost of capital for firms and lowering the after-tax income received by investors; the larger the tax wedges, the lower the aggregate level of saving and investment. Investors divide into strict tax clienteles in the Miller equilibrium to minimize their tax payments. Recall from the previous subsection how this occurs in a common information setting without transactions costs, where consumers effectively have access to a full set of primitive debt and primitive equity securities. However, investors may bundle debt and equity together and incur larger tax burdens to satisfy their risk preferences when there are trading costs and asymmetric information. This further reduces saving and investment by simultaneously raising the cost of capital for firms and reducing the after-tax returns to investors. From time to time governments reform their spending and taxing policies to expand aggregate income and employment. Lower corporate and personal tax rates expand private investment and saving by reducing the cost of capital and raising the after-tax returns to investors. The final change in the cost of capital in (7.22) is ultimately determined by the interest elasticities of aggregate investment demand and aggregate saving.34 A lower corporate tax rate reduces the value of interest tax deductions. When it falls below the lowest marginal cash tax rate every investor becomes a high-tax investor, with t_B^h > tC for all h, where this is likely to increase the number of equity specialists in the Miller equilibrium. Second, governments in some countries allow firms in politically sensitive sectors of the economy to accelerate their depreciation deductions for tax purposes.
While this provides an implicit subsidy by raising the present value of their tax deductions relative to firms in other sectors, it has efficiency effects that can reduce aggregate income. Concessions of this kind are fairly common in relatively new industries and for firms which undertake research and development. When the implicit subsidy from accelerated depreciation allowances corrects externalities in these activities it can raise aggregate income. But they too are frequently granted on political rather than efficiency grounds. Third, fiscal and monetary policies can also impact on the user cost of capital by changing the equilibrium returns to debt and equity. For example, tighter monetary policy can drive up the cost of capital and discourage investment, at least in the short term when there are nominal price rigidities that allow changes in the money supply to have real effects in the economy. Extra government spending can also affect the user cost of capital when it reallocates resources inter-temporally. In recent years governments have been much less inclined to use fiscal and monetary policies to smooth economic activity through normal cyclical changes. They have difficulty identifying the cycles, and it also takes considerable time to legislate and implement their policy responses. Moreover, monetary policy is much less effective in economies with flexible nominal prices, while government spending may be ineffective when there are principal–agent problems between voters, politicians and bureaucrats that affect the provision of public services.

Box 7.15 The Miller equilibrium with a lower corporate tax rate

A lower corporate tax rate will reduce any tax preferences for equity. Indeed, it makes initial marginal investors debt specialists and some initial equity specialists marginal investors or even debt specialists.
The effects of making debt specialists marginal investors are illustrated in the debt–equity space diagram below, where initially debt and equity specialists have the same tax preferences within each group. Prior to the tax change there are no marginal investors and the aggregate debt–equity bundle â satisfies the security demands of debt and equity specialists in the corporate sector of the economy. As the corporate tax rate falls it rotates the aggregate asset possibility frontier upward around the debt axis. Since firms can distribute more of their net cash flows to shareholders, they can issue more equity at each level of debt when they supply both securities. At the new lower tax rate tC′ former debt specialists become marginal investors along asset frontier RC′RC′, with (1 − t_B^h) = (1 − tC′)(1 − t_E^h). All the aggregate debt–equity bundles along this frontier above â′ are irrelevant to consumers. However, consumers are worse off when the aggregate debt–equity bundles are moved below â′ along the frontier.

[Figure: Box 7.15 debt–equity space diagram with the initial frontier RCRC and bundle â, and the new frontier RC′RC′ and bundle â′ after the tax cut.]

It should be noted that the lower corporate tax rate will increase the aggregate wealth of consumers by reducing allocative inefficiency. If that changes their intertemporal consumption choices it will affect their demands for the two securities, where that causes parallel shifts in the new asset frontier RC′RC′ and moves the new debt–equity bundle along it. If there is a rise in future consumption for all consumers the new frontier shifts out and the debt–equity bundle â′ moves down the frontier. Policy changes like these need to be evaluated formally using a model that incorporates the public sector in the economy. We do this in the next chapter, where marginal policy changes are examined in a tax-distorted economy.

7.3 Dividend policy

Another important financing decision firms make is how to distribute equity income to shareholders.
Most corporate debt has a fixed market value and pays variable interest, while the market value of equity can vary through time with income paid as dividends and capital gains. Dividend policy determines whether equity income is paid to shareholders as a cash dividend or retained by firms who repurchase shares to pay capital gains. Miller and Modigliani (1961) prove dividend policy is irrelevant to the market values of firms in a frictionless common information setting without taxes. Whenever firms use income to repurchase a dollar of equity the market value of their equity falls by a dollar, while shareholder wealth is unchanged as the cash they receive matches the fall in the market value of their shareholdings. But dividend irrelevance fails in the presence of a classical corporate tax where personal taxes favour capital gains over dividends. This was demonstrated earlier in Section 7.2.3, where no dividends are paid in the Miller equilibrium.35 A large number of studies have attempted to solve the dividend puzzle. In recent years a number of governments have replaced the classical corporate tax with an imputation tax system that grants credits to shareholders for corporate tax collected on their dividends; it removes the double taxation of dividends and makes them subject to the same personal tax as interest income.36 It may seem puzzling that they did not instead just abolish the corporate tax altogether. But the reason for not doing so is the same as the reason for introducing a corporate tax in the first place. In its absence firms have an incentive to pay equity income as capital gains that are subject to lower personal tax rates. Recall from an earlier discussion that there are two reasons why: first, most governments set lower statutory rates on them relative to cash income; and second, they are taxed on realization and not accrual. The more time it takes to realize capital gains, the lower the effective tax rate on them.
The imputation tax system recognizes this by using the corporate tax as a withholding tax which is credited to shareholders when they receive the income as dividends. Thus, it taxes equity income on accrual inside firms and removes any incentive for them to delay paying it as capital gains (when shareholders have marginal personal (cash) tax rates less than or equal to the corporate tax rate). Governments are attracted to the imputation tax system because it removes the double taxation of dividends under the classical corporate tax system and, in so doing, is a less discriminatory tax. There are some remaining tax preferences, however, and they will be identified later in this section. We begin the analysis with a simple proof of MM dividend policy irrelevance in the absence of taxes. Then we summarize the dividend puzzle identified earlier in the Miller equilibrium before considering a number of attempts to resolve it, including differential transactions costs on dividends and capital gains, share repurchase constraints and agency costs. Unfortunately, none of these explanations appear to provide an adequate resolution to the puzzle, which is why many argue it is one of the most intractable problems in finance.

7.3.1 Dividend policy irrelevance

Dividend policy irrelevance can be demonstrated in a two-period certainty setting by separating equity into shares that pay dividends (D) and capital gains (G), where the optimal debt and equity choices of consumers and firms satisfy

ϕ(1 + iB) = 1, for debt,
ϕ(1 + iD) = 1, for equity paying dividends,
ϕ(1 + iG) = 1, for equity paying capital gains,

leading to an equilibrium condition, when the no arbitrage condition holds, of

iB = iD = iG.
(7.31)

In the absence of taxes the three securities are perfect substitutes in a certainty setting, so they must pay the same rates of return.37 The market value of profit-maximizing firms is obtained using their payout constraint as

V0 = Y1/(1 + biB + giG + diD),

where b = paB aB/V0, g = paG aG/V0 and d = paD aD/V0, with b + g + d = 1. Clearly, when the equilibrium condition in (7.31) holds, the value of the firm is unaffected by changes in leverage or dividend policy.

7.3.2 The dividend puzzle

Miller (1988) identified the dividend puzzle by combining corporate and personal taxes in a classical finance model. Under a classical corporate tax, dividends are subject to higher tax than all other forms of corporate income for fully taxable consumers, who prefer interest or capital gains. But this creates an obvious dilemma because in practice corporate firms pay a significant proportion of their income as dividends. In a sample of 156 US firms, Sarig (2004) found that on average they distributed approximately 61 per cent of their earnings, with approximately 91 per cent being paid as dividends over the period 1950–1997.38 The dividend puzzle was isolated earlier in the Miller equilibrium using (7.28), where

(1 − t_B^h) > (1 − tC)(1 − t_B^h) < (1 − tC)(1 − t_E^h) for all h.39
(interest)    (dividends)          (capital gains)

The double tax on dividends makes them less attractive than interest, which is subject only to personal tax, and capital gains, which are subject to lower personal tax. For that reason, no dividends are paid in the Miller equilibrium. There have been a number of attempts to resolve the dividend puzzle. Three main explanations are considered here. The first of these is trading costs. Firms and shareholders incur transactions costs such as bank fees, mailing charges and stamp duty when dividends are paid.
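The inequality in (7.28) can be checked with hypothetical tax rates; per pre-tax corporate dollar, dividend income is dominated by both interest and capital gains:

```python
# After-tax income per pre-tax corporate dollar, following condition
# (7.28) with hypothetical tax rates: dividends bear both the corporate
# tax and the personal cash tax, so every investor prefers interest or
# lower-taxed capital gains to dividends.

def after_tax(t_b, t_c, t_e):
    return {
        "interest": 1 - t_b,                   # personal cash tax only
        "dividends": (1 - t_c) * (1 - t_b),    # corporate + personal cash tax
        "capital gains": (1 - t_c) * (1 - t_e) # corporate + lower personal tax
    }

incomes = after_tax(t_b=0.40, t_c=0.30, t_e=0.15)
print(incomes)
assert incomes["dividends"] < incomes["interest"]
assert incomes["dividends"] < incomes["capital gains"]
```

With these illustrative rates a pre-tax dollar yields 0.60 as interest, 0.42 as a dividend and 0.595 as a capital gain, so dividends are strictly dominated.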
In contrast, firms that repurchase shares to pay capital gains must by law provide information about these transactions, pay broking fees and incur other transactions costs, while shareholders also incur broking fees and other transactions costs when they sell their shares. The trading costs to pay capital gains are typically much larger than the costs of paying dividends, particularly for individual shareholders. According to Barclay and Smith (1988) they are higher for capital gains, but not by enough at the margin to offset the extra tax burden imposed on dividends under a classical corporate tax system.40 The second explanation is share repurchase constraints. In some countries there are regulations that restrict share repurchases by firms. They were originally adopted to stop firms creating speculative runs to inflate their share prices above fundamentals, but in recent times they have been used explicitly to stop firms from avoiding the higher taxes on dividends. For example, in the United States penalties are imposed on firms that systematically repurchase

Box 7.16 The dividend puzzle

The dividend puzzle is illustrated in the diagram below, where the asset possibility frontier RCRC isolates the largest amount of equity firms can supply from their aggregate net cash flows after meeting their obligations to bondholders and paying corporate tax. Since capital gains and dividends are both subject to corporate tax it does not alter the slope of the asset frontier, where both types of equity must pay the same market return for firms to supply them, with iD = iG. But personal taxes are lower on capital gains for all consumers, so their indifference schedules are steeper than the asset possibility frontier, where they require iD(1 − t_B^h) = iG(1 − t_E^h) to trade both securities, with iD > iG. That is why firms only pay capital gains at âG.
Whenever they pay dividends shareholders are driven onto lower indifference schedules due to the extra tax burden imposed on them.

[Figure: Box 7.16 debt–equity space diagram with frontier RCRC and bundle âG, where iD(1 − t_B^h) = iG(1 − t_E^h) with iD > iG.]

shares to avoid the higher taxes on dividends. Occasional share repurchases are permitted to allow firms to restructure their capital. Auerbach (1979), Bradford (1981) and King (1977) offered the new or trapped view of dividends, where they argue share repurchase constraints force firms to pay dividends. Unfortunately, though, this equilibrium outcome has an Achilles' heel because it relies on the important assumption that firms cannot trade each other's shares. Inter-corporate equity is significant in practice, and it provides a substitute for share repurchases. To see this, consider a firm A with $100 of equity income. If it cannot repurchase $100 of its own shares to pay capital gains it can buy $100 of firm B's shares, and firm B then uses the proceeds to buy $100 of firm A's shares. In the absence of transactions costs or taxes on inter-corporate equity, firm A has replaced $100 of cash with $100 of equity in firm B, which offsets the value of its own outstanding shares. The market value of firm B is unchanged by these trades because the $100 liability from selling its own shares is matched by the value of the shares it holds in firm A, while shareholders in firm A have $100 of cash that offsets the lower value of equity they now hold in firm A. Thus, the $100 income generated by firm A has been transferred to its shareholders as capital gains. Governments are less concerned now than they were in the past about firms trying to inflate their share valuations through share repurchase activity because institutional traders have more information about the identity of buyers and sellers and how much equity they trade.
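The firm A/firm B example above reduces to simple bookkeeping, sketched below with the same hypothetical $100 of equity income:

```python
# Bookkeeping sketch of the inter-corporate equity trades described
# above (hypothetical $100 example). Firm A buys $100 of firm B's
# shares; B uses the proceeds to buy $100 of A's shares. A's income
# reaches its shareholders as capital gains without A repurchasing
# its own stock, and both firms' net market values are unchanged.

firm_a = {"cash": 100.0, "shares_in_B": 0.0, "own_shares_held_by_B": 0.0}
firm_b = {"shares_in_A": 0.0, "own_shares_sold": 0.0}

# A buys $100 of B's newly issued shares.
firm_a["cash"] -= 100.0
firm_a["shares_in_B"] += 100.0
firm_b["own_shares_sold"] += 100.0

# B uses the $100 proceeds to buy $100 of A's shares.
firm_b["shares_in_A"] += 100.0
firm_a["own_shares_held_by_B"] += 100.0

# B's new liability ($100 of its own shares) is matched by its $100
# asset (shares in A), so its net value change is zero.
net_change_b = firm_b["shares_in_A"] - firm_b["own_shares_sold"]
print(net_change_b)   # 0.0

# A swapped $100 cash for $100 of B's shares, so its asset side is
# unchanged; A's shareholders receive the income as capital gains.
print(firm_a["cash"] + firm_a["shares_in_B"])   # 100.0
```

The offsetting cross-holdings are exactly why repurchase constraints alone cannot trap equity income inside firms.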
In fact, brokers frequently share this information with each other to stop traders from

Box 7.17 The dividend puzzle and trading costs

The impact of trading costs on the choice between dividends and capital gains is illustrated in the diagram below. For trading costs (T) to make dividends preferable they must be relatively higher for capital gains, where those incurred by consumers make their indifference schedules flatter (vDC^T), and those incurred by firms make the asset production frontier steeper (RCRC^T). In effect, higher trading costs raise the relative cost of capital gains for firms and reduce their relative return to consumers. In the diagram we simplify the analysis by assuming the trading costs only apply to capital gains, where this rotates the asset production frontier around the intercept âD. Dividends trade once the slope of the indifference schedules is the same as or flatter than the slope of the asset production frontier. In practice, these relative trading costs do not appear large enough on their own to explain the payment of dividends under a classical corporate tax system.

[Figure: Box 7.17 diagram with frontiers RCRC and RCRC^T, intercept âD and bundles âG and âG^T.]

exploiting inside information, thereby making it more difficult for single traders to corner the market by spreading their trades across a number of brokers. The third explanation is signalling and agency costs when there is asymmetric information. Bhattacharya (1979) and Miller and Rock (1985) identify circumstances where firm managers use dividend payments to signal the quality of their expected net cash flows to shareholders with incomplete information. Higher taxes on dividends are signalling costs that allow them to function as a credible signal in these circumstances. When dividend payments are optimally chosen these costs are equated at the margin to the benefits shareholders get from the information provided.
Using financial data for 156 firms in the US over the period 1950–1997, Sarig (2004) finds empirical support for the signalling benefits of dividends: the benefits from changes in dividends were larger than the benefits from changes in share repurchases, and an increase in profitability leads initially to an increase in share repurchases, followed later by an increase in dividends once the profitability is confirmed as being sustained in the long run. Rozeff (1982) and Easterbrook (1984) argue that dividends can be paid to reduce agency costs arising from managers consuming perquisites and from managerial risk aversion. Since dividend payments reduce free cash flows they limit managerial perquisites. They also force firms to go to the capital

Box 7.18 The dividend puzzle and share repurchase constraints

When there are constraints on the security trades of firms that force them to pay equity income as dividends, consumers are forced onto lower indifference schedules due to the extra tax burden. This outcome is illustrated in the diagram below, where consumers are forced to locate at point âD on the lower indifference schedule vDC^D.

[Figure: Box 7.18 diagram with frontier RCRC, bundles âD and âG, and indifference schedules vDC^G and vDC^D.]

market more frequently for funds, thereby placing greater scrutiny on the investment choices of risk-averse managers who underinvest in risky profitable projects. These explanations for the payment of dividends postulate a positive relationship between dividends and the level of asymmetric information. In contrast, the pecking order theory of Myers and Majluf (1984) finds a negative relationship between them. When potential shareholders have less information than existing shareholders about the profitability of new projects they can discount share prices by more than the net present value of the profits.
Since this makes existing shareholders worse off, they reject these projects so that managers are forced to move down the pecking order and use internal funds and (risk-free) debt. Since lower dividends create a larger pool of internal funds, there is a negative relationship between dividends and the level of asymmetric information. Using data for manufacturing firms that traded on the NYSE and the AMEX over the five-year period 1988–1992, Deshmukh (2005) finds empirical support for the pecking order theory over the signalling theory.

7.3.3 Dividend imputation

It is clear from the dividend puzzle examined in the previous section why a classical corporate tax distorts the financing decisions of firms. One way to eliminate the double tax on equity income is to eliminate the corporate tax altogether. But in its absence shareholders have an incentive to delay realizing this income as capital gains so they can lower their effective personal tax rates.41 In effect, their tax liabilities are delayed at no interest cost. The corporate tax deters this activity by collecting revenue on equity income as it accrues inside firms, but the double taxation is especially problematic for dividends.

Box 7.19 The new view of dividends with inter-corporate equity

A more realistic way for firms to overcome share repurchase constraints is through inter-corporate equity trades undertaken on their behalf by financial intermediaries (F). Their role is illustrated in the diagram below. As specialist security traders their asset production frontier (RFRF) passes through the origin, and is linear with the same slope as the asset production frontier for corporate firms in a frictionless competitive capital market. In the presence of share repurchase constraints, firms must distribute their after-tax net cash flows as dividends at point âD.
But these payouts can be converted into capital gains when financial intermediaries purchase $\hat{a}_G$ shares from corporate shareholders using funds raised by selling $\hat{a}_D$ of their own shares to corporate firms. By choosing bundle $a_F$ the intermediaries distribute the equity income of corporate firms to shareholders as capital gains, but without corporate firms buying back their own shares. Instead, corporate firms and financial intermediaries end up holding the same value of each other's shares. [Figure: the intermediary frontier $R_F R_F$ through the origin and the corporate frontier $R_C R_C$, with bundles $a_D$, $\hat{a}_D$, $-a_G$, $\hat{a}_G$, $-\hat{a}_D$, $-a_D$ and $a_F$, and indifference schedule $v_{DC}^G$.] In practice, financial intermediaries incur trading costs that reduce the dividends they can convert into capital gains. These costs cause their asset frontier $R_F R_F$ to kink downwards around the origin. However, as long as these costs are smaller than the extra tax on dividends paid to shareholders, and smaller than the costs incurred by corporate firms trading each other's shares, financial intermediaries will perform this role in a competitive capital market.

Governments in a number of countries have adopted an imputation tax system to remove the double tax on dividends. Any corporate tax collected on dividend income is credited to shareholders by the tax office. In effect, the corporate tax is used as a withholding tax to remove the incentive for firms to delay realizing equity income as capital gains, which attract no tax credits. Another important reason for keeping the corporate tax is to collect revenue on domestic income paid to foreign shareholders, as personal taxes are normally levied only on the income of domestic residents. One important aspect of dividend imputation is the distinction it makes between franked (F) and unfranked (U) dividends. Franked dividends are paid from income subject to corporate tax, while unfranked dividends are paid from untaxed income.
Firms have untaxed income due to differences in their economic and measured income, where economic income is the extra consumption expenditure firms generate for their shareholders. Economic and measured income were compared earlier in Chapter 2, where the main difference arises from the treatment of the changes in the values of capital assets.42 Under an imputation tax system, optimally chosen security trades by consumers will satisfy43

$\varphi[1 + i_B(1 - t_B^h)] \leq 1$, for debt,
$\varphi\!\left[1 + \dfrac{i_F}{1 - t_C}(1 - t_B^h)\right] \leq 1$, for equity paying franked dividends,
$\varphi[1 + i_U(1 - t_B^h)] \leq 1$, for equity paying unfranked dividends,
$\varphi[1 + i_G(1 - t_E^h)] \leq 1$, for equity paying capital gains.    (7.33)

When shareholders receive a franked dividend of $i_F p_F a_F$ they declare its pre-tax value $i_F p_F a_F/(1 - t_C)$ as taxable income. They are then granted tax credits for corporate tax paid by firms, where the amount they pay the tax office is

$$\frac{i_F p_F a_F}{1 - t_C}\,(t_B^h - t_C).$$

Shareholders with a marginal cash tax rate equal to the corporate tax rate ($t_B^h = t_C$) make no additional tax payments, high-tax shareholders ($t_B^h > t_C$) make additional payments, and low-tax shareholders ($t_B^h < t_C$) get excess tax credits refunded to them. Based on the optimality conditions in (7.33), consumers will have a demand condition which makes them indifferent between the four securities, when

$$i_B(1 - t_B^h) = \frac{i_F(1 - t_B^h)}{1 - t_C} = i_U(1 - t_B^h) = i_G(1 - t_E^h).    \qquad (7.34)$$

When firms choose their security trades optimally in this setting they satisfy44

$\varphi[1 + i_B(1 - t_C)] \leq 1$, for debt,
$\varphi(1 + i_F) \leq 1$, for equity paying franked dividends,
$\varphi[1 + i_U(1 - t_C)] \leq 1$, for equity paying unfranked dividends,
$\varphi(1 + i_G) \leq 1$, for equity paying capital gains.    (7.35)

Using these conditions we find that the supply condition that makes firms indifferent between the four securities is

$$i_B(1 - t_C) = i_F = i_U(1 - t_C) = i_G.    \qquad (7.36)$$
Notice how interest and unfranked dividends shield the net cash flows from corporate tax, while franked dividends and capital gains do not. By combining the demand condition in (7.34) with the supply condition in (7.36), we obtain the equilibrium condition

$$\underbrace{(1 - t_B^h)}_{\text{interest}} = \underbrace{(1 - t_B^h)}_{\text{franked dividends}} = \underbrace{(1 - t_B^h)}_{\text{unfranked dividends}} \gtrless \underbrace{(1 - t_C)(1 - t_E^h)}_{\text{capital gains}} \quad \forall h.$$

While every consumer is indifferent between cash distributions as interest and dividends, some tax preferences remain under the imputation tax system.

i All shareholders prefer to have unfranked income paid as capital gains rather than dividends because they are taxed at lower personal rates. Each dollar paid as capital gains generates consumption expenditure of $1 - t_E^h$, while as dividends it generates less consumption expenditure, $1 - t_B^h$. Indeed, this confirms the important role played by the corporate tax when firms cannot tax capital gains on accrual.
ii Some high-tax consumers (with $t_B^h > t_C$) can have a tax preference for equity income paid as capital gains even though it is taxed twice. They become equity specialists such as those identified earlier in the Miller equilibrium, with $(1 - t_B^h) < (1 - t_C)(1 - t_E^h)$.

While dividend imputation makes debt specialists marginal investors for interest and franked dividends, it has no impact on equity specialists facing the same combination of corporate and personal taxes. Clearly, if capital gains were granted credits for corporate tax every investor would prefer them, as they would be subject to lower personal tax rates than cash distributions as dividends and interest. Indeed, that is why investors prefer unfranked income to be paid as capital gains rather than dividends. Dividend imputation is an ingenious solution to the problems we encounter when taxing capital gains because it collects tax revenue on corporate income as it accrues, using the corporate tax.
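The demand condition (7.34) and supply condition (7.36) can be checked numerically. The sketch below is illustrative only: the tax rates and the pre-tax interest rate are assumptions rather than values from the text, and the helper name `after_tax_flows` is hypothetical.

```python
# Hedged numerical check of the imputation equilibrium condition.
# Illustrative rates (assumptions): t_C = 0.3, pre-tax interest i_B = 0.10.
t_C = 0.3
i_B = 0.10

# Supply condition (7.36): i_B(1 - t_C) = i_F = i_U(1 - t_C) = i_G
i_F = i_B * (1 - t_C)      # franked dividend yield
i_U = i_B                  # unfranked dividend yield
i_G = i_B * (1 - t_C)      # capital gains yield

def after_tax_flows(t_Bh, t_Eh):
    """After-tax return per dollar invested for consumer h (demand side, (7.34))."""
    return {
        "interest": i_B * (1 - t_Bh),
        "franked": i_F * (1 - t_Bh) / (1 - t_C),   # grossed up and credited
        "unfranked": i_U * (1 - t_Bh),
        "capital gains": i_G * (1 - t_Eh),
    }

# A shareholder taxed at the corporate rate on cash, with a lower gains rate:
# interest, franked and unfranked dividends all return 0.07 after tax, while
# capital gains return less because they carry no imputation credits.
flows = after_tax_flows(t_Bh=0.3, t_Eh=0.15)
print(flows)
```

With these rates the equality of the first three terms in the equilibrium condition holds exactly, and the capital gains term is strictly smaller, which is preference (i) in the text.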
By crediting this revenue back to shareholders on income paid to them as dividends, it removes the double tax on dividends without making capital gains preferable for shareholders with personal tax rates less than or equal to the corporate tax rate. It also acknowledges the untaxed income of corporate firms due to differences in measured and economic income. Benge and Robinson (1986) analyse a number of other important issues not examined here. In particular, they look at transitional effects and the taxation of income paid to foreign shareholders. They also stress the importance of setting the top marginal personal tax rate at or below the corporate tax rate to reduce the incentive for tax arbitrage.

Problems

1 Consider a tax on corporate income ($t_C$) where the tax base is income after deducting depreciation and interest payments on debt. This is a classical corporate tax base where dividends are subject to tax but interest payments on debt are not. (Assume each firm is a price-taker in the capital market.)
i Determine how the market value of the representative corporate firm is affected by changes in its leverage. Is there an optimal debt–equity mix (i.e., choice of leverage) for the firm?
ii What is economic depreciation, and why in practice do measured depreciation allowances differ from economic depreciation allowances?

2 Let $X$ be the random net cash flows of a firm.
i Use the CAPM to derive the firm's market value when it issues risky debt at a cost of $\tilde{i}_B$ and risky equity at a cost of $\tilde{i}_E$. Derive the firm's market value when it issues risk-free debt. Demonstrate that for both types of debt the value of the firm is independent of its financial leverage. Describe the role of the CAPM in project evaluation.
ii Explain why the firm's equity becomes more risky when it issues more risk-free debt. Why then does MM leverage irrelevance hold? Does its equity become more risky when the extra debt is risky?
3 Capel Court is a mining company in the north-west of Australia with a current market value ($V$) of $6000 million. This is the summed value of its debt and equity and is computed using the CAPM when the expected net cash flow ($\bar{X}$) in 12 months is $612m and the risk-free user cost of capital is $c_F = 0.10$.
i Calculate the total risk premium Capel Court pays to the capital market when the CAPM holds. Explain how this premium is computed, and calculate the covariance between the net cash flow and the return on the market portfolio (i.e., $\mathrm{Cov}(\tilde{X}, \tilde{i}_M)$) when the return on the risk-free asset is $i = 0.06$ and the expected return on the market portfolio is $\bar{i}_M = 0.14$ with a standard deviation of $\sigma_M = 0.4$.
ii Use the information provided above to obtain the CAPM equation for pricing risky assets. Carefully explain why assets are priced in this way. What is the expected return on a risky asset ($k$) when it has a correlation coefficient with the return on the market portfolio of $\rho_{kM} = 0.6$ and a standard deviation of $\sigma_k = 0.5$?
iii Compute the current market value of the total equity issued by Capel Court when its debt is risk-free and equity is expected to pay a return of $\bar{i}_E = 0.08$ based on the CAPM equation in part (ii) above. What is Capel Court's debt–equity ratio? Explain how the share price would change with a fall in this debt–equity ratio.

4 Consider two firms with the following information about their cash flows and leverage.

            $\bar{X}$      $b$        $\beta_X$
Firm K      $840m          $1600m     0.8
Firm J      $950m          $1000m     0.75

i Use the CAPM to compute the current market value of each firm when $\bar{i}_M - i = 0.15$ and $c_F = i - \Phi_F = 0.10$. Explain your workings. Find the share of the project risk bondholders bear in each firm when both firms pay a risk premium on debt of 0.01. Calculate the risk premium paid on each dollar of equity for both firms and explain how it is measured by the CAPM. In particular, explain how risk is measured and priced.
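One way to organize the arithmetic in problem 3 is sketched below. The decomposition $\bar{X} = c_F V + RP$ and the market price of covariance risk $\lambda = (\bar{i}_M - i)/\sigma_M^2$ follow the standard certainty-equivalent form of the CAPM; treating them as the intended method here is an assumption, not something stated in the problem.

```python
# A hedged sketch of the problem 3 arithmetic under the certainty-equivalent
# CAPM (an assumed method): the expected payout covers the risk-free user
# cost of the firm's value plus a total risk premium RP = lam * Cov(X, i_M).
V, X_bar, c_F = 6000.0, 612.0, 0.10
i, iM_bar, sM = 0.06, 0.14, 0.4

RP = X_bar - c_F * V               # total risk premium paid to the market
lam = (iM_bar - i) / sM**2         # market price of covariance risk
cov_X_iM = RP / lam                # Cov(X, i_M) implied by the CAPM

# Part (ii): expected return on an asset with rho_kM = 0.6, sigma_k = 0.5,
# using beta = rho_kM * sigma_k / sigma_M on the security market line.
rho_kM, s_k = 0.6, 0.5
k = i + (iM_bar - i) * rho_kM * s_k / sM

print(RP, lam, cov_X_iM, k)
```

Under these assumptions the premium is $12m, the implied covariance is 24, and the asset's expected return is 12 per cent.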
ii Now suppose the two firms are merged into a single new firm G which takes over their debt without changing its expected return, and converts their equity into its own new shares. If the mean and the variance of the aggregate net cash flows are unchanged by the merger, what share of the project risk will shareholders bear in firm G, and what will the risk premium be on each dollar of its equity? Calculate the expected user cost of capital for firm G and explain what it measures. Is it possible for the value of the firm to rise when the aggregate expected net cash flows fall after the merger? (Assume the CAPM holds.)
iii Explain why the value of these firms is unaffected by their leverage policies when the CAPM holds, and then identify circumstances where leverage policy remains irrelevant even though the CAPM fails to hold. What are the important assumptions for MM leverage irrelevance?

5 'MM leverage irrelevance totally ignores the fact that as you borrow more, you have to pay higher rates of interest.' Do you agree with this statement?

6 Derive an expression for a corporate firm's user cost of capital when there is uncertainty and a classical corporate tax.
i How does the tax affect the equilibrium expected returns on debt and equity? Can the expected return on equity be higher?
ii When does the cost of capital depend on leverage in a competitive capital market?
iii Identify government policies that directly impact the user cost of capital.
iv In the absence of the corporate tax do bankruptcy costs result in an all-equity equilibrium in the capital market?

7 The Miller equilibrium relies on personal taxes to explain the presence of equity when firms are subject to a classical corporate tax.
i Illustrate this equilibrium in the debt–equity space for a corporate tax of 40 per cent when there are investors in each of the following three personal tax brackets:

Tax bracket    Marginal cash tax rate (%)    Marginal capital gains tax rate (%)
A              25                            15
B
C

(Assume all the tax rates are constant and endowed on consumers. To read the table note that the investor in bracket A has a tax rate on cash distributions of 25 per cent, and a tax rate on capital gains of 15 per cent.) Explain why, in this equilibrium, investors only hold their tax-preferred securities when there is uncertainty. Consider whether tax arbitrage would be possible and identify two ways it can be constrained. What would happen in the absence of such constraints?
ii Re-do part (i) when there are no investors in tax bracket B. Consider whether MM leverage irrelevance holds for individual firms in this setting. Identify the conditions that are crucial to this irrelevance result.
iii Re-do part (i) when the corporate tax rate is 50 per cent. Does MM leverage irrelevance hold for individual firms under these circumstances? Illustrate your answer in the debt–equity space.

8 Taxes on income paid to shareholders and bondholders have important impacts on the user cost of capital for corporate firms.
i Examine the way changes in financial structure affect the value of corporate firms when there is a 30 per cent classical corporate tax and marginal cash tax rates of 20 per cent for low-tax investors and 50 per cent for high-tax investors. For both groups of investors the marginal tax on capital gains is 50 per cent of their marginal cash tax rate. Compute the after-tax consumption flow to investors from a dollar of income paid as interest, dividends and capital gains, and identify their tax preferences for debt and equity. Consider how the user cost of capital and financial structure are affected when the corporate tax is raised from 30 per cent to 40 per cent.
(Assume there is certainty, the capital market is competitive and there are no leverage-related costs.)
ii How would your answer in part (i) above be changed by the introduction of an imputation tax system that provides tax credits for any corporate tax collected on dividend income? Identify circumstances where investors will have tax preferences for capital gains over dividends under the imputation tax system.

9 In most countries corporations pay tax on shareholder income. This income is also subject to personal tax when it is realized by shareholders. (Assume there is certainty when answering the following questions.)
i Carefully explain the dividend puzzle by summarizing the after-tax income investors receive on corporate income paid as interest, dividends and capital gains when there are two groups of investors who are separated by their marginal personal tax rates. Within each group investors face the same tax rates; group 1 have a personal cash tax rate of $t_B^1 = 0.5$, while group 2 have a personal cash tax rate of $t_B^2 = 0.3$. Both groups have marginal tax rates on capital gains that are half their respective marginal cash tax rates, and the corporate tax rate is $t_C = 0.3$. Explain why governments tax equity income twice.
ii Derive an expression for the user cost of capital when corporate firms sell debt and equity to the investors in part (i). Consider whether firm leverage decisions will affect their market value in this setting. What is the market rate of return on equity when the interest rate on corporate debt is 10 per cent?
iii Explain how share repurchase constraints are used to solve the dividend puzzle. Demonstrate the way inter-corporate equity undermines this explanation.
iv Derive the after-tax income in part (i) when corporate tax is credited back to shareholders against their personal tax liabilities on dividends. How will this affect the relationship between the market rates of return to debt and equity?
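The after-tax consumption flows asked for in problems 8(i) and 9(i) can be tabulated mechanically. The mapping below follows the chapter's earlier classical-tax analysis (interest is deductible at the corporate level; dividends and capital gains are not); the helper name is hypothetical and the rates are the ones given in problem 8.

```python
# A sketch of the after-tax consumption flow per dollar of corporate income
# under a classical corporate tax: interest escapes corporate tax, while
# dividends and capital gains are taxed twice.
def consumption_flows(t_C, t_B, t_E):
    return {
        "interest": 1 - t_B,
        "dividends": (1 - t_C) * (1 - t_B),
        "capital gains": (1 - t_C) * (1 - t_E),
    }

t_C = 0.3
low = consumption_flows(t_C, t_B=0.2, t_E=0.1)    # low-tax investors
high = consumption_flows(t_C, t_B=0.5, t_E=0.25)  # high-tax investors

# Low-tax investors prefer debt (0.80 from interest vs 0.63 from gains);
# high-tax investors prefer equity paying capital gains (0.525 vs 0.50).
print(low, high)
```

This is the clientele pattern behind the Miller equilibrium: the identity of the tax-preferred security flips as the personal cash rate crosses the point where $(1 - t_B) = (1 - t_C)(1 - t_E)$.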
10 Consider an economy where half the investors have a marginal personal tax rate on cash distributions of $t_B^L = 0.25$, while the other half have a cash tax rate of $t_B^H = 0.75$. (When answering the following questions assume there is certainty, no transactions costs and the capital market is competitive.)
i Find the personal tax rates on capital gains that would make each group of investors indifferent between debt and equity when there is a classical corporate tax rate of $t_C = 0.25$. (Assume there is certainty and no transactions costs, all investors pay income taxes and all investment is undertaken by corporate firms.) Now suppose both groups of investors have a marginal personal tax rate on capital gains of $t_G = 0.15$. Identify any investor tax preferences for the way firms distribute their income in the presence of these taxes and explain what this means for the aggregate debt–equity ratio in the economy. Consider whether changes in leverage by individual firms will affect their market valuations under these circumstances.
ii Examine the way investor tax preferences are affected by abolishing the corporate tax in part (i) above, and then explain why the Australian government adopted the imputation tax system instead. Identify circumstances where investors have tax preferences for the way corporate firms distribute income under the imputation tax system.

11 When equity income is double-taxed under a classical corporate tax system there is a tax bias against equity in favour of debt. The Australian government took steps to remove this bias by introducing dividend imputation; companies pay corporate tax on their income, and it is credited to shareholders as an offset to any personal tax they are liable to pay on dividends. When the company pays dividends ($i_D$), shareholders gross them up by any corporate tax paid ($i_D/(1 - t_C)$) and this is used to determine their personal tax liability.
The after-tax return to shareholders on a dollar of fully franked dividends is

$$\frac{i_D}{1 - t_C}(1 - t_P^i),$$

where $t_P^i$ is the marginal personal tax rate, which rises in steps with income. If personal tax payable under this calculation is equal to the corporate tax already paid, shareholders pay no personal tax on dividend income; it is subject just to corporate tax. (Assume initially that no capital gains are paid by firms to consumers; that is, all equity income is paid as cash dividends.)
i Compute the tax payable (or tax credit received) by shareholders on fully franked dividends.
ii Derive the demand and supply relationships between equilibrium security returns when consumers can utilize all their corporate tax credits. Are there any tax clienteles like those identified in the Miller equilibrium? (Assume there is certainty.)
iii Can you identify any tax clienteles like those identified in the Miller equilibrium when there are some shareholders who cannot utilize all their corporate tax credits (when $t_P^i < t_C$)? (Assume there is certainty.)
iv Why is there a distinction between franked and unfranked dividends? (Franked dividends are paid from income which has been taxed at the corporate tax rate, while unfranked dividends are paid from income which has not been taxed at the corporate rate.)
v Explain how the inclusion of capital gains affects your answer in (iii) above when they are subject to personal tax on realization rather than accrual.
vi Can you provide reasons why the Australian government chose dividend imputation rather than abolishing the corporate tax altogether?

12 In the Miller equilibrium under certainty, firm capital structure choice is irrelevant because there are marginal investors who are willing to hold debt and equity. All other investors form clienteles holding just one of the securities, determined by their tax preferences.
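The gross-up and credit mechanics in problem 11(i) reduce to a one-line calculation. The sketch below uses illustrative numbers (a $70 cash dividend paid from $100 of pre-tax income at $t_C = 0.3$); the function name is hypothetical.

```python
# A sketch of the franking-credit arithmetic: on a cash dividend D the
# shareholder declares D/(1 - t_C) as income, is credited t_C * D/(1 - t_C)
# for corporate tax already paid, and settles the difference, which equals
# D*(t_P - t_C)/(1 - t_C) and is a refund when t_P < t_C.
def net_personal_tax(D, t_C, t_P):
    grossed_up = D / (1 - t_C)
    return grossed_up * t_P - grossed_up * t_C

t_C = 0.3
print(net_personal_tax(70.0, t_C, t_P=0.3))   # 0.0: corporate tax fully offsets
print(net_personal_tax(70.0, t_C, t_P=0.45))  # positive: high-tax shareholder tops up
print(net_personal_tax(70.0, t_C, t_P=0.15))  # negative: excess credits refunded
```

The three cases reproduce the text's classification: shareholders with $t_P = t_C$ pay nothing further, high-tax shareholders make additional payments, and low-tax shareholders receive refunds.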
Explain how the Miller equilibrium obtains with uncertainty where consumers have tax and risk preferences for corporate securities. Why in practice do consumers hold bundles of debt and equity when they have a tax preference for one of them? Does this mean that MM leverage irrelevance fails?

13 i Examine the Miller equilibrium in a certainty setting, and explain why MM leverage irrelevance holds. Extend the model to uncertainty with no marginal investors to provide a critical evaluation of the statement by Edwards quoted in problem 14. Carefully explain how risk and tax preferences are satisfied in the Miller equilibrium when investors divide into strict tax clienteles.
ii Consider the effects on the Miller equilibrium of an imputation tax system where shareholders receive tax credits for corporate tax collected on income distributed as dividends. Derive the equilibrium relationship between the market rates of return on debt and equity, and illustrate this in the aggregate debt–equity space. Explain the equilibrium outcome in a series of steps by starting with no taxes, and then introduce the corporate tax followed by the personal taxes. Identify investor tax preferences for securities when there are high-tax investors and no tax credits received on income paid as capital gains.

14 The following quotation, taken from Edwards (1989, p. 162), is a summary of the finance literature on corporate leverage decisions:

Auerbach and King (1983) show that the Miller equilibrium requires the existence of certain constraints on investors: without such constraints (on, for example, borrowing and short-selling) questions arise concerning the existence of an equilibrium, for with perfect capital markets realistic tax systems provide opportunities for unlimited arbitrage at government expense between investors and firms in different tax positions. Auerbach and King also show that the combined effect of taxation and risk is to produce a situation in which gearing is relevant.
With individual investors facing different tax rates and wishing to hold diversified portfolios the Miller equilibrium can no longer be sustained: investors who on tax grounds alone would hold only equity may nevertheless hold some debt because an equity-only portfolio would be too risky.

Carefully evaluate this statement. In particular, assess the proposition that the Miller equilibrium cannot be sustained in the presence of risk when investors have tax preferences for debt and equity. Explain why investors will hold only their tax-preferred securities in this setting when the capital market is not double-complete. Why are short-selling constraints used in the Miller equilibrium? Examine the impact of leverage on firm values when there are no marginal investors.

Project evaluation and the social discount rate

In competitive economies without taxes and other market distortions, private and public sector projects are evaluated in the same way (when distributional effects are not taken into account). That is, their future net cash flows are discounted using the same marginal opportunity cost of time and risk. In reality, however, a number of market distortions and distributional effects drive wedges between social marginal valuations and costs, where different rules are used to evaluate private and public projects. Taxes and subsidies are the most familiar distortions, but others include externalities, non-competitive behaviour and the private underprovision of public goods. In this chapter we evaluate public projects where the government provides a pure public good in a tax-distorted economy with aggregate uncertainty. The analysis is initially undertaken in a two-period setting with frictionless competitive markets where consumers have common information and trade in a complete capital market.
This chapter consists of two sections: the first isolates conditions for the optimal provision of a pure public good in each time period, while the second derives the social discount rate for public projects in the presence of tax distortions. To make the analysis less complicated we assume the public goods are supplied only by the government, which maximizes social welfare.1 One can think of the public good as national defence which, by law, cannot be supplied by private traders in most countries. Initially we obtain Samuelson conditions for the optimal provision of the public goods ($G_t$) in each time period ($t = 0, 1$) without taxes and other distortions. This familiar condition equates the current value of the summed marginal consumption benefits from a public good ($\sum MRS_{G_t}$) to the current value of its marginal production cost ($MRT_t$). When the consumption benefits and resource costs from providing the public good in the second period are risky, they are discounted using a stochastic discount factor which is the same for all consumers and firms. In the presence of taxes on market trades the optimality conditions for public goods change whenever resources are reallocated in distorted markets. We derive the Samuelson conditions when the government raises revenue with lump-sum taxes, but in the presence of distorting trade taxes. There are additional welfare effects when the projects impact on activity in tax-distorted markets. They can raise or lower welfare, and are not taken into account by the private sector when evaluating projects. The analysis is then extended by deriving revised Samuelson conditions when the government raises revenue with the distorting trade taxes. Their marginal excess burden increases the marginal social cost of public funds and reduces the optimal supply of the public goods.
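The risky-period Samuelson comparison described above can be illustrated with a toy calculation. All numbers below are illustrative assumptions, not values from the text; the point is only the decision rule of comparing state-by-state discounted benefits with discounted costs.

```python
# A toy sketch of the period-1 Samuelson condition under uncertainty:
# summed marginal benefits and the marginal production cost are both
# discounted state by state with pi_s * m_s, where m_s = 1/(1 + i_s).
pi = [0.5, 0.5]                  # state probabilities (assumed)
m = [1 / 1.05, 1 / 1.25]         # stochastic discount factors (assumed)
mrs_summed = [2.10, 1.50]        # summed MRS over consumers in each state
mrt = [2.00, 1.60]               # state-contingent marginal cost

benefit = sum(p * d * b for p, d, b in zip(pi, m, mrs_summed))
cost = sum(p * d * c for p, d, c in zip(pi, m, mrt))

# Supply is optimal when benefit == cost; here the discounted benefit
# exceeds the discounted cost, so raising G_1 would raise welfare in
# this toy economy.
print(benefit, cost)
```

Note that because $m_s$ differs across states, a good whose benefits arrive in low-return (high $m_s$) states is worth more at the margin than one with the same expected benefit concentrated in high-return states.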
In an intertemporal setting projects in one period can affect economic activity in both periods, where additional welfare effects from changes in taxed activities affect the optimal supply of the public goods. In Section 8.2 we derive the social discount rate in the presence of a tax on capital income. This measures the extra future consumption expenditure generated by saving another dollar of capital in the first period. Since the income tax distorts intertemporal consumption choices the private discount rate deviates from the social discount rate. To make the analysis less complex, and to focus on a number of key issues, we assume the tax rate is the same for all consumers and applies to all capital income. In practice, consumers have different marginal tax rates, and taxes differ across capital assets. For example, consumers in most countries face progressive marginal personal tax rates on income, with higher tax rates on cash distributions, such as dividends and interest, than on capital gains. Moreover, equity income is double-taxed under a classical corporate tax system, once at the corporate rate and then again at the personal tax rates of shareholders. While these are important aspects of taxes in most countries, a much simpler tax system is adopted here to focus on the way income taxes in general impact on the social discount rate. This allows us to anticipate how the social discount rate will change under more realistic tax systems. There has been considerable controversy over what discount rate to use when evaluating public projects in the presence of income taxes and risk. In a two-period certainty setting, Harberger (1969) and Sandmo and Drèze (1971) find the social discount rate is a weighted average of the borrowing and lending rates of interest in the presence of a uniform income tax.
By including additional time periods, Marglin (1963a, 1963b) finds it should be higher than the weighted average formula, while Bradford (1975) finds it should be approximately equal to the after-tax interest rate paid to savers. Sjaastad and Wisecarver (1977) show how these differences are explained by the treatment of capital depreciation. In a common information setting where private saving rises to replace depreciation of public capital, the discount rate becomes the weighted average formula in a multi-period setting. Marglin assumes there is no adjustment in private saving, so that depreciation allowances are consumed, while Bradford adopts a Keynesian consumption function which makes saving a constant fraction of aggregate income, thereby precluding endogenous changes in private saving to offset depreciation in public capital. Since optimizing agents make consumption choices in each time period based on their wealth, a Keynesian consumption function seems unsuitable. Private wealth depends on the expected benefits generated by publicly provided goods and services and the taxes levied to fund them. However, it seems unlikely in practice that consumers correctly compute the expected depreciation on every item of public capital, where the discount rate will exceed the weighted average formula. Samuelson (1964), Vickrey (1964) and Arrow and Lind (1970) argue the social discount rate should be lower on public sector projects because the government can raise funds at lower risk. They claim the public sector can eliminate diversifiable risk and spread aggregate uncertainty at lower cost than the private sector. Bailey and Jensen (1972) argue these claims are implicitly based on distortions in private risk markets which the public sector can overcome more effectively. They contend, however, that the reverse is much more likely in practice.
That is, private markets are likely to provide the same or better opportunities for trading risk, and at lower cost, as private traders are specialists facing better incentives than the public sector. The analysis commences in Section 8.1 using a two-period model of a tax-distorted economy with aggregate uncertainty. A conventional welfare equation is obtained for changes in the provision of the pure public goods and distorting trade taxes in each time period. The Samuelson conditions for these goods are obtained under different funding arrangements to examine the role of tax distortions and risk on optimal policy choices. The model is extended in Section 8.2 by including a tax on capital income. It is used to derive the weighted average formula for the social discount rate before using the analysis in Bailey and Jensen to reconcile the different discount rates obtained by Marglin and Bradford. Finally, we summarize the claims made by Samuelson, Vickrey, and Arrow and Lind that the social discount rate should be lower when projects are risky.

8.1 Project evaluation

To illustrate the impact of time and risk on project evaluation we simplify the two-period Arrow–Debreu model examined in Chapter 3 by adopting a single private good ($x$) and introducing a pure public good ($G$).2 In previous chapters tax revenue was returned to consumers as lump-sum transfers, but now we introduce a government budget constraint to accommodate public spending. The analysis is undertaken in a competitive equilibrium where consumers with common information maximize time-separable expected utility functions by trading in a complete capital market. In this setting the problem for each consumer is to3

$$\max \big\{ EU^h(x^h, G) \big\} \quad \text{subject to} \quad
(p_0 - t_0)x_0^h \leq (p_0 - t_0)\bar{x}_0^h - p_0 z_0^h + L_0^h, \quad
(p_s + t_1)x_s^h \leq (p_s + t_1)\bar{x}_s^h + p_s y_s^h + L_s^h \;\; \forall s \in S,    \qquad (8.1)$$

with $EU^h(x^h, G) = U^h(x_0^h, G_0) + \delta E[U^h(x_s^h, G_s)]$ and $x^h := \{x_0^h, x_1^h, x_2^h, \ldots, x_S^h\}$.
Scarcity in the economy is defined by endowments of the private good in each period, $\bar{x}_0^h$ and $\bar{x}_s^h \;\forall s$, where the second-period endowments are state-contingent. Output in the second period is also state-contingent, and the good trades in competitive markets in both periods at equilibrium prices $p_0$ and $p_s$ for all $s$, respectively. All consumers are net suppliers in the first period, with $\bar{x}_0^h - x_0^h > 0$ for all $h$, where the market value of their saving ($p_0 z_0^h > 0$ for all $h$) is invested in private firms who make state-contingent payouts of $p_s y_s^h$ in the second period. There are taxes on market trades, where net supplies in the first period are subject to a specific tax $t_0$ and net demands in the second period (with $x_s^h - \bar{x}_s^h > 0$ for all $h$) are subject to a specific tax $t_1$.4 Supply of the public good in both periods is exogenously determined by the government and is constant across states of nature. Finally, the government makes lump-sum transfers to consumers in each period of $L_0^h$ and $L_s^h$ for all $s$, respectively. They are used in a conventional Harberger (1971) cost–benefit analysis to separate the welfare effects of marginal changes in each policy variable. For example, when the government increases the supply of a public good, and funds it using a distorting tax, we separate the welfare effects of each component of the project by making lump-sum transfers to balance the government budget.
The welfare effects from extra output of the public good are separated from the welfare effects from marginally raising the tax to fund its production cost, where the transfers allow each of them to be computed with a balanced government budget.5 The final change in the distorting tax is determined by combining these separate components inside the project, where the tax change balances the government budget and offsets the hypothetical lump-sum transfers used to separate the welfare changes.6 If we write the state-contingent payouts to saving as $p_s y_s^h = (1 + i_s)p_0 z_0^h$, and use the first-order condition in the consumer problem in (8.1) for optimally chosen saving, we obtain state-contingent discount factors of

$$m_s = \delta \frac{\lambda_s^h}{\lambda_0^h} = \frac{1}{1 + i_s} \quad \forall s,    \qquad (8.2)$$

where $\lambda_0^h$ and $\lambda_s^h$ are the Lagrange multipliers on the budget constraints in (8.1), and $\delta$ is the measure of impatience, with $0 < \delta \leq 1$. In a complete capital market consumers and firms use the same discount factors, where optimally chosen investment satisfies

$$\frac{p_s}{1 + i_s} \frac{\partial y_s^j}{\partial z_0^j} = p_0 \quad \forall s.    \qquad (8.3)$$

Resource flows through the public sector are summarized by the government budget constraints in each time period, where

$$T_0 \equiv t_0(\bar{x}_0 - x_0) = MRT_0 G_0 + L_0, \qquad T_s \equiv t_1(x_s - \bar{x}_s) = MRT_s G_1 + L_s \quad \forall s,    \qquad (8.4)$$

with endowments and consumption of the private good and the lump-sum transfers aggregated over consumers.7 We assume that the marginal cost to government revenue of producing each public good is constant, with $MRT_0 = p_0$ and $MRT_s = p_s$ for all $s$.8 Thus, there is risk in the cost of producing the public good in the second period. In a competitive equilibrium the government balances its budget and producer prices adjust endogenously to equate demand and supply for the private good in each time period and in each state, where the respective market-clearing conditions are $\bar{x}_0 = x_0 + z_0 + G_0$ and $\bar{x}_s + y_s = x_s + G_1$ for all $s$.
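The pricing logic in (8.2) and (8.3) can be sketched numerically: discounting the state-contingent payout $p_s y_s^h = (1 + i_s)p_0 z_0^h$ with $m_s = 1/(1 + i_s)$ recovers the value of the saving invested, state by state. The state names and returns below are illustrative assumptions.

```python
# A sketch of state-contingent discounting: with m_s = 1/(1 + i_s) from
# (8.2), each payout (1 + i_s) * p0 * z0 is worth exactly p0 * z0 today.
p0_z0 = 100.0                              # value of first-period saving (assumed)
i = {"boom": 0.25, "bust": 0.05}           # state-contingent returns (assumed)
m = {s: 1 / (1 + r) for s, r in i.items()} # discount factors, equation (8.2)

# State-contingent payouts p_s * y_s = (1 + i_s) * p0 * z0
payout = {s: (1 + r) * p0_z0 for s, r in i.items()}

# Discounting each payout with its own m_s recovers the saving invested
pv = {s: m[s] * payout[s] for s in i}
print(pv)
```

This is why the same $m_s$ price both the private payouts and, in the Samuelson conditions, the risky benefits and costs of the public good: in a complete capital market all agents discount state-$s$ flows at the common factor $m_s$.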
These equilibrium prices also equate aggregate saving and aggregate investment, with x0 − x0 − G0 = z0. 8.1.1 A conventional welfare equation In the following analysis projects are evaluated as combinations of marginal changes in the exogenous policy variables G0, G1, t0 and t1.9 Their impact on individual consumers is obtained by totally differentiating the constrained optimization problem in (8.1) at an interior solution and using the stochastic discount factors in (8.2), where the dollar change in expected utility is10 dEU h λ 0h = ( x0h − x0h )dq0 − z0h dp0 + MRS0h dG0 + dLh0 + ∑ π s ms {−( xsh − xsh )dqs + ysh dps + MRSsh dG1 + dLhs }, with dq0 = dp0 − dt0 and dqs = dps + dt1 being changes in the consumer prices of the private good, and MRS0h ≡ ( ∂U h /∂G0 )/λ hs and MRSsh ≡ ( ∂U h /∂G1 )/λ hs the consumption benefits from marginal increases in the public good. Despite its apparent complexity, the terms in (8.5) are familiar changes in private surplus. Higher consumer prices make consumers better off in the first period when they are net sellers of the private good, with ( x0h − x0h )dq0 > 0, and worse off in the second period when they are net consumers, with −( xsh − xsh )dqs < 0. Endogenous changes in producer prices affect consumers by impacting on their share of profits in private firms, where higher prices make h them worse off in the first period by raising the input cost, with − z0 dp0 < 0, and better off h in the second period by increasing sales revenue, with ys dps > 0 for all s. Extra output of Project evaluation and the social discount rate the public goods endow consumption benefits on consumers, with MRS dG0 > 0 and MRSsh dG1 > 0 for all s, while lump-sum transfers raise private surplus directly by increasing their money income, with dLh0 > 0 and dLhs > 0 for all s. 
Most of these changes in private surplus are transfers between consumers and producers, and between the private and public sectors of the economy, where, in the absence of distributional effects, they have no impact on aggregate welfare. Thus, once we aggregate the welfare changes in (8.5) over consumers and use the government budget constraints in (8.4) to solve the revenue transfers, the final welfare changes are determined by changes in final consumption. And that makes sense because consumers ultimately derive utility from consuming goods. There are additional welfare changes when the transfers of private surplus have distributional effects. At this point we must decide how to aggregate the welfare changes in (8.5) using a social welfare function. A large literature looks at deriving them as functions of non-comparable ordinal utility functions assigned to consumers. Since they do not contain enough information to allow interpersonal comparisons, we follow the conventional approach and use a Bergson–Samuelson individualistic social welfare function.11 This is a mapping over fully comparable cardinal utility functions when consumers derive utility from their own (individual) consumption bundle, with W(EU 1, EU 2, ... , EU H), where the aggregate welfare change solves h 0 dW = ∑ β h h dEU h , λ 0h with β h = ( ∂W/∂EU h )λ 0h being the distributional weight; this is the change in social welfare from marginally raising the income of each consumer h.12 In a conventional Harberger analysis consumers are assigned the same welfare weights on the grounds that aggregate dollar gains in expected utility can be converted into Pareto improvements through a lumpsum redistribution of income. For most policy changes there are winners and losers, but aggregate gains can be converted into Pareto improvements by transferring income from winners to compensate losers.13 Thus, they represent potential Pareto improvements. 
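The aggregation in (8.6) can be sketched numerically; the dollar gains and the alternative distributional weights below are hypothetical:

```python
# Aggregating dollar gains in expected utility as in (8.6):
# dW = sum_h beta_h * (dEU_h / lambda_0h). Setting beta_h = 1 for all h
# gives the conventional Harberger sum; other weights are illustrative.

def welfare_change(dollar_gains, weights=None):
    if weights is None:
        weights = [1.0] * len(dollar_gains)   # conventional Harberger weights
    return sum(b * g for b, g in zip(weights, dollar_gains))

gains = [3.0, -1.0, 0.5]                 # dEU_h / lambda_0h, three consumers
dW_equal = welfare_change(gains)                        # 2.5
dW_pro_poor = welfare_change(gains, [0.5, 2.0, 1.0])    # 0.0
```

With equal weights the project is a potential Pareto improvement (the winners' gains could compensate the loser); under the pro-poor weights the same project yields no net welfare gain.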
A conventional welfare equation is obtained by assigning the same distributional weights to consumers in (8.6), with β^h = 1 for all h, and using the market-clearing conditions for the private good in each time period, the dollar changes in expected utility in (8.5), and the government budget constraints in (8.4), to write the aggregate welfare change as

dW = (MRS_0 − MRT_0) dG_0 − t_0 dx_0 + Σ_s π_s m_s {(MRS_s − MRT_s) dG_1 + t_1 dx_s}.14  (8.7)

All the policy changes examined in following sections will be solved using this welfare equation. Direct welfare changes from marginal increases in the public goods are isolated by the net benefits in the first and third terms, where consumers have consumption benefits (MRS) endowed on them less the reductions in private surplus when the government balances its budget to fund the production costs (MRT). Net benefits in the second period are discounted to cover the opportunity cost of time and risk. The remaining terms in (8.7) capture welfare effects from endogenous changes in tax-distorted activities. Whenever policy changes expand taxed activities the extra tax revenue isolates welfare gains from undoing the excess burden of taxation. For example, the welfare change from marginally raising trade tax t_0 is

∂W/∂t_0 = −t_0 ∂x_0/∂t_0 + Σ_s π_s m_s t_1 ∂x_s/∂t_0,  (8.8)

where the first term is the conventional measure of the marginal welfare cost of taxation illustrated as the cross-lined rectangle A in the left-hand panel of Figure 8.1. It isolates the increase in the familiar deadweight loss triangle when the net supply of the private good falls in the first period.
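A numerical sketch of the welfare change in (8.8); the demand responses, tax rates and discount factors below are hypothetical:

```python
# The welfare change in (8.8) from marginally raising t0. The demand
# responses, tax rates and discount factors are hypothetical.

def dW_dt0(t0, dx0_dt0, probs, m, t1, dxs_dt0):
    # -t0 * dx0/dt0: consumption of the taxed good rises as net supply
    # contracts, giving the first-period welfare loss (rectangle A).
    loss = -t0 * dx0_dt0
    # Discounted revenue gain (rectangle B) when future taxed demand
    # expands because the goods are gross complements.
    gain = sum(p * ms * t1 * dx for p, ms, dx in zip(probs, m, dxs_dt0))
    return loss + gain

dW = dW_dt0(t0=0.2, dx0_dt0=5.0, probs=[0.5, 0.5],
            m=[0.97, 0.95], t1=0.1, dxs_dt0=[4.0, 6.0])
# Here the loss A (= 1.0) exceeds the discounted gain B (~0.48), so dW < 0.
```

Reversing the relative sizes of the two terms is exactly the case, discussed next, in which the tax rise raises welfare.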
If current and future consumption of the private good are gross complements then the tax change also increases future demand, where the extra tax revenue is the welfare gain illustrated as the cross-lined rectangle B in the right-hand panel of Figure 8.1.15 It is a related market effect from the tax change, where the final welfare change is A − B. If the extra revenue in B exceeds the welfare loss in A, the tax change actually raises welfare.

8.1.2 Optimal provision of pure public goods

Now we are ready to find the optimality conditions for the provision of the public goods. The original Samuelson condition was derived in an economy free of any distortions where the summed marginal consumption benefits from the last unit of the public good supplied is equated to its marginal production cost, with MRS = MRT. We extend the analysis to an economy with tax distortions, and then obtain a revised Samuelson condition when the government raises its revenue using them. One obvious extension is to include time and risk in the analysis.

The Samuelson condition in an economy without distortions

The original Samuelson condition is obtained by evaluating a public project that marginally increases the supply of a public good in an economy without market distortions, where at a social optimum the net welfare change is zero.

[Figure 8.1: first-period net supply x̄_0 − x_0 at prices p_0 and p_0 − t_0 (rectangle A), and second-period net demand at prices p_s and p_s + t_1 (rectangle B), with ∂x_s/∂t_0 > 0.]

For the public goods supplied here the optimality conditions are MRS_0 = MRT_0 in the first period and Σ_s π_s m_s MRS_s = Σ_s π_s m_s MRT_s in the second, where, when the consumption benefits are risky, expected benefits must exceed expected costs, with Σ_s π_s MRS_s > Σ_s π_s MRT_s, to compensate consumers for the extra risk, while the reverse applies when the production costs are more risky. In practice, many public goods are capital projects where governments incur production costs that generate consumption in later periods. Thus, each dollar of benefits has a lower current value than the costs due to the opportunity cost of time (and risk).
If the production costs are incurred in the first period the optimal provision of the public good in the second period satisfies ΣsπsmsMRSs = MRT0, where expected benefits must exceed expected costs (even in the absence of risk), with ΣsπsMRSs > MRT0, to compensate consumers for the opportunity cost of time. The Samuelson condition in a tax-distorted economy Welfare effects of public projects are rarely confined to markets where they have direct effects. For the public good projects being considered here there are direct consumption benefits for consumers and production costs which impact directly on the government budget. But this changes the real income of consumers and affects their demands for other goods and services. When these related markets are subject to taxes and other distortions there are additional welfare effects that can affect the optimal supplies of the public goods. Project evaluation and the social discount rate Box 8.1 An equilibrium outcome in the public good economy Numerical solutions are derived here for equilibrium outcomes in a public good economy with a single aggregated consumer in a two-period certainty setting. When the consumer can trade a risk-free security in a frictionless competitive capital market the optimization problem is summarized as ⎧ q0 x0 + δ q1 x1 ≤ q0 x0 + L ⎫ ⎪ ⎪ max ⎨U = ln x0 + ln G0 + δ ln x1 + δ ln G1 L = T − p0G0 − δ p1G1 − R ⎬ , ⎪ T = t0 ( x0 − x0 ) + δ t1 x1 ⎪⎭ ⎩ where q0 = p0 − t0 and q1 = p1 + t1 are consumer prices of the private good in each respective time period, δ = 1/(1 + i) the rate of time preference and T the present value of tax revenue collected by the government. Notice there is no endowment of the private good in the second period here, where some of the current endowment of x–0 = 500 is allocated to future consumption expenditure by trading the risk-free security at market interest rate i = 0.03. Thus, saving in the economy is equal to z0 = x0 − x0 − G0 − R. 
To simplify the analysis we adopt a linear production possibility frontier to hold producer prices constant at the constant marginal cost of production in each period. Since the private good can be stored and transferred to the second period at no cost, its producer price is higher by the interest rate, with p0 = 1 and p1 = p0 (1 + i) = 1.03. With log utility the ordinary (Marshallian) demands for the private good in each period are x0M = I (1 + i ) I (1 + i ) and x1M = ( 2 + i ) ( p0 − t0 ) ( 2 + i ) ( p1 + t1 ) The general equilibrium (Bailey) demand schedules are obtained by substituting aggregate income, I = ( p0 − t0 ) x0 + T − p0G0 − δp1G1 , into these ordinary demand schedules. Thus, even in circumstances where consumer prices are unaffected by policy changes, income effects will flow through the government budget constraint. For example, extra output of public goods funded by lump-sum taxation will impact directly on the government budget constraint through the increased production costs, and indirectly through endogenous changes in taxed activities. First-best solution: When the government uses lump-sum taxation to fund its spending the equilibrium allocation is summarized as follows: Trade taxes (%) Public goods Bailey demands This gives the consumer the largest possible utility of uˆ = 19.0324 from the initial endowment of the private good. Any other policy choices will lower utility. It may be possible to raise aggregate welfare in an economy with heterogenous consumers by redistributing income between them when they have different distributional weights in the social welfare function. 
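The Marshallian demands quoted in Box 8.1 can be checked directly. A sketch with an illustrative income level and tax rates (the demands exhaust the budget by construction):

```python
# The Marshallian demands in Box 8.1 under log utility:
# x0 = I(1+i)/((2+i) q0) and x1 = I(1+i)/((2+i) q1), with q0 = p0 - t0
# and q1 = p1 + t1. Income and tax rates below are illustrative.

def demands(I, i, p0, p1, t0, t1):
    q0, q1 = p0 - t0, p1 + t1
    x0 = I * (1.0 + i) / ((2.0 + i) * q0)
    x1 = I * (1.0 + i) / ((2.0 + i) * q1)
    return x0, x1

I, i = 400.0, 0.03
p0, p1 = 1.0, 1.03                 # p1 = p0 * (1 + i), as in the box
t0, t1 = 0.2, 0.1
x0, x1 = demands(I, i, p0, p1, t0, t1)

# The demands exhaust the budget: q0*x0 + delta*q1*x1 = I.
delta = 1.0 / (1.0 + i)
spent = (p0 - t0) * x0 + delta * (p1 + t1) * x1
```

Substituting the demands back into the budget constraint gives q_0 x_0 + δ q_1 x_1 = I(1+i)/(2+i) + I/(2+i) = I, which is what the assertion below verifies numerically.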
Once we introduce distorting trade taxes, the welfare effects for the two projects at a social optimum are obtained using the conventional welfare equation in (8.7) as dW dT = MRS0 − MRT0 + = 0, dG0 dG0 dW dT = ∑ π s ms ( MRSs − MRTs ) + = 0, dG1 dG1 s Project evaluation and the social discount rate where dT/dG0 and dT/dG1 are the present value of endogenous changes in tax revenue.16 These related market effects were initially identified by Diamond and Mirrlees (1971) and Stiglitz and Dasgupta (1971). Atkinson and Stern (1974) named them spending effects, which Ballard and Fullerton (1992) and Kaplow (1996) argue can reduce the marginal cost of supplying public goods thereby raising the optimal level of government spending. This is confirmed by the Samuelson conditions obtained from (8.11), where MRS0 = MRT0 − dT , dG0 dT ∑s π s ms MRSs = ∑s π s ms MRTs − dG . 1 When each project raises additional tax revenue by expanding taxed activities, with dT0/dG0 > 0 and dT/dG1 > 0, the spending effects reduce the size of the government budget deficit, where, at a social optimum, we have MRS0 < MRT0 and ΣsπsmsMRSs < ΣsπsmsMRTs, respectively. With diminishing marginal valuations the optimal supplies of the public goods are larger in these circumstances.17 The first optimality condition in (8.12) is illustrated in Figure 8.2, where it is assumed that the project expands taxed activities in both time periods and in each state, with dT . MRS0 = MRT0 − dG 18 C D − PV ( E ) A + PV ( B) . The summed marginal consumption benefits are the cross-lined area C (with MRS0 = C), while the production cost is the present value (PV(·)) of the reduction in consumption of the private good isolated by the shaded rectangles, with MRT0 = D – PV(E). In the presence of the trade taxes there is a positive spending effect isolated in the cross-lined rectangles as dT/dG0 = A + PV(B); it is the welfare gain from expanding taxed activities. 
It is possible to illustrate the spending optimality condition in (8.12) using the same diagrams, but as the analysis is similar it will not be repeated here. The main difference arises from the need to discount the consumption benefits and production costs for the opportunity cost of time and risk. ps dx0 0 dG0 – x0 − x0 Gs dG0 ps + t1 p0 − t0 D z0 + G0 C E Figure 8.2 The Samuelson condition in the first period. xs − – xs Box 8.2 Estimates of the shadow profits from public good production We introduce trade taxes into the two-period certainty economy in Box 8.1 and evaluate the following equilibrium allocation that generates utility of uˆ = 18.6936 . Trade taxes (%) Public goods Bailey demands Aggregate utility can be raised whenever the shadow profit from marginally increasing each public good is positive. When extra output is funded using lump-sum taxation, we have dT ≈ $0.98, dG0 dT π1 = δ Σ MRS1 − δp1 + ≈ $1.32. dG1 π 0 = Σ MRS0 − p0 + We obtain π0 by first computing the dollar value of the (summed) marginal utility it generates: ΣMRS0 = 1/( λG0 ) = 1/(0.00618149 × 85) ≈ $1.90, where 1/G0 is the marginal utility from extra output of the good and 1/λ = 0.00618149 ≈ $161.77 is the dollar value of a marginal increase in utility. Even though prices are unaffected by the project there are income effects that impact on the demands for private goods, where the endogenous change in tax revenue solves ∂x ( q , q , I ) dI ∂x ( q , q , I ) dI dT = − t0 0 0 1 +t 1 0 1 , ∂I ∂I dG0 dG0 dG0 1 with the change in aggregate income being dI dT = − p0 . dG0 dG0 Notice how the income effect feeds through the ordinary demand functions where the consumer chooses private goods facing given prices and money income. After substituting the change in aggregate income, we have − θ p0 dT = ≈ $0.08, dG0 1 − θ with θ = − t0 ∂x0 (⋅) ∂x (⋅) + t1 1 ≈ − 0.0820645. ∂I ∂I Thus, the shadow profit above is decomposed as dT ≈ $0.98. 
dG0 (1.9) − (1) + (0 0.08) π 0 = ΣMRS0 − p 0 + There are similar workings for calculating the shadow profit of G1. In total, both projects raise utility by approximately $2.30. In a more general analysis with a non-linear aggregate production frontier, the equilibrium price changes are solved using the market-clearing conditions for each good. For an example of the calculations, see Jones (2005). Project evaluation and the social discount rate The revised Samuelson condition in a tax-distorted economy Governments rarely, if ever, raise revenue with non-distorting taxes. Indeed, it is difficult to find taxes on activity that are non-distorting as few goods are fixed in supply, especially in the long run when resources can be moved between most activities. Poll taxes are perhaps the closest thing to non-distorting taxes but they are politically unpopular. Pigou (1947) recognized that governments raised most of their revenue using distorting taxes with excess burdens that reduce the optimal level of government spending by raising the marginal social cost of public funds. We can confirm this reasoning by using the conventional welfare equation in (8.7) to compute the welfare effects for the two public good projects when they are funded with revenue raised with distorting trade taxes, as ⎛ dW ⎞ ⎜⎝ dG ⎟⎠ 0 ⎛ dW ⎞ ⎜⎝ dG ⎟⎠ 1 ⎛ dT ⎞ = 0, = MRS0 − MCF0 ⎜ MRT0 + dG1 ⎟⎠ ⎝ = ∑ π s ms ( MRSs − MCF1 MRTs ) + MCF1 dT = 0, dG1 where the marginal social cost of public funds (MCF) for each tax measures the current value of the direct cost to private surplus from transferring a dollar of revenue to the government budget, with dT , dt0 = ( x0 − x0 ) dT = ∑ π s ms ( xs − xs ) . 
MCF_0 = (x̄_0 − x_0)/(dT/dt_0),  MCF_1 = Σ_s π_s m_s (x_s − x̄_s)/(dT/dt_1).  (8.14)

These are conventional Harberger (1964) measures of the MCF where the welfare effects of tax changes are separated from the welfare effects of government spending funded by the extra tax revenue.19 We derive them by using the conventional welfare equation in (8.7) to compute the marginal excess burden of taxation (MEB) for each tax and adding them to unity, with MCF = 1 + MEB, where MEB is the marginal welfare loss on each dollar of tax revenue raised.20 We demonstrate this for tax t_0 using Figure 8.3, where

MEB_0 = a/(b + c − a).

The welfare loss from marginally raising the tax is the cross-lined rectangle in a, while the extra tax revenue is b + c − a. Thus, each dollar of revenue the government collects by using this tax will have an excess burden of MEB_0. Whenever the government uses tax t_0 to fund the budget deficit it is multiplied by MCF_0 to account for the excess burden of taxation, where consumers lose a dollar of surplus on each dollar of revenue raised plus MEB_0 due to the excess burden of taxation. Thus, in project evaluation the MCF is used as a scaling coefficient on revenue transfers made by the government to balance its budget. It is illustrated in Figure 8.3 as

MCF_0 = 1 + MEB_0 = (b + c)/(b + c − a) > 0,

[Figure 8.3: net supply x̄_0 − x_0 at prices p_0 and p_0 − t_0, with welfare-change areas a, b and c at quantity z_0 + G_0.]

Figure 8.3 The revised Samuelson condition in the first period.

where private surplus falls by b + c when the government collects tax revenue of b + c − a. If the net supply of the private good is fixed there is no welfare loss from the tax and the MCF is unity. Thus, each dollar of revenue the government raises will reduce private surplus by a dollar. Once the tax change drives down activity the fall in private surplus is larger than the revenue raised. The revised Samuelson conditions are obtained from (8.13) as

MRS_0 = MCF_0 (MRT_0 − dT/dG_0),
Σ_s π_s m_s MRS_s = MCF_1 (Σ_s π_s m_s MRT_s − dT/dG_1).  (8.15)
It is more costly for the government to fund budget deficits when the MCF exceeds unity, so the optimal supply of each public good will fall (relative to the optimal supplies determined by (8.12)). Since the terms inside the brackets measure the changes in the budget deficit, they are multiplied by the MCF for each trade tax. The welfare changes for the first optimality condition in (8.15) are illustrated in Figure 8.4 when the project has no net impact on trades of the private good. Thus, the reduction in net demand from the higher consumer price is undone by the net increase in demand resulting from extra output of the public good.21 In this special case the welfare loss from increasing the tax to balance the government budget by raising revenue D + E exactly offsets the spending effect in the cross-lined area A, where the revised Samuelson condition can be summarized as

MRS_0 = C = MCF_0 (MRT_0 − dT/dG_0) = [(D + E)/(D + E − A)] (D + E − A) = D + E.

Box 8.3 Estimates of the marginal social cost of public funds (MCF)

The MCF provides important information for policy-makers because it tells them how much private surplus falls when the government raises a dollar of tax revenue. This loss in private surplus exceeds tax revenue when distorting taxes are used, where the excess burden is minimized when all taxes have the same MCF. In the single (aggregated) consumer economy examined earlier in Box 8.2, the MCFs for the two trade taxes are

MCF_0 = (x̄_0 − x_0)/(dT/dt_0) ≈ 1.10 and MCF_1 = (x_1 − x̄_1)/(dT/dt_1) ≈ 1.16.

Thus, the government could increase aggregate utility by raising more of its revenue with t_0 instead of t_1. We will summarize the workings for computing MCF_0, where the reduction in private surplus from marginally raising tax t_0 is computed, using the Bailey demand schedule, as x̄_0 − x_0 ≈ 500 − 202.22 ≈ 297.78.
The change in tax revenue solves ∂x (⋅) dq0 ∂x (⋅) dI ∂x (⋅) dq0 ∂x (⋅) dI dT = ( x0 − x0 ) − t0 0 −t 0 + δt1 1 + δt1 1 , ∂q0 dt0 0 ∂I dt0 ∂q0 dt0 ∂I dt0 dt0 with xt (·) ∫ xt (q0, q1, I) being the ordinary (Marshallian) demands in each time period t = 0, 1. With fixed producer prices, we have dq0/dt0 = −1, where the change in aggregate income ( I = q0 x0 + T − p0G0 − δp1G1 − R ) solves dx dx dI = − x0 − t0 0 + δt1 1 . dt0 dt0 dt0 After substitution, and using the Slutsky decomposition, we have: t ( ∂ xˆ0 (⋅)/∂ q0 ) − δ t1 ( ∂ xˆ1 (⋅)/∂ q0 ) dT = ( x0 − x0 ) + 0 ≈ 270.34, 1− θ dt0 where θ = − t0 (∂x0 (·)/∂I) + t1(∂ x1 (·)/∂I) ª −0.0820645 isolates the income effects. The compensated demand functions for the private goods are xˆ0 (⋅) = e u0 / 2 q01/ 2 e u0 / 2 q11/ 2 and xˆ1 (⋅) = , 1 / 2 1 / 2 1 G G1 q0 G0 / 2G11/ 2 q11/ 2 1/ 2 0 where t0 ∂ xˆ0 (⋅)/ ∂ q0 ≈ − 21.78 and δ t1∂ xˆ1 (⋅)/ ∂q0 ≈ 7.92 , for u0 = 18.6936, q0 = 0.80, q1 = 1.133, G0 = 85 and G1 = 70. By combining these welfare changes, we have MCF0 = x0 − x0 297.78 ≈ ≈ 1.10. dT / dG0 270.34 Project evaluation and the social discount rate p0 G0 – x 0 − x0 D p0 A E p0 − t0 dG0 C dx0 z + G0 =0 0 dG0 Figure 8.4 MCF for the trade tax in the first period. There are other ways of financing the budget deficit for each project when the government can transfer resources over time by trading bonds. For example, it could sell bonds to fund extra output of the public good in the first period and then redeem them by raising the trade tax in the second period. We would then use MCF1 instead of MCF0 in the Samuelson condition above, which is an attractive alternative when MCF1 < MCF0 However, the MCF is independent of the tax used to balance the government budget when taxes are (Ramsey) optimal, with MCF1 = MCF0. Box 8.4 Estimates of the revised shadow profits from public good production In Box 8.p2 we computed the shadow profit for each public good when the extra outputs were funded using lump-sum taxation. 
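Box 8.3's final ratio follows directly from its two reported magnitudes:

```python
# Box 8.3's MCF for the first-period trade tax: the fall in private
# surplus (the Bailey net supply) over the marginal tax revenue, both
# taken directly from the box.
surplus_loss = 297.78       # x0_bar - x0
marginal_revenue = 270.34   # dT/dt0
MCF0 = surplus_loss / marginal_revenue     # ~1.10
MEB0 = MCF0 - 1.0                          # ~0.10 excess burden per dollar
```

Each dollar of revenue raised with t_0 therefore costs the consumer roughly $1.10 of private surplus, the extra ten cents being the marginal excess burden.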
But they were measured in the presence of distorting trade taxes, which suggests the government cannot raise all its revenue using lump-sum taxation. Indeed, if it could do so it would be preferable to eliminate the trade taxes entirely. When the extra outputs are funded using distorting trade taxes we need to compute their revised shadow profits by multiplying the net change in government spending by the MCF for each tax. Using the estimates of the MCF in Box 8.3, we have ⎛ dT ⎞ ( π 0 )t0 = ΣMRS0 − MCF0 ⎜ p0 − ≈ $0.89 dG0 ⎟⎠ ⎝ (1.9) − (1.10)[(1) − (0.08)] and ⎛ dT ⎞ ( π1 )t1 = δ ΣMRS1 − MCF1 ⎜ δp1 − ≈ $1.17, dG1 ⎟⎠ ⎝ ( 2.24 ) − (1.16 )[(1) − (0.08)] where the marginal excess burden of taxation reduces the shadow profit for both public goods by 11 per cent. Instead of raising utility by $2.30, as was the case in Box 8.3 when the goods were funded using lump-sum taxation, they now raise it by $2.06. Thus, the optimal supply of each public good is lower in these Project evaluation and the social discount rate 8.1.3 Changes in real income (efficiency effects) Dollar changes in expected utility are unreliable welfare measures for discrete (large) policy changes when the marginal utility of income changes with real income. In particular, they are path-dependent, which means welfare measures can be manipulated by reordering a given set of policy changes. This problem is overcome by measuring compensated welfare changes. They isolate the impact of policy changes on the government budget when lumpsum transfers are made to hold constant the utility of every consumer. If a policy change generates surplus revenue (at constant utility) it can be used by the government to raise the utility of every consumer, while the reverse applies when it drives the budget into deficit. Thus, compensated welfare changes are changes in real income that get converted into utility when the government balances its budget. 
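The revised shadow-profit arithmetic in Box 8.4 can be reproduced from the boxed figures, with each net cost to the budget scaled by the MCF of the funding tax:

```python
# Box 8.4's revised shadow profits, reproduced from the boxed figures:
# the net revenue requirement is scaled by the MCF of the funding tax.
MRS0, MCF0, cost0, dT_dG0 = 1.90, 1.10, 1.00, 0.08     # Sigma MRS_0, p_0, dT/dG_0
pi0 = MRS0 - MCF0 * (cost0 - dT_dG0)                   # ~0.89

pv_MRS1, MCF1, cost1, dT_dG1 = 2.24, 1.16, 1.00, 0.08  # delta*Sigma MRS_1, delta*p_1
pi1 = pv_MRS1 - MCF1 * (cost1 - dT_dG1)                # ~1.17

total = pi0 + pi1     # ~2.06, down from ~2.30 under lump-sum funding
```

The gap between $2.30 and $2.06 is the cost of the excess burden introduced by funding the projects with the distorting trade taxes.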
How these changes in real income are distributed across consumers depends, in part, on endogenous price changes and also on the tax changes the government makes to balance its budget.22 We measure compensated welfare changes for the projects examined in the previous section by including foreign aid payments (R measured in units of domestic currency) in the first-period government budget constraint in (8.4), with T0 = MRT0G0 − L0 + R, where the conventional welfare equation in (8.7) becomes dW = ( MRS0 − MRT0 )dG0 − t0 dx0 + ∑ s π s ms ( MRSS − MRTS )dG1 + t1dxs − dR. Endogenous changes in R isolate surplus government revenue from the policy changes when expected utility is held constant at its initial level, with dW = 0, where the compensated welfare equation is obtained from (8.7′) as dRˆ = ( MRS0 − MRT0 )dG0 − t0 dxˆ0 23 + ∑ s π s ms ( MRSs − MRTs )dG1 + t1dxˆs . Welfare gains are surplus revenue the government could pay as foreign aid ( dRˆ > 0), while welfare losses are gifts of foreign aid it would need to receive ( dRˆ < 0) at unchanged domestic utility.24 Thus, they isolate the changes in real income from policy changes. The compensated welfare changes for the projects that provide public goods have the same structure as the dollar changes in utility obtained in the previous subsection, but with endogenous changes in activity determined solely by substitution effects. All the income effects for projects are removed by compensating lump-sum transfers that are referred to as the compensation variation (CV). Rather than rework all the cases examined in the previous subsection, we consider the project that provides an extra unit of the public good in the second period, where the change in real income is solved, using (8.16), as dRˆ dTˆ 25 , = ∑ π s ms ( MRSs − MRTs ) + dG1 s dG1 Project evaluation and the social discount rate with ∂xˆ ∂xˆ dTˆ = − t0 0 + ∑ π s ms t1 s dG1 ∂G1 s ∂G1 being the compensated spending effect. 
The compensating transfers (CVs) for this project are obtained from (8.5), with dEU h = 0 for all h, as dLˆh0 ∂qˆ ∂pˆ = −( x0h − x0h ) 0 + z0h 0 − MRS0h dG1 ∂G1 ∂G1 h ˆ dL ∂qˆ ∂pˆ CVsh = s = ( xsh − xsh ) s − ysh s − MRSSh ∀s, dG1 ∂G1 ∂G1 CV0h = where the current value of the aggregate expected CV is E (CV ) = ∑ CV0h + ∑ ∑ π s msCVsh . h Since these transfers hold the utility of every consumer constant in each time period and in each state of nature, they completely reverse any distributional effects from the project. Thus, they isolate the change in real income at the initial equilibrium outcome. Graham (1981) and Helms (1985) identify an ex ante measure of the CV that we use here to obtain a measure of the welfare effects from the changes in risk bearing. For the policy change under consideration the ex-ante CV is the single lump-sum transfer in the current period that would hold expected utility constant, and is solved for each consumer using (8.5), with dEU h = 0, as CVexh ante = ∂pˆ dLˆh0 ∂qˆ = −( x0h − x0h ) 0 + z0h 0 − MRS0h dG1 ∂G1 ∂G1 ⎛ ⎞ ∂qˆ ∂pˆ + ∑ π s ms ⎜ ( xsh − xsh ) s − ysh s − MRSsh ⎟ , ∂G1 ∂G1 ⎝ ⎠ s with dLˆhs / dG1 = 0 for all s. When summed over consumers, we have CVex ante = ∑ h CVexh ante. Notice how this CV holds expected utility constant but allows utility to change across states of nature in the second period. Thus, it measures the change in real income from the project without undoing its impact on consumption risk. Weisbrod (1964) refers to the ex-ante CV as the option price which Graham uses to compute the option value for a project by deducting its ex-ante CV from the expected CV in (8.19): OV = CVex ante − E (CV ). This conveniently provides a welfare measure of the project’s impact on consumption risk. Since the expected CV holds utility constant in every future state of nature it completely undoes all aspects of the policy change on real income, including its mean and variance. 
In contrast, the ex-ante CV measures the change in real income from the project without eliminating its impact on consumption risk. Thus, a positive option value in (8.21) Project evaluation and the social discount rate tells us the project reduces consumption risk, while a negative option value indicates it increases consumption risk.26 The project with efficiency losses could be socially profitable when the risk benefits are large enough. And when that happens the expected CV must be smaller than the ex-ante CV, as the expected CV completely undoes the reduction in consumption risk. When consumers are risk-neutral, or the project has no impact on consumption risk, the option value is zero, with CVex ante = E(CV), and both measures of the CV will isolate the change in expected real income. 8.1.4 The role of income effects The analysis in the previous subsection makes it clear how income effects from policy changes play two roles when there is uncertainty. They redistribute income across consumers as well as across states of nature. In this subsection we relate compensated welfare changes to actual dollar changes in expected utility. Consider the compensated welfare change for the project evaluated in (8.17). It isolates the change in real income (that the government could pay as foreign aid at no cost to domestic utility) when the expected CV is used to hold constant the utility of every consumer in both time periods and in every state. Thus, it measures the extra real income for the true status quo. Once this surplus revenue is distributed through lump-sum transfers back to domestic consumers the income effects raise their expected utility by the welfare change (dW/dG1) in (8.11). 
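The option-value bookkeeping in (8.21) can be illustrated with hypothetical compensating transfers (none of the figures below are from the text):

```python
# Option value as in (8.21): OV = CV_ex_ante - E(CV), with the expected
# CV built from state-contingent transfers as in (8.19),
# E(CV) = CV_0 + sum_s pi_s * m_s * CV_s. All transfer figures are
# hypothetical, chosen only to show the sign logic.

def expected_cv(cv0, probs, m, state_cvs):
    return cv0 + sum(p * ms * cv for p, ms, cv in zip(probs, m, state_cvs))

cv0 = -1.0                         # first-period compensating transfer
probs, m = [0.5, 0.5], [0.97, 0.95]
state_cvs = [-4.0, -2.0]           # second-period transfers, by state
cv_ex_ante = -3.5                  # single current-period transfer

OV = cv_ex_ante - expected_cv(cv0, probs, m, state_cvs)
# OV > 0 here: the project reduces consumption risk.
```

Had the ex-ante CV been smaller in absolute value than the expected CV, the option value would have been negative, signalling that the project adds consumption risk.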
This relationship can be formalized as a generalized version of the Hatta (1977) decomposition by writing the social welfare function used in (8.6) over the exogenous policy variables G0, G1, t0, t1, and R, as W(G0, G1, t0, t1, R) where the change in foreign aid payments that would offset the welfare effects from marginally raising output of the public good in the second period solves dW (⋅) = dW ˆ dW ˆ dG1 + dR = 0. dG1 dR We obtain the generalized Hatta decomposition for the project by rearranging these terms, as dW dRˆ 27 = SR , dG1 dG1 where SR = – dW/dR =1 – dT/dR is the shadow value of government revenue; it measures the amount social welfare rises when a dollar of surplus revenue is endowed on the government who transfers it to domestic consumers to balance its budget. This is an important decomposition for two reasons. First, all the income effects from marginal policy changes are isolated by SR, including distributional effects across consumers and states of nature. By measuring the option value defined in (8.21), we can separate the welfare effects of income distribution across consumers from the income redistribution across states of nature. Two main approaches are used to account for distributional effects across consumers in project evaluation. The first is recommended by Boadway (1976) and Drèze and Stern (1990) where different distributional weights are assigned to consumers in (8.6), while the second approach by Bruce and Harris (1982) and Diewert (1983) tests for Pareto improvements. Most policy analysts are Project evaluation and the social discount rate Box 8.5 The shadow value of government revenue in the public good economy In the two-period certainty economy summarized in Box 8.2 the shadow value of government revenue is less than unity. In other words, endowing another dollar of income on the economy will raise aggregate utility by less than a dollar. And this occurs because extra real income contracts the tax base. 
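Box 8.5's revenue-feedback argument can be checked numerically, using θ ≈ −0.0820645 from the earlier boxes:

```python
# Box 8.5's shadow value of government revenue. An endowed dollar of
# income triggers the revenue feedback theta + theta^2 + ..., which
# sums to theta/(1 - theta), so SR = 1 + theta/(1 - theta) = 1/(1 - theta).
# theta is the income-effect coefficient reported in the earlier boxes.
theta = -0.0820645

feedback = sum(theta ** k for k in range(1, 50))   # truncated series
SR = 1.0 + feedback                                # ~0.92

# The truncation already matches the closed form to machine precision.
closed_form = theta / (1.0 - theta)
```

Because |θ| < 1, the rounds of feedback through the government budget converge, and the shadow value of a dollar of revenue settles at roughly 92 cents in this economy.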
If a dollar of income is endowed on the economy (with dR < 0) the dollar change in utility is SR = 1 − dT , dR where the change in tax revenue solves − ∂x (⋅) dI dx ∂x (⋅) dI dx dT = t0 0 − t1 1 = t0 0 − t1 1 , ∂I dR ∂R dR dR dR dR with − dI/dR = 1 − dT/dR. After substitution, we have − dT θ = ≈ − 0.08, dR 1 − θ with θ = − t0 (∂x0 (·) / ∂I) + t1(∂ x1 (·) /∂I) ª −0.0820645. There is good economic intuition for this change in tax revenue. With the log-linear preferences summarized in Box 8.1 the demand for the private good in each time period is normal. An extra dollar of income initially raises demand for them and reduces tax revenue by θ as the tax base contracts due to the fall in supply of the good in the first period. When the government transfers this amount from the consumer to balance its budget the income effect increases tax revenue by θ 2. In the next round it falls by θ3, and so on, until the change in tax revenue solves the infinite sequence θ + θ2 + θ3 +… = −θ/(1−θ). Thus, the final welfare change is SR = 1 − dT 1 = ≈ $0.92. dR 1 − θ It is illustrated in the following diagram where the cross-lined areas are changes in tax revenue. The extra real income contracts the tax base in the first period by reducing supply and expanding the tax base in the second period by increasing demand. p0 SR = 1− a + δc ≈ 0.92 – x −x 0 a + b + δd = 1 1.133 1.03 1 0.8 a(0.12) b(0.47) ← d(0.42) x1 – x0 − x0 Project evaluation and the social discount rate reluctant for their subjectively chosen distributional weights to have a major influence on policy outcomes, particularly in circumstances where policies with efficiency losses are promoted on distributional grounds. That is why analysts frequently report the efficiency and distributional effects separately. Other analysts, very much in the spirit of a conventional Harberger analysis, recognize the influence governments have over distributional outcomes when they make tax changes to balance their budgets. 
For that reason, Bruce and Harris (1982) and Diewert (1983) test to see whether patterns of transfers can be chosen to convert extra real income into Pareto improvements.28 Second, for a positive shadow value of government revenue, there must be efficiency gains from policy changes (dR̂/dG1 > 0) whenever dollar changes in expected utility are positive (dW/dG1 > 0). And since SR is an independent scaling coefficient for marginal policy changes, income effects play no role in project evaluation.29

8.2 The social discount rate

A major controversy in the evaluation of public sector projects is over the value of the social discount rate. Some argue it should be the same discount rate used by private operators, while others claim it should be lower. In economies with markets distorted by taxes, externalities and non-competitive behaviour, the social discount rate will, in general, differ from the discount rate used by private investors for the same project. In particular, income taxes drive wedges between the cost of capital for investors and the after-tax returns to savers. Harberger (1969) and Sandmo and Drèze (1971) show how this makes the social discount rate a weighted average of the borrowing and lending rates of interest in a two-period certainty setting. By extending their analysis to additional time periods, Marglin (1963a, 1963b) finds it is higher than the weighted average formula, while Bradford (1975) finds it is approximately equal to the after-tax interest rate. Sjaastad and Wisecarver (1977) show how these differences are explained by the treatment of depreciation in public capital. Whenever private saving adjusts to replace this depreciation, the weighted average formula also applies in a multi-period setting. Others claim the discount rate for public projects is affected by risk. Samuelson (1964), Vickrey (1964) and Arrow and Lind (1970) claim it should be lower than the discount rate used by private firms undertaking the same project.
Samuelson and Vickrey argue this happens because the public sector undertakes many projects with uncorrelated returns that allow it to eliminate diversifiable risk. Arrow and Lind take a different approach by arguing that the public sector can use the tax system to spread risk over a large number of consumers when project returns are uncorrelated with aggregate income. Essentially, both arguments rely on the government being able to eliminate diversifiable risk and trade aggregate uncertainty at lower cost than private markets. Bailey and Jensen (1972) claim this is not, in general, the case, and that the risk premium in the discount rate should be the same for the public and private sectors when they undertake the same projects. In this section we derive the weighted average formula of Harberger, and of Sandmo and Drèze, before extending their analysis to accommodate uncertainty. Then we consider the social discount rates obtained by Marglin and Bradford when there are more than two time periods, and reconcile them with the weighted average formula using the analysis in Sjaastad and Wisecarver. This allows us to isolate the important role of depreciation in public capital. Finally, we examine the arguments by Samuelson, Vickrey, and Arrow and Lind that discount rates for public projects should be lower.

8.2.1 Weighted average formula

In the presence of distortions, market prices do not, in general, provide true measures of the marginal valuation and marginal cost of goods and services. For example, a consumption tax drives a wedge between marginal consumption benefits and marginal production costs, where consumer prices overstate social costs and producer prices understate social benefits. With downward-sloping demand schedules and increasing marginal cost schedules, the true (social) value of any good is a weighted average of its consumer and producer prices.
This same logic applies to the discount rate that determines the opportunity cost of current consumption. We demonstrate this formally by introducing a tax on capital income in the two-period uncertainty model used earlier in Section 8.1. To simplify the analysis it is set at the same rate (τ) for all consumers, who face common discount factors of

ms = 1/[1 + is(1 − τ)] ∀s.

This is confirmed by writing the state-contingent payouts to saving (p0 z0h) by each consumer in (8.1) as ps ysh = [1 + is(1 − τ)] p0 z0h, where optimally chosen saving satisfies (8.24). It is the same for all consumers because they can trade in a complete competitive capital market. A further adjustment must also be made to the government budget constraints in (8.4) to include income tax revenue in the second period:

T0 ≡ t0(x̄0 − x0) = MRT0 G0 − L0,
Ts ≡ t1(xs − x̄s) + τ is p0 z0 = MRTs G1 − Ls ∀s.

We obtain the social discount rate by measuring the welfare change from marginally increasing the first-period endowment of the private good. In effect, this is equivalent to an exogenous increase in the supply of capital to the economy, where the welfare change is referred to as the shadow value of capital (SK). It is the current value of the extra consumption generated by a marginal increase in capital.
By allowing this endowment to change exogenously in the presence of the income tax, we obtain an amended conventional welfare equation,

dW = (MRS0 − MRT0) dG0 − t0 dx0 + p0 dx̄0 + Σs πs ms [(MRSs − MRTs) dG1 + t1 dxs + τ is p0 dz0],

where p0 dx̄0 measures the direct welfare gain from marginally increasing the private endowment, and τ is p0 dz0 the welfare gain from a reduction in the excess burden of the income tax when private investment expands endogenously.30 Using this equation, we obtain a shadow value of capital of SK = p0 Σs πs ms (1 + ψs),31 where

ψs = is(1 − τ) + τ is (∂z0/∂x̄0) − (t0/p0)[1 + is(1 − τ)](∂x0/∂x̄0) + (t1/p0)(∂xs/∂x̄0)   (8.25)

is the social discount rate which measures the amount by which private consumption grows in each state of nature. In the absence of taxes and other distortions the social discount rate is equal to the private discount rate, with ψs = is for all s, and the shadow value of capital is its market price, with SK = p0. That is not in general the case, however, in the presence of the taxes. A marginal increase in the supply of capital is absorbed into the economy through endogenous changes in private saving and investment, which have different social values in the presence of the income tax. This causes the social discount rate to deviate from the private discount rate. To identify the separate effects of taxes and risk we derive the social discount rate for a number of special cases.

Certainty without trade taxes

This replicates the analysis used by Harberger, and by Sandmo and Drèze, who obtain a weighted average formula for the social discount rate.
By setting t0 = t1 = 0, and using the market-clearing condition for the private good in the first period (with x̄0 = x0 + z0 + MRT0 G0), the social discount rate in (8.25) becomes

ψ = αi + (1 − α)i(1 − τ),32   (8.26)

where α = ∂z0/∂x̄0 is the endogenous change in private investment, and 1 − α = ∂x0/∂x̄0 the endogenous change in private saving. It is illustrated as the cross-lined rectangles in Figure 8.5, where a marginal increase in the supply of capital (dx̄0) is absorbed into the economy by a lower interest rate which expands private investment demand by α and contracts private saving by 1 − α. (The dashed lines isolate the new equilibrium outcome.) Since ψ measures the growth in aggregate consumption from investing another dollar of capital, it is the social discount rate to use when evaluating public projects. In other words, socially profitable projects must match or better this future change in aggregate consumption. The interest rate is the marginal social value of extra private investment, while the after-tax interest rate is the marginal social value of the reduction in private saving, where, from (8.26), we have i ≥ ψ ≥ i(1 − τ).

[Figure 8.5 Weighted average formula. Diagram omitted: the marginal increase in capital dx̄0 is absorbed by an increase in private investment (α), valued at i, and a reduction in private saving (1 − α), valued at i(1 − τ).]

[Figure 8.6 Fixed saving. Diagram omitted: with α = 1 the extra capital dx̄0 flows entirely into private investment, valued at the interest rate i.]

Clearly, the interest elasticities of private investment and saving determine where the social discount rate lies within these bounds. With fixed private saving, additional capital must be absorbed into the economy by an equal increase in private investment, with α = 1, where the social discount rate in (8.26) becomes ψ = i. It is illustrated by the cross-lined rectangle in Figure 8.6 as the present value of the net increase in consumption due to extra private investment. The same thing happens when private investment is perfectly price-elastic due to a constant net marginal product of capital.
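The bounds i ≥ ψ ≥ i(1 − τ) in (8.26) can be illustrated with a short numerical sketch. The interest and tax rates below (i = 0.03, τ = 0.4, the values used in Box 8.6) are illustrative assumptions:

```python
# Weighted average formula (8.26): psi = alpha*i + (1 - alpha)*i*(1 - tau),
# where alpha is the share of extra capital absorbed by private investment.
i, tau = 0.03, 0.4   # illustrative values (the rates used in Box 8.6)

def psi(alpha):
    return alpha * i + (1 - alpha) * i * (1 - tau)

fixed_saving = psi(1.0)      # alpha = 1: psi = i (the Figure 8.6 case)
fixed_investment = psi(0.0)  # alpha = 0: psi = i*(1 - tau)
interior = psi(0.5)          # an interior case lies between the bounds
```

Any interior value of α places ψ strictly between the after-tax and pre-tax interest rates.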
With fixed private investment demand, the additional capital is absorbed into the economy by crowding out private saving, with α = 0, where the social discount rate in (8.26) becomes ψ = i(1 − τ). This welfare change is illustrated as the cross-lined rectangle in Figure 8.7. It is the value of the net benefits from consuming more of the private good in the first period when saving falls. In general, however, these extremes are unlikely, especially in the long run when consumption and production are more responsive to changes in real income.

[Figure 8.7 Fixed investment demand. Diagram omitted: with α = 0 the extra capital dx̄0 crowds out private saving, valued at the after-tax interest rate i(1 − τ).]

Box 8.6 The weighted average formula in the public good economy

Here we obtain a numerical estimate of the weighted average formula for the shadow discount rate in the two-period certainty model summarized in Box 8.1 for the initial equilibrium allocation. [Table omitted: taxes (%), public goods and Bailey demands at the initial equilibrium allocation.]

In the presence of a 40 per cent tax on interest income and no trade taxes, the shadow price of capital solves

SK = dU/dx̄0 = p0 + δτi p0 (1 − dx0/dx̄0),

with

δ = 1/[1 + i(1 − τ)] ≈ 0.98.

The change in private saving can be decomposed using the ordinary demand schedules as

1 − dx0/dx̄0 = 1 − (∂x0(⋅)/∂I)(dI/dx̄0),

where the income effect is obtained, using aggregate income of

I = q0 x̄0 + δτi p0 (x̄0 − x0 − G0 − R) − p0 G0 − δ p1 G1 − R,

as dI/dx̄0 = SK. After substitution, we have

SK = p0 (1 + δτi)/(1 − θ) = $1.01,

with θ = −δτi p0 ∂x0(⋅)/∂I ≈ −0.006. Since SK = p0 δ(1 + ψ) defines the relationship between the shadow price of capital and the social discount rate, we have

ψ = αi + (1 − α)i(1 − τ) ≈ 0.024,

where α = 1 − dx0/dx̄0 is the change in private investment, which is solved, using the ordinary demand schedules, as

α = SR (1 − p0 ∂x0(⋅)/∂I) ≈ 0.49,

with SR = 1/(1 − θ) ≈ 0.994.
Thus, the shadow discount rate lies between the pre- and post-tax interest rates, with

i(1 − τ) (= 0.018) ≤ ψ (≈ 0.024) ≤ i (= 0.03).

Private saving falls, despite an unchanged interest rate, because the income effect from increasing the endowment of the private good raises current demand. If we include the trade taxes in Box 8.2, the shadow price of capital falls to SK ≈ 0.93, and the discount rate becomes negative at ψ ≈ −0.054. This fall in welfare is due to the larger excess burden of taxation as the tax base contracts with the extra real income.

Aggregate uncertainty

An interesting, and important, extension to the analysis of Harberger and of Sandmo and Drèze introduces aggregate uncertainty. When evaluating public projects we use a social discount rate that captures the full social opportunity cost of capital, including a risk premium when the net cash flows impose costs on risk-averse consumers. We extend the analysis in the previous section by including aggregate uncertainty in the presence of the uniform income tax (and without trade taxes), where, from (8.25), the social discount rate becomes the state-contingent weighted average formula,

ψs = α is + (1 − α) is(1 − τ) ∀s.

In this setting there is a risk premium in the return to capital, and it is computed in the same way for private and public sector projects when the government has no advantage over the private sector in trading risk. We can see from the general expression for the social discount rate in (8.25) that it can deviate from the weighted average formula when there are other market distortions – in this case, trade taxes. By using the capital market clearing condition, we have

ψs = α is + (1 − α) is(1 − τ) − (t0/p0)(1 − α)[1 + is(1 − τ)] + (t1/p0)(∂xs/∂x̄0) ∀s,

where the last two terms are welfare effects from resource movements in distorted markets.
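The chain of calculations in Box 8.6 can be reproduced in a few lines. This is an editorial sketch using the parameter values reported in the box (i = 0.03, τ = 0.4, p0 = 1 and θ ≈ −0.006):

```python
# Reproduce the Box 8.6 estimates of the shadow price of capital S_K and
# the shadow discount rate psi (no trade taxes).
i, tau, p0 = 0.03, 0.4, 1.0
theta = -0.006                       # income effect reported in Box 8.6

delta = 1 / (1 + i * (1 - tau))      # discount factor, roughly 0.98
S_R = 1 / (1 - theta)                # shadow value of revenue, roughly 0.994
S_K = p0 * (1 + delta * tau * i) / (1 - theta)  # shadow price of capital
psi = S_K / (p0 * delta) - 1         # from S_K = p0 * delta * (1 + psi)
```

ψ comes out at about 0.024 and lies between i(1 − τ) = 0.018 and i = 0.03, as the text reports.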
When private saving falls, it reduces the net supply of the private good and exacerbates the excess burden of the trade tax, where the reduction in trade tax revenue in the second last term is a welfare loss. In contrast, the additional tax revenue in the last term is a welfare gain from reducing the excess burden of the trade tax in the second period. Whether these additional welfare changes move the social discount rate above or below the weighted average formula depends on the change in net demand for the private good in the second period. If it generates a welfare gain (in the last term) that is large enough to offset the welfare loss from the reduction in current trade tax revenue (in the second last term), the social discount rate rises above the weighted average formula. When trade tax revenue declines in present value terms, the discount rate falls below the weighted average formula. Ultimately, the final outcome depends upon the size of the taxes as well as consumer preferences and production technologies. These related market effects are often overlooked in the evaluation of small-scale projects because they are too costly to measure. Typically, project outputs and inputs have cross-effects in a number of distorted markets, and they can be isolated using a general equilibrium model with parameter values calibrated on data taken from the economy. Alternatively, they can be estimated directly from data using empirical analysis. But these options are often too costly to undertake, in which case the distortions on project outputs and inputs are the only ones taken into account. Goulder and Williams (2003) find income taxes on capital and labour inputs have the most important welfare effects. Indeed, they often dominate the welfare effects arising from taxes and other distortions on project outputs.
Thus, for small-scale projects it would seem prudent to include welfare effects arising from distortions on project outputs and inputs, and to ignore welfare effects from indirect cross-effects in other distorted markets. A more realistic measure of the social discount rate would accommodate progressive income taxes and include a corporate tax. We could also include distributional effects by assigning different distributional weights to consumers in the welfare change in (8.6). A number of these extensions are examined in Jones (2005), where social discount rates are personalized for consumers facing different taxes on income and different distributional weights. While these extensions make the social discount rate more accurate, they make the analysis more complex without adding greatly to the insights already obtained earlier. Instead, we extend the analysis in the next section by adding more time periods to examine the impact of capital depreciation on the social discount rate.

8.2.2 Multiple time periods and capital depreciation

In a two-period setting depreciation plays no role in the analysis because all capital is liquidated in the second period. With extra time periods capital can be carried beyond the second period, where (economic) depreciation measures the change in the market value of the asset in each year of its life. Unless investment rises to replace depreciated capital, future consumption will fall. Marglin (1963a, 1963b) and Bradford (1975) find the social discount rate deviates from the weighted average formula when they add time periods to the analysis of Harberger, and Sandmo and Drèze. Sjaastad and Wisecarver (1977) show that this occurs because depreciation is not matched by additional private saving. Using certainty analysis with an infinite time horizon, Marglin finds the discount rate is higher than the weighted average formula in (8.26).
This is demonstrated for a public project that generates a payout of 1 + δ in the second period of its life and none thereafter. It is socially profitable when

(1 + δ)/[1 + i(1 − τ)] ≥ (1 − α) + αi/[i(1 − τ)].   (8.29)

This condition makes the present value of the net consumption flow greater than or equal to its social cost. Since this project increases the demand for capital in the economy, it is satisfied through a rise in private saving and/or a reduction in private investment, where Marglin identifies the social cost of forgone current consumption due to the increase in private saving as 1 − α, and the present value of forgone future consumption (αi) in perpetuity due to the reduction in private investment as αi/[i(1 − τ)]. Rearranging (8.29), we find the project is socially profitable when δ ≥ ψ + ατi > ψ. Using a Keynesian consumption function that makes saving a constant fraction of income in each time period, Bradford finds the same project is socially profitable when

(1 + δ)/[1 + i(1 − τ)] ≥ 1,

which implies δ ≥ i(1 − τ), with i(1 − τ) < ψ. These findings by Marglin and Bradford create a dilemma for policy analysts, as they suggest the social discount rate can range in value from i(1 − τ) to something above the weighted average formula (ψ). Clearly, some projects may only be viable at low discount rates and others only at high rates – it depends on the timing and values of their benefits and costs. Sjaastad and Wisecarver show how these different views are explained by the treatment of depreciation in public capital. Once consumers adjust their saving to replace this depreciation, the weighted average formula applies in a multi-period setting.
They demonstrate this for the project considered by Marglin, where the payout in the second period becomes 1 + δ − α − αi, with 1 + δ being the direct consumption benefit from the project, α the fall in current consumption when saving rises to offset the depreciation in public capital, and αi the fall in consumption due to the reduction in private investment. Now the project is socially profitable when

(1 + δ − α − αi)/[1 + i(1 − τ)] ≥ 1 − α,

where, after rearranging terms, we have δ ≥ ψ. It is likely that consumers will adjust their saving, at least partially, when they observe depreciation in public capital. As wealth-maximizing agents they compute the expected consumption benefits from public capital and the higher expected taxes to replace depreciated capital. If, for whatever reason, consumers do not adjust their saving to offset depreciation in public capital, the social discount rate rises above the weighted average formula.

8.2.3 Market frictions and risk

A number of studies argue the social discount rate can be lower for public projects when the government can trade risk at lower cost than trades in private markets. Samuelson (1964) and Vickrey (1964) argue the government is a relatively large investor in the economy that undertakes many projects with uncorrelated risks that can be diversified inside the public sector. As a consequence, it can pool these risks at lower cost than the private sector by bundling securities in portfolios and purchasing insurance. Arrow and Lind (1970) argue the discount rates on public projects are lower because their returns are uncorrelated with aggregate income and the government can diversify risk across a large number of consumers through the tax system. Thus, the public sector offers better opportunities for trading aggregate risk and eliminating diversifiable risk. Bailey and Jensen (1972) refute both these claims by arguing consumers can achieve the same, if not better, risk-trading opportunities in private markets.
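The three profitability conditions for the Marglin project above imply different hurdle rates for δ, which can be compared directly. In this sketch the values of α, i and τ are illustrative assumptions, and each hurdle is computed from the displayed condition at equality:

```python
# Hurdle rates for the project paying 1 + delta in its second period.
alpha, i, tau = 0.5, 0.03, 0.4   # illustrative values (assumed)
psi = alpha * i + (1 - alpha) * i * (1 - tau)   # weighted average formula

# Bradford: (1 + delta)/(1 + i(1 - tau)) >= 1  =>  delta >= i(1 - tau)
bradford = i * (1 - tau)

# Sjaastad-Wisecarver: (1 + delta - alpha - alpha*i)/(1 + i(1 - tau)) >= 1 - alpha;
# solving at equality recovers the weighted average formula, delta = psi.
sjaastad = (1 - alpha) * (1 + i * (1 - tau)) - 1 + alpha + alpha * i

# Marglin (8.29): (1 + delta)/(1 + i(1 - tau)) >= (1 - alpha) + alpha*i/(i*(1 - tau))
marglin = (1 + i * (1 - tau)) * ((1 - alpha) + alpha * i / (i * (1 - tau))) - 1
```

The Bradford hurdle sits below ψ, the Marglin hurdle above it, and the Sjaastad–Wisecarver hurdle coincides with ψ once saving replaces the depreciation in public capital.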
Indeed, the absence of a profit motive can make public employees less efficient operators, thereby raising the costs of trading risk. And since most taxes distort activity, the tax system is likely to be a more costly way of spreading risk across consumers.33 Bailey and Jensen also argue public project returns are mostly correlated with aggregate income, which means they contain aggregate risk that cannot be diversified inside the public sector. It is frequently claimed the arguments by Arrow and Lind for using a lower discount rate on public projects can be justified by the moral hazard and adverse selection problems that arise when traders have asymmetric information. These problems, which were examined in Chapter 5, can raise the cost of eliminating diversifiable risk. If it can be eliminated at lower cost through the tax system, or by pooling it inside the public sector, the discount rate on public projects will be lower. However, as Dixit (1987, 1989) observes, the public sector is also subject to the same information asymmetries as private traders, and it may not be able to lower the cost of trading risk. For that reason it is important to examine the risk-spreading opportunities available to the public sector when it too is subject to moral hazard and adverse selection problems. In the final analysis, a lower discount rate for public projects must be based on some form of market friction (or failure) which the government is able to overcome at lower cost than the private sector, and these cost efficiencies need to be quantified to determine their impact on the discount rate.

Problems

1 The government collects revenue from a consumption tax on cigarettes (C) when the aggregate demand function (measured in thousands of cartons) is C = 200 − 5q, where q = p + t is the consumer price per carton (measured in dollars), with p being the producer price and t = 5 the constant tax per carton.
i Use a partial equilibrium analysis to compute the marginal social cost of public funds (MCF) for the consumption tax on cigarettes when they are produced at a constant marginal cost of $6 per carton (with no fixed costs). Measure the welfare changes as aggregate dollar changes in private surplus. Calculate the marginal excess burden of taxation (MEB) for this tax and illustrate the MCF in a quantity–price {C, q} space diagram.

ii Redo part (i) when the marginal cost of production rises with MC = 0.08C.

2 Consider the capital market for a closed economy in a two-period certainty setting where the aggregate demand (D) for capital is determined by D = a − bi and aggregate supply (S) by S = c + di, with i being the risk-free interest rate and a, b, c and d constant positive parameters.

i Use a partial equilibrium analysis to compute an expression for the shadow discount rate when there is an income tax at rate τ on interest income paid to suppliers of capital (so that S = c + di(1 − τ)). Illustrate your answer in a quantity–price space diagram and explain what the welfare changes represent. (Measure the welfare changes as aggregate dollar changes in private surplus.)

ii Derive the shadow discount rate when d = 0 and illustrate the welfare changes in a quantity–price space diagram. Compare it to the discount rate in part (i) above.

iii Derive the shadow discount rate when b = 0 and illustrate the welfare changes in a quantity–price space diagram. Compare it to the discount rates in parts (i) and (ii) above.

3 In a two-period certainty model a single consumer has an endowment of time (x̄T) in the first period (0) which is divided between leisure (xT) and labour supply to firms. Labour income is used to purchase a consumption good in the first period (x0), while the rest is saved (s) and returned with interest (i) after tax (s(1 + i − τS)) to fund the purchase of the consumption good in the second period (x1).
Thus, the consumer problem is to maximize u(x0, xT, x1) subject to

q0 x0 + q1 x1/(1 + i − τS) = (1 − τT)(x̄T − xT) − s + π0 + [π1 + s(1 + i − τS)]/(1 + i − τS) + L,

where: L is a lump-sum transfer from the government; qi = (1 + τC)pi is the consumer price of the consumption good in each period i = 0, 1, with pi being the producer price and τC a uniform ad valorem expenditure tax; τS is the ad valorem tax on interest income; (1 − τT)(x̄T − xT) is the after-tax income from labour supplied by the consumer, with τT being the ad valorem tax rate on labour income; π0 = p0 y0(yT) + yT is profit on private production of the consumption good in the first period, with yT < 0 being labour input used; and π1 = p1 y1(kp) − kp(1 + i) is the profit from private production of the consumption good in the second period, with p1 y1(kp) being sales revenue and kp(1 + i) the cost of private investment in labour purchased in the current period, kp. We assume private firms have strictly concave production technologies and operate as price-takers. The public sector budget constraint (defined in present value terms) will be

L = τT(x̄T − xT) + τC [p0 x0 + p1 x1/(1 + i − τS)] + τS s/(1 + i − τS) + [p1 g1 − (1 + i)kg]/(1 + i − τS),

where p1 g1 − (1 + i)kg is profit from public production of the consumption good in the second period, with p1 g1 being sales revenue and kg(1 + i) the cost of public investment in labour purchased in the first period, kg. Finally, the market-clearing conditions are x0 = y0, x1 = y1 + g1 and x̄T − xT + yT = kp + kg = s. There are five exogenous policy variables: τC, τT, τS, g1 and kg.

i Derive a conventional welfare equation for marginal changes in the policy variables and use it to compute the shadow price of capital and the shadow discount rate when the tax on interest income is the only tax (with τT = τC = 0). Illustrate the shadow discount rate in a price–quantity diagram, and compare it with the cost of capital for private firms.
(The interest rate and relative prices of the consumption goods are determined endogenously in a competitive equilibrium.)

ii Derive the welfare loss from marginally raising the expenditure tax when there are no taxes on income (with τT = τS = 0).

iii Derive the welfare loss from marginally raising an income tax (with τT = τS = τ) when there is no expenditure tax (with τC = 0).

iv Compare the expenditure and income tax bases in parts (ii) and (iii) above. You can use the first-order conditions for firms and the market-clearing condition in the labour market to show that the income tax base includes the expenditure tax base. Explain what the additional welfare change is in the income tax base. What determines the welfare cost of raising a given amount of revenue with each tax? Does one of the two taxes always have a lower welfare cost?

Notes

1 Introduction

1 It is difficult to isolate purely atemporal trades as most goods embody future consumption. For example, packets of washing powder and breakfast cereals have consumption flows in the future and are, strictly speaking, capital assets. Financial securities have a limited role in facilitating purely atemporal trades in a certainty setting where the quality and quantity of the goods are known to buyers and sellers. Once we introduce time and asymmetric information, financial securities can be used to specify the obligations on both parties to get or provide the necessary information about the product being exchanged.

2 Governments are monopoly suppliers of currency (notes and coins), but the money supply is more broadly defined to include cheque and other deposit account balances used for trading goods and services. Since financial institutions keep a fraction of their deposits as currency to meet the cash demands of depositors, there is a multiplier effect from changes in the supply of currency.
The nominal price level equates supply and demand in the market for broadly defined money.

3 This is an important issue for the equilibrium asset pricing models examined in Chapter 4.

4 The terms ‘risk’ and ‘uncertainty’ are frequently treated as the same thing. Knight (1921) defined risk as uncertain outcomes over which individuals assign probabilities, while uncertainty relates to outcomes over which they do not, or cannot, assign such probabilities. But this distinction is less clear when consumers with different information assign subjective probabilities to uncertain outcomes.

5 Futures contracts are standardized forward contracts which trade on formal futures exchanges, whereas forward contracts also include tailor-made agreements between buyers and sellers that trade over the counter.

6 We use the conventional analysis recommended by Harberger where aggregate welfare is the sum of the dollar changes in expected utility for consumers. In effect, this approach uses a Bergson–Samuelson individualistic social welfare function (Bergson 1938; Samuelson 1954) and ignores any distributional effects by assigning the same distributional weight of unity to all consumers. Distributional effects can be included in the analysis by assigning different weights to consumers.

7 Initially both schemes were adopted to reduce the variability in producer prices, but they eventually became price support schemes for domestic producers. Both schemes were eventually abandoned due to the very large costs they imposed on

2 Investment decisions under certainty

1 Capital goods are stocks of future consumption, while investment is the flow of resources into capital goods over a specified period of time.

2 Strictly speaking, the capital market is where all intertemporal trade takes place. It includes trades in physical commodities, such as apples, or financial securities which provide income streams in future periods.
There are sub-markets in the capital market, including the financial market where financial securities trade, and the real estate market where property trades. A number of other markets are included in the financial market, such as banks, the stock market, the futures exchange, the bond market, and the markets for derivative securities (options, swaps, warrants, etc.). In finance it is not uncommon for the financial market to be referred to by default as the capital market, where this reflects a focus on trades in financial securities.

3 Since there are 2N commodities, consumers are choosing bundles (x0h, x1h) from a 2N-dimensional commodity space. When each consumer h has a weak preference relation ≿h over these bundles that is complete, transitive and continuous, it can be represented by a utility function uh: ℝ2N → ℝ+, such that (x0hA, x1hA) ≿h (x0hB, x1hB) ⇔ uh(x0hA, x1hA) ≥ uh(x0hB, x1hB). A proof of the existence of the utility function can be found in most graduate microeconomics textbooks; see, for example, Mas-Colell et al. (1995). This function is a contemporaneous measure of utility where each consumer chooses their intertemporal consumption in the first period. Thus, they measure utility from future consumption in the first period.

4 These constraints require each element in the set of consumption goods to be less than or equal to its corresponding element in the set of endowments in each time period.

5 For standard preferences we assume the utility function uh is monotone (to rule out satiation) and strictly quasi-concave (to make the indifference schedules strictly convex to the origin in the commodity space). These assumptions are adopted by default in the following analysis.

6 We obtain this marginal rate of substitution by totally differentiating the utility function, with du = u0′(n) dx0(n) + u1′(n) dx1(n) = 0.
After rearranging terms, we have

dx0(n)/dx1(n)|du=0 = −u1′(n)/u0′(n),

where MRS1,0(n) ≡ −dx0(n)/dx1(n) is the inverse of the slope of the indifference curve at the endowment point in Figure 2.1. From the first-order conditions for optimally chosen consumption, we have ut′(n) = λt(n) for t ∈ {0, 1}.

7 When consumers have homothetic preferences their rates of time preference for goods are independent of real income. In other words, their indifference schedules in Figure 2.1 have the same slope along a 45° line through the origin.

8 It should be noted that this also accommodates storage when it is the most efficient way of transferring consumption goods to future time periods. After all, production is a process that converts resources into more valuable goods and services, where in a certainty setting they are distinguished from each other by their physical characteristics, geographic location and location in time.

9 This derivation of the marginal rate of substitution uses the fact that ∂z0(n)/∂x0(n) = −1.

10 Malinvaud (1972) distinguishes between discount rates for income and discount rates for future consumption goods. In this setting there is a personal discount rate for income of λ1h/λ0h ≡ 1/(1 + ih), and a personal discount rate for each commodity n ∈ N of MRS1,0h ≡ 1/[1 + ρh(n)].

11 Lengwiler (2004) derives income as the representative commodity in an asset economy where consumers can trade within each time period and between each time period in an uncertainty setting. We introduce financial securities in Section 2.2.3 and show how they allow consumers to choose their distribution of income over time (subject to the constraint that income sums to their wealth in present value terms). In fact, income can be used as a representative commodity in an economy without financial securities if consumers can trade intertemporally using forward contracts.
What financial securities do is reduce the number of variables that consumers must determine in the first period, where they choose the value and composition of their consumption bundle and their holding of financial securities. While they decide the composition of future consumption in the first period, the choices are actually made in the second period. However, in exchange economies with forward contracts, consumers must determine the value and composition of their current and future consumption bundles in the first period.

12 When currency is held as a store of value governments collect revenue as seigniorage due to the non-payment of interest. This imposes a distorting tax on currency holders by driving a wedge between their private cost of holding currency, which is the nominal rate of interest, and the social marginal cost of printing currency. Revenue is transferred by this tax as seigniorage to the government because it uses real resources obtained by printing currency at no interest cost. The real wealth of traders who hold currency balances over time as a store of value will be affected by anticipated changes in the nominal money supply that impact on the nominal interest rate. This wealth effect is examined later in Section 2.3.

13 By ‘full trade’ we mean consumers can exchange goods within each time period and over time periods.

14 We use the notation defined in Section 2.2.1, where in each time period t ∈ {0, 1}, X_t and X̄_t are, respectively, the market values of the consumption and endowments for consumer h, with I_t being income (measured in units of the numeraire good).

15 The discount factor is obtained by noting that R_1 = p_a(1 + i).

16 The no arbitrage condition makes the security of every firm a perfect substitute because it eliminates any profit from the returns they pay. In other words, they all pay the risk-free return.
If firms are large in the capital market, which seems unlikely for the risk-free security but not when securities are segregated into different risk classes, they have market power in the capital market which they can exploit to generate profit.

17 VMP^j = ∂Y_1^j/∂Z_0^j is the value of the marginal product from investing a dollar of inputs in firm j when the input mix is chosen optimally to maximize profit.

18 This solution is obtained by using the envelope theorem to eliminate the welfare effects of the consumer choice variables in (2.11). For income in the first period we have ∂v_0/∂I_0 = λ_0, and for income in the second period ∂v_1/∂I_1 = λ_1.

19 In a certainty setting without taxes there is no meaningful distinction between shares and bonds as they are both risk-free securities.

20 The notation for the trading costs (τ_t) and net expenditures (D_t) at each t ∈ {0, 1} was defined earlier in Section 2.2.2 for the consumer problem in (2.6), while the profit shares in firms (η_0) were defined in Section 2.2.4.

21 Both these optimality conditions are for an interior solution.

22 If consumers hold currency, so that λ_1/λ_0 = 1, then they have not maximized utility because there are net benefits from moving resources from currency into the risk-free security when λ_1/λ_0 > 1/(1 + i).

23 When the financial security reduces trading costs we use (2.15) to write the optimality condition for currency demand as

1 + D_0 ∂τ_0/∂m_0 = [1 − D_1 ∂τ_1/∂m_0]/(1 + i),

which equates the net cost of holding another dollar of currency in the first period to the discounted value of the net gain from using it in the second period.

In practice, there are changes in preferences, production technologies or other environmental variables traders face that cause relative price changes in the economy as resources flow between different activities. These price changes occur even when money demand and supply grow at the same rate.
This assumes the nominal net cash flows will rise with the higher expected rate of price inflation.

This expression is obtained by using aR_1 = V_0(1 + i).

If the inflation rate rises by Δπ the nominal interest rate will rise by Δπ(1 + r) when the Fisher effect holds.

When interest payments are subject to tax at rate τ, with 1 + i(1 − τ) = (1 + r^A)(1 + π), the tax-adjusted Fisher effect is

di/dπ |_{dr^A = 0} = (1 + r^A)/(1 − τ),

where r^A is the real after-tax interest rate.

29 Increases in the money supply are inflationary when they exceed the growth in money demand.

30 This short-term reduction in unemployment is captured by the Phillips (1958) curve, which finds a negative relationship between the rate of inflation and the level of unemployment. Friedman (1968) argued wages would be set in anticipation of increases in the rate of inflation, where this can lead to short-term reductions in employment and output. And it is much more likely when governments persistently print money to finance their spending through higher levels of inflation.

31 In a general equilibrium analysis these real effects impact endogenously on economic activity, causing the saving and investment schedules in Figure 2.13 to shift. But for standard preferences and technologies these changes would reduce the size of the final increase in saving and investment without overturning it.

32 The properties of the Bergson–Samuelson welfare function are examined in more detail in Jones (2005).

33 After totally differentiating the social welfare function and using the first-order conditions for optimally chosen consumption, the aggregate welfare change is

dW/β = dI_0 + dI_1/(1 + i) = p_0(1 + τ_0)dx_0 + p_1(1 + τ_1)dx_1/(1 + i).

These changes in activity can be solved using the budget constraint for the economy.
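The tax-adjusted Fisher relation 1 + i(1 − τ) = (1 + r^A)(1 + π) is easy to check numerically. The sketch below uses illustrative figures (the 3 per cent real after-tax rate and 30 per cent tax rate are assumptions, not values from the text) and confirms that a one-point rise in inflation raises the nominal rate by (1 + r^A)/(1 − τ) points.

```python
# Tax-adjusted Fisher effect: 1 + i*(1 - tau) = (1 + rA)*(1 + pi).
# Solve for the nominal rate i holding the real after-tax rate rA fixed,
# and confirm that di/dpi = (1 + rA)/(1 - tau).

def nominal_rate(rA, pi, tau):
    """Nominal rate implied by the tax-adjusted Fisher relation."""
    return ((1 + rA) * (1 + pi) - 1) / (1 - tau)

rA, tau = 0.03, 0.3          # hypothetical real after-tax rate and tax rate
i1 = nominal_rate(rA, 0.02, tau)
i2 = nominal_rate(rA, 0.03, tau)

slope = (i2 - i1) / 0.01     # response of i to a one-point rise in inflation
print(round(slope, 4))
```

With τ = 0 this collapses back to the untaxed Fisher effect, where di/dπ = 1 + r^A.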
First, we sum the consumer budget constraints:

p_0(1 + τ_0)(x_0 + g_0) + p_1(1 + τ_1)(x_1 + g_1)/(1 + i) = p_0(1 + τ_0)x̄_0 + p_1(1 + τ_1)x̄_1/(1 + i) − p_a a_0 + p_a(1 + i)a_0/(1 + i) − m_0 + m_0/(1 + i) + p_1 y_1/(1 + i) − p_0 z_0 + L,

where L is the sum of the lump-sum transfers from the government budget to consumers. These transfers are used in a conventional Harberger analysis to separate the welfare effects of policy changes, where the government budget constraint is L + p_0 g_0 + p_1 g_1/(1 + i) = i m_0^g/(1 + i). After combining the private and public sector budget constraints, we obtain the budget constraint for the economy:

p_0(1 + τ_0)x_0 + p_1(1 + τ_1)x_1/(1 + i) = p_0(1 + τ_0)x̄_0 + p_1(1 + τ_1)x̄_1/(1 + i) + i(m_0^g − m_0)/(1 + i) + p_1 y_1/(1 + i) − p_0 z_0.

After totally differentiating this aggregate constraint for the economy and using the first-order condition for profit-maximizing firms and the goods and money market clearing conditions, we have

p_0(1 + τ_0)dx_0 + p_1(1 + τ_1)dx_1/(1 + i) = −D_0(∂τ_0/∂m_0)dm_0 − [D_1/(1 + i)](∂τ_1/∂m_0)dm_0.

Finally, we obtain the welfare change in (2.22) by using the first-order condition for optimal currency demand, where

−[1 + D_0 ∂τ_0/∂m_0] + [1 − D_1 ∂τ_1/∂m_0]/(1 + i) = 0.

34 There are no other welfare effects because this is the only distorted market in the economy. Additional distortions are included in Chapter 8 when evaluating public sector projects.

35 For a detailed examination of private currency, see Dowd (1988), Hayek (1978), Selgin (1988) and White (1989).

36 In this setting commodity prices, asset prices and interest rates are time-specific, where in each period t the vectors of commodity and security prices are denoted p_t and p_{at}, respectively, and the risk-free interest rate i_t.
The endowments of goods in every time period can change over time, and it is relatively straightforward to include additional time periods in a certainty setting because all the equilibrium prices in the future are known in the first period by every agent. Consumers get utility from consumption expenditure in time periods out to infinity when they care about their heirs, but a lower bound has to be placed on their wealth to stop them from creating unbounded liabilities by continually borrowing to delay loan repayments until the infinite future where they have zero current value. While the interest rate and commodity prices can change over time in a certainty setting, they are known in advance by all consumers who use common discount factors on future net cash flows when they trade in frictionless competitive markets. That is not the case, however, when there is uncertainty unless agents have common information. Uncertainty is examined in the next chapter.

37 Using (2.12) we can write the share of profit for each consumer as η_t^h = V_t^h − Z_t^h for all t, where the budget constraint in each time period becomes X_t^h ≤ X̄_t^h − Z_t^h ≡ I_t^h for all t.

38 The long-term interest rate for period T is the geometric mean of the short rates in each period t − 1 to t, with

i_T = ∏_{t=1}^{T} (1 + {}_{t−1}i_t) − 1,

where {}_{t−1}i_t is the short rate over period t − 1 to t.

39 Frequently they are extracted earlier than this due to activity rules governments impose on titles granted for exploration and mining.

40 Since the annual interest rate is the geometric mean of a sequence of short rates over the year, the 100-day rate solves i_100 = (1 + i)^{100/365} − 1.

41 This valuation assumes the expectations hypothesis holds.

42 When bondholders face this risk and information is costly the shareholders may favour more risky projects, where bondholders respond by discounting bond prices. Firms recognise this by inviting large creditors onto their boards to give them greater access to information and more say in their investment decisions.
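The compounding of short rates into a long rate, and the 100-day rate described above, can be sketched as follows; all rates used are hypothetical.

```python
# Long rate as the compound of successive short rates,
# 1 + i_T = prod_t (1 + short_t), and a 100-day rate from an annual
# rate i via i_100 = (1 + i)**(100/365) - 1.  Rates are hypothetical.

from math import prod

short_rates = [0.04, 0.05, 0.045]           # one-period short rates
i_T = prod(1 + r for r in short_rates) - 1  # three-period long rate

i_annual = 0.06
i_100 = (1 + i_annual) ** (100 / 365) - 1   # 100-day rate

# Compounding the 100-day rate over 365/100 periods recovers i_annual.
assert abs((1 + i_100) ** (365 / 100) - (1 + i_annual)) < 1e-12
print(round(i_T, 6), round(i_100, 6))
```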
We examine these issues in more detail in Chapter 7.

43 Income taxes are levied on measured nominal income, while private investment decisions are based on economic real income which isolates the true change in

3 Uncertainty and risk

1 The hedonic prices can be estimated empirically by regressing apple prices on their different characteristics.

2 We follow Savage (1954) and define each state as a full description of the world that is of concern to the consumer; it represents an actual realization of the world at the end of time when all uncertainty is resolved. In prior time periods before the uncertainty is resolved there are possible events which are subsets of the set of true states of the world. In the first time period there is a single event that includes all possible states, while in the final time period there are as many events as there are states of the world. When there are more than two time periods there are fewer states in each event as time passes because some of the uncertainty is resolved. We provide a more complete description of the state space in Section 3.1. In the following analysis we use a two-period model where each event in the second period coincides with one of the states of nature. Thus, there are as many events as states of nature.

3 In effect, consumers have complete information about the demand for and supply of every commodity in every time period and in every state of the world. The analysis is much more complicated when consumers have different information and form different expectations about future equilibrium outcomes, which is the case when information is costly to acquire. Solving an equilibrium outcome for the economy in these circumstances requires that these costs be specified, as well as the technologies consumers use to acquire information.
For example, they may take information from the initial market prices of contingent commodity contracts because they provide signals of what the market is expecting commodity prices to be in the future.

4 The economics of insurance is examined separately in Chapter 5.

5 Policy evaluation is examined in Chapter 8.

6 This closely follows the presentation provided in Lengwiler (2004).

7 A partition divides the full set of states into pairwise disjoint non-empty subsets. Thus, the state space is the sum of these subsets. As time passes the partitions become finer. That is, there are fewer and fewer states in each event, until in the final period there are as many events as states of nature.

8 Ehrlich and Becker (1972) argue that assumption (iii) rules out self-protection by consumers to reduce the probability of bad outcomes. But we accommodate this activity by adding individual risk to the aggregate (state) uncertainty. In particular, we expand the possible outcomes in each state by including risk that is diversifiable across the population. For example, a portion of the population will suffer losses from car accidents, but they can self-protect and reduce the probability of accidents by driving more slowly and at safer times. Moral hazard arises when this effort to self-protect cannot be costlessly observed by insurers, where consumers have less incentive to self-protect if they are not directly rewarded with lower insurance premiums for their marginal effort. Individual risk expands the outcome space, but without affecting the state probabilities. The probability of each final outcome is the state probability plus the probability of incurring losses in that state. When individual risk can be costlessly eliminated through insurance, it is eliminated from the consumption expenditure of individual consumers.
9 It is implicitly assumed in the following discussion that consumers with more information have subjective probabilities that are closer to the ‘true’ underlying objective probabilities; these are the probabilities that would prevail with complete information. Moreover, consumers with the same information have the same beliefs, which is the Harsanyi doctrine. But this may not always apply in reality because consumers can have different technologies for converting information into beliefs, so that two consumers with the same information may form different beliefs. Indeed, they may have different computational skills and different inherent abilities to process information. When we characterize a competitive equilibrium in the following analysis consumers are assumed to have common beliefs so that they agree on the event-contingent prices for goods in future time periods. Allowing them to have different information and beliefs is problematic because we need to specify the way information is collected and processed and at what cost before we can solve the equilibrium outcome. For example, market prices may provide information to consumers that will change their beliefs, and this in turn will impact on prices through their trades. 10 When consumers hold different beliefs about the state-contingent commodity prices, due to incomplete and asymmetric information, the equilibrium outcomes are a function of their information sets. This creates problems when consumers obtain information from endogenously determined variables such as market prices when forming their beliefs. Radner (1972) considers the role of information and consumer beliefs on equilibrium outcomes under uncertainty. 11 Consumer preference rankings can be described by this generalized utility function when they are complete, continuous and transitive. 
In the following analysis we also assume they are monotonic, to rule out satiation, and strictly quasi-concave, to make the indifference curves strictly convex to the origin in the commodity space. These preferences do not separate the probabilities and utility derived from consumption in each state. This is examined in Section 3.2 where we derive the von Neumann–Morgenstern expected utility function.

12 We follow the practice adopted in the previous chapter and make good 1 numeraire when there is no fiat currency in the economy. One could easily refer to a unit of good 1 as a dollar and then continue to define values in dollar terms. The multiplier on the first-period budget constraint (λ_0^h) is the marginal utility of current income, while the state-contingent constraint multipliers (λ_s^h) are the marginal utility of income in the second period in each state s.

13 The indirect utility functions are mappings over state-contingent income when consumers optimally choose consumption bundles in each state to equalize their marginal utility from income spent on each good, with [∂u_s^h(⋅)/∂x_s^h(n)]/p_s(n) = λ_s^h for all n, s. We adopt the practice used in Chapter 2 of defining consumption expenditure in each period as X_0 = Σ_n p_0(n)x_0(n) and X_s = Σ_n p_s(n)x_s(n), respectively, and the market value of the endowments in each period as X̄_0 = Σ_n p_0(n)x̄_0(n) and X̄_s = Σ_n p_s(n)x̄_s(n).

14 For interior equilibrium solutions to the consumer problem the first-order conditions for the forward contract are λ_s^h p_s(n) − λ_0^h p_{fs}(n) = 0 for all n, s, while for current and state-contingent consumption, respectively, they are ∂u^h(⋅)/∂x_0^h(n) = λ_0^h p_0(n) for all n and ∂u_s^h(⋅)/∂x_s^h(n) = λ_s^h p_s(n) for all n, s. They are straightforward extensions of the optimality conditions in the certainty models examined previously in Chapter 2.

15 The first-order conditions for optimally supplied forward contracts are λ_s^j p_s(n) − p_{fs}(n) = 0 for all n, s.
16 In later chapters when we include taxes on income we will separate the capital and income in these payoffs, as R_{ks} = p_{ak}(1 + i_{ks}) for all k, s, where i_{ks} is the rate of return to security k in each state s.

17 After substituting for η_0 we can write income in the first period as I_0 ≡ X̄_0 − Z_0, where Z_0 is the amount saved.

18 For optimally chosen current consumption, we have u_0′(n)/p_0(n) = λ_0 for all n, and u_s′(n)/p_s(n) = λ_s for all n, s. Thus, the constraint multipliers are the marginal utility of income in the first period and in each state s, respectively, where φ_s = λ_s/λ_0 is the marginal rate of substitution between income in future state s and the current period; it is the discount factor used by consumers to evaluate income in state s.

19 The payouts in each state have been normalized at unity.

20 A complete capital market is frequently referred to as a full set of insurance markets. DeAngelo and Masulis (1980a, 1980b) exploit this property of a complete capital market when they examine the effects of firm financial policy by working directly with primitive securities.

21 We examine the Miller (1977) equilibrium in Chapter 7 where consumers with progressive personal income taxes have different tax preferences for securities that allow them to increase their wealth through tax arbitrage. This activity continues until they eliminate their tax preferences or have borrowing constraints imposed on them.

22 A unique equilibrium will exist in the absence of taxes when consumers have strictly convex indifference sets and firms have strictly convex production possibility sets. The indifference sets are mappings from ordinal utility functions that are complete, transitive, reflexive, continuous and strictly quasi-concave, while the production possibility sets are mappings from strictly concave production functions with no fixed costs.
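The discount factors φ_s = λ_s/λ_0 described above act as state prices: a security's value is its state payoffs weighted by the φ_s, and a claim paying one unit in every state is priced at the risk-free discount factor. A minimal sketch with hypothetical numbers:

```python
# State-price valuation: with discount factors phi_s = lambda_s/lambda_0,
# a security paying R_s in state s is worth sum_s phi_s * R_s.
# The phi values are hypothetical.

phi = {"bad": 0.45, "good": 0.50}   # state-contingent discount factors

def price(payoffs):
    return sum(phi[s] * payoffs[s] for s in phi)

# Primitive securities pay one unit in a single state.
p_bad = price({"bad": 1.0, "good": 0.0})
p_good = price({"bad": 0.0, "good": 1.0})

# A risk-free claim pays one unit in every state: 1/(1 + i) = sum_s phi_s.
discount = price({"bad": 1.0, "good": 1.0})
i = 1 / discount - 1
print(p_bad, p_good, round(i, 4))
```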
A unique equilibrium will exist under more general circumstances, where a proof of the existence of equilibrium in the Arrow–Debreu economy is provided by Mas-Colell et al. (1995). Multiple equilibrium outcomes cannot be ruled out by adopting these standard assumptions on preferences and production technologies in economies with taxes and other price distortions. This is demonstrated by Foster and Sonnenschein (1970). 23 These possibilities are examined in Chambers and Quiggin (2000). 24 The utility functional U(·) is a cardinal preference mapping over consumption expenditure. 25 For a discussion of these difficulties, see Grant and Karni (2004) and Karni (1993). 26 Anscombe and Aumann (1963) make the distinction between roulette-wheel lotteries and horserace lotteries so that they can identify the subjective probabilities consumers assign to states when they have state-independent preferences. It allows them to separate randomness in income within each state and between states. 27 If we adopt the common prior assumption, which is referred to as the Harsanyi doctrine, consumers with the same information have the same probability beliefs. But Kreps (1990) argues that since we allow consumers to have different preferences over the same consumption bundles we should also allow them to form different probability beliefs from the same information. 28 The expected utility function can be used to rank preferences over state-contingent outcomes if we extend the independence axiom. Savage does this by adopting the sure-thing principle so that rankings of outcomes depend only on states where they differ. There are also additional axioms to describe the way consumers form their probability beliefs. Ultimately the aim is for consumers to have subjective probabilities that they believe could be the true objective probabilities. Mas-Colell et al. 
(1995) adopt the extended independence axiom that makes preference rankings over roulette-wheel type lotteries independent of the state of nature. This expands the randomness in consumption expenditure to the state space by mapping all the roulette-wheel type lotteries onto every state, which is not the case when the sure-thing principle is used.

29 Mehra and Prescott (1985) find the risk premium in equity is much larger than is predicted by the CCAPM when consumers are assigned a coefficient of relative risk aversion that is consistent with empirical evidence. Based on behavioural characteristics from experimental studies, Benartzi and Thaler (1995) argue that this puzzle can be explained by consumers having a degree of loss aversion where they place a larger weight on losses than they do on gains. Indeed, this may also be evidence that the CCAPM fails because consumers have state-dependent preferences. Other explanations for the puzzle are examined in Chapter 7.

30 The expectations operator E_t(·) uses probabilities that are based on information available at time t.

31 By taking a second-order Taylor series expansion of EU(Ĩ) = U(Ī − RP(Ĩ)) around Ī, we have

U(Ī) + U′(Ī)E(Ĩ − Ī) + ½U″(Ī)E[(Ĩ − Ī)²] = U(Ī) − U′(Ī)RP(Ĩ).

We obtain the expression for RP(Ĩ) by noting that E(Ĩ − Ī) = 0.

32 We obtain (3.15) by solving the risk premium as a function of the growth in consumption expenditure, where the variance in consumption can be decomposed as

σ_I² = E[(Ĩ − Ī)²] = σ_g²Ī²,

with σ_g² = E[g̃²] and g̃ = (Ĩ − Ī)/Ī being the growth rate in consumption expenditure. By using this normalization we can solve the risk premium as RP(Ĩ) = ½(σ_I²/Ī)·RRA.

Mean–variance preferences, where consumers care only about the first two moments of the distribution over consumption outcomes even when they are not normally distributed, provide another basis for a mean–variance analysis.
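The approximation RP ≈ ½(σ_I²/Ī)·RRA can be compared with the exact certainty equivalent for a CRRA utility function; the two-state income lottery below is hypothetical.

```python
# Risk premium approximation RP ≈ (1/2)*(var(I)/I_bar)*RRA under CRRA
# utility U(I) = I**(1 - g)/(1 - g), compared with the exact certainty
# equivalent.  The two-state income lottery is hypothetical.

g = 2.0                                    # coefficient of relative risk aversion
U = lambda I: I ** (1 - g) / (1 - g)
U_inv = lambda u: (u * (1 - g)) ** (1 / (1 - g))

incomes, probs = [90.0, 110.0], [0.5, 0.5]
I_bar = sum(p * I for p, I in zip(probs, incomes))
var_I = sum(p * (I - I_bar) ** 2 for p, I in zip(probs, incomes))

EU = sum(p * U(I) for p, I in zip(probs, incomes))
RP_exact = I_bar - U_inv(EU)               # solves U(I_bar - RP) = EU
RP_approx = 0.5 * (var_I / I_bar) * g
print(round(RP_exact, 4), round(RP_approx, 4))
```

For this small symmetric lottery the approximation and the exact premium are very close, as the Taylor expansion suggests.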
Prior to Fama, the widely held view was that security prices followed a random walk. But the hypothesis has a number of important limitations which are discussed in LeRoy (1989).

When securities pay dividends they need to be reinvested in the security for prices to follow a discounted martingale.

In the following analysis we use conventional notation to define the statistical properties of random variables, where for each security k we have the expected return

E(ĩ_k) = ī_k = Σ_{s=1}^{S} π_s i_{ks}, with Σ_s π_s = 1;

the variance

Var(ĩ_k) = σ_k² = E[(ĩ_k − E(ĩ_k))²] = Σ_{s=1}^{S} π_s(i_{ks} − ī_k)²;

and the standard deviation Std(ĩ_k) = σ_k = √σ_k².

From an economic perspective, the standard deviation is a measure of dispersion that arises naturally in a mean–variance analysis. Since utility is determined by consumption, the welfare effects of uncertainty will depend on the expected value of consumption and, for risk-averse consumers, how far it deviates from that expected value. Since consumption is funded, at least in part, by returns to portfolios of securities, the risk in each security is determined by the covariance of its return with consumption expenditure. When security returns are less than perfectly positively correlated with each other it is possible to eliminate part of their variance by bundling them in portfolios. This diversification effect is determined by the covariance of security returns, given, for any two risky securities k and d, by

Cov(ĩ_k, ĩ_d) = σ_{kd} = E[(ĩ_k − E(ĩ_k))(ĩ_d − E(ĩ_d))] = Σ_{s=1}^{S} π_s(i_{ks} − ī_k)(i_{ds} − ī_d),

and by the coefficient of correlation Corr(ĩ_k, ĩ_d) = ρ_{kd} = σ_{kd}/(σ_k σ_d).

37 Cochrane (2001) shows how all the popular equilibrium pricing models in the literature, including the CAPM, intertemporal CAPM, APT and consumption-beta CAPM, are obtained as special cases of (3.17) by linearizing the pricing kernel m̃ over a set of state variables that isolate aggregate consumption risk.
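These state-probability moments can be computed directly; the three-state returns below are hypothetical.

```python
# Mean, variance, covariance and correlation over S states, using the
# state-probability formulas above.  Returns are hypothetical.

probs = [0.3, 0.4, 0.3]
i_k = [0.15, 0.05, -0.05]   # returns to security k by state
i_d = [-0.02, 0.06, 0.08]   # returns to security d by state

mean = lambda x: sum(p * v for p, v in zip(probs, x))

def cov(x, y):
    mx, my = mean(x), mean(y)
    return sum(p * (v - mx) * (w - my) for p, v, w in zip(probs, x, y))

var_k, var_d = cov(i_k, i_k), cov(i_d, i_d)
rho = cov(i_k, i_d) / (var_k ** 0.5 * var_d ** 0.5)
print(round(mean(i_k), 4), round(rho, 4))
```

Here ρ_{kd} is negative, so combining the two securities in a portfolio eliminates part of their variance.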
Cochrane makes the point that (3.17) also holds for individual consumers when the capital market is incomplete and they have different expectations, but their discount factors and consumption risk can be different in these circumstances. We derive the CBPM in a complete capital market and with a common expectations operator so that (3.17) is the same for all consumers.

38 This decomposition is obtained by writing the covariance term as

Cov(m̃, R̃_k) ≡ E[(m̃ − m̄)(R̃_k − R̄_k)] = E(m̃R̃_k) − E(m̃)E(R̃_k).

39 Cochrane and Lengwiler refer to this equation as the consumption-based capital asset pricing model (CCAPM). In this book it is referred to as the CBPM, while the term CCAPM is used in Chapter 4 to refer to the consumption-beta CAPM derived by Breeden and Litzenberger (1978) and Breeden (1979) where the beta coefficient is the covariance between the expected return on any security k and the growth in aggregate consumption divided by the variance in aggregate consumption. It is a conditional beta coefficient in a multi-period setting when the variance in aggregate consumption changes over time.

40 Insurance markets specialize in pooling diversifiable risks. When consumers purchase insurance they create a mutual fund that makes payments to those who incur losses. A common example is car insurance, where drivers face a positive probability of having an accident that can impact on their consumption. By purchasing insurance they reduce this consumption risk and spread the cost of car accidents over all car insurers. However, problems can arise when there is asymmetric information between traders in the insurance market – in particular, when insurers cannot observe effort by consumers to change their probability or size of loss, or when they cannot distinguish between consumers with different risk. We examine these issues in Chapter 5.
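The covariance decomposition in note 38, combined with the pricing condition 1 = E(m̃R̃_k), delivers the risk premium E(R̃_k) − R_f = −Cov(m̃, R̃_k)/E(m̃), with R_f = 1/E(m̃). A numerical sketch with hypothetical two-state values:

```python
# The decomposition E(mR) = E(m)E(R) + Cov(m, R) behind the CBPM,
# checked on hypothetical two-state numbers.  R is chosen so that the
# pricing equation 1 = E(mR) holds, which forces the risk premium
# E(R) - Rf = -Cov(m, R)/E(m), with Rf = 1/E(m).

probs = [0.5, 0.5]
m = [1.1, 0.8]               # stochastic discount factor by state
R = [0.9, 1.2625]            # gross return chosen so E(m*R) = 1

E = lambda x: sum(p * v for p, v in zip(probs, x))
E_mR = E([a * b for a, b in zip(m, R)])
cov_mR = E_mR - E(m) * E(R)

Rf = 1 / E(m)
premium = E(R) - Rf
assert abs(E_mR - 1) < 1e-9                 # correctly priced return
assert abs(premium + cov_mR / E(m)) < 1e-9  # CBPM risk premium
print(round(premium, 6), round(cov_mR, 6))
```

Because this return is high when the discount factor is low, its covariance with m̃ is negative and it earns a positive premium over the risk-free return.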
41 Using the power utility functions in (3.20) to compute RRA in (3.15), we have

RRA = −I_{t+1}U″(I_{t+1})/U′(I_{t+1}) = γI_{t+1}^{−γ−1}I_{t+1}/I_{t+1}^{−γ} = γ, for γ ≠ 1,

and

RRA = (I_{t+1}^{−2}/I_{t+1}^{−1})I_{t+1} = 1, for γ = 1.

42 From (3.21) we obtain the respective marginal utilities U′(I_{t+1}) = I_{t+1}^{−γ} and U′(I_{t+1}) = 1/I_{t+1}. They are substituted into the stochastic discount factor.

43 The log utility CAPM holds unconditionally (which means it is independent of time t) when security returns are identical and independently distributed over time to rule out changes in the investment opportunity set. To see how the return on wealth can be used as a proxy for aggregate consumption in the stochastic discount factor, we solve the return on wealth over period t to t + 1, using (3.22) with γ = 1, as

1 + i_{W,t+1} = (W_{t+1} + I_{t+1})/W_t = [(δ/(1 − δ)) + 1]I_{t+1}/[(δ/(1 − δ))I_t] = 1/m_{t+1}.

Consumption is a constant proportion of wealth for log utility because additional consumption expenditure (I_{t+1}) is exactly offset by the lower stochastic discount factor (δ(I_{t+1}/I_t)^{−1}). Thus, wealth is unaffected by changes in future consumption.

Breeden and Litzenberger (1978) and Breeden (1979) derive the CCAPM in discrete time by adopting the power utility in (3.20) with γ ≠ 1 when security returns are jointly lognormally distributed with aggregate consumption. This model holds unconditionally (which makes it independent of time t) when the interest rate is constant and security returns are independently and identically distributed. The CCAPM is derived in Section 4.3.4.

Later in Chapter 4 we summarize the equity premium and low risk-free interest rate puzzles identified by Mehra and Prescott (1985). They show how consumers with power utility need a high CRRA to explain the large risk premium observed in historical data of returns to a stock market index.
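The log-utility case can be verified numerically: with wealth equal to δ/(1 − δ) times current consumption, the gross return on wealth is exactly the reciprocal of the stochastic discount factor. The consumption path below is hypothetical.

```python
# Log utility: stochastic discount factor m = delta*(I1/I0)**(-1), and
# wealth W = (delta/(1 - delta))*I, so the gross return on wealth
# equals 1/m.  The consumption path is hypothetical.

delta = 0.96
I_t, I_t1 = 100.0, 104.0

m = delta * (I_t1 / I_t) ** (-1)        # stochastic discount factor
W_t = delta / (1 - delta) * I_t         # wealth as a constant multiple of I
W_t1 = delta / (1 - delta) * I_t1
gross_return = (W_t1 + I_t1) / W_t      # (W_{t+1} + I_{t+1}) / W_t

assert abs(gross_return * m - 1) < 1e-9
print(round(gross_return, 6))
```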
But this also means they view consumption in different time periods as highly complementary and require a higher equilibrium interest rate to get them to save in a growing economy. Indeed, the interest rate is higher than what is observed in the data. Epstein and Zin (1989) use a generalized expected utility function that separates the coefficient of relative risk aversion from the intertemporal rate of substitution in consumption to provide a solution to the low risk-free rate puzzle.

This section shows how diversifiable risk can be eliminated by trading risky securities. In Chapter 5 we show how it is costlessly eliminated by trading insurance in a common information setting.

We could allow aggregate uncertainty and then let consumers face loss L in each state of nature with probability π_L. Indeed, the loss and the probability of loss could also be made state-dependent. While this may be more realistic, it makes the analysis unnecessarily complex. Aggregate uncertainty is removed from the analysis in this section so that we can focus on diversifiable risk.

There are two reasons for consumers to trade primitive securities in this economy: one is to transfer income between the two time periods, while the other is to shift income between the two states in the second period. If consumers have identical preferences and income endowments there are no potential gains from transferring income between periods, but there are potential gains from transferring income between the states. In these circumstances we have a_B > 0 and a_G < 0 (with a_B + a_G = 0) to smooth consumption across the states. This generates aggregate net revenue in the first period of H(p_{aB}a_B + p_{aG}a_G), and an aggregate net cost in the second period of H(π_B a_B + π_G a_G)/(1 + i) measured in present value terms. The risk-free return is used in the discount factor here because, by the law of large numbers, H(π_B a_B + π_G a_G) is a certain net payout to securities.
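With purely diversifiable risk, the law of large numbers makes the aggregate payout certain, so each primitive security price settles at the state probability discounted at the risk-free rate. A sketch with hypothetical probabilities:

```python
# With purely diversifiable risk, the law of large numbers makes the
# aggregate payout on state claims certain, so competition prices each
# primitive security at pa_s = pi_s/(1 + i).  Numbers are hypothetical.

pi = {"B": 0.2, "G": 0.8}    # probabilities of the bad and good states
i = 0.05                     # risk-free interest rate

pa = {s: pi[s] / (1 + i) for s in pi}

# Holding both primitive securities pays 1 for sure, so the bundle
# costs the risk-free discount factor 1/(1 + i).
assert abs(sum(pa.values()) - 1 / (1 + i)) < 1e-9
print({s: round(p, 4) for s, p in pa.items()})
```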
In a competitive capital market the no arbitrage condition drives the security prices to p_{as} = π_s/(1 + i) for s ∈ {B, G}.

In models with multiple future time periods consumers also care about changes in relative commodity prices over time when they consume bundles of goods. Thus, they care about the real value of consumption expenditure in the future because it determines the combinations of goods they consume.

Since the derivative security has β_{II} = 1, we can use (3.31) to solve the risk premium for aggregate consumption risk as ψ = (ī_I − i)/σ_I². After substitution we obtain (3.32).

Arrow (1971) made the important observation that quadratic preferences have the unattractive property of IARA.

Meyer (1987) makes the observation that joint normal distributions are drawn from a class of linear distribution functions that result in mean–variance preferences when consumers have NMEU functions.

Ross (1978) identifies distributions that will lead to two-fund separation where consumers choose the same risky portfolios to combine with the risk-free security.

Cochrane (2001) provides a detailed and excellent exposition of the way state variables can be identified in the CBPM and how to make the stochastic discount factor linear in these variables.

These discount factors are obtained by using the NMEU function in (3.13) to obtain the CBPM in (3.17) when there are more than two time periods.

A risk-free bond pays the same real (nominal) interest payment at every event in each time period where interest is paid. Nominal risk-free bonds can be risky due to inflation risk.

Discount bonds are defined in Section 2.4.5. They are coupon bonds with zero coupon interest. Thus, they pay a specified cash flow at their maturity date, and nothing in prior periods.

4 Asset pricing models

1 The efficient mean–variance frontier identifies the highest expected returns to portfolios of risky securities at each level of risk.
2 The notation was defined in Chapter 3, where X0 and Xs are the values of current and future consumption expenditure in each period, respectively, X̄0 the market value of current endowments, V0 = Σk pak ak the market value of the portfolio of securities, and η0 the share of profit in private production. 3 We remove the second-period endowments from the intertemporal budget constraints defined in (3.7) to remove endowment risk from future consumption. 4 Once consumers allocate their wealth to future consumption by choosing the value and composition of their portfolio, they indirectly choose their current consumption expenditure when, as is assumed here, there is non-satiation in each time period. 5 Current consumption expenditure is being optimally chosen in the background of the analysis when consumers choose their portfolios of securities. 6 The minimum variance portfolio is obtained by differentiating the portfolio variance in (4.3) with respect to a and setting the expression to zero, where â = (σB² − σAB)/(σA² + σB² − 2σAB). 7 A fully indexed bond which pays a constant real interest rate is a pure risk-free security. In most countries short-term government bonds are used as the risk-free security, but they are not normally indexed for unanticipated changes in inflation. Thus, even if the Fisher effect holds, the real interest rate on these bonds will change with unanticipated inflation. 8 At the margin all investors must be equally risk-averse along a linear efficient frontier as their indifference curves have the same slope. It takes a larger proportion of risky asset A in the portfolio of investor 2 to equate the slopes of their indifference curves. 9 The linear factor analysis relates the security returns to random values of the factors as ik = ck + βk1F1 + … + βkGFG + εk, where ck is a constant.
By adding and subtracting Σg βkg F̄g to this expression, we have ik = ck + Σg βkg F̄g + Σg βkg fg + εk, where fg = Fg − F̄g is the deviation in factor g from its mean. We obtain (4.22) by noting that īk = ck + Σg βkg F̄g. 10 Market risk is priced uniquely in the APT when the residuals are eliminated from the returns to the mimicking portfolios constructed to price factor risk. When this happens it becomes an exact factor analysis. We demonstrate this in Section 4.3.3 when deriving the APT pricing equation from the CBPM in (3.17). 11 The no arbitrage condition was defined earlier in Theorem 3.1. 12 Using vector notation we can write (4.23) as aA[1] = 0, where aA is the (1 × K) vector of security weights in the arbitrage portfolio and [1] the (K × 1) unit vector. As a risk-free portfolio we must have aAβ = [0] and aAε = [0], where β is the (K × G) matrix of beta coefficients, [0] the (1 × G) vector of zeros and ε the (K × 1) vector of residuals, which leads to aA ī = 0, with ī the (K × 1) vector of expected security returns. Since aA is orthogonal to the vector [1] and the columns in matrix β, which implies it is also orthogonal to the vector of security returns ī, there is a linear relationship between these vectors, with ī = λF + βλβ, where λF and λβ are, respectively, the (K × 1) and (G × 1) vectors of non-zero constants. A crucial feature of this model is the zero price for residuals in the mimicking portfolio returns, and the no arbitrage condition. The residuals attract no premium when they have zero variance, while the absence of arbitrage profits maps security returns onto the premiums for market risk isolated by the factors. In practice, the R² for empirical estimates of the beta coefficients in (4.22) is less than unity, which leaves a positive variance in the residuals. In other words, some of the market risk has not been identified by the common risk factors in the regression analysis.
As the number of traded securities (K) increases, R² → 1 and the variance in the residuals approaches zero. This is examined in detail by Cochrane (2001). The pricing relationship in (4.28) will also hold when the capital market is incomplete, but consumers can have different discount rates across the states of nature. In this setting we continue to assume consumers have conditional perfect foresight where they correctly predict equilibrium outcomes at each event in every future time period. The expectations operator Et(·) is based on probability beliefs formed at current time t. We can think of an infinitely lived consumer as someone who cares as much for their heirs as they do for themselves, which is why the same utility function is used by each consumer in all future time periods. But a lower bound must be placed on wealth to stop them creating unbounded liabilities by rolling their debt repayments out to infinity where they have zero present values. The relationship between the short- and long-term stochastic discount factors in a multi-period setting is summarized in Section 3.4. For the long-term discount factor over period t to T, we have tmT = δ^(T−t)U′(IT)/U′(It), which is the product of a full set of short-term stochastic discount factors, one for each consecutive time period, with tmT = tmt+1 · t+1mt+2 · … · T−1mT, where the short-term discount factors are τmτ+1 = δU′(Iτ+1)/U′(Iτ). 17 The wealth portfolio is a combination of the risk-free bond and a bundle of risky securities. As noted earlier in Section 4.1, every consumer holds the same risky bundle (M) in the CAPM, which is why it is referred to as the market portfolio, but they hold different combinations of it with the risk-free bond according to their risk preferences. Investors who are relatively more risk-averse at the margin will hold more of the risky portfolio M in their wealth portfolio.
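The chain rule for stochastic discount factors described above can be verified numerically with power utility (the discount factor, risk-aversion coefficient and consumption path below are assumed values): the long-term factor over t to T equals the product of the consecutive one-period factors.

```python
delta, gamma = 0.97, 2.0
I = [1.0, 1.03, 1.01, 1.06, 1.10]    # a hypothetical consumption path

def u_prime(c):
    # U(c) = c^(1-γ)/(1-γ)  ⇒  U′(c) = c^(−γ)
    return c ** (-gamma)

def m(t, T):
    """Long-term discount factor: δ^(T−t) U′(I_T)/U′(I_t)."""
    return delta ** (T - t) * u_prime(I[T]) / u_prime(I[t])

prod = 1.0
for tau in range(0, 4):              # chain of one-period factors
    prod *= m(tau, tau + 1)
print(prod, m(0, 4))                 # equal up to rounding: the product telescopes
```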
18 A risk-free bond pays the same rate of return in every event in each time period. But it can change over time when the term structure of interest rates (for risk-free government bonds with different maturity dates) rises or falls due to changes in the investment opportunity set. A constant interest rate makes the term structure flat so that the interest rate on a risk-free bond is the same at each event and in each time period. 19 Stein's lemma states that if iW and Rk are joint normally distributed, and m(iW) is differentiable with m′(iW) < ∞, then Cov[m(iW), Rk] = E0[m′(iW)]Cov(iW, Rk). It is obtained using the decomposition E(mRk) = E(m)E(Rk) + Cov(m, Rk), where m and Rk are time-dependent variables, with E(mRk) = Σ∞t=1 E(mt)E(Rkt). 20 Breeden (1979) extends Merton's analysis by allowing changes in relative commodity prices, while Long (1974) derives the ICAPM using discrete-time analysis where security returns and the factors used to isolate consumption risk are multi-variate normal. The normality assumption is not required in the continuous-time model of Merton as the two securities are normally distributed over infinitely small time intervals for the diffusion process used to describe security returns. 21 The wealth portfolio is a combination of the risk-free bond and a bundle of risky securities. In the CAPM every consumer holds the same risky bundle (M), which is why it is referred to as the market portfolio, but they hold different combinations of it and the risk-free bond due to differences in risk preferences. Investors who are relatively more risk-averse at the margin hold more of the risky portfolio M in their wealth portfolio. 22 We obtain (4.34) by expanding (4.28), using Rk = (1 + ik)pak, as E[m(1 + ik)] = E(m)E(1 + ik) + Cov[m, (1 + ik)] = 1 ∀k. Since E(m) = 1/(1 + i), we have: ik − i = −(1 + i)Cov[m, (1 + ik)] ∀k. When security returns are joint-normally distributed Stein's lemma (summarized in note 19) allows us to decompose the covariance term as Cov[m, (1 + ik)] = E(∂m/∂iW)Cov(iW, ik) + E(∂m/∂z)Cov(z, ik), where, from (4.32), ∂m/∂iW = δVWW paW aW/VW and ∂m/∂z = δVWz/VW. 23 Merton finds the market portfolio may not be mean–variance efficient in the ICAPM due to the additional state variables (factors). However, Fama (1998) shows that investor portfolios are in fact multi-factor minimum-variance efficient, where consumers combine the market portfolio with a risk-free security and mimicking portfolios to hedge against the factor risk. The return on each mimicking portfolio is perfectly correlated with a state variable and uncorrelated with the return on the market portfolio and all other state variables. Thus, the risk premiums in mimicking portfolio returns are compensation paid to investors for bearing non-diversifiable risk described by their state variables. 24 Merton argues that if all traded securities by some quirk of nature are uncorrelated with the interest rate, the term structure of interest rates for a riskless long-term bond will not satisfy the expectations hypothesis. This is based on the observation that consumers will pay a premium for a man-made security such as a long-term bond that is perfectly negatively correlated with changes in the interest rate, and hence by assumption is not correlated with any other asset. But this premium would be eliminated by arbitrage in a frictionless competitive capital market. 25 Since (4.22) is constructed as a regression equation the factor deviations, which have zero mean values (E(fg) = 0 for all g), are uncorrelated with each other (Cov(fg, fj) = 0 for all g ≠ j) and the residuals have zero mean (E(εk) = 0 for all k).
Equation (4.22) describes the returns to each security k and not any arbitrary set of returns by assuming the error terms are uncorrelated across securities, with E(εk εj) = 0 for all k ≠ j. As the factors are reported as rates of return the sensitivity coefficients in (4.22) are standard beta coefficients, with βkg = Cov(ik, ig)/Var(ig). 26 Cochrane shows beta pricing models are equivalent to models with linear stochastic discount factors. To see this, start with the exact factor pricing model (without residuals) īk = i + βkλ, where λ is the (G × 1) column vector of factor prices. Based on the linear factor model in (4.22), we have λg = −(1 + i)E(mfg) for all g, and βk = E(ik f′)/E(ff′). Since f′ is the (1 × G) row vector of factor deviations from their expected values, E(ff′) is a variance–covariance matrix. Using these decompositions we can write the APT pricing model as 1 + īk = (1 + i) − (1 + i)E(ik f′)E(mf)/E(ff′). Then, by defining E(m) = a = 1/(1 + i) ≠ 0 and b′ = E(mf′)/E(ff′) with E(ff′) non-singular, we have 1 + īk = 1/a − [E(ik f)b′]/a, for m = a + b′f. 27 When security prices follow a Markov process the expected price in the next period depends solely on the current price and not on prices in previous time periods. 28 Breeden derives the CCAPM with stochastic labour income but without leisure. When labour supply is endogenous, leisure has to be included in the measure of aggregate consumption. 29 For the power utility function in (3.20) with γ ≠ 1, wealth can be solved as Wt = Et Σ∞τ=1 δ^τ(1 + gt+τ)^(−γ)It+τ, where gt+1 = (It+1 − It)/It is the growth rate in consumption. 30 The decomposition in (4.39) is obtained in two steps.
First, ln(1 + ρ) is solved using (4.38) when 1 + g is continuous and lognormally distributed with mean E[ln(1 + g)] and variance Var[ln(1 + g)], as: (a) ln(1 + ρ) = −γE[ln(1 + g)] + E[ln(1 + ik)] + ½{γ²Var[ln(1 + g)] + Var[ln(1 + ik)] + 2Cov[−γ ln(1 + g), ln(1 + ik)]}. It is obtained by noting that when the product of two random variables A and B is lognormally distributed, we have: ln E(AB) = E(ln A) + E(ln B) + ½[Var(ln A) + Var(ln B) + 2Cov(ln A, ln B)]. Next, the price of the risk-free bond, 1/(1 + i) = E(m) = [1/(1 + ρ)]E[(1 + g)^(−γ)], is used to solve ln(1 + ρ) when security returns and consumption growth are lognormally distributed, as: (b) ln(1 + ρ) = ln(1 + i) + ln[E(1 + g)^(−γ)], with ln[E(1 + g)^(−γ)] = −γE[ln(1 + g)] + ½γ²Var[ln(1 + g)]. We obtain (4.39) by combining (a) and (b). 31 With lognormally distributed consumption growth, we have: ln[E(1 + ik)] = E[ln(1 + ik)] + ½Var[ln(1 + ik)]. 32 There are a number of discrepancies in measures of aggregate consumption in the national accounts. Some capital expenditure is included at the time of purchase, but it should instead be the consumption flows generated over time. There are non-marketed consumption flows, like leisure and home-produced consumption, that are not included in reported data. Most countries make adjustments to include major items such as the rental value of housing services consumed by owner-occupiers. Empirical tests of the CCAPM use a consumer price index to obtain a real measure of consumption expenditure. We summarize the results for some of these tests later in Section 4.5. 33 While consumption in each time period is related indirectly through wealth, which is the discounted present value of future income that can be transferred between periods by trading in the capital market, the utility derived in each time period is independent of consumption expenditure in all other periods.
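The lognormal identity used in note 30, ln E[(1 + g)^(−γ)] = −γE[ln(1 + g)] + ½γ²Var[ln(1 + g)], can be checked by Monte Carlo (the growth mean, volatility and risk-aversion coefficient below are illustrative values):

```python
import math, random

random.seed(0)
mu, sigma, gamma = 0.02, 0.03, 2.0   # assumed parameters of ln(1+g)
# Draw 1+g lognormal: ln(1+g) ~ N(mu, sigma^2)
draws = [math.exp(random.gauss(mu, sigma)) for _ in range(200_000)]
mc = math.log(sum(x ** (-gamma) for x in draws) / len(draws))
analytic = -gamma * mu + 0.5 * gamma ** 2 * sigma ** 2
print(round(mc, 4), round(analytic, 4))  # the two are close
```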
34 Optimally chosen future consumption expenditure can be summarized using means and variances because consumers have state-independent preferences. This means they care only about the statistical distribution of their consumption expenditure. The mean–variance analysis then follows from distributional assumptions when security returns are completely described by their means and variances. A less satisfactory basis for using a mean–variance analysis is to assume consumers have quadratic preferences. 35 Empirical studies compute economic returns to publicly listed shares by measuring changes in their prices over time and adding dividend payments to them. 36 In the unconditional versions of the CAPM and the CCAPM the parameters in their stochastic discount factor are constant over time, while in the conditional versions of the models they are time-dependent. 37 The Hansen–Jagannathan bound can be obtained by using (4.28) to write the pricing relationship for security k as E[m(1 + ik)] = E(m)E(1 + ik) + Cov(m, 1 + ik) = 1. After rearranging these terms, and using E(m) = 1/(1 + i), we have ik − i = (1 + i)Cov(−m, ik) = (1 + i)Corr(−m, ik)σm σk, with −1 ≤ Corr(−m, ik) ≤ +1. Since the correlation coefficient cannot exceed unity, we set Corr(−m, ik) = 1 and measure the risk premium as (ik − i)/σk ≤ (1 + i)σm. There is a one-to-one relationship between consumption and wealth when consumers have a constant coefficient of relative risk aversion. Thus, by using the power utility function U(It) = It^(1−γ)/(1 − γ), we can write the stochastic discount factor as mt = δ(It/I0)^(−γ) = δ(1 + g)^(−γ), where its variance becomes: σm² ≈ Var[−γ ln(1 + g) + ln δ] ≈ γ²Var[ln(1 + g)] ≈ γ²σg², with g = (I − I0)/I0. From this we have σm ≈ γσg in (4.42).
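Combining the Hansen–Jagannathan bound in note 37 with σm ≈ γσg gives the familiar equity premium arithmetic; the numbers below (consumption volatility of 3 per cent, an equity volatility of 17 per cent and a 1 per cent risk-free rate) are illustrative values, not figures from the text:

```python
def max_premium(gamma, sigma_g, i, sigma_k):
    """Upper bound on the risk premium implied by the HJ bound
    (i_k − i)/σ_k ≤ (1 + i)σ_m with σ_m ≈ γσ_g."""
    return (1 + i) * gamma * sigma_g * sigma_k

# With γ = 1 the bound on the equity premium is tiny relative to the
# observed premium of roughly 6 per cent — the equity premium puzzle.
bound = max_premium(gamma=1.0, sigma_g=0.03, i=0.01, sigma_k=0.17)
print(round(bound, 4))  # about 0.005, i.e. roughly half of one per cent
```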
This expression is obtained by writing the risk-free discount factor in logarithmic form, as ln δ − γ ln E(1 + g) = −ln(1 + i), where ln(1 + g) ≈ g if the variance in the discount factor (m) is small. McGrattan and Prescott (2003) argue there are no puzzles about the average debt and equity returns over the last century when taxes on security returns, diversification costs and regulatory constraints imposed on US households are taken into account. There are excellent technical summaries of these extensions to the standard CCAPM in Cochrane (2001), Kocherlakota (1990) and Lengwiler (2004). Abel distinguishes between habit determined by past consumption of other consumers as ‘catching up with the Joneses’ and habit determined by current consumption of other consumers as ‘keeping up with the Joneses’. The risk-free rate puzzle cannot be solved when external habit is based solely on the current consumption of others because saving is not raised in the same way as external habit based on past consumption. Campbell and Cochrane are able to successfully predict changes in the risk premium (the Sharpe ratio) over time with external habit based on past consumption but with a high coefficient of relative risk aversion. In contrast, Constantinides can successfully explain the equity premium and the low risk-free rate puzzles with internal habit (if consumers are highly sensitive to their own consumption risk) but without predicting changes in the risk premium correctly. This relationship was derived earlier in Section 3.3.1 in the previous chapter. Heaton and Lucas find borrowing constraints can lower the risk-free rate considerably when a large enough proportion of consumers cannot sell debt. And they do so by reducing the demand for risk-free funds.
Using the CCAPM with a coefficient of relative risk aversion set at unity, which is consistent with estimates from empirical research, the risk premium on equity is less than 1 per cent for the low observed standard deviation in aggregate consumption of approximately 3 per cent. 47 Arrow and Lind argue the public sector faces a lower cost of capital because it can diversify risk by undertaking a large number of projects. Any remaining risk can then be spread across the population by using the tax system to fund these projects. Bailey and Jensen (1972) refute this claim by arguing the returns on most government projects are in fact correlated with national income, which means they contain market risk that cannot be diversified by combining them together. Moreover, the tax system is not a costless way of diversifying idiosyncratic risk. In fact, there are few, if any, non-distorting taxes that governments can use, where lump-sum (or poll) taxes are politically infeasible, while most taxes on trade affect economic activity. We look at how risk affects the social discount rate in Chapter 8. 48 This expression is obtained by starting with the value of the net cash flows at t − 1, with Vt = NCFt, and Vt−1 = Et−1(NCFt)[1/(1 + E(it))]. When expectations about the net cash flows are formed using (4.48), with Et−1(NCFt) = Et−2(NCFt)(1 + εt−1), their random value at t − 1 becomes Vt−1 = Et−2(NCFt)(1 + εt−1)[1/(1 + E(it))]. After taking expectations at t − 2, we have Et−2(Vt−1) = Et−2(NCFt)[1/(1 + E(it))], which allows us to write the value of the net cash flows at t − 2 as Vt−2 = Et−2(NCFt)∏tj=t−1[1/(1 + E(ij))]. We obtain (4.49) by iterating back to time 0. 49 By using (4.48) we can write the random value of the net cash flows at time τ < t, as Vτ = Eτ−1(NCFt)(1 + ετ)∏tj=τ+1[1/(1 + E(ij))]. Since its covariance with the return on the market portfolio is Cov(Vτ, iMτ) = Eτ−1(NCFt)Cov(ετ, iMτ)∏tj=τ+1[1/(1 + E(ij))], and its expected value at τ − 1 is Eτ−1(Vτ) = Eτ−1(NCFt)∏tj=τ+1[1/(1 + E(ij))], we have Cov(Vτ, iMτ)/Eτ−1(Vτ) = Cov(ετ, iMτ).

5 Private insurance with asymmetric information

1 A more detailed presentation is available in Laffont (1989) and Malinvaud (1972). 2 In this setting state probabilities are outside the control of consumers both as individuals and coalitions, but later we allow them to affect the probabilities of their individual risk through self-insurance. Their preferences can be summarized using NMEU functions with common information where they agree on the probabilities of all the possible outcomes, both for states and individual risk. But with asymmetric information subjective expected utility is more appropriate when consumers have different probability beliefs, which takes the analysis outside the classical finance model used to generate the consumption-based pricing model in (4.28) where consumers measure and price risk identically. 3 A state-independent utility function may not be appropriate for some applications, such as health insurance, where preference mappings depend on the consumers’ well-being. 4 While we refer to these outcomes as good and bad states, they are not the states of nature defined earlier in Section 3.1.1 that are common to all consumers and outside their control. In contrast, the good and bad outcomes considered here are incurred by different individuals at the same time. Later we allow consumers to change the probability of bad state outcomes through self-protection. 5 There is a competitive equilibrium outcome for a single insurer when the market is perfectly contestable. The threat of entry forces the incumbent to set the price of insurance at the lowest possible marginal cost.
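The textbook result that a risk-averse consumer fully insures at an actuarially fair premium can be sketched numerically (log utility and the wealth, loss and probability figures below are assumed for illustration):

```python
import math

W, L, pi = 100.0, 40.0, 0.2        # wealth, loss, probability of loss (assumed)

def eu(cover):
    """Expected utility with cover units of fair insurance (premium = pi*cover)."""
    premium = pi * cover
    c_bad = W - L + cover - premium
    c_good = W - premium
    return pi * math.log(c_bad) + (1 - pi) * math.log(c_good)

grid = [c / 10 for c in range(0, 401)]
best = max(grid, key=eu)
print(best)  # full insurance: optimal cover equals the loss L = 40
```

With a fair premium, marginal utility is equalized across the good and bad outcomes only at full cover, so consumption is the same in both outcomes.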
6 The analysis in this section draws from the analysis in Pauly (1974) and Shavell (1979). 7 Shavell (1979) considers ex-ante and ex-post observation with differential costs. Ex-post observation occurs when consumers make claims, while ex-ante observation occurs at the time the policies are written. Ex-ante observation is preferable if it is less costly than ex-post observation by an amount sufficient to offset the extra frequency of observation involved. 8 A considerable amount of work has been undertaken in this area looking at the adjustment processes to equilibrium and the existence properties of these equilibria. See Greenwald and Stiglitz (1986), Harris and Townsend (1985), Riley (1975), Stiglitz (1981, 1982) and Wilson (1977). 9 It is assumed throughout the analysis that insurance is exclusive, so that consumers buy all their insurance from one insurer. It can also be interpreted as meaning that all insurers know how much insurance every consumer buys and stops them from taking more than full insurance. In practice, insurance contracts contain clauses which require consumers to reveal all their insurance cover, with failure to do so releasing insurers from any of their obligations. Exclusivity stops high-risk types from locating to the right of L along the low-risk price line.

6 Derivative securities

1 Most retail outlets provide consumers with a two-week cooling-off period when they purchase major items. In some countries it is mandated by law, but firms still do it voluntarily when the option is valued sufficiently by consumers. 2 This bundle is created by taking long positions in the share and put option and being short in the risk-free bond. The two options have the same exercise price (ŜT), which is also the payout on the risk-free bond. 3 When the share price follows a random walk without drift its expected price is equal to its current price, where deviations in the future price are noise with zero mean and constant variance.
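The bundle in note 2 can be checked payoff by payoff (the exercise price below is a hypothetical value): a long share plus a long put, less a risk-free bond paying the exercise price, replicates a call at maturity, which is put–call parity.

```python
K = 50.0                          # exercise price / bond payout (assumed)

def bundle(S_T):
    """Payoff at maturity of share + put − bond repayment."""
    put = max(K - S_T, 0.0)
    return S_T + put - K

def call(S_T):
    return max(S_T - K, 0.0)

for S_T in [20.0, 50.0, 80.0]:
    print(S_T, bundle(S_T), call(S_T))  # identical payoffs state by state
```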
4 The stochastic variable z is a continuous random variable with increments that are statistically independent; it is normally distributed with mean zero and variance equal to the increment in time dt. Just like a random walk in discrete time, the variance scales with time. 5 Ito’s lemma takes a second-order Taylor series expansion of the call option value, where: dC = (∂C/∂t)dt + (∂C/∂S)dS + ½(∂²C/∂S²)dS² + (∂²C/∂S∂t)dt dS + ½(∂²C/∂t²)dt², and then uses (6.8) to substitute for dS = µS dt + σS dz, with dz² = dt, dt² = 0 and dt dS = 0, to obtain equation (6.9). 6 After substituting (6.7) into (6.9) and using dH = iH dt, we have iH dt = dS − (∂S/∂C){(∂C/∂S)dS + (∂C/∂t)dt + ½(∂²C/∂S²)σ²S² dt}. The first two terms cancel because the risk in the share price is eliminated inside the hedge portfolio. By using H = S − (∂S/∂C)C and then rearranging terms, we obtain (6.10). 7 The Australian Futures Exchange became a wholly owned subsidiary of the Australian Securities Exchange in 2006. It trades standardized futures contracts as well as over-the-counter forward contracts. 8 Sometimes this relationship is presented as 0FNT = (pN0 − 0NYNT)(1 + iT)^T, where 0NYNT = 0YNT − 0QNT is the present value of the net marginal convenience yield from storage. 9 The expected annual economic return from holding commodity N for T periods is E(iNT) = [E0(pNT)/pN0]^(1/T) − 1. Thus, for 1 year, with T = 1, we have E(iNT) = [E0(pNT) − pN0]/pN0. 10 Intermediate uncertainty was examined earlier in Section 4.6.2. 11 In the ICAPM all future consumption is funded solely from returns to portfolios of securities and there is no risk from labour or other income, where the risk in the market portfolio is the aggregate consumption risk in the first period, while the interest rate and relative commodity price risk determine how aggregate consumption risk changes over time.
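Note 9's annualized expected holding return can be illustrated with made-up prices: for T = 1 it reduces to the simple holding return, and compounding over longer horizons leaves the annual rate unchanged.

```python
def expected_return(p0, expected_pT, T):
    """Annualized expected return: [E0(pNT)/pN0]^(1/T) − 1."""
    return (expected_pT / p0) ** (1.0 / T) - 1.0

# Hypothetical commodity prices: 100 today, expected 108 in one year.
r1 = expected_return(p0=100.0, expected_pT=108.0, T=1)
# Compounding the same 8% per year over three years gives the same rate.
r3 = expected_return(p0=100.0, expected_pT=100.0 * 1.08 ** 3, T=3)
print(round(r1, 4), round(r3, 4))  # both 0.08 per year
```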
7 Corporate finance

1 The consumer problem can be summarized using (2.11), without superscript h, as: max v(I) subject to X0 ≤ X̄0 + η0 − paB aB − paE aE ≡ I0 and X1 ≤ X̄1 + aB paB(1 + iB) + aE paE(1 + iE) ≡ I1, where I = {I0, I1} is the vector of consumption expenditures in each period. The first budget constraint makes current consumption expenditure (X0) and the market value of the security portfolio no greater than the market value of the endowments (X̄0) plus profit from production (η0), while the second constraint makes future consumption expenditure (X1) no greater than the market value of the endowments (X̄1) plus the payouts to securities, where ak is the number of units of security k ∈ {B, E} held by the consumer. 2 This is the personalized discount factor defined in the Arrow–Debreu economy in (3.8) for a single state in the certainty setting, with φ1h = λ1h/λ0h, where λ0h and λ1h are the Lagrange multipliers on the budget constraints. By the envelope theorem these multipliers measure the marginal utility of income in each period at a consumer optimum when there is non-satiation, with ∂vh/∂It = λth for t ∈ {0, 1}. 3 Some consumers may sell both securities to borrow against real income endowments in the second period. 4 In the two-period certainty model in Section 2.2.5 private firms purchase consumption goods in the first period as inputs to future production. We extend the analysis here by allowing them to finance this investment by selling debt and equity, where the problem for each profit-maximizing firm can be summarized using (2.12), without superscript j, as: max η0 = paB aB + paE aE − Z0 subject to aB paB(1 + iB) + aE paE(1 + iE) ≤ Y1(Z0), where Z0 ≡ p0z0 is the market value of inputs purchased in the first period, and Y1 = p1y1 the market value of the net cash flows it generates in the second period. The constraint makes the payouts to debt and equity by each firm equal to the market value of their net cash flows.
By defining leverage as the proportion of each dollar of capital raised by selling debt, b = paB aB/V0, we can write the problem for each firm as: max η0 = V0 − Z0 subject to V0[1 + biB + (1 − b)iE] ≤ Y1(Z0), where V0 = paB aB + paE aE is its current market value. The expression in (7.3) is obtained from the payout constraint when it binds. 5 While it is fairly common practice in the finance literature to refer to consumers as being short in securities when they sell them and long when they buy them, the same practice is less well established when referring to the positions taken by firms. To avoid any confusion we will refer to firms as being short in a security when they purchase it and long when they sell it. This is consistent with the notion that consumers are in general net buyers and firms net sellers of securities. There are a number of reasons why firms may purchase securities. For tax reasons they repurchase their own shares and the shares of other firms to pay shareholders capital gains rather than cash dividends, and they also purchase securities to spread risk and arbitrage profits. These activities will be examined in the following subsections. 6 We omit the time subscripts and superscript j to simplify the notation. 7 This decomposition is obtained by writing the user cost of capital as 1 + iE = Y(I)/VU, where βEU = Cov(iEU, iM)/Var(iM) = Cov(Y, iM)/[VU Var(iM)] = σYM/(VU σM²). 8 We have diEL/db = (iM − i)βEL/(1 − b), where (iM − i)βEL = iEL − i. 9 These conditions are obtained from the problem for each firm in (2.12) by replacing their payout constraint, with superscript j omitted, as bV0(1 + iB)(1 − tC) + (1 − b)V0(1 − tC + iE) ≤ Y1(Z0)(1 − tC). By rearranging this expression when the constraint binds we find that the corporate tax base is (1 − b)iEV0 = [Y1(Z0) − biBV0 − V0](1 − tC). Since the repayment of capital to debt and equity and interest are tax-deductible expenses the tax falls on the return paid to equity.
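The leverage relation in note 8 can be verified numerically (the CAPM inputs below are assumed values): with βEL = βEU/(1 − b), the levered expected return is iEL = i + (iM − i)βEU/(1 − b), and its numerical derivative with respect to b matches (iM − i)βEL/(1 − b).

```python
i, iM, beta_u = 0.04, 0.10, 0.8       # risk-free rate, market return, unlevered beta (assumed)

def i_EL(b):
    """Expected levered equity return via the levered beta β_EU/(1 − b)."""
    beta_l = beta_u / (1.0 - b)
    return i + (iM - i) * beta_l

b, h = 0.5, 1e-6
num = (i_EL(b + h) - i_EL(b - h)) / (2 * h)   # central-difference derivative
beta_l = beta_u / (1 - b)
analytic = (iM - i) * beta_l / (1 - b)         # the expression in note 8
print(round(num, 6), round(analytic, 6))       # both about 0.192
```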
We follow the usual (often implicit) convention adopted in most finance models by returning tax revenue to consumers as lump-sum transfers. This avoids the need to explicitly model government spending. But even though the tax revenue is returned to consumers, their real income falls due to the excess burden of taxation. 10 In reality, however, the corporate tax is levied on measured income which is not in general the same as economic income. Recall from Chapter 2 that economic income measures the change in the wealth of consumers over a period of time. Thus, it includes capital gains (or losses) on their capital assets. In contrast, measured income applies decay factors to the purchase prices of depreciating assets as a proxy measure for the reduction in their market values, and only includes capital gains when they are realized. Whenever there are differences in economic and measured income the effective corporate tax rate on economic income diverges from the statutory tax rate. For example, when measured income is higher the effective tax rate on economic income rises above the statutory corporate tax rate. We avoid this complexity in the following analysis by assuming economic and measured income are equal. 11 This assumes investors do not take into account any future consumption benefits they might get from government spending funded from corporate tax revenue. In practice, individual investors do not directly link the tax they pay to the benefits they get from government spending. Since the tax each investor pays is small relative to total tax revenue, they do not expect their contribution in isolation to have any noticeable impact on government spending. Moreover, wealth is redistributed through the government budget so that high-income taxpayers are less likely to receive the same value of benefits per dollar of tax revenue they pay. 
In the current setting tax revenue is returned to consumers as lump-sum transfers by including them in the budget constraints of consumers. This allows us to focus on equilibrium outcomes in the capital market without worrying about the welfare effects of government spending. The government budget constraint is explicitly included in the welfare analysis used in Chapter 8. 12 In practice, firms are not declared bankrupt in every defaulting state because bondholders may decide firm managers will operate more effectively in the future. 13 This is obtained by using the firm payout constraint bV0 (1 + iB) (1− tC) + (1 − b) V0 (1− tC + isE) + hsV0 ≤ Ys (Z0) (1−tC) ∀S, where hs V0 is the default cost in each state s. 14 To properly account for asymmetric information in the Arrow–Debreu economy we need to explicitly introduce information sets for traders as well as the technologies they use to gather and process information. 15 When there are more than two periods the firm recovers depreciation rather than repaying capital to shareholders and bondholders, but measured depreciation allowances are rarely equal to economic depreciation, where the difference changes the effective tax rate on economic income and affects the value of the firm. 16 In Australia some corporations can trade their tax losses inside conglomerates. This happens in the mining and exploration sector where companies have large tax losses in some years. Similar losses are incurred by drug and information technology companies that undertake research and development. They are forced to bear potentially large costs from having to carry their tax losses forward without interest. 17 For a comprehensive summary of non-tax capital structure theories, see the survey by Harris and Raviv (1991) and the recent book by Tirole (2006). They also provide a summary of the results from empirical tests of these theories. 18 Barnea et al. 
(1981) argue managerial incentives and specialized securities such as convertible debt can be used to reduce, and in some cases eliminate, these agency problems.

19 Governments justify having lower tax rates on capital gains by arguing they promote investment and income growth. But this is frequently inconsistent with other objectives they have to minimize the excess burden of taxation. There is no doubt that part of the reason for the favourable tax treatment of capital gains is the political influence of corporate firms. They make large contributions to political parties, while the costs are diffused over consumers with much less political influence.

20 The reduction in the effective personal tax rate from delaying the realization of capital gains can be demonstrated by comparing the after-tax return to a consumer from realizing a dollar of income today and then reinvesting it for one period, (1 − tBh)[1 + iE(1 − tBh)], with the after-tax return from leaving the income inside the firm for a year, (1 + iE)(1 − tBh). When the income is realized now rather than next period the consumer pays additional tax of tBh iE(1 − tBh).

21 We obtain these first-order conditions from the consumer problem in Section 2.2.5 by replacing the budget constraints in (2.11) with:

X0 ≤ X̄0 − paB aB − paD aD − paG aG ≡ I0,
X1 ≤ X̄1 + paB aB[1 + iB(1 − tBh)] + paD aD[1 + iD(1 − tBh)] + paG aG[1 + iG(1 − tEh)] ≡ I1,
ak ≥ āk, for k = B, D, G,

where securities D and G are shares that pay dividends and capital gains, respectively. Since the different personal taxes are endowed on consumers they have unbounded demands for their tax-preferred securities, where tax arbitrage can, in the absence of constraints, exhaust government revenue. Following Miller, we use short-selling constraints to restrict tax arbitrage and bound security demands. Other ways of bounding security demands are examined below.
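The deferral advantage in note 20 is easy to verify numerically. The sketch below uses illustrative values for the equity return iE and the personal tax rate tBh (neither number comes from the text) and confirms that the extra tax from realizing a gain early equals tBh · iE · (1 − tBh).

```python
# Hypothetical numbers: i_E is the equity return, t is the personal tax
# rate on realized income (t_B^h in the text).
i_E = 0.10
t = 0.40

# Realize a dollar of income now, pay tax, and reinvest for one period:
realize_now = (1 - t) * (1 + i_E * (1 - t))

# Leave the dollar inside the firm for the period and realize next period:
defer = (1 + i_E) * (1 - t)

# The cost of early realization equals t * i_E * (1 - t), as in note 20.
extra_tax = defer - realize_now
print(round(realize_now, 4), round(defer, 4), round(extra_tax, 4))
```

With these numbers the deferred strategy returns 0.66 against 0.636 for early realization, an extra tax of 0.024 = 0.40 × 0.10 × 0.60.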
22 This supply condition is obtained by replacing the payout constraint in the optimization problem for firms in (2.12) with V0(1 − tC) + iB(1 − tC)paB aB + iD paD aD + iG paG aG ≤ Y1(1 − tC), where V0 = paB aB + paD aD + paG aG is the current market value of their capital. The tax base is obtained by rearranging this constraint, when it binds, as iD paD aD + iG paG aG = (Y1 − iB paB aB − V0)(1 − tC), with interest (iB paB aB) and the repayment of capital (V0) being tax-deductible expenses. When there are borrowing constraints on consumers to restrict tax arbitrage it is important that there are no constraints on the security trades of firms for the no-arbitrage condition to hold. Indeed, if consumers cannot arbitrage profits from security returns, firms must be able to perform the task in a competitive capital market. When firms choose their security trades optimally, they satisfy

ϕ(1 + iB)(1 − tC) ≤ 1 for debt,
ϕ(1 − tC + iD) ≤ 1 for equity paying dividends,
ϕ(1 − tC + iG) ≤ 1 for equity paying capital gains.

In the absence of arbitrage profits these conditions hold with equality and we obtain the supply condition in (7.27). This assumes bondholders and shareholders can claim income payments made to securities they sell as a tax-deductible expense.

When governments respond to tax-minimization schemes there can be strategic interactions between the public and private sectors. Examples of this are examined in a dynamic setting by Fischer (1980).

Aivazian and Callen (1987) illustrate the Miller equilibrium using an Edgeworth box diagram where they show how constraints on tax arbitrage are required for exogenously endowed tax rates to bound security demands. They also identify the role of firm security trades in making firm leverage policy irrelevant.

Some of these issues are raised in Chapter 8 where we examine project evaluation in an intertemporal setting with tax distortions.

For examples, see Dammon (1988) and Auerbach and King (1983).
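The clientele logic behind the supply conditions in note 22 can be sketched numerically. The rates below are illustrative assumptions, not values from the text: when firms are indifferent between debt and capital-gain equity (so iG = iB(1 − tC)), investors sort by personal tax rate, with low-tax investors holding debt and high-tax investors holding equity, as in Miller's equilibrium.

```python
# Miller-style clientele sketch. All rates are illustrative assumptions.
t_C = 0.30          # corporate tax rate
i_B = 0.10          # interest rate on corporate debt

# Firms are indifferent between debt and capital-gain equity when the
# after-corporate-tax payouts per dollar of capital are equalized:
i_G = i_B * (1 - t_C)

def prefers_debt(t_B, t_E=0.0):
    """Does an investor with personal tax rate t_B on interest (and an
    effective rate t_E on capital gains) earn more after tax on debt?"""
    return i_B * (1 - t_B) > i_G * (1 - t_E)

# Low-tax investors hold debt, high-tax investors hold equity:
print(prefers_debt(0.15))  # True:  t_B < t_C
print(prefers_debt(0.45))  # False: t_B > t_C
```

The marginal investor has tB = tC, at which point the after-tax returns are equal and leverage policy is irrelevant at the level of the individual firm.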
Miller (1988) also recognizes the important role of the security trades by financial intermediaries in a competitive capital market.

Using US data over the period 1970–1985, Simon (1996) finds that the relationship between the default-free tax-exempt and taxable yields in the Miller equilibrium holds in the long run. Deviations in this relationship are due to transitory shocks to the leverage-related costs of debt and bank borrowing costs in the short run. The typical levels of these costs did not cause deviations from the Miller equilibrium in the sample period.

In practice, the returns to domestic and foreign securities are converted into a common currency before they are compared. Money has no real effects in the analysis undertaken here so we can use a common numeraire good for all countries. In other words, interest rate parity holds for each currency in this real analysis. Once money has real effects interest rate parity can break down, where a risk-free security can pay a different rate of return across countries due to expected changes in exchange rates.

This is obtained when the firm maximizes profit (η0 = V0 − Z0) by choosing investment to make dV0/dZ0 = 1, holding leverage constant (db = 0).

32 Expected default costs per dollar of capital can be included if at the margin they are unaffected by changes in investment. In a common information setting they are confined to lost corporate tax shields as bankruptcy and agency costs require asymmetric information.

33 As a first-order condition this expression is evaluated with leverage set at its optimal level. Once MM leverage irrelevance breaks down, firm financing decisions affect their real investment choices, where shareholders may not be unanimous in wanting firms to maximize profit.

34 Lower income taxes expand aggregate output by reducing the excess burden of taxation. Any change in tax revenue is offset by changes in government spending through the government budget.
In the current analysis the tax revenue is returned to consumers as lump-sum transfers because there is no government spending. It will be included in Chapter 8 when we undertake a welfare analysis of changes in taxes and government spending.

35 Miller (1977) also considers the role of investors such as religious and other non-profit organizations who are exempt from tax.

36 Australia adopted the imputation tax system in 1987 and New Zealand adopted it the following year. Canada and the United Kingdom adopted a partial imputation system. In the UK it was replaced in 1999 by a system that provided personal tax reductions on dividend income subject to corporate tax. These personal tax concessions range from 100 per cent for basic-rate taxpayers to 25 per cent for high-rate taxpayers. Singapore replaced a full imputation tax system in 2003 with a one-tier corporate tax system which exempts all dividends from personal tax when they have been subject to corporate tax.

37 The analysis here can be extended to accommodate uncertainty with common information by noting that the equilibrium condition on security returns holds in each state of nature when the capital market is complete.

38 Their dividend payout ratio changed over the sample period from around 50 per cent prior to the 1980s to over 60 per cent during the 1980s and 1990s. It peaked at about 100 per cent in 1982 and then declined to around two-thirds in the 1990s, due largely to changes in the relative tax treatment of capital gains and dividends. Sarig finds empirical support for the information content of dividends.

39 The taxes on corporate income under a classical corporate tax system were summarized earlier in Table 7.2.

40 Barclay and Smith do, however, find empirical evidence that the benefits from the information content of dividends are large enough to offset their tax disadvantage.

41 The reduction in the effective tax rate on capital gains is illustrated in note 20.
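The relief that full imputation (note 36) provides against the double taxation of dividends can be sketched with made-up rates. Under the standard gross-up-and-credit mechanics, a fully franked dividend ends up taxed only at the shareholder's personal rate; the specific rates below are illustrative assumptions, not values from the text.

```python
# Illustrative rates (assumptions, not from the text).
t_C = 0.30   # corporate tax rate
t_P = 0.45   # shareholder's personal tax rate

Y = 100.0    # pre-tax corporate income, fully paid out as dividends

# Classical system: corporate tax, then personal tax on the cash dividend.
classical_after_tax = Y * (1 - t_C) * (1 - t_P)

# Full imputation: the cash dividend is grossed up by the corporate tax
# paid and taxed at the personal rate, with a credit for the corporate tax.
dividend = Y * (1 - t_C)
grossed_up = dividend / (1 - t_C)          # = Y
personal_tax = grossed_up * t_P - Y * t_C  # personal tax net of the credit
imputation_after_tax = dividend - personal_tax

print(classical_after_tax)   # 38.5
print(imputation_after_tax)  # 55.0 = Y * (1 - t_P)
```

The corporate tax washes out under full imputation, which is why franked dividends and interest income face comparable effective rates in the hands of domestic shareholders.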
42 Capital gains are included in economic income in the periods when they accrue but, for the most part, they are included in measured income at the time they are realized. Thus, in periods when capital gains are significant, firms with positive economic income can have negative measured income. Similar problems arise with capital losses because measured depreciation allowances are determined by applying standardized decay factors to the original purchase prices of assets, while economic depreciation allowances are determined by the reductions in their market valuations.

43 We obtain these conditions by replacing the budget constraints for consumers in (2.11) with

X0 ≤ X̄0 − paB aB − paF aF − paU aU − paG aG ≡ I0,
X1 ≤ X̄1 + paB aB[1 + iB(1 − tBh)] + paF aF[1 + (iF/(1 − tC))(1 − tBh)] + paU aU[1 + iU(1 − tBh)] + paG aG[1 + iG(1 − tEh)] ≡ I1,
ak ≥ āk for k = B, F, U, G,

where aF is the number of units of franked dividend paying shares purchased at market price paF, and aU the number of units of unfranked dividend paying shares purchased at market price paU. The borrowing constraints are used to rule out tax arbitrage and bound security demands when investors have different tax preferences for the four types of securities.

44 To capture the different tax treatment of franked and unfranked dividends, we replace the payout constraint in the problem for firms in (2.12) with bV0(1 + iB)(1 − tC) + dFV0[(1 − tC) + iF] + dUV0(1 + iU)(1 − tC) + gV0[(1 − tC) + iG] ≤ Y1(1 − tC), where b = paB aB/V0, dF = paF aF/V0, dU = paU aU/V0 and g = paG aG/V0. When this constraint binds for profit-maximizing firms we can write their current market value as

V0 = Y1(1 − tC) / [(1 − tC) + biB(1 − tC) + dF iF + dU iU(1 − tC) + giG],

with b + dF + dU + g = 1.

8 Project evaluation and the social discount rate

1 We aggregate changes in utility over consumers using the individualistic social welfare function of Bergson (1938) and Samuelson (1954).
It is a functional mapping over the utility functions of all consumers in the economy, who each obtain utility from their own consumption bundle. In other words, this rules out consumers deriving utility from the consumption bundles chosen by others. However, the social welfare function can take different forms to reflect variations in social attitudes to inequality, ranging from complete indifference (utilitarian) to complete aversion (Rawlsian). We initially use a conventional Harberger analysis that assigns the same distributional weights to consumers. This removes distributional effects from the welfare analysis and allows us to aggregate dollar changes in utility over consumers, where aggregate welfare gains represent potential Pareto improvements through an appropriate lump-sum redistribution. Distributional effects can be included by assigning different distributional weights to consumers.

2 A pure public good is defined to be perfectly non-rivalrous and non-excludable. It is difficult for private suppliers to extract a fee from consumers when the good is non-excludable, which is why public goods are underprovided in private markets. It creates a free-rider problem where individuals can consume the benefits of goods supplied by others without making a contribution to their cost. Indeed, this can lead to strategic interactions between private suppliers. To avoid these problems, which are not the focus of the current analysis, we assume the good is produced solely by the government.

3 The following analysis could be undertaken using generalized state preferences, but they do not allow us to separate the impact of risk on equilibrium outcomes. Moreover, expected utility is a requirement for using one of the consumption-based pricing models examined earlier in Chapter 4 to compute risk-adjusted discount rates on future cash flows.
4 In the following analysis we assume every consumer is a net supplier of the private good, with x̄0h − x0h > 0, and a net consumer in the second period, with x̄sh − xsh < 0 for all h. We could allow some individuals to be net consumers in the first period (with x̄0h − x0h < 0) and some to be net suppliers in the second period, with x̄sh − xsh > 0. However, the tax rates would then become subsidies, unless we allow them to change sign. But consumers would then face different relative commodity prices and use different discount factors to evaluate future consumption flows.

5 Since lump-sum transfers are non-distorting they make no contribution to the final welfare effects. This has important practical implications because it allows governments to separate policy evaluation across a number of specialist agencies. Treasury and finance departments can evaluate the marginal social cost of raising revenue with a range of different taxes without knowing how the funds will be spent, while departments responsible for health, education, social security and defence can evaluate the net benefits from their spending programmes independently of the way they are financed. The final changes to the distorting taxes are determined by the impact projects have on the government budget. If they drive it into deficit then distorting taxes must be raised to generate additional revenue, while the reverse applies for projects that drive the budget into surplus.

6 There is a detailed examination of the role of lump-sum transfers in a conventional welfare analysis in Jones (2005). Ballard and Fullerton (1992) argue that a conventional welfare analysis is not possible when lump-sum transfers are ruled out. But they are only hypothetical transfers that we use to separate the welfare effects of each policy variable, and they are eliminated inside projects by tax changes made to balance the government budget. This is demonstrated in Section 2.1.2.
7 In each period the aggregated endowments are x̄0 = Σh x̄0h and x̄s = Σh x̄sh for all s, aggregate consumption is x0 = Σh x0h and xs = Σh xsh for all s, and the lump-sum transfers are L0 = Σh L0h and Ls = Σh Lsh for all s. We allow the government to trade in the capital market so that it can transfer tax revenue between the two periods. On that basis, the government budget constraint can be computed in present value terms by using the stochastic discount factors in (8.2).

8 The model can be extended by adding more time periods and redefining the second-period budget constraints over events (e), which are time-specific subsets of the state space. To accommodate saving and investment at times beyond the second period, replace −Z0 and ys for all s by the vector of net outputs ye, with ye < 0 for inputs and ye > 0 for outputs.

9 Since λ0h is the marginal utility of current income, 1/λ0h converts expected utility into current income. Thus, dEUh/λ0h is a dollar measure of the change in expected utility; it measures areas of surplus below consumer demand schedules. It is well known that dollar changes in utility are in general unreliable welfare measures for discrete (large) policy choices. Once the marginal utility of income changes with real income, dollar changes in private surplus do not map into utility at a constant rate and are therefore path-dependent. This is examined in detail by Auerbach (1985) and Jones (2005). These problems do not arise in a marginal welfare analysis because changes in the marginal utility of income have higher-order effects that do not impact on final welfare changes.

10 This expression is obtained by using the first-order conditions for optimally chosen consumption, with ∂Uh/∂x0h = λ0h p0 and δ ∂Uh/∂xsh = λsh ps for all s, and the first-order condition for optimally chosen saving in (8.2).

11 Bergson (1938) initially described the individualistic social welfare function, while Samuelson (1954) was the first to use it in formal analysis.
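Note 7's device of present-valuing state-contingent flows with the stochastic discount factors ms can be sketched directly: the present value of a random second-period flow is its probability-weighted, discount-factor-weighted sum over states. The probabilities, discount factors and payoffs below are made-up numbers for illustration only.

```python
# Present value of a state-contingent payoff using stochastic discount
# factors m_s, as in the government budget constraint of note 7.
# All numbers are illustrative assumptions.
pi = [0.5, 0.3, 0.2]     # state probabilities pi_s
m  = [0.96, 0.94, 0.90]  # stochastic discount factors m_s
y  = [10.0, 12.0, 8.0]   # state-contingent net revenue y_s

present_value = sum(p * ms * ys for p, ms, ys in zip(pi, m, y))
print(round(present_value, 3))  # 9.624
```

A safe asset paying 1 in every state would be priced at Σs πs ms, so the implied risk-free discount factor here is 0.5(0.96) + 0.3(0.94) + 0.2(0.90) = 0.942.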
12 Individualistic social welfare functions rule out interdependencies where consumers get utility from the consumption bundles of others as well as their own. However, they can be given functional forms that reflect social attitudes to income inequality ranging from complete indifference (utilitarian) to complete aversion (Rawlsian). These determine the values of the distributional weights assigned to consumers in the aggregate welfare changes.

13 Distributional effects can be included in the welfare analysis by assigning different distributional weights to consumers. Typically there is an inverse relationship between these weights and income, where low-income consumers are assigned higher relative weights. But they are based on subjective assessments that can differ across policy analysts. In some circumstances policy changes with efficiency losses, which isolate reductions in real income, can be socially profitable due to distributional effects. Most analysts report the efficiency and equity effects separately so that policymakers can see the role played by subjectively determined distributional effects in the welfare analysis. We address these issues in more detail in Section 8.1.4.

14 Using (8.5), we can write the change in social welfare in (8.6) as

dW = (x̄0 − x0)(dp0 − dt0) − z0 dp0 + MRS0 dG0 + dL0 + Σs πs ms [(x̄s − xs)(dps + dt1) + ys dps + MRSs dG1 + dLs].

The changes in the lump-sum transfers are solved using the government budget constraints in (8.4) where, after substitution, we have

dW = −t0 dx0 + (x̄0 − x0 − z0 − G0) dp0 + (MRS0 − MRT0) dG0 + Σs πs ms [t1 dxs + (x̄s − xs + ys − Gs) dps + (MRSs − MRTs) dG1].

The conventional welfare equation in (8.7) is obtained by using the market-clearing conditions for the private good in each time period, with x̄0 = x0 + z0 + G0 and x̄s + ys = xs + Gs for all s, respectively, to eliminate the endogenous price changes.
15 Since the tax on net future consumption demand raises the marginal valuation for the good above its marginal cost, the extra revenue is a net gain from expanding this taxed activity.

16 The changes in tax revenue are:

dT/dG0 = −t0 ∂x0/∂G0 + Σs πs ms t1 ∂xs/∂G0,
dT/dG1 = −t0 ∂x0/∂G1 + Σs πs ms t1 ∂xs/∂G1.

17 Atkinson and Stern make the important observation that positive revenue effects do not necessarily mean that the optimal supply of the public good is larger than it would be in an economy without tax distortions. The summed benefits and costs are measured in different economies and will not in general be the same at each level of public good provision. In a tax-distorted economy the excess burden of taxation reduces aggregate real income and this impacts on the marginal valuations consumers have for the private and public goods, where the resulting impact on prices can change the cost of producing the public goods.

18 It is quite feasible that the project reduces net consumption of the private good in the second period, but it must increase net supply of the good (by reducing aggregate consumption demand) in the first period to release the resources used to produce extra output of the public good. When net demand falls in the second period the reduction in tax revenue is a welfare loss that increases the marginal cost of providing the good. It is possible for this loss to make the spending effect negative.

19 A number of studies obtain a modified measure of the MCF which combines the spending effects for projects with the conventional MCF in (8.14). Thus, the MCF is project-specific. For examples of this, see Ballard and Fullerton (1992) and Snow and Warren (1996). Jones (2005) derives the formal relationship between the conventional and modified measures of the MCF.
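The conventional marginal cost of funds in (8.14) can be illustrated with a simple numeric sketch: when a small tax rise shrinks the taxed base, each dollar of marginal revenue costs consumers more than a dollar of surplus. All magnitudes below are illustrative assumptions, not values from the text.

```python
# Hedged numeric sketch of the conventional MCF; all magnitudes are
# illustrative assumptions.
tax_base = 100.0        # net supply of the taxed good
d_base_d_t = -50.0      # behavioural response of the base to the tax rate
t_0 = 0.20              # tax rate

# Marginal revenue from a small tax rise: the mechanical gain on the base
# plus the revenue lost as the base contracts.
dT_dt = tax_base + t_0 * d_base_d_t   # 100 - 10 = 90

MCF = tax_base / dT_dt  # surplus lost per dollar of revenue raised
MEB = MCF - 1           # marginal excess burden of taxation
print(round(MCF, 3), round(MEB, 3))   # 1.111 0.111
```

With no behavioural response (d_base_d_t = 0) the MCF collapses to one and the MEB to zero, which is the lump-sum benchmark used throughout the chapter.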
20 Formally, the marginal excess burden of taxation for t0 is MEB0 = −(dW/dt0)/(dT/dt0), where the welfare loss is solved using the conventional welfare equation in (8.7) as

dW/dt0 = −t0 ∂x0/∂t0 + Σs πs ms t1 ∂xs/∂t0 = dT/dt0 − (x̄0 − x0).

It is important to note that this loss is isolated using (hypothetical) lump-sum transfers to balance the government budget. When the tax is marginally raised the government returns the revenue to consumers through lump-sum transfers. Thus, the welfare loss is measured in a balanced equilibrium which is on the economy's production possibility frontier. The welfare effects for each of the policy changes can be isolated in this way. Once they are combined inside projects the lump-sum transfers are eliminated by the tax changes. Thus, we have

MEB0 = [(x̄0 − x0) − dT/dt0] / (dT/dt0),

where MCF0 in (8.14) is equal to 1 + MEB0.

21 This special case is examined by Ballard and Fullerton (1992), and it is the outcome when utility is (log-)linear in the public good.

22 For a detailed examination of the relationship between dollar changes in utility and the compensated welfare changes see Jones (2005), where the Hatta (1977) decomposition is generalized to accommodate endogenous price changes.

23 The hats (^) over variables indicate they are computed with utility held constant and are therefore based solely on substitution effects.

24 These foreign aid payments are purely notional and have no impact on the utility of foreign consumers. Normally any changes in real income stay in the hands of domestic consumers. We simply compute potential foreign aid payments as a way to isolate the changes in real income from policy changes that ultimately impact on the utility of domestic consumers.

25 The actual change in expected utility from the project is derived as the second equation in (8.11).

26 Graham looks at how to maximize the option value by redistributing income across states of nature.
But in project evaluation we want to evaluate the option value for consumption risk resulting from the project, which is, in part, endogenously determined by consumers trading risk in private markets. If the government can reduce consumption risk by transferring income across states of nature the value of the ex-ante CV rises. However, that introduces a separate policy choice with additional costs. In particular, we need to consider whether the government can reduce consumption risk at lower cost than private traders, and since it cannot redistribute income across states of nature using lump-sum transfers, it uses distorting taxes which have efficiency costs that make redistribution costly. These same issues arise for income redistribution across consumers.

27 The Hatta decomposition was originally derived in a certainty setting with constant producer prices. Jones (2005) generalises it by allowing variable producer prices. It is also used by Dixit (1985) and Diewert (1983) in a certainty setting with variable producer prices.

28 It should be noted that there are an infinite number of ways for governments to distribute surplus revenue to consumers when using lump-sum transfers. But they may not be able to personalize the revenue transfers when using distorting taxes, which is why it is important to test for Pareto improvements in policy evaluation. Jones derives a revised measure of the shadow value of government revenue when consumers have different distributional weights and the government balances its budget with distorting taxes; it is the distributional weighted sum of personalized measures of the shadow value of government revenue, where strict Pareto improvements are possible when tax changes make these personalized shadow values positive for every consumer.

29 Foster and Sonnenschein (1970) find SR can be negative in tax-distorted economies where multiple equilibrium outcomes are possible. When this happens extra real income reduces social welfare.
They show how stable price adjustment mechanisms, like the Walrasian auctioneer in a competitive equilibrium, overcome this problem.

30 The derivation of this welfare equation is not provided here as it is similar to the derivation provided for the original welfare equation in (8.7).

31 The shadow value of capital is obtained using the welfare equation in (8.7′′) as

SK = dW/dx̄0 = p0 − t0 ∂x0/∂x̄0 + Σs πs ms [t1 ∂xs/∂x̄0 + τ is p0 ∂z0/∂x̄0].

We obtain (8.25) by rearranging these terms to get

SK = p0 Σs πs ms {[1 + is(1 − τ)][1 − (t0/p0)(∂x0/∂x̄0)] + (t1/p0)(∂xs/∂x̄0) + τ is (∂z0/∂x̄0)}.

32 In a certainty setting the private discount factor is m = 1/[1 + i(1 − τ)] for all s, where in the absence of trade taxes the social discount rate becomes ψ = i(1 − τ) + τiα. This can be rearranged as ψ = αi + (1 − α)i(1 − τ), where from the market-clearing condition 1 − α = 1 − ∂z0/∂x̄0 = ∂x0/∂x̄0 is the change in private saving.

33 If risk can be traded at lower cost using the tax system it would seem logical to argue that more capital should be funded through the tax system. Indeed, in the extreme all investment would be funded in this way, where the cost reductions could be obtained without the projects being undertaken inside the public sector. The government could simply raise capital for private producers. Clearly, there are important cost advantages from the current risk-trading opportunities in private markets which have created considerable wealth for most countries in recent years. Traders in private capital markets specialize in gathering information about projects being funded through sales of shares, debt and other financial instruments. They also specialize in creating risk-spreading opportunities by trading derivative securities such as options and futures contracts which were examined in Chapter 6.

Abel, A.B.
(1990) Asset pricing under habit formation and catching up with the Joneses, American Economic Review 80(2): 38–42. Aivazian, V.A. and Callen, J.L. (1987) Miller’s irrelevance mechanism: a note, Journal of Finance 42(1): 169–80. Aiyagari, S.R. and Gertler, M. (1991) Asset returns with transactions costs and uninsured individual risk, Journal of Monetary Economics 27(3): 311–31. Alderson, M.J. and Betker, B.L. (1995) Liquidation costs and capital structure, Journal of Financial Economics 39(1): 45–69. Allais, M. (1953) Le comportement de l’homme rationnel devant le risque: critiques des postulats et axiomes de l’Ecole Américaine, Econometrica 21(4): 503–46. Altman, E.L. (1984) A further empirical investigation of the bankruptcy cost question, Journal of Finance 39(4): 1067–89. Andrade, G. and Kaplan, S.N. (1998) How costly is financial (not economic) distress? Evidence from highly levered transactions that became distressed, Journal of Finance 53(4): 1443–93. Anscombe, F.J. and Aumann, R.J. (1963) A definition of subjective probabilities, Annals of Mathematical Statistics 34(1): 199–205. Arrow, K.J. (1953) Le rôle des valeurs boursiéres pour la repartition la meilleure des risques, Econométrie, 11: 41–47. Published in English (1964) as: The role of securities in the optimal allocation of risk-bearing, Review of Economic Studies, 31(2): 91–6. Arrow, K.J. (1971) Essays in the Theory of Risk Bearing. North-Holland, Amsterdam. Arrow, K.J. and Lind, R. (1970) Uncertainty and the evaluation of public investment decisions, American Economic Review 60(2): 364–78. Atkinson, A.B. and Stern, N.H. (1974) Pigou, taxation and public goods, Review of Economic Studies 41(1): 117–27. Auerbach, A.J. (1979) Share valuation and corporate equity finance, Journal of Public Economics 11(3): 291–305. Auerbach, A.J. (1985) The theory of excess burden and optimal taxation, in A.J. Auerbach and M. Feldstein (eds), Handbook of Public Economics, Vol. 1, pp. 61–127. North-Holland, New York. 
Auerbach, A.J. and King, M.A. (1983) Taxation, portfolio choice, and debt-equity ratios: a general equilibrium model, Quarterly Journal of Economics 98(4): 587–610. Bailey, M.J. (1962) National Income and the Price Level: A Study in Macroeconomic Theory. McGraw-Hill, New York. Bailey, M.J. and Jensen, M.C. (1972) Risk and the discount rate for public investment, in M.C. Jensen (ed.), Studies in the Theory of Capital Markets, pp. 269–93. Praeger, New York. Ballard, C. L. and Fullerton, D. (1992) Distortionary taxes and the provision of public goods, Journal of Economic Perspectives 6(3): 117–31. Barberis, N., Huang, M. and Santos, T. (2001) Prospect theory and asset prices, Quarterly Journal of Economics 116(1): 1–53. Barclay, M. and Smith, C.W. (1988) Corporate payout policy: cash dividends versus open market share repurchases, Journal of Financial Economics 22(1): 61–82. Barnea, A., Haugen, R.A. and Senbet, L.W. (1981) An equilibrium analysis of debt financing under costly tax arbitrage and agency problems, Journal of Finance 36(3): 569–81. Barsky, R.B., Juster, F.T., Kimball, M.S. and Shapiro, M.D. (1997) Preference parameters and behavioral heterogeneity: an experimental approach in the health and retirement study, Quarterly Journal of Economics 112(2): 537–79. Beckers, S. (1980) The constant elasticity of variance model and its implications for option pricing, Journal of Finance 35(3): 661–73. Benartzi, S. and Thaler, R.H. (1995) Myopic loss aversion and the equity premium puzzle, Quarterly Journal of Economics 110(1): 73–92. Benge, M. and Robinson, T. (1986) How to Integrate Company and Shareholder Taxation: Why Full Imputation is the Best Answer. Victoria University Press, Wellington, New Zealand, for the Institute of Policy Studies, Victoria University of Wellington. Bergson, A. (1938) A reformulation of certain aspects of welfare economics, Quarterly Journal of Economics 68(2): 233–52. Bhattacharya, M. 
(1983) Transaction data tests on the efficiency of the Chicago Board of Options Exchange, Journal of Financial Economics 12(2): 161–85. Bhattacharya, S. (1979) Imperfect information, dividend policy, and the bird in the hand fallacy, Bell Journal of Economics 10(1): 259–270. Binswanger, H. (1981) Attitudes toward risk: theoretical implications of an experiment in rural India, Economic Journal 91(364): 867–90. Black, F. (1972) Capital market equilibrium with restricted borrowing, Journal of Business 45 (3): 444–55. Black, F. and Scholes, M.S. (1973) The valuation of options contracts and a test of market efficiency, Journal of Political Economy 81(3): 637–54. Black, F., Jensen, M.C. and Scholes, M.S. (1972) The capital asset pricing model: some empirical results, in M.C. Jensen (ed.), Studies in the Theory of Capital Markets. Praeger: New York. Blume, M.E. and Friend, I. (1973) A new look at the capital asset pricing model, Journal of Finance 28(1): 19–33. Boadway, R.W. (1976) Integrating equity and efficiency in applied welfare economics, Quarterly Journal of Economics 90(4): 541–56. Bodie, Z. and Rozansky, V.J. (1980) Rise and return in commodity futures, Financial Analysts’ Journal 36(3): 27–31, 33–39. Bradford, D.F. (1975) Constraints on government investment opportunities and the choice of discount rate, American Economic Review 65(5): 887–99. Bradford, D.F. (1981) The incidence and allocative effects of tax on corporate distributions, Journal of Public Economics 15(1): 1–22. Breeden, D.T. (1979) An intertemporal asset pricing model with stochastic consumption and investment opportunities, Journal of Financial Economics 7(3): 265–96. Breeden, D.T. and Litzenberger, R.H. (1978) Prices of state-contingent claims implicit in option prices, Journal of Business 51(4): 621–51. Breeden, D.T., Gibbons, M.R. and Litzenberger, R.H. (1989) Empirical test of the consumptionoriented CAPM, Journal of Finance 44(2): 231–62. Brennan, M.J. 
(1970) Taxes, market valuation and corporate financial policy, National Tax Journal 23(4): 417–27. Bruce, N. and Harris, R.G. (1982) Cost-benefit criteria and the compensation principle in evaluating small projects, Journal of Political Economy 90(4): 755–76. Campbell, J.Y. (1993) Intertemporal asset pricing without consumption data, American Economic Review 83(3): 487–512. Campbell, J.Y. (1996) Understanding risk and return, Journal of Political Economy 104(2): 298–345. Campbell, J.Y. and Cochrane, J.H. (1999) By force of habit: a consumption-based explanation of aggregate stock market behavior, Journal of Political Economy 107(2): 205–51. Campbell, J.Y. and Cochrane, J.H. (2000) Explaining poor performance of consumption-based asset pricing models, Journal of Finance 55(6): 2863–78. Chambers, R.G. and Quiggin, J. (2000) Uncertainty, Production Choice and Agency – The StateContingent Approach. Cambridge University Press, Cambridge. Chen, N.-F., Roll, R. and Ross, S.A. (1986) Economic forces and the stock market, Journal of Business 59(3): 383–403. Cochrane, J.H. (1996) A cross-sectional test of an investment-based asset pricing model, Journal of Political Economy 104(3): 572–621. Cochrane, J.H. (2001) Asset Pricing. Princeton University Press, Princeton, NJ. Constantinides, G.M. (1990) Habit formation: a resolution of the equity premium puzzle, Journal of Political Economy 98(3): 519–43. Constantinides, G.M. and Duffie, D. (1996) Asset pricing with heterogeneous consumers, Journal of Political Economy 104(2): 219–40. Cootner, P. (1960) Returns to speculators: Telser vs. Keynes, Journal of Political Economy 68(4): 396–418. Copeland, T.E. and Weston, J.F. (1988) Financial Theory and Corporate Policy, 3rd edition. AddisonWesley, New York. Cox, J.C. and Ross, S.A. (1976) A survey of some new results in option pricing theory, Journal of Finance 31(2): 383–402. Cox, J.C., Ross, S.A. and Rubinstein, M.E. 
(1979) Option pricing: a simplified approach, Journal of Financial Economics 7(3): 229–63.
Dammon, R.M. (1988) A security market and capital structure equilibrium under uncertainty with progressive personal taxes, Research in Finance 7: 53–74.
Dammon, R.M. and Green, R.C. (1987) Tax arbitrage and the existence of equilibrium prices for financial assets, Journal of Finance 42(5): 1143–66.
DeAngelo, H. and Masulis, R.W. (1980a) Optimal capital structure under corporate and personal taxation, Journal of Financial Economics 8(1): 3–29.
DeAngelo, H. and Masulis, R.W. (1980b) Leverage and dividend irrelevancy under corporate and personal taxation, Journal of Finance 35(2): 453–64.
Debreu, G. (1959) Theory of Value: An Axiomatic Analysis of Economic Equilibrium, Cowles Foundation Monograph 17. Yale University Press, New Haven, CT.
Deshmukh, S. (2005) The effect of asymmetric information on dividend policy, Quarterly Journal of Business and Economics 44(1/2): 107–27.
Diamond, P.A. and Mirrlees, J.A. (1971) Optimal taxation and public production, American Economic Review 61: 8–27, 261–78.
Diewert, W.E. (1983) Cost-benefit analysis and project evaluation: a comparison of alternative approaches, Journal of Public Economics 22(3): 265–302.
Dixit, A. (1985) Tax policy in open economies, in A.J. Auerbach and M. Feldstein (eds), Handbook of Public Economics, Vol. 1, pp. 313–74. North-Holland, New York.
Dixit, A. (1987) Trade and insurance with moral hazard, Journal of International Economics 23(3/4): 201–20.
Dixit, A. (1989) Trade and insurance with adverse selection, Review of Economic Studies 56(2): 235–47.
Dowd, K. (1988) Private Money: The Path to Monetary Stability. Institute of Economic Affairs, London.
Dréze, J.H. (1987) Essays on Economic Decisions under Uncertainty. Cambridge University Press, Cambridge.
Dréze, J. and Stern, N. (1990) Policy reform, shadow prices and market prices, Journal of Public Economics 42(1): 1–45.
Dusak, K.
(1973) Futures trading and investor returns: an investigation of commodity market risk premiums, Journal of Political Economy 81(6): 1387–1406.
Easterbrook, F. (1984) Two-agency cost explanations of dividends, American Economic Review 74(4): 650–9.
Edwards, J.S.S. (1989) Gearing, in J. Eatwell, M. Milgate, and P. Newman (eds), The New Palgrave – Finance, pp. 159–163. Macmillan Press, London.
Ehrlich, I. and Becker, G.S. (1972) Market insurance, self-insurance and self-protection, Journal of Political Economy 80(4): 623–48.
Elton, E.J. and Gruber, M.J. (1995) Modern Portfolio Theory and Investment Analysis, 5th edition. Wiley, New York.
Epstein, L.G. and Zin, S.E. (1989) Substitution, risk aversion, and the temporal behavior of consumption growth and asset returns I: A theoretical framework, Econometrica 57(4): 937–69.
Fama, E.F. (1965) The behavior of stock market prices, Journal of Business 38(1): 34–105.
Fama, E.F. (1970) Efficient capital markets: a review of theory and empirical work, Journal of Finance 25(2): 383–417.
Fama, E.F. (1977) Risk-adjusted discount rates and capital budgeting under uncertainty, Journal of Financial Economics 5(1): 3–24.
Fama, E.F. (1998) Determining the number of priced state variables in the ICAPM, Journal of Financial and Quantitative Analysis 33(2): 217–31.
Fama, E.F. and French, K.R. (1987) Commodity futures prices: some evidence on the forecast power, premiums, and the theory of storage, Journal of Business 60(1): 55–73.
Fama, E.F. and French, K.R. (1992) The cross-section of expected stock returns, Journal of Finance 47(2): 427–65.
Fama, E.F. and French, K.R. (1993) Common risk factors in the returns on stocks and bonds, Journal of Financial Economics 33(1): 3–56.
Fama, E.F. and MacBeth, J.D. (1973) Risk, return and equilibrium: empirical tests, Journal of Political Economy 81(3): 607–36.
Fischer, S.
(1980) Dynamic inconsistency, co-operation and the benevolent dissembling government, Journal of Economic Dynamics and Control 2: 93–107.
Fishburn, P.C. (1974) On the foundations of decision making under uncertainty, in M.S. Balch, D.L. MacFadden and S.Y. Wu (eds), Essays on Economic Behavior under Uncertainty. American Elsevier, New York.
Fisher, I. (1930) The Theory of Interest. Macmillan, New York.
Fisher, S.J. (1994) Asset trading, transactions costs and the equity premium, Journal of Applied Econometrics 9(Supplement): S71–S94.
Foster, E. and Sonnenschein, H. (1970) Price distortion and economic welfare, Econometrica 38(2): 281–97.
Friedman, M. (1968) The role of monetary policy, American Economic Review 58(1): 1–17.
Friend, I. and Blume, M.E. (1975) The demand for risky assets, American Economic Review 65(5): 900–22.
Fullenkamp, C., Tenorio, R. and Battalio, R. (2003) Assessing individual risk-attitudes using field data from lottery games, Review of Economics and Statistics 85(1): 218–26.
Galai, D. (1977) Test of market efficiency of the Chicago Board of Options Exchange, Journal of Business 50(2): 167–97.
Gordon, M.J., Paradis, G.E. and Rorke, C.H. (1972) Experimental evidence on alternative portfolio decision rules, American Economic Review 62(1/2): 107–18.
Goulder, H.L. and Williams III, R.C. (2003) The substantial bias from ignoring general equilibrium effects in estimating excess burden, and a practical solution, Journal of Political Economy 111(4): 898–927.
Graham, D.A. (1981) Cost-benefit analysis under uncertainty, American Economic Review 71(4): 715–25.
Graham, J.R. (2000) How big are the tax benefits of debt? Journal of Finance 55(5): 1901–41.
Grant, S.H. and Karni, E. (2004) A theory of quantifiable beliefs, Journal of Mathematical Economics 40(5): 515–46.
Grant, S.H. and Quiggin, J. (2004) The risk premium for equity: implications for resource allocation, welfare and policy. Mimeo, Rice University and University of Queensland.
Gray, R.
(1961) The search for a risk premium, Journal of Political Economy 69(3): 250–60.
Greenwald, B. and Stiglitz, J.E. (1986) Externalities in economies with imperfect information, Quarterly Journal of Economics 101(2): 229–64.
Guiso, L. and Paiella, M. (2001) Risk aversion, wealth and background risk. CEPR Discussion Paper 2728.
Hansen, L.P. and Jagannathan, R. (1991) Implications of security market data for models of dynamic economies, Journal of Political Economy 99(2): 225–62.
Hansen, L.P. and Singleton, K.J. (1982) Generalised instrumental variables estimation of nonlinear rational expectations models, Econometrica 50(5): 1269–86.
Hansen, L.P. and Singleton, K.J. (1983) Stochastic consumption, risk aversion and the temporal behavior of asset returns, Journal of Political Economy 91(2): 249–65.
Harberger, A.C. (1964) The measurement of waste, American Economic Review 54(3): 58–76.
Harberger, A.C. (1969) Professor Arrow on the social discount rate, in G.G. Somers and W.D. Wood (eds), Cost–Benefit Analysis of Manpower Policies, pp. 76–88. Industrial Relations Centre, Queen’s University, Kingston, Ontario, Canada.
Harberger, A.C. (1971) Three basic postulates for applied welfare economics: an interpretive essay, Journal of Economic Literature 9(3): 785–97.
Harris, J.A. and Townsend, R. (1985) Allocation mechanisms, asymmetric information and the revelation principle, in G. Feiwel (ed.), Issues in Contemporary Microeconomics and Welfare, pp. 379–94. State University of New York Press, Albany.
Harris, M. and Raviv, A. (1991) The theory of capital structure, Journal of Finance 46(1): 297–355.
Hatta, T. (1977) A theory of piecemeal policy recommendations, Review of Economic Studies 44(1): 1–21.
Haugen, R.A. and Senbet, L.W. (1978) The insignificance of bankruptcy costs to the theory of optimal capital structure, Journal of Finance 33(2): 383–93.
Hayek, F. (1978) Denationalisation of Money: The Argument Refined. Institute of Economic Affairs, London.
Heaton, J.
and Lucas, D.J. (1996) Evaluating the effects of incomplete markets on risk sharing and asset pricing, Journal of Political Economy 104(3): 443–87.
Helms, L.J. (1985) Expected consumer’s surplus and the welfare effects of price stabilisation, International Economic Review 26(3): 603–17.
Hicks, J.R. (1939) Value and Capital. Clarendon Press, Oxford.
Hirshleifer, J. (1965) Investment decisions under uncertainty: choice theoretic approaches, Quarterly Journal of Economics 79(4): 509–36.
Hirshleifer, J. (1970) Investment, Interest and Capital. Prentice Hall, Englewood Cliffs, NJ.
Houthakker, H.S. (1968) Normal backwardation, in J.N. Wolfe (ed.), Value, Capital and Growth. Edinburgh University Press, Edinburgh.
Jensen, M.C. and Meckling, W.H. (1976) Theory of the firm: managerial behaviour, agency costs and ownership structure, Journal of Financial Economics 3(4): 305–60.
Jones, C.M. (2005) Applied Welfare Economics. Oxford University Press, Oxford.
Jones, C.M. and Milne, F. (1992) Tax arbitrage, existence of equilibrium and bounded tax rebates, Mathematical Finance 2(3): 189–96.
Kaldor, N. (1939) Speculation and economic stability, Review of Economic Studies 7(1): 1–27.
Kaplow, L. (1996) The optimal supply of public goods and the distortionary cost of taxation, National Tax Journal 49: 523–33.
Karni, E. (1985) Decision Making under Uncertainty: The Case of State-Dependent Preferences. Harvard University Press, Cambridge, MA.
Karni, E. (1993) A definition of subjective probabilities with state-dependent preferences, Econometrica 61(1): 187–98.
Karni, E., Schmeidler, D. and Vind, K. (1983) On state-dependent preferences and subjective probabilities, Econometrica 51(4): 1021–31.
Keynes, J.M. (1923) Some aspects of commodity markets, Manchester Guardian Commercial Reconstruction Supplement 29, March. Reprinted (1973) in The Collected Writings of John Maynard Keynes, Vol. VII. Macmillan, London.
Kim, E.H.
(1982) Miller’s equilibrium, shareholder leverage clienteles, and optimal capital structure, Journal of Finance 37(2): 301–23.
Kim, E.H., Lewellen, W.G. and McConnell, J.J. (1979) Financial leverage clienteles: theory and evidence, Journal of Financial Economics 7(1): 83–109.
King, M.A. (1977) Public Policy and the Corporation. Chapman & Hall, London.
Knight, F. (1921) Risk, Uncertainty and Profit. Houghton Mifflin, Boston.
Kocherlakota, N.R. (1990) On the ‘discount’ factor in growth economies, Journal of Monetary Economics 25(1): 43–7.
Kreps, D.M. (1990) A Course in Microeconomic Theory. Princeton University Press, Princeton, NJ.
Kreps, D.M. and Porteus, E.L. (1978) Temporal resolution of uncertainty and dynamic choice theory, Econometrica 46(1): 185–200.
Laffont, J.-J. (1989) The Economics of Uncertainty and Information (translated by J.P. Bonin and H. Bonin). MIT Press, Cambridge, MA.
Leland, H. and Pyle, D. (1977) Information asymmetries, financial structure, and financial intermediation, Journal of Finance 32(2): 371–88.
Lengwiler, Y. (2004) Microfoundations of Financial Economics: An Introduction to General Equilibrium Asset Pricing. Princeton University Press, Princeton, NJ.
Lettau, M. and Ludvigson, S. (2001) Resurrecting the (C)CAPM: a cross-sectional test when risk premia are time varying, Journal of Political Economy 109(6): 1238–87.
LeRoy, S.F. (1989) Efficient capital markets and martingales, Journal of Economic Literature 27(4): 1583–1621.
Lintner, J. (1965) The valuation of risky assets and the selection of risky investment in stock portfolios and capital budgets, Review of Economics and Statistics 47(1): 13–37.
Long, J.B. (1974) Stock prices, inflation and the term structure of interest rates, Journal of Financial Economics 1(2): 131–70.
MacBeth, J.D. and Merville, L.J. (1979) An empirical examination of the Black–Scholes call option pricing model, Journal of Finance 34(5): 1173–86.
McGrattan, E.R. and Prescott, E.C.
(2003) Average debt and equity returns: puzzling, American Economic Review 93(2): 392–7.
Machina, M. (1982) ‘Expected utility’ analysis without the independence axiom, Econometrica 50(2): 277–323.
Malinvaud, E. (1972) The allocation of individual risks in large markets, Journal of Economic Theory 4(2): 312–28.
Mankiw, N.G. and Shapiro, M.D. (1986) Risk and return: consumption versus market beta, Review of Economics and Statistics 68(3): 452–9.
Marglin, S.A. (1963a) The social rate of discount and the optimal rate of investment, Quarterly Journal of Economics 77(1): 95–111.
Marglin, S.A. (1963b) The opportunity costs of public investment, Quarterly Journal of Economics 77(2): 274–89.
Markowitz, H. (1959) Portfolio Selection. Yale University Press, New Haven, CT.
Mas-Colell, A., Whinston, M.D. and Green, J.R. (1995) Microeconomic Theory. Oxford University Press, Oxford and New York.
Mehra, R. and Prescott, E.C. (1985) The equity premium: a puzzle, Journal of Monetary Economics 15(2): 145–61.
Merton, R.C. (1973a) An inter-temporal capital asset pricing model, Econometrica 41(5): 867–87.
Merton, R.C. (1973b) The theory of rational option pricing, Bell Journal of Economics and Management Science 4(1): 141–83.
Meyer, J. (1987) Two-moment decision models and expected utility maximisation, American Economic Review 77(3): 421–30.
Micu, M. and Upper, C. (2006) Derivatives markets, BIS Quarterly Review (March): 43–50.
Miller, M.H. (1977) Debt and taxes, Journal of Finance 32(2): 261–275.
Miller, M.H. (1988) Modigliani-Miller propositions after thirty years, Journal of Economic Perspectives 2(4): 99–120.
Miller, M.H. and Rock, K. (1985) Dividend policy under asymmetric information, Journal of Finance 40(4): 1031–51.
Modigliani, F. and Miller, M. (1958) The cost of capital, corporation finance, and the theory of investment, American Economic Review 48(3): 261–97.
Modigliani, F. and Miller, M.
(1961) Dividend policy, growth, and the valuation of shares, Journal of Business 34(4): 411–33.
Modigliani, F. and Miller, M. (1963) Corporate income taxes and the cost of capital: a correction, American Economic Review 53(3): 433–43.
Molina, C.A. (2005) Are firms underleveraged? An examination of the effect of leverage on default probabilities, Journal of Finance 60(3): 1427–59.
Myers, S.C. (1984) The capital structure puzzle, Journal of Finance 39(3): 575–92.
Myers, S.C. and Majluf, N.S. (1984) Corporate financing and investment decisions when firms have information that investors do not have, Journal of Financial Economics 13(2): 187–221.
Newbery, D.M.G. and Stiglitz, J.E. (1981) The Theory of Commodity Price Stabilization: A Study in the Economics of Risk. Clarendon Press, Oxford.
Pauly, M.V. (1974) Over-insurance and public provision of insurance: the roles of moral hazard and adverse selection, Quarterly Journal of Economics 88(1): 44–62.
Peress, J. (2004) Wealth, information acquisition and portfolio choice, Review of Financial Studies 17(3): 879–914.
Phillips, A.W. (1958) The relation between unemployment and the rate of change in money wages in the United Kingdom, 1861–1957, Economica 25(100): 283–99.
Pigou, A.C. (1947) A Study in Public Finance, 3rd edition. Macmillan Press, London.
Pratt, J.W. (1964) Risk aversion in the small and in the large, Econometrica 32(1/2): 122–36.
Quizon, J., Binswanger, H. and Machina, M. (1984) Attitudes toward risk: further remarks, Economic Journal 94(373): 144–8.
Radner, R. (1972) Existence of equilibrium of plans, prices and price expectations in a sequence of markets, Econometrica 40(2): 289–303.
Riley, J.G. (1975) Competitive signalling, Journal of Economic Theory 10(2): 174–86.
Rockwell, C.S. (1967) Normal backwardation, forecasting and the return to commodity futures traders, Food Research Institute Studies 7(Supplement): 107–30.
Roll, R. (1977a) A critique of the asset pricing theory tests.
Part I: On past and potential testability of the theory, Journal of Financial Economics 4(2): 129–76.
Roll, R. (1977b) An analytical valuation formula for unprotected American call options on stocks with known dividends, Journal of Financial Economics 5(2): 251–8.
Roll, R. (1984) Orange juice and weather, American Economic Review 74(5): 861–80.
Ross, S.A. (1976) The arbitrage theory of capital asset pricing, Journal of Economic Theory 13(3): 341–60.
Ross, S.A. (1977a) Return, risk and arbitrage, in I. Friend and J.L. Bicksler (eds), Risk and Return in Finance, Vol. 1, pp. 189–218. Ballinger, Cambridge, MA.
Ross, S.A. (1977b) The determination of financial structure: the incentive-signalling approach, Bell Journal of Economics 8(1): 23–40.
Ross, S.A. (1978) Mutual fund separation in financial theory – the separating distributions, Journal of Economic Theory 17(2): 254–86.
Ross, S.A. (2005) Capital structure and the cost of capital, Journal of Applied Finance 15(1): 5–23.
Rothschild, M. and Stiglitz, J. (1976) Equilibrium in competitive insurance markets, Quarterly Journal of Economics 90(4): 629–49.
Rozeff, M.S. (1982) Growth, beta and agency costs as determinants of dividend payout ratios, Journal of Financial Research 5(3): 249–59.
Rubinstein, M.E. (1985) Nonparametric tests of alternative option pricing models, Journal of Finance 40(3): 455–80.
Samuelson, P.A. (1954) The pure theory of public expenditure, Review of Economics and Statistics 36(4): 387–9.
Samuelson, P.A. (1964) Principles of efficiency: discussion, American Economic Review 54(3): 93–6.
Samuelson, P.A. (1965) Proof that properly anticipated prices fluctuate randomly, Industrial Management Review 6(2): 41–9.
Sandmo, A. and Dréze, J.H. (1971) Discount rates for public investment in closed and open economies, Economica 38(152): 396–412.
Sarig, O. (2004) A time-series analysis of corporate payout policies, Review of Finance 8(4): 515–36.
Sarig, O. and Scott, J.
(1985) The puzzle of financial leverage clienteles, Journal of Finance 40(5): 1459–67.
Savage, L.J. (1954) The Foundations of Statistics. Wiley, New York.
Selden, L. (1978) A new representation of preferences over ‘certain × uncertain’ consumption pairs: the ‘ordinal certainty equivalent’ hypothesis, Econometrica 46(5): 1045–60.
Selgin, G. (1988) The Theory of Free Banking. Rowman and Littlefield, Totowa, NJ.
Sharpe, W. (1964) Capital asset prices: a theory of market equilibrium under conditions of risk, Journal of Finance 19(3): 425–42.
Sharpe, W. (1966) Mutual fund performance, Journal of Business 39(1): 119–38.
Shavell, S. (1979) On moral hazard and insurance, Quarterly Journal of Economics 93(4): 541–62.
Simon, D.P. (1996) An empirical reconciliation of the Miller model and the generalised capital structure models, Journal of Banking and Finance 20(1): 41–56.
Sjaastad, L.A. and Wisecarver, D.L. (1977) The social cost of public finance, Journal of Political Economy 85(3): 513–47.
Snow, A. and Warren Jr, R.S. (1996) The marginal welfare cost of public funds: theory and estimates, Journal of Public Economics 61(2): 289–305.
Stiglitz, J.E. (1974) On the irrelevance of corporate financial policy, American Economic Review 64(6): 851–66.
Stiglitz, J.E. (1981) Pareto optimality and competition, Journal of Finance 36(2): 235–51.
Stiglitz, J.E. (1982) Self-protection and Pareto efficient taxation, Journal of Public Economics 17(2): 213–40.
Stiglitz, J.E. and Dasgupta, P. (1971) Differential taxation, public goods and economic efficiency, Review of Economic Studies 38(2): 151–74.
Stoll, H.R. (1969) The relationship between put and call option prices, Journal of Finance 24(5): 802–24.
Swan, P.L. (2006) Optimal portfolio balancing under conventional preferences and transactions explains the equity premium puzzle. Paper presented at the inaugural Trevor Swan Distinguished Lecture in Economics at the Australian National University.
Taylor, B.
(2007) GFD guide to total returns on stocks, bonds and bills. Global Financial Data Inc. http://www.globalfinancialdata.com/articles/total_return_guide.doc (accessed August 2007).
Tease, W. (1988) The expectations theory of the term structure of interest rates, Economic Record 64(185): 120–7.
Telser, L.G. (1981) Why there are organised futures markets, Journal of Law and Economics 24(1): 1–22.
Tirole, J. (2006) The Theory of Corporate Finance. Princeton University Press, Princeton, NJ.
Tobin, J. (1958) Liquidity preference as behaviour toward risk, Review of Economic Studies 25(2): 65–86.
Vickery, W. (1964) Principles of efficiency: discussion, American Economic Review 54(3): 88–92.
von Neumann, J. and Morgenstern, O. (1944) Theory of Games and Economic Behavior. Princeton University Press, Princeton, NJ.
Warner, J. (1977) Bankruptcy costs: some evidence, Journal of Finance 32(2): 337–47.
Weil, P. (1992) Equilibrium asset prices with undiversifiable labour income risk, Journal of Economic Dynamics and Control 16(3/4): 769–90.
Weisbrod, B.A. (1964) Collective consumption services of individual consumption goods, Quarterly Journal of Economics 78(3): 471–7.
Wheatley, S. (1988) Some tests of international equity integration, Journal of Financial Economics 21(2): 177–212.
White, L. (1989) Competition and Currency: Essays in Free Banking and Money. New York University Press, New York.
Wilson, C.A. (1977) A model of insurance markets with incomplete information, Journal of Economic Theory 16(2): 167–207.
Author index
Abel 147, 148n41
Aivazian and Callen 229n25
Aiyagari and Gertler 149
Alderson and Betker 225
Allais 86, 87
Altman 207, 225
Andrade and Kaplan 225
Anscombe and Aumann 84, 85
Arrow 3, 4, 6, 7, 12, 26, 71, 73, 77–81, 83, 88, 91, 92, 95, 96, 100, 102n51, 105, 106, 109, 134, 150, 203, 208n2, 210, 212, 213, 217, 218, 220, 224, 232, 252, 253, 269, 270, 277
Arrow and Lind 12, 150, 252, 253, 269, 276
Atkinson and Stern 259
Auerbach 227, 231, 240, 250, 254n9
Auerbach and King 227, 231n27, 250
Bailey 12, 47, 150, 252, 253, 269, 277
Bailey and Jensen 12, 150n47, 252, 253, 269, 276
Ballard and Fullerton 253n6, 259, 261n19, 262n21
Barberis, Huang and Santos 87, 148
Barclay and Smith 239
Barnea, Haugen and Senbet 225n18
Barsky, Juster, Kimball and Shapiro 89
Beckers 196
Benartzi and Thaler 86n29
Benge and Robinson 245
Bergson 11n6, 46n32, 251n1, 255
Bhattacharya 196, 241
Binswanger 89
Black 9, 32, 127, 144, 184, 192, 194–7
Black and Scholes 9, 184, 192, 194, 196
Black, Jensen and Scholes 144
Blume and Friend 144
Boadway 267
Bodie and Rozansky 202
Bradford 12, 240, 252, 253, 269, 275
Breeden 7, 93n39, 95n44, 107, 136n20, 139–41, 145
Breeden and Litzenberger 7, 93n39, 95n44, 107, 139
Breeden, Gibbons and Litzenberger 145
Brennan 128
Bruce and Harris 267, 269
Campbell 145, 148n43, 156
Campbell and Cochrane 145, 148, 156
Chambers and Quiggin 82n23
Chen, Roll and Ross 131
Cochrane 2, 93n37, 93n39, 103n53, 105, 107, 108, 130n12, 133, 139, 145–8, 156, 194
Constantinides 148–150, 160
Constantinides and Duffie 149, 150, 160
Cootner 202
Copeland and Weston 108, 111
Cox and Ross 196
Cox, Ross and Rubinstein 196
Dammon 227, 229, 231n27
Dammon and Green 229
DeAngelo and Masulis 79n20, 224
Debreu 3, 6, 7, 71, 73, 75–8, 80, 81, 83, 92, 95, 109, 134, 208n2, 212, 213, 217, 218, 220, 224, 232, 253
Deshmukh 242
Diamond and Mirrlees 259
Diewert 267, 269
Dixit 9, 162, 267n27, 277
Dowd 48n35
Dréze 12, 252, 267, 269, 271, 274, 275
Dréze and Stern 267
Dusak 200, 202
Easterbrook 241
Edwards
249, 250
Ehrlich and Becker 74n8, 169
Elton and Gruber 128
Epstein and Zin 96n45, 148
Fama 90n34, 126, 137n23, 144, 145, 153, 154, 156, 202
Fama and French 144, 145, 202
Fama and MacBeth 144, 156
Fischer 229n24
Fishburn 85
Fisher 4, 14–16, 23, 32, 35–37, 41–4, 47, 48, 69, 70, 117n7, 149, 209
Foster and Sonnenschein 80n22, 269n29
Friedman 43n30
Friend and Blume 146
Fullenkamp, Tenorio and Battalio 89, 146
Galai 196
Gordon, Paradis and Rorke 89
Goulder and Williams III 274
Graham D. 266, 267n26
Graham J. 207, 225, 226
Grant and Karni 83n25, 85
Grant and Quiggin 150
Gray 202
Greenwald and Stiglitz 175n8
Guiso and Paiella 89
Hansen and Jagannathan 145, 146n37
Hansen and Singleton 145
Harris and Raviv 206, 225n17
Harberger 11, 12, 47n33, 251n1, 252–3, 255, 261, 269, 271, 274, 275
Harris and Townsend 175n8
Hatta 11, 265n22, 267
Haugen and Senbet 224, 225
Hayek 48n35
Heaton and Lucas 149
Helms 266
Hicks 198
Hirshleifer 3, 14, 23, 35
Houthakker 202
Jensen and Meckling 225
Jones 46n32, 229, 253n6, 254n9, 261n19, 265n22, 267n27, 268n28, 275
Jones and Milne 229
Kaldor 198
Kaplow 259
Karni 83, 85
Karni, Schmeidler and Vind 85
Keynes 198
Kim 231
Kim, Lewellen and McConnell 232
King 227, 231, 240, 249, 250
Knight 6n4, 72
Kocherlakota 147n40
Kreps 85n27, 148
Laffont 162n1
Leland and Pyle 225
Lengwiler 2, 23n11, 73n6, 93n39, 147n40
Lettau and Ludvigson 144, 156
LeRoy 90n34
Lintner 7, 107, 122, 136
Long 5, 16, 43, 49–53, 55, 58, 82, 103–5, 127, 131–4, 136n20, 137, 187–9, 200–2, 205, 210, 232, 235, 241, 261, 272
Macbeth and Merville 196
Machina 83, 89
Malinvaud 23n10, 162n1
Mankiw and Shapiro 145
Marglin 12, 252, 253, 269, 275, 276
Markowitz 108
Mas-Colell, Whinston and Green 16n3, 80n22, 83, 85n28
McGrattan and Prescott 147n39
Mehra and Prescott 8, 86n29, 96n45, 144–8, 160
Merton 7, 107, 129, 136, 137, 196
Meyer 102n52
Micu and Upper 9, 183
Miller 2, 10, 64, 80n21, 204–8, 210, 212, 215, 216, 218, 219, 221, 226–33, 236, 238, 239, 241, 245, 247, 249, 250
Miller and Rock
241
Modigliani and Miller 2, 206–8, 212, 215, 216, 219, 221, 238
Molina 207, 226
Myers 225, 242
Myers and Majluf 225, 242
Newbery and Stiglitz 162
Pauly 169n6
Peress 89
Phillips 43n30
Pigou 261
Pratt 88
Quizon, Binswanger and Machina 89
Radner 75n10
Riley 175n8
Rockwell 202
Roll 131, 144, 196, 202
Ross 7, 91, 102, 107, 129, 131, 195, 225, 235
Rothschild and Stiglitz 171
Rozeff 241
Rubinstein 195, 197
Samuelson 11n6, 12, 46n32, 90, 251–3, 255–9, 261, 263, 264, 269, 276
Sandmo and Dréze 12, 252, 269, 271, 274, 275
Sarig 231, 239, 241
Sarig and Scott 231
Savage 71n2, 73, 83, 85, 86
Selden 148
Selgin 48n35
Sharpe 7, 107, 122, 136, 145, 146, 148n43
Shavell 169n6, 171n7
Simon 232n29
Sjaastad and Wisecarver 12, 252, 269, 275, 276
Snow and Warren 261n19
Stiglitz 162, 171, 175n8, 231, 259
Stiglitz and Dasgupta 259
Stoll 189
Swan 150
Taylor 111
Tease 52
Telser 202
Tirole 225n17
Tobin 108
Vickery 12, 252, 253, 269, 276
von Neumann and Morgenstern 73, 83
Warner 207, 225
Weil 149
Weisbrod 266
Wheatley 145
White 48n35, 53, 194
Wilson 175n8
Subject index
actuarially fair prices 8, 72, 161
adverse selection 8, 9, 149, 161, 162, 169, 171, 277
agency costs 206, 222, 224, 225, 238, 241
aggregate uncertainty 11, 85, 97, 103, 130, 139, 162, 163, 183, 251, 252, 269, 274
Allais paradox 87
annuity 55
arbitrage pricing theory 7, 8, 107–9, 129–31, 133, 137, 139, 143, 156, 159, 192, 202
Arrow-Debreu 3, 6, 7, 71, 73, 77, 78, 80, 81, 83, 92, 95, 109, 134, 212, 213, 217, 218, 220, 224, 232, 253; pricing model 7, 73, 81, 87, 90–2, 100, 217; state-preference model 3, 6, 73, 75, 77, 82
asset economy 4, 15, 26, 27, 31, 34, 35, 37, 38, 40, 42, 46, 48, 71, 77, 78, 80, 92, 95, 109, 134, 207
asset pricing puzzles 145–7; equity risk premium 8, 144–6, 148–51, 160; low interest rate 146, 147
asset substitution effect 225
asymmetric information 2–4, 8–10, 15, 97, 98, 129, 149, 150, 161, 163, 167, 169–71, 175, 178–81, 204–6, 222–4, 231, 235, 236, 241, 242, 277
autarky economy 3, 4, 14, 16–18,
21, 22, 31
Black-Scholes option pricing model 9, 192, 194–6
bonds: consol 57; coupon 57, 186, 187, 189; discount 57, 104, 201
borrowing constraints 79, 80, 123, 127, 149, 208, 210, 229, 231
capital asset pricing model 7, 8, 13, 107–9, 111, 117, 118, 120–3, 125–9, 131, 133–7, 139–41, 143–5, 151, 153, 155, 156, 158–60, 192, 202, 212–14, 216, 217, 246, 247
capital market line 108, 111, 121, 128
capital structure 10, 204–7, 213, 216, 218, 224–6, 231, 249
certainty equivalent net cash flows 151
classical dichotomy 25
classical finance model 2, 4–6, 10, 16, 42, 43, 204, 206, 207, 212, 213, 216–18, 220, 222–4, 231, 234–6, 238, 239
common information 2, 7, 9–11, 16, 63, 97, 151, 161, 163, 171, 179, 181, 185, 187, 251–3
compensated welfare change 267
compensating variation 12, 265–7; ex ante CV 12, 266, 267; expected CV 12, 266, 267
complete capital market 71, 72, 78–80, 82, 95–7, 183, 218, 220, 233, 235, 251, 253, 254; double complete 232
conditional perfect foresight 6, 71, 73, 75, 77, 133
consumption based pricing model 2, 7–9, 107, 109, 130, 133–7, 140, 142, 143, 145, 149, 159, 160, 184, 190
consumption beta capital asset pricing model 7, 8, 73, 87, 92–5, 101, 103–5, 107, 108, 133, 139–41, 143–7, 150, 156, 159, 160, 192, 199, 200
contingent claims 75, 77
continuous compounding 56
convenience yield 199, 200, 202
conventional welfare equation 252, 254, 255, 257, 258, 261, 265, 270, 279
corporate tax shields 10, 206, 207, 220, 222, 224, 234, 235
cost of capital 41, 43, 53, 54, 63, 64, 67, 122, 150, 151, 204, 206, 209, 210, 213–16, 223, 233–6, 246–8, 269, 274, 279; marginal cost of capital 63, 233; user cost of capital 54, 63, 64, 67, 213, 214, 216, 223, 233–6, 246–8; weighted average cost of capital 233
Debreu economy 71, 73, 75–8, 80, 81, 213, 217, 218, 220, 224, 232
depreciating assets 53, 54, 234
depreciation: economic 54, 59, 61–3, 68–70, 234, 235, 246; historic cost 62, 69; measured 61–3, 68, 246
discount factors 3–7, 9, 20, 23, 31, 49, 51, 75, 77–9, 81,
88, 90, 91, 101, 103, 104, 107, 133, 143, 151, 153–6, 210, 217, 253, 254, 257
distorting taxes 11, 43, 261
distributional effects 12, 251, 255, 257, 266–8, 275
diversification effect 8, 108, 111, 112, 115, 117–19, 139, 158, 159, 161, 183
dividend: imputation 233, 242, 243, 245, 248, 249; puzzle 10, 129, 227, 238, 239, 242, 248
efficiency effects 11, 236, 265
efficient markets hypothesis 87, 90
efficient mean-variance frontier 108, 113, 118–22, 127, 129
endowment economy 22, 24, 26, 28, 37, 75, 96
equation of yield 5, 52, 54, 58, 67
equity: limited liability 10, 58, 205, 206, 215, 218, 222, 225
expectations hypothesis 49–52, 104, 105, 201; pure 105
expected utility 7, 11, 12, 49–52, 73, 78, 83–6, 88, 92, 94, 99–101, 110, 136, 148, 160, 163, 164, 170, 179, 181, 253–5, 265–8; generalised 148; independence axiom 83–7; Neumann-Morgenstern 7, 73, 83–7, 92, 93, 96, 103, 110, 123, 133, 142, 143, 163; state-dependent 85, 99; state-dependent subjective 85, 100; subjective 85, 86, 100
externality 171, 181
Fisher effect 4, 16, 41–4, 47, 48, 69, 70
Fisher Separation Theorem 4, 15, 35–7, 209
forward contracts 4, 9, 26, 27, 39, 75–7, 81, 183, 197, 198; over the counter 184, 196, 197
free cash flows 206, 225, 241
futures contracts 72, 76, 183, 184, 197–9; margins 197; marked to market 197; price limits 9, 183, 184, 197
generalised state preferences 73
habit theory 147
Hansen-Jagannathan bound 145
heterogeneous expectations 126
holding period yield 53
homogeneous expectations 5, 42, 123, 131, 143, 159
income: economic 5, 52, 53, 59–63, 69, 70, 80, 243, 245; measured 59, 61, 62, 69, 243, 244
income effects 4, 11, 29, 32, 36, 265, 267, 268
incomplete capital market 79, 82
individualistic social welfare function 46, 255
inflation 2, 4, 5, 16, 40–8, 62, 63, 68–70, 105, 117, 130, 131, 202; expected 16, 41, 43–6, 48, 62, 63, 68–70, 117
information signalling 225
insurance 8, 9, 13, 15, 149, 150, 160, 161–72, 175, 177–182, 183, 198, 204, 277
inter-corporate equity 240, 248
interest rate 4–6, 12, 27–32, 35, 38, 40–5, 47, 49–51, 55–58, 94, 103–6, 137, 143–50; forward 104; long term 5, 49–51; short term 5, 49, 50, 201; term structure 5, 49, 50, 58, 66, 103
intermediate uncertainty 153–6, 199, 200
intertemporal consumption based pricing model 7, 8, 107, 108, 129, 133, 136, 137, 139–41, 143–5, 156, 159, 160, 192, 201, 202
investment opportunities 1, 5, 13, 14, 16, 18, 20–3, 31, 72, 76, 111; private 14, 16, 18, 20–2, 31, 76
law of large numbers 97, 161, 162, 164
leverage related costs 10, 206, 211, 218, 221–3, 226, 234, 247; bankruptcy costs 10, 206, 207, 222–5, 247; costly default 223; lost corporate tax shields 10, 206, 207, 222, 224, 234, 235
lotteries 84–6; horse-race 84; roulette-wheel 84, 85
marginal excess burden of taxation 261, 277
marginal social cost of public funds 11, 251, 261–4, 277
martingale model 90, 91; discounted 90, 91
mean: arithmetic 50; geometric 50
mean-variance analysis 7, 73, 85, 89, 90, 92, 101, 103, 136, 143, 151, 159
mergers 35, 37
Miller equilibrium 226–33; debt specialists 229, 245; equity specialists 229–31, 236, 245; marginal investors 229–31, 233, 245, 249, 250; tax clienteles 207, 229, 231, 236, 249
mimicking factor portfolio 8, 101, 108, 129, 131, 139, 140, 156, 201
minimum variance portfolio 114, 115, 120, 158
Modigliani and Miller 2, 10, 16, 64, 204–8, 210–12, 215, 216, 218–21, 223–5, 229, 231, 232, 234, 238, 247, 249; dividend policy irrelevance 238; leverage irrelevance 64, 205, 207, 211–13, 215–18, 220, 223–5, 231, 232, 234, 247, 249
money: currency 4, 9, 14–16, 22, 24, 25, 37–40, 42, 44–8, 70, 183, 265; fiat money 14, 15, 23, 37, 38, 40, 42; optimal demand 39; private currency 48; seigniorage 25, 38, 40, 44, 45, 70
moral hazard 8, 9, 149, 161, 162, 169–71, 180, 277
mutuality principle 8, 101, 127, 160, 161
no arbitrage condition 5, 6, 34, 53, 64, 72, 80, 82, 83, 90, 107, 123, 130, 131, 133, 183, 192, 195, 201, 202, 218, 233, 238
normal backwardation 199, 201–3
optimal capital structure 206, 218, 224
Subject index option contracts 6, 9, 37, 72, 183–90, 192, 193, 195–7, 200, 203, 275; American 184; butterfly 188; call option 183–5, 187, 189–96, 203; European 192; hedge portfolio 192–4, 203; put option 183, 185, 187, 189, 203; putcall parity 189, 192; spread 5, 6, 27, 36, 47, 72, 149, 150, 186–8, 252, 269; straddle 187, 188; strap 187; strip 187 Pareto efficiency 23, 27, 35, 39, 75, 95, 161, 163 pecking order theory 225, 242 perpetuity 54, 55, 57, 276 pooling equilibrium 175–8, 180, 181 power utility function 92, 95, 96, 105, 140, 145–8, 159 pricing anomalies: closed end fund effect 87; January effect 87; small firm effect 87; weekend effect 87; see also asset pricing puzzles probabilities: objective 75, 83–6, 90, 100; subjective 75, 83–7, 90, 99, 100 public good 11, 12, 251, 253, 254, 256–8, 261, 263–5, 267 public sector projects 3, 7, 12, 150, 251, 252, 269, 274 quadratic preferences 85, 101–3, 126 rate of time preference 8, 17, 18, 87, 94, 106, 133, 140, 145–7, 159 revenue effect 259 risk: aggregate consumption 7, 8, 86, 87, 105, 107, 108, 133, 136, 139–41, 143, 144, 148–51, 153, 155, 156, 159, 160, 199–202; diversifiable 7, 8, 72, 92, 95–9, 103, 108, 119, 128, 130, 135, 137, 149, 158, 161, 162, 200, 224, 252, 269, 277; individual 9, 13, 72, 85, 161–4, 169, 171, 179, 183, 198; individual consumption 122, 149, 150; market 7, 8, 51, 60, 63, 72, 94, 95, 103, 107–9, 115, 117–19, 122–4, 127, 129, 131, 136, 137, 139, 144, 151–4, 200, 213, 216; non-diversifiable 72, 103, 135, 137, 158, 161, 162, 224 risk aversion 8, 72, 86–9, 94–6, 102, 108, 125, 139, 140, 143–6, 148, 150, 159, 160, 167, 225, 241; coefficient of absolute 88, 89; coefficient of constant absolute 89; coefficient of constant relative 89, 95, 96, 108, 140; coefficient of increasing absolute 89; coefficient of increasing relative 89; coefficient of relative 8, 89, 95, 96, 108, 139, 140, 143, 145, 146, 148, 160; risk neutral 90, 91, 94, 102, 106, 125, 164, 180, 199, 257, 267 Roll critique 144 Samuelson 
condition 256–9, 261, 263, 264; revised 256, 262, 263, 264 securities: conventional 79; primitive (Arrow) 79, 81, 91, 96, 97, 105, 106, 217, 218, 224 security market line 122, 123, 125 self insurance 149, 150, 169, 182 self protection 161, 169–71 separating equilibrium 172, 175, 177–9, 181; constrained 177–9, 181 shadow discount rate 278, 279; weighted average formula 12, 252, 253, 269–71, 274–6 shadow price of capital 279 shadow value of government revenue 11, 12, 267, 269 share repurchase constraints 10 Sharpe ratio 146 short-selling constraints 229, 231, 250 spending effect 263, 266 state space 6, 73, 74, 80 state-dependent preferences 83, 85–7, 97, 99, 100, 143, 159, 161, 165 state-independent preferences 83, 858, 100 Stein’s Lemma 135, 136 storage 5, 9, 14–16, 18–21, 23, 34, 184, 198–200, 203 substitution effects 30, 36, 37, 265 takeovers 35, 37 tax arbitrage 80, 228, 229, 231, 245, 247 tax preferences 227, 229–32, 238, 245, 247–50 taxes: corporate 10, 219, 227, 233, 236, 238, 244, 245, 247–9; imputation 10, 238, 243–5, 248, 249; income 252, 270, 271, 274, 278, 279; personal rate 207, 227, 228, 245, 248, 249; trading costs 23–5, 35, 37–40, 42–4, 79, 81, 82, 98, 129, 150, 161, 167–9, 239 transactions costs 26, 35, 39, 65, 79, 97, 128, 129, 149, 150, 238–40 value function 23, 134, 136 wealth effects 16, 25, 42, 44, 70 yield curve 50, 51, 58, 66, 105
The Influence Network: A New Foundation for Emergent Physics

Abstracts and Talks, Foundations, Physics, Quantum Mechanics, Space-Time

My former student Prof. Newshaw Bahreyni and my current student James Walsh and I will be making a joint presentation at the Beyond SpaceTime 2015 Workshop, organized by Christian Wuthrich, Nick Huggett, and David Rideout in San Diego in March 2015. We are excited to have the opportunity to discuss our foundational ideas resulting in emergent physics.

The Influence Network: A New Foundation for Emergent Physics

Kevin H. Knuth 1,2, Newshaw Bahreyni 3, James L. Walsh 1

1. Department of Physics, University at Albany, Albany NY 12222, USA
2. Department of Informatics, University at Albany, Albany NY 12222, USA
3. Department of Physics, Kenyon College, Gambier, OH 43022, USA

Foundational physical theories often suffer from the fact that relatively high-level concepts, such as space, time, mass and energy, and their basic relationships to one another, are often taken as foundational concepts. While such assumptions are convenient and well-founded from the perspective of a relatively well-functioning set of higher-level physical theories, taking them as foundational concepts prevents one from gaining deeper insights, which is critical to developing a theory based on truly foundational concepts. We consider a simplified picture based on the simple idea that things influence one another. The nature of such things and their influences on one another is not assumed to be known. Instead we simply take as a starting point that things exist and they influence one another, and investigate to what degree a mathematical description of such things and their influences is constrained by simple symmetries or order-theoretic relations.
Consistent quantification of order-theoretic structures leads to constraint equations that often mirror what we consider to be physical laws [1]. For example, in past work, by considering the symmetries associated with combining quantum mechanical measurement sequences as well as consistency with probability theory [2], we have derived the Feynman rules for manipulating quantum amplitudes [3, 4].

More recently, we have applied these concepts to a set of objects that influence one another. Influence is assumed to occur in a discrete fashion such that an instance of influence couples precisely two objects in a directed fashion, so that one object can be said to influence the other object. Influence is also assumed to be transitive so that one object can influence another via an intermediary. For each instance of influence, we can define two events: the act of one object influencing and the act of the second object being influenced. Together this allows one to describe objects and their influences as a partially ordered set (poset). As such, an object (which, for lack of a better word, we refer to as a particle) is represented as a chain of events. In this sense, and only this sense, is this theory related to causal sets. The theories differ dramatically in their approach to mathematically representing such posets. The critical difference is that we apply the concepts of consistent quantification with respect to a distinguished chain of events, which we refer to as an observer chain or an embedded observer.

We (Knuth & Bahreyni) have demonstrated that the quantification of partially-ordered (causally-ordered) sets of events by an embedded observer results in constraint equations that reflect fundamental mathematics of space-time (Minkowski metric and Lorentz transformations) [5, 6, 7].
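The flavor of these constraint equations can be sketched briefly. What follows is my own shorthand paraphrase of the pair quantification described in [5], so treat the notation as illustrative rather than as the authors' own:

```latex
% An interval between events is quantified by the pair (\Delta p, \Delta q)
% of projections onto two coordinated observer chains.  Defining the
% symmetric and antisymmetric combinations
\Delta t = \frac{\Delta p + \Delta q}{2}, \qquad
\Delta x = \frac{\Delta p - \Delta q}{2},
% the product of the pair is the invariant scalar
\Delta p \, \Delta q = \Delta t^2 - \Delta x^2,
% i.e. the (1+1)-dimensional Minkowski interval emerges from the
% quantification rather than being assumed.
```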
More recently, we (Walsh & Knuth) have shown that when an object is influenced, it behaves as if reacting to a force [8] in such a way that reproduces the relativistic version of Newton’s second law. Moreover, the receipt of influence results in time dilation, which suggests a pathway to gravity. The concept of influence is also consistent with quantum mechanics. One can also consider situations in which embedded observers make inferences about a particle’s behavior. We have demonstrated, in the idealized case of a free particle (which influences others, but is not itself influenced), that the situation is identical to the Feynman checkerboard problem for the electron, which is known to give rise to the Dirac equation [6, 7]. In addition, other characteristics of fermion physics, such as Zitterbewegung [9] and helicity (in 1+1 dimensions), emerge naturally from the network of influence events.

In this extended talk, Kevin Knuth will introduce the concept of consistent quantification and demonstrate how constraint equations enforcing consistency play the role of what we usually consider to be physical laws. Newshaw Bahreyni will consider a partially ordered set of events where observers are represented as chains of events coupled to one another via influence events. She will demonstrate that for coordinated observers to agree with one another, consistent quantification requires that the poset be described with the familiar mathematics of spacetime physics (Minkowski metric and Lorentz transformations). James Walsh will consider the effect of influence on an object and demonstrate that consistent quantification results in the traditional concept of force. Knuth will conclude by discussing how inferences made about an object’s influence events result in the Dirac equation and fermion physics.

[1] Knuth K.H. 2003. Deriving laws from ordering relations. G.J. Erickson, Y. Zhai (eds.), MaxEnt 2003, Jackson Hole WY, AIP Conference Proceedings 707, AIP, Melville NY, pp. 204-235. arXiv:physics/0403031 [physics.data-an]
[2] Knuth K.H., Skilling J. 2012. Foundations of Inference. Axioms 1:38-73. arXiv:1008.4831 [math.PR]
[3] Goyal P., Knuth K.H., Skilling J. 2010. Origin of complex quantum amplitudes and Feynman’s rules. Phys. Rev. A 81, 022109.
[4] Goyal P., Knuth K.H. 2011. Quantum theory and probability theory: their relationship and origin in symmetry. Symmetry 3(2):171-206.
[5] Knuth K.H., Bahreyni N. 2014. A potential foundation for emergent space-time. Journal of Mathematical Physics, 55, 112501. doi:10.1063/1.4899081, arXiv:1209.0881 [math-ph]
[6] Knuth K.H. 2013. Information-Based Physics and the Influence Network. FQXi 2013 Essay Contest Entry (Third Place): “It from Bit or Bit from It?”. arXiv:1308.3337 [quant-ph]
[7] Knuth K.H. 2014. Information-based physics: an observer-centric foundation. Contemporary Physics, 55(1), 12-32 (Invited Submission). arXiv:1310.1667 [quant-ph]
[8] Walsh J.L., Knuth K.H. 2014. Information-Based Physics, Influence and Forces. Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2014), Amboise, France, AIP Conference Proceedings 1641, AIP, Melville NY, pp. 538-547. arXiv:1411.2163 [quant-ph]
[9] Knuth K.H. 2014. The problem of motion: the statistical mechanics of Zitterbewegung. Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2014), Amboise, France, AIP Conference Proceedings 1641, AIP, Melville NY, pp. 588-594. arXiv:1411.1854 [quant-ph]
Jets and other planar perturbations of parallel basic flows

Laminar two-dimensional planar flow of a viscous incompressible fluid is considered in a half-plane bounded by a straight wall. The basic flow is parallel to the wall with constant velocity in Case I, and a shear flow in Case II. The dispersion of a jet is analyzed by use of boundary layer assumptions and eigenfunction expansions in both Cases I and II. Provided an initial perturbation is given in a cross section normal to the wall, the decay of the momentum of the resulting perturbation downstream can be studied by use of eigenfunction expansions. Special eigenfunctions are employed to construct bounds for the solution of the total nonlinear problem.

Ingenieur Archiv

Keywords: Jet Flow; Laminar Flow; Parallel Flow; Two Dimensional Flow; Base Flow; Eigenvalues; Half Planes; Incompressible Fluids; Viscous Fluids; Fluid Mechanics and Heat Transfer
Multi-State Adaptive-Dynamic Process Monitoring

We create this package, mvMonitoring, from the foundation laid by Kazor et al (2016). This package is designed to make simulation of multi-state multivariate process monitoring statistics easy and straightforward, as well as to streamline the online process monitoring component.

Installation from CRAN

Install the stable version of this package from CRAN.

Installation of Development Version

Make sure you have the latest version of the devtools package, and pull the package from GitHub. Load the library after installation.

These are the examples shown in the help files for the mspProcessData(), mspTrain(), mspMonitor(), and mspWarning() functions.

# Generate one week's worth of normal operating (NOC) data recorded at the one-
# minute level
nrml <- mspProcessData(faults = "NOC")
# The state values are recorded in the first column.
n <- nrow(nrml)

# Calculate the training summary, but save five observations for monitoring.
# This function will treat the first 3 days as in control (IC), and then update
# the training window each day.
trainResults_ls <- mspTrain(
  data = nrml[1:(n - 5), -1],
  labelVector = nrml[1:(n - 5), 1],
  trainObs = 4320
)

# While training, we included 1 lag (the default), so we will also lag the
# observations we will test.
testObs <- nrml[(n - 6):n, -1]
testObs <- xts:::lag.xts(testObs, 0:1)
testObs <- testObs[-1, ]
testObs <- cbind(nrml[(n - 5):n, 1], testObs)

# Run the monitoring function.
dataAndFlags <- mspMonitor(
  observations = testObs[, -1],
  labelVector = testObs[, 1],
  trainingSummary = trainResults_ls$TrainingSpecs
)

# Alarm check the last row of the matrix returned by the mspMonitor function

Paper Graphics

The R code to build and save the simulation graphics from the paper is in the inst/mspGraphsGrid.R file.
This work is supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research (OSR) under Award No: OSR-2015-CRG4-2582 and by the National Science Foundation PFI:BIC Award No: 1632227.
Home Loan EMI Calculation Formula

To calculate your EMI, just enter the loan amount, rate of interest and loan tenure, and your EMI is instantly displayed.

The formula used for EMI calculation is: EMI = [P x R x (1+R)^N] / [(1+R)^N - 1], where EMI is the Equated Monthly Instalment, P is the principal loan amount borrowed, R is the monthly interest rate (the annual rate divided by 12), and N is the loan tenure expressed as the total number of monthly instalments.

An Equated Monthly Instalment comprises two components, namely the principal amount and the interest payable on the outstanding loan amount.

How to use a Home Loan EMI calculator: the calculator takes three inputs, namely the loan amount, the tenure and the interest rate. Select or add your desired loan amount, pick your preferred repayment tenor, and choose the rate of interest; the tool will then calculate the tentative EMI payable within a few seconds.

A housing loan EMI calculator serves as a versatile tool that aids prospective homebuyers in effective financial planning: it provides instant and accurate estimates of the monthly instalment and the total payable interest for a given loan amount, interest rate and tenure.

Home loan EMI calculation using Excel: open Excel, select a cell to display the EMI, and use the formula "=PMT(interest rate/12, tenure in months, loan amount)".

Home loans typically have large loan amounts with lengthy tenures, which require sound financial planning to ensure timely repayment. You can reduce the EMI by choosing a longer repayment tenure.

SBI Home Loan EMI calculator is a basic calculator that helps you to calculate the EMI, monthly interest and monthly reducing balance.
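The formula is easy to verify in code. The sketch below is my own illustration (the function name and sample figures are not taken from any particular bank's calculator):

```python
def emi(principal, annual_rate_pct, months):
    """Equated Monthly Instalment: EMI = P*R*(1+R)^N / ((1+R)^N - 1),
    where R is the *monthly* rate, i.e. annual rate / 12 / 100."""
    r = annual_rate_pct / 12 / 100
    if r == 0:                      # zero-interest edge case
        return principal / months
    factor = (1 + r) ** months
    return principal * r * factor / (factor - 1)

# Example: 10,00,000 borrowed at 12% p.a. for 10 years (120 months)
payment = emi(1_000_000, 12, 120)
print(round(payment, 2))
```

With these inputs the monthly instalment works out to roughly 14,347, in line with standard EMI tables for a 10 lakh loan at 12% over 10 years.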
Abdelilah Kandri-Rody

According to our database, Abdelilah Kandri-Rody authored at least 6 papers between 1984 and 2001.

On csauthors.net:

Unmixed-dimensional Decomposition of a Finitely Generated Perfect Differential Ideal. J. Symb. Comput., 2001
Triviality and Dimension of a System of Algebraic Differential Equations. J. Autom. Reason., 1998
Non-Commutative Gröbner Bases in Algebras of Solvable Type. J. Symb. Comput., 1990
Computing a Gröbner Basis of a Polynomial Ideal over a Euclidean Domain. J. Symb. Comput., 1988
An Ideal-Theoretic Approach to Word Problems and Unification Problems over Finitely Presented Commutative Algebras. Proceedings of the Rewriting Techniques and Applications, First International Conference, 1985
Algorithms for Computing Groebner Bases of Polynomial Ideals over Various Euclidean Rings. Proceedings of the EUROSAM 84, 1984
Digital Byzantinist

What follows is the English version of an article I was asked to write for Unipress, the magazine of the University of Bern, for a themed issue on “digital realities”. It appears in a German version in this month’s issue.

Computers regulate our lives. We increasingly keep our personal and cultural memories in “the cloud”, on services such as Facebook and Instagram. We rely on algorithms to tell us what we might like to buy, or see. Even our cars can now detect when their drivers are getting tired, and make the inviting suggestion that we pull off the road for a cup of coffee.

But there is a great irony that lies at the heart of the digital age, which is this: the invention of the computer was the final nail in the coffin of the great dream of Leibniz, the seventeenth-century polymath. The development of scientific principles that took place in his era of the Enlightenment gave rise to a widely shared belief that humans were a sort of extraordinarily complex biological mechanism, a rational machine. Leibniz himself had a firm belief that it should be possible to come up with a symbolic logic for all human thought and a calculus to manipulate it. He envisioned that the princes and judges of his future would be able to use this “universal characteristic” to calculate the true and just answer to any question that presented itself, whether in scientific discourse or in a dispute between neighbors. Moreover, he firmly believed that there was nothing that would be impossible to calculate – no Unknowable.

Over the course of the next 250 years, a language for symbolic logic, known today as Boolean logic, was developed and proven to be complete. Leibniz’ question was refined: is there a way, in fact, to prove anything at all? To answer any question? More precisely, given a set of starting premises and a proposed conclusion, is there a way to know whether the conclusion can be either reached or disproven from those premises?
This challenge, posed by David Hilbert in 1928, became known as the Entscheidungsproblem. After Kurt Gödel demonstrated the impossibility of answering the question in 1930, Alan Turing in 1936 proved the positive existence of insoluble problems. He did this by imagining a sort of conceptual machine, with which both a set of mathematical operations and its starting input could be encoded, and he showed that there exist combinations of operation and input that would cause the machine to run forever, never finding a solution. Conceptually, this is a computer!

Turing’s thought experiment was meant to prove the existence of unsolvable problems, but he was so taken with the problems that could be solved with his “machine” that he wanted to build a real one. Opportunities presented themselves during and immediately after WWII for Turing to build his machines, specifically Enigma decryption engines, and computers were rapidly developed in the post-war environment even as Turing’s role in conceiving of them was forgotten for decades. And although Turing had definitively proved the existence of the Unknowable, he remained convinced until the end of his life that a Turing machine should be able to solve any problem that a human can solve – that it should be possible to build a machine complex enough to replicate all the functions of the human brain.

Another way to state Turing’s dilemma is: he proved that there exist unsolvable problems. But does human reasoning and intuition have the capacity to solve problems that a machine cannot? Turing did not believe so, and he spent the rest of his life pursuing, in one way or another, a calculating machine complex enough to rival the human brain. And this leads us straight to the root of the insecurity, hostility even, that finds its expression in most developed cultures toward automata and computers in particular.
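Turing's conceptual machine is simple enough to sketch in a few lines of code. The following is my own illustration, not drawn from the essay or its sources: a machine is just a transition table plus a tape, and the only defense against a machine that never halts is a step budget, which is precisely the point of the halting problem.

```python
def run_turing_machine(rules, tape, state="A", steps=1000):
    """Run a one-tape Turing machine.

    rules maps (state, symbol) -> (write_symbol, move, next_state),
    where move is -1 (left) or +1 (right).  Returns the final tape as a
    string, or None if the machine has not halted within `steps` steps.
    """
    tape = dict(enumerate(tape))  # sparse tape; unwritten cells are blank "_"
    pos = 0
    for _ in range(steps):
        if state == "HALT":
            return "".join(tape[i] for i in sorted(tape)).strip("_")
        symbol = tape.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += move
    return None  # did not halt: we cannot decide in general, only give up

# A machine that appends one mark to a unary number, then halts:
increment = {
    ("A", "1"): ("1", +1, "A"),     # scan right over the marks
    ("A", "_"): ("1", +1, "HALT"),  # write one more mark, then halt
}
print(run_turing_machine(increment, "111"))  # "1111"

# Change the halting rule to keep scanning, and the machine runs forever:
looper = {("A", "1"): ("1", +1, "A"), ("A", "_"): ("_", -1, "A")}
print(run_turing_machine(looper, "111"))  # None: gave up after 1000 steps
```

The second machine oscillates endlessly between the last mark and the blank beside it; no amount of patience, only analysis, can distinguish "still running" from "never halting".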
If all human thought can be expressed via symbolic logic, does it mean that humans have no special purpose beyond computers? Into this minefield comes the discipline known today as the Digital Humanities. The early pioneers of the field, known until the early 2000s as “Humanities Computing”, were not too concerned with the question – computers were useful calculating devices, but they themselves remained firmly in charge of interpretation of the results. But as the technology that the field’s practitioners used developed against a cultural background of increasingly pervasive technological transformation, a cultural clash between the “makers” and the “critics” within Humanities Computing was inevitable. Digital Humanities is, more than usually for the academic disciplines of the humanities, concerned with making things. This is “practice-based” research – the scholar comes up with an idea, writes some code to implement it, decides whether it “works”, and draws conclusions from that. And so Digital Humanities has something of a hacker culture – playful, even arrogant or hubristic sometimes – trying things just to see what happens, building systems to find out whether they work. This is the very opposite of theoretical critique, which has been the cornerstone of what many scholars within the humanities perceive as their specialty. Some of these critics perceive the “hacking” as necessarily being a flight from theory – if Digital Humanities practitioners are making programs, they are not critiquing or theorizing, and their work is thus flawed. Yet these critics tend to underestimate the critical or theoretical sophistication of those who do computing. Most Digital Humanities scholars are very well aware of the limitations of what we do, and of the fact that our science is provisional and contingent. 
Nevertheless, we are often guilty of failing to communicate these contingencies when we announce our results, and our critics are just as often guilty of a certain deafness when we do mention them. A good example of how this dynamic functions can be seen with ‘distant reading’. A scholar named Franco Moretti pointed out in the early 2000s that the literary “canon” – those works that come to mind when you think of, for example, nineteenth-century German literature – is actually very small. It consists of the things you read in school, those works that survived the test of time to be used and quoted and reshaped in later eras. But that’s a very small subset of the works of German literature that was produced in the 19th century! Our “canon” is by its very nature unrepresentative. But it has to be, since a human cannot possibly read everything that was published in 100 years.

So can there be such a thing as reading books on a large scale, with computers? Moretti and others have tried this, and it is called distant reading. Rather than personally absorbing all these works, he has it all digitized and on hand so that comparative statistical analysis can be done, patterns in the canon can be sought against the entire background against which it was written.
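The "machine analysis first, interpretation last" workflow behind distant reading can be illustrated with a deliberately tiny toy example of my own; the three one-line "novels" below are invented, where a real corpus would hold thousands of digitized texts:

```python
from collections import Counter

# A miniature "corpus": stand-ins for a shelf of digitized novels.
corpus = {
    "novel_a": "the sea was grey and the sea was cold",
    "novel_b": "the city was loud and the trains were full",
    "novel_c": "the sea wind carried the smell of salt",
}

# Machine analysis first: aggregate word counts over the whole corpus...
counts = Counter(word for text in corpus.values() for word in text.split())

# ...then surface a pattern for the human reader, e.g. which content
# words dominate once obvious function words are set aside.
stopwords = {"the", "was", "and", "of", "were"}
pattern = [(w, c) for w, c in counts.most_common() if w not in stopwords]
print(pattern[:3])  # "sea" stands out across the corpus
```

The machine only surfaces the pattern; deciding what the prominence of a word across a whole corpus means remains, as the essay says, the human's job.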
The ultimate question that our field faces is this: can we make out of this hybrid, out of this interaction between the digital and the interpretative, something wholly new? The next great frontier in Digital Humanities will be exactly this: whether we can extend our computational models to handle the ambiguity, uncertainty, and interpretation that is ubiquitous in the humanities. Up to now everything in computer science has been based on 1 and 0, true and false. These two extraordinarily simple building blocks have enabled us to create machines and algorithms of stunning sophistication, but there is no “maybe” in binary logic, no “I believe”. Computer scientists and humanists are working together to try to bridge that gap; to succeed will produce true thinking machines. A round table un-discussion, part 1 In September I had the privilege of sitting on a round table at the Digital Humanities Autumn School at the University of Trier, along with Manfred Thaller, Richard Coyne, Claudine Moulin, Andreas Fickers, Susan Schreibman, and Fabio Ciotti. Per standard procedure we were given a list of discussion questions to think about in advance; as with any such discussion panel, especially when the audience weighs in, the conversation did not follow the prescribed plan. Lively and worthwhile as the round table was, it would still be a pity for my preparation to go entirely to waste. Richard published his own reactions to the questions beforehand; I’ll join him by presenting mine here, in retrospect. This turns out to be a rather longer piece than I initially envisioned, and so here I present the first in what will be a five-part series. 1. The field of digital humanities is characterized by a great variety of approaches, methods and disciplinary traditions. Do you see this openness and dynamic as a chance or even necessary condition of the field or do you think that this endangers its institutionalization in an academic environment which is still dominated by disciplines? 
Does it make sense to make distinctions between specific disciplinary approaches in DH, for example digital history, digital linguistics, or digital geography? If we leave the word “digital” out of the first sentence, it holds perfectly true – the humanities are heterogeneous. Thus I don’t think it is really any surprise that the digital humanities would be the same. Computational methods have developed fairly independently in the different disciplines, which does make specific disciplinary approaches rather inevitable – there are digital methods used primarily by linguists, that have been developed and shaped by the sorts of questions linguists ask themselves. These questions are very different from the sorts of questions that art historians ask themselves, and so art history will have a different set of tools and different methods for their application. And so we might perhaps better ask: does it makes sense to not make distinctions between specific disciplinary approaches in DH? That said, the answer to this question is not so clear-cut as all that. In practice, very many people inside and outside the field think first of methods for text processing – encoding, tagging and markup – when they hear the words “digital humanities”. And so as a community, digital humanities practitioners tend to be extremely (though not exclusively, of course) text-oriented. Some of these conceive of the elements of DH pedagogy in terms of a specific set of tools; these usually include the XML infrastructure and TEI encoding for different text-oriented problem domains, as well as analysis of large (usually plain-text) corpora, which can include a combination of natural-language processing tools and statistical text-mining tools. 
That said, while text may be the Alpha of DH it is no longer the Omega – in the last five to ten years the archaeological and historical sciences have brought methods and techniques for mapping, timeline representation, and network analysis firmly onto the scene. One nevertheless retains the impression of Digital Humanities as a grab bag of skills and techniques to be taught to a group of master’s students, knowing that the students will perhaps apply 20-40% of the learned skills to their own independent work, but that the 20-40% will be different for each student. So then, can digital humanities really be called a field or a discipline? It’s such a good question that it comes up again and again, and many people have attempted answers. The answer that has perhaps found the most consensus goes something like this: digital humanities is about method, and specifically about how we bring our humanistic enquiry to (or indeed into) the domain of computationally-tractable algorithms and data. That question of modeling would seem to be the common thread that unites the digital work going on in the different branches of the humanities, and it brings up in its turn questions of epistemology, sociology of academia, and science and technology studies. What bothers me about this answer is that it gives us two choices, neither of which are entirely satisfactory: DH is either an auxiliary science (Hilfswissenschaft, if you speak German) or a meta-field whose object of study is the phenomenon of humanities research. The former is difficult to justify as an independent academic discipline with degree programs; the latter is much easier to justify, but appeals to something like 1% of those who consider themselves DH practitioners. I haven’t come up with an answer that I deem satisfactory, that ties a majority of the practitioners to a coherent set of research agendas. 
In that case, a reader might reasonably ask, what is it that I am trying to accomplish, that fits under the “Digital Humanities” rubric? To answer that question, I have to say a little about who I am. I am extremely fortunate to be of the generation best-placed to really understand computers and what they can do: young enough that personal computers were a feature of my childhood, but old enough to be there for the evolution of these computers from rather simplistic and ‘dumb’ systems to extremely sophisticated ones, and to remember what it was like to use a computer before operating system developers made any effort to hide or elide what goes on “under the hood”. This means that I have a fluent sense of what computers can be made to do, and how those things are accomplished, that I have been able to gain gradually over thirty years. In comparison, I began post-graduate study of the humanities twelve years ago. So my work in the digital humanities so far has been a process of seeing how much of my own humanistic enquiries, and the evidence I have gathered in their pursuit, can be supported, eased, and communicated with the computer. It meant computer-assisted collation of texts, when I began to work on a critical edition. It has meant a source-code implementation of what I believe a stemma actually is, and how variation in a text is treated by philologists, as I have come to work on medieval stemmatology. It has recently begun to mean a graph-based computational model of the sorts of information that a historical text is likely to contain, and how those pieces of information relate to each other. And so on. Nowhere in this am I especially concerned with encoding, standards, or data formats, although from time to time I employ all three to get my work done.
Rather, I rely on the computer to capture my hypotheses and my insights, and so I find myself needing to express these hypotheses and insights in as rigorous and queryable a way as possible, so as not to simply lose track of them. Critics might say (indeed, have said) that my idea of “digital humanities” is a glorified note-taking system; those critics may as well call Facebook a glorified family newsletter. Rather (and for all the sentiment that DH is first and foremost about collaboration) the computer allows an individual researcher like myself to track, and ingest, and retain, and make sense of, and feel secure in the knowledge that I will not forget, far more information than I could ever deal with alone. Almost as a side effect, it allows me in the long run to present not just the polished rhetoric appearing in some journal or monograph that is the usual output of a scholar in the humanities, but also a full accounting of the assumptions and inferences that produced the rhetoric. That, for me, is what digital humanities is about.

Tools for digital philology: Transcription

In the last few months I’ve had the opportunity to revisit all of the decisions I made in 2007 and 2008 about how I transcribe my manuscripts. In this post I’ll talk about why I make full transcriptions in the first place, the system I devised six years ago, my migration to T-PEN, and a Python tool I’ve written to convert my T-PEN data into usable TEI XML.

Transcription vs. collation

When I make a critical edition of a text, I start with a full transcription of the manuscripts that I’m working with, fairly secure in the knowledge that I’ll be able to get the computer to do 99% of the work of collating them. There are plenty of textual scholars out there who will regard me as crazy for this. Transcribing a manuscript is a lot of work, after all, and wouldn’t it just be faster to do the collation myself in the first place?
But my instinctive answer has always been no, and I’ll begin this post by trying to explain why. When I transcribe my manuscripts, I’m working with a plain-text copy of the text that was made via OCR of the most recent (in this case, 117-year-old) printed edition. So in a sense the transcription I do is itself a collation against that edition text – I make a file copy of the text and begin to follow along it with reference to the printed edition, and wherever the manuscript varies, I make a change in the file copy. At the same time, I can add notation for where line breaks, page divisions, scribal corrections, chapter or section markings, catchwords, colored ink, etc. all occur in the manuscript. By the end of this process, which is in principle no different from what I would be doing if I were constructing a manual collation, I have a reasonably faithful transcription of the manuscript I started with. But there are two things about this process that make it, in my view, simpler and faster than constructing that collation. The first is the act I’m performing on the computer, and the second is the number of simultaneous comparisons and decisions I have to make at each point in the process. When I transcribe I’m correcting a single text copy, typing in my changes and moving on, in a lines-and-paragraphs format that is pretty similar to the text I’m looking at. The physical process is fairly similar to copy-editing. If I were collating, I would be working – most probably – in a spreadsheet program, trying to follow the base text word-by-word in a single column and the manuscript in its paragraphs, which are two very different shapes for text. 
Wherever the text diverged, I would first have to make a decision about whether to record it (that costs mental energy), then have to locate the correct cell to record the difference (that costs both mental energy and time spent switching from keyboard to mouse entry), and then decide exactly how to record the change in the appropriate cell (switching back from mouse to keyboard), thinking also about how it coordinates with any parallel variants in manuscripts already collated. Quite frankly, when I think about doing work like that I not only get a headache, but my tendinitis-prone hands also start aching.

Making a transcription

So for my own editorial work I am committed to the path of making transcriptions now and comparing them later. I was introduced to the TEI for this purpose many years ago, and conceptually it suits my transcription needs. XML, however, is not a great format for writing out by hand for anyone, and if I were to try, the transcription process would quickly become as slow and painful as I have just described the process of manual collation as being. As part of my Ph.D. work I solved this problem by creating a sort of markup pidgin, in which I used single-character symbols to represent the XML tags I wanted to use. The result was that, when I had a manuscript line like this one:

[image of the manuscript line]

whose plaintext transcription is this:

Եւ յայնժամ սուրբ հայրապետն պետրոս և իշխանքն ելին առ աշոտ. և

and whose XML might look something like this:

<lb/><hi rend="red">Ե</hi>ւ յայնժ<ex>ա</ex>մ ս<ex>ուր</ex>բ հ<ex>ա</ex>յր<ex>ա</ex>պ<ex>ե</ex>տն պետրոս և իշխ<ex>ա</ex>նքն ելին առ աշոտ. և

I typed this into my text editor as:

*(red)Ե*ւ յայնժ\ա\մ ս\ուր\բ հ\ա\յր\ա\պ\ե\տն պետրոս և իշխ\ա\նքն ելին առ աշոտ. և

and let a script do the work of turning that into full-fledged XML.
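That conversion is, at heart, a handful of regular-expression substitutions. Here is a minimal Python sketch of the idea — my own reconstruction for illustration, not the original script — handling only the two symbols that appear in the example above (\…\ for abbreviation expansions, *(red)…* for red ink):

```python
import re

# Each lightweight pidgin symbol pair maps to a TEI tag. Only the two
# mappings from the example above are included; the real system had many more.
SUBSTITUTIONS = [
    (r'\*\(red\)(.*?)\*', r'<hi rend="red">\1</hi>'),  # red initials
    (r'\\(.+?)\\', r'<ex>\1</ex>'),  # expanded scribal abbreviations
]

def pidgin_to_xml(line):
    """Expand single-character transcription markup into TEI-style tags."""
    for pattern, replacement in SUBSTITUTIONS:
        line = re.sub(pattern, replacement, line)
    return '<lb/>' + line  # each transcribed line begins with a line break

print(pidgin_to_xml('*(red)Ե*ւ յայնժ\\ա\\մ ս\\ուր\\բ'))
```

Run on the example line, this produces `<lb/><hi rend="red">Ե</hi>ւ յայնժ<ex>ա</ex>մ ս<ex>ուր</ex>բ` — the same shape as the XML shown above, though a real implementation would also need to balance tags across line boundaries.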
The system was effective, and had the advantage that the text was rather easier to compare with the manuscript image than full XML would be, but it was not particularly user-friendly – I had to have all my symbols and their tag mappings memorized, I had to make sure that my symbols were well-balanced, and I often ran into situations (e.g. any tag that spanned more than one line) where my script was not quite able to produce the right result. Still, it worked well enough, I know at least one person who was actually willing to use it for her own work, and I even wrote an online tool to do the conversion and highlight any probable errors that could be detected.

My current solution

Last October I was at a collation workshop in Münster, where I saw a presentation by Alison Walker about T-PEN, an online tool for manuscript transcription. Now I’ve known about T-PEN since 2010, and had done a tiny bit of experimental work with it when it was released, but had not really thought much about it since. During that meeting I fired up T-PEN for the first time in years, really, and started working on some manuscript transcription, and actually it was kind of fun! What T-PEN does is to take the manuscript images you have, find the individual lines of text, and then let you do the transcription line-by-line directly into the browser. The interface looks like this:

[screenshot of the T-PEN transcription interface]

which makes it just about the ideal transcription environment from a user-interface perspective. You would have to try very hard to inadvertently skip a line; your eyes don’t have to travel very far to get between the manuscript image and the text rendition; when it’s finished, you have not only the text but also the information you need to link the text to the image for later presentation. The line recognition is not perfect, in my experience, but it is often pretty good, and the user is free to correct the results.
It is pretty important to have good images to work with – cropped to include only the pages themselves, rotated and perhaps de-skewed so that the lines are straight, and with good contrast. I have had the good fortune this term to have an intern, and we have been using ImageMagick to do the manuscript image preparation as efficiently as we can. It may be possible to do this fully automatically – I think that OCR software like FineReader has similar functionality – but so far I have not looked seriously into the possibility. T-PEN does not actively support TEI markup, or any other sort of markup. What it does offer is the ability to define buttons (accessible by clicking the ‘XML Tags’ button underneath the transcription box) that will apply a certain tag to any portion of text you choose. I have defined the TEI tags I use most frequently in my transcriptions, and using them is fairly straightforward.

Getting data back out

There are a few listed options for exporting a transcription done in T-PEN. I found that none of them were quite satisfactory for my purpose, which was to turn the transcription I’d made automatically into TEI XML, so that I can do other things with it. One of the developers on the project, Patrick Cuba, who has been very helpful in answering all the queries I’ve had so far, pointed out to me the (so far undocumented) possibility of downloading the raw transcription data – stored on their system using the Shared Canvas standard – in JSON format. Once I had that it was the work of a few hours to write a Python module that will convert the JSON transcription data into valid TEI XML, and will also tokenize valid TEI XML for use with a collation tool such as CollateX. The tpen2tei module isn’t quite in a state where I’m willing to release it to PyPI. For starters, most of the tests are still stubs; also, I suspect that I should be using an event-based parser for the word tokenization, rather than the DOM parser I’m using now.
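To give a flavour of what the JSON-to-TEI step involves, here is a toy sketch. The input shape below is invented and much simplified — the real Shared Canvas JSON, and the real tpen2tei module, are considerably richer:

```python
import json
import xml.etree.ElementTree as ET

def lines_to_tei(raw_json):
    """Turn a (hypothetical, simplified) list of transcription lines,
    e.g. [{"page": "1r", "text": "..."}], into a TEI-style body with
    <pb/> and <lb/> milestones marking page and line beginnings."""
    body = ET.Element('body')
    p = ET.SubElement(body, 'p')
    current_page = None
    for line in json.loads(raw_json):
        if line['page'] != current_page:       # new page: emit a page break
            ET.SubElement(p, 'pb', n=line['page'])
            current_page = line['page']
        lb = ET.SubElement(p, 'lb')            # line break before each line
        lb.tail = line['text']                 # the transcribed text follows it
    return ET.tostring(body, encoding='unicode')

sample = '[{"page": "1r", "text": "first line"}, {"page": "1r", "text": "second line"}]'
print(lines_to_tei(sample))
```

The essential move is the same as in tpen2tei: the transcription arrives line by line, and the converter's job is to thread those lines into a single text stream with milestone elements recording where the physical breaks fall.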
Still, it’s on Github and there for the using, so if it is the sort of tool you think you might need, go wild. There are a few things that T-PEN does not currently do, that I wish it did. The first is quite straightforward: on the website it is possible to enter some metadata about the manuscript being transcribed (library information, year of production, etc.), but this metadata doesn’t make it back into the Shared Canvas JSON. It would be nice if I had a way to get all the information about my manuscript in one place. The second is also reasonably simple: I would like to be able to define an XML button that is a milestone element. Currently the interface assumes that XML elements will have some text inside them, so the button will insert a <tag> and a </tag> but never a <tag/>. This isn’t hard to patch up manually – I just close the tag myself – but from a usability perspective it would be really handy. The third has to do with resource limits currently imposed by T-PEN: although there doesn’t seem to be a limit to the number of manuscripts you upload, each manuscript can contain only up to 200MB of image files. If your manuscript is bigger, you will have to split it into multiple projects and combine the transcriptions after the fact. Relatedly, you cannot add new images to an existing manuscript, even if you’re under the 200MB limit. I’m told that an upcoming version of T-PEN will address at least this second issue. The other two things I miss in T-PEN have to do with the linking between page area and text flow, and aren’t quite so simple to solve. Occasionally a manuscript has a block of text written in the margin; sometimes the block is written sideways. There is currently no good mechanism for dealing with blocks of text with weird orientations; the interface assumes that all zones should be interpreted right-side-up. 
Relatedly, T-PEN makes the assumption (when it is called upon to make any assumption at all) that text blocks should be interpreted from top left to bottom right. It would be nice to have a way to define a default – perhaps I’m transcribing a Syriac manuscript? – and to specify a text flow in a situation that doesn’t match the default. (Of course, there are also situations where it isn’t really logical or correct to interpret the text as a single sequence! That is part of what makes the problem interesting.) If someone who is starting an edition project today asks me for advice on transcription, I would have little reservation in pointing them to T-PEN. The only exception I would make is for anyone working on a genetic or documentary edition of authors’ drafts or the like. The T-PEN interface does assume that the documents being transcribed are relatively clean manuscripts without a lot of editorial scribbling. Apart from that caveat, though, it is really the best tool for the task that I have seen. It has a great user interface for the task, it is an open source tool, its developers have been unfailingly helpful, and it provides a way to get out just about all of the data you put into it. In order to turn that data into XML, you may have to learn a little Python first, but I hope that the module I have written will give someone else a head start on that front too!

Coming back to proper (digital) philology

For the last three or four months I have been engaging in proper critical text edition, of the sort that I haven’t done since I finished my Ph.D. thesis. Transcribing manuscripts, getting a collation, examining the collation to derive a critical text, and all. I haven’t had so much fun in ages. The text in question is the same one that I worked on for the Ph.D. – the Chronicle of Matthew of Edessa.
I have always intended to get back to it, but the realities of modern academic life simply don’t allow a green post-doc the leisure to spend several more years on a project just because it was too big for a Ph.D. thesis in the first place. Of course I didn’t abandon textual scholarship entirely – I transferred a lot of my thinking about how text traditions can be structured and modelled and analyzed to the work that became my actual post-doctoral project. But Matthew of Edessa had to be shelved throughout much of this, since I was being paid to do other things. Even so, in the intervening time I have been pressed into service repeatedly as a sort of digital-edition advice columnist. I’m by no means the only person ever to have edited text using computational tools, and it took me a couple of years after my own forays into text edition to put it online in any form, but all the work I’ve done since 2007 on textual-criticism-related things has given me a reasonably good sense of what can be done digitally in theory and in practice, for someone who has a certain amount of computer skill as well as for someone who remains a bit intimidated by these ornery machines. Since the beginning of this year, I’ve had two reasons to finally take good old Matthew off the shelf and get back to what will be the long, slow work of producing an edition. The first is a rash commitment I made to contribute to a Festschrift in Armenian studies. I thought it might be nice to provide an edited version of the famous (if you’re a Byzantinist) letter purportedly written by the emperor Ioannes Tzimiskes to the Armenian king Ashot Bagratuni in the early 970s, preserved in Matthew’s Chronicle. 
The second is even better: I’ve been awarded a grant from the Swiss National Science Foundation to spend the next three years leading a small team not only to finish the edition, but also to develop the libraries, tools, and data models (including, of course, integration of ones already developed by others!) necessary to express the edition as digitally, accessibly, and sustainably as I can possibly dream of doing, and to offer it as a model for other digital work on medieval texts within Switzerland and, hopefully, beyond. I have been waiting six years for this moment, and I am delighted that it’s finally arrived. The technology has moved on in those six years, though. When I worked on my Ph.D. I essentially wrote all my own tools to do the editing work, and there was very little focus on usability, generalizability, or sustainability. Now the landscape of digital tools for text critical edition is much more interesting, and one of my tasks has been to get to grips with all the things I can do now that I couldn’t practically do in 2007-9. Over the next few weeks, as I prepare the article that I promised, I will use this blog to provide something of an update to what I published over the years on the topic of “how to make a digital edition”. I’m not going to explore here every last possibility, but I am going to talk about what tools I use, how I choose to use them, and how (if at all) I have to modify or supplement them in order to do the thing I am trying to do. With any luck this will be helpful to others who are starting out now with their own critical editions, no matter their comfort with computers. I’ll try to provide a sense of what is easy, what has a good user interface, what is well-designed for data accessibility or sustainability. And of course I’d be very happy to have discussion from others who have walked similar roads, to say what has worked for them.

SOLVED! The mystery of the character encoding

Update, two hours later: we have a solution!
And it’s pretty disgusting. Read on below.

Two posts in a row about the deep technical guts of something I’m working on. Well I guess this is a digital humanities blog. Yesterday I got a wonderful present in my email – a MySQL dump of a database full of all sorts of historical goodness. The site that it powers displays snippets of relevant primary sources in their original language, including things like Arabic and Greek. Since the site has been around for rather longer than MySQL has had any Unicode support to speak of, it is not all that surprising that these snippets of text in their original language are rather badly mis-encoded. Not too much of a problem, I naïvely thought to myself. I’ll just fix the encoding knowing what it’s supposed to have been. A typical example looks like this. The Greek displayed on the site is:

μηνὶ Νοἐμβρίω εἰς τὰς κ ´ ινδικτιῶνος ε ´ ἔτους ,ς

but what I get from the database dump is:

μηνὶ Î Î¿á¼ Î¼Î²Ï á½·Ï‰ εἰς τὰς κ ´ ινδικτιῶνος ε ´ ἔτους ,Ï‚

Well, I recognise that kind of garbage, I thought to myself. It’s double-encoded UTF-8. So all I ought to need to do is to undo the spurious re-encoding and save the result. Right? Sadly, it’s not that easy, and here is where I hope I can get comments from some DB/encoding wizards out there because I would really like to understand what’s going on. It starts easily enough in this case – the first letter is μ. In Unicode, that is character 3BC (notated in hexadecimal.) When you convert this to UTF-8, you get two bytes: CE BC. Unicode character CE is indeed Î, and Unicode character BC is indeed ¼. As I suspected, each of these UTF-8 bytes that make up μ has been treated as a character in its own right, and further encoded to UTF-8, so that μ has become Î¼. That isn’t hard to undo. But then we get to that ω further down the line, which has become Ï‰. That is Unicode character 3C9, which in UTF-8 becomes CF 89.
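The first half of that transformation is easy enough to verify — a quick Python check of the μ example, using latin-1 as the single-byte reading that treats each UTF-8 byte as a character in its own right:

```python
mu = '\u03bc'                        # μ, Unicode character 3BC
utf8 = mu.encode('utf-8')            # the two bytes CE BC
mojibake = utf8.decode('latin-1')    # each byte read as its own character
print(mojibake)                      # Î¼

# and the repair, when it works, is just the reverse trip:
print(mojibake.encode('latin-1').decode('utf-8'))  # μ
```

If every byte round-tripped as cleanly as CE and BC do, undoing the double encoding really would be that simple.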
Unicode CF is the character Ï as we expect, but there is no such Unicode character 89. Now it is perfectly possible to render 89 as UTF-8 (it would become C2 89) but instead I’m getting a rather inexplicable character whose Unicode value is 2030 (UTF-8 E2 80 B0)! And here the system starts to break down – I cannot figure out what possible mathematical transformation has taken place to make 89 become 2030. There seems to be little mathematical pattern to the results I’m getting, either. From the bad characters in this sample:

ρ -> 3C1 -> CF 81 --> CF 81 (correct!!)
ς -> 3C2 -> CF 82 --> CF 201A
τ -> 3C4 -> CF 84 --> CF 201E
υ -> 3C5 -> CF 85 --> CF 2026
ω -> 3C9 -> CF 89 --> CF 2030

Ideas? Comments? Do you know MySQL like the back of your hand and have you spotted immediately what’s going on here? I’d love to crack this mystery.

After this post went live, someone observed to me that the ‘per mille’ sign, i.e. that double-percent thing at Unicode value 2030, has the value 89 in…Windows CP-1250! And, perhaps more relevantly, Windows CP-1252. (In character encodings just as in almost everything else, Windows always liked to have their own standards that are different from the ISO standards. Pre-Unicode, most Western European characters were represented in an eight-bit encoding called ISO Latin 1 everywhere except Windows*, where they used this CP-1252 instead. For Eastern Europe, it was ISO Latin 2 / CP-1250.) So what we have here is: MySQL is interpreting its character data as Unicode, and expressing it as UTF-8, as we requested. Only then it hits a Unicode value like 89 which is not actually a character at all. But instead of passing it through and letting us deal with it, MySQL says “hm, they must have meant the Latin 1 value here. Only when I say Latin 1 I really mean CP-1252. So I’ll just take this value (89 in our example), see that it is the ‘per mille’ sign in CP-1252, and substitute the correct Unicode for ‘per mille’.
That will make the user happy!” Hint: It really, really, doesn’t make the user happy.

So here is the Perl script that will take the garbage I got and turn it back into Greek. Maybe it will be useful to someone else someday!

#!/usr/bin/env perl

use strict;
use warnings;
use Encode;
use Encode::Byte;

while(<>) {
    my $line = decode_utf8( $_ );
    my @chr;
    foreach my $c ( map { ord( $_ ) } split( '', $line ) ) {
        if( $c > 255 ) {
            $c = ord( encode( 'cp1252', chr( $c ) ) );
        }
        push( @chr, $c );
    }
    my $newline = join( '', map { chr( $_ ) } @chr );
    print $newline;
}

[*] Also, as I realized after posting this, except Mac, which used MacRoman. Standards are great! Let’s all have our own!

How to have several Catalyst apps behind one Apache server

Since I’ve changed institutions this year, I am in the process of migrating Stemmaweb from its current home (on my family’s personal virtual server) to the academic cloud service being piloted by SWITCH. Along the way, I ran into a Perl Catalyst configuration issue that I thought would be useful to write about here, in case others run into a similar problem. I have several Catalyst applications – Stemmaweb, my edition-in-progress of Matthew of Edessa, and pretty much anything else I will develop with Perl in the future. I also have other things (e.g. this blog) on the Web, and being somewhat stuck in my ways, I still prefer Apache as a webserver. So basically I need a way to run all these standalone web applications behind Apache, with a suitable URL prefix to distinguish them. There is already a good guide to getting a single Catalyst application set up behind an Apache front end. The idea is that you start up the application as its own process, listening on a local network port, and then configure Apache to act as a proxy between the outside world and that application. My problem was, I want to have more than one application, and I want to reach each different application via its own URL prefix (e.g.
/stemmaweb, /ChronicleME, /ncritic, and so on.) The difficulty with a reverse proxy in that situation is this:

• I send my request to http://my.public.server/stemmaweb/
• It gets proxied to http://localhost:5000/ and returned
• But then all my images, JavaScript, CSS, etc. are at the root of localhost:5000 (the backend server) and so look like they’re at the root of my.public.server, instead of neatly within the stemmaweb/ directory!
• And so I get a lot of nasty 404 errors and a broken application.

What I need here is an extra plugin: Plack::Middleware::ReverseProxyPath. I install it (in this case with the excellent ‘cpanm’ tool):

$ cpanm -S Plack::Middleware::ReverseProxyPath

And then I edit my application’s PSGI file to look like this:

use strict;
use warnings;
use lib '/var/www/catalyst/stemmaweb/lib';
use stemmaweb;
use Plack::Builder;

builder {
    enable( "Plack::Middleware::ReverseProxyPath" );
    my $app = stemmaweb->apply_default_middlewares(stemmaweb->psgi_app);
    $app;
};

where /var/www/catalyst/stemmaweb is the directory that my application lives in. In order to make it all work, my Apache configuration needs a couple of extra lines too:

# Configuration for Catalyst proxy apps. This should eventually move
# to its own named virtual host.
RewriteEngine on
<Location /stemmaweb>
    RequestHeader set X-Forwarded-Script-Name /stemmaweb
    RequestHeader set X-Traversal-Path /
    ProxyPass http://localhost:5000/
    ProxyPassReverse http://localhost:5000/
</Location>
RewriteRule ^/stemmaweb$ stemmaweb/ [R]

The RequestHeaders inform the backend (Catalyst) that what we are calling “/stemmaweb” is the thing that it is calling “/”, and that it should translate its URLs accordingly when it sends us back the response.

The second thing I needed to address was how to start these things up automatically when the server turns on. The guide gives several useful configurations for starting a single service, but again, I want to make sure that all my Catalyst applications (and not just one of them) start up properly.
I am running Ubuntu, which uses Upstart to handle its services; to start all my applications I use a pair of scripts and the ‘instance’ keyword. The control script:

description "Starman master upstart control"
author "Tara L Andrews (tla@mit.edu)"

# Control all Starman jobs via this script
start on filesystem or runlevel [2345]
stop on runlevel [!2345]

# No daemon of our own, but here's how we start them
pre-start script
    port=5000
    for dir in `ls /var/www/catalyst`; do
        start starman-app APP=$dir PORT=$port || :
        port=$(($port + 1))
    done
end script

# and here's how we stop them
post-stop script
    for inst in `initctl list|grep "^starman-app "|awk '{print $2}'|tr -d ')'|tr -d '('`; do
        stop starman-app APP=$inst PORT= || :
    done
end script

The application script, which gets called by the control script for each application in /var/www/catalyst:

description "Starman upstart application instance"
author "Tara L Andrews (tla@mit.edu)"

respawn
respawn limit 10 5
setuid www-data
umask 022

instance $APP$PORT

exec /usr/local/bin/starman --listen localhost:$PORT /var/www/catalyst/$APP/$APP.psgi

There is one thing about this solution that is not so elegant, which is that each application has to start on its own port and I need to specify the correct port in the Apache configuration file. As it stands the ports will be assigned in sequence (5000, 5001, 5002, …) according to the way the application directory names sort with the ‘ls’ command (which roughly means, alphabetically.) So whenever I add a new application I will have to remember to adjust the port numbers in the Apache configuration. I would welcome a more elegant solution if anyone has one!

Enabling the science of history

One of the great ironies of my academic career was that, throughout my Ph.D. work on a digital critical edition of parts of the text of Matthew of Edessa’s Chronicle, I had only the vaguest inkling that anyone else was doing anything similar.
I had heard of Peter Robinson and his COLLATE program, of course, but when I met him in 2007 he only confirmed to me that the program was obsolete and, if I needed automatic text collation anytime soon, I had better write my own program. Through blind chance I was introduced to James Cummings around the same time, who told me of the existence of the TEI guidelines and suggested I use them. It was, in fact, James who finally gave me a push into the world of digital humanities. I was in the last panicked stages of writing up the thesis when he arranged an invitation for me to attend the first ‘bootcamp’ held by the Interedition project, whose subject was to be none other than text collation tools. By the time the meeting was held I was in that state of anxious bliss of having submitted my thesis and having nothing to do but wait for the viva, so I could bend all my hyperactive energy in that direction. Through Interedition I made some first-rate friends and colleagues with whom I have continued to work and hack to this day, and it was through that project that I met various people within KNAW (the Royal Dutch Academy of Science.) After I joined Interedition I very frequently found myself talking to its head, Joris van Zundert, about all manner of things in this wide world of digital humanities. At the time I knew pretty much nothing of the people within DH and its nascent institutional culture, and was moreover pretty ignorant of how much there was to know, so as often as not we ended up in some kind of debate or argument over the TEI, over the philosophy of science, over what constitutes worthwhile research. The main object of these debates was to work out who was holding what unstated assumption or piece of background context. One evening we found ourselves in a heated argument about the application of the scientific method to humanities research. 
I don’t remember quite how we got there, but Joris was insisting (more or less) that humanities research needed to be properly scientific, according to the scientific method, or else it was rubbish, nothing more than creative writing with a rhetorical flourish, and not worth anyone’s time or attention. Historians needed to demonstrate reproducibility, falsifiability, the whole works. I was having none of it–while I detest evidence-free assumption-laden excuses for historical argument as much as any scholar with a proper science-based education would, surely Joris and everyone else must understand that medieval history is neither reproducible nor falsifiable, and that the same goes for most other humanities research? What was I to do, write a Second Life simulation to re-create the fiscal crisis of the eleventh century, complete with replica historical personalities, and simulate the whole to see if the same consequences appeared? Ridiculous. But of course, I was missing the point entirely. What Joris was pushing me to do, in an admittedly confrontational way, was to make clear my underlying mental model for how history is done. When I did, it became really obvious to me how and where historical research ultimately stands to gain from digital methods. OK, that’s a big claim, so I had better elucidate this mental model of mine. It should be borne in mind that my experience is drawn almost entirely from Near Eastern medieval history, which is grossly under-documented and fairly starved of critical attention in comparison to its Western cousin, so if any of you historians of other places or eras have a wildly different perspective or model, I’d be very interested to hear about it! When we attempt a historical re-construction or create an argument, we begin with a mixture of evidence, report, and prior interpretation. The evidence can be material (mostly archaeological) or documentary, and we almost always wish we had roughly ten times as much of it as we actually do. 
The reports are usually those of contemporaneous historians, which are of course very valuable but must be examined in themselves for what they aren’t telling us, or what they are misrepresenting, as much as for what they positively tell us. The prior interpretation easily outweighs the evidence, and even the reports, for sheer volume, and it is this that constitutes the received wisdom of our field. So we can imagine a rhetorical structure of dependency that culminates in a historical argument, or a reconstruction. We marshal our evidence, we examine our reports, we make interpretations in the light of received wisdom and prior interpretations. In effect it is a huge and intricate connected structure of logical dependencies that we carry around in our head. If our argument goes unchallenged or even receives critical acceptance, this entire structure becomes a ‘black box’ of the sort described by Bruno Latour, labelled only with its main conclusion(s) and ready for inclusion in the dependency structure of future arguments. Now what if some of our scholarship, some of the received wisdom even, is wide of the mark? Pretty much any historian will relish the opportunity to demonstrate that “everything we thought we knew is wrong”, and in Near Eastern history in particular these opportunities come thick and fast. This is a fine thing in itself, but it poses a thornier problem. When the historian demonstrates that a particular assumption or argument doesn’t hold water–when the paper is published and digested and its revised conclusion accepted–how quickly, or slowly, will the knock-on effects of this new bit of insight make themselves clear? How long will it take for the implications to sort themselves out fully? 
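To make that picture concrete: if we imagine each published conclusion as a node depending on its evidence, reports, and prior interpretations, then finding the knock-on effects of one revised assumption is just a traversal of the dependency graph. The sketch below is entirely my own toy illustration of the idea (the function and variable names are mine, not drawn from any real system):

```python
from collections import deque

def affected_conclusions(depends_on, revised):
    """depends_on maps each claim to the set of claims it is argued from.
    Returns every claim whose chain of support includes a revised claim."""
    dependents = {}                    # reverse edges: premise -> claims built on it
    for claim, premises in depends_on.items():
        for p in premises:
            dependents.setdefault(p, set()).add(claim)
    affected, queue = set(), deque(revised)
    while queue:
        p = queue.popleft()
        for c in dependents.get(p, ()):
            if c not in affected:      # each conclusion is re-opened only once
                affected.add(c)
                queue.append(c)
    return affected
```

Overturn one "black box" and the traversal tells you exactly which downstream conclusions now need re-examination, which is precisely the bookkeeping that, done by hand, outstrips the capacity of the field.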
In practice, the weight of tradition and patterns of historical understanding for Byzantium and the Near East are so strong, and have gone for so long unchallenged, that we historians simply haven’t got the capacity to identify all the black boxes, to open them up and find the problematic components, to re-assess each of these conclusions with these components altered or removed. And this, I think, is the biggest practical obstacle to the work of historians being accepted as science rather than speculation or storytelling.

Well. Once I had been made to put all of this into words, it became clear what the most useful and significant contribution of digital technology to the study of history must eventually be. Big data and statistical analysis of the contents of documentary archives is all well and good, but what if we could capture our very arguments, our black boxes of historical understanding, and make them essentially searchable and available for re-analysis when some of the assumptions have changed? They would even be, dare I say it, reproducible and/or falsifiable. Even, perish the thought, scientific.

I was very pleased to find that Palgrave Macmillan makes its author self-archiving policies clear on their website, for books of collected papers as well as for journals. Unfortunately the policy is that the chapter is under embargo until 2015, so I can’t post it publicly until then, but if you are interested meanwhile and can’t track down a copy of the book then please get in touch!

J. J. van Zundert, S. Antonijevic, A. Beaulieu, K. van Dalen-Oskam, D. Zeldenrust, and T. L. Andrews, ‘Cultures of Formalization – Towards an encounter between humanities and computing’, in Understanding Digital Humanities, edited by D. Berry (London: Palgrave Macmillan, 2012), pp. 279-94.

Early-career encyclopedism

So there I was, a newly-minted Ph.D. enjoying my (all too brief) summer of freedom in 2009 from major academic responsibilities.
There must be some sort of scholarly pheromone signal that gets emitted in cases like these, some chemical signature that senior scholars are attuned to that reads ‘I am young and enthusiastic and am not currently crushed by the burden of a thousand obligations’. I was about to meet the Swarm of Encyclopedists. It started innocently enough, actually even before I had submitted, when Elizabeth Jeffreys (who had been my MPhil degree supervisor) offered me the authorship of an article on the Armenians to go into an encyclopedia that she was helping to edit. As it happened, this didn’t intrude again on my consciousness until the following year–I was duly signed up as author, but my email address was entered incorrectly in a database so I was blissfully ignorant of what exactly I had committed to until I began to get mysterious messages in 2010 from a project I hadn’t really even heard of, demanding to know where my contribution was. Lesson learned: you can almost always get a deadline extended in these large collaborative projects. After all, what alternatives do the editors have, really? The second lure came quite literally the evening following my DPhil defense, when Tim Greenwood (who had been my MPhil thesis supervisor) got in touch to tell me about a project on Christian-Muslim relations being run out of Birmingham by David Parker, and that I would seem to be the perfect person to write an entry on Matthew of Edessa and his Chronicle. Flush with victory and endorphins, of course I accepted within the hour. Technically speaking this was a ‘bibliographical history’ rather than an ‘encyclopedia’, but the approach to writing my piece was very similar, and it was more or less the ideal moment for me to summarize everything I knew about Matthew. For a little bit of doctoral R&R, academic style, I flew off a few days later to Los Angeles for the 2009 conference of the Society of Armenian Studies. 
There in the sunshine I must have been positively telegraphing my relaxation and lack of obligations, because Theo van Lint (who had only just ceased being my DPhil supervisor) brought up the subject of a number of encyclopedia articles on Armenian authors that he had promised and was simply not going to have a chance to do. By this time I was beginning to get a little surprised at the number of encyclopedia articles floating around in the academic ether looking for an authorly home, and I was not so naïve as to accept the unworkable deadline that he had, but subject to reasonability I said okay. He assured me that he would send me the details soon. Around that time, through one of the mailing lists to which I had subscribed in the last month or so of my D.Phil., I got wind of the Encyclopedia of the Medieval Chronicle (EMC). The general editor, Graeme Dunphy, was looking for contributors to take on some of the orphan articles in this project. Matthew of Edessa was on the list, and I was already writing something similar for the Christian-Muslim Relations project, so I wrote to volunteer. And then everything happened at once. Theo wrote to me with his list, which turned out to be for precisely this EMC project. The project manager at Brill, Ernest Suyver, who knew me from my work on another Brill project, wrote to me to ask if I would consider taking on several of the Armenian articles. Before I could answer either of these, Graeme wrote back to me, offering me not only the article on Matthew of Edessa that I’d asked for–not only the entire set of Armenian articles that both Theo and Ernest had sent in my direction–but the job of section editor for all Armenian and Syriac chronicles! 
The previous section editor had evidently disappeared from the project and it seems that only someone as young and unburdened as me had any hope of pulling off the organization and project management they needed on the exceedingly short timescale they had, or of being unwise enough to believe it could be done. But I was at least learning enough by then to expect that any appeal to more senior scholars than myself was likely to be met with “Sorry, I have too much work already” and an unspoken coda of “…and encyclopedia articles are not exactly a priority for me right now.” There was the rare exception of course, but I turned pretty quickly to my own cohort of almost- or just-doctored scholars to farm out the articles I couldn’t (or didn’t want to) write myself. So I suppose by that time even I was beginning to detect the “yes I can” signals coming from the early-career scholars around me. Naturally the articles were not all done on time–it was a pretty ludicrous time frame I was given, after all–but equally naturally, delays in the larger project meant that my part was completed by the time it really needed to be. And so in my first year as a postdoc I had a credit on the editorial team of a big encyclopedia project, and a short-paper-length article, co-authored with Philip Wood, giving an overview of Eastern Christian historiography as a whole. I remain kind of proud of that little piece. Lesson learned: your authors can almost always get you to agree to a deadline extension in these large collaborative projects. After all, what alternative do you have as editor, short of finding another author, who will need more time anyway, and pissing off the first one by withdrawing the commission? The only trouble with these articles is that it’s awfully hard to know how to express them in the tickyboxes of a typical publications database like KU Leuven’s Lirias. Does each of the fifteen entries I wrote get its own line? 
Should I list the editorship separately, or the longer article on historiography? It’s a little conundrum for the CV. Nevertheless I’m glad I got the opportunity to do the EMC project, definitely. And here’s another little secret–if I am able to make the time, I kind of like writing encyclopedia articles. It’s a nice way to get to grips with a subject, to cut straight to the essence of “What does the reader–and what do I–really need to know in these 250 words?” This might be why, when yet another project manager for yet another encyclopedia project found me about a year ago, I didn’t say no, and so this list will have an addition in the future. After that, though, I might finally have to call a halt.

I have written to Wiley-Blackwell to ask about their author self-archiving policies; I have a PDF offprint but am evidently not allowed to make it public, frustratingly enough. I will update the Lirias record if that changes. Brill has a surprisingly humane policy that allows me to link freely to the offprints of my own contributions in an edited collection, so I have done that here. I don’t seem to have an offprint for all the articles I wrote, though, so will need to rectify that.

Andrews, T. (2012). Armenians. In: Encyclopedia of Ancient History, ed. R. Bagnall et al. Malden, MA: Wiley-Blackwell.

Andrews, T. (2012). Matthew of Edessa. In: Christian–Muslim Relations. A Bibliographical History. Volume 3 (1050-1200), ed. D. Thomas and B. Roggema. Leiden: Brill.

Andrews, T. and P. Wood. (2012). Historiography of the Christian East. In: Encyclopedia of the Medieval Chronicle, general editor G. Dunphy. Leiden: Brill. (Additional articles on Agatʿangełos, Aristakēs Lastivertcʿi, Ełišē, Kʿartʿlis Cxovreba, Łazar Pʿarpecʿi, Mattʿēos Uṙhayecʿi, Movsēs Dasxurancʿi, Pʿawstos Buzand, Smbat Sparapet, Stepʿanos Asołik, Syriac Short Chronicles (with J. J. van Ginkel), Tʿovma Arcruni, Yovhannēs Drasxanakertcʿi.)
Public accountability, #acwrimo, and The Book

Over the course of 2011, one of the long-delayed things I finally managed to do was to put together a book proposal for the publication of my Ph.D. research. While I am reasonably pleased with the thesis I produced, it is no exception to the general rule that it would not make a very good book if I tried to publish it as it stands. As it happens there is a reasonably well-known series by a well-respected publisher, edited by someone I know, where my research fits in rather nicely. Even more nicely, they accepted my proposal.

Now here is where I have to humblebrag a little: I wrote my Ph.D. thesis kind of quickly, and much more quickly than I would recommend to any current Ph.D. students. Part of this was luck–once I hit upon my main theme, a lot of it just started falling into place–but part of it was the sheer terror of an externally-imposed deadline. I had rather optimistically applied for a British Academy post-doctoral fellowship in October 2008, figuring that either I’d be rejected and it would make no difference at all, or that I’d be shortlisted and have a deadline of 1 April 2009 to have my thesis finished and defended. At the time I applied I had a reasonable outline, one more or less completed chapter and the seeds for two more, and software that was about 1/3 finished. By the beginning of January I was only a little farther along, and I realized that the BA was going to make its shortlisting decisions very soon and, unless I made a serious and concerted effort to produce some thesis draft, I may as well withdraw my name. Amazingly enough this little self-motivational talk worked wonders and I spent the middle two weeks of January writing like crazy and dosing myself with ibuprofen for the increasingly severe tendinitis in my hands. (See? Not recommended.)
Then, wonder of wonders, I was shortlisted and I got to dump the entire thing in my supervisor’s lap and say “Read this, now!” The next month was a panic-and-endorphin-fuelled rush to get the thing ready for submission by 20 February, so that I could have my viva by the end of March. This involved some fairly amusing-in-retrospect scenes. I had to enlist my husband to draw a manuscript stemma for me in OmniGraffle because my hands were too wrecked to operate a trackpad. I imposed a series of strict deadlines on my own supervisor for reading and commenting on my draft, and met him on the morning of Deadline Day to incorporate the last set of his corrections, which involved directly hacking a horribly complicated (and programmatically generated) LaTeX file that contained the edited text I had produced. (Yes, *very* poor programming practice that, and I am still suffering the consequences of not having taken the time to do it properly.) In the end the British Academy rejected me anyway, but what did I care? I had a Ph.D. With that experience in mind, I set myself an ambitious and optimistic target of ‘spring 2012’ for having a draft of the book. For the record the conversion requires light-to-moderate revision of five existing chapters, complete re-drafting of the introductory chapter, and addition of a chapter that involves a small chunk of further research. It was in this context, last October, that I saw the usual buzz surrounding the ramp-up to NaNoWriMo and thought to myself “you know, it would be kind of cool to have an academic version of that.” It turns out I’m not the only one who thought this thought–there actually was an “Ac[ademic ]Bo[ok ]WriMo” last year. In the end the project that was paying my salary demanded too much of my attention to even think about working on the book, and the idea went by the wayside. 
The target of spring 2012 for production of the complete draft was also a little too optimistic, even by my standards, and that deadline whizzed right on by. Here it is November again, though, and AcWriMo is still a thing (though they have dropped the explicit ‘book’ part of it), and my book still needs to be finished, and this year I don’t have any excuses. So I signed myself up, and I am using this post to provide that extra little bit of public accountability for my good intentions. I am excusing myself from weekend work on account of family obligations, but for the weekdays (except *possibly* for the days of ESTS) I am requiring of myself a decent chunk of written work, with one week each dedicated to the two chapters that need major revision or drafting de novo. I won’t be submitting the thing to the publisher on 30 November, but I am promising myself (and now the world) that by the first of December, all that will remain is bibliographic cleanup and cosmetic issues. I am really looking forward to my Christmas present of a finished manuscript, and I am counting on public accountability to help make sure I get it. Follow me on Twitter or App.net (if you don’t already) and harass me if I don’t update!

Conference-driven doctoral theses

In the computer programming world I have occasionally come across the concept of ‘conference-driven development’ (and, let’s be honest, I’ve engaged in it myself a time or two.) This is the practice of submitting a talk to a conference that describes the brilliant software that you have written and will be demonstrating, where by “have written” you actually mean “will have written”. Once the talk gets accepted, well, it would be downright embarrassing to withdraw it so you had better get busy. It turns out that this concept can also work in the field of humanities research (as, I suspect, certain authors of Digital Humanities conference abstracts are already aware.)
Indeed, the fact that I am writing this post is testament to its workability even as a means of getting a doctoral thesis on track. (Graduate students take note!) In the autumn of 2007 I was afloat on that vast sea of Ph.D. research, no definite outline of land (i.e. a completed thesis) in sight, and not much wind in the sails of my reading and ideas to provide the necessary direction. I had set out to create a new critical edition of the Chronicle of Matthew of Edessa, but it had been clear for a few months that I was not going to be able to collect the necessary manuscript copies within a reasonable timeframe. Even if I had, the text was far too long and copied far too often for the critical edition ever to have been feasible. One Wednesday evening, after the weekly Byzantine Studies department seminar, an announcement was made about the forthcoming Cambridge International Chronicles Symposium to be held in July 2008. It was occurring to me by this point that it might be time to branch out from graduate-student conferences and try to get something accepted in ‘grown-up’ academia, and a symposium devoted entirely to medieval chronicles seemed a fine place to start. I only needed a paper topic. Matthew wrote his Chronicle a generation after the arrival of the First Crusade had changed pretty much everything about the dynamics of power within the Near East, and his city Edessa was no exception. Early in his text he features a pair of dire prophetic warnings attributed to the monastic scholar John Kozern; the last of these ends with a rather spectacular prediction of the utter overthrow of Persian (read: Muslim, but given the cultural context you may as well read “Persian” too) power by the victorious Roman Emperor, and Christ’s peace until the end of time. 
It is a pretty clearly apocalyptic vision, and much of the Chronicle clearly shows Matthew struggling to make sense of the fact that some seriously apocalyptic events (to wit, the Crusade) occurred and yet it was pretty apparent forty years later that the world was not yet drawing to an end with the return of Christ. Post-apocalyptic history, I thought to myself, that’s nicely attention-getting, so I made it the theme of my paper. This turned out to be a real stroke of luck – I spent the next six months considering the Chronicle from the perspective of somewhat frustrated apocalyptic expectations, and little by little a lot of strange features of Matthew’s work began to fall into place. The paper was presented in July 2008; in October I submitted it for publication and turned it into the first properly completed chapter of my thesis. Although this wasn’t the first article I submitted, it was the first one that appeared in print.
Features of Dispersed Phase Modelling: Particles

Introduction to Particles

Let's start with the definitions.

Dispersed medium – refers to either small particles in a gaseous, liquid or solid aggregate state, or a solid with cavities (pores). Visions of humid sea-side air, sandy beaches and a glass of lemonade evoke thoughts of summer, but are also demonstrative examples of dispersed media. In these examples we can see the key feature of these media - particles do not exist on their own, but rather interact with the continuous phase carrying them:

• water droplets are carried by the air flow
• gas bubbles float in the drink
• water seeps through the sand into the ground

Furthermore, the particles are evenly distributed amongst the molecules of the continuous phase, without entering into a chemical reaction with them. The interactions of the dispersed and continuous phases form a dispersed multiphase system. Based on the particle size (d), we can distinguish between coarse and fine dispersed systems. And if the particle diameter is similar to the size of the molecules of the carrier medium, then such a system is dubbed a true solution. It is not at all necessary for all particles in a dispersed system to have the same size and shape. On the contrary, they will most likely be completely unlike each other.

Dispersion in FlowVision: Particles and Carcass

Depending on the aggregate states of the dispersed and carrier phases, different dispersed systems are formed. All their diversity is displayed in the following table:

So what's the difference?

Conceptually, the phases "Particles" and "Carcass" are similar - within the framework of the Euler method, each can be considered to be a continuum. However, the physics for these two phases are different. The main difference is that the "Carcass" phase is rigid.
It follows that the models and relationships applied to particles and rigid bodies are different: the physical process equations for a Carcass do not have a convective term. And anyway, the equations for Motion and Phase Transfer are pretty much irrelevant for a Carcass.

At this stage, let's follow the example set by our developers and split our account of FlowVision's dispersion capabilities into two parts: Particles and Carcass. From here on the sole focus of the article will be the Particles. The use of a Carcass for modelling heat exchangers, filters, soil and other porous media will be covered in the third article of this series.

Euler's Method in Particle Modelling

In FlowVision, the particles being considered are combined into a dispersion cloud. Thus, physical processes are not evaluated for each individual particle, but rather for a volume of space, which has the properties of a continuous medium. Therefore, when simulating a multiphase flow, the cloud of particles and the continuous carrier phase interact as interpenetrating continuous media. However, this approach does not explicitly take into account the collisions of particles with each other and, as a result, there are no stresses inside the cloud. Therefore, it is typical to introduce additional models (e.g. a fluidized bed model) in order to account for this interaction of particles with each other within the framework of the Euler method. FlowVision implements a simple repulsion model that introduces an additional term to the particle motion equation. Its coefficients can be edited within the FlowVision interface.

Why does the dispersion solver use Euler's method?

Another approach to modelling particles in a continuous medium is the Lagrange method. It involves solving a larger number of equations for the modelled particles. Each modelled particle represents a certain (fairly large) number of real particles.
The Lagrangian solver records the movement of these model particles from one face to the next for each cell through which the particle trajectory passes. The Lagrangian method had been implemented in the 2nd generation of FlowVision, but as of FlowVision 3.xx.xx it was decided to switch to the Euler method, which requires less computational resources and less RAM. Both methods (Euler and Lagrange) have their advantages and disadvantages; a detailed overview of these can be found in the literature on the subject.

What is FlowVision able to simulate using particles?

• The movement of gas bubbles in a liquid (taking into account the change in size of the bubbles)
• Jet spraying from a nozzle (taking into account the splitting and merging of drops)

Limitations:

1. The current implementation in FlowVision does not support the adding of more than one dispersed phase to a model (i.e. each model is limited to only one Particles or only one Carcass phase).
2. FlowVision is not yet able to model the phase transition of a continuous phase changing to a dispersed one.

Aside from limitations, there are also strong capabilities:

1. Particles can participate in multiphase VOF interactions: (particles + continuous #1) + continuous #2.
2. FlowVision 3.12.02 introduced a model for particle condensation. Condensation modelling is currently in beta testing, so if you encounter any difficulties in applying the model, please contact technical support: support@flowvisioncfd.com.
3. Particles are compatible with periodic boundary conditions and sliding surfaces.
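As a purely illustrative aside (this is a generic textbook sketch, not FlowVision's actual numerics or API), the conceptual difference between the two methods can be shown with one-dimensional advection of a particle cloud: the Eulerian view updates a concentration field on a fixed grid, while the Lagrangian view moves each tracked model particle along the flow.

```python
def euler_step(conc, u, dx, dt):
    """Eulerian view: advance a particle-concentration field on a fixed grid
    by one explicit first-order upwind step (assumes u > 0 and u*dt/dx <= 1)."""
    c = conc[:]
    for i in range(1, len(conc)):
        c[i] = conc[i] - u * dt / dx * (conc[i] - conc[i - 1])
    return c

def lagrange_step(positions, u, dt):
    """Lagrangian view: advance every tracked model particle along the flow."""
    return [x + u * dt for x in positions]
```

The Eulerian update costs one operation per grid cell regardless of how many physical particles the cloud represents, while the Lagrangian update costs one equation per tracked model particle, which is the memory and compute trade-off discussed above.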
Mathematical Analysis, Modelling, and Applications

The activity in mathematical analysis is mainly focussed on ordinary and partial differential equations, on dynamical systems, on the calculus of variations, and on control theory. Connections of these topics with differential geometry are also developed.

The activity in mathematical modelling is oriented to subjects for which the main technical tools come from mathematical analysis. The present themes are multiscale analysis, mechanics of materials, micromagnetics, modelling of biological systems, and problems related to control theory.

The applications of mathematics developed in this course are related to the numerical analysis of partial differential equations and of control problems. This activity is organized in collaboration with MathLab for the study of problems coming from the real world, from industrial applications, and from complex systems.
Edge querying in graph theory

In this post, I will present three graph theory problems in increasing difficulty, each with a common theme: one determines properties of edges in a complete graph through repeated queries, in service of a greater objective.

ESPR Summer Program Application: Alice and Bob play the following game on a $K_n$ ($n\ge 3$): initially all edges are uncolored, and each turn, Alice chooses an uncolored edge, then Bob chooses to color it red or blue. The game ends when any vertex is adjacent to $n-1$ red edges, or when every edge is colored; Bob wins if and only if both conditions hold at that time. Devise a winning strategy for Bob.

This is more of a warm-up to the post, since it has a different flavor from the other two problems, and isn't as demanding in terms of experience with combinatorics. However, do note that when this problem was first presented, applicants did not know the winner ahead of time; it would be difficult to believe that Bob can guarantee such a strong condition, especially when Alice seemingly had far more control over the structure of the game, and there are lots of traps that could lead to a possible false solution.

Say that a "bad" vertex is one that is connected to some blue edge, and a "good" vertex is one that isn't. To solve this problem, I found it motivating to first think about what could lead to a win for Alice, when Bob has no good choice left. The most blatant examples would be things that happen towards the end of the game. For example, if only one good vertex remains but there are uncolored edges not adjacent to it, Bob would lose, since Alice can just query every edge adjacent to it, and Bob ends up either having no good vertices, or ending the game prematurely. This idea can be further generalized; if there is at least one edge between bad vertices that remains uncolored, Alice wins, since she can simply query every edge that's adjacent to at least one good vertex.
If the game doesn't end by the time all of those queries are exhausted, there would be no good vertices left and Bob has no way of winning. Hence, Bob must adhere to a very strict strategy of always keeping every edge between bad vertices colored, and from this we can actually deduce the solution.

Solution: Bob marks the first edge blue. Then, on each query:

• If Alice queries an edge between two good vertices, Bob can simply mark it red.
• If Alice queries an edge between a good vertex and a bad vertex, Bob marks it red, unless doing so would make the good vertex connected to every bad vertex without winning the game outright; in that case, Bob colors it blue.

Note that at no point can Alice query an edge between two bad vertices, since such edges are always already colored. Hence, Bob can definitely win once he has only one good vertex remaining. (EDIT: this problem is nearly identical to IOI 2014, Task 3.)

The below two problems feature some form of optimization.

Canada/USA Mathcamp Application: You are investigating a dangerous cult, with the goal of uncovering its reclusive Mastermind. From previously gathered intelligence, you know that the cult has $n\ge 5$ members: the Mastermind, the Go-between, the Figurehead, and $n-3$ Pawns. Each member of the cult associates with some, but not all, of the other members. In particular:

• The Mastermind is very reclusive; they associate with the Go-between, but no other member.
• The Go-between associates with the Mastermind and the Figurehead, but none of the Pawns.
• The Figurehead associates with everyone except the Mastermind.
• The Pawns associate with the Figurehead, but not the Mastermind nor the Go-between.

Some pairs of Pawns may also associate with each other (so each Pawn could potentially have up to $n-3$ total associates).

You cannot figure out which members have special roles just by looking. However, each day, you can select a pair of members to investigate, and determine whether they associate with each other. Find a strategy to find the mastermind in under $2n$ days.
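Before the Mathcamp solution, a quick aside: Bob's strategy for the first problem is concrete enough to check by exhaustive search over every possible Alice for small $n$. The code below is my own formalization of the strategy above; the state encoding, memoization, and function names are implementation choices of mine, not part of the intended solution.

```python
from itertools import combinations

def bob_always_wins(n):
    """Exhaustively check Bob's strategy against every possible Alice on K_n.
    Edge colors: 0 = uncolored, 1 = red, 2 = blue."""
    edges = list(combinations(range(n), 2))
    ei = {e: i for i, e in enumerate(edges)}

    def is_red(state, u, v):
        return state[ei[(min(u, v), max(u, v))]] == 1

    def is_bad(state, v):              # "bad" = connected to some blue edge
        return any(v in e and state[i] == 2 for e, i in ei.items())

    def red_degree(state, v):
        return sum(is_red(state, v, u) for u in range(n) if u != v)

    def bob(state, e):
        """Bob's deterministic color (1 or 2) for a queried uncolored edge e."""
        if not any(state):
            return 2                   # the very first edge of the game: blue
        a, b = e
        bad_a, bad_b = is_bad(state, a), is_bad(state, b)
        if bad_a == bad_b:
            return 1                   # good-good: red (bad-bad never occurs)
        v = b if bad_a else a          # the good endpoint
        trial = list(state)
        trial[ei[e]] = 1
        trial = tuple(trial)
        # Would v now be red-joined to every bad vertex?
        if all(is_red(trial, v, u) for u in range(n)
               if u != v and is_bad(trial, u)):
            wins = all(trial) and red_degree(trial, v) == n - 1
            return 1 if wins else 2    # red only when it wins on the spot
        return 1

    seen = set()

    def dfs(state):                    # True iff Bob wins every continuation
        if state in seen:
            return True
        seen.add(state)
        full = all(state)
        champ = any(red_degree(state, v) == n - 1 for v in range(n))
        if full or champ:
            return full and champ      # game over: Bob wins iff both hold
        return all(dfs(state[:i] + (bob(state, e),) + state[i + 1:])
                   for e, i in ei.items() if state[i] == 0)

    return dfs((0,) * len(edges))
```

Larger $n$ work the same way, but the state space grows on the order of $3^{\binom{n}{2}}$, so the exhaustive check quickly becomes slow.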
The staff members of Mathcamp presented a solution taking $3n+\Theta(1)$ days during a math jam. However, that can be further optimized. The main observation: we can (almost) immediately determine the role of an individual by querying the edges between them and everyone else, and from there we can keep track of two stacks of people, those who might be the Mastermind and those who can't be. The Figurehead would be among the people who aren't the Mastermind, assuming that we didn't choose a special individual at the start, and the only person in the other stack who doesn't connect to the Figurehead is the Mastermind.

The full solution is as below. First, ask the connection between a fixed member "Bob" and everyone else, spending $n-1$ days.

If there are $n-2$ positives, Bob is the Figurehead, and the single non-associate is the Mastermind. If there is exactly $1$ positive, Bob is either the Mastermind (whose one associate is the Go-between) or a Pawn with no Pawn friends (whose one associate is the Figurehead); querying that associate against everyone Bob rejected settles it within budget, since the Figurehead connects to all of them except the Mastermind, while the Go-between connects only to the Figurehead.

If there are $3\le m\le n-3$ positives, we know that Bob is a Pawn. We can keep a stack $s_1$ of the members connected to Bob and another stack $s_2$ of the members not connected to Bob. Obviously, $s_2$ contains the Mastermind and $s_1$ contains the Figurehead, who fails to connect only to the Mastermind. Hence, we can keep asking for the interaction between the top member of $s_1$ and the top member of $s_2$: a positive answer pops the top of $s_2$ (the Mastermind associates with no one in $s_1$), while a negative answer pops the top of $s_1$ (the Figurehead gets a negative only against the Mastermind, who is never popped). When $s_1$ becomes empty (or $s_2$ is down to a single member), the Mastermind tops $s_2$.

If there are $2$ positives, it can't be known whether Bob is a Pawn or the Go-between. However, we can then ask for the connection between the two members that Bob has a positive connection with; if they connect, then Bob is a Pawn and we can proceed as before. If they don't, then Bob is the Go-between, and every other person whom Bob can't connect to is a Pawn; if Alice is someone Bob connects to, we can try to connect Alice with any Pawn. Alice is the Mastermind if and only if the result is negative.

This can be shown to take at most $2n-3$ days if we also use the process of elimination. Side note: despite this solution, I was still rejected at the program :( Alright, we'll move on to the final entry.
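The strategy is mechanical enough to implement and sanity-check. The sketch below is my own (the helper names and the random cult generator are mine, not part of the problem); it follows the case analysis above, spelling out the one-positive case in full since Bob could also be a Pawn with no Pawn friends, and it stays within the $2n-3$ budget:

```python
import itertools
import random

def build_cult(n, pawn_edge_prob=0.5, seed=0):
    """Build a random cult on members 0..n-1; returns (adjacency, mastermind).
    Roles are assigned via a random permutation, so the detective cannot rely
    on member labels."""
    rng = random.Random(seed)
    roles = list(range(n))
    rng.shuffle(roles)
    mm, gb, fh = roles[0], roles[1], roles[2]  # Mastermind, Go-between, Figurehead
    pawns = roles[3:]
    adj = {frozenset(e): False for e in itertools.combinations(range(n), 2)}
    adj[frozenset((mm, gb))] = True
    adj[frozenset((gb, fh))] = True
    for p in pawns:
        adj[frozenset((fh, p))] = True
    for p, q in itertools.combinations(pawns, 2):
        if rng.random() < pawn_edge_prob:
            adj[frozenset((p, q))] = True
    return adj, mm

def find_mastermind(n, adj):
    """Run the querying strategy described above; returns (suspect, days_used)."""
    days = 0
    def ask(a, b):
        nonlocal days
        days += 1
        return adj[frozenset((a, b))]

    bob, others = 0, list(range(1, n))
    pos = [x for x in others if ask(bob, x)]
    neg = [x for x in others if x not in pos]

    if len(pos) == n - 2:              # Bob is the Figurehead
        return neg[0], days
    if len(pos) == 1:                  # Bob is the Mastermind or a friendless Pawn
        x = pos[0]                     # x is the Go-between or the Figurehead
        hits = [y for y in neg if ask(x, y)]
        if len(hits) >= 2:             # x is the Figurehead; its non-associate is M
            return next(y for y in neg if y not in hits), days
        return bob, days               # x is the Go-between, so Bob is the Mastermind
    if len(pos) == 2:
        a, b = pos
        if not ask(a, b):              # Bob is the Go-between: pos = {M, F}
            return (a if not ask(a, neg[0]) else b), days
        # otherwise Bob is a Pawn; fall through to the stack phase

    # Main case: Bob is a Pawn. s1 contains the Figurehead, s2 the Mastermind.
    s1, s2 = pos[:], neg[:]
    while s1 and len(s2) > 1:
        if ask(s1[-1], s2[-1]):
            s2.pop()                   # the Mastermind associates with no one in s1
        else:
            s1.pop()                   # the Figurehead fails only against the Mastermind
    return s2[-1], days
```

Stopping the stack phase as soon as $s_2$ is down to one member is exactly the "process of elimination" saving that brings the total under $2n$.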
ISL 2019 C8: Alice has a $K_n$, where each edge has a direction that's not initially known to her. Each day, she can ask for the direction of one edge. Find a way to, after $4n$ days, determine whether there exists a vertex with outdegree at most $1$.

The first non-trivial barrier is to come up with an $O(n)$ algorithm, which is simple enough for something near the end of the IMO Shortlist. It's fairly easy to make all but one vertex have out-degree $1$ by simply building a spanning tree, where a vertex is "left alone" once it has an outgoing edge. Then, we can fully eliminate most of them in the same way, down to possibly at most $2$ (if two vertices with outdegree $1$ remain, and the edge between them was already drawn). This achieves a bound of approximately $5n$ queries pretty easily by simply brute-forcing the remaining $3$ candidates. However, that's not quite what we want. To optimize this solution further, say that after the initial two steps, vertices $a,b$ have one outgoing edge and $c$ has no outgoing edges. Suppose that the edge between $a,b$ was drawn during the first stage; neither of them was the final vertex added to the spanning tree while $c$ was, so for the tree to be acyclic, the edges $ac$ and $bc$ cannot have been drawn. After drawing these two edges, we can eliminate one of $a,b,c$ to come up with a solution that achieves the desired bound.

Continuing the tradition of past years, our seniors at the Indian IMO camp (an unofficial one happened this year) once again conducted LMAO, essentially ELMO but Indian. Sadly, only those who were in the unofficial IMOTC conducted by Pranav, Atul, Sunaina, Gunjan and others could participate in that.
We all were super excited for the problems, but I ended up not really trying them because of school things and stuff. Yet I solved problem 1, or so I thought. Problem 1: There is a grid of real numbers. In a move, you can pick any real number $x$, and any row or column, and replace every entry in it with $x$. Is it possible to reach any grid from any other by a finite sequence of such moves? It turned out that I fakesolved, and oh my god I was so disgusted; no way this proof could be false. Then, when I asked Atul, it turned out that even my answer was wrong and he didn't even read the proof. This made me even more angry, and guess what? I was not alone; Krutarth too fakesolved.

Hii everyone! Today I will be discussing a few geometry problems in which, once you "guess" or "claim" the important things, the problem can easily be finished using not-so-fancy techniques (e.g. angle chasing, power of a point, etc. Sometimes you would want to use inversion or projective geometry, but once you have figured out that some particular synthetic property should hold, the finish shouldn't be that non-trivial). This post stresses intuition rather than rigor. When I did these problems myself, I used freehand diagrams (not GeoGebra or ruler/compass) because I feel that gives a lot more freedom to you. By freedom, I mean the power to guess. To elaborate on this: suppose you drew a perfect diagram on paper using ruler and compass; then you would be too rigid about what is true in the diagram which you drew. But sometimes that might just be a coincidence. E.g. let's say a question says $D$ is a random point on segment $BC$, so maybe
Grid functions

Grid functions are used to calculate with data items of grid (two-dimensional) domains, explicitly using the two-dimensional structure of the data, like potential or proximity:
• dist2 - the distances in a grid towards a point data item
• griddist - impedance in a grid towards the nearest point in a pointset, summing the resistances of the shortest path to the nearest point
• potential - a neighborhood operation to sum the values of neighbouring cells in a grid, based on a kernel
• proximity - a neighborhood operation to get the maximum value of neighbouring cells in a grid, based on a kernel
• diversity - the number of different occurrences in the neighbourhood of each cell in a grid
• district - partition according to areas of adjacent (horizontal & vertical, not diagonal) grid cells with the same values
• district_8 - partition according to areas of adjacent (horizontal & vertical & diagonal) grid cells with the same values
• perimeter - for each occurring partition, count the number of edges that cells of that partition have with other cells or the raster border
• perimeter_weighted - same, but with provided weights for the North, East, South, and West edges
• raster_merge - to merge data from smaller to larger grids, e.g. to combine country grids to a European grid
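As an illustration of what a kernel-based neighborhood operation such as potential computes, here is a minimal pure-Python sketch (illustration only, not GeoDMS syntax; the 3×3 all-ones kernel and the tiny grid are arbitrary choices):

```python
def potential(grid, kernel):
    # Weighted sum of neighbouring cell values, treating cells
    # outside the grid edge as zero.
    rows, cols = len(grid), len(grid[0])
    kh, kw = len(kernel) // 2, len(kernel[0]) // 2
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for i, krow in enumerate(kernel):
                for j, k in enumerate(krow):
                    rr, cc = r + i - kh, c + j - kw
                    if 0 <= rr < rows and 0 <= cc < cols:
                        out[r][c] += k * grid[rr][cc]
    return out

grid = [[0, 1, 0],
        [1, 1, 1],
        [0, 1, 0]]
result = potential(grid, [[1, 1, 1]] * 3)  # each cell: sum of its 3x3 window
```

With the all-ones kernel, the center cell sums the whole plus sign (5), while each corner sees only the overlapping part of its window (3).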
Necessary Conditions for a Fixed Point of Maps in Non-Metric Spaces

Cite as: I. Raykov, "Necessary Conditions for a Fixed Point of Maps in Non-Metric Spaces," Advances in Pure Mathematics, Vol. 2 No. 6, 2012, pp. 371-372. doi: 10.4236/apm.2012.26055.

1. Introduction

Let X denote a complete (or compact) metric space and also What must be the conditions, in terms of the metric space X, such that the continuous map We suppose that (see [1-3]): the continuous map We remind that the Banach contraction principle for multivalued maps is valid, and also the next theorem, proved by H. Covitz and S. B. Nadler Jr. (see [4]). Theorem 1. Let

2. Main Result

We consider now the next theorem: Theorem 2. Let We suppose also that the maps: Then if the rest terms of the sequence Proof. Let Let also the rest of the terms of the sequence and therefore the Cauchy sequence

3. Acknowledgements

We express our gratitude to Professor Alexander Arhangelskii from OU-Athens for creating the problem, to Professor Jonathan Poritz and Professor Frank Zizza from CSU-Pueblo for the precious help in solving this problem, and to Professor Darren Funk-Neubauer and Professor Bruce Lundberg for correcting some grammatical and spelling errors.
Find the expression of x in terms of $\xi$

Co-ordinates of the nodes of a finite element are given by P(4, 0) and Q(8, 0). Find the expression of $x$ in terms of $\xi$ when:
i) the third node R is taken at (6, 0)
ii) the third node R is taken at (5, 0)
Comment on the result.

Subject: Finite Element Analysis
Topic: Two Dimensional Finite Element Formulations
Difficulty: Medium

1 Answer

i) Third node R is taken at (6, 0):
$ x=\sum x_j\phi_j = \phi_1 x_1 + \phi_2 x_2+\phi_3 x_3 $
$ =\frac{1}{2}\xi(\xi-1)\cdot 4+\frac{1}{2}\xi(\xi+1)\cdot 8+(1-\xi^2)\cdot 6 $
$ =2\xi^2-2\xi+4\xi^2+4\xi+6-6\xi^2 $
$ x=2\xi+6 $

ii) Third node R is taken at (5, 0):
$ x=\sum x_j\phi_j=\phi_1 x_1+\phi_2 x_2+\phi_3 x_3 $
$ =\frac{1}{2}\xi(\xi-1)\cdot 4+\frac{1}{2}\xi(\xi+1)\cdot 8+(1-\xi^2)\cdot 5 $
$ =2\xi^2-2\xi+4\xi^2+4\xi+5-5\xi^2 $
$ x=\xi^2+2\xi+5 $

Comment: When R is taken at the midpoint of the element, the transformation between $x$ and $\xi$ is linear, but when R is taken away from the midpoint, the transformation becomes non-linear. Such a transformation is useful to formulate finite elements having curved edges, so that curved structural geometry can be modelled.
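Since both mappings are quadratic in the natural coordinate, the algebra can be verified numerically at a handful of sample points (plain Python, not part of the original answer):

```python
def x_mapped(xi, x_mid):
    # Quadratic shape functions for a 3-node 1D element in the natural
    # coordinate xi in [-1, 1], with nodes at xi = -1, 0, +1.
    phi1 = 0.5 * xi * (xi - 1.0)   # node P at x = 4
    phi2 = 0.5 * xi * (xi + 1.0)   # node Q at x = 8
    phi3 = 1.0 - xi * xi           # third node R
    return 4.0 * phi1 + 8.0 * phi2 + x_mid * phi3

# R at x = 6 (midpoint): mapping reduces to the linear 2*xi + 6.
# R at x = 5 (off midpoint): mapping is the quadratic xi**2 + 2*xi + 5.
for xi in (-1.0, -0.5, 0.0, 0.5, 1.0):
    assert abs(x_mapped(xi, 6.0) - (2 * xi + 6)) < 1e-12
    assert abs(x_mapped(xi, 5.0) - (xi**2 + 2 * xi + 5)) < 1e-12
```

A quadratic identity that holds at five distinct points holds everywhere, so this confirms both results.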
High School GPA Calculator

GPA calculation

The GPA is calculated as a weighted average of the grades, where the number of credit hours is the weight and the numeric grade is taken from the GPA table. The GPA is equal to the sum of the products of the credit-hours weight (w) times the grade (g):

GPA = w[1]×g[1] + w[2]×g[2] + w[3]×g[3] + ... + w[n]×g[n]

The credit-hours weight (w[i]) is equal to the credit hours of the class divided by the sum of the credit hours of all the classes:

w[i] = c[i] / (c[1]+c[2]+c[3]+...+c[n])

GPA table

Grade        GPA
A+           4.33
A            4.00
A-           3.67
B+           3.33
B            3.00
B-           2.67
C+           2.33
C            2.00
C-           1.67
D+           1.33
D            1.00
D-           0.67
F            0
P (pass)     -
NP (no pass) -
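The two formulas combine into a single weighted sum, total grade points divided by total credits; a short Python sketch using the table above:

```python
GRADE_POINTS = {"A+": 4.33, "A": 4.00, "A-": 3.67, "B+": 3.33, "B": 3.00,
                "B-": 2.67, "C+": 2.33, "C": 2.00, "C-": 1.67,
                "D+": 1.33, "D": 1.00, "D-": 0.67, "F": 0.0}

def gpa(classes):
    # classes: list of (letter grade, credit hours) pairs
    total_credits = sum(c for _, c in classes)
    return sum(GRADE_POINTS[g] * c for g, c in classes) / total_credits

# a 3-credit A and a 4-credit B+: (3*4.00 + 4*3.33) / 7
result = round(gpa([("A", 3), ("B+", 4)]), 3)  # → 3.617
```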
A Numbers Game

If moving to the Ross K. Smith 100-point scale increases the accuracy of speaker points, and if speaker points accurately reflect a team's abilities, we might expect that the scale change would help speaker points predict future winners. To my surprise, there is not a dramatic increase in the predictive accuracy of speaker points in the three large tournaments that moved to the new scale while maintaining the same number of teams and preliminary rounds. I used the following model: Predictions are made for rounds five and later, using at least the first four preliminary rounds. The model makes no prediction if the two teams have different win-loss records. If they have the same win-loss record and the same total points, the model also does not make a prediction. Otherwise, it predicts that whichever team had the higher points for the predicting rounds will win the debate. If two teams have the same record for rounds one through four, but they don't meet until round eight, the model still predicts the winner based on their points in rounds one through four, since the teams might well have debated in round five. It also makes predictions based on their points in rounds one through five, one through six, and one through seven, assuming their win-loss records are the same after rounds five, six, and seven. All predictions are based on rounds one through four, five, six, or seven; no predictions ever discard any early prelims.
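The prediction rule described above can be sketched in a few lines of Python (the record format and names here are hypothetical, not from the original code):

```python
def predict_winner(rec_a, pts_a, rec_b, pts_b):
    # rec_*: win-loss records over the predicting rounds, e.g. (3, 1)
    # pts_*: total speaker points over those same rounds
    # Returns "A", "B", or None when the model makes no prediction.
    if rec_a != rec_b:     # different win-loss records: no prediction
        return None
    if pts_a == pts_b:     # same record and tied points: no prediction
        return None
    return "A" if pts_a > pts_b else "B"

assert predict_winner((4, 0), 230.5, (4, 0), 229.0) == "A"
assert predict_winner((3, 1), 230.5, (4, 0), 229.0) is None
```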
Wake Forest, 8 prelims
│ Year    │Teams│Tied points, no prediction│Correct prediction│Incorrect prediction│Accuracy│
│2006-2007│138  │11                        │89                │50                  │63.0%   │
│2007-2008│134  │1                         │101               │79                  │56.1%   │

UNLV open division, 7 prelims
│ Year    │Teams│Tied points, no prediction│Correct prediction│Incorrect prediction│Accuracy│
│2008-2009│56   │0                         │32                │23                  │58.2%   │
│2009-2010│54   │0                         │20                │11                  │64.5%   │

Harvard, 8 prelims
│ Year    │Teams│Tied points, no prediction│Correct prediction│Incorrect prediction│Accuracy│
│2008-2009│80   │9                         │57                │27                  │66.1%   │
│2009-2010│87   │4                         │68                │33                  │66.7%   │

The Harvard tournament this year had a very explicit and very simple translation from the 30-point scale to the 100-point (or Ross K. Smith) scale. Using the results packet, we can see how the judging pool interpreted the RKS scale. If we apply the Harvard translation to historical tournaments, the switch to RKS this year was accompanied by more point inflation than usual. The older data (drawn from debateresults.com) shows that a median speaker used to earn between 27.8 and 28.0 points (78 to 80 points, under RKS). This year, he or she earned 28.25 points (82.5, under RKS). A 28.0 (80, under RKS) used to mean 55th to 65th percentile. This year, it meant about 35th percentile. It will be interesting to see how many judges circled the indicator on the front of their ballots indicating that they were conforming to the suggested scale translation. It should also be noted that Harvard's points this year were substantially lower than Kentucky's points this year.

Judge scale variation

In a post on edebate (mirror, mirror), Brian DeLong suggests that tournaments adopting the 100-point (RKS) scale provide an interpretation of the scale to judges to make sure points are allocated fairly.
His motivation is based partially on a discrepancy he saw between judges in the Kentucky results from this year (mirror): "Some people are on this 87=average boat, while others place average at around 78-80ish". Here is a chart of the average points given out by judges at Kentucky this year. Some of these discrepancies could be between judges who simply saw teams of different abilities. One way to correct for this is to compare the points given by each judge to the points given to the same competitors by other judges. From this, we can see how much a judge skews high or low. Applying this skew to the actual average points (86.37), we get an estimate of each judge's perception of the average. As in the first chart, the red line shows the actual average point distribution; the blue line shows the distribution of estimates of judge perceptions of the average. To get a feel for how the estimate works, here are two examples:
• Alex Lamballe gave an average of 90.83 points, but other judges judging the same debaters gave them an average of 83.00 points. His skew is 7.83 points, so we estimate that he perceives the average points to be 7.83 points higher than the true average of 86.37. His estimated perceived average is 94.20.
• Justin Kirk gave an average of 79.50 points, but other judges judging the same debaters gave them an average of 90.00 points. His skew is -10.50 points, so we estimate that he perceives the average points to be 10.50 points lower than the true average of 86.37. His estimated perceived average is 75.87.

From these two extreme examples, it should be clear that this method of estimating judge-perceived averages is quite inexact. I think it is mostly useful as a check to ensure that the point skew in the first graph is not solely due to the differing quality of teams seen by different judges. Clearly, there is some variation in what judges think "average" is. But how can we check if Kentucky showed more variation in judge point scale than other tournaments?
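The estimate in both examples is the same one-line calculation (a sketch of the described method, not the blog's actual code):

```python
TRUE_AVG = 86.37  # actual average points at the tournament

def perceived_average(judge_avg, others_avg_same_debaters):
    # skew = how much higher or lower the judge scores the same debaters
    skew = judge_avg - others_avg_same_debaters
    return TRUE_AVG + skew

lamballe = perceived_average(90.83, 83.00)  # about 94.20
kirk = perceived_average(79.50, 90.00)      # about 75.87
```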
One way is to measure judge scale variation with a test that has a scale-invariant expected distribution. The Mann-Whitney U test is approximately normal for reasonably-sized samples, so we can use that to find the Z-score for each judge at a tournament. The larger the variance of judge Z-scores, the more variation there is in judge point scale. (As above, the two samples are the points given by a certain judge and the points given to the same debaters by the rest of the pool.) The 2009 tournament showed more judge scale variance than any of the other Kentucky tournaments in the debateresults.com database. If we compare the distribution of judge scale Z-scores from 2009 to the combined distribution of judge scale Z-scores from Kentucky in 30-point-scale years, there is clearly more judge scale variation under the 100-point scale. The Wake Forest tournament was the first tournament to change to the 100-point scale, in 2007. There was no corresponding jump in judge scale variance at Wake that year, but Wake provided a reference scale with extensive documentation.

Reference point scales

DeLong also suggested a translation of the 30-point scale to the 100-point scale. The Kentucky judges from 2008 and 2009 also implicitly suggest a translation between the scales by the distribution of their votes. For instance, a 27 was at the 10th percentile of points given in 2008; this value is closest to an 80 in 2009, since 80 was at the 11th percentile in 2009. The chart below compares DeLong's translation with the implicit translation by the Kentucky and Wake judges. It also charts the translations proposed by Michael Hester (mirror, mirror) and Roy Levkovitz, as well as a very simple "last digit" translation, calculated by subtracting 20 from a score under the 30-point system and then multiplying by 10: 27.5 becomes 75, 29.0 becomes 90, and so on.
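The "last digit" translation is simple enough to state as code:

```python
def last_digit_translation(score_30):
    # drop the leading "2": subtract 20, then multiply by 10
    return (score_30 - 20) * 10

assert last_digit_translation(27.5) == 75
assert last_digit_translation(29.0) == 90
```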
The implicit translation induced by the Kentucky judging from 2008 is:
│30pt scale│100pt scale│
│29.5      │98         │
│29.0      │94         │
│28.5      │91         │
│28        │88         │
│27.5      │84.5       │
│27        │80         │
│26.5      │75         │
│26.0      │72         │

The other translations (except Wake) are lower for most of the scale. Another way to compare translations is to see how they would affect the cumulative point distributions. For example, at Kentucky in 2009, a 30th percentile performance would earn an 85. At Wake in 2007, a 70th percentile performance would earn a 90. (Included in this chart is the point scale of Brian Rubaie, which he gives in terms of position, rather than a translation from the 30-point scale.) In the cumulative distribution chart, as in the translation chart, the "last digit" translation is close to the proposal of DeLong. The comparatively larger point range Hester gives to the 40%-85% area is alluded to in his proposal: "these two have large ranges b/c they are the areas i want to distinguish the most". Levkovitz has a similar reasoning: "The difference to me between a 29.5 and 30 is quite negligible; both displayed either perfection or near perfection so their range should be smaller. But it is within what we call now 28.5, 28, and 27.5s that I want to have a larger range of points to choose from. The reasoning for this is that I want to be able to differentiate more between a solid 28 and a low 28, or a strong 28.5 and low 28."

Every summer there is discussion about the side bias of prospective controversy areas and resolutions. Using the data from DebateResults.com, we can see what side bias past resolutions have exhibited. Under the Bradley-Terry model, the energy, China, and security guarantee topics had small, yet highly statistically significant (p < .001), negative bias. The other three topics show no statistically significant bias either way. The BT model suggests that the largest topic side bias in the past six years was 12.4%, on the energy topic.
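One way to read a percentage side bias in a Bradley-Terry model is as a multiplier on the favored side's odds; under that assumption (mine, not spelled out in the post), the bias converts to a win probability like this:

```python
def favored_win_probability(bias):
    # bias as a fraction, e.g. 0.124 for a 12.4% side bias;
    # the favored side's odds are multiplied by (1 + bias)
    odds = 1.0 + bias
    return odds / (1.0 + odds)

p = favored_win_probability(0.124)  # about 0.529 in an otherwise even round
```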
With a 12.4% neg bias, in an otherwise evenly matched round, the neg has a 52.9% chance of winning. For more details, see the description of how the bias is measured.

In response to a question from cross-x.com, I checked to see if the point distributions are any smoother for teams that might clear. It turns out that they aren't, except to the extent that the distributions have a narrower range.

The Georgia State tournament is moving to a 100-point speaker point scale next year. This will make them the third large [N.B.] tournament to use a non-standard scale. I have put up some charts of how changing the scale affected point distributions at Wake and USC. As you might expect, the number of distinct speaker points per round and per speaker both increased. On the other hand, judges didn't use the whole range of points. At Wake (100-point scale), points clustered around multiples of 5. At USC (30-point scale, finer granularity), points clustered around the old half-point scale. Are any other tournament directors planning on switching point scales? If you considered doing so, but decided not to, what stopped you? If you are switching, why did you choose the scale you chose?

N.B. Two other (smaller) tournaments changed speaker point granularity since '03-'04: the Weber State round robin used a granularity of 0.1 at their '08-'09 tournament, and the Northern Illinois tournament seems to have changed from the usual half-point granularity to a full-point granularity for their '05-'06 tournament.

I have put up some charts about CEDA Nats vs. NDT attendance. The summary:
• There was a sharp drop off in CEDA Nats attendance in 2009. Some coaches have suggested this was due to its placement in Pocatello, Idaho.
• From 2004-2008, 57% of CEDA Nats competitors debated almost exclusively in the open division at other tournaments the year in question.
CEDA Nats 2009 had a novice breakout, but we won't be able to see if or how attendance changed until Bruschke's 2008-2009 database dump comes out. Even then, any analysis will have to contend with the decreased total attendance.
• Fewer than half of NDT teams skip CEDA Nats. There is very little correlation between NDT success and skipping CEDA.

The raw numbers and the code used to generate them are available.
Starobinsky model of cosmic inflation

In phenomenology of cosmology, the Starobinsky model of cosmic inflation takes into account – and takes as the very source of the inflaton field – higher curvature corrections to the Einstein-Hilbert action of gravity, notably the term $R^2$ (square of the Ricci curvature). The Starobinsky model stands out among models of inflation as predicting a low value of the tensor-to-scalar ratio $r$; specifically, it predicts $r \sim \frac{12}{N^2}$, where $N$ is the number of $e$-foldings during inflation (see e.g. Kehagias-Dizgah-Riotto 13 (2.6)).

Observational support

Models of Starobinsky-type are favored by experimental results (PlanckCollaboration 13, BICEP2-Keck-Planck 15, PlanckCollaboration 15, BICEP3-Keck 18) which give a low upper bound on $r$, well below $0.1$ (whereas other models like chaotic inflation are disfavored by these values), see (PlanckCollaboration 13, page 12). With respect to this data, the Starobinsky model (or "$R^2$ inflation") is the model with the highest Bayesian evidence (Rachen, Feb 15, PlanckCollaboration 15XX, table 6 on p. 18), as it is right in the center of the likelihood peak, shown in dark blue in the plots of (PlanckCollaboration 13, figure 1, also Linde 14, figure 5), and at the same time has the lowest number of free parameters. This remains true with the data of (PlanckCollaboration 15), see (PlanckCollaboration 15 XIII, figure 22), and in the final analysis (PlanckCollaboration 18X, Fig 8), which gives the following quote:

"$R^2$ inflation has the strongest evidence among the models considered here. However, care must be taken not to overinterpret small differences in likelihood lacking statistical significance. The models closest to $R^2$ in terms of evidence are brane inflation and exponential inflation, which have one more parameter than $R^2$" (PlanckCollaboration 15XX, p.
18).

This picture is further confirmed by observations of the BICEP/Keck collaboration reported in BICEP-Keck 2021, whose additional data singles out the dark blue area in their Fig. 5. See also Ellis 13, Ketov 13, and Efstathiou 2019, 50:49 for a brief survey of Starobinsky inflation in relation to observation, and see Kehagias-Dizgah-Riotto 13 for more details. There it is argued that the other types of inflationary models which also reasonably fit the data are actually equivalent to the Starobinsky model during inflation.

Embedding into supergravity

Being concerned with pure gravity (the inflaton not being an extra matter field but part of the field of gravity), the Starobinsky model lends itself to embedding into supergravity (originally due to Cecotti 87, see e.g. Farakos-Kehagias-Riotto 13). Such embedding has been argued to improve the model further (highlighted e.g. in Ellis 13), for instance in graphics grabbed from Dalianis 16, p. 8. More concretely, in Hiraga-Hyakutake 18 a simple model of 11-dimensional supergravity with its $R^4$ higher curvature correction (see there) is considered and claimed to yield inflation with "graceful exit" and dynamical KK-compactification: graphics from Hiraga-Hyakutake 18, p.
8.
The Wheel: Bayesian model to select the right stocks and time the selling of puts and calls

The Wheel is a well-known strategy for selling naked puts and covered calls. It is called "picking up dimes in front of a steamroller". The reason The Wheel is described as such lies in the fact that this strategy gains a lot of small winnings that are often lost to one big loss. (If you are able to read Dutch, we have a free eBook that describes The Wheel strategy; to download it, fill in the form below.) The Bayesian model developed by Trading Behavior Management makes sure that this one big loss is avoided. This is achieved by two strategies:
1. Select the right stocks, making the probability of this one big loss extremely low.
2. Select the right time to sell naked puts or covered calls.

Our results with this tool are:

Dates                    | S&P 500 | Trading Behavior Management (as % of the capital requirements of the broker)
1-5-2022 till 1-5-2023   | +1%     | +30%
1-5-2023 till 10-10-2023 | +5%     | +18%

The tool looks as follows: The first four lines are the four indicators for the general market (S&P 500, DOW, Nasdaq and the DAX). A1 to A7 are stocks that have been selected by us, but which are anonymized in this picture. Tesla is added to give an example of a stock where the model shows that it is very unwise to use it for The Wheel. The first column, "Continue?", gives the probability that the current wave in the day chart will continue. A simplified version of Elliott wave analysis is used. As you can see, in this example the S&P 500 and the Nasdaq are likely to continue their wave #1 down, whereas the DOW and the DAX are less likely to continue their wave #1 down. Whether a wave is continuing or not is essential for timing the selling of naked puts and covered calls. The best moment to sell naked puts and covered calls is when a wave discontinues and a new wave is started.
Rather than buy low and sell high, the idea is to sell puts after a downturn and sell covered calls after an upturn. The second column is called "Good stock?". Based on a limited number of financial variables, this Bayesian model calculates the probability that a stock is suitable to use for The Wheel. The final three columns are a simplified form of Elliott wave counting for day, week and month charts. These values are used to calculate the likelihood of the current day wave continuing. To be clear: Elliott wave counting is always 100% right. Unfortunately, this is due to the fact that Elliott waves can be renamed after the fact. So Elliott wave counting is made right after the fact. That is why we don't use Elliott wave counting for trading, and only use wave counting to feed this Bayesian model (among other financial variables) to calculate the probability that a current wave continues or not.

If you want to know more about this tool, please contact us. Every so often we organize a meeting to explain The Wheel and demonstrate how the tool works. If you want to join us at this meeting, fill in the form below.

Logistical details:
• Date: see the form below.
• Time: see the form below.
• Venue: Breintraining coöperatie, Binderij 7-L, 1185 ZH Amstelveen
• Fee: free event.
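As an illustration of the kind of calculation a Bayesian model like the one above performs, here is a single Bayes-rule update for "the current wave continues" given one positive indicator; every number below is invented, since the tool's actual variables and priors are not described:

```python
def bayes_update(prior, p_pos_given_continue, p_pos_given_stop):
    # P(continue | positive indicator) by Bayes' rule
    evidence = (p_pos_given_continue * prior
                + p_pos_given_stop * (1.0 - prior))
    return p_pos_given_continue * prior / evidence

p = bayes_update(prior=0.5, p_pos_given_continue=0.7, p_pos_given_stop=0.4)
# a 50% prior moves to about 64% after this (made-up) indicator fires
```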
In the figure, ABCDEF is a regular hexagon. △BDF is drawn by joining the alternate vertices. Show that △BDF is equilateral.

Step-by-step explanation: Corresponding parts of congruent triangles are congruent, so we can demonstrate BD ≅ DF ≅ BF as follows.
AB ≅ BC ≅ CD ≅ DE ≅ EF ≅ FA . . . . definition of regular hexagon
∠A ≅ ∠C ≅ ∠E . . . . definition of regular hexagon
ΔFAB ≅ ΔBCD ≅ ΔDEF . . . . SAS congruence
FB ≅ BD ≅ DF . . . . CPCTC
ΔBDF is equilateral . . . . definition of equilateral triangle
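The claim can also be sanity-checked with coordinates: place the hexagon's vertices on the unit circle and compare the three distances (a numeric check, not a substitute for the proof):

```python
import math

# regular hexagon ABCDEF on the unit circle, vertices every 60 degrees
verts = [(math.cos(math.radians(60 * k)), math.sin(math.radians(60 * k)))
         for k in range(6)]
B, D, F = verts[1], verts[3], verts[5]  # the alternate vertices

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

sides = [dist(B, D), dist(D, F), dist(F, B)]
assert all(abs(s - sides[0]) < 1e-12 for s in sides)  # equilateral
```

Each side comes out as √3 times the circumradius, as expected for the triangle on alternate vertices.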
Analyzing Slope Stability Through the Shear Strength Reduction Method

Slope stability analysis is essential for ensuring the reliability of dam embankments and the safety of people in their proximity. By using a shear strength reduction method in the COMSOL Multiphysics® software, civil and geotechnical engineers can evaluate the stability of dam embankments to predict failure and prevent tragedy.

Why Analyze Dam Failure?

When a dam fails, the results can be devastating — even fatal. For instance, the Austin Dam, built in the late 1800s near Austin, Texas, was both expensive and difficult to construct. However, many citizens believed that the dam would attract business and supply power to the city. On April 7, 1900, heavy rainfall and flooding from the previous week caused the lake behind the dam to swell. The dam could not resist the force of the water, and it eventually cracked. Sections of the dam caved, releasing torrents of water and causing fatalities and severe damage to over 500 homes.

The damage resulting from the Austin dam failure one hour after collapse. Image in the public domain, via Wikimedia Commons.

In the aftermath of what would be known as the "Great Granite Dam Failure", the structural integrity of the dam was called into question. People surmised that the collapse of the dam was inevitable due to its suboptimal design and construction. However, accounting for the stability and reliability of a dam can begin long before it is even built. Slope stability analysis, for instance, can be used to predict the settlement, deformation, and slippage of soil in a dam embankment due to various loading and environmental conditions. There are numerous methods for conducting a slope stability analysis. Here, we discuss a technique for modeling this process with COMSOL Multiphysics and the add-on Geomechanics Module, using the Slope Stability in a Dam Embankment tutorial model from the Application Gallery.
Shear Strength Reduction and Factor of Safety

Stability refers to the ability of a slope to resist the forces that drive Earth's materials down the slope. The shear strength reduction (SSR) method is used to find the safety factor value of the slope at the point of failure, or the instability point. In the model discussed here, we conduct a slope stability analysis of a dam embankment using the SSR method. This model also uses a plane strain approximation to model the dam embankment in 2D, which is more computationally efficient than a 3D analysis.

The factor of safety (FOS) is defined as the ratio of the available shear strength of the soil to the shear strength required to maintain equilibrium along the failure surface. The FOS ratio demonstrates how much loading a structure (the dam, in this case) can withstand. In the context of slope stability, the FOS would ideally be a ratio that does not lead to sliding of the materials in the slope (the dam embankment, in this case). The FOS is not a measure of the reliability of the embankment, but rather a relative indication of the resistance to any driving force within the slope stability analysis.

If the FOS equals 1, the structure supports exactly the stress it is subjected to, and subjecting it to any higher stress (or load) will result in failure. For an FOS value of 2, the structure will fail at twice the working stress. If the FOS is less than 1, the structure is unstable.

Think of building a sand castle at the beach when you were a child. If you formed a pile of sand and then slowly placed your hand on it at an angle, the compression of the sand underneath your hand at a certain force would cause the sand to "slip" and translate toward the base of the slope. Now, picture digging a moat in the sand around a sand castle: if you dug deeper and deeper into the sand, the moat would eventually collapse due to the reduced strength of the slope.
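To make the strength-to-stress ratio concrete, here is a minimal sketch (not part of the original article) that evaluates the classical infinite-slope expression for the FOS of a dry, planar slope. The soil values are hypothetical placeholders.

```python
import math

def infinite_slope_fos(c, gamma, z, beta_deg, phi_deg):
    """Factor of safety of a dry, planar "infinite" slope:
    FOS = (available shear strength) / (driving shear stress)
        = (c + gamma*z*cos^2(beta)*tan(phi)) / (gamma*z*sin(beta)*cos(beta))
    c: cohesion (Pa), gamma: unit weight (N/m^3), z: slip-plane depth (m),
    beta: slope angle (deg), phi: friction angle (deg)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    resisting = c + gamma * z * math.cos(beta) ** 2 * math.tan(phi)
    return resisting / driving

# Hypothetical sandy slope: no cohesion, 35-degree friction, 30-degree slope.
# With c = 0 this reduces to tan(phi)/tan(beta), independent of depth.
print(infinite_slope_fos(c=0.0, gamma=18e3, z=2.0, beta_deg=30, phi_deg=35))
```

A value above 1 means the available strength exceeds the driving stress; adding cohesion raises the FOS.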
Slope soil behavior is represented by the following physics features:

• Darcy's law, which accounts for pore pressure and the flow of fluid through a porous medium
• The Mohr–Coulomb criterion, which describes the shear strength and failure of the soil

Including Darcy's law for the soil accounts for the pressure head in the embankment and also makes it possible to distinguish between saturated and unsaturated conditions. Then, by adding the Mohr–Coulomb criterion in the Solid Mechanics interface, you can determine the stability of the slope.

Mohr–Coulomb Criterion

The Mohr–Coulomb theory is a mathematical model that describes how materials, brittle materials specifically, respond to shear stress and normal stress. The Mohr–Coulomb criterion is a common failure criterion in geotechnical engineering, and it expresses a linear relationship between the normal and shear stresses at the point of failure. In the SSR method, the Mohr–Coulomb material parameters are functions of the FOS. With the SSR technique, the FOS affects both the cohesion and the angle of internal friction.

Cohesion describes how strongly a material sticks together. Think of packing sand into a mold for your sand castle: if the sand is wet or damp, it's less likely to fall apart when the mold is flipped over. The angle of internal friction describes the frictional shear resistance of the soil. If you pour sand onto one specific spot on a surface, the sand accumulates, but if you attempt the same task with a different item, such as marbles, it doesn't have the same result. Sand collects into a pile due to its higher angle of internal friction (right video), but marbles are perfectly round and will slip past one another to reach the surface onto which you are pouring them (left video).

Under the Mohr–Coulomb criterion, these factors specify the shear strength of the soil and can predict the likelihood of a dam embankment's slope to slip or hold itself together.
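A minimal sketch of how the trial FOS enters the material model, assuming the conventional SSR reduction in which the cohesion and the tangent of the friction angle are both divided by the trial FOS, plus a bisection on the trial value. Here `solves` is a stand-in for a full elastoplastic analysis; it is faked with the critical value 1.915 found for the dam embankment model discussed below, and the soil values are hypothetical.

```python
import math

def reduced_strength(c, phi_deg, fos):
    """Mohr-Coulomb parameters after strength reduction by a trial FOS:
        c_f = c / FOS,    tan(phi_f) = tan(phi) / FOS.
    Returns the reduced cohesion (Pa) and friction angle (degrees)."""
    c_f = c / fos
    phi_f = math.degrees(math.atan(math.tan(math.radians(phi_deg)) / fos))
    return c_f, phi_f

def solves(fos):
    """Stand-in for one elastoplastic analysis at a trial FOS. A real study
    would assign reduced_strength(...) to the material model and report
    whether the nonlinear solver converged; here it is faked with the
    critical value 1.915 reported for the dam embankment model."""
    return fos <= 1.915

def critical_fos(lo=1.0, hi=4.0, tol=1e-3):
    """Bisect for the largest trial FOS at which the analysis converges."""
    if not solves(lo):
        raise ValueError("slope is unstable even at FOS = %.3f" % lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if solves(mid):
            lo = mid
        else:
            hi = mid
    return lo

print(reduced_strength(20e3, 30.0, 1.5))  # hypothetical soil: c = 20 kPa, phi = 30 deg
print(round(critical_fos(), 3))
```

The bisection converges to the instability point regardless of how the convergence test is implemented, which is why the SSR method pairs naturally with any nonlinear finite element solver.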
Interpreting the Simulation Results

To find the point at which the dam embankment reaches instability, we can systematically run the model for increasing FOS values until it fails to converge. This point indicates when the slope is no longer stable; that is, we've identified its expected FOS.

A graph of the maximum displacement versus the factor of safety for the dam embankment.

Left: Pressure head in the dam embankment. Middle: Effective plastic strain prior to collapse. Right: Slip circle just before collapse.

Here, the elastoplastic analysis does not converge for FOS values over 1.915. As mentioned, the lowest value accepted for the FOS is 1, and a value of 2 would indicate that the structure fails at twice the working stress. With an FOS of 1.915, the available shear strength of the soil is almost twice what is needed to sustain the slope. At this point, the slope would collapse due to increased strain and a subsequent reduction of the shear strength. This collapse is caused by the localization of plastic strains into a shear band, which results in the formation of a slip circle.

Total displacement just before slope collapse.

In general, slope stability analysis can be used to evaluate the stability and safety of both manmade and naturally occurring dams and slopes. This type of analysis can be used to observe failure mechanisms due to loading conditions and call attention to other factors, such as vegetation and soil variability, which could affect naturally occurring slopes.

Next Steps

Try it yourself: Click the button below to access the tutorial model.

Further Reading

Check out these additional blog posts related to geomechanics:
Volume with Washer Method | AP Calculus AB/BC Class Notes | Fiveable

Imagine the region we are revolving is bounded by y = f(x) on the top and y = g(x) on the bottom. Once we revolve the region around the axis, the cross sections will be a disc with a circle cut out from the middle. We call this the washer method because the cross sections look like hardware washers.

With the disc method, we need to find the radius of the disc in order to calculate the area of the cross section. With the washer method, we need to find the inner radius from the bottom function and the outer radius from the top function in order to find the area of the cross section. If you're still confused, take a look at the examples below.

The washer method is a technique used to find the volume of a solid that is formed by revolving a region around the x- or y-axis. This method involves slicing the solid into many thin washers and then finding the volume of each washer. The total volume of the solid is found by adding up the volumes of all of these washers.

Here are the steps to use the washer method:

1. Identify the region that is being revolved to form the solid. This region should be defined by two functions, f(x) or g(y) and h(x) or k(y), and should be bounded by two lines, x = a and x = b or y = c and y = d. The two functions define the inner and outer radii of each washer.

2. Decide on the axis of revolution. If the region is being revolved around the x-axis, the width of each washer will be dx and the inner and outer radii will be f(x) and h(x), respectively. If the region is being revolved around the y-axis, the width of each washer will be dy and the inner and outer radii will be g(y) and k(y), respectively.

3. Find the volume of each washer by subtracting the volume of the smaller disk from the volume of the larger disk. The volume of each disk is found by multiplying π by the square of the radius and the width of the disk.
Finally, use the definite integral to find the total volume of the solid by integrating the function for the volume of each washer with respect to x or y.

Example 1: Consider a region defined by the functions f(x) = x^2 and h(x) = x + 1, revolved around the x-axis from x = 0 to x = 1. The width of each washer is dx and the inner and outer radii are f(x) = x^2 and h(x) = x + 1, respectively. Subtracting the smaller disk from the larger disk gives the washer volume π(h^2(x) - f^2(x)) dx. Since h^2(x) - f^2(x) = (x + 1)^2 - x^4 = x^2 + 2x + 1 - x^4, integrating from 0 to 1 gives π(1/3 + 1 + 1 - 1/5) = 32π/15.

Example 2: Consider a region defined by the functions g(y) = √(4 - y^2) and k(y) = √(9 - y^2), revolved around the y-axis from y = 0 to y = 2. The width of each washer is dy and the inner and outer radii are g(y) = √(4 - y^2) and k(y) = √(9 - y^2), respectively. The washer volume is π(k^2(y) - g^2(y)) dy = π((9 - y^2) - (4 - y^2)) dy = 5π dy. Integrating from y = 0 to y = 2 gives 10π.

Example 3: Consider a region defined by the functions f(x) = 2√x and h(x) = 2√x + 2, revolved around the x-axis from x = 0 to x = 1. The width of each washer is dx and the inner and outer radii are f(x) = 2√x and h(x) = 2√x + 2, respectively. The washer volume is π(h^2(x) - f^2(x)) dx. To find the total volume of the solid, we integrate this function from x = 0 to x = 1.
Since h^2(x) - f^2(x) = (2√x + 2)^2 - (2√x)^2 = 8√x + 4, the definite integral from 0 to 1 of π(8√x + 4) dx = π(16/3 + 4) = 28π/3.

Example 4: Consider a region defined by the functions g(y) = y^2 and k(y) = y^2 + 1, revolved around the y-axis from y = 0 to y = 1. The width of each washer is dy and the inner and outer radii are g(y) = y^2 and k(y) = y^2 + 1, respectively. The washer volume is π(k^2(y) - g^2(y)) dy = π((y^2 + 1)^2 - y^4) dy = π(2y^2 + 1) dy. Integrating from y = 0 to y = 1 gives π(2/3 + 1) = 5π/3.

Example 5: Consider a region defined by the functions f(x) = 2x and h(x) = 2x + 1, revolved around the x-axis from x = 0 to x = 1. The width of each washer is dx and the inner and outer radii are f(x) = 2x and h(x) = 2x + 1, respectively. The washer volume is π(h^2(x) - f^2(x)) dx = π((2x + 1)^2 - (2x)^2) dx = π(4x + 1) dx. Integrating from x = 0 to x = 1 gives π(2 + 1) = 3π.

The washer method is a useful tool for finding the volume of a solid formed by revolving a region around the x- or y-axis. The method involves slicing the solid into thin washers, finding the volume of each washer, and then adding up the volumes of all the washers to find the total volume of the solid. The method requires identifying the region being revolved, deciding on the axis of revolution, finding the volume of each washer, and using the definite integral to find the total volume of the solid.
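As a quick numeric cross-check of the worked examples, here is a short sketch that replaces the exact antiderivative with a midpoint Riemann sum; the function names are placeholders.

```python
import math

def washer_volume(outer, inner, a, b, n=100_000):
    """Approximate V = pi * integral_a^b (outer(t)^2 - inner(t)^2) dt
    with a midpoint Riemann sum (the final integration step, done numerically)."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        total += outer(t) ** 2 - inner(t) ** 2
    return math.pi * total * h

# Example 1: outer h(x) = x + 1, inner f(x) = x^2 on [0, 1].
print(washer_volume(lambda x: x + 1, lambda x: x * x, 0.0, 1.0))   # ~32*pi/15
# Example 2: outer k(y) = sqrt(9 - y^2), inner g(y) = sqrt(4 - y^2) on [0, 2].
print(washer_volume(lambda y: math.sqrt(9 - y * y),
                    lambda y: math.sqrt(4 - y * y), 0.0, 2.0))     # ~10*pi
```

Swapping in the other example functions is a good way to double-check your own integration by hand.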
Effect of rolling process parameters on stability of rolling mill vibration with nonlinear friction

Friction-induced vibration is a typical self-excited phenomenon in the rolling process. Given its important industrial relevance, a rolling mill vertical-torsional-horizontal coupled vibration model that accounts for nonlinear friction has been established by coupling the dynamic rolling process model and the rolling mill structural model. Based on this model, the system stability domain is determined according to the Hurwitz algebraic criterion. Subsequently, the Hopf bifurcation types at different bifurcation points are judged. Finally, the influences of the rolling process parameters on the system stability domain are analyzed in detail. The results show that the critical boundaries of the vertical, horizontal and torsional vibration modals move as the rolling process parameters change, and the system stability domain changes simultaneously. Among the parameters, the reduction ratio has the most significant effect on the stability of the system. When rolling thin strip, the system stability domain may be enclosed only by the critical boundaries of the vertical and torsional vibration modals; in that case, system instability induced by the horizontal vibration modal would not occur. The study is helpful for planning a reasonable rolling process to reduce the possibility of vibration, as well as for selecting optimal rolling process parameters when designing a controller to suppress rolling mill vibration.

1. Introduction

Mill vibration is considered to be the main factor restricting the productivity of the rolling mill. Owing to its widespread occurrence and complexity, it has become a research focus and a technical challenge around the world. Yarita et al. [1] and Tlusty et al. [2] were the first to study rolling mill vibration.
Their achievements in theoretical modeling and vibration mechanisms laid the foundation for later research. Since then, scholars have carried out a series of further studies on mill vibration and achieved abundant results. Current research generally considers mill vibration to be a typical kind of self-excited vibration, the consequence of interactions between the system structure and the rolling process [3]. This interaction can be represented by the closed loop shown in Fig. 1. The dynamic forces generated in the rolling process deflect the structure of the rolling mill and lead to variations of the roll gap and the rolling speed. These, in turn, result in further variations of the rolling forces. Therefore, simplifying the rolling process effectively and modeling the mill structure reasonably are the key problems in studying rolling mill vibration.

Fig. 1. Coupling relationship of the structure model and the rolling process model

For the rolling process model, Yun et al. [4] and Hu et al. [5, 6] carried out systematic research on its modeling. Based on the Tlusty model [2], Yun et al. [4] presented a new dynamic rolling process model in which the strip strain-hardening effect and the metal flow equation under vibration were taken into consideration. On this basis, Hu et al. [5] further modified the metal flow equation and constructed a more accurate dynamic rolling process model. For the rolling mill structure model, different models have been presented according to different research focuses and assumptions. The most typical structure models include vertical structure models (one degree of freedom [7], two degrees of freedom [7, 8] and four degrees of freedom [1]) and torsional structure models (single drive and twin drives) [9]. In actual production, vibrations of the high-speed rolling mill mostly appear as the coupling of multiple vibration types.
Taking a two-high rolling mill as the object, Swiatoniowski [10] studied the interaction between the plastic deformation process and the rolling mill vibration, and constructed a typical vertical-torsional coupled structural model. Through experiments, Paton et al. [11] found that rolls can vibrate not only in the vertical direction, but also in the horizontal direction. Yan et al. [12] studied the coupling characteristic of torsional vibration and vertical vibration by finite-element analysis. Furthermore, many experiments and theoretical studies have shown that the lubrication condition in the roll bite is one of the most important factors affecting rolling mill vibration. Yarita et al. [1] pointed out that lubrication defects may cause vibration, and that better lubrication conditions can suppress vibration effectively. Based on the nonlinear friction model proposed by Sims and Arthur [13], Shi et al. [14] studied the stability of the rolling mill main drive system. Vladimir et al. [15] studied the vibration of a hot rolling mill with consideration of the stick-slip nonlinear friction model [16], and indicated that the frictional conditions along the contact arc were indeed the principal cause of the vibration in that rolling mill. However, in existing dynamic rolling process models, the friction coefficient was usually taken as a constant, which cannot fully capture the complex friction characteristics of the real system. Therefore, it is necessary to study rolling mill multiple-modal-coupling vibration based on a dynamic rolling process model with nonlinear friction considered. Reference [17] constructed a rolling mill vertical-torsional-horizontal coupled dynamic model with consideration of nonlinear friction. In this paper, a brief introduction of this mathematical model is given at the beginning. On this basis, the system stability domain is determined and the Hopf bifurcation types at different bifurcation points are judged.
Then, the changes of the system stability domain under different rolling process parameters are discussed in detail, and a mean relative sensitivity factor is defined to compare the effects of the different parameters. The results can provide a theoretical basis for formulating a reasonable rolling schedule, as well as for drawing up an effective control strategy.

2. Mathematical model

As Fig. 1 shows, a rolling mill vibration model arises naturally from the interactions between the rolling mill structure and the rolling process. Therefore, in this section, dynamic models of the rolling mill structure and the rolling process are introduced respectively, and the mathematical model is then constructed by coupling these two dynamic models.

2.1. Dynamic model of rolling mill structure

The vertical-torsional-horizontal coupled dynamic model of the rolling mill structure is illustrated in Fig. 2. In this structure model, the rolling mill is assumed to be symmetrical with respect to the center plane of the strip, and the vertical, horizontal and torsional subsystems are all simplified as single-degree-of-freedom systems. Thus, the differential equations can be written as:

$$\left\{\begin{array}{l}
m_1\ddot{x}_c + c_1\dot{x}_c + k_1 x_c = dF_x,\\
m_2\ddot{y}_c + c_2\dot{y}_c + k_2 y_c = dF_y,\\
J_M\ddot{\theta}_M + c_t\dot{\theta}_M + k_t\theta_M = dM,
\end{array}\right. \qquad (1)$$

where $dF_x$ and $dF_y$ are the fluctuations of the forces acting on the rolls in the $x$ and $y$ directions, and $dM$ is the fluctuation of the rolling torque. The expressions of these three dynamic forces are obtained from the dynamic model of the rolling process.

Fig. 2. Simplified vertical-torsional-horizontal coupling structure model

2.2. Dynamic model of rolling process with nonlinear friction

For cold rolling, when the rolling speed is greater than 0.25 m·s^-1, the friction coefficient along the contact arc can be approximately expressed as [13]:

$$\mu = a\exp(-bv_r + c), \qquad (2)$$

where $a$, $b$ and $c$ are constants related to the lubricating oil viscosity, the lubricating oil concentration and the lubricating state of the system; the values of $a$, $b$ and $c$ are all greater than zero. $v_r$ is the work roll peripheral velocity. In this paper, the work roll is allowed to vibrate in the vertical, horizontal and torsional directions, and both the horizontal vibration and the torsional vibration affect the peripheral velocity of the work roll. Therefore, when vibration occurs, the expression of the work roll peripheral velocity is rewritten as $v_r = \bar{v}_r + \dot{\theta}_M R' + \dot{x}_c$, where $\bar{v}_r$ is the work roll peripheral velocity in the steady state.
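As a numeric illustration of Eq. (2) and of the quadratic expansion about the steady speed that the derivation uses next, consider the sketch below. The constants $a$, $b$, $c$ and the steady speed are illustrative placeholders, not values identified in the paper, and `dv` stands for the vibratory speed perturbation $\dot{\theta}_M R' + \dot{x}_c$.

```python
import math

# Illustrative constants for mu = a*exp(-b*v_r + c); placeholders only.
a, b, c = 0.1, 0.5, 0.0
v_bar = 18.0  # steady work roll peripheral velocity, m/s

def mu(v_r):
    """Exponential speed-dependent friction coefficient, Eq. (2)."""
    return a * math.exp(-b * v_r + c)

mu0 = mu(v_bar)  # steady friction coefficient

def mu_quadratic(dv):
    """Second-order expansion about v_bar: mu0*(1 - b*dv + (b*dv)^2/2),
    where dv is the vibratory speed perturbation."""
    return mu0 * (1.0 - b * dv + 0.5 * (b * dv) ** 2)

for dv in (0.0, 0.05, 0.1):
    print(dv, mu(v_bar + dv), mu_quadratic(dv))
```

For small perturbations the truncation error is of third order in $b\,dv$, which is why the quadratic expansion suffices in the vibration model.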
For convenience, the friction coefficient $\mu$ near the steady roll peripheral velocity $\bar{v}_r$ is expanded in a Taylor series:

$$\begin{array}{rl}
\mu &= \mu(\bar{v}_r) + \mu'(\bar{v}_r)(v_r - \bar{v}_r) + \dfrac{\mu''(\bar{v}_r)}{2}(v_r - \bar{v}_r)^2\\[4pt]
&= a e^{-b\bar{v}_r + c}\left[1 - b\left(\dot{\theta}_M R' + \dot{x}_c\right) + \dfrac{1}{2}b^2\left(\dot{\theta}_M R' + \dot{x}_c\right)^2\right]\\[4pt]
&= \mu_0\left[1 - b\left(\dot{\theta}_M R' + \dot{x}_c\right) + \dfrac{1}{2}b^2\left(\dot{\theta}_M R' + \dot{x}_c\right)^2\right],
\end{array} \qquad (3)$$

where $\mu_0 = a\exp(-b\bar{v}_r + c)$ is the steady friction coefficient at the work roll peripheral velocity $\bar{v}_r$.

Using Eq. (3) to represent the friction characteristic along the contact arc, the detailed derivation of the dynamic rolling process model with consideration of nonlinear friction is given in Appendix A1 [17].

2.3. Dynamical equation of rolling mill vertical-torsional-horizontal coupled vibration

When constructing the dynamical equation of the rolling mill vibration, the inter-stand tension effect must be considered. As the vibration characteristic of a single-stand rolling mill is the focus of this study, the variations of both the exit velocity of the upstream stand and the entry velocity of the downstream stand are assumed to be zero. Then the tension variations at entry and exit can be obtained according to Hooke's law.
That is:

$$\left\{\begin{array}{l}
\dfrac{d(d\sigma_0)}{dt} = \dfrac{E_1}{L_0}\,dv_0,\\[6pt]
\dfrac{d(d\sigma_1)}{dt} = -\dfrac{E_1}{L_1}\,dv_1.
\end{array}\right. \qquad (4)$$

Substituting $(dF_x, dF_y, dM, dv_0, dv_1)$ from Eq. (A11) into Eq. (1) and Eq. (4), the dynamical equation of the rolling mill vertical-torsional-horizontal coupled vibration can be expressed as:

$$\left\{\begin{array}{l}
\dot{x}_1 = x_2,\\
\dot{x}_2 = \dfrac{1}{m_1}\left[-k_1 x_1 - c_1 x_2 + a_{F_x,y_c}x_3 + a_{F_x,\sigma_0}x_7 + a_{F_x,\sigma_1}x_8 + a_{F_x,h_0}\,dh_0\right],\\
\dot{x}_3 = x_4,\\
\dot{x}_4 = \dfrac{1}{m_2}\left[a_{F_y,\dot{x}_c}x_2 + \left(a_{F_y,y_c} - k_2\right)x_3 + \left(a_{F_y,\dot{y}_c} - c_2\right)x_4 + a_{F_y,\dot{\theta}_M}x_6 + a_{F_y,\sigma_0}x_7 + a_{F_y,\sigma_1}x_8 + a_{F_y,\dot{x}_c^2}x_2^2 + a_{F_y,\dot{\theta}_M^2}x_6^2 + a_{F_y,\dot{\theta}_M\dot{x}_c}x_2 x_6 + a_{F_y,h_0}\,dh_0\right],\\
\dot{x}_5 = x_6,\\
\dot{x}_6 = \dfrac{1}{J_M}\left[a_{M,\dot{x}_c}x_2 + a_{M,y_c}x_3 + a_{M,\dot{y}_c}x_4 - k_t x_5 + \left(a_{M,\dot{\theta}_M} - c_t\right)x_6 + a_{M,\sigma_0}x_7 + a_{M,\sigma_1}x_8 + a_{M,\dot{x}_c^2}x_2^2 + a_{M,\dot{\theta}_M^2}x_6^2 + a_{M,\dot{\theta}_M\dot{x}_c}x_2 x_6 + a_{M,h_0}\,dh_0\right],\\
\dot{x}_7 = \dfrac{E_1}{L_0}\left[a_{v_0,\dot{x}_c}x_2 + a_{v_0,y_c}x_3 + a_{v_0,\dot{y}_c}x_4 + a_{v_0,\dot{\theta}_M}x_6 + a_{v_0,\sigma_0}x_7 + a_{v_0,\sigma_1}x_8 + a_{v_0,\dot{x}_c^2}x_2^2 + a_{v_0,\dot{\theta}_M^2}x_6^2 + a_{v_0,\dot{\theta}_M\dot{x}_c}x_2 x_6 + a_{v_0,h_0}\,dh_0\right],\\
\dot{x}_8 = -\dfrac{E_1}{L_1}\left[a_{v_1,\dot{x}_c}x_2 + a_{v_1,y_c}x_3 + a_{v_1,\dot{y}_c}x_4 + a_{v_1,\dot{\theta}_M}x_6 + a_{v_1,\sigma_0}x_7 + a_{v_1,\sigma_1}x_8 + a_{v_1,\dot{x}_c^2}x_2^2 + a_{v_1,\dot{\theta}_M^2}x_6^2 + a_{v_1,\dot{\theta}_M\dot{x}_c}x_2 x_6 + a_{v_1,h_0}\,dh_0\right],
\end{array}\right. \qquad (5)$$

where $X = (x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8)^T = (x_c, \dot{x}_c, y_c, \dot{y}_c, \theta_M, \dot{\theta}_M, d\sigma_0, d\sigma_1)^T$.

3. System stability analysis

3.1. Hurwitz algebraic criterion

The Hurwitz algebraic criterion is an important method by which bifurcation points can be calculated from an algebraic equation. In particular, it can be effectively applied to analyze the Hopf bifurcation of high-dimensional and complicated nonlinear systems [18]. In order to study the effect of the nonlinear friction on the system stability, the parameter $b$ is selected as the bifurcation parameter. Then Eq. (5) is a function of $X$ and $b$: $\dot{X} = f(X, b)$.
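Before forming the Jacobian, note that the Hurwitz criterion itself is easy to evaluate numerically once the characteristic polynomial coefficients $p_i$ are in hand. The sketch below is independent of the paper's specific system; a stable cubic serves as a sanity check.

```python
def det(M):
    """Determinant via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    n = len(M)
    d = 1.0
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        if abs(M[piv][k]) == 0.0:
            return 0.0
        if piv != k:
            M[k], M[piv] = M[piv], M[k]
            d = -d
        d *= M[k][k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for j in range(k, n):
                M[r][j] -= f * M[k][j]
    return d

def hurwitz_determinants(p):
    """Hurwitz determinants Delta_1..Delta_n for the polynomial
    lambda^n + p[0]*lambda^(n-1) + ... + p[n-1].
    The (i, j) entry of the Hurwitz matrix is p_(2i-j) (1-based indices),
    with p_0 = 1 and p_k = 0 outside 0..n."""
    n = len(p)
    coeff = [1.0] + [float(v) for v in p]
    pk = lambda k: coeff[k] if 0 <= k <= n else 0.0
    H = [[pk(2 * (i + 1) - (j + 1)) for j in range(n)] for i in range(n)]
    return [det([row[:k] for row in H[:k]]) for k in range(1, n + 1)]

# Sanity check: (l + 1)(l + 2)(l + 3) = l^3 + 6l^2 + 11l + 6 is Hurwitz
# stable, so all three determinants must be positive.
print(hurwitz_determinants([6.0, 11.0, 6.0]))
```

For the eighth-order system of this paper, the same routine applied to $p_1,\dots,p_8$ gives the quantities $\Delta_1,\dots,\Delta_7$ needed in the bifurcation conditions.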
As the coordinate origin is the equilibrium point of the system, the Jacobian matrix at this point is:

$$A(0,b) = \left.\frac{\partial f(X,b)}{\partial X}\right|_{X_0 = 0} =
\begin{bmatrix}
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
\frac{-k_1}{m_1} & \frac{-c_1}{m_1} & \frac{a_{F_x,y_c}}{m_1} & 0 & 0 & 0 & \frac{a_{F_x,\sigma_0}}{m_1} & \frac{a_{F_x,\sigma_1}}{m_1}\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & \frac{a_{F_y,\dot{x}_c}}{m_2} & \frac{a_{F_y,y_c}-k_2}{m_2} & \frac{a_{F_y,\dot{y}_c}-c_2}{m_2} & 0 & \frac{a_{F_y,\dot{\theta}_M}}{m_2} & \frac{a_{F_y,\sigma_0}}{m_2} & \frac{a_{F_y,\sigma_1}}{m_2}\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & \frac{a_{M,\dot{x}_c}}{J_M} & \frac{a_{M,y_c}}{J_M} & \frac{a_{M,\dot{y}_c}}{J_M} & \frac{-k_t}{J_M} & \frac{a_{M,\dot{\theta}_M}-c_t}{J_M} & \frac{a_{M,\sigma_0}}{J_M} & \frac{a_{M,\sigma_1}}{J_M}\\
0 & \frac{E_1 a_{v_0,\dot{x}_c}}{L_0} & \frac{E_1 a_{v_0,y_c}}{L_0} & \frac{E_1 a_{v_0,\dot{y}_c}}{L_0} & 0 & \frac{E_1 a_{v_0,\dot{\theta}_M}}{L_0} & \frac{E_1 a_{v_0,\sigma_0}}{L_0} & \frac{E_1 a_{v_0,\sigma_1}}{L_0}\\
0 & \frac{-E_1 a_{v_1,\dot{x}_c}}{L_1} & \frac{-E_1 a_{v_1,y_c}}{L_1} & \frac{-E_1 a_{v_1,\dot{y}_c}}{L_1} & 0 & \frac{-E_1 a_{v_1,\dot{\theta}_M}}{L_1} & \frac{-E_1 a_{v_1,\sigma_0}}{L_1} & \frac{-E_1 a_{v_1,\sigma_1}}{L_1}
\end{bmatrix}. \qquad (6)$$

The characteristic equation of the Jacobian matrix is obtained from the determinant $|A(0,b) - \lambda I| = 0$, where $I$ is an eight-order unit matrix:

$$\lambda^8 + p_1\lambda^7 + p_2\lambda^6 + p_3\lambda^5 + p_4\lambda^4 + p_5\lambda^3 + p_6\lambda^2 + p_7\lambda + p_8 = 0. \qquad (7)$$

A series of Hurwitz determinants can be constructed as follows:

$$\Delta_i = \begin{vmatrix}
p_1 & 1 & 0 & 0 & \cdots & 0\\
p_3 & p_2 & p_1 & 1 & \cdots & 0\\
p_5 & p_4 & p_3 & p_2 & \cdots & 0\\
\vdots & \vdots & \vdots & \vdots & & \vdots\\
p_{2i-1} & p_{2i-2} & p_{2i-3} & p_{2i-4} & \cdots & p_i
\end{vmatrix}, \qquad (8)$$

where $p_i = 0$ when $i > 8$. For a system Hopf bifurcation to occur at the point $b^*$, the necessary and sufficient conditions, judged by the Hurwitz algebraic criterion, are:

$$\left\{\begin{array}{l}
p_i(b^*) > 0, \quad (i = 1, 2, \dots, 8),\\
\Delta_7(b^*) = 0,\\
\Delta_i(b^*) > 0, \quad (i = 5, 3, 1),\\
\left.\dfrac{d\Delta_7(b)}{db}\right|_{b = b^*} \ne 0.
\end{array}\right. \qquad (9)$$

3.2. Bifurcation parameter calculation and stability analysis

The simulation parameters used in this paper are from the 4th stand of a 2030 five-stand tandem cold rolling mill [19] and are listed in Table 1. Using these parameters, the distribution of the Hopf bifurcation parameter with different steady rolling speeds is illustrated in Fig. 3. For this rolling mill, the frequencies of the vertical, torsional and horizontal vibration modals are about 133 Hz, 12.5 Hz and 52 Hz, respectively. From Fig. 3, it is observed that the system Hopf bifurcation line is spliced together from Line 1, Line 2 and Line 3. Taking one point on each of these three lines at random (point $A$, point $B$ and point $C$ in Fig.
3), their eigenvalues are listed in Table 2. Taking point $A$ as an example: when $b = 0.7982$, a pair of pure imaginary eigenvalues appears, $\lambda_{3,4} = (-0.0000 \pm 3.2682i)\times 10^2$. This pair of conjugate eigenvalues represents the characteristic of the horizontal vibration modal, so Line 1 is the critical boundary of the horizontal vibration modal. Similarly, Line 2 is the critical boundary of the torsional vibration modal and Line 3 is the critical boundary of the vertical vibration modal. The system stability domain is enclosed by the coordinate axes and the system Hopf bifurcation line.

Table 1. Parameters from the 4th stand of a 2030 five-stand tandem cold mill

$m_1$ = 9200 kg; $k_1$ = 1×10^9 N·m^-1; $c_1$ = 5×10^4 N·s·m^-1; $\bar{h}_0$ = 0.789 mm; $\bar{h}_c$ = 0.577 mm; $A$ = 810 MPa; $B$ = 1.206 m; $R$ = 0.3 m
$m_2$ = 203200 kg; $k_2$ = 6.9×10^10 N·m^-1; $c_2$ = 1.64×10^7 N·s·m^-1; $\bar{\sigma}_0$ = 180 MPa; $\bar{\sigma}_1$ = 189 MPa; $n$ = 0.29; $H$ = 2 mm; $R'$ = 0.5335 m
$J_M$ = 1381 kg·m^2; $k_t$ = 7.9×10^6 N·m·rad^-1; $c_t$ = 4178 N·m·s·rad^-1; $L_0$ = 4.75 m; $L_1$ = 4.75 m; $E_1$ = 210 GPa; $\mu_0$ = 0.04

Table 2. Eigenvalues about the critical parameters (all eigenvalues ×10^2)

Point | $\bar{v}_r$ (m·s^-1) | $b$ | $\lambda_{1,2}$ | $\lambda_{3,4}$ | $\lambda_{5,6}$ | $\lambda_7$ | $\lambda_8$
A+ | 18.0000 | 0.8200 | -0.1815±8.3787i | 0.0008±3.2674i | -0.0339±0.7868i | -2.6399 | -0.9583
A  | 18.0000 | 0.7982 | -0.1816±8.3783i | -0.0000±3.2682i | -0.0355±0.7870i | -2.6347 | -0.9593
A- | 18.0000 | 0.7800 | -0.1816±8.3779i | -0.0007±3.2689i | -0.0369±0.7871i | -2.6303 | -0.9601
B+ | 6.0000 | 2.2700 | -1.8620±8.0343i | -0.0146±3.2511i | 0.0008±0.8035i | -0.9738 | -0.2893
B  | 6.0000 | 2.2541 | -1.8622±8.0342i | -0.0147±3.2515i | -0.0000±0.8040i | -0.9719 | -0.2894
B- | 6.0000 | 2.2400 | -1.8624±8.0341i | -0.0148±3.2518i | -0.0007±0.8046i | -0.9703 | -0.2895
C+ | 20.6945 | 2.2885 | 0.0009±8.4145i | -0.0182±3.2876i | -0.0676±0.7824i | -2.8762 | -1.1455
C  | 20.6800 | 2.2885 | -0.0000±8.4143i | -0.0182±3.2876i | -0.0676±0.7824i | -2.8742 | -1.1447
C- | 20.6700 | 2.2885 | -0.0006±8.4141i | -0.0182±3.2876i | -0.0677±0.7825i | -2.8729 | -1.1441

In Fig. 3, there are three cross points on the curves; their abscissas are 10.9 m·s^-1, 20.684 m·s^-1 and 20.675 m·s^-1, respectively. These three cross points divide the system unstable domain into three regions. In the different regions, the system Hopf bifurcation induced by the variation of the friction coefficient causes system instability with different vibration modals.

Fig. 3. The distribution of the bifurcation parameter $b^*$ with the steady rolling speed $\bar{v}_r$

3.3. Hopf bifurcation type judgment

A Hopf bifurcation can be classified as super-critical or sub-critical. As the two types of Hopf bifurcation have different vibration characteristics, it is important to determine the bifurcation type at each bifurcation point. Reference [20] defined a coefficient $\eta$ to judge the Hopf bifurcation type at different bifurcation points:

$$\eta = \mathrm{Re}\left(-U f_{xxx}VVV^{*} + 2U f_{xx}V\,A^{-1}(0,b)\,f_{xx}VV^{*} + U f_{xx}V^{*}\left(A(0,b) - 2i\omega_0 I\right)^{-1} f_{xx}VV\right), \qquad (10)$$

where, for example,

$$f_{xxx}VVV^{*} = \left(\frac{\partial}{\partial X}\left(\left(\frac{\partial}{\partial X}\left(\left(\frac{\partial f(X,b)}{\partial X}\right)V\right)\right)V\right)\right)V^{*},$$

and the lower-order directional derivatives $f_{xx}VV$ and $f_{xx}VV^{*}$ are defined analogously.
If $\eta >0$, the system Hopf bifurcation is super-critical; when $\eta <0$, it is sub-critical. Substituting the Hopf bifurcation points $A$, $B$ and $C$ into Eq. (10), the coefficient $\eta$ of each point is calculated. At point $A$, $\eta = 3.72\times 10^{-18} > 0$, which means the Hopf bifurcation at this point is super-critical. At point $B$, $\eta = 5.39\times 10^{-17} > 0$, so the Hopf bifurcation at point $B$ is super-critical too. At point $C$, $\eta = -3.01\times 10^{-25} < 0$, so the Hopf bifurcation at this point is sub-critical.

Fig. 4. Dynamic response and phase diagram of the system for $\bar{v}_r$ = 18 m·s^-1 and $b$ = 0.82

Fig. 5. Dynamic response and phase diagram of the system for $\bar{v}_r$ = 6 m·s^-1 and $b$ = 2.27

The system motions and three-dimensional phase diagrams over the period 190-200 s corresponding to points $A$+, $B$+ and $C$+ are shown in Figs. 4-6. It can be seen that the system motion eventually forms a stable limit cycle when the Hopf bifurcation occurs at point $A$ or point $B$. But at point $C$, the vibration amplitude diverges over time when the Hopf bifurcation occurs, and the system collapses within a short time. The simulation results shown in Figs. 4-6 are consistent with the results calculated by Eq. (10). Through the analysis above, it can be seen that the system Hopf bifurcation curve is spliced together from the critical boundaries of the torsional, horizontal and vertical vibration modals. For different Hopf bifurcation points on different critical boundaries, the Hopf bifurcation types may be different. Therefore, it is important and meaningful to study how these three critical lines move as the rolling process parameters change.
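Table 2 brackets each critical point by a sign change in the real part of one conjugate eigenvalue pair (compare rows $A$-, $A$, $A$+). As an illustration only (the toy model below is a hypothetical single-mode oscillator whose effective damping falls linearly with the friction parameter $b$, not the paper's eighth-order Jacobian $A(0,b)$), the same bracketing can be turned into a bisection search for the Hopf point:

```python
import cmath

def eig_real_part(b, k=1.068e5, c0=0.05):
    # Toy oscillator x'' + (c0 - b) x' + k x = 0: the friction parameter b
    # acts as negative damping (an illustrative assumption, not the paper's
    # model). The eigenvalues form a conjugate pair; return one real part.
    c = c0 - b
    lam = (-c + cmath.sqrt(c * c - 4 * k)) / 2
    return lam.real

def hopf_boundary(lo, hi, tol=1e-12):
    # Bisect for b* where the real part of the pair crosses zero, i.e.
    # where the pair becomes purely imaginary (the Hopf condition).
    assert eig_real_part(lo) < 0 < eig_real_part(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if eig_real_part(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For this toy model the real part of the pair is $(b-c_0)/2$, so the search converges to the exact boundary $b^{*}=c_0=0.05$; on the full model the same scheme would be driven by the Jacobian's eigenvalues.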
And these laws can provide technical support for formulating a reasonable rolling process plan.

Fig. 6. Dynamic response and phase diagram of the system for $\bar{v}_r$ = 20.6945 m·s^-1 and $b$ = 2.2885

4. Effect of rolling process parameters on system stability domain

4.1. Tensions at entry and exit

The influences of the tensions at entry and exit on the system stability domain are depicted in Fig. 7 and Fig. 8. Fig. 7 is produced with the entry tension equal to 0.75${\sigma }_{0}$, ${\sigma }_{0}$ and 1.25${\sigma }_{0}$, respectively, with the other parameters unchanged; Fig. 8 is produced with the exit tension equal to 0.75${\sigma }_{1}$, ${\sigma }_{1}$ and 1.25${\sigma }_{1}$, respectively, with the other parameters unchanged. As shown in Fig. 7 and Fig. 8, with the increase of the tensions at entry and exit, the critical boundary of the vertical vibration modal moves left gradually, and the critical speed decreases accordingly. The reason is that larger tensions correspond to a smaller rolling stiffness (${a}_{Fy,yc}$), so the stability of the vertical vibration modal is reduced. For the torsional vibration modal, the critical boundary moves down as the entry tension increases, and moves up as the exit tension increases. This is mainly because a larger exit tension and a smaller entry tension mean a bigger forward slip zone; the area difference between the backward slip zone and the forward slip zone then becomes smaller, and the rolling torque decreases accordingly. The fluctuation of the rolling torque is therefore relatively small for the same variations of the roll gap, and the torsional vibration modal is more stable. For the horizontal vibration modal, the stability trend is the same as for the torsional vibration modal. The reason is that a larger exit tension and a smaller entry tension mean the force acting on the rolls in the $x$ direction is smaller.
Accordingly, the fluctuation of this force is smaller under the same disturbance, and the horizontal vibration modal is more stable.

Fig. 7. Influence of entry tension on system stability domain

Fig. 8. Influence of exit tension on system stability domain

Fig. 9. Influence of steady friction coefficient on system stability domain

4.2. Steady friction coefficient

Fig. 9 shows the influence of the steady friction coefficient on the system stability domain. As Fig. 9 displays, the stability of the vertical subsystem is strengthened by a higher steady friction coefficient, because the friction in the roll gap acts as positive damping in the vertical subsystem. For the torsional vibration modal, as the steady friction coefficient increases, the modal is less stable at very low speeds and more stable at medium and high speeds. On the one hand, the rolling torque increases with the steady friction coefficient, so the fluctuation of the rolling torque is greater under the same disturbance. On the other hand, a larger steady friction coefficient means a bigger forward slip zone, and the rolling torque decreases as the forward slip zone grows, so the fluctuation of the rolling torque is smaller under the same disturbance. These two opposite trends dominate alternately in different rolling speed ranges. In the low-speed range, the variation of the forward slip zone is relatively small and the first mechanism dominates, so the modal is less stable with a larger steady friction coefficient. As the rolling speed continues to increase, the second mechanism gradually becomes dominant, and the modal becomes more stable. For the horizontal vibration modal, the fluctuation of the neutral point decreases as the steady friction coefficient increases; accordingly, the variations of the strip velocities as well as the variations of the tensions are smaller.
So the fluctuation of the force acting on the rolls in the $x$ direction is relatively smaller, and the modal is more stable.

4.3. Reduction ratio and strip thickness

Because the reduction ratio and the strip thicknesses at entry and exit are functionally related, the influences of the reduction ratio and the strip thicknesses on the system stability domain are discussed in three different cases. The changes of the system stability domain with different reduction ratios are plotted in Fig. 10 and Fig. 11: Fig. 10 considers the condition of invariable entry thickness, and Fig. 11 considers the condition of invariable exit thickness. In Fig. 12, the influence of the entry thickness on the system stability domain is displayed, with the reduction ratio kept unchanged. As shown in Fig. 10 and Fig. 11, with the increase of the reduction ratio, all the critical boundaries move left or down, which means the stabilities of all the modals are weakened. The reason is that a larger reduction ratio corresponds to a greater rolling torque and greater forces acting on the rolls in the $x$ and $y$ directions. So the fluctuations of these three forces increase for the same variations of the roll gap, and vibration is triggered more easily.

Fig. 10. Influence of reduction ratio on system stability domain with invariable entry thickness

Fig. 11. Influence of reduction ratio on system stability domain with invariable exit thickness

In Fig. 12, on the premise of a constant reduction ratio, the strip exit thickness increases together with the strip entry thickness. For the vertical vibration modal, Fig. 12 illustrates that the thicker the strip is, the more stable the modal will be. This is because a thicker strip corresponds to relatively smaller volume variations of the strip under the same variations of the roll gap. So the fluctuation of the force acting on the rolls in the $y$ direction decreases accordingly, and the modal is more stable.
But for the torsional and horizontal vibration modals, the stability trends are opposite to the vertical vibration modal. For the torsional vibration modal, the reason is that an increase of the entry thickness means an increasing length of the contact arc, so the area difference between the backward slip zone and the forward slip zone becomes bigger, and the rolling torque is accordingly larger. Therefore, the fluctuation of the rolling torque is relatively large for the same variations of the roll gap, and the modal stability is weakened. For the horizontal vibration modal, the increase of the strip thicknesses at entry and exit means an increase of the force acting on the rolls in the $x$ direction. Thus, the fluctuation of this force increases for the same variations of the roll gap, so the modal is less stable.

Through further observation of Fig. 12, the cross point of the torsional critical boundary and the horizontal critical boundary moves to the right drastically as the strip entry thickness decreases. It can therefore be inferred that the cross point may overflow or disappear if the entry thickness continues to decrease. In other words, when rolling thin strip, the whole system would not lose stability through the horizontal vibration modal. The subgraph of Fig. 12 shows the situation where the cross point overflows, when the strip entry thickness is decreased to 0.4${h}_{0}$.

Fig. 12. Influence of entry thickness on system stability domain with invariable reduction ratio

5. Comparison of the effects

As the influences of rolling process parameters on the system stability domain are nonlinear, it is difficult to measure the effect of each parameter. In this section, a mean relative sensitivity factor is defined using the data in Fig. 7-Fig. 12.
For the critical boundaries of the horizontal and torsional vibration modals (Line 1 and Line 2), this factor is obtained by adding together the ratios of the bifurcation parameters at the same abscissa. Since the critical boundary of the vertical vibration modal (Line 3) is more sensitive to the rolling speed, this factor is instead obtained by adding together the ratios of the rolling speeds at the same ordinate. Even though this factor is not exact, it can be used to compare the effects of the different parameters on the stability of the different vibration modals. The factor is expressed as:

$\text{Line 1, 2:}\quad S(u)=\frac{1}{2n}\left(\sum_{i=1}^{n}\left|\ln\frac{b_{i}(0.75u)}{b_{i}(u)}\right|+\sum_{i=1}^{n}\left|\ln\frac{b_{i}(1.25u)}{b_{i}(u)}\right|\right),\quad i=1,2,\dots,n,$

$\text{Line 3:}\quad S(u)=\frac{1}{2n}\left(\sum_{i=1}^{n}\left|\ln\frac{v_{ri}(0.75u)}{v_{ri}(u)}\right|+\sum_{i=1}^{n}\left|\ln\frac{v_{ri}(1.25u)}{v_{ri}(u)}\right|\right),\quad i=1,2,\dots,n.$

The mean relative sensitivity factors for the aforementioned parameters are given in Table 3. Among the parameters, the reduction ratio has the most significant influence on the stability of all three vibration modals. In addition, the influence of the entry thickness on the horizontal vibration modal, and the influences of the tensions at entry and exit on the torsional vibration modal, are relatively large. For the vertical vibration modal, the entry thickness and the tensions have the second most influence on modal stability. In conclusion, the system stability domain is closely related to the rolling process parameters, and the effect of the reduction ratio is the most significant.
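The mean relative sensitivity factor is easy to compute once the critical curves have been sampled. The sketch below implements it for Line 1/Line 2 (the sample values used in the test are made up for illustration; the curve data behind Figs. 7-12 are not reproduced here):

```python
import math

def sensitivity(b_low, b_base, b_high):
    # Mean relative sensitivity over n sample points: the average of
    # |ln(b_i(0.75u)/b_i(u))| and |ln(b_i(1.25u)/b_i(u))|, where the
    # three lists hold the bifurcation parameter sampled at 0.75u, u
    # and 1.25u at the same abscissas.
    n = len(b_base)
    total = sum(abs(math.log(lo / b)) + abs(math.log(hi / b))
                for lo, b, hi in zip(b_low, b_base, b_high))
    return total / (2 * n)
```

For Line 3 the same function applies unchanged, with the sampled rolling speeds $v_{ri}$ in place of $b_i$.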
Moreover, the stability trends of these three modals with the change of the reduction ratio are the same. Therefore, in continuous rolling production, reasonably allocating the reduction ratio of each stand in a tandem mill is the key to ensuring the rolling mill runs smoothly and efficiently.

Table 3. Mean relative sensitivity factors for the aforementioned parameters

Vibration modal | $S$(${\sigma }_{0}$) | $S$(${\sigma }_{1}$) | $S$(${\mu }_{0}$) | $S$($\epsilon$-${h}_{0}$) | $S$($\epsilon$-${h}_{1}$) | $S$(${h}_{0}$)
Horizontal (Line 1) | 0.077 | 0.090 | 0.086 | 0.384 | 0.477 | 0.259
Torsional (Line 2) | 0.101 | 0.112 | 0.039 | 0.412 | 0.433 | 0.058
Vertical (Line 3) | 0.025 | 0.026 | 0.013 | 0.100 | 0.091 | 0.023

6. Conclusions

In this paper, based on the rolling mill vertical-torsional-horizontal coupled dynamic model with consideration of the nonlinear friction, the system stability domain has been determined by the Hurwitz algebraic criterion. Then the system Hopf bifurcation types have been judged. Finally, the influences of the rolling process parameters on the system stability domain have been analyzed in detail. The following conclusions are drawn:

1) The system stability domain is enclosed by the instability critical boundaries of the torsional, horizontal and vertical vibration modals. At different Hopf bifurcation points, the system Hopf bifurcation types may be different.

2) The critical boundaries move with the change of the rolling process parameters, which in turn changes the system stability domain. Among the parameters, the influence of the reduction ratio is the most significant. In addition to the reduction ratio, the stability of the horizontal vibration modal is more sensitive to the strip entry thickness, the stability of the torsional vibration modal is more sensitive to the tensions at entry and exit, and the stability of the vertical vibration modal is more sensitive to both the tensions and the strip entry thickness.
3) When rolling thin strip, the cross point of the torsional critical boundary and the horizontal critical boundary may overflow or disappear. In that situation, instability of the whole system induced by the horizontal vibration modal would not occur.

4) In actual production, clarifying how the stability domain boundaries move with the change of the different rolling process parameters can provide a theoretical reference for optimizing the rolling process plan, as well as for selecting an optimal rolling process parameter to construct a state feedback controller.

References

[1] Yarita I., Furukawa K., Seino Y. An analysis of chattering in cold rolling of ultrathin gauge steel strip. Transactions ISIJ, Vol. 18, Issue 1, 1978, p. 1-10.
[2] Tlusty J., Critchley S., Paton D. Chatter in cold rolling. Annals of the CIRP, Vol. 31, Issue 1, 1982, p. 195-199.
[3] Gao Z. Y., Zang Y., Zeng L. Q. Review of chatter in the rolling mills. Journal of Mechanical Engineering, Vol. 51, Issue 16, 2015, p. 87-105.
[4] Yun I. S., Wilson W. R. D., Ehmann K. F. Chatter in the strip rolling process. Part 1: dynamic model of rolling; Part 2: dynamic rolling experiments; Part 3: chatter model. Journal of Manufacturing Science and Engineering, Vol. 120, Issue 5, 1998, p. 330-348.
[5] Hu P. H., Ehmann K. F. A dynamic model of the rolling process. Part 1: homogeneous model; Part 2: inhomogeneous model. International Journal of Machine Tools and Manufacture, Vol. 40, Issue 1, 2000, p. 1-31.
[6] Hu P. H., Zhao H. Y., Ehmann K. F. Third-octave-mode chatter in rolling. Part 1: chatter model; Part 2: stability of a single-stand mill; Part 3: stability of a multi-stand mill. Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, Vol. 220, Issue 8, 2006, p. 1267-1303.
[7] Tamiya T., Furui K., Lida H. Analysis of chattering phenomenon in cold rolling. Proceedings of Science and Technology of Flat Rolled Products, International Conference on Steel Rolling, Tokyo, Vol. 2, 1980, p. 1191-1207.
[8] Hu P. H., Ehmann K. F. Fifth octave mode chatter in rolling. Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, Vol. 215, Issue 6, 2001.
[9] Krot P. Nonlinear vibrations and backlashes diagnostics in the rolling mills drive trains. ENOC, Saint Petersburg, Russia, Vol. 30, Issue 6, 2008, p. 26-30.
[10] Swiatoniowski A. Interdependence between rolling mill vibrations and the plastic deformation process. Journal of Materials Processing Technology, Vol. 61, Issue 4, 1996, p. 354-364.
[11] Paton D. L., Critchley S. Tandem mill vibration: its cause and control. Iron and Steel Making, Vol. 12, Issue 3, 1985, p. 37-43.
[12] Yan X. Q., Shi C., Cao X., et al. Research on coupled vertical-torsion vibration of mill-stand of CSP mill. Journal of Vibration, Measurement and Diagnosis, Vol. 28, Issue 4, 2008, p. 377-381.
[13] Sims R. B., Arthur D. F. Speed-dependent variables in cold strip rolling. Journal of Iron and Steel Institute, Vol. 172, Issue 3, 1952, p. 285-295.
[14] Shi Peiming, Xia Kewei, Liu Bin, et al. Dynamics behaviors of rolling mill's nonlinear torsional vibration of multi-degree-of-freedom main drive system with clearance. Journal of Mechanical Engineering, Vol. 48, Issue 17, 2012, p. 57-64.
[15] Panjkovic V., Gloss R., Steward J., et al. Causes of chatter in a hot strip mill: observations, qualitative analyses and mathematical modelling. Journal of Materials Processing Technology, Vol. 212, Issue 4, 2012, p. 954-961.
[16] Thomsen J. J. Using fast vibrations to quench friction-induced oscillations. Journal of Sound and Vibration, Vol. 228, Issue 5, 1999, p. 1079-1102.
[17] Zeng L. Q., Zang Y., Gao Z. Y., et al. Stability analysis of the rolling mill multiple-modal-coupling vibration under nonlinear friction. Journal of Vibroengineering, Vol. 17, Issue 6, 2015.
[18] Gao Z. Y., Zang Y., Wu D. P. Hopf bifurcation and feedback control of self-excited torsion vibration in the drive system. Noise and Vibration Worldwide, Vol. 42, Issue 10, 2001, p. 68-74.
[19] Zou J. X., Xu L. J. Tandem Mill Vibration Control. Metallurgical Industry Press, Beijing, 1998.
[20] Liu B., Liu S., Zhang Y. K., et al. Bifurcation control for electromechanical coupling vibration in rolling mill drive system based on nonlinear feedback. Journal of Mechanical Engineering, Vol. 46, Issue 8, 2010, p. 160-166.

About this article

Keywords: chaos, nonlinear dynamics and applications; rolling mill; vertical-torsional-horizontal coupling vibration; nonlinear friction; rolling process parameters.

This study is supported by the National Natural Science Foundation of China (No. 51175035), the Ph.D. Programs Foundation of Ministry of Education of China (No. 20100006110024), the Beijing Higher Education Young Elite Teacher Project (No. YETP0367) and the Fundamental Research Funds for the Central Universities (No. FRF-BR-14-006A).

Copyright © 2016 JVE International Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The Principia: The Authoritative Translation: Mathematical Principles of Natural Philosophy
Sir Isaac Newton; Professor I. Bernard Cohen; Anne Whitman
University of California Press

In his monumental 1687 work, Philosophiae Naturalis Principia Mathematica, known familiarly as the Principia, Isaac Newton laid out in mathematical terms the principles of time, force, and motion that guided the development of modern physical science. Even after more than three centuries and the revolutions of Einsteinian relativity and quantum mechanics, Newtonian physics continues to account for many of the phenomena of the observed world, and Newtonian celestial dynamics is used to determine the orbits of our space vehicles.

This authoritative, modern translation by I. Bernard Cohen and Anne Whitman, the first in more than 285 years, is based on the 1726 edition, the final revised version approved by Newton; it includes extracts from the earlier editions, corrects errors found in earlier versions, and replaces archaic English with contemporary prose and up-to-date mathematical forms.

Newton's principles describe acceleration, deceleration, and inertial movement; fluid dynamics; and the motions of the earth, moon, planets, and comets. A great work in itself, the Principia also revolutionized the methods of scientific investigation. It set forth the fundamental three laws of motion and the law of universal gravity, the physical principles that account for the Copernican system of the world as emended by Kepler, thus effectively ending controversy concerning the Copernican planetary system. The translation-only edition of this preeminent work is truly accessible for today's scientists, scholars, and students.
Geometry Calculator: A Practical Guide to Everyday Shapes

Geometry is a fascinating branch of mathematics that deals with shapes, sizes, relative positions, and properties of space. From simple shapes like squares and triangles to complex three-dimensional figures like cylinders and spheres, geometry plays a vital role in various fields, including architecture, engineering, and everyday life. With the advent of technology, the traditional methods of calculating geometry have evolved, and now online geometry calculators have become invaluable tools for students, professionals, and anyone interested in mathematical calculations.

What is a Geometry Calculator?

A geometry calculator is an online tool designed to perform various geometric calculations, providing users with accurate results for different shapes and dimensions. Whether you need to find the area, perimeter, volume, or other essential measurements, a geometry calculation tool can simplify the process and save time.

1. Circle: The Perfect Round

Formula for Area: Area = π × r², where r is the radius of the circle and π (pi) is approximately 3.14159.
Formula for Circumference: Circumference = 2 × π × r

Circles are everywhere: from wheels, to clocks, to pizzas. Knowing how to calculate a circle's area helps us understand how much space is inside the circle. For example, if you're planning to build a circular garden or a swimming pool, calculating the area tells you how much material (soil, water) you'll need. The circumference, on the other hand, helps when you're dealing with fences or borders around the circle, such as wrapping a rope around a tree.

2. Rectangle: The Basic Block of Construction

Formula for Area: Area = Length × Width
Formula for Perimeter: Perimeter = 2 × (Length + Width)

Rectangles are the backbone of most buildings, rooms, and furniture.
When you buy a piece of land or lay down flooring in a room, you need to calculate the area to know how much material to buy. The perimeter is essential when you're thinking about putting up a fence around a garden or laying out a border.

3. Square: A Special Type of Rectangle

Formula for Area: Area = Side × Side
Formula for Perimeter: Perimeter = 4 × Side

Squares are symmetrical, making them ideal for tiling floors or building frames. They're also easier to work with because all sides are equal. Whether you're creating a perfect square lawn or tiling a room, knowing the area ensures you use the right amount of materials.

4. Triangle: The Stable Shape

Formula for Area: Area = 1/2 × Base × Height

Triangles are known for their structural stability, making them important in architecture and engineering (think of the pyramids or bridges). The formula helps when you're working with triangular shapes in construction, landscaping, or even crafting. Knowing the area helps you understand the size and space it occupies.

5. Cylinder: The 3D Circle

Formula for Volume: Volume = π × r² × h
Formula for Surface Area: Surface Area = 2 × π × r × (r + h)

Cylinders are common in objects like cans, pipes, and containers. Calculating the volume of a cylinder is crucial when determining how much liquid it can hold. For example, if you're working with a water tank, knowing its volume ensures you don't over- or underfill it. Surface area calculations help when you need to paint or cover the outside of a cylindrical object.

6. Sphere: The Ultimate 3D Shape

Formula for Volume: Volume = 4/3 × π × r³
Formula for Surface Area: Surface Area = 4 × π × r²

Spheres are all around us: balls, bubbles, and even the Earth itself. Understanding a sphere's volume is important in industries like packaging, sports, and manufacturing.
For example, if you're filling up a spherical balloon with gas, calculating the volume ensures it holds the correct amount. The surface area is essential when considering how much material (like fabric for a ball) is needed to cover it.

7. Parallelogram: A Tilted Rectangle

Formula for Area: Area = Base × Height

Parallelograms, like rectangles, are useful in construction and design. They appear in slanted roofs, ramps, and abstract art. Calculating the area helps in practical situations where you need to determine how much space the shape covers, even if it looks skewed.

Enhancing Learning for Students

For students, understanding geometry concepts can be challenging. An online geometry calculator not only helps in performing calculations but also serves as a learning aid. By inputting values and observing the outputs, students can visualize and better comprehend geometric principles.

Free and Accessible Tool

The availability of a free geometry calculator online means that anyone can access this valuable resource without financial barriers. It is a great asset for both students studying at home and professionals working on projects requiring geometric calculations.

Problem-Solving Capabilities

A geometry problem solver can tackle various questions, from basic shapes to complex geometric problems, making it a versatile tool for anyone in need of mathematical assistance.

Custom Calculations

Many geometry calculators allow users to input custom dimensions, making it easy to adapt the tool for specific needs. For instance, if you are working on a unique architectural design, a geometry calculator can help you calculate the necessary measurements efficiently.

Why Geometry Calculations Matter

Geometry isn't just something you learn in school; it's a tool you can use throughout your life. These formulas aren't just about solving math problems: they help you make decisions in everyday activities, from home improvements to planning events.
• Efficiency: Understanding the area and volume of different shapes allows you to work more efficiently, whether you're calculating how much paint you need for a room or the volume of a tank.
• Resource Management: Geometry helps in conserving resources. By knowing exactly how much space or material is required, you avoid wastage.
• Planning and Design: If you're an architect, designer, or artist, geometry is critical in bringing your ideas to life in a structured and aesthetically pleasing way.

In summary, a geometry calculator is an essential tool that simplifies geometric calculations, making them accessible to everyone, from students to professionals. With capabilities to perform area, perimeter, volume, and surface area calculations for various shapes, it enhances learning and problem-solving experiences. Whether you're trying to find the area of a circle or the volume of a cylinder, a geometry calculator can save you time and improve your understanding of mathematical concepts. So, take advantage of these innovative tools and make your geometry calculations easier and more efficient today!
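As a rough sketch of what such a tool computes internally (the function names are our own invention, not tied to any particular online calculator), the formulas above translate directly into Python:

```python
import math

# Each function is one formula from the sections above.
def circle_area(r): return math.pi * r ** 2
def circle_circumference(r): return 2 * math.pi * r
def rectangle_area(length, width): return length * width
def rectangle_perimeter(length, width): return 2 * (length + width)
def square_area(side): return side * side
def triangle_area(base, height): return 0.5 * base * height
def parallelogram_area(base, height): return base * height
def cylinder_volume(r, h): return math.pi * r ** 2 * h
def cylinder_surface(r, h): return 2 * math.pi * r * (r + h)
def sphere_volume(r): return 4 / 3 * math.pi * r ** 3
def sphere_surface(r): return 4 * math.pi * r ** 2
```

For instance, a circular garden of radius 3 m covers circle_area(3) ≈ 28.27 m² of ground, which is exactly the "how much soil do I need" question from the circle section.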
How Do You Find the Partial Derivative of a Function?

Partial derivatives are used in vector calculus and differential geometry to differentiate a function of several variables with respect to one of those variables. By using partial derivatives we can form differential equations, which are used to find maximum and minimum points when analyzing a surface or plotting a graph. We can use a partial derivatives calculator to find the partial derivative of a function in a three-dimensional plane. Basically, we differentiate to find the rate of change, or the slope of a tangent line. When we are working on a three-dimensional surface, we use the partial derivative to find how the function is changing at any given point along each coordinate direction. In this article we try to understand the partial derivative and a simple method of how to solve it.

How we can define the partial derivative

We can understand the partial derivative from its definition. Consider a function f(x,y): the function depends on two variables x and y, which are independent of each other but each have a direct relation with the rate of change of the function f. In this situation, the derivative we get is a partial derivative. When we differentiate with respect to x, we treat y as a constant; when we differentiate with respect to y, we treat x as a constant. The partial derivative can be calculated by a partial derivative solver, and this would also increase your understanding of the partial derivative.

The partial derivative symbol and formula
When we are calculating the partial derivative with respect to x, the symbols for it are f'x, ∂xf or ∂f/∂x; the symbol of the partial derivative is "∂". If f(x,y) is a function and we want the partial derivative with respect to x, keeping y as a constant, the formula is:

∂f/∂x = lim(h→0) [f(x + h, y) − f(x, y)] / h

For the partial derivative with respect to y, keeping x constant, we get:

∂f/∂y = lim(h→0) [f(x, y + h) − f(x, y)] / h

We can find the second derivative by using the partial derivatives calculator, for our convenience.

Now consider the function f = x³ + 4xy. This function has more than one variable, and we can find its derivative by the partial derivative method:

fx = ∂/∂x (x³ + 4xy) = 3x² + 4y

Now, to find the tangent line slope at the point P(2,2), we put in the values. The value of fx at (2,2) is:

3x² + 4y = 3(2)² + 4(2) = 3(4) + 8 = 12 + 8 = 20

This means the slope of the tangent line is 20; that is, fx = 20 at (2,2). For finding the partial derivative directly, we can use the partial derivative calculator.

Partial derivative rules

There are four types of rules for the partial derivative: the product rule, the quotient rule, the power rule, and the chain rule.

Product rule: If we need the partial derivatives of a product, we apply the following rule. If u = f(x,y)·g(x,y), then:

ux = g(x,y) fx + f(x,y) gx
uy = g(x,y) fy + f(x,y) gy

The product of derivatives can be found by using the partial derivative calculator, to avoid the lengthy product calculations. There are separate formulas for the quotient, power, and chain rules for the partial derivative; we can use the derivative chain rule calculator to find the partial derivative with respect to the independent variables.
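The worked example can be double-checked numerically. The sketch below approximates the partial derivatives of f = x³ + 4xy with central differences, holding the other variable constant exactly as in the definition (the step size h = 1e-6 is an arbitrary choice):

```python
def f(x, y):
    return x ** 3 + 4 * x * y

def partial_x(f, x, y, h=1e-6):
    # Central difference in x with y held constant -- the numerical
    # counterpart of treating y as a constant while differentiating.
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def partial_y(f, x, y, h=1e-6):
    # Central difference in y with x held constant.
    return (f(x, y + h) - f(x, y - h)) / (2 * h)
```

At the point P(2,2), partial_x(f, 2, 2) comes out within rounding error of the exact 3(2)² + 4(2) = 20 above, and partial_y(f, 2, 2) matches the exact ∂f/∂y = 4x = 8.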
{"url":"https://dailyonoff.com/how-do-you-find-the-partial-derivative-of-a-function/","timestamp":"2024-11-08T07:33:28Z","content_type":"text/html","content_length":"81888","record_id":"<urn:uuid:635b8186-76c9-4602-8785-1866f2068f0b>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00377.warc.gz"}
Example 1 - Construct a triangle similar to triangle ABC with its sides equal to 3/4 of the corresponding sides
Last updated at April 16, 2024 by Teachoo

Example 1: Construct a triangle similar to a given triangle ABC with its sides equal to 3/4 of the corresponding sides of the triangle ABC (i.e. of scale factor 3/4).

Here, we are given Δ ABC, and scale factor 3/4
∴ Scale Factor < 1
We need to construct a triangle similar to Δ ABC. Let's follow these steps.

Steps of construction
1. Draw any ray BX making an acute angle with BC on the side opposite to the vertex A.
2. Mark 4 (the greater of 3 and 4 in 3/4) points B1, B2, B3 and B4 on BX so that BB1 = B1B2 = B2B3 = B3B4.
3. Join B4C and draw a line through B3 (the 3rd point, 3 being the smaller of 3 and 4 in 3/4) parallel to B4C, to intersect BC at C′.
4. Draw a line through C′ parallel to the line CA to intersect BA at A′.

Thus, Δ A′BC′ is the required triangle.

Justification
Since the scale factor is 3/4, we need to prove
A′B/AB = A′C′/AC = BC′/BC = 3/4.
By construction, BC′/BC = BB3/BB4 = 3/4.
Also, A′C′ is parallel to AC, so they will make the same angle with line BC:
∠ A′C′B = ∠ ACB
Now, in Δ A′BC′ and Δ ABC:
∠ B = ∠ B
∠ A′C′B = ∠ ACB
∴ Δ A′BC′ ∼ Δ ABC
Since corresponding sides of similar triangles are in the same ratio,
A′B/AB = A′C′/AC = BC′/BC
So, A′B/AB = A′C′/AC = BC′/BC = 3/4.
Thus, our construction is justified.
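The construction can also be checked with coordinate geometry: placing C′ and A′ on BC and BA at 3/4 of the distance from B forces all three side ratios to equal the scale factor. (The coordinates below are an arbitrary choice made for the check.)

```python
import math

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

def point_along(B, P, k):
    # point on segment BP at fraction k of the way from B
    return (B[0] + k * (P[0] - B[0]), B[1] + k * (P[1] - B[1]))

B, C, A = (0.0, 0.0), (8.0, 0.0), (2.0, 5.0)   # any triangle ABC
k = 3 / 4                                       # scale factor

C1 = point_along(B, C, k)   # C' on BC
A1 = point_along(B, A, k)   # A' on BA

ratios = [dist(A1, B) / dist(A, B),
          dist(B, C1) / dist(B, C),
          dist(A1, C1) / dist(A, C)]
# every ratio equals the scale factor 3/4
```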
{"url":"https://www.teachoo.com/5758/2007/Example-1---Construct-a-triangle-similar-to-triangle-ABC-with-its-side/category/Constructing-similar-triangle-as-per-scale-factor---Scale-factor---1/","timestamp":"2024-11-10T14:45:21Z","content_type":"text/html","content_length":"115656","record_id":"<urn:uuid:d4d2192f-9860-4b87-9439-0ad9d7e044ef>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00041.warc.gz"}
If f'(x) = 8x^3 - 16x, how do you use the second derivative test to find maximum and minimum? | Socratic

1 Answer

Without knowing the function $f \left(x\right)$ we cannot find the values, but we can find where the extrema occur.

Given $f ' \left(x\right) = 8 {x}^{3} - 16 x$ and assuming that the domain of $f$ includes the zeros of $f '$, we proceed:

Finding Critical Numbers for $f$

A critical number for $f$ is a number in the domain of $f$ at which either $f ' \left(x\right) = 0$ or $f ' \left(x\right)$ does not exist. In this problem, $f '$ is a polynomial, so it exists for all $x$. The critical numbers for this $f$ are the zeros of $f '$.

$f ' \left(x\right) = 8 {x}^{3} - 16 x = 8 x \left({x}^{2} - 2\right) = 0$ at $- \sqrt{2}$, $0$, and $\sqrt{2}$

These are our critical numbers.

Note: To use the first derivative test for local extrema, we would check the sign of $f '$ on both sides of each critical number.

Testing the Critical Numbers

To use the second derivative test for local extrema, we check the sign of $f ' '$ at each critical number.

$f ' ' \left(x\right) = 24 {x}^{2} - 16$

At $- \sqrt{2}$, we get $f ' ' \left(- \sqrt{2}\right) = 24 {\left(- \sqrt{2}\right)}^{2} - 16$, which is clearly positive. $f \left(- \sqrt{2}\right)$ is a local minimum.

At $0$, we get $f ' ' \left(0\right) = 24 {\left(0\right)}^{2} - 16$, which is negative. $f \left(0\right)$ is a local maximum.

At $\sqrt{2}$, we get $f ' ' \left(\sqrt{2}\right) = 24 {\left(\sqrt{2}\right)}^{2} - 16$, which is clearly positive. $f \left(\sqrt{2}\right)$ is a local minimum.
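The test is easy to verify mechanically in a few lines of Python (the function names are ours): f''(x) = 24x^2 - 16 is positive at ±√2 and negative at 0, matching the classification above.

```python
def f_prime(x):      # given: f'(x) = 8x^3 - 16x
    return 8 * x**3 - 16 * x

def f_second(x):     # differentiating once more: f''(x) = 24x^2 - 16
    return 24 * x**2 - 16

critical = [-2**0.5, 0.0, 2**0.5]          # zeros of f'
for c in critical:
    assert abs(f_prime(c)) < 1e-9          # confirm c is a critical number
    kind = "local min" if f_second(c) > 0 else "local max"
    print(f"x = {c:+.4f}: f''(x) = {f_second(c):+.2f} -> {kind}")
```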
{"url":"https://socratic.org/questions/if-f-x-8x-3-16x-how-do-you-use-the-second-derivative-test-to-find-maximum-and-mi#155734","timestamp":"2024-11-09T10:26:50Z","content_type":"text/html","content_length":"36686","record_id":"<urn:uuid:db1379c1-c3c5-4b17-8f77-4282386de940>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00828.warc.gz"}
OR Gate: Among The Basic Logic Gates - Shiksha Online
Updated on Oct 3, 2024 18:40 IST

An OR gate is a fundamental digital logic gate used in electronic circuits, producing a high output (1) when any of its inputs are high. It operates based on Boolean algebra, symbolizing the logical addition operation. The gate's output is low (0) only when all its inputs are low. An OR gate is a digital logic gate with two or more inputs and one output that performs logical (inclusive) disjunction.

What is an OR gate?

This logic gate is represented with a plus (+) sign, as it is used for logical addition. It is one of the three basic logic gates from which you can construct any Boolean circuit; any function in binary mathematics can be implemented with them. To distinguish it from the XOR gate, it is also called the inclusive OR gate. OR gates are available in the TTL and CMOS logic IC families: the standard 4000-series CMOS IC is the 4071, which includes 2-input OR gates, and the TTL device is the 7432.

The above diagram represents the switching circuit of the OR gate operation. In this switching circuit, A and B are two switches, with one lamp L connected to the voltage source. According to the switching circuit, lamp L will be lit up in two cases:
1. When both switches A and B are closed.
2. When one of the two switches, A or B, is closed.
In case both switches are open, the lamp will not be lit up.

An OR gate operates on the basis of Boolean algebra. An OR gate follows the logic operation on input and output signals, allowing the signal to pass or stop. For n inputs, there are 2^n possible input combinations.

• The output of the OR gate is true (1) if at least one of the inputs is true.
• Unless every input is 0, the output of the OR gate will always be 1.

In circuit diagrams, the OR logic gate is represented by a curved shape that has two inputs and one output side. It is symbolized as one of the two following logic designs:
1. American Logic Gate Symbol (MIL/ANSI Symbol)
2. European Logic Gate Symbol (IEC Symbol)
Here, "≥1" represents that the output is activated by at least one active input.

Analytical Representation:
f(a,b) = a + b – a * b
f(0,0) = 0 + 0 – 0 * 0 = 0
f(0,1) = 0 + 1 – 0 * 1 = 1
f(1,0) = 1 + 0 – 1 * 0 = 1
f(1,1) = 1 + 1 – 1 * 1 = 1

Types of OR Gate

Technically, you can have an OR logic gate with any number of inputs. The output will be true if any of the N inputs are true. Here, we are discussing the two common types of OR gates.

2-Input OR Gate

Also known as the basic OR logic gate, this type of OR gate takes in two input values to produce a single output value. With two inputs, the number of possible input combinations is 2^2 = 4.

OR Gate Truth Table For 2-Input Gate

The following table represents the OR gate truth table. Remember that here, 0 represents false and 1 represents true.

A | B | A OR B
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 1

3-Input OR Gate

This OR logic gate has three inputs and gives an output of true if any one, two, or all three of its inputs are true.

OR Gate Truth Table For 3-Input Gate

The following table is the 3-input OR truth table:

A | B | C | A OR B OR C
0 | 0 | 0 | 0
0 | 0 | 1 | 1
0 | 1 | 0 | 1
0 | 1 | 1 | 1
1 | 0 | 0 | 1
1 | 0 | 1 | 1
1 | 1 | 0 | 1
1 | 1 | 1 | 1

Addition Using OR Logic Gate

Let us understand how the output of a 2-input OR logic gate is formed, considering the truth table above for reference:

Case 1: If A = 0 and B = 0
Using the truth table, A OR B = 0. It means that neither A nor B is true, so the output is 0.

Case 2: If A = 0 and B = 1
Using the truth table, A OR B = 1. This means at least one of A or B is true, so the output is 1.

Case 3: If A = 1 and B = 0
Using the truth table, A OR B = 1. Again, since one of A or B is true, the output is 1.

Case 4: If A = 1 and B = 1
Using the truth table, A OR B = 1.
Both A and B are true, so the output is 1.

How many inputs can an OR gate have?
While the most common OR gates have two inputs, they can be designed with multiple inputs. There's no theoretical limit to the number of inputs.

What is the Boolean expression for an OR gate?
For a 2-input OR gate with inputs A and B, the Boolean expression is A + B or A ∨ B.

How is an OR gate different from an AND gate?
An OR gate outputs '1' if any input is '1', while an AND gate outputs '1' only if all inputs are '1'.

Can OR gates be used to create other logic gates?
Yes, OR gates can be combined with other gates to create more complex logic functions.

What are some real-world applications of OR gates?
OR gates are used in various electronic systems, including alarm systems (where any triggered sensor activates the alarm) and computer memory addressing.

Can OR gates be implemented with transistors?
Yes, OR gates can be built using transistors in various configurations, such as with bipolar junction transistors (BJTs) or field-effect transistors (FETs).

About the Author
Jaya is a writer with an experience of over 5 years in content creation and marketing.
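The case analysis and truth tables above can be checked mechanically with a short sketch (the helper name is ours):

```python
def or_gate(*bits):
    # inclusive OR: output is 1 if at least one input is 1
    return 1 if any(bits) else 0

# 2-input truth table, matching the table in the article
truth_2 = [(a, b, or_gate(a, b)) for a in (0, 1) for b in (0, 1)]
assert truth_2 == [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]

# the analytical representation f(a, b) = a + b - a*b agrees on every pair
for a in (0, 1):
    for b in (0, 1):
        assert or_gate(a, b) == a + b - a * b

# works for 3 (or more) inputs too: output is 0 only when all inputs are 0
assert or_gate(0, 0, 0) == 0 and or_gate(0, 1, 0) == 1
```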
{"url":"https://www.shiksha.com/online-courses/articles/understanding-or-gate/","timestamp":"2024-11-14T18:07:42Z","content_type":"text/html","content_length":"509857","record_id":"<urn:uuid:122db18c-d371-48d0-82dc-78ec31f24772>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00614.warc.gz"}
Java - For Each Loop for Associative Array

The for-each loop is specially used to handle the elements of a collection or an array. A collection is represented by a group of elements or objects; a list is an example of a collection, since it stores a group of objects. The for-each loop repeatedly executes a group of statements for each element of the collection, but unfortunately, you can't use the for-each loop everywhere.

Java Source

public class ForEach {
    public static void main(String[] args) {
        int arr[] = {1, 2, 3, 4, 5, 6, 3, 4, 5, 6, 7};
        for (int i : arr) {
            System.out.println(i);
        }
    }
}

In the above example, the class is ForEach, which has the main function. Inside the main function, the array is defined with the integer data type: int arr[] = {1, 2, 3, 4, 5, 6, 3, 4, 5, 6, 7};. After that the for-each loop is used. To use a for-each loop you write a variable name and the collection, i.e. for (int i /* variable */ : arr /* collection */) { statements; }. The loop visits every element, from the first to the last.

Java Source

This program will help you to find the even numbers in the collection:

public class ForEach {
    public static void main(String[] args) {
        int arr[] = {1, 2, 3, 4, 5, 6, 3, 4, 5, 6, 7};
        for (int i : arr) {
            if (i % 2 == 0) { // print only the even elements
                System.out.println(i);
            }
        }
    }
}
{"url":"https://tutsmaster.org/java-for-each-loop-for-associative-array/","timestamp":"2024-11-10T17:46:19Z","content_type":"text/html","content_length":"85923","record_id":"<urn:uuid:634fe702-8811-45ff-987f-2870b42ebc07>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00219.warc.gz"}
Divisor function

The divisor function is denoted $\sigma_k(n)$ and is defined as the sum of the $k$th powers of the divisors of $n$. Thus $\sigma_k(n) = \sum_{d|n}d^k = d_1^k + d_2^k + \cdots + d_r^k$ where the $d_i$ are the positive divisors of $n$.

Counting divisors

Note that $\sigma_0(n) = d_1^0 + d_2^0 + \ldots + d_r^0 = 1 + 1 + \ldots + 1 = r$, the number of divisors of $n$. Thus $\sigma_0(n) = d(n)$ is simply the number of divisors of $n$.

Example Problems

Consider the task of counting the divisors of 72. First, we find the prime factorization of 72: $72=2^{3} \cdot 3^{2}.$ Since each divisor of 72 can have a power of 2, and since this power can be 0, 1, 2, or 3, we have 4 possibilities. Likewise, since each divisor can have a power of 3, and since this power can be 0, 1, or 2, we have 3 possibilities. By an elementary counting principle, we have $3\cdot 4=12$ divisors.

We can now generalize. Let the prime factorization of $n$ be $p_1^{e_1}p_2^{e_2}\cdots p_k^{e_k}$. Any divisor of $n$ must be of the form $p_1^{f_1}p_2^{f_2} \cdots p_k^{f_k}$ where the $f_i$ are integers such that $0\le f_i \le e_i$ for $i = 1,2,\ldots, k$. Thus, the number of divisors of $n$ is $\sigma_0(n) = (e_1+1)(e_2+1)\cdots (e_k+1)$.

Sum of divisors

The sum of the divisors, or $\sigma_1(n)$, is given by $\sigma_1(n) = (1 + p_1 + p_1^2 +\cdots + p_1^{e_1})(1 + p_2 + p_2^2 + \cdots + p_2^{e_2}) \cdots (1 + p_k + p_k^2 + \cdots + p_k^{e_k}).$ There will be $(e_1+1)(e_2+1)\cdots (e_k+1)$ products formed by taking one number from each sum, which is exactly the number of divisors of $n$. Clearly all possible products are divisors of $n$. Furthermore, all of those products are unique, since each positive integer has a unique prime factorization. Since all of these products are added together, we can conclude that this gives us the sum of the divisors.
Sum of kth Powers of Divisors

Inspired by the example of the sum of divisors, we can easily see that the sum of the $k^\text{th}$ powers of the divisors is given by \begin{align*} \sigma_k(n) &= (1+p_1^k+p_1^{2k}+\cdots +p_1^ {e_1k})(1+p_2^k+p_2^{2k}+\cdots +p_2^{e_2k})\cdots (1+p_i^k+p_i^{2k}+\cdots +p_i^{e_ik}) \\ &= \prod_{a=1}^{i}\left(\sum_{b=0}^{e_a}p_a^{bk}\right) \end{align*} where $p_1,p_2,...,p_i$ are the distinct prime divisors of $n$. This is proven in a very similar way to the $\sigma_1$ case.
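The formulas above are easy to sanity-check by brute force for small $n$:

```python
def sigma(k, n):
    """Sum of the k-th powers of the divisors of n; sigma(0, n) counts them."""
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

# 72 = 2^3 * 3^2, so sigma_0(72) = (3+1)(2+1) = 12 divisors
assert sigma(0, 72) == 12

# sigma_1(72) = (1+2+4+8)(1+3+9) = 15 * 13 = 195
assert sigma(1, 72) == 195
```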
{"url":"https://artofproblemsolving.com/wiki/index.php/Counting_divisors","timestamp":"2024-11-10T08:34:28Z","content_type":"text/html","content_length":"44764","record_id":"<urn:uuid:ffc8b204-7f81-4613-b2fe-8d74cb8bee56>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00260.warc.gz"}
Analyzing CVE-2022-0778: When Square Root Results in a Denial of Service

How could a humble SSL certificate entirely gridlock a system? Walk with us through the math

Generally speaking, secure communications should be part of infosec's solution set, not part of the problem. When CVE-2022-0778 revealed that a maliciously crafted SSL certificate could lead to near-total CPU utilization and denial of service on a targeted machine, we thought it would be instructive to see exactly how something so right can, in very specific situations, go so wrong.

CVE-2022-0778 in a Nutshell

OpenSSL is a very popular library, widely used by many organizations and software applications requiring secure communications. A recently revealed OpenSSL vulnerability, CVE-2022-0778, can use specially crafted certificates to cause a Denial of Service (DoS). The vulnerability affects both clients and servers. The vulnerability lies in OpenSSL's implementation of the Tonelli-Shanks algorithm, used to find the square roots of numbers in the elliptic curve cryptography at the heart of the encryption library. This vulnerability occurs when instead of a prime number, a composite number is passed to the algorithm. This results in a computational problem like integer factorization. In this report we'll first explain the basics of elliptic curve cryptography, and then provide a detailed analysis of the issue that leads to CVE-2022-0778. This is interesting math, so we'll walk through it carefully.

Elliptic Curve Cryptography in Short

Elliptic curve cryptography (ECC) is a public-key cryptography based on elliptic curves. It is a modern successor of the legendary RSA cryptosystem, but using the interesting properties of elliptic curves instead of RSA's multiplication of prime numbers. Because ECC can use smaller key sizes, it is faster than RSA – an obvious benefit. Elliptic curves are algebraic curves.
The following equation describes an elliptic curve:

Ax^3 + Bx^2 y + Cxy^2 + Dy^3 + Ex^2 + Fxy + Gy^2 + Hx + Iy + J = 0

Here A, B, …, J define the curve. In cryptography, however, a simplified form of the equation – called the Weierstrass elliptic function – is used:

y^2 = x^3 + ax + b

If we visualize the curve, we see something like this:

Figure 1: An elliptic curve. ("Elliptic" in this context refers to the algebra describing the curve, not to the oval shape known as an ellipse.) Image source: https://www.desmos.com/

If we take two points on this curve — point 1 and point 2 — and draw a line, that line intersects the curve at a third point, point 3. If we take the opposite point (that is, the point equal to point 3 reflected across the X axis), it will be point 1 + point 2, as shown in the above image.

ECC uses an elliptic curve over a finite field, and the points on the curve are limited to integer coordinates. Thus, the above curve in modular form is as follows:

y^2 ≡ x^3 + ax + b (mod p)

where p is a prime number denoting the size of the field. (The symbol ≡ denotes congruence.)

Secondly, because elliptic curves in ECC are defined over finite fields, there exists for every elliptic curve a pre-defined constant point, which is denoted G — the generator point, also known as the base point. Any point P in the subgroup over the elliptic curve can be generated by multiplying G with some integer, K, like so: P = K*G.

Finally, performing the algebraic operation on any two points in the field results in another point in the field. All the points on the elliptic curve over a finite field form a finite group; the total number of points is called the order of the curve, and is denoted by n. There can be multiple non-overlapping subgroups; their number is denoted by h and is called the cofactor. The number of points in each subgroup is denoted by r. So:

n = h * r

We will not delve further into order and cofactor in this report; it is sufficient for our purposes here to note they exist.
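The group law and the multiplication P = K*G described above can be sketched concretely on a small curve. The parameters below (y^2 = x^3 + 2x + 2 over F_17, base point G = (5, 1), subgroup order 19) are a common textbook toy example, not a curve anyone would deploy; double-and-add shows why computing K*G is cheap even though recovering K from P is not.

```python
P_MOD, A = 17, 2          # toy curve y^2 = x^3 + 2x + 2 over F_17
INF = None                # point at infinity (the group identity)

def inv(x):
    return pow(x, P_MOD - 2, P_MOD)   # Fermat inverse, valid for prime modulus

def ec_add(P, Q):
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF                     # P + (-P) = point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD   # tangent slope
    else:
        lam = (y2 - y1) * inv(x2 - x1) % P_MOD          # chord slope
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, P):
    # double-and-add: computes k*P in O(log k) group operations
    R = INF
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

G = (5, 1)
print(ec_mul(2, G))    # (6, 3)
print(ec_mul(19, G))   # None: G has order 19 on this curve
```

Note that 19·G lands on the point at infinity, the group identity, and 20·G wraps back around to G.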
How Is It Used for Encryption?

So, in an elliptic curve we have the following:

1. The elliptic curve over a finite field of form – y^2 ≡ x^3 + ax + b (mod p)
2. The generator point or base point — denoted by G.
3. An integer value K which, when multiplied with G, results in another point P on the curve – P = K*G.

Now, it is quick and easy to calculate P as shown, by multiplying K and G. But if we want to figure out K by "dividing" P by G, i.e., K = P/G, then it is very difficult or infeasible for large values of K. This asymmetry, in which multiplying is easy but dividing back is hard, is the basis of elliptic curve cryptography and is known as the elliptic curve discrete logarithm problem (ECDLP). Many ECC algorithms rely on this problem by using carefully chosen elliptic curves — fields for which no efficient algorithm exists.

In ECC encryption, K is the private key, P is the public key, and G is the generator point. By understanding ECDLP as shown above, we know that given a generator point G, it's very easy to figure out the public key P on the elliptic curve by multiplying G by the private key K. But even with the generator point G and public key P known, it remains very difficult to calculate the private key K!

Saving Space with Compressed EC Points… at a Price

Now we know that a public key is simply a point on the elliptic curve. As such, it will have an X coordinate and a Y coordinate. As a reminder, the equation for the elliptic curve itself is y^2 ≡ x^3 + ax + b (mod p). So, if we know x, we can easily figure out the two possible values of y by solving the following two equations:

y1 ≡ +sqrt(x^3 + ax + b) (mod p)
y2 ≡ p − y1 (mod p)

p is an odd prime number, so one value of y will be even and the other will be odd, and either will satisfy the curve equation. So we don't need to use {x, y} coordinates in the public key; we can simply use {x, [even] or [odd]} for our x coordinates, with even or odd denoted by an extra parity bit called a Compressed EC Point.
Doing so can save us some space, which is useful for network encryption since it’s less data to transfer. However, it’s at this point we encounter the vulnerability in OpenSSL’s implementation of the Tonelli-Shanks algorithm, which is used to calculate the two square-root values that give us y1 and y2. Basically, thanks to incautious implementation, the issue in OpenSSL can be triggered while finding the value of y from x if the public key uses coordinates in the Compressed EC Point format. How Is It Used in SSL Certificates? With all this in mind, we can see a number of things when looking at an SSL certificate that uses elliptic curve cryptography, as shown in Figure 2: Figure 2: Details of an SSL certificate As we see, in SSL certificates we have the following fields: 1. Public Key (pub) = this is P, a point on the curve; i.e., x and y coordinates. This can be in compressed format. 2. Prime = this is the prime number p, which will be used as (mod p) 3. A and B = these two define the curve, i.e., y^2 ≡ x^3 + ax + b (mod p) 4. Generator = This is the base point on the curve by which any other point on the curve can be calculated. 5. Order and Cofactor = These denote the order and cofactor of the curve. (Again, we will not be using these for this report.) This certificate can be used by the server or by clients. Once this certificate reaches the receiver, the receiver will use the public key to encrypt. If the public key is in the compressed format, the receiver needs to decompress it to figure out y, and to do that they need to solve the curve equation — y^2 ≡ x^3 + ax + b (mod p) . Since solving the curve equation requires calculating a modular square root of a potentially big number, the function BN_mod_sqrt() will be called. This is the specific function where the CVE-2022-0778 vulnerability can be triggered by a specially crafted certificate with malicious values. 
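Decompression itself can be sketched for the simple case p ≡ 3 (mod 4), where the modular square root needs only a single modular exponentiation and no Tonelli-Shanks loop at all. The toy curve y^2 = x^3 + x + 1 over F_23 and the sample point x = 3 are illustrative choices of ours:

```python
def decompress(x, parity, a, b, p):
    """Recover y from x and a parity bit on the curve y^2 = x^3 + ax + b (mod p).
    Only valid for p % 4 == 3, where sqrt(v) = v^((p+1)/4) mod p."""
    assert p % 4 == 3
    rhs = (x * x * x + a * x + b) % p
    y = pow(rhs, (p + 1) // 4, p)
    if (y * y) % p != rhs:
        raise ValueError("x is not the x-coordinate of a curve point")
    if y % 2 != parity:
        y = p - y          # the other root; p is odd, so it has opposite parity
    return y

# y^2 = x^3 + x + 1 over F_23: for x = 3 the two roots are 10 (even) and 13 (odd)
print(decompress(3, 0, 1, 1, 23), decompress(3, 1, 1, 1, 23))  # 10 13
```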
Calculating the Modular Square Root

As mentioned, OpenSSL uses the function BN_mod_sqrt() to calculate modular square roots. As per OpenSSL's documentation, BN_mod_sqrt() returns the modular square root of a such that in^2 = a (mod p). The modulus p must be a prime, or an error or an incorrect "result" will be returned. The result is stored into in, which can be NULL; in that case the result will be newly allocated. We can see, therefore, that p must be a prime number or the result will be incorrect. As mentioned, the CVE-2022-0778 vulnerability lies in OpenSSL's implementation of the Tonelli-Shanks algorithm used to perform that calculation. This algorithm finds a square root of a number n modulo p and expects two parameters, p and n. Here p is a prime number, and n is the number for which we want to find a square root. There are a few fundamental assumptions and steps for this algorithm, which are summarized below. (For detailed information on the algorithms and proofs supporting this, please check the reference section at the end of this blog.)
4. To find new values of R and t, we can multiply it by a factor b^2 which will be 2^M-2 — the root of -1. To calculate b, z^Q will be repeatedly squared. If the first solution is R, the second will be p-R. Replicating the Issue and Debugging the Code Proof-of-concept code has been posted to GitHub by drago-96 (Riccardo Zanotto). We can generate a certificate with crafted parameters and use the following command to replicate the issue with a vulnerable OpenSSL version: Figure 3: Passing OpenSSL a certificate with improper parameters We can see the CPU utilization is almost 100%: Figure 4: OpenSSL 100% CPU utilization For debugging purposes we will use drago96’s simple proof of concept, which calls the vulnerable function. On compiling and running it, we can see that it goes into an infinite loop causing almost 100% CPU utilization. Figure 5: Parsing the improper certificate leads to severe CPU utilization Looking at the certificate we can see that it uses two specific values of p and a, which are 697 and 696 respectively. If we look at the elliptic curve equation using those values, it becomes where x^3 + ax + b = 696 Since we know the value of p should be prime, we should notice that 697 is not actually a prime number. (It can be written as 2^3 * 87 + 1.) Figure 6: Prime or not prime? Not prime. If we try this program with any other random numbers, then this issue will not be replicated. (The logic behind using these specific values, and finding more such values of p and a, is discussed later in this post.) On running OpenSSL with this proof of concept and looking at the call to bn_mod_sqrt, we can see that parameters passed to it are 696 and 697, as shown in Figure 7. Figure 7: Passing the parameters to the square-root function. There is a check to see if p is odd, or if it is 1 or 2, but there is no check to see whether p is prime, as shown below. 
First p is checked: Figure 8: p is an odd number that is neither 0 nor 1, so it passes the checks Then there is a check to see if a is 0 or 1: Figure 9: Making sure a is neither 1 nor 0 At this point, the function sets the value of e as 1 and calls the BN_is_bit_set function, which basically converts it in the form 2^e * q as shown below, incrementing the value of e for each loop Figure 10: Converting the value of e So, e is the power of 2, and q is an odd number. At this point there are different potential outcomes depending on the value of e. If the value of e is either 1 or 2, the vulnerable code will not be reached. But if the value of e is 3 or more, then the vulnerable code can be reached, and the Tonelli-Shanks algorithm will be used: Figure 11: In search of a value of y that is not a square It takes y from 2 to 22 and then finds a Kronecker symbol (which is a generalized version of a Jacobi symbol, which is a generalized version of a Legendre symbol) with a value of q. This in turn is used to find the non-quadratic residue modulo p. There are few conditions at this point: 1. If the returned value is 0 or < -1, then the program will exit. 2. If the returned value is 1, then the do while loop will continue. 3. If the returned value is -1, then the program will go to the next step, and we have found z ; that is, the non-quadratic residue modulo p. It will then calculate the value of b, and enter into a while loop: Figure 12: In the loop It then checks if b is one; if so, then the solution is found. Otherwise, it will calculate the value of t = b^2(mod p). But in our example, t will be 1 for the first time, as the value of p is 697 and b is 696. Figure 13: And now t=1 Under these circumstances, the program will not enter the second while loop. 
After performing a few operations, the value of t will be changed: Figure 14: A change in value Then it will move the value of i, which is 1, to e: Figure 15: Adding the value of i to e It will then again go to the while loop start. At this point t is not 1, so this time it will enter in the while loop. The value of i is 2 but the value of e is still 1, so the exit condition is not Figure 16: No exit Next the process will call another function, BN_mod_mul, to calculate the value of t: Figure 17: Enter BN_mod_mul If we step through this function, we can see the following: Figure 18: With a and b equivalent, the BN_sqrt function is called Here a and b are the same, so the process will call the BN_sqrt function, which will calculate t = a^2(mod p). The value of t will never become 1, as p is not a prime number but a composite number. This loop will therefore continue forever. The endless, pointless calculation will thus cause extreme resource utilization and ultimately DoS in the application. How does a loop like this happen? In this article we’re looking at the math and code, not the coding choices, but our colleagues at Naked Security have a post concerning CVE-2022-0778 that leads into a discussion of the oddly framed and nested loops in OpenSSL’s code that led to this problem. Generating Numbers That Can Cause DoS So far during the debugging we have used p=697 and n=696 as our values. But there are many such pairs of numbers that can cause this infinite loop (and thus the DoS). We can easily set forth the conditions for such pairs: 1. p should be a composite odd number. (Again, if it is a prime number, the value of t becomes 1 and we exit the loop.) 2. As mentioned earlier in the blog, if p is an odd prime number, then we can write p-1 as p-1 = 2^Q * S Now we need value of e >= 3, so let’s use 3. This equation becomes p-1 = 2^3 * 2^c * S (where c=Q-3) , which we can write as p-1 = 8 * 2^c * S This means p-1 must be a multiple of 8. 
We can write that as: p-1 = 8 * d * S Now if we calculate p, it will be: p= 8*d*S+1 Here S should be an odd number. 3. We have seen how the Kronecker symbol was calculated from 2 to 22 with p: 1. If it’s 1, we continue the loop. 2. If it’s 0, we discard that value. 3. If it’s -1, that’s the number. Based on this we can write a simple program in C that can generate various problematic pairs for p and p-1. Figure 19: A simple C program to find dangerous pairs of values If we run this program, we get the following: Figure 20: Five pairs of numbers that can trigger the CVE-2022-0778 DoS Twitter user fwarashi has explained this in detail in a post. (A sample python program is also available at that URL.) As we see in Figure 20, a few other number pairs that can cause the CVE-2022-0778 DoS are (184,185), (328,329), (376,377), (424,425), and (472,473). There are more, but to look briefly at two sets we y^2 ≡ 184(mod 185) — i.e., x^3 + ax + b = 184 y^2 ≡ 328(mod 329) — i.e., x^3 + ax + b = 328 and proper values of x,a and b can be selected that satisfy this equation. Fixing a Hole In the unpatched version of OpenSSL, there was a while loop checking for the value of t, and inside that there was an if condition which was checking the value of i ==e, so the program was missing cases where i>e. If the value of i > e, and the value of t does not become 1 in case of a non-prime number, an infinite loop results, and DoS results from that. Figure 21: The unpatched code If we look at the patched code, we can see that instead of the while loop, a for loop is added. This runs till the value of i<e, thus preventing the infinite loop issue. Figure 22: Patched code If the value of i > e, then it will simply exit. OpenSSL is a popular, broadly used library; it is not limited to any specific platform or application. Thus, the CVE-2022-0778 vulnerability could potentially affect systems of all sorts around the globe. 
The issue in OpenSSL can be triggered while finding the value of y from x if the public key uses coordinates in the Compressed EC Point format, causing high system utilization and potentially leading to a Denial of Service attack. Since implementing cryptographic algorithms is a niche task, sometime errors such as these may go unnoticed. In cases such as OpenSSL, where the code itself is complex, the complexity of the math itself can make it hard to spot a potential bug. All is not lost, though, as careful analysis can unpack the problem and guide us toward a solution. Sophos coverage Sophos IPS customers are protected against this threat by following sids which checks for malicious certificate download: 2306976, 2306977 . References and Credits This vulnerability involves cryptography, number theory, algorithms, and so on. During this research I was helped by various available resources over the web and offline books. The following are some of the online materials I have used. Source code and proof of concept: OpenSSL: https://www.openssl.org/ OpenSSL’s repository: https://github.com/openssl/openssl POC for CVE-2022-0778 from Drago-96 [Riccardo Zanotto]: https://github.com/drago-96/CVE-2022-0778 Analysis by Kurenaif [@fwarashi]: https://zenn.dev/kurenaif/articles/ec2eec4ec7ec52 For further information: Elliptic Curve Cryptography (from Nakov, Svetlin, Practical Cryptography for Developers [2018]): https://wizardforcel.gitbooks.io/practical-cryptography-for-developers-book/content/ Hasan, Harady, Elliptic Curves: A journey through theory and its applications (2019): https://uu.diva-portal.org/smash/get/diva2:1334316/FULLTEXT01.pdf Integer Factorization Problem (HandWiki): https://handwiki.org/wiki/Integer_factorization “OpenSSL patches infinite-loop DoS bug in certificate verification,” Naked Security (Sophos blog, 18 March 2022), https://nakedsecurity.sophos.com/2022/03/18/ Tonelli-Shanks algorithm (HandWiki): 
https://handwiki.org/wiki/Tonelli%E2%80%93Shanks_algorithm
Tonelli-Shanks algorithm (Wikipedia): https://en.wikipedia.org/wiki/Tonelli%E2%80%93Shanks_algorithm
Database of Original & Non-Theoretical Uses of Topology

Coordinate-Free Coverage in Sensor Networks With Controlled Boundaries via Homology (2006)
V. de Silva, R. Ghrist

Abstract

Tools from computational homology are introduced to verify coverage in an idealized sensor network. These methods are unique in that, while they are coordinate-free and assume no localization or orientation capabilities for the nodes, there are also no probabilistic assumptions. The key ingredient is the theory of homology from algebraic topology. The robustness of these tools is demonstrated by adapting them to a variety of settings, including static planar coverage, 3-D barrier coverage, and time-dependent sweeping coverage. Results are also given on hole repair, error tolerance, optimal coverage, and variable radii. An overview of implementation is given.
MarkLogic Server 11.0 Product Documentation

geo:circle-polygon(
   $circle as cts:circle,
   $arc-tolerance as xs:double,
   [$options as xs:string*]
) as cts:region

Construct a polygon approximating a circle.

circle — A cts circle that defines the circle to be approximated.

arc-tolerance — How far the approximation can be from the actual circle, specified in the same units as the units option. Arc-tolerance should be greater than the value of the tolerance option.

options — Options with which you can customize this operation. The following options are available:

coordinate-system — Use the given coordinate system. Valid values are wgs84, wgs84/double, etrs89, etrs89/double, raw and raw/double. Defaults to the governing coordinate system.

precision — Use the coordinate system at the given precision. Allowed values: float and double. Defaults to the precision of the governing coordinate system.

units — Measure distance, radii of circles, and tolerance in the specified units. Allowed values: miles (default), km, feet, meters.

tolerance — Tolerance is the largest allowable variation in geometry calculations. If the distance between two points is less than tolerance, then the two points are considered equal. For the raw coordinate system, use the units of the coordinates. For geographic coordinate systems, use the units specified by the units option. Tolerance should be smaller than the value of the arc-tolerance parameter.

Usage Notes

When approximating the polygon, if the distance between two points is less than tolerance, then they are considered to be the same point. The arc-tolerance parameter specifies the allowable error in the polygon approximation. That is, the resulting polygon will differ from the provided circle by at most arc-tolerance. The arc-tolerance parameter value must be greater than the tolerance, and both arc-tolerance and tolerance should be expressed in the same units.
For example, if the units option is set to "km" and you're using a geodetic coordinate system, then arc-tolerance and tolerance (if specified) should also be in kilometers. The default tolerance is 0.05km (or the equivalent in other units). Use the tolerance option to override the default.

The value of the precision option takes precedence over that implied by the governing coordinate system name, including the value of the coordinate-system option. For example, if the governing coordinate system is "wgs84/double" and the precision option is "float", then the operation uses single precision.

It is recommended to use this function for queries, as circle geometry matching cannot be optimized. Convert circles to polygons before passing them in as arguments to your queries to get better performance. A common question to ask is 'what points do I have in my database that are within 10 kilometers of point A?' This function provides an optimized path for this query.

Example

geo:circle-polygon(cts:circle(7,cts:point(10,20)),4, ("tolerance=1"));

=> A cts:region with the following coordinates:
   10.10185,20
   10.050913,20.088997
   9.9490623,20.08897
   9.8981495,20
   9.9490623,19.91103
   10.050913,19.911001
   10.10185,20

xquery version "1.0-ml";
import module namespace op = 'http://marklogic.com/optic' at 'MarkLogic/optic.xqy';
import module namespace ogeo = 'http://marklogic.com/optic/expression/geo' at 'MarkLogic/optic/optic-geo.xqy';

(: Optic example using the Value Processing Function ogeo:intersects() and geo:circle-polygon() :)
(: $circlePolygonLit is a polygon representing a 10 kilometer radius around Tucson, Arizona, US :)
let $circlePolygonLit := geo:circle-polygon(cts:circle(10,cts:point('POINT(-110.97345746274591 32.213051181896034)')),0.01,('tolerance=0.001','units=km'))
let $plan := op:from-view('buildings', 'builds')
  =>op:select(('name', op:col('geoid')))
return $plan=>op:result()

=> rows representing names and geoids where values in the 'poly' column INTERSECT $circlePolygonLit
4.4 The Mean Value Theorem

Learning Objectives

• Explain the meaning of Rolle’s theorem.
• Describe the significance of the Mean Value Theorem.
• State three important consequences of the Mean Value Theorem.

The Mean Value Theorem is one of the most important theorems in calculus. We look at some of its implications at the end of this section. First, let’s start with a special case of the Mean Value Theorem, called Rolle’s theorem.

Rolle’s Theorem

Informally, Rolle’s theorem states that if the outputs of a differentiable function [latex]f[/latex] are equal at the endpoints of an interval, then there must be an interior point [latex]c[/latex] where [latex]f^{\prime}(c)=0[/latex]. (Figure) illustrates this theorem.

Rolle’s Theorem

Let [latex]f[/latex] be a continuous function over the closed interval [latex][a,b][/latex] and differentiable over the open interval [latex](a,b)[/latex] such that [latex]f(a)=f(b)[/latex]. There then exists at least one [latex]c \in (a,b)[/latex] such that [latex]f^{\prime}(c)=0[/latex].

Let [latex]k=f(a)=f(b)[/latex]. We consider three cases:

1. [latex]f(x)=k[/latex] for all [latex]x \in (a,b)[/latex].
2. There exists [latex]x \in (a,b)[/latex] such that [latex]f(x)>k[/latex].
3. There exists [latex]x \in (a,b)[/latex] such that [latex]f(x)<k[/latex].

Case 1: If [latex]f(x)=k[/latex] for all [latex]x \in (a,b)[/latex], then [latex]f^{\prime}(x)=0[/latex] for all [latex]x \in (a,b)[/latex].

Case 2: Since [latex]f[/latex] is a continuous function over the closed, bounded interval [latex][a,b][/latex], by the extreme value theorem, it has an absolute maximum. Also, since there is a point [latex]x \in (a,b)[/latex] such that [latex]f(x)>k[/latex], the absolute maximum is greater than [latex]k[/latex]. Therefore, the absolute maximum does not occur at either endpoint. As a result, the absolute maximum must occur at an interior point [latex]c \in (a,b)[/latex].
Because [latex]f[/latex] has a maximum at an interior point [latex]c[/latex], and [latex]f[/latex] is differentiable at [latex]c[/latex], by Fermat’s theorem, [latex]f^{\prime}(c)=0[/latex].

Case 3: The case when there exists a point [latex]x \in (a,b)[/latex] such that [latex]f(x)<k[/latex] is analogous to case 2, with maximum replaced by minimum.

An important point about Rolle’s theorem is that the differentiability of the function [latex]f[/latex] is critical. If [latex]f[/latex] is not differentiable, even at a single point, the result may not hold. For example, the function [latex]f(x)=|x|-1[/latex] is continuous over [latex][-1,1][/latex] and [latex]f(-1)=0=f(1)[/latex], but [latex]f^{\prime}(c) \ne 0[/latex] for any [latex]c \in (-1,1)[/latex] as shown in the following figure.

Let’s now consider functions that satisfy the conditions of Rolle’s theorem and calculate explicitly the points [latex]c[/latex] where [latex]f^{\prime}(c)=0[/latex].

Using Rolle’s Theorem

For each of the following functions, verify that the function satisfies the criteria stated in Rolle’s theorem and find all values [latex]c[/latex] in the given interval where [latex]f^{\prime}(c)=0[/latex].

1. [latex]f(x)=x^2+2x[/latex] over [latex][-2,0][/latex]
2. [latex]f(x)=x^3-4x[/latex] over [latex][-2,2][/latex]

Verify that the function [latex]f(x)=2x^2-8x+6[/latex] defined over the interval [latex][1,3][/latex] satisfies the conditions of Rolle’s theorem. Find all points [latex]c[/latex] guaranteed by Rolle’s theorem.

Hint: Find all values [latex]c[/latex], where [latex]f^{\prime}(c)=0[/latex].

The Mean Value Theorem and Its Meaning

Rolle’s theorem is a special case of the Mean Value Theorem. In Rolle’s theorem, we consider differentiable functions [latex]f[/latex] that are zero at the endpoints. The Mean Value Theorem generalizes Rolle’s theorem by considering functions that are not necessarily zero at the endpoints.
Consequently, we can view the Mean Value Theorem as a slanted version of Rolle’s theorem ((Figure)). The Mean Value Theorem states that if [latex]f[/latex] is continuous over the closed interval [latex][a,b][/latex] and differentiable over the open interval [latex](a,b)[/latex], then there exists a point [latex]c \in (a,b)[/latex] such that the tangent line to the graph of [latex]f[/latex] at [latex]c[/latex] is parallel to the secant line connecting [latex](a,f(a))[/latex] and [latex](b,f(b))[/latex].

Mean Value Theorem

Let [latex]f[/latex] be continuous over the closed interval [latex][a,b][/latex] and differentiable over the open interval [latex](a,b)[/latex]. Then, there exists at least one point [latex]c \in (a,b)[/latex] such that

[latex]f^{\prime}(c)=\frac{f(b)-f(a)}{b-a}[/latex].

The proof follows from Rolle’s theorem by introducing an appropriate function that satisfies the criteria of Rolle’s theorem. Consider the line connecting [latex](a,f(a))[/latex] and [latex](b,f(b))[/latex]. Since the slope of that line is

[latex]\frac{f(b)-f(a)}{b-a}[/latex]

and the line passes through the point [latex](a,f(a))[/latex], the equation of that line can be written as

[latex]y=\frac{f(b)-f(a)}{b-a}(x-a)+f(a)[/latex].

Let [latex]g(x)[/latex] denote the vertical difference between the point [latex](x,f(x))[/latex] and the point [latex](x,y)[/latex] on that line. Therefore,

[latex]g(x)=f(x)-\left[\frac{f(b)-f(a)}{b-a}(x-a)+f(a)\right][/latex].

Since the graph of [latex]f[/latex] intersects the secant line when [latex]x=a[/latex] and [latex]x=b[/latex], we see that [latex]g(a)=0=g(b)[/latex]. Since [latex]f[/latex] is a differentiable function over [latex](a,b)[/latex], [latex]g[/latex] is also a differentiable function over [latex](a,b)[/latex]. Furthermore, since [latex]f[/latex] is continuous over [latex][a,b][/latex], [latex]g[/latex] is also continuous over [latex][a,b][/latex]. Therefore, [latex]g[/latex] satisfies the criteria of Rolle’s theorem. Consequently, there exists a point [latex]c \in (a,b)[/latex] such that [latex]g^{\prime}(c)=0[/latex].
Since

[latex]g^{\prime}(x)=f^{\prime}(x)-\frac{f(b)-f(a)}{b-a}[/latex],

we see that

[latex]g^{\prime}(c)=f^{\prime}(c)-\frac{f(b)-f(a)}{b-a}[/latex].

Since [latex]g^{\prime}(c)=0[/latex], we conclude that

[latex]f^{\prime}(c)=\frac{f(b)-f(a)}{b-a}[/latex].

In the next example, we show how the Mean Value Theorem can be applied to the function [latex]f(x)=\sqrt{x}[/latex] over the interval [latex][0,9][/latex]. The method is the same for other functions, although sometimes with more interesting consequences.

Verifying that the Mean Value Theorem Applies

For [latex]f(x)=\sqrt{x}[/latex] over the interval [latex][0,9][/latex], show that [latex]f[/latex] satisfies the hypothesis of the Mean Value Theorem, and therefore there exists at least one value [latex]c \in (0,9)[/latex] such that [latex]f^{\prime}(c)[/latex] is equal to the slope of the line connecting [latex](0,f(0))[/latex] and [latex](9,f(9))[/latex]. Find these values [latex]c[/latex] guaranteed by the Mean Value Theorem.

One application that helps illustrate the Mean Value Theorem involves velocity. For example, suppose we drive a car for 1 hr down a straight road with an average velocity of 45 mph. Let [latex]s(t)[/latex] and [latex]v(t)[/latex] denote the position and velocity of the car, respectively, for [latex]0 \le t \le 1[/latex] hr. Assuming that the position function [latex]s(t)[/latex] is differentiable, we can apply the Mean Value Theorem to conclude that, at some time [latex]c \in (0,1)[/latex], the speed of the car was exactly [latex]v(c)=s^{\prime}(c)=\frac{s(1)-s(0)}{1-0}=45[/latex] mph.

Mean Value Theorem and Velocity

If a rock is dropped from a height of 100 ft, its position [latex]t[/latex] seconds after it is dropped until it hits the ground is given by the function [latex]s(t)=-16t^2+100[/latex].

1. Determine how long it takes before the rock hits the ground.
2. Find the average velocity [latex]v_{\text{avg}}[/latex] of the rock between the time it is released and the time it hits the ground.
3. Find the time [latex]t[/latex] guaranteed by the Mean Value Theorem when the instantaneous velocity of the rock is [latex]v_{\text{avg}}[/latex].
Suppose a ball is dropped from a height of 200 ft. Its position at time [latex]t[/latex] is [latex]s(t)=-16t^2+200[/latex]. Find the time [latex]t[/latex] when the instantaneous velocity of the ball equals its average velocity.

Hint: First, determine how long it takes for the ball to hit the ground. Then, find the average velocity of the ball from the time it is dropped until it hits the ground.

Corollaries of the Mean Value Theorem

Let’s now look at three corollaries of the Mean Value Theorem. These results have important consequences, which we use in upcoming sections.

At this point, we know the derivative of any constant function is zero. The Mean Value Theorem allows us to conclude that the converse is also true. In particular, if [latex]f^{\prime}(x)=0[/latex] for all [latex]x[/latex] in some interval [latex]I[/latex], then [latex]f(x)[/latex] is constant over that interval. This result may seem intuitively obvious, but it has important implications that are not obvious, and we discuss them shortly.

Corollary 1: Functions with a Derivative of Zero

Let [latex]f[/latex] be differentiable over an interval [latex]I[/latex]. If [latex]f^{\prime}(x)=0[/latex] for all [latex]x \in I[/latex], then [latex]f(x)[/latex] is constant for all [latex]x \in I[/latex].

Since [latex]f[/latex] is differentiable over [latex]I[/latex], [latex]f[/latex] must be continuous over [latex]I[/latex]. Suppose [latex]f(x)[/latex] is not constant for all [latex]x[/latex] in [latex]I[/latex]. Then there exist [latex]a,b \in I[/latex], where [latex]a \ne b[/latex] and [latex]f(a) \ne f(b)[/latex]. Choose the notation so that [latex]a<b[/latex]. Therefore, [latex]\frac{f(b)-f(a)}{b-a} \ne 0[/latex].
Since [latex]f[/latex] is a differentiable function, by the Mean Value Theorem, there exists [latex]c \in (a,b)[/latex] such that

[latex]f^{\prime}(c)=\frac{f(b)-f(a)}{b-a}[/latex].

Therefore, there exists [latex]c \in I[/latex] such that [latex]f^{\prime}(c) \ne 0[/latex], which contradicts the assumption that [latex]f^{\prime}(x)=0[/latex] for all [latex]x \in I[/latex].

From (Figure), it follows that if two functions have the same derivative, they differ by, at most, a constant.

Corollary 2: Constant Difference Theorem

If [latex]f[/latex] and [latex]g[/latex] are differentiable over an interval [latex]I[/latex] and [latex]f^{\prime}(x)=g^{\prime}(x)[/latex] for all [latex]x \in I[/latex], then [latex]f(x)=g(x)+C[/latex] for some constant [latex]C[/latex].

Let [latex]h(x)=f(x)-g(x)[/latex]. Then, [latex]h^{\prime}(x)=f^{\prime}(x)-g^{\prime}(x)=0[/latex] for all [latex]x \in I[/latex]. By Corollary 1, there is a constant [latex]C[/latex] such that [latex]h(x)=C[/latex] for all [latex]x \in I[/latex]. Therefore, [latex]f(x)=g(x)+C[/latex] for all [latex]x \in I[/latex].

The third corollary of the Mean Value Theorem discusses when a function is increasing and when it is decreasing. Recall that a function [latex]f[/latex] is increasing over [latex]I[/latex] if [latex]f(x_1)<f(x_2)[/latex] whenever [latex]x_1<x_2[/latex], whereas [latex]f[/latex] is decreasing over [latex]I[/latex] if [latex]f(x_1)>f(x_2)[/latex] whenever [latex]x_1<x_2[/latex]. Using the Mean Value Theorem, we can show that if the derivative of a function is positive, then the function is increasing; if the derivative is negative, then the function is decreasing ((Figure)). We make use of this fact in the next section, where we show how to use the derivative of a function to locate local maximum and minimum values of the function, and how to determine the shape of the graph.
This fact is important because it means that for a given function [latex]f[/latex], if there exists a function [latex]F[/latex] such that [latex]F^{\prime}(x)=f(x)[/latex], then the only other functions that have a derivative equal to [latex]f[/latex] are [latex]F(x)+C[/latex] for some constant [latex]C[/latex]. We discuss this result in more detail later in the chapter.

Corollary 3: Increasing and Decreasing Functions

Let [latex]f[/latex] be continuous over the closed interval [latex][a,b][/latex] and differentiable over the open interval [latex](a,b)[/latex].

1. If [latex]f^{\prime}(x)>0[/latex] for all [latex]x \in (a,b)[/latex], then [latex]f[/latex] is an increasing function over [latex][a,b][/latex].
2. If [latex]f^{\prime}(x)<0[/latex] for all [latex]x \in (a,b)[/latex], then [latex]f[/latex] is a decreasing function over [latex][a,b][/latex].

We will prove 1.; the proof of 2. is similar. Suppose [latex]f[/latex] is not an increasing function on [latex]I[/latex]. Then there exist [latex]a[/latex] and [latex]b[/latex] in [latex]I[/latex] such that [latex]a<b[/latex], but [latex]f(a) \ge f(b)[/latex]. Since [latex]f[/latex] is a differentiable function over [latex]I[/latex], by the Mean Value Theorem there exists [latex]c \in (a,b)[/latex] such that

[latex]f^{\prime}(c)=\frac{f(b)-f(a)}{b-a}[/latex].

Since [latex]f(a) \ge f(b)[/latex], we know that [latex]f(b)-f(a) \le 0[/latex]. Also, [latex]a<b[/latex] tells us that [latex]b-a>0[/latex]. We conclude that [latex]f^{\prime}(c)=\frac{f(b)-f(a)}{b-a} \le 0[/latex]. However, [latex]f^{\prime}(x)>0[/latex] for all [latex]x \in I[/latex]. This is a contradiction, and therefore [latex]f[/latex] must be an increasing function over [latex]I[/latex].

Key Concepts

• If [latex]f[/latex] is continuous over [latex][a,b][/latex] and differentiable over [latex](a,b)[/latex] and [latex]f(a)=0=f(b)[/latex], then there exists a point [latex]c \in (a,b)[/latex] such that [latex]f^{\prime}(c)=0[/latex]. This is Rolle’s theorem.
• If [latex]f[/latex] is continuous over [latex][a,b][/latex] and differentiable over [latex](a,b)[/latex], then there exists a point [latex]c \in (a,b)[/latex] such that [latex]f^{\prime}(c)=\frac{f(b)-f(a)}{b-a}[/latex]. This is the Mean Value Theorem.
• If [latex]f^{\prime}(x)=0[/latex] over an interval [latex]I[/latex], then [latex]f[/latex] is constant over [latex]I[/latex].
• If two differentiable functions [latex]f[/latex] and [latex]g[/latex] satisfy [latex]f^{\prime}(x)=g^{\prime}(x)[/latex] over [latex]I[/latex], then [latex]f(x)=g(x)+C[/latex] for some constant [latex]C[/latex].
• If [latex]f^{\prime}(x)>0[/latex] over an interval [latex]I[/latex], then [latex]f[/latex] is increasing over [latex]I[/latex]. If [latex]f^{\prime}(x)<0[/latex] over [latex]I[/latex], then [latex]f[/latex] is decreasing over [latex]I[/latex].

1. Why do you need continuity to apply the Mean Value Theorem? Construct a counterexample.
2. Why do you need differentiability to apply the Mean Value Theorem? Find a counterexample.
3. When are Rolle’s theorem and the Mean Value Theorem equivalent?
4. If you have a function with a discontinuity, is it still possible to have [latex]f^{\prime}(c)(b-a)=f(b)-f(a)[/latex]? Draw such an example or prove why not.

For the following exercises, determine over what intervals (if any) the Mean Value Theorem applies. Justify your answer.

5. [latex]y= \sin (\pi x)[/latex]
6. [latex]y=\frac{1}{x^3}[/latex]
7. [latex]y=\sqrt{4-x^2}[/latex]
8. [latex]y=\sqrt{x^2-4}[/latex]
9. [latex]y=\ln (3x-5)[/latex]

For the following exercises, graph the functions on a calculator and draw the secant line that connects the endpoints. Estimate the number of points [latex]c[/latex] such that [latex]f^{\prime}(c)(b-a)=f(b)-f(a)[/latex].

10. [T] [latex]y=3x^3+2x+1[/latex] over [latex][-1,1][/latex]
11. [T] [latex]y= \tan (\frac{\pi}{4}x)[/latex] over [latex][-\frac{3}{2},\frac{3}{2}][/latex]
12. [T] [latex]y=x^2 \cos (\pi x)[/latex] over [latex][-2,2][/latex]
13.
[T] [latex]y=x^6-\frac{3}{4}x^5-\frac{9}{8}x^4+\frac{15}{16}x^3+\frac{3}{32}x^2+\frac{3}{16}x+\frac{1}{32}[/latex] over [latex][-1,1][/latex]

For the following exercises, use the Mean Value Theorem and find all points [latex]0<c<2[/latex] such that [latex]f(2)-f(0)=f^{\prime}(c)(2-0)[/latex].

14. [latex]f(x)=x^3[/latex]
15. [latex]f(x)= \sin (\pi x)[/latex]
16. [latex]f(x)= \cos (2\pi x)[/latex]
17. [latex]f(x)=1+x+x^2[/latex]
18. [latex]f(x)=(x-1)^{10}[/latex]
19. [latex]f(x)=(x-1)^9[/latex]

For the following exercises, show there is no [latex]c[/latex] such that [latex]f(1)-f(-1)=f^{\prime}(c)(2)[/latex]. Explain why the Mean Value Theorem does not apply over the interval [latex][-1,1][/latex].

20. [latex]f(x)=|x-\frac{1}{2}|[/latex]
21. [latex]f(x)=\frac{1}{x^2}[/latex]
22. [latex]f(x)=\sqrt{|x|}[/latex]
23. [latex]f(x)=⌊x⌋[/latex] (Hint: This is called the floor function and it is defined so that [latex]f(x)[/latex] is the largest integer less than or equal to [latex]x[/latex].)

For the following exercises, determine whether the Mean Value Theorem applies for the functions over the given interval [latex][a,b][/latex]. Justify your answer.

24. [latex]y=e^x[/latex] over [latex][0,1][/latex]
25. [latex]y=\ln (2x+3)[/latex] over [latex][-\frac{3}{2},0][/latex]
26. [latex]f(x)= \tan (2\pi x)[/latex] over [latex][0,2][/latex]
27. [latex]y=\sqrt{9-x^2}[/latex] over [latex][-3,3][/latex]
28. [latex]y=\frac{1}{|x+1|}[/latex] over [latex][0,3][/latex]
29. [latex]y=x^3+2x+1[/latex] over [latex][0,6][/latex]
30. [latex]y=\frac{x^2+3x+2}{x}[/latex] over [latex][-1,1][/latex]
31. [latex]y=\frac{x}{ \sin (\pi x)+1}[/latex] over [latex][0,1][/latex]
32. [latex]y=\ln (x+1)[/latex] over [latex][0,e-1][/latex]
33. [latex]y=x \sin (\pi x)[/latex] over [latex][0,2][/latex]
34. [latex]y=5+|x|[/latex] over [latex][-1,1][/latex]

For the following exercises, consider the roots of the equation.

35. Show that the equation [latex]y=x^3+3x^2+16[/latex] has exactly one real root. What is it?

36.
Find the conditions for exactly one root (double root) for the equation [latex]y=x^2+bx+c[/latex].

37. Find the conditions for [latex]y=e^x-b[/latex] to have one root. Is it possible to have more than one root?

For the following exercises, use a calculator to graph the function over the interval [latex][a,b][/latex] and graph the secant line from [latex]a[/latex] to [latex]b[/latex]. Use the calculator to estimate all values of [latex]c[/latex] as guaranteed by the Mean Value Theorem. Then, find the exact value of [latex]c[/latex], if possible, or write the final equation and use a calculator to estimate to four digits.

38. [T] [latex]y= \tan (\pi x)[/latex] over [latex][-\frac{1}{4},\frac{1}{4}][/latex]
39. [T] [latex]y=\frac{1}{\sqrt{x+1}}[/latex] over [latex][0,3][/latex]
40. [T] [latex]y=|x^2+2x-4|[/latex] over [latex][-4,0][/latex]
41. [T] [latex]y=x+\frac{1}{x}[/latex] over [latex][\frac{1}{2},4][/latex]
42. [T] [latex]y=\sqrt{x+1}+\frac{1}{x^2}[/latex] over [latex][3,8][/latex]

43. At 10:17 a.m., you pass a police car at 55 mph that is stopped on the freeway. You pass a second police car at 55 mph at 10:53 a.m., which is located 39 mi from the first police car. If the speed limit is 60 mph, can the police cite you for speeding?

44. Two cars drive from one stoplight to the next, leaving at the same time and arriving at the same time. Is there ever a time when they are going the same speed? Prove or disprove.

45. Show that [latex]y= \sec^2 x[/latex] and [latex]y= \tan^2 x[/latex] have the same derivative. What can you say about [latex]y= \sec^2 x - \tan^2 x[/latex]?

46. Show that [latex]y= \csc^2 x[/latex] and [latex]y= \cot^2 x[/latex] have the same derivative. What can you say about [latex]y= \csc^2 x - \cot^2 x[/latex]?
mean value theorem
if [latex]f[/latex] is continuous over [latex][a,b][/latex] and differentiable over [latex](a,b)[/latex], then there exists [latex]c \in (a,b)[/latex] such that [latex]f^{\prime}(c)=\frac{f(b)-f(a)}{b-a}[/latex]

rolle’s theorem
if [latex]f[/latex] is continuous over [latex][a,b][/latex] and differentiable over [latex](a,b)[/latex], and if [latex]f(a)=f(b)[/latex], then there exists [latex]c \in (a,b)[/latex] such that [latex]f^{\prime}(c)=0[/latex]
5.2 The Fundamental Data Structures within StdRegions

In almost all object-oriented languages (which includes C++), there exist the concepts of class attributes and object attributes. Class attributes are those attributes shared by all object instances (both immediate and derived objects) of a particular class definition, and object attributes (sometimes called data members) are those attributes whose values vary from object to object and hence help to characterize (or make unique) a particular object. In C++, object attributes are specified in a header file containing class declarations; within a class declaration, attributes are grouped by their accessibility: public attributes, protected attributes and private attributes. A detailed discussion of the nuances of these categories is beyond the scope of this guide; we refer the interested reader to the following books for further details: [62, 55]. For our purposes, the main thing to appreciate is that these categories dictate access patterns within the inheritance hierarchy and to the “outside” world (i.e. access from outside the object). We have summarized the relationships between the categories and their accessibility in Tables 5.1, 5.2 and 5.3.

Within the StdRegions directory of the library, there exists a class inheritance hierarchy designed to encourage re-use of core algorithms (while simultaneously trying to minimize duplication of code). We present this class hierarchy in Figure 5.5. As is seen in Figure 5.5, the StdRegions hierarchy consists of three levels: the base level from which all StdRegion objects are derived is StdExpansion. This object is then specialized by dimension, yielding StdExpansion0D, StdExpansion1D, StdExpansion2D and StdExpansion3D. The dimension-specific objects are then specialized based upon shape. The object attributes (variables) at various levels of the hierarchy can be understood in light of Figure 5.6.
At its core, an expansion is a means of representing a function over a canonically-defined region of space evaluated at a collection of point positions. The various data members hold information to allow all these basic building blocks to be specified. The various private, protected and public data members contained within StdRegions are provided in the subsequent sections.

5.2.1 Variables at the Level of StdExpansion

Private: There are private methods but no private data members within StdExpansion.

Protected:
• Array of Basis Shared Pointers: m_base
• Integer element id: m_elmt_id
• Integer total number of coefficients in the expansion: m_ncoeffs
• Matrix Manager: m_stdMatrixManager
• Matrix Manager: m_stdStaticCondMatrixManager
• IndexKeyMap Matrix Manager: m_IndexMapManager

Public: There are public methods but no public data members within StdExpansion.

5.2.2 Variables at the Level of StdExpansion$D for various Dimensions

Private: There are private methods but no private data members within StdExpansion$D.

Protected:
• 0D and 1D: std::map<int, NormalVector> m_vertexNormals
• 2D: Currently does not have any data structure. It should probably have m_edgeNormals
• 3D: std::map<int, NormalVector> m_faceNormals
• 3D: std::map<int, bool> m_negatedNormals

Public: There are public methods but no public data members within StdExpansion$D.

5.2.3 Variables at the Level of Shape-Specific StdExpansions

5.2.4 General Layout of the Basis Functions in Memory

5.2.5 General Layout

Basis functions are stored in a 1D array indexed by both mode and quadrature point. The fast index runs over quadrature points while the slow index runs over modes. This was done to match the memory access pattern of the inner product, which is the most frequently computed kernel for most solvers.
Bases are built from the tensor product of three different equation types (subsequently called Type I, Type II and Type III respectively). Here, P is the polynomial order of the basis and P[p]^α,β is the p^th order Jacobi polynomial.

A Note Concerning Adjustments For C[0] Continuity

Before going further it is worth reviewing the spatial shape of each mode. If we require C[0] continuity with surrounding elements (as we do in the continuous Galerkin method), then these local modes must be assembled together with the correct local modes in adjacent elements to create a continuous, global mode. For instance, the modes with 0 < p < P are zero at both end points z = ±1. As a result, they are trivially continuous with any other function which is also equal to zero on the boundary. These “bubble” functions may be treated entirely locally and thus are used to construct the interior modes of a basis. Only bases with p > 1 have interior modes.

All of this holds separately in one dimension. Higher dimensional bases are constructed via the tensor product of 1D basis functions. As a result, we end up with a greater number of possibilities in terms of continuity. When the tensor product is taken between two bubble functions from different bases, the result is still a bubble function. When the tensor product is taken between a hat function and a bubble function we get “edge” modes (in both 2D and 3D). These are non-zero along one edge of the standard domain and zero along the remaining edges. When the tensor product is taken between two hat functions, they form a “vertex” mode which reaches its maximum at one of the vertices and is non-zero on two edges. The 3D bases are constructed similarly.

Based upon this convention, the 1D basis set consists of vertex modes and bubble modes. The 2D basis function set consists of vertex modes, edge modes and bubble modes. The 3D basis set contains vertex modes, edge modes, face modes and bubble modes.
5.2.6 2D Geometries

Quadrilateral Element Memory Organization

Quads have the simplest memory organization. The quadrilateral basis is composed of the tensor product of two Type I functions ϕ[p](ξ[0,i])ϕ[q](ξ[1,j]), indexed by mode and quadrature point, where nq<b> is the number of quadrature points for the b^th basis. Unlike certain mode orderings (e.g. Karniadakis and Sherwin [46]), the two hat functions are accessed as the first and second modes in memory, with interior modes placed afterward. Thus, the first two modes of each 1D basis correspond to the two hat functions.

Triangle Element Memory Organization

Due to the use of collapsed coordinates, triangular element bases are formed via the tensor product of one basis function of Type I and one of Type II, i.e. ϕ[p](η[0,i])ϕ[pq](η[1,j]). Since ϕ[p] is also a Type I function, its memory ordering is identical to that used for quads. The second function is complicated by the mixing of ξ[0] and ξ[1] in the construction of η[1]. In particular, this means that the basis function has two modal indices, p and q. While p can run all the way to P, the number of q modes depends on the value of the p index, such that 0 ≤ q < P - p. Thus, for p = 0, the q index can run all the way up to P. When p = 1, it runs up to P - 1 and so on. Memory is laid out in the same way starting with p = 0. To access all values in order, we iterate as follows:

mode = 0
for p in P:
    for q in P - p:
        out[mode*nq + q] = basis0[p*nq]*basis1[mode*nq + q]
    mode += P-p

Notice the use of the extra “mode” variable. Since the maximum value of q changes with p, basis1 is not simply a linearized matrix and instead has a triangular structure which necessitates keeping track of our current memory location.

The collapsed coordinate system introduces one extra subtlety. One mode represents the top right vertex in the standard basis. However, when we move to the standard element basis, we are dealing with a triangle which only has three vertices. During the transformation, the top right vertex collapses into the top left vertex.
If we naively construct an operator by iterating through all of our modes, the contribution from this vertex to mode Φ[01] will not be included. To deal with this, we add its contribution as a correction when computing a kernel. The correction is Φ[01] = ϕ[0]ϕ[01] + ϕ[1]ϕ[10] for a triangle.

5.2.7 3D Geometries

Hexahedral Element Memory Organization

The hexahedral element does not differ much from the quadrilateral as it is simply the product of three Type I functions.

Prismatic Element Memory Organization

Cross sections of a triangular prism yield either a quad or a triangle depending on the chosen direction. The basis, therefore, looks like a combination of the two different 2D geometries. Taking ϕ[p]ϕ[pr] on its own produces a triangular face, while taking ϕ[p]ϕ[q] on its own produces a quadrilateral face. When the three basis functions are combined into a single array (as in the inner product kernel), modes are accessed in the order p, q, r with r being the fastest index and p the slowest. The access pattern for the prism thus looks like

    mode_pqr = 0
    mode_pr = 0
    for p in P:
        for q in Q:
            for r in P - p:
                out[mode_pqr*nq + r] = basis0[p*nq]*basis1[q*nq]*basis2[mode_pr + r]
            mode_pqr += P - p
        mode_pr += P - p

As with the triangle, we have to deal with complications due to collapsed coordinates. This time, the singular vertex from the triangle occurs along an entire edge of the prism. Our correction must be added to a collection of modes indexed by q.

Tetrahedral Element Memory Organization

The tetrahedral element is the most complicated of the element constructions. It cannot simply be formed as the composition of multiple triangles since η[2] is constructed by mixing three coordinate directions. We thus need to introduce our first Type III function. The r index is constrained by both the p and q indices: it runs over 0 ≤ r < P − p − q in a similar manner to the Type II function.
Our typical access pattern is thus

    mode_pqr = 0
    mode_pq = 0
    for p in P:
        for q in P - p:
            for r in P - p - q:
                out[mode_pqr*nq + r] = basis0[p*nq]*basis1[mode_pq + q]*basis2[mode_pqr + r]
            mode_pqr += (P - p - q)
        mode_pq += (P - p)

The tetrahedral element also has to add a correction due to collapsed coordinates. Similar to the prism, the correction must be applied to an entire edge indexed by r.

Pyramidic Element Memory Organization

Like the tetrahedral element, a pyramid contains a collapsed coordinate direction which mixes three coordinates from the standard region. Unlike the tetrahedron, the collapse only occurs along one axis. Thus it is constructed from two Type I functions and one Type III function. The product ϕ[p]ϕ[q] looks like a quad construction, which reflects the quad which serves as the base of the pyramid. A typical memory access looks like

    mode_pqr = 0
    for p in P:
        for q in P - p:
            for r in P - p - q:
                out[mode_pqr*nq + r] = basis0[p*nq]*basis1[q*nq]*basis2[mode_pqr*nq + r]
            mode_pqr += (P - p - q)
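The triangular index bookkeeping in these loops is easy to get wrong, so a small sketch that mirrors the loop structures and checks the resulting mode counts against the closed-form formulas can be useful. This is plain illustrative Python, not Nektar++ code; P here counts loop iterations (half-open ranges, as in the pseudocode), and actual library conventions may use inclusive bounds.

```python
# Count modes by mirroring the loop structures above.
def tri_modes(P):
    # Triangle: p in [0, P), q in [0, P - p)
    return sum(1 for p in range(P) for q in range(P - p))

def prism_modes(P, Q):
    # Prism: p in [0, P), q in [0, Q), r in [0, P - p)
    return sum(1 for p in range(P) for q in range(Q) for r in range(P - p))

def tet_modes(P):
    # Tetrahedron: p in [0, P), q in [0, P - p), r in [0, P - p - q)
    return sum(1 for p in range(P)
                 for q in range(P - p)
                 for r in range(P - p - q))
```

The triangle count reduces to P(P+1)/2 and the tetrahedron count to P(P+1)(P+2)/6, which is a quick sanity check on the increment placement.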
Source: https://doc.nektar.info/developerguide/5.3.0/developer-guidese25.html (retrieved 2024-11-08)
N. C. Yavuz, B. Güriş, and V. Yılancı, "Searching threshold effects in the interest rate: An application to Turkey case," Physica A: Statistical Mechanics and its Applications, vol. 379, no. 2, pp. 621–627, 2007.

@article{yavuz2007threshold,
  author  = {Yavuz, Nilgun Cil and G{\"u}ri{\c{s}}, Burak and Y{\i}lanc{\i}, Veli},
  title   = {Searching threshold effects in the interest rate: An application to Turkey case},
  journal = {Physica A: Statistical Mechanics and its Applications},
  year    = {2007},
  volume  = {379},
  number  = {2},
  pages   = {621--627}
}
Source: https://avesis.comu.edu.tr/activitycitation/index/1/e443ca64-4be6-4d7f-a8ec-7704e7a8ba76 (retrieved 2024-11-07)
6(t - 1) = 9(t - 4)

t = ?

Expanding both sides gives 6t - 6 = 9t - 36. Subtracting 6t and adding 36 to both sides gives 30 = 3t, so t = 10.

The lateral area of the cylinder is 226.2 cm².

What is the lateral surface area of a cylinder? The lateral surface of an object is all of the sides of the object, excluding its base and top. The lateral surface area is the area of the lateral surface. For a right circular cylinder of radius r and height h, the lateral area is the area of the side surface of the cylinder: A = 2πrh.

According to the question:
radius of cylinder = 3 cm
height of cylinder = 12 cm

The lateral area of the cylinder:
A = 2πrh
A = 2 × (22/7) × 3 × 12
A ≈ 226.2 cm²

Hence, the lateral area of the cylinder is 226.2 cm².
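Both results are easy to verify numerically; a quick check of the algebra and the area formula:

```python
import math

# Solve 6(t - 1) = 9(t - 4): expanding gives 6t - 6 = 9t - 36, so 3t = 30.
t = 30 / 3

# Lateral surface area of a right circular cylinder: A = 2*pi*r*h,
# with r = 3 cm and h = 12 cm as in the question.
area = 2 * math.pi * 3 * 12
```

Substituting t back in, both sides of the equation equal 54, and the area evaluates to 72π ≈ 226.2 cm².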
Source: https://diemso.unix.edu.vn/question/6t-1-9t-4-br-t-58dx (retrieved 2024-11-09)
Re: Again the request: SIMULATION TO INTERPRET AND PRESENT LOGIT RESULTS

From: "Alexander M. Jais" <[email protected]>
To: [email protected]
Subject: Re: Again the request: SIMULATION TO INTERPRET AND PRESENT LOGIT RESULTS without formatting - hopefully looks better
Date: Fri, 02 Mar 2012 18:02:24 +0100

Hi Everyone,

Does anyone perhaps have an idea how I can solve the problem explained below? It would be really highly appreciated!!

All the best

Dear Statalisters,

For some reason I'm having a really hard time figuring out how I can graph my logistic regression model in Stata. I am aware of the explanation of Zellner (2009; Strategic Management Journal) and Tomz M, Wittenberg J, King G. 2001. CLARIFY: Software for interpreting and presenting statistical results, 2.0 ed. Harvard University: Cambridge, MA.

I have the following issue:

- My logistic regression has the following simplified expression: logit EquityMode PsychicDistance Means-Driven PsychDxMeansD
- Dependent variable, EquityMode: code 0 = Non-Equity and code 1 = Equity
- Interaction variable, PsychicDistance: construct based on 6 items with a 7-point Likert scale
- Independent variable, Means-Driven: construct based on 6 items with a 7-point Likert scale
- Moderation, PsychDxMeansD: PsychicDistance times Means-Driven

* In general I want to graph figures 2 and 3 in Zellner 2009 (please see link for source: http://faculty.fuqua.duke.edu/~charlesw/s591/Methods/c09_Bennet/SMJ_note_final.pdf).
* How do I code the interaction variable and independent variable, which are NOT binary – the explanation in Zellner shows only the binary interaction term (see lines 3, 10, … in the do-file below; based on Zellner 2009, see pages 21-23)?
* Why are all independent variables set to zero in line 8?

My current do-file looks like this:

1.  set seed 9999
2.  noisily estsimp logit EquityMode PsychicDistance Means-Driven PsychDxMeansD, nolog
3.  foreach var in X Y0 Y1 Y0lb Y1lb Y0ub Y1ub dY dYlb dYub {
4.    gen `var' = .
5.  }
6.  forvalues obs = 1(1)18 {
7.    replace X = .01*(`obs'+1) in `obs'
8.    setx 0
9.    setx Means-Driven .01*(`obs'+1)
10.   foreach as_lev in 0 1 {
11.     setx PsychicDistance `as_lev' PsychDxMeansD `as_lev'*.01*(`obs'+1)
12.     simqi, genpr(Y`as_lev'_tmp) prval(1)
13.     sum Y`as_lev'_tmp, meanonly
14.     replace Y`as_lev' = r(mean) in `obs'
15.     _pctile Y`as_lev'_tmp, p(2.5, 97.5)
16.     replace Y`as_lev'lb = r(r1) in `obs'
17.     replace Y`as_lev'ub = r(r2) in `obs'
18.   }
19.   gen dY_tmp = Y1_tmp - Y0_tmp
20.   sum dY_tmp, meanonly
21.   replace dY = r(mean) in `obs'
22.   _pctile dY_tmp, p(2.5,97.5)
23.   replace dYlb = r(r1) in `obs'
24.   replace dYub = r(r2) in `obs'
25.   drop *_tmp
26. }
27. twoway rbar Y0ub Y1lb X, mw msize(1) lcolor(gs0) fcolor(gs16) || line Y0 X, color(gs0) || rspike Y1ub Y1lb X, color(gs0) lp(dot) || line Y1 X, color(gs0) || , yscale(r(0 1)) ylabel(0(.2)1) legend(off) xtitle("Means Driven") ytitle("Equity Mode") graphregion(fcolor(gs16))
Source: https://www.stata.com/statalist/archive/2012-03/msg00096.html (retrieved 2024-11-14)
T-schema

The T-schema ("truth schema", not to be confused with "Convention T") is used to check if an inductive definition of truth is valid, which lies at the heart of any realisation of Alfred Tarski's semantic theory of truth. Some authors refer to it as the "Equivalence Schema", a synonym introduced by Michael Dummett.^[1]

The T-schema is often expressed in natural language, but it can be formalized in many-sorted predicate logic or modal logic; such a formalisation is called a "T-theory." T-theories form the basis of much fundamental work in philosophical logic, where they are applied in several important controversies in analytic philosophy.

As expressed in semi-natural language (where 'S' is the name of the sentence abbreviated to S):

'S' is true if and only if S.

Example: 'snow is white' is true if and only if snow is white.

The inductive definition

By using the schema one can give an inductive definition for the truth of compound sentences. Atomic sentences are assigned truth values disquotationally. For example, the sentence "'Snow is white' is true" becomes materially equivalent with the sentence "snow is white", i.e. 'snow is white' is true if and only if snow is white. The truth of more complex sentences is defined in terms of the components of the sentence:

• A sentence of the form "A and B" is true if and only if A is true and B is true
• A sentence of the form "A or B" is true if and only if A is true or B is true
• A sentence of the form "if A then B" is true if and only if A is false or B is true; see material implication.
• A sentence of the form "not A" is true if and only if A is false
• A sentence of the form "for all x, A(x)" is true if and only if, for every possible value of x, A(x) is true.
• A sentence of the form "for some x, A(x)" is true if and only if, for some possible value of x, A(x) is true.

Predicates for truth that meet all of these criteria are called "satisfaction classes", a notion often defined with respect to a fixed language (such as the language of Peano arithmetic); these classes are considered acceptable definitions for the notion of truth.^[2]

Natural languages

Joseph Heath points out^[3] that "The analysis of the truth predicate provided by Tarski's Schema T is not capable of handling all occurrences of the truth predicate in natural language. In particular, Schema T treats only "freestanding" uses of the predicate—cases when it is applied to complete sentences." He gives as an "obvious problem" the sentence:

• Everything that Bill believes is true.

Heath argues that analyzing this sentence using the T-schema generates the sentence fragment—"everything that Bill believes"—on the right-hand side of the logical biconditional.

This page was last edited on 20 June 2024, at 00:35
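The propositional clauses of the inductive definition can be rendered directly as a recursive evaluator. This is an illustrative sketch: the nested-tuple encoding of formulas is my own, not anything from the article, and quantifiers are omitted since they need a domain to range over.

```python
# Evaluate a propositional formula against an assignment of truth values
# to atomic sentences, following the inductive clauses above.
# Formulas are nested tuples, e.g. ("and", "A", ("not", "B")).
def true_in(formula, valuation):
    if isinstance(formula, str):          # atomic sentence: look it up
        return valuation[formula]
    op = formula[0]
    if op == "not":
        return not true_in(formula[1], valuation)
    a, b = formula[1], formula[2]
    if op == "and":
        return true_in(a, valuation) and true_in(b, valuation)
    if op == "or":
        return true_in(a, valuation) or true_in(b, valuation)
    if op == "if":                        # material implication
        return (not true_in(a, valuation)) or true_in(b, valuation)
    raise ValueError(f"unknown connective: {op}")
```

Each branch of the function is a one-to-one transcription of the corresponding bullet in the inductive definition.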
Source: https://wiki2.org/en/T-schema (retrieved 2024-11-11)
How to use the XOR Function

Bottom Line: Learn how to use the XOR function in Excel to analyze attendance data.

Skill Level: Intermediate

Download the Excel File

The file that I work with in the video can be found below. You can use it to follow along and reconstruct what I'm doing in the video.

Compatibility: This file uses the new Dynamic Array Functions that are only available on the latest version of Office 365. This includes both the desktop and web app versions of Excel. I'm planning to post a bonus episode in this series that covers how to make the dashboard with older versions of Excel using pivot tables instead.

Part of a Series

Just so you have some context, this post is the first of six in a series that resulted from an Excel Hash competition. Each year, Excel geeks like myself are tasked with building a worksheet that contains specific features. We compete to see whose solution is best. It's a lot of fun.

My entry from this year is a salute to one of my favorite shows, The Office. It's a dashboard that takes simple timestamp data and turns it into an attendance reporting tool that Dwight Schrute could proudly use to police his fellow coworkers. If you'd like to see my entry for this year, watch this: Excel Hash: Attendance Report with Storm Clouds & Fireworks.

Here are the other posts in the series:

The XOR Function

Our first step in creating this attendance dashboard is to take all of the timestamp data and determine if each entry is an "In" or an "Out" entry. The logic is fairly straightforward. If there is one entry for any given employee, it means that they have come in to work and haven't yet left. A second entry would indicate that they are now out of the office. So what we are really looking for, initially, is to know whether there is an odd or an even number of entries in the running list.
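Outside of Excel, this running odd/even idea is easy to sketch. This is a plain-Python illustration of the logic (the names are made up, not from the workbook): an odd running count of an employee's name means they are in, an even count means they are out.

```python
# Label each timestamped entry "In" or "Out": an odd running count of an
# employee's name means they just came in, an even count means they left.
# The entries are assumed to be in chronological order.
def label_entries(names):
    seen = {}
    labels = []
    for name in names:
        seen[name] = seen.get(name, 0) + 1
        labels.append("In" if seen[name] % 2 == 1 else "Out")
    return labels
```

This is exactly what the Excel formulas below compute cell by cell with an expanding range.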
That way, we can label the employee "In" or "Out" of the office. That's where the XOR function comes in. This function returns a TRUE result if there is an odd number of TRUE results in a range. Likewise, it returns FALSE if there is an even number.

Note: For the purposes of our attendance tracker, the data should be sorted into chronological order.

Compatibility: The XOR function is available in Excel 2013 and later. If you have an older version you can use the ISODD(COUNTIF()) formula that I explain in the video and below.

Writing the XOR Function

To use the XOR function, simply type =XOR and Excel will prompt you to enter logical statements. You can also just feed it an array of true/false values. For example in the video, the reference range that we want is (A2=A$2:A2). This tells Excel to return TRUE if the name in cell A2 is the same as any of the names found in the range from A2 to A2. Of course, that is only one cell, but as the formula is copied down to the cells below it, the range automatically expands. (The beginning of the range will always remain A2 because of the absolute reference (dollar sign), while the end of the range will change.)

The XOR function essentially evaluates how many times the selected cell matches the entries in the selected range. It returns TRUE when there is an odd number of matches and FALSE when the number is even.

To make the report more understandable, we can wrap the function in an IF statement. If the function returns TRUE, the cell can read "In" and if FALSE, it can say "Out." So the final formula would be =IF(XOR(A2=A$2:A2),"In","Out")

Using Table Range References

One quirk with Excel is that if you use the range references as I've outlined above (A$2:A2), it may not automatically extend when you add data to the bottom of your range. So an alternative is to use table range references. Using INDEX, we can compare the first cell in a column with a range of cells from that same column.
This accomplishes the same thing we did above with the regular range reference. With our table reference, our range would be replaced with ([Employee]=INDEX([Employee],1):[@Employee]). This may look a bit gnarly if it's the first time you've dealt with the INDEX function. However, using this option is much better if data is continually being added to your report. If you don't want to use the XOR function, you can use a combination of COUNTIF and ISODD as an alternative. It would look like this: =ISODD(COUNTIF(A$2:A2,A2)) See the video above for details about how to construct that. The next video in the series will take a look at calculating the duration of time that the employees were in the office. I hope this post was helpful in explaining how the XOR function works. Please leave a comment below if you have questions about it. 20 comments • Congratulations Jon for winning the Excel hash competition, your solution is awesome and demonstrates a clever use of infrequent functions, great!! It’s the first time I hear about the XOR function, Excel never stops with surprises, wow! I have a question about using table references, why was the Index function used? • Great video! Is there any calculation speed difference between the XOR method vs. the ISODD(COUNTIF ) method. Also are structured references faster than cell references. I agree that the structured reference looks more confusing, and would rather avoid them if sharing the workbook. □ Hey Rich, Great questions! I have not done speed tests on either. My hunch is that COUNTIF might be faster but I could be wrong. The structured references are going to be more bulletproof if you are adding rows to the table with that running total reference. The regular references with mixed absolute/relative references don’t always expand well in Tables. I hope that helps. Thanks again and have a nice day! 🙂 • Hi John, First, congratulations on winning the competition. 
Secondly, the XOR function isn’t available on all Excel versions. Can you please specify which versions have it? Best Regards, Meni Porat □ Thank you, Meni! I added a compatibility section above. XOR is available on Excel 2013 and later. On older versions, you can use ISODD(COUNTIF()) instead. I explain that in the video and article as well. Thanks again and have a nice day! 🙂 • I had the same question, except I found that control-shift-enter makes it work as with other array functions I’ve used. But I also noted that many of your functions in the workbook are for future functionality not yet on my version of O365. Curious about what they are (since we are only on Video 1), but also when they might be available to a wider audience. Thank you for all of your videos. I learn something with each of them. □ Hi Terry, Sorry, I should have made that more clear. The formulas use the functionality of the new Dynamic Array Formula update for Office 365 subscribers. The features are currently being rolled out to subscribers, but it does depend on which channel you are on. Currently you will have to be on the Monthly channel to get the latest updates. Here is a post and video that explains more about Dynamic Arrays. I hope that helps. Thanks again and have a nice day! 🙂 • Hi Jon, Can you re-post example workbook? It keeps crashing when I try to open it. □ Hey Colin, I’m sorry to hear that. I just reposted two files. One is the “follow along” version with the data only. The second is the final file. It doesn’t crash for me. However, it could be a compatibility issue. If you are on an older verison of Excel there could be an issue with the new Dynamic Array Functions that are being used in this file. You should still be able to open files with DA functions in older versions even though the formulas won’t work. Microsoft is slowly releasing the DA features and still working out the bugs. I hope that helps. Thanks again and have a nice day! 
🙂 • Hi, Here is a parsed example from the workbook. For some reason, I’m returning TRUE even when a repeat name is present? Dwight Schrute TRUE =XOR(A1=A$1:A1) Oscar Martinez TRUE =XOR(A2=A$1:A2) Phyllis Lapin TRUE =XOR(A3=A$1:A3) Dwight Schrute TRUE =XOR(A4=A$1:A4) □ The formula should be array-entered, with Ctrl+Shift+Enter ☆ Thanks Debra! 🙂 ☆ thanks a lot Debra ! i was encountering the same issue than Colin !!! and it does work now !! awesome ! □ Hi Colin, Those are the correct results. You won’t start getting FALSE values until row 16. The running total reference A$1:A2 is only looking for duplicates in the current row and ABOVE. Not in the rows below. • Was always in a little bit of confusion in regards to XOR function, this has helped me a lot. I will make sure to practice this all over again. □ Thank you Rishi! 🙂 • A very good job, Jon . Thanks for sharing . • Hello Jon, Thank you for this video of XOR function. I copied the name list as in your example to my Excel in my computer then typed this function below The function doesn’t seem to check through all the array values that it ends up giving TRUE although some names appear more than once in the array list. Do you know what was the issue? Am using Microsoft Office Home 2019. □ Got it! It appears that someone else had posted the same question before and someone else had given a reply on that. Thank you • I am just getting to review your dashboard series. I looked at the XOR Function, lesson 1. I downloaded the file and the final file. After watching the video I tried creating the formulas in the file but I had a problem. As long as I used the cell reference, A2, both the ISODD XOR and the COUNTIF formulas worked fine. My problem is with the formula using the @EMPLOYEE. When I enter the formula pointing to cell A2 I get tblData[@EMPLOYEE]. When I copy the formula down it give me the result but the formula does not adjust for each name. 
Here is the formula as it comes into Excel:

When I remove the tblData portion the formula does not work at all. I went to Options, Formulas, and turned off formula reference. At this setting it just gives me the cell address, A2, when I point to it. What am I doing incorrectly? I used column I for the @EMPLOYEE formula (all filled in with IN), column J for the ISODD, COUNTIF formula (all filled in with IN and OUT) and K for the XOR formula (all filled in with TRUE and FALSE).

Also, the formulas did not automatically fill for each employee, I had to copy them down. I tried using the Ctrl+Shift+Enter to put the curly braces around the formula and it still did not work for the @EMPLOYEE formula.

Thank you for your assistance in resolving my question.
Source: https://www.excelcampus.com/functions/xor-function-explained/ (retrieved 2024-11-04)
Understanding Concrete Strength Variation through Standard Deviation

Concrete is a crucial construction material, and assessing its compressive strength is vital. The standard deviation method gauges the consistency of compressive strength results within a concrete batch. This statistical approach helps control variations in test results for a specific concrete batch.

Explaining Standard Deviation

In simpler terms, standard deviation illustrates the spread or diversity of results from the mean or expected value. It uses statistical analyses like correlation, hypothesis testing, analysis of variance, and regression analysis to compare compressive strength series for concrete batches.

Two Methods of Calculating Standard Deviation

1. Assumed Standard Deviation

If there are insufficient test results, an assumed standard deviation is used. Once a minimum of 30 cube test samples is available, the derived standard deviation is calculated instead. The assumed standard deviation values are determined according to the concrete grade, based on IS-456 Table 8.

Table 1: Assumed Standard Deviation

Sl. No | Grade of Concrete | Characteristic Compressive Strength (N/mm²) | Assumed Standard Deviation (N/mm²)
1 | M10 | 10 | 3.5
2 | M15 | 15 | 3.5
3 | M20 | 20 | 4.0
… | … | … | …

Note: Values are site-control-dependent, emphasizing proper storage, batching, water addition, and regular quality checks.

2. Derived Standard Deviation

When more than 30 test results are available, the standard deviation is derived using the formula:

ϕ = √( Σ(x − μ)² / (n − 1) )

• ϕ: Standard Deviation
• μ: Average Strength of Concrete
• n: Number of Samples
• x: Crushing value of concrete in N/mm²

A lower standard deviation indicates better quality control, aligning test results closely with the mean value.

Understanding Standard Deviation Variation

Fig 1: Variation Curve for Standard Deviation

The permissible deviation in mean compressive strength, as outlined in IS-456 Table 11, is crucial for compliance.
Table 2: Characteristic Compressive Strength Compliance Requirement

Specified Grade | Mean of Group of 4 Non-Overlapping Consecutive Test Results (N/mm²) | Individual Test Results (N/mm²)
M-15 | ≥ fck + 0.825 × derived standard deviation | ≥ fck − 3 N/mm²
M-20 and above | ≥ fck + 0.825 × derived standard deviation | ≥ fck − 4 N/mm²

Example Calculation for M60 Grade Concrete

A concrete slab of 400 Cum was poured, and 33 cubes were cast for the 28-day compressive test. The standard deviation for these 33 cubes is calculated below.

Table 3: Test Result of Concrete Cubes

Sl. No | Weight of the Cube (kg) | Max Load (kN) | Density (kg/Cum) | Compressive Strength (MPa) | Remarks
1 | 8.626 | 1366 | 3594.2 | 60.71 | Pass
2 | 8.724 | 1543 | 3635.0 | 68.57 | Pass
… | … | … | … | … | …

Table 4: Calculation of Standard Deviation

Sum of (x − μ)² = 1132.55

Standard Deviation = √( 1132.55 / (33 − 1) ) = 5.94 N/mm²

As per IS-456, for concrete of grade above M-20:

fck + 0.825 × derived standard deviation = 60 + 0.825 × 5.94 = 64.90 N/mm²

The higher value is considered, leading to a required mean strength of 64.90 N/mm².

The average compressive strength of 65.12 N/mm² from Table 3 surpasses the required 64.90 N/mm². Despite five cubes having results below 60 N/mm², the standard deviation calculation suggests the concrete can be accepted, and non-destructive tests are not mandated. This highlights the importance of statistical methods in ensuring concrete quality.
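The worked example can be recomputed in a few lines using the figures from Table 4 (small rounding differences from the article's quoted 5.94 N/mm² are expected, since the sum of squared deviations is itself rounded):

```python
# Recompute the M60 worked example from the article's figures.
sum_sq_dev = 1132.55        # sum of (x - mean)^2 over the 33 cubes, Table 4
n = 33
std_dev = (sum_sq_dev / (n - 1)) ** 0.5   # sample standard deviation

fck = 60.0                  # characteristic strength of M60, N/mm^2
target_mean = fck + 0.825 * std_dev       # IS-456 acceptance target
```

The mean strength of 65.12 N/mm² from Table 3 exceeds this target, which is why the batch passes.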
Source: https://www.civilengineeringtopics.com/understanding-concrete-strength-variation-through-standard-deviation/ (retrieved 2024-11-08)
Scale e Serpenti (Snakes and Ladders): welcome back game

Silvia Salvadori, created on September 7, 2024 — a start-of-school game for first-year middle-school students.

Snakes and Ladders: welcome back to school board game. Roll the dice!

Squares:
• Count from 11 to 15
• What's the English for "bianco"? Write the word
• Box 9
• What's the right pronunciation of the word FRIEND?
• What's the day after Wednesday?
• Name 6 food words
• Ask a classmate how they are
• Write the number "8" in English
• What's the month before June?
• Choose the right word: He is/am/are a student.
• Box 55
• Choose the right word: You am/is/are great!
• The month after October is...
• What day is it today?
• Say goodbye to a teacher who is leaving
• Can you spell your name?
• Name 5 school objects
• Write the number after fourteen
• Can you spell the word "Kate"?

Rules: Players start with a token – which represents each of them – in the initial square and take turns rolling the die. The tokens move according to the numbering on the board, in ascending order. If, at the end of a move, a player lands on a square where a ladder begins, they move up it to the square where it ends. If, on the other hand, they land on a square where a snake's tail begins, they move down it to the square where its head ends. The player who reaches the final square is the winner.

There is a variation where, if a player is six or fewer squares away from the end, they must roll precisely the number needed to reach it. If the number rolled exceeds the number of remaining squares, the player cannot move. If the player falls on the bottom of a ladder, they move up to the top square where the ladder ends. If the player lands on a square where the tail of a snake starts, they go down to a lower square where the head is located.
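The movement rules above (including the exact-roll variation near the end) can be sketched as a single move function. This is an illustrative sketch with made-up names, not part of the Genially activity:

```python
# One snakes-and-ladders move.  `jumps` maps a landed-on square to the
# square the player ends up on: ladder bottom -> top, snake tail -> head.
# Squares run 1..last; with exact_finish, overshooting means no move.
def move(position, roll, jumps, last, exact_finish=True):
    target = position + roll
    if target > last:
        return position if exact_finish else last
    return jumps.get(target, target)
```

A game loop would simply call this for each player in turn until someone reaches the last square.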
Source: https://view.genially.com/66dc6bd52b62da0f9a913a0c/interactive-content-scale-e-serpenti-welcome-back-game (retrieved 2024-11-13)
Watts to Megawatts Conversion: Easy Power Unit Calculator

Watts to megawatts conversion

Watts (W) and megawatts (MW) are units of power commonly used to measure the rate of energy transfer or the rate at which work is done. Here's an explanation of each:

Watts (W)

The watt is the standard unit of power in the International System of Units (SI). It is defined as one joule of energy transferred per second. In simple terms, one watt is equal to one joule of energy used or produced in one second. Watts are typically used to measure small-scale power requirements, such as the power consumption of household appliances, light bulbs, and electronic devices. For example, a standard 60-watt light bulb consumes 60 joules of energy every second it is turned on.

Megawatts (MW)

A megawatt is a unit of power that is equal to one million watts or one thousand kilowatts. It is often used to describe larger power outputs, such as those of power plants, industrial facilities, and large electrical grids. A megawatt is a much larger unit compared to a watt and is used to measure power on a larger scale. For example, a medium-sized wind turbine may have a power output of 2.5 MW, and a small gas power plant might have a capacity of a few hundred megawatts.

How to convert watts to megawatts

To convert watts (W) to megawatts (MW), divide the power value in watts by 1,000,000, since there are one million watts in a megawatt. Here's the conversion formula:

P(MW) = P(W) / 1,000,000

Example – convert 5 W to megawatts:

P(MW) = 5 W / 1,000,000 = 0.000005 MW

Watts to megawatts conversion table

Power (watts) | Power (megawatts)
0 W | 0 MW
1 W | 0.000001 MW
10 W | 0.00001 MW
100 W | 0.0001 MW
1000 W | 0.001 MW
10000 W | 0.01 MW
100000 W | 0.1 MW
1000000 W | 1 MW

FAQs on Watts to megawatts conversion

What is 1 MW equal to?

1 MW (megawatt) is equal to 1,000,000 watts.
It is a unit of power commonly used to measure large-scale energy outputs, such as those of power plants and industrial facilities.

How do you convert MW to watts?

To convert megawatts (MW) to watts (W), simply multiply the power value in megawatts by 1,000,000. The formula is: Watts (W) = Megawatts (MW) × 1,000,000.

How many units is 1 MW?

The unit MW represents power, not energy consumption. It indicates the rate at which energy is produced or consumed. Energy consumption is typically measured in kilowatt-hours (kWh) or megawatt-hours (MWh), not in "units".

How many kW is 1 unit?

"Unit" here refers to 1 kilowatt-hour (kWh). So, 1 unit is equal to 1 kilowatt (kW) of power used for one hour.

How much watt is 1 unit?

A "unit" refers to 1 kilowatt-hour (kWh). Since 1 kWh is equal to 1,000 watts used for one hour, 1 unit is equal to 1,000 watts sustained for an hour.

How many units is 1 kV?

kV stands for kilovolt, a unit of electrical potential difference. It is not directly related to energy consumption or power units. The unit used for energy consumption is the kilowatt-hour.

What is 1000 watts per unit?

1000 watts used for one hour is equivalent to 1 kilowatt-hour (kWh), i.e. one unit of electricity.

How many units does a 1.5-ton AC consume?

The energy consumption of an air conditioner depends on various factors such as its efficiency, usage patterns, and the local climate. However, on average, a 1.5-ton (18,000 BTU) air conditioner draws around 1.2 to 1.6 kilowatts (kW). So, if used continuously for an hour, it would consume approximately 1.2 to 1.6 units (kWh) of electricity.
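The conversion formulas above reduce to a pair of one-line helpers:

```python
# Power-unit conversions: P(MW) = P(W) / 1,000,000 and the inverse.
def watts_to_megawatts(watts):
    return watts / 1_000_000

def megawatts_to_watts(megawatts):
    return megawatts * 1_000_000
```

For example, watts_to_megawatts(5) reproduces the 0.000005 MW worked example, and megawatts_to_watts(2.5) gives the wind-turbine figure in watts.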
{"url":"https://infinitylearn.com/surge/convert/watts-to-megawatts","timestamp":"2024-11-07T08:50:47Z","content_type":"text/html","content_length":"167162","record_id":"<urn:uuid:9eee0df0-2504-4002-ae1e-6c86f7a023dc>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00138.warc.gz"}
Generating Partition of A Set

Problem: given a set of discrete values {A,B,C}, generate all partitions of the set into k subsets, where no subset is empty and no value appears in more than one subset of a partition.

• No empty subset is returned.
• No duplicate value exists across the subsets of a partition, i.e. when partitioning into two subsets, if one subset contains {A} then the other subset must not contain A.
• If k is 1, return the original set as the only partition.
• If k is equal to the number of values in the set, return the partition in which each element is its own subset.

Given input set {A,B,C}: if k is 1, then the number of generated partitions is 1, which is equal to the original set: {A,B,C}. If k is 2, then the number of generated partitions is 3, consisting of {{A},{B,C}}, {{B},{A,C}}, and {{C},{A,B}}. If k is 3, then the number of generated partitions is 1, which is {{A},{B},{C}}.

Another example, given input set {A,B,C,D}: if k is 1, then the number of generated partitions is 1: {A,B,C,D}. If k is 2, then the number of generated partitions is 7, which is {{A},{B,C,D}}, {{B},{A,C,D}}, {{C},{A,B,D}}, {{D},{A,B,C}}, {{A,B},{C,D}}, {{A,C},{B,D}}, {{A,D},{B,C}}. If k is 3, then the number of generated partitions is 6, which is {{A},{B},{C,D}}, {{A},{C},{B,D}}, {{A},{D},{B,C}}, {{A,B},{C},{D}}, {{A,C},{B},{D}}, {{A,D},{B},{C}}. If k is 4, then the number of generated partitions is 1: {{A},{B},{C},{D}}.

In mathematics, this problem is a subproblem of combinatorics, where the number of partitions can be computed using the Stirling number of the second kind [1], which takes the number of objects n and the number of partitions k, and returns the number of possible partitions of n objects into k subsets. In computer science, the problem is called "Partition of a set". If you are a thinker and interested in solving this problem, go ahead, grab some paper and a pencil and close this journal.

Now, back to the problem. This is an old problem, older than the computer itself.
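As a quick check on the counts quoted above, the Stirling number of the second kind can be computed with its standard recurrence S(n,k) = k*S(n-1,k) + S(n-1,k-1). A minimal Python sketch (the function name is mine, not from this journal):

```python
def stirling2(n, k):
    """Stirling number of the second kind: the number of ways to partition
    n labelled objects into k non-empty subsets."""
    if k == 0:
        return 1 if n == 0 else 0
    if k > n:
        return 0
    if k in (1, n):
        return 1
    # Recurrence: the nth object either joins one of the k subsets of a
    # partition of n-1 objects, or forms a new singleton subset.
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)
```

For example, `stirling2(4, 2)` gives 7 and `stirling2(4, 3)` gives 6, matching the {A,B,C,D} examples above.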
There are more papers out there trying to be the fastest algorithm using iterative or parallel methods (of course the latest paper is the winner). Some of them use a sequence of bits to mark which group of the partition a value belongs to. Here is the gist of it: given three values with two partitions, the possible sequences of bits are,

0 0 0 = 1 partition
0 0 1 = 2 partitions
0 1 0 = 2 partitions
0 1 1 = 2 partitions
1 0 0 = 2 partitions
1 0 1 = 2 partitions
1 1 0 = 2 partitions
1 1 1 = 1 partition

Can you see the pattern? The values whose bit is 0 form one group and the values whose bit is 1 form the other group. There are some problems with this solution. First, we must check for duplicate partitions, i.e. 001 is a duplicate of 110. Second, we must generate all bit sequences for all values, including sequences that do not yield the requested number of subsets, for example 000 and 111 put every value in a single subset even though we need 2 partitions.

There is an alternative that needs no duplicate check and does not generate more than the k requested partitions. The solution uses a recursive function. Here is the algorithm,

Input:
• A: set of values
• k: number of partitions

Output:
P: a set containing all partitions of A into k subsets, without empty subsets or duplicate values.

(1) If k equals 1, then return {A} as the only partition.
(2) If k equals the length of A, then
(2.1) create a new partition p
(2.2) for each value in A as a
(2.2.1) add {a} as a subset of p
(2.3) return p as the only partition.
(3) Create a new set B for the partitions.
(4) Move the first element of A to a1, which makes A contain n-1 elements.
(5) Call function partition with parameters A and k; save the result to A'.
(6) For each partition in A' as p
(6.1) for each subset in p as sub
(6.1.1) create a new partition p' by copying p and adding element a1 to subset sub, then add p' to B
(7) Call function partition with parameters A and k-1; save the result to A''.
(8) For each partition in A'' as p
(8.1) create a new partition p' by appending {a1} as a new subset of partition p, then add p' to B
(9) Return B.

Procedurally, if we give the set {A,B,C} with 2 as the partition count and trace the algorithm, we will see output like this,

A: partition({B,C},2) return {B},{C}
A: partition({B,C},1) return {B,C}
return B
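The recursive algorithm above can be sketched directly in Python (a sketch, not the journal's original code; the function name is illustrative):

```python
def partitions(values, k):
    """Return all partitions of `values` into exactly k non-empty subsets.

    Each partition is a list of subsets; each subset is a list of values.
    """
    values = list(values)
    n = len(values)
    if k == 1:
        return [[values]]                # the whole set as one subset
    if k == n:
        return [[[v] for v in values]]   # every element is its own subset
    a1, rest = values[0], values[1:]
    result = []
    # Put a1 into each existing subset of every partition of the rest into k subsets.
    for p in partitions(rest, k):
        for i in range(len(p)):
            result.append(p[:i] + [[a1] + p[i]] + p[i + 1:])
    # Add {a1} as its own subset to every partition of the rest into k-1 subsets.
    for p in partitions(rest, k - 1):
        result.append([[a1]] + p)
    return result
```

Calling `partitions("ABC", 2)` yields the 3 partitions listed earlier, and `partitions("ABCD", 2)` yields 7, matching the counts from the Stirling numbers.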
{"url":"https://kilabit.info/journal/2015/11/Generating_Partition_of_A_Set/","timestamp":"2024-11-04T05:03:05Z","content_type":"text/html","content_length":"20452","record_id":"<urn:uuid:87fb2658-8f37-4a2c-a526-385fc1ae957a>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00360.warc.gz"}
Dr. Michio Kaku: Math is the mind of God

Dr. Michio Kaku is the co-founder of string field theory, and is one of the most widely recognized scientists in the world today. He has written 4 New York Times[…]

Michio Kaku says that God could be a mathematician: “The mind of God we believe is cosmic music, the music of strings resonating through 11 dimensional hyperspace. That is the mind of God.”

Michio Kaku: Some people ask the question of what good is math? What is the relationship between math and physics? Well, sometimes math leads. Sometimes physics leads. Sometimes they come together because, of course, there's a use for the mathematics. For example, in the 1600s Isaac Newton asked a simple question: if an apple falls, then does the moon also fall? That is perhaps one of the greatest questions ever asked by a member of Homo sapiens in the six million years since we parted ways with the apes.

If an apple falls, does the moon also fall? Isaac Newton said yes, the moon falls because of the Inverse Square Law. So does an apple. He had a unified theory of the heavens, but he didn't have the mathematics to solve the falling moon problem. So what did he do? He invented calculus. So calculus is a direct consequence of solving the falling moon problem. In fact, when you learn calculus for the first time, what is the first thing you do? The first thing you do with calculus is you calculate the motion of falling bodies, which is exactly how Newton calculated the falling moon, which opened up celestial mechanics.

So here is a situation where math and physics were almost conjoined like Siamese twins, born together for a very practical question: how do you calculate the motion of celestial bodies? Then here comes Einstein asking a different question, and that is, what is the nature and origin of gravity?
Einstein said that gravity is nothing but the byproduct of curved space. So why am I sitting in this chair? A normal person would say I'm sitting in this chair because gravity pulls me to the ground, but Einstein said no, no, no, there is no such thing as gravitational pull; the earth has curved the space over my head and around my body, so space is pushing me into my chair. So to summarize Einstein's theory, gravity does not pull; space pushes.

But, you see, the pushing of the fabric of space and time requires differential calculus. That is the language of curved surfaces, differential calculus, which you learn in fourth year calculus. So again, here is a situation where math and physics were very closely combined, but this time math came first. The theory of curved surfaces came first. Einstein took that theory of curved surfaces and then imported it into physics.

Now we have string theory. It turns out that 100 years ago math and physics parted ways. In fact, when Einstein proposed special relativity in 1905, that was also around the time of the birth of topology, the topology of hyper-dimensional objects, spheres in 10, 11, 12, 26, whatever dimension you want, so physics and mathematics parted ways. Math went into hyperspace, and mathematicians said to themselves, aha, finally we have found an area of mathematics that has no physical application whatsoever. Mathematicians pride themselves on being useless. They love being useless. It's a badge of courage being useless, and they said the most useless thing of all is a theory of differential topology and higher dimensions.

Well, physics plodded along for many decades. We worked out atomic bombs. We worked out stars. We worked out laser beams, but recently we discovered string theory, and string theory exists in 10 and 11 dimensional hyperspace. Not only that, but these dimensions are super. They're super symmetric. A new kind of numbers that mathematicians never talked about evolved within string theory.
That's why we call it “super string theory.” Well, the mathematicians were floored. They were shocked, because all of a sudden out of physics came new mathematics: super numbers, super topology, super differential geometry. All of a sudden we had super symmetric theories coming out of physics that then revolutionized mathematics, and so the goal of physics, we believe, is to find an equation, perhaps no more than one inch long, which will allow us to unify all the forces of nature and allow us to read the mind of God.

And what is the key to that one-inch equation? Super symmetry, a symmetry that comes out of physics, not mathematics, and has shocked the world of mathematics. But you see, all this is pure mathematics, and so the final resolution could be that God is a mathematician. And when you read the mind of God, we actually have a candidate for the mind of God. The mind of God we believe is cosmic music, the music of strings resonating through 11 dimensional hyperspace. That is the mind of God.

Directed / Produced by Jonathan Fowler & Elizabeth Rodd
{"url":"https://bigthink.com/videos/dr-michio-kaku-math-is-the-mind-of-god/","timestamp":"2024-11-01T18:59:58Z","content_type":"text/html","content_length":"142870","record_id":"<urn:uuid:d0971977-0e5f-46b9-803d-22b99cd50e9a>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00809.warc.gz"}